OpenAI has formally acknowledged that a ChatGPT user's login credentials were compromised, allowing unauthorized access to and misuse of the account. The statement follows a report by Ars Technica, based on screenshots sent in by a reader, which suggested that ChatGPT was leaking private chats containing sensitive information such as usernames and passwords, a characterization the company has denied.
OpenAI confirmed that its fraud and security teams are actively investigating the incident and said the original Ars Technica claim is inaccurate. According to OpenAI, the compromised login credentials allowed a malicious actor to access and misuse the affected account; the exposed chat history and data were the result of that unauthorized access, not of ChatGPT revealing another user's history.
The affected individual, whose account was reportedly compromised, did not believe it had been accessed by anyone else. OpenAI stressed that the ongoing investigation will reveal more about the scope of the breach and the steps needed to remedy the security issue.
Ars Technica had earlier reported that ChatGPT was displaying private conversations, raising concerns about the exposure of sensitive information. After using ChatGPT for an unrelated query, the affected user found additional conversations in their chat history that belonged to someone else.
The leaked conversations contained details from a support system used by employees of a pharmacy prescription-drug portal, including troubleshooting issues, the app's name, store numbers, and login credentials. Another exposed conversation included details of a presentation and an unpublished research proposal.
This incident adds to a series of earlier security concerns around ChatGPT. In March 2023, a bug exposed other users' conversation titles, and in November 2023, researchers were able to extract private data used to train the underlying language model by manipulating prompts.
Users are advised to exercise caution when using AI chatbots such as ChatGPT, particularly those built by third parties. The ChatGPT website lacks standard security features such as two-factor authentication (2FA) and the ability to review recent logins, raising further concerns about the platform's security.