PEN        
        Global Insight
World News Fashion Magazine Malayalam News Finance Entertainment Kerala News Technology Travel Health Automotive
മലയാളം | English
ARTIFICIAL INTELLIGENCE𝙲𝚘𝚖𝚖𝚞𝚗𝚒𝚌𝚊𝚝𝚒𝚘𝚗 𝙰𝚙𝚙𝚜,TECHNOLOGY NEWSTechnology News

Unveiling the GPT 3.5 Turbo Email Leak: Privacy Risks in AI Models

Uncovering Vulnerabilities: The GPT 3.5 Turbo Email Leak

A recent study by Indiana University Bloomington PhD researcher Rui Zhu and team has shed light on a privacy flaw in OpenAI‘s GPT 3.5 Turbo, the powerhouse behind Chat GPT. The researchers discovered a loophole allowing access to emails logged into Chat GPT, emphasizing a risk to personal information. Notably, emails from New York Times employees were among those exposed.

The flaw stems from the model’s unique ability to remember personal data, bypassing conventional security measures. Shockingly, 80 percent of the searched New York Times employee emails were accurate, showcasing the potential for AI tools like Chat GPT to inadvertently leak sensitive information.

Despite efforts by companies like OpenAI, Meta, and Google to implement security measures, researchers found ways to exploit vulnerabilities and access personal data. OpenAI, in response, emphasized its rejection of queries seeking personal information, asserting a commitment to customer safety. However, concerns linger about transparency in data handling for AI training and the storage of private information.

The security flaw in GPT 3.5 Turbo raises alarms about privacy risks in widely-used language models. Critics argue that commercially available models lack robust safeguards, continuously collecting information from diverse sources and posing substantial risks. The situation is further complicated by the opaque nature of #OpenAI‘s data practices, prompting calls for enhanced data protection and transparency standards in AI models.

0 Reactions

Leave a Reply

Your email address will not be published. Required fields are marked *