Samsung Issues Ban On Artificial Intelligence Use After Sensitive Data Leak
Samsung has issued a ban on so-called ‘generative artificial intelligence’ after discovering the services were being misused by company employees…
‘Sensitive’ files of £1.3 billion nuclear submarine ‘found in Wetherspoon pub toilet’
Google disrupts malware that steals sensitive data from Chrome users
Google has disrupted infrastructure linked to the notorious CryptBot malware, which the company claims has stolen data from hundreds of thousands of browser users in the past year alone. CryptBot is information-stealing malware first discovered in 2019. It is typically distributed via spoofed websites masquerading as legitimate software sites that offer free […]
By Carly Page, originally published on TechCrunch.
Armed forces member appears in court accused of ‘sharing highly sensitive information’
Hackers publish sensitive employee data stolen during CommScope ransomware attack
Hackers published a trove of data stolen from U.S. network infrastructure giant CommScope, including thousands of employees’ Social Security numbers and bank account details. The North Carolina–based company, which designs and manufactures network infrastructure products for a range of customers, including hospitals, schools and U.S. federal agencies, was listed on the dark web leak site […]
By Carly Page, originally published on TechCrunch.
Three Samsung employees reportedly leaked sensitive data to ChatGPT
On the surface, ChatGPT might seem like a tool that can come in useful for an array of work tasks. But before you ask the chatbot to summarize important memos or check your work for errors, it’s worth remembering that anything you share with ChatGPT could be used to train the system and perhaps even pop up in its responses to other users. That’s something several Samsung employees probably should have been aware of before they reportedly shared confidential information with the chatbot.
Soon after Samsung’s semiconductor division started allowing engineers to use ChatGPT, workers leaked secret info to it on at least three occasions, according to The Economist Korea (as spotted by Mashable). One employee reportedly asked the chatbot to check sensitive database source code for errors, another solicited code optimization and a third fed a recorded meeting into ChatGPT and asked it to generate minutes.
Reports suggest that, after learning about the security slip-ups, Samsung attempted to limit the extent of future faux pas by restricting the length of employees’ ChatGPT prompts to a kilobyte, or 1024 characters of text. The company is also said to be investigating the three employees in question and building its own chatbot to prevent similar mishaps. Engadget has contacted Samsung for comment.
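Samsung hasn’t said how that cap is enforced; as a purely hypothetical sketch, a corporate proxy or client wrapper could apply the reported one-kilobyte limit before any text leaves the network (the function name and the UTF-8 byte counting here are assumptions, not details from the report):

```python
MAX_PROMPT_BYTES = 1024  # the one-kilobyte cap reported by The Economist Korea

def enforce_prompt_cap(prompt: str, limit: int = MAX_PROMPT_BYTES) -> str:
    """Reject prompts whose UTF-8 encoding exceeds the configured byte limit.

    Hypothetical sketch only: Samsung has not published its implementation.
    A check like this would stop oversized pastes (whole source files,
    meeting transcripts) before they reach an external chatbot.
    """
    size = len(prompt.encode("utf-8"))
    if size > limit:
        raise ValueError(f"Prompt is {size} bytes; the limit is {limit}.")
    return prompt

enforce_prompt_cap("Summarize this memo in three bullet points.")  # passes
# enforce_prompt_cap(open("fab_controller.c").read())  # would raise ValueError
```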
ChatGPT’s data policy states that, unless users explicitly opt out, their prompts are used to train its models. OpenAI, the chatbot’s developer, urges users not to share sensitive information in their conversations, as it is “not able to delete specific prompts from your history.” The only way to remove personally identifying information from ChatGPT is to delete your account, a process that can take up to four weeks.
The Samsung saga is another example of why it’s worth exercising caution when using chatbots, as you perhaps should with all your online activity. You never truly know where your data will end up.
This article originally appeared on Engadget at https://www.engadget.com/three-samsung-employees-reportedly-leaked-sensitive-data-to-chatgpt-190221114.html?src=rss
OpenAI says a bug leaked sensitive ChatGPT user data
OpenAI was forced to take its wildly popular ChatGPT bot offline for emergency maintenance on Monday after a user was able to exploit a bug in the system to view the titles of other users’ chat histories. On Friday, the company announced its initial findings from the incident.
In Monday’s incident, users posted screenshots on Reddit showing that their ChatGPT sidebars featured previous chat histories from other users. Only the titles of the conversations were visible, not the text itself. OpenAI, in response, took the bot offline for nearly 10 hours to investigate. The results of that investigation revealed a deeper security issue: the chat history bug may also have revealed personal data from 1.2 percent of ChatGPT Plus subscribers (a $20/month enhanced access package).
“In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time,” the OpenAI team wrote Friday. OpenAI traced the issue to a bug in the open-source Redis client library it uses, redis-py, which has since been patched.
The company has downplayed the likelihood of such exposure, noting that a user would have been affected only if they did one of the following (a quick check for the time window involved appears after the list):
– Open a subscription confirmation email sent on Monday, March 20, between 1 a.m. and 10 a.m. Pacific time. Due to the bug, some subscription confirmation emails generated during that window were sent to the wrong users. These emails contained the last four digits of another user’s credit card number, but full credit card numbers did not appear. It’s possible that a small number of subscription confirmation emails might have been incorrectly addressed prior to March 20, although we have not confirmed any instances of this.
– In ChatGPT, click on “My account,” then “Manage my subscription” between 1 a.m. and 10 a.m. Pacific time on Monday, March 20. During this window, another active ChatGPT Plus user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date might have been visible. It’s possible that this also could have occurred prior to March 20, although we have not confirmed any instances of this.
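For anyone checking whether an email or account action falls inside that window, here is a quick sketch. The boundaries come from OpenAI’s conditions above; the year, 2023, is inferred from the incident’s timing rather than stated in the text:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

PACIFIC = ZoneInfo("America/Los_Angeles")
# Window per OpenAI's conditions above; year inferred from the March 2023 incident.
WINDOW_START = datetime(2023, 3, 20, 1, 0, tzinfo=PACIFIC)
WINDOW_END = datetime(2023, 3, 20, 10, 0, tzinfo=PACIFIC)

def in_exposure_window(ts: datetime) -> bool:
    """True if a timestamp falls within 1-10 a.m. Pacific on Monday, March 20."""
    return WINDOW_START <= ts.astimezone(PACIFIC) <= WINDOW_END

# An email timestamped 8:30 a.m. Pacific on March 20 falls inside the window.
print(in_exposure_window(datetime(2023, 3, 20, 8, 30, tzinfo=PACIFIC)))  # True
```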
The company has taken additional steps to prevent this from happening again, including adding redundant checks to library calls. It says it has “programmatically examined our logs to make sure that all messages are only available to the correct user” and “improved logging to identify when this is happening and fully confirm it has stopped.” The company says it has also reached out to alert affected users of the issue.
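OpenAI hasn’t published what those “redundant checks to library calls” look like, but the general pattern is straightforward: store an owner field alongside cached data and verify it on every read, so a payload cached for a different user is caught rather than displayed. A hypothetical illustration using redis-py, the client library named above (the key scheme and field names are assumptions):

```python
import json
import redis  # redis-py, the client library OpenAI identified

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cache_chat_titles(user_id: str, titles: list[str]) -> None:
    # Store the owner alongside the data so reads can be independently verified.
    r.set(f"chat_titles:{user_id}", json.dumps({"owner": user_id, "titles": titles}))

def get_chat_titles(user_id: str) -> list[str] | None:
    """Fetch cached chat titles with a redundant ownership check.

    Hypothetical sketch: if a lower-level bug ever hands back a payload
    belonging to a different user, the owner field will not match and the
    entry is discarded instead of being shown to the wrong person.
    """
    raw = r.get(f"chat_titles:{user_id}")
    if raw is None:
        return None
    payload = json.loads(raw)
    if payload.get("owner") != user_id:  # the redundant check
        r.delete(f"chat_titles:{user_id}")
        return None
    return payload["titles"]
```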
This news follows a costly public faux pas by Bard, Google’s rival AI, which in February incorrectly claimed in a promotional tweet that the James Webb Space Telescope (JWST) was the first to image an exoplanet, as well as revelations that CNET had surreptitiously used generative AI to write financial explainer posts (a week before laying off a sizable chunk of its editorial department). Whether OpenAI will suffer the same market-based repercussions as its competitors remains to be seen.
This article originally appeared on Engadget at https://www.engadget.com/openai-says-a-bug-leaked-sensitive-chatgpt-user-data-165439848.html?src=rss