Samsung Employees Exposed Sensitive Data While Communicating with ChatGPT
According to media reports, Samsung allowed its engineers to use ChatGPT in their work, and they ended up disclosing confidential company data.
Samsung employees began using the AI to quickly fix errors in source code and, in the process, "leaked" confidential data to the chatbot, including notes from internal meetings and data related to the company's production and profitability. As a result, access to ChatGPT may be blocked for Samsung employees.
Let me remind you that we also wrote that Amateur Hackers Use ChatGPT to Create Malware, and that Microsoft Plans to Limit Chatbot Bing to 50 Messages a Day.
Information security specialists also reported that a Blogger Forced ChatGPT to Generate Keys for Windows 95.
The Economist reports that in just 20 days, the company recorded three separate cases of data leakage via ChatGPT. In the first case, a Samsung developer gave the chatbot the source code of a proprietary error-correction program, effectively disclosing the code to a third party running the AI application.
In the second case, an employee shared with ChatGPT test patterns designed to identify defective chips and asked it to optimize them. Such test patterns are also highly sensitive data: optimizing them can speed up chip testing and verification, significantly reducing the company's costs.
In the third case, a Samsung employee used the Naver Clova app to convert a recording of a private meeting to text and then sent the transcript to ChatGPT to prepare a presentation.
All this forced Samsung management to warn employees about the dangers of using ChatGPT. The company informed managers and staff that any data ChatGPT receives is transmitted to and stored on external servers, from which it cannot be "recalled", which increases the risk of confidential information leaking. In addition, ChatGPT learns from the data it receives, which means it may disclose confidential information to third parties.