Samsung bans employees from using AI tools like ChatGPT and Google Bard after an accidental data leak, report says
- Samsung has banned employees from using ChatGPT in the workplace, per Bloomberg.
- This comes after Samsung engineers accidentally leaked internal source code to ChatGPT in April.
Samsung has introduced a new policy banning employees from using generative AI tools like OpenAI's ChatGPT and Google Bard in the workplace, Bloomberg reported Tuesday.
In an internal memo viewed by Bloomberg, the company expressed concerns about data being shared on AI platforms and ending up in the hands of other users.
The new policy comes after Samsung engineers accidentally leaked internal source code by uploading it into ChatGPT in April, the memo said.
Staff are now banned from using generative AI tools on company-owned computers, tablets, and phones, as well as on internal networks, per Bloomberg.
"We ask that you diligently adhere to our security guidelines and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment," Samsung wrote in the memo.
It added that the company is reviewing its security measures to "create a secure environment" for employees to use AI, but that for now it is "temporarily restricting the use of generative AI."
Samsung, OpenAI, and Google did not immediately respond to Insider's request for comment about the ban.
Wall Street banks including Citigroup, Goldman Sachs, and JPMorgan were among the first companies to restrict employee use of ChatGPT over concerns about third-party software accessing sensitive information.
These banks also feared the chatbot could share financial information, potentially exposing them to regulatory action.
Tech giant Amazon similarly warned staff against using ChatGPT because of instances of the chatbot's responses resembling internal Amazon data, Insider's Eugene Kim reported in January.
OpenAI, the company behind ChatGPT, introduced new measures in April to address concerns about managing data on the chatbot. This included giving users the ability to disable chat history.
"When chat history is disabled, we will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting," the company said.
OpenAI is also working on a ChatGPT Business subscription for professionals and businesses who want greater control over how their data is used and stored.
Google says on its website that to help improve Bard, it selects a "subset of conversations" and uses "automated tools to help remove personally identifiable information."
"These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account."