Samsung has prohibited its employees from using AI tools such as ChatGPT, Bard or Bing, fearing that confidential information might end up in the wrong hands.
The biggest problem with these tools is that queries end up on other companies' servers in other countries, making them impossible to erase. Imagine an employee asking one of these AI tools whether a piece of proprietary code is correct. The server retains that code, and a data breach could expose it to other users.
This is no longer just a hypothetical scenario. It has already happened: a security breach affecting OpenAI's ChatGPT exposed users' chat histories. According to a Bloomberg report, Samsung sent out an internal memo detailing the measure and explaining why employees must refrain from using these tools.
"Interest in generative AI platforms such as ChatGPT has been growing internally and externally," Samsung explained. "While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI."
"HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees' productivity and efficiency. However, until these measures are prepared, we are temporarily restricting the use of generative AI."
Samsung decided to take this step mainly because of a security incident in April, when several engineers shared proprietary code with ChatGPT, putting that information at risk of exposure in a potential breach.
Samsung is not the only organization worried about privacy. Italy banned ChatGPT over privacy concerns, a measure that was reversed only recently after OpenAI made changes to satisfy the authorities.
Silviu is a seasoned writer who has followed the technology world for almost two decades, covering topics ranging from software to hardware and everything in between.