It was reported earlier that Samsung wanted to harness generative AI tools such as ChatGPT to boost its chip business. However, things aren't looking good on that front, as Samsung has now ordered its staff to stop using generative AI tools such as ChatGPT and Bard over security concerns.
This move comes after Samsung staff leaked sensitive information on three separate occasions by entering it into ChatGPT. Because such input can become part of the AI's training data, the information may be retained indefinitely. The leaked material included semiconductor equipment measurement data, among other things.
Staff failing to comply with the guidelines could face termination
As reported by Bloomberg, Samsung notified employees at one of its biggest divisions on Monday about the new policy. The notification reads, “Interest in generative AI platforms such as ChatGPT has been growing internally and externally. While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”
The memo explicitly states that the move was prompted by heightened concerns over the accidental leak of internal source code to the AI. Once such information enters the AI's training data, it could surface in responses to queries from users anywhere in the world. The new rule bans the use of generative AI tools on Samsung-owned devices, including computers, tablets, and phones.
Notably, the memo also warns that Samsung employees who fail to comply with the security guidelines could face disciplinary action, up to and including termination of employment.