Google bans staff from using Bard AI-generated code internally
New rules imposed by Apple and Google prevent employees from using AI platforms like ChatGPT and Bard for internal work.
The ongoing AI boom has forced many companies to quickly adjust their internal rules for employees. Samsung, for instance, has banned the use of generative AI tools like ChatGPT over privacy concerns.
That ban extended to asking ChatGPT to correct internal code, submissions that OpenAI could use to train future models.
Apple also banned employees from using generative AI outright, over similar privacy concerns, and the growing list now includes Amazon and several major banks.
Google has now joined them, though it hasn’t been as strict as its competitors. There’s no outright ban so far; instead, parent company Alphabet has cautioned employees about using the software with sensitive data.
The firm has told employees to stop using code written by Bard for internal projects. Because every user interaction can be used to train the AI model, Google can’t risk sensitive code falling into the wrong hands.
Bard is similar to ChatGPT in that it stores conversation history, which could be accessed by malicious parties or resurface in conversations with unsuspecting users.
When Reuters asked Google the reason for the restriction, the company said it was to prevent buggy programs from being launched.
Google restricts employees from using Bard and AI to create code
Google Bard can write code in more than 20 programming languages, including older ones that are no longer widely used.
Until recently, Google Bard was invite-only; it launched months after ChatGPT. OpenAI’s swift release of its language model rapidly changed the course of tech companies worldwide.
Not long after, a wave of ChatGPT variants and other generative AI tools began to appear, including research projects like DarkBERT, which is intended for investigating the dark web.