In what’s being hailed as a “first-in-the-nation” safeguard for AI, California Governor Gavin Newsom has signed a new AI law that will require AI chatbots to explicitly inform users that they are “artificially generated and not human.” The new bill, signed into law as Senate Bill 243, should help cut down on how often people are confused about the true nature of the “companion” AI chatbots they interact with.
For starters, the capabilities of AI chatbots continue to advance at a rapid pace as the models running them improve, making it harder for some users to tell AI from humans. With this new bill, though, the developers behind these chatbots will need to provide new safeguards. More specifically, the bill states that “if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human,” then the chatbot developer must provide a clear notification that the chatbot is not human.
Now, it is important to note that the bill says this rule does not apply to customer service chatbots or voice assistants where the AI does not maintain a clear and consistent relationship with the user. It’s clear that AI chatbots such as ChatGPT, Gemini, and Claude are the primary targets.
Why Governor Newsom pushed this bill forward
Of course, the arrival of Senate Bill 243 isn’t an unexpected one. Over the past several months we have seen a troubling trend with AI chatbots as more people have turned to them for everything from research to friendship to romance. AI companies like OpenAI have even found themselves caught up in lawsuits, such as one filed after a teen died by suicide after allegedly consulting with ChatGPT. These lawsuits led to OpenAI adding its own safety guardrails to ChatGPT, as well as launching new parental controls and other features to help monitor ChatGPT usage.
But those safeguards don’t fully resolve the issue that so many other AI companion chatbots have introduced to the world. With an increasing number of “AI girlfriend” apps appearing on the web, having a clear-cut way for developers to ensure that users know what they’re getting into is crucial to help make sure that people don’t fall prey to dangerous or misleading AI responses.