Safety First: California to Regulate AI Training


The ethical dilemmas posed by the use of Artificial Intelligence (AI) could be endless without a legal framework to govern them. In a pioneering move, California has become the first U.S. state to pass legislation regulating the training of large-scale AI models.

The bill, known as SB 1047 or the Safe Innovation for Frontier AI Models Act, outlines a series of requirements for companies developing and training these models. As expected, the measure has sparked debate among the companies it affects.

AI Doesn't Escape the Law

Supporters of the bill argue that it aims to serve as a "safeguard to protect society" from the misuse of AI. Some of the key provisions of the new law include:

Emergency Deactivation: Companies must ensure that an AI model can be quickly and completely deactivated if it is deemed unsafe.

Protection Against Unsafe Modifications: Language models must be safeguarded against "unsafe post-training modifications" or any alterations that could cause "critical harm."

However, several experts have expressed concern that these regulations could stifle innovation and make it difficult for independent developers to operate. Smaller developers, as well as companies like OpenAI, worry about the challenges of meeting all the law's requirements and the penalties for non-compliance.

On the other hand, Geoffrey Hinton, a pioneering AI researcher who formerly worked at Google, praised the bill, stating that the risks posed by powerful AI systems are "very real and must be taken extremely seriously," according to reports from 9to5Mac.

Naturally, Apple has shown significant interest in these regulations, as the company is actively developing Apple Intelligence features slated for public release this fall with the iOS 18.1 and macOS Sequoia 15.1 updates. Some of these functionalities are already available in the iOS 18.1 beta version.

However, it remains to be seen whether the bill will be approved or vetoed by the end of September.

Setting a Precedent

With this legislation, California positions itself as a leader in AI regulation, potentially influencing federal laws. The bill aims to combat the risks associated with the misuse of AI, such as the spread of misinformation and the creation of manipulated images and videos (deepfakes). The latter has become a significant concern, particularly highlighted by the controversy surrounding Google's Pixel Studio tool, which allowed the generation of violent images.

At the federal level, the Biden administration has introduced a set of voluntary guidelines for tech companies.

Meanwhile, in March of this year, the European Parliament approved the Artificial Intelligence Act, which seeks to ensure safety and respect for fundamental rights while promoting innovation.
