Salesforce Embraces EU AI Act as Approval Looms
Making EU Law a Reality with Every Stride
The European Parliament has approved the European Union’s Artificial Intelligence Act (EU AI Act) at the committee level, marking a significant stride toward its enactment as EU law. This groundbreaking legislation is the world’s first comprehensive legal framework designed to regulate Artificial Intelligence (AI).
Salesforce believes that trusted AI requires governments, corporations, and civil society to collaborate on AI policy frameworks that are risk-based, responsible, safe, and internationally interoperable. The Act divides AI systems into several risk categories, each subject to a distinct set of restrictions: systems posing an “unacceptable risk” are prohibited outright in the European Union, while systems classified as “high risk” or “limited risk” face varying degrees of obligation. With this morning’s committee-level endorsement, the European Parliament has taken the penultimate step toward making the EU AI Act law.
Transforming Policies into Action with Every Step
The EU AI Act has significantly advanced AI policy discussions worldwide. Salesforce applauds the Act’s authors for their thoughtful treatment of complex issues, including a risk-based methodology and ethical safeguards. Businesses have flocked to AI-powered services to boost productivity, but governments worldwide are still working out how to ensure AI is developed and used fairly, safely, and with trust. The EU AI Act, proposed by lawmakers in 2021, offers a blueprint to guide both public and private organizations as they adopt and develop AI.
The EU AI Act follows a risk-based approach, categorizing AI systems into four risk levels based on their use cases:
(1) unacceptable risk,
(2) high risk,
(3) limited risk, and
(4) minimal or no risk.
An “artificial intelligence system” (AI system) refers to software that can learn and make decisions with a degree of autonomy, using data and objectives supplied by humans or other machines to determine how to achieve human-defined goals.
The Age of Digital Evolution: A New Frontier
Generative AI in particular represents the next significant technological shift, comparable to the advent of the internet and mobile technology. Like any emerging technology, however, it comes with inherent limitations and potential risks spanning accuracy, bias, inequality, privacy, security, and content sourcing. As companies swiftly embrace AI to enhance productivity, prioritizing trust becomes paramount. That is why, earlier this year, Salesforce introduced five guidelines for the responsible development of generative AI, aimed at addressing trust-related concerns. Salesforce also advocates for tailored, risk-based regulation of AI that acknowledges the technology’s varied contexts and applications. Such regulation is essential for safeguarding individuals, building trust, and fostering innovation.