Responsible AI: How to Adopt an Ethical Model Within a Regulatory Environment
With the EU proposing an act mandating the use of Responsible AI, businesses should start preparations now
Balancing Risk and Opportunity
Artificial intelligence (AI) is a market disruptor, bringing business opportunities, competitive advantage and transformative methods that accelerate desirable outcomes. AI also has the potential to threaten and undermine human rights or reinforce bias. The challenge is compounded by the fact that AI systems can be incredibly complex, with thousands of evolving variables feeding into decision-making processes. If measurement and monitoring strategies are not included from the outset, AI systems become hard to analyze.
To ensure this immensely powerful technology continues to foster economic growth and outcomes aligned with respect for societies and human rights, the European Union (EU) has proposed a landmark AI harmonization act mandating the use of Responsible AI.
The EU AI Act
The proposed EU AI Act introduces a risk-based responsibility model, applying regulation in proportion to a system's intended purpose and the severity of impact its decisions could have on EU citizens if the system is misused.
From an operational point of view, there are four main categories of risk (a simple triage sketch follows the list):
- Prohibited Risk – AI systems that are NOT allowed on the EU market (e.g., a government-run social scoring system)
- High Risk – AI systems that are allowed but subject to extensive regulatory requirements applicable in every EU nation (e.g., an automated mortgage application system with the power to reject applications, or a system used to grant access to educational and vocational training). A full list of high-risk AI systems can be found in Annex III of the AI Act
- Limited Risk – AI systems that are neither prohibited nor high risk. These systems are subject to specific transparency obligations (e.g., a chatbot must disclose that the user is interacting with an AI)
- Minimal Risk – AI systems that pose 'minimal or no risk' will be permitted with no restrictions; providers are encouraged to adhere to voluntary codes of conduct. The European Commission envisages that most AI systems will fall into this category (e.g., spam filters)
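To make the triage concrete, here is a minimal, illustrative Python sketch of how an organization might bucket its AI inventory against the four tiers. The `RiskTier` enum, the use-case sets and the keyword rule are assumptions for illustration only; they are not defined by the Act, and a real classification requires legal review against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four operational risk categories described above."""
    PROHIBITED = "prohibited"  # not allowed on the EU market
    HIGH = "high"              # allowed, but heavily regulated
    LIMITED = "limited"        # transparency obligations apply
    MINIMAL = "minimal"        # voluntary codes of conduct only

# Hypothetical example mappings; real triage cannot rely on keyword lookup.
PROHIBITED_USES = {"social scoring"}
HIGH_RISK_USES = {"credit decisions", "education access", "employment screening"}
LIMITED_RISK_USES = {"chatbot"}

def triage(use_case: str) -> RiskTier:
    """Bucket a described use case into one of the four tiers."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("credit decisions"))  # RiskTier.HIGH
```

The value of even a crude triage like this is prioritization: it tells a governance board which systems need conformity work first, before any formal legal assessment begins.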
Once enacted, the legislation is reported to apply to AI systems that are already deployed. For example, if a bank used AI in a system designed to approve mortgages, then mortgage decisions previously made might fall within the scope of the Act. The Act is also extraterritorial: it applies to providers and users outside the EU whenever an AI system's output affects people within the Union. This regulation will likely induce a 'Brussels Effect,' as GDPR did, in which the rest of the world converges on the EU standard. The Act is anticipated to be enacted into law between 2024 and 2025. Based on this assumption, organizations have between now and the end of 2023 to prepare a strategy and ensure their intended use of AI is supported by appropriate governance, training, evaluation methodologies and documentation.
How do Businesses Fit into the Regulation?
There are five key roles the EU has defined as part of this framework:
- A 'Provider' develops an AI system, or has one developed, in order to place it on the market or put it into service under its own name or trademark, whether for payment or free of charge.
- An 'Authorized Representative' established in the EU is appointed to act on behalf of a provider.
- A 'User' operates an AI system under its authority, except where the system is used in the course of a personal, non-professional activity.
- An 'Importer' established in the Union places on the market, or puts into service, an AI system bearing the name or trademark of a person established outside the Union.
- A 'Distributor' makes an AI system available on the Union market without affecting its properties, and is neither the provider nor the importer.
One of the most important articles in the regulation, Article 21, states:
‘Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system in question and, where applicable, the authorized representative and importers accordingly.’
Providers, likely to be the client organization, are therefore the most liable under the requirements the AI Act places on AI systems. This is why businesses are encouraged to start preparations now.
What Steps Can Be Taken with Responsible AI?
- Implement your governance plan now – Name and elect the governance board. Appoint your leaders of change. Assign a budget to readiness work in 2023. Build and communicate the plan. Establish key performance indicators so that risk can be measured objectively.
- Consider what new career opportunities and roles are required – The AI Act makes human oversight a formal requirement, creating roles for which lawyers, social scientists and psychologists are likely to be a good fit.
- Determine the role of your organization at a product and enterprise level – Work with your in-house legal counsel to implement any adjustments required to contractual relationships, commit to next steps and take inventory of existing AI (see the inventory sketch after this list). By examining project workflows now, practices that are not yet up to standard can be improved, and future risks can be mitigated by introducing new best practices.
- Determine your approach based on research – Responsible AI and Explainable AI form a burgeoning field of academic literature containing many great insights into how to act, such as CapAI, a conformity assessment developed by Oxford University researchers. Engage now to adopt a method, with a view to learning and refining it over time.
- Consult with third-party experts – Independent advisors, auditors and conformity-assessment specialists can benchmark your current practices against the Act's requirements, identify gaps and help prioritize remediation before enforcement begins.
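As a starting point for the inventory step above, a lightweight record per AI system can capture the fields a governance board needs to track. This is an illustrative Python sketch only; the field names and example values are assumptions, not structures mandated by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI inventory; all fields are illustrative."""
    name: str
    business_owner: str
    act_role: str          # "provider", "user", "importer" or "distributor"
    risk_tier: str         # "prohibited", "high", "limited" or "minimal"
    human_oversight: str   # who reviews the system's decisions
    kpis: list = field(default_factory=list)  # objective risk measures

inventory = [
    AISystemRecord(
        name="mortgage-approval-model",
        business_owner="Retail Lending",
        act_role="provider",
        risk_tier="high",
        human_oversight="Credit Risk Committee",
        kpis=["rejection-rate parity", "model drift"],
    ),
    AISystemRecord(
        name="support-chatbot",
        business_owner="Customer Service",
        act_role="user",
        risk_tier="limited",
        human_oversight="Support Operations",
    ),
]

# Review high-risk systems first: they carry the heaviest obligations.
for record in sorted(inventory, key=lambda r: r.risk_tier != "high"):
    print(record.name, record.risk_tier, record.act_role)
```

Keeping the role and risk tier on every record means the inventory doubles as a liability map: it shows at a glance where the organization acts as a provider of a high-risk system, which is where Article 21's corrective-action duties would bite hardest.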
AI introduces many benefits and poses unique challenges. The technology is disruptive, but progress is being made to adopt an ethical model within an evolving regulatory environment. Businesses that want to thrive in the AI development ecosystem should adapt now.