Davos Panel Hosted by IBM CEO Ginni Rometty Explores Precision Regulation of AI & Emerging Technology
Event Formally Launches IBM Policy Lab, a New Forum to Advance Bold, Actionable Policy Recommendations for a Digital Society and Foster Trust in Innovation
Today at the World Economic Forum in Davos, IBM launched the IBM Policy Lab – a new global forum aimed at advancing bold, actionable policy recommendations for technology’s toughest challenges – at an event hosted by IBM Chairman, President and Chief Executive Officer Ginni Rometty that explored the intersection of regulation and trust in emerging technology.
The IBM Policy Lab, led by co-directors Ryan Hagemann and Jean-Marc Leclerc, two long-standing experts in technology and public policy, provides a global vision and actionable recommendations to help policymakers harness the benefits of innovation while building societal trust in a world reshaped by data and emerging technology. Its approach is grounded in the belief that technology can continue to disrupt and improve civil society while protecting individual privacy, and that responsible companies have an obligation to help policymakers address these complex questions.
Christopher Padilla, Vice President of Government & Regulatory Affairs, IBM, said:
“The IBM Policy Lab will help usher in and build a new era of trust in technology. IBM pushes the boundaries of technology every day, but we also recognize our responsibility relating to trust and transparency and to address how technology is impacting society. I see an abundance of technology but a shortage of actionable policy ideas to ensure we protect people while allowing innovation to thrive. The IBM Policy Lab will set a new standard for how business can partner with governments and other stakeholders to help serve the interests of society.”
Ahead of the launch event, the IBM Policy Lab released landmark priorities for the precision regulation of artificial intelligence, as well as a new Morning Consult study on attitudes toward regulation of emerging technology. The perspective, Precision Regulation for Artificial Intelligence, lays out a regulatory framework for organizations involved in developing or using AI, based on accountability, transparency, fairness, and security. It builds on IBM’s calls for a “precision regulation” approach to facial recognition and illegal online content: laws tailored to hold companies more accountable without becoming so broad that they hinder innovation or the larger digital economy. These approaches are reinforced by a Morning Consult survey, sponsored by IBM, which found that 62% of American and 70% of European respondents prefer a precision regulation approach to technology, while fewer than 10% of respondents in either region support broad regulation of tech.
IBM’s policy paper on AI regulation outlines five policy imperatives for companies, whether they are providers or owners of AI systems, that can be reinforced with government action. They include:
- Designate a lead AI ethics official. To oversee compliance with these expectations, providers and owners should designate a person responsible for trustworthy AI, such as a lead AI ethics official.
- Different rules for different risks. All entities providing or owning an AI system should conduct an initial high-level assessment of the technology’s potential for harm, and regulation should treat different use cases differently based on their inherent risk.
- Don’t hide your AI. Transparency breeds trust, and the best way to promote transparency is through disclosure that makes the purpose of an AI system clear to consumers and businesses. No one should be tricked into interacting with AI.
- Explain your AI. Any AI system on the market that is making determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion.
- Test your AI for bias. All organizations in the AI development lifecycle share some level of responsibility for ensuring that the AI systems they design and deploy are fair and secure. This requires testing for fairness, bias, robustness and security, and taking remedial action as needed, both before sale or deployment and after the system is operationalized (a minimal illustrative check follows this list). For higher-risk use cases, this should be reinforced through “co-regulation,” where companies implement testing and governments conduct spot checks for compliance.
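To make the testing imperative concrete, here is a minimal sketch of one group-fairness check a provider or owner might run before deployment. It is illustrative only and not part of IBM’s framework: it assumes binary model predictions (1 = favorable outcome) and a single protected attribute with two hypothetical groups, and it computes the disparate-impact ratio, a common screening metric.

```python
# Illustrative pre-deployment bias check (not part of IBM's framework).
# Assumes binary predictions and a single protected attribute with two
# hypothetical groups, "A" (reference) and "B" (protected).

def favorable_rate(predictions, groups, group):
    """Share of favorable (1) predictions received by one group."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(predictions, groups, protected, reference):
    """Favorable-outcome rate of the protected group divided by the reference group's rate."""
    ref_rate = favorable_rate(predictions, groups, reference)
    prot_rate = favorable_rate(predictions, groups, protected)
    return prot_rate / ref_rate if ref_rate else float("inf")

if __name__ == "__main__":
    # Hypothetical model outputs and group labels for a batch of applicants.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
    grps  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    ratio = disparate_impact_ratio(preds, grps, protected="B", reference="A")
    print(f"Disparate impact ratio (B vs. A): {ratio:.2f}")

    # A common screening rule of thumb flags ratios below 0.8 for review;
    # remediation and re-testing would follow before deployment.
    if ratio < 0.8:
        print("Potential adverse impact detected; review before deployment.")
```

In practice an organization would use a richer toolkit (IBM’s open-source AI Fairness 360 is one example) and evaluate many metrics across the lifecycle, but even a check this simple shows what “testing for bias before deployment” can mean operationally.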
These recommendations come as the new European Commission has indicated that it will legislate on AI within the first 100 days of 2020 and the White House has released new guidelines for regulation of AI.
The new Morning Consult study commissioned by the IBM Policy Lab also found that 85% of Europeans and 81% of Americans surveyed support consumer data protection in some form, and that 70% of Europeans and 60% of Americans responding support AI regulation. Moreover, 74% of American and 85% of EU respondents agree that artificial intelligence systems should be transparent and explainable, and strong pluralities in both regions believe that disclosure should be required for companies creating or distributing AI systems. Nearly 3 in 4 European and two-thirds of American respondents support regulations such as conducting risk assessments, performing pre-deployment testing for bias and fairness, and reporting to consumers and businesses that an AI system is being used in decision-making.
In addition to its new AI perspective, the IBM Policy Lab has released policy recommendations on regulating facial recognition, technological sovereignty, and climate change, as well as principles to guide a digital European future.
The IBM-hosted event in Davos, Walking the Tech Tightrope: How to Balance Trust with Innovation, also featured Joe Kaeser, President and CEO of Siemens AG; Chris Liddell, White House Deputy Chief of Staff for Policy Coordination; and José Ángel Gurría Treviño, Secretary-General of the Organisation for Economic Co-operation and Development. CNN International Anchor and Correspondent Julia Chatterley moderated the discussion.