AiThority Interview with Ray Eitel-Porter, Global Lead for Responsible AI at Accenture

Ray Eitel-Porter, Global Lead for Responsible AI at Accenture

Hi Ray, please tell us about your journey in the AI and Machine learning space. How did you arrive at Accenture?

As Accenture’s Global Lead for Responsible AI, I am interested in the power of applied technology and mindful of the need to mitigate the risks that can accompany AI’s great potential. Our Technology Vision 2022 report found that only 35% of global consumers trust how AI is being implemented by organizations, which means we have a greater responsibility to help companies scale the use of data and AI responsibly and ethically.

I came to Accenture with over two decades of experience, including in technology startups. Most recently, I led the European business for Opera Solutions, which was one of the first companies to understand the potential of Big Data and develop solutions for clients in that space. Drawing on my academic background in the humanities and business, I have combined both interests into leading a team that takes an interdisciplinary approach to AI ethics and governance.

What is the core objective of the Responsible AI Team at Accenture? What can other AI companies and innovators learn from Accenture’s AI team?

Responsible AI at Accenture is the practice of designing, developing and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society – allowing companies to engender trust and scale AI with confidence. As companies deploy AI for a growing range of tasks, adhering to laws, regulations and ethical standards will be critical to building a sound AI foundation. For instance, we recently surveyed 850 C-suite executives across 20 industries, finding that nearly all (97%) respondents believe regulation will impact them to some extent. 

While working to understand companies’ attitudes towards AI regulation and assessing their readiness to comply, we also learned that most (94%) are struggling to operationalize across all key elements of Responsible AI. Consequently, we developed a framework of four key pillars to help organizations design AI responsibly from the start: clear principles and governance structures for AI; risk management that monitors and operationalizes current and future policies; technology tools that support the key requirements of Responsible AI; and a company culture that prioritizes Responsible AI as a business imperative. Together, these can ensure that we are always setting the best standards for Responsible AI.

Tell us about the most complex problems in the data science world that AI is solving or intends to solve.

So much of what we take for granted in our daily lives stems from machine learning, whether searching for driving directions, using dictation to convert speech to text, or unlocking our phones with Face ID. Across industries, companies are relying on AI, and they are increasingly investing in the technology. By 2024, 49% of organizations will invest more than 30% of their technology budgets in AI.

For instance, we have seen AI improve energy efficiency, and today, we are seeing the metaverse’s potential to transform business, commerce, entertainment and how we live.

How is the roadmap to Ethics in AI achieved? Which factors and constituents within AI engineering influence the efficacy of Ethics? 

The most effective way to ensure the ethical use of AI is to build a sound data and AI foundation, which incorporates the four key pillars to design responsibly from the start. These efforts also include embedding questions and controls throughout the machine learning development life cycle, which can help foresee unintended consequences and ensure other elements of good Responsible AI practices are observed. Most organizations have a given development process with distinct stages that move from the initial use case, via data sourcing and model development, to deployment and post-deployment monitoring. Responsible AI risk assessment questions should be asked at each stage, covering the seven requirement areas: fairness, transparency, soundness, accountability, robustness, privacy and sustainability.
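
To make the idea concrete, below is a minimal, hypothetical sketch of how such a stage-by-stage checklist could be represented in code. The stage names, questions, and data structure are illustrative assumptions for this article only, not Accenture’s actual framework or tooling.

```python
# Illustrative sketch only: a hypothetical checklist mapping each stage of an
# ML development life cycle to Responsible AI risk-assessment questions,
# grouped by the seven requirement areas named above. The stage names and
# questions are invented examples, not Accenture's actual framework.

REQUIREMENT_AREAS = [
    "fairness", "transparency", "soundness", "accountability",
    "robustness", "privacy", "sustainability",
]

# Each life-cycle stage carries its own review questions, keyed by requirement area.
CHECKLIST = {
    "use_case": {
        "fairness": ["Could the intended use disadvantage any group of users?"],
        "accountability": ["Who owns the decision to proceed with this use case?"],
    },
    "data_sourcing": {
        "privacy": ["Is personal data collected with a valid legal basis?"],
        "fairness": ["Are all relevant populations represented in the data?"],
    },
    "model_development": {
        "soundness": ["Has model performance been validated on held-out data?"],
        "transparency": ["Can the model's outputs be explained to affected users?"],
    },
    "deployment": {
        "robustness": ["How does the system behave on out-of-distribution inputs?"],
    },
    "post_deployment_monitoring": {
        "sustainability": ["Is ongoing compute and energy use tracked and reported?"],
        "accountability": ["Is there a clear escalation path for reported harms?"],
    },
}


def questions_for_stage(stage: str) -> list[tuple[str, str]]:
    """Return (requirement_area, question) pairs to review at a given stage."""
    return [
        (area, question)
        for area, questions in CHECKLIST.get(stage, {}).items()
        for question in questions
    ]


if __name__ == "__main__":
    # Example: print the review questions due at the data-sourcing stage.
    for area, question in questions_for_stage("data_sourcing"):
        print(f"[{area}] {question}")
```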

Please tell us more about the proposed EU AI Act. What provisions do you foresee in this act that would impact the development and adoption of AI?  

Governments and regulators in different countries are considering how to supervise and set standards for the responsible development and use of AI. The EU’s proposed AI Act is the best-known example: once ratified, anyone who wants to use, build or sell AI products and services within the EU will have to consider the legislation’s requirements for their organization and for their suppliers of AI services or products. Under the draft regulation, AI systems used in higher-risk areas will be subject to the most stringent requirements. Our latest research found that most (95%) C-suite executives believe at least part of their business will be affected by the proposed EU AI Act specifically.

Companies will need to start preparing for AI regulation now, instead of taking a ‘wait and see’ approach or viewing compliance as just a box to check. Although the proposed regulation will have a two-year grace period, our experience is that it can take large companies at least this long to put Responsible AI foundations in place.

How do you see newer technologies such as AI Optimization and AutoML further improving business intelligence product offerings in the coming months?  

Technologies such as AI Optimization and AutoML are a welcome step toward democratizing data and can be great accelerators in helping both data science professionals and companies reach scale with speed. These technologies are still at a nascent stage, and there is a growing opportunity to leverage them for further automation. As with all AI development, we will need to prioritize creating guardrails that help companies at large and citizen data scientists remain both productive and safe.

Your predictions for the future of the AI industry in 2022-2023: how should brands leverage Responsible AI-compliant technologies to enhance their reach and appeal?

Responsible AI capabilities are an essential part of being an AI-mature organization, and that maturity pays off: the most AI-mature companies already enjoy 50% higher revenue growth than their peers. Additionally, 80% of companies are already planning to increase investment in Responsible AI, and 77% view regulation of AI as a priority.

Designing AI responsibly from the start is not just about risk mitigation or legal compliance, though these are vital. Rather, leading organizations should see Responsible AI as an opportunity to increase trust in AI systems from employees, customers, investors and society at large, which will allow them to scale their AI with confidence and reap the benefits. 

Thank you, Ray! That was fun and we hope to see you back on AiThority.com soon.

Ray Eitel-Porter is Accenture’s Global Lead for Responsible AI, where he focuses on developing Accenture’s solutions for data and AI ethics, as well as advising clients in the space. Based in London, he joined Accenture in 2013 and has worked across several industries, mainly focusing on analytics strategy and operating model projects. Ray led the formation of Accenture’s Strategic Partnership with The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, to further research into innovation applied to real-world business challenges. He chairs the UK government’s Data Skills Taskforce and, in association with TeenTech, has led the creation of a national data science prize for schools to encourage interest in data science careers. Ray holds a Master of Arts in Modern Languages from Christ Church, Oxford, as well as an MBA from INSEAD.

Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services and Accenture Song—all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 710,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities.
