
The Future of AI Is Here. Now Let’s Make It Ethical

SnapLogic CMO and host of Automating the Enterprise, Dayle Hall, recently sat down with data and analytics leader Dr Alex Antic, CDO and Co-Founder of Healices Health and Co-Founder of Two Twigs Analytics, to explore how to unlock the value of data and AI, and why ethical AI matters when driving digital transformation.

Artificial Intelligence (AI) is fast becoming a mainstay in our business operations. In fact, according to IDC, governments and businesses around the world will invest more than AU$723 billion in AI over the next three years. Meanwhile, AI technology is projected to be integrated into 90% of the most cutting-edge enterprise applications by 2025.

Already, it’s beginning to transform everyday life. Just look at ChatGPT, the chatbot developed by OpenAI. It’s currently taking the world by storm, putting the power of AI into everyone’s hands. While it’s undoubtedly an exciting time to be alive with all these technological advancements, it’s vital to keep a pulse on the human component of technology, ensuring everyone benefits.

When It Comes to Ethical AI, Focus on the Three ‘T’s

A well-respected leader in the field of data and analytics and recognized in 2021 as one of the top five analytics leaders in Australia, Antic advises organizations to focus on what he calls the “three Ts”: trust, technology and talking.

  • Trust: Centered around the people and culture, trust is about having the support of senior leadership when embarking on digital transformation.
  • Technology: Having the right data literacy is key to driving strategic goals: understanding what data is, how to ask the right questions of staff, and how to leverage existing technology, at least at a high level.
  • Talking: Reliant on cross-disciplinary teams rather than a traditional tech team sitting in a corner doing its own thing. Data literacy must be integrated throughout the entire business so everyone is aware of how what they’re doing drives strategic success.

“Whether organizations do really well or struggle severely depends on senior leadership. Success hinges on their support. It has to be top-down. If they’re pushing from the bottom up, it is going to be a huge journey ahead,” Antic explains.


“I’ve seen this fail so many times, and I’ve been through it myself in the past. Having support from senior leaders is absolutely paramount to success broadly. And part of that is really around having the right culture.”

To that, Antic adds that a mid-to-senior-level data leader, not necessarily the CTO, is at the crux of many organizations. “They’re the linchpin because they have to translate business problems into technical solutions,” he explains.

“Ultimately, they try and traverse both sides, the business world and the technical world. That’s one of the roles that can be the most difficult to fill successfully. But if you get that right, I think those people can really make a huge difference.”

Data literacy is also key, he explains. To get a feel for where the organization is, he advises answering the following questions:

  • Do staff understand what data is?
  • Can managers ask the right questions of their staff?
  • Do they know, at least at a high level, how to leverage technology that already exists to drive strategic goals?

“Data literacy must be integrated into the entire business, and employees have to be aware of how what they’re doing will drive success,” Antic says.


Responsible AI: What Is It and Why Is It Important?


AI and ethics have become hot talking points in the business landscape of late, and the benefits of implementing AI responsibly can’t be ignored. Accenture research shows that companies that scale AI successfully understand and implement responsible AI at 1.7 times the rate of their counterparts. Cisco research also finds that every AU$1.45 invested in data privacy measures returns AU$3.91 in benefits to businesses. What’s more, a survey conducted by the Economist Intelligence Unit reveals that 80% of business respondents believe ethical AI is critically important to attracting and retaining talent.

However, there are still some misconceptions around ethical AI, including what it really means and why it’s important. Antic likes to keep the answer simple by saying, “AI ethics ensures that the human is at the center of the solution.

“If you’re developing a system that leverages technology, it’s about understanding what impact it can possibly have on the end user. You have to remember that this technology can be easily scaled, so you don’t have just one person making a decision that affects a small group of people — this could be one solution affecting thousands or millions of people. How do you make sure that’s done in an ethical, fair and just manner?”

Ethical AI is also impacting whether or not certain vendors get selected for contracts. I was recently on a podcast where one of the speakers was looking into implementing an AI solution in Human Resources (HR). When the vendor they were evaluating could not explain how the AI works, where the data came from, how it is used, and so on, they chose not to work with that vendor. Blindly trusting a technology to do the right thing and collect data in the right way is simply too big a risk these days in any role, but particularly in HR.
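To make the transparency question concrete: one simple, model-agnostic probe a buyer could ask for is permutation importance, where each input is shuffled in turn to see how much the model’s scores move. The sketch below is purely illustrative, with a stand-in scoring function and invented feature names rather than any real vendor’s system.

```python
import numpy as np

# Illustrative only: probe a black-box screening model by permuting one input
# at a time and measuring how much its scores shift (permutation importance).
rng = np.random.default_rng(0)

def vendor_score(X):
    # Stand-in for a vendor's opaque scoring model.
    return 0.7 * X[:, 0] + 0.25 * X[:, 1] + 0.05 * X[:, 2]

feature_names = ["years_experience", "skills_match", "postcode"]  # assumed names
X = rng.normal(size=(500, 3))
baseline = vendor_score(X)

for j, name in enumerate(feature_names):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])              # break this feature's link to the score
    shift = np.mean(np.abs(baseline - vendor_score(X_perm)))  # average change in the output
    print(f"{name}: mean score shift {shift:.3f}")
```

A large shift on a feature like postcode would be exactly the kind of red flag that should prompt the “where does the data come from and how is it used?” conversation.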

Antic believes that having some level of transparency and understanding of how decisions are made is vital. The following questions can help guide you and ensure you’re leveraging AI ethically and responsibly:

  • Why are you collecting the data in the first place?
  • What data do you need?
  • What will you use that data for immediately?
  • If you were on the other end, how would you feel about that data being used for that purpose? Would you be comfortable with that?
  • Are there any potential issues or questions you have?

“Don’t collect data just for the sake of it, not just from an ethical standpoint, but also you need quality data that’s fit for purpose for your organization,” Antic stresses. “You can’t take any data, and suddenly, it’s magic. You don’t just throw it into a black box and get a meaningful solution.

“Rather, it’s important to focus on the safe and secure capture and storage of data, with clear, overarching ethical guidelines on what can be captured and why it’s being captured and used. Specific guidelines and frameworks can then dictate how the data is used; depending on the outcome you’re looking for, that could mean data matching and de-identification for data sharing. But it all must be done within the bounds of regulation and legislation, and just what is right.”
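As a rough illustration of the “data matching and de-identification” step Antic mentions, the sketch below pseudonymises a direct identifier with a salted one-way hash so records can still be matched across datasets, then drops the remaining direct identifiers. The column names and salting scheme are assumptions for the example, not a prescribed method.

```python
import hashlib
import pandas as pd

# Illustrative records; the columns are invented for the example.
df = pd.DataFrame({
    "patient_name": ["A. Smith", "B. Jones"],
    "email":        ["a@example.com", "b@example.com"],
    "postcode":     ["2600", "3000"],
    "diagnosis":    ["X", "Y"],
})

SALT = "store-this-secret-separately"  # keep the salt out of the shared dataset

def pseudonymise(value: str) -> str:
    # One-way, salted hash: records can still be matched across datasets
    # without exposing the original identifier.
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

shared = df.copy()
shared["person_id"] = shared["email"].map(pseudonymise)   # stable key for data matching
shared = shared.drop(columns=["patient_name", "email"])   # strip direct identifiers
print(shared)
```

Hashing alone is not full de-identification: quasi-identifiers such as postcode can still re-identify people in combination, which is why the regulatory and legislative bounds Antic points to still apply to whatever is shared.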

He adds, “Organizations need to think about the end-to-end pipeline and data lifecycle. They need to think about what they are collecting, how they are storing it, what it’s being used for, when the data is deleted, how someone can delete their data, what metadata is being collected and stored around it, and what data governance processes are in place.

“These are all important aspects, rather than just collecting data, getting an outcome and not worrying about all the grey bits on the sides. I mean, when it comes to the ethical and responsible use of AI, they become absolutely paramount to having responsible solutions at the end.”
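One lightweight way to keep that end-to-end view honest is to record the lifecycle answers alongside each dataset. The sketch below is a hypothetical governance record whose fields simply mirror Antic’s questions; it is not drawn from any specific framework.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DatasetGovernanceRecord:
    # A per-dataset record of the lifecycle questions: what is collected,
    # why, how long it is kept, and how it can be deleted.
    name: str
    purpose: str                  # why the data is collected in the first place
    fields_collected: list[str]
    lawful_basis: str             # e.g. consent, contract, legal obligation
    retention_days: int
    deletion_method: str          # how someone can have their data removed
    collected_on: date = field(default_factory=date.today)

    def delete_by(self) -> date:
        return self.collected_on + timedelta(days=self.retention_days)

record = DatasetGovernanceRecord(
    name="hr_screening_responses",
    purpose="shortlisting for advertised roles only",
    fields_collected=["years_experience", "skills_match"],
    lawful_basis="consent",
    retention_days=180,
    deletion_method="self-service request via privacy portal",
)
print(record.name, "must be deleted by", record.delete_by())
```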

Breaking Down Bias with AI

Leveraging responsible, ethical AI to protect against bias and promote diversity is not a passive process; it takes action.

“First of all, it’s important for organizations to understand the business context they work within and what bias means for them. Typically, bias is around areas that can manifest themselves and result in unfair outcomes,” Antic explains.

Often, organizations don’t understand how bias actually occurs, he says. Initially, it’s about understanding how bias can creep in. How do you identify it? You’ll most likely never remove it entirely, but how do you identify it and work with it? And then how do you define bias in your organization, and how can you try to resolve it?

“Some people will turn around to me and say, ‘Look, humans are biased. If AI merely reflects human bias, then why is that a concern?’ And I think there are two really important parts to this that people need to understand,” he shares.

“One is scale. Models, as we mentioned earlier, can have far-reaching implications. They can reinforce and perpetuate bias in a way that no single biased human could ever do in terms of how we use this technology. But they also have the potential to allow us to hide behind our moral obligations and justify immoral judgments.”

Antic argues that bias cannot be completely eliminated. So the best thing organizations can do is work towards understanding it, identifying it and then reducing it in systems that can scale.
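Identifying bias starts with measuring it. A minimal, illustrative check (the groups and rates below are invented) is to compare how often a system’s positive decisions land on each group:

```python
import numpy as np

# Illustrative only: compare selection rates across groups to surface
# potential bias (a demographic parity check).
rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000, p=[0.6, 0.4])
decision = rng.random(1000) < np.where(group == "A", 0.35, 0.22)  # assumed biased decisions

rates = {g: decision[group == g].mean() for g in ("A", "B")}
parity_gap = abs(rates["A"] - rates["B"])
impact_ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)
print(f"demographic parity gap: {parity_gap:.2f}")
print(f"disparate impact ratio: {impact_ratio:.2f}  (ratios well below 0.8 usually warrant a closer look)")
```

Demographic parity is only one of several possible fairness definitions, which is exactly the point Antic makes next about discovery.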

For what can be done at a high level, Antic points to the ‘three Ds’. “It’s all about data, discovery and diversity,” he says. “With data, it’s about understanding it from both a technical perspective and also from a domain perspective. This is crucial in understanding how bias is embedded within the data.

“Then the next D is discovery. How are fairness and bias actually defined? If you talk to three different companies, you can get three very different definitions. What is explainable to me will be very different to what’s explainable to you. So there’s always this trade-off between fairness and accuracy, especially when developing machine learning models, that needs to be juggled and accepted. And that is a nuance that I think some people at high levels miss. That can be quite difficult.”
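That trade-off can be made concrete with a toy threshold sweep: each choice of decision cut-off gives a different balance between overall accuracy and the gap in selection rates across groups, and choosing between them is the judgment call Antic describes. Everything below is synthetic and purely illustrative.

```python
import numpy as np

# Synthetic example of the fairness/accuracy tension: sweep the decision
# threshold and compare overall accuracy with the group selection-rate gap.
rng = np.random.default_rng(2)
n = 2000
group = rng.choice([0, 1], size=n)
label = rng.random(n) < 0.4                                   # assumed ground truth
score = 0.5 * label + 0.15 * group + rng.normal(0.0, 0.2, n)  # scores skewed by group

for threshold in (0.2, 0.3, 0.4, 0.5):
    pred = score >= threshold
    accuracy = (pred == label).mean()
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    print(f"threshold {threshold:.1f}: accuracy {accuracy:.2f}, parity gap {gap:.2f}")
```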

Ultimately, he says, AI systems should be supporting organizations to make better decisions. Regarding diversity (the third D), it’s up to the organization to define fairness, morality, privacy, transparency and explainability. “The future lies in humans and machines working together to advance society rather than just us being dependent on machines with AI,” Antic says. “It’s really about integration and working in this human-machine relationship, which I think is the core.”

Listen to the full podcast episode of ‘Maximising the Benefits of Data and AI’ here.
