
AiThority Interview with Triveni Gandhi, PhD and Responsible AI Lead at Dataiku


Hi, Triveni, welcome to our Interview Series. Please tell us a little bit about your journey in the AI and ML technology space. How did you start at Dataiku? 

Before joining Dataiku, I worked as a data analyst with a large non-profit dedicated to improving education outcomes in New York. My background actually stems from social science rather than a traditional technical track. As a data scientist, my Ph.D. in political science has enabled me to understand AI from a social equity perspective and has helped me bridge the gap between data scientists and end users by making data science more accessible to people coming from social science.

Please tell us more about your role in the Responsible AI domain. How do you define Responsible AI for your data engineers?

As Responsible AI Lead at Dataiku, I build and implement custom solutions to support the responsible and safe scaling of AI. Training that educates audiences on their use cases and on the personal biases embedded in data is crucial for responsible AI. While everyone in an organization can be responsible for AI, there must also be gatekeepers with oversight and management of these tools. We need to be able to think about the different ways in which harm can be experienced: not just data bias, but how a system is going to be used and what the deployment issues are.

How does AI magnify harmful elements in society or businesses without “responsible” guidelines?

AI is a reflection of our world – it holds a mirror up to what is already present in our society and systems. Thus, without mitigating those harmful elements or providing responsible guidelines, we run the risk of repeating existing biases and harms. Cases of AI systems replicating biases across gender, race, and even age groups are numerous (you can check out the AI Incident Database for a full list of ongoing harms). Responsible guidelines ask developers to first understand the kind of context they are building AI systems in and think through the type of harm they want to minimize before they send a model out to deployment.

Who regulates the responsibility of AI models?

The responsibility to regulate AI models lies within governments, standards organizations, and enterprises themselves. Regulating technology across domains and borders is important for building more trusted and reliable systems overall and should not be seen as an impediment to progress.

Every company will have its own ethics and values that can guide the development of its AI. Platforms like Dataiku can help organizations put those principles into practice. With end-to-end capabilities for data processing, model building and deployment, and AI governance, Dataiku makes it easy for anyone to get value from AI while staying aligned with their stated AI intentions.

In the era of ChatGPT and generative AI, how does Dataiku continue to lead the market with its breakthrough AI? Tell us more about the AI engine behind your recent AI developments.

With nearly $600M in funding and a $3.7B valuation, Dataiku is one of the top AI startups in the world. Dataiku’s platform is built with many different users in mind, from data scientists and cloud architects to analysts and line-of-business managers. Customers can take a traditionally complex process and systemize it from start to finish, making it faster and simpler to get AI online and deliver business results.

Dataiku is the platform for Everyday AI, enabling data experts and domain experts to work together to build AI into their daily operations. Together, they design, develop, and deploy new AI capabilities at all scales and in all industries. Organizations that use Dataiku enable their people to be extraordinary, creating the AI that will power their company into the future. More than 500 companies worldwide use Dataiku, including leaders in life sciences, logistics, retail, manufacturing, energy, financial services, software, and technology. With a strong focus on the Forbes Global 2000, Dataiku also supports non-profits and academic institutions through its AI-for-Good initiatives.

How can AI innovations be made more inclusive and equitable?

Inclusivity is often overlooked when developing AI systems, which leads to repeated inequalities, particularly for end users of color, with harmful effects when scaled. Increasing diversity in tech, implementing blueprints for data products, and emphasizing transparency are all ways data practitioners and end users can prevent bias and improve AI systems. Adding new and different voices to the development table also requires a culture of collaboration and transparency among the different teams building AI, and emphasis should be placed on thinking through the implications of AI for all potential users and groups.

What is your take on AI ethics and the democratization of data science ecosystems?

The biggest issue in AI ethics right now is operationalizing AI ethics in a way that fully meets the expectations set by its principles. This is a difficult but not impossible issue to address. Some approaches to putting AI ethics into practice include setting up processes for accountability and transparency across all AI projects, developing models with bias and fairness issues in mind, and monitoring pipelines in production. It is also important that education practices within data science highlight the relevance of social impact and potential harms as part of any training or upskilling course, so that new entrants to the field have a baseline understanding of the ethics of AI.
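To make the point about developing models "with bias and fairness issues in mind" concrete, here is a minimal, illustrative sketch of the kind of pre-deployment check a team might automate: it computes a demographic parity gap (the difference in positive-prediction rates across groups) on a scored dataset and flags the model for review if the gap exceeds an agreed tolerance. The column names, toy data, and 0.10 threshold are hypothetical placeholders, not Dataiku functionality or a method described in this interview.

import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str = "group",
                                  pred_col: str = "prediction") -> float:
    """Largest gap in positive-prediction rates across demographic groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    # Toy scored dataset: model predictions (1 = positive outcome) by group.
    scored = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "prediction": [1,   1,   0,   1,   0,   0],
    })
    gap = demographic_parity_difference(scored)
    print(f"Demographic parity difference: {gap:.2f}")
    # A simple gate: hold deployment for review if the gap exceeds a
    # (hypothetical) tolerance agreed on by the governance team.
    if gap > 0.10:
        print("Gap exceeds tolerance; hold deployment for review.")

A check like this is only one narrow signal; as the interview notes, accountability processes and production monitoring matter as much as any single metric.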

Thank you, Triveni! That was fun and we hope to see you back on AiThority.com soon.

[To share your insights with us, please write to sghosh@martechseries.com]

Triveni is a Jill-of-all-trades data scientist, thought leader, and advocate for the responsible use of AI who likes to find simple solutions to complicated problems. As Responsible AI Lead at Dataiku, she builds and implements custom solutions to support the responsible and safe scaling of AI. Previously, Triveni worked as a data analyst with a large non-profit dedicated to improving education outcomes in NYC. She holds a Ph.D. in Political Science from Cornell University.



Founded in 2013, Dataiku has continued to deliver on its founding vision for Everyday AI and to execute on its growth. With more than 500 customers and more than 1,000 employees, Dataiku is proud of its rapid growth and 95% retention of Forbes Global 2000 customers.
