
Gigster’s CTO Predicts Future Belongs to Edge Computing, Deep AI and Embedded Analytics

We recently covered Google Cloud AI partners and how they identify and solve Big Data problems for customers. Continuing our conversation with Gigster’s CTO Debo Olaosebikan, we learned how the company has advanced in 2019 as a major enabler of IT modernization and digital transformation.

Many futurists would be embarrassed to have their predictions analyzed years later. But Debo Olaosebikan, CTO of Gigster, is giving our readers an exclusive mid-year check-in on his end-of-2018 predictions for 2019. Debo, who oversees technology for this leader in developing Digital Transformation applications, made his predictions to help all of us understand trends related to AI and Digital Transformation in the enterprise. Remarkably, Debo’s predictions are already proving correct.

Debo’s last prediction on Edge AI is one I think will be very interesting to watch as 2019 progresses.

Here is his assessment of his own predictions for the AI industry, and the areas data science professionals might focus on.

Prediction: Expect IBM, Google, Microsoft, Amazon and providers of Machine Learning APIs to release more inclusive datasets to combat embedded discrimination and bias in AI.

Machine Learning is dominating Artificial Intelligence and driving success in diverse applications. In Machine Learning, decisions are learned from existing data records of human decisions and labels. To teach a computer to distinguish a dog from a cat, we show it images of dogs and cats so it can learn the difference. The problem here is bias: if we present computers with the labels and decisions of humans, the computer may replicate our biases.
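The learning loop described above, in which a model absorbs whatever patterns its human-provided labels contain, bias included, can be sketched with a toy nearest-neighbor classifier. All data and names below are hypothetical, invented purely for illustration:

```python
# Toy illustration: a model trained on biased human labels reproduces the bias.
# The data is synthetic; "score" stands in for any learned feature.

def nearest_neighbor_predict(train, query):
    """Predict the label of the closest training example (1-nearest-neighbor)."""
    closest = min(train, key=lambda example: abs(example[0] - query))
    return closest[1]

# Human-labelled training data: (score, label). Suppose human reviewers
# systematically labelled borderline cases as "reject".
biased_training_data = [
    (1, "reject"), (3, "reject"), (5, "reject"),  # biased borderline labels
    (7, "accept"), (9, "accept"),
]

# The model has no notion of fairness; it simply mirrors its labels.
print(nearest_neighbor_predict(biased_training_data, 5))  # reject
print(nearest_neighbor_predict(biased_training_data, 8))  # accept
```

The point of the sketch is that no line of the algorithm is "biased" on its own; the bias lives entirely in the labels, which is exactly why more inclusive datasets matter.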

Another issue is selection bias, when the data itself is not representative of the broader group we want to understand. For example, work by Joy Buolamwini and Timnit Gebru shows that for the task of classifying a person’s gender, major commercially available computer vision products performed best on images of light-skinned men and worst on images of dark-skinned women. It is a major problem if datasets fail to capture broader cultural nuances because they lack a sufficient number of people of color. Fortunately, in 2019, large companies with major computer vision products will release more inclusive datasets that are better balanced in terms of geography, race, gender, cultural concepts and other dimensions. This open release will supercharge research on minimizing bias in AI.
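The kind of audit Buolamwini and Gebru performed, comparing a model’s accuracy across demographic subgroups, can be sketched as below. The records are synthetic and the field names are hypothetical; a real audit would score a model’s predictions on a benchmark dataset:

```python
# Sketch of a subgroup-accuracy audit: per-group accuracy makes disparities
# visible that a single aggregate accuracy number would hide.
records = [
    {"group": "light-skinned men",  "correct": True},
    {"group": "light-skinned men",  "correct": True},
    {"group": "dark-skinned women", "correct": True},
    {"group": "dark-skinned women", "correct": False},
    {"group": "dark-skinned women", "correct": False},
]

def accuracy_by_group(rows):
    """Return the fraction of correct predictions for each subgroup."""
    totals, hits = {}, {}
    for row in rows:
        g = row["group"]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + int(row["correct"])
    return {g: hits[g] / totals[g] for g in totals}

print(accuracy_by_group(records))
```

On this toy data the audit reports perfect accuracy for one group and one-third for the other, the shape of disparity the study documented in real commercial products.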

Read More: Gigster’s CTO Chalks Out a Roadmap for All AI ML Engineers and DevOps Product Managers

Mid 2019 Check-in: Earlier this year, IBM released the Diversity in Faces Dataset – a dataset of 1 million images intended to advance the study of diversity. Expect this trend to continue, especially as controversy grows over the increasing use of computer vision in security and surveillance applications.

Prediction: Adoption of AI in Healthcare and Financial services will rise as products that make previously black-box AI decisions more interpretable become mainstream.

Previously, AI was based on algorithms making decisions that could be easily explained. For example, an algorithm that asks whether you have a headache, then checks whether you have a fever, and concludes that you have the flu is interpretable.

Regardless of whether the algorithm made the right or wrong prediction, there is huge value in the fact that it is possible to explain how it made its decision.
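The headache-and-fever example above can be written as a tiny rule-based function whose reasoning is readable directly from its code. This is a hypothetical illustration of interpretability, not any vendor’s system:

```python
def diagnose(has_headache: bool, has_fever: bool) -> str:
    """A fully interpretable rule-based 'diagnosis': every decision path
    can be read straight from the code and explained to a patient."""
    if has_headache and has_fever:
        return "flu"            # both symptoms present -> conclude flu
    if has_fever:
        return "fever only"     # partial match -> a different, explainable answer
    return "no flu"             # neither rule fired

print(diagnose(True, True))   # flu, and we can say exactly which rules fired
```

Contrast this with a deep neural network, where the "rules" are millions of learned weights; tools like InterpretML aim to recover this kind of human-readable explanation for such models.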


Mid 2019 Check-in: Microsoft recently open-sourced InterpretML – a toolkit for advancing interpretability in Machine Learning systems. Expect more tools like this to come online, as well as more models that purport to be explainable out of the box.

Read More: Google’s Cloud AI Partners and How They Solve Big Problems

Prediction: Algorithms versus algorithms.

There will be successful AI-powered hacks of AI systems that go beyond “fake news.”

It’s time to finally take the battle to the creators of ‘deepfakes’, who use AI and ML technology to morph images, videos and other digital assets to promote hate, lies, propaganda and terror.

More sophisticated techniques for generating fake but realistic images and videos create new security issues for self-driving cars and other mission-critical systems. Thus far, public concern has centered on images, videos and audio – broadly speaking, the proliferation of “fake media” and “fake news.” We’ll soon see demonstrations of attacks that generate convincing but fake textual data, which can cause problems in automated decision-making around crucial tasks such as credit scoring and extracting data from documents.

Mid 2019 Check-in: We’ve already seen a doctored video of U.S. Speaker of the House Nancy Pelosi, as well as an AI-generated face apparently used by a spy to connect with numerous Washington targets. While we haven’t yet had a major incident of textual data being corrupted in misleading ways in business settings, this prediction correctly calls out the increased prominence of text and natural language in the misinformation conversation.

The most prominent incident so far was the release of OpenAI’s text generator, which produces extremely convincing long stories from simple prompts. The results were striking enough that OpenAI decided it would be safer not to open-source the full model, because of uncertainty about how it could be misused. As language models get more advanced, the misinformation surface area expands. It will be interesting to see whether and how these advances are leveraged to attack automated systems, and not just to misinform humans.

Read More: The AI Marketing Mistake You’re Making in Healthcare

Prediction: Increasing demands for privacy will push more AI to happen on the Edge, and large internet giants will invest in edge AI to gain a competitive advantage.

As consumers become reluctant to hand off their data to large internet companies, companies will exploit this opportunity with services that do not require sending data to the cloud. A combination of hardware advances and a more privacy-aware climate will drive more Machine Learning to happen directly on mobile devices. Examples include Apple’s intelligent processing (running Machine Learning models) on mobile devices instead of in the cloud. In 2019, we will see this trend accelerate, and more of the mobile, smart home and IoT ecosystem will move Machine Learning work to the edge.

Mid 2019 Check-in: Aligned with this prediction, Google released an upgrade to Gboard, its virtual keyboard app, that supports a dictation feature powered by on-device speech recognition. By running the Machine Learning model on the device, the feature does not require a network connection and is faster and more private, since user audio can be processed locally. Expect software and hardware advances to further this trend.

Read More: G20 Summit Brings AI, Data, Privacy and Security to the Forefront
