Humility and AI: Operating within the comfort zone

Microland
Sep 3, 2021 · 5 min read


Data gives Artificial Intelligence (AI) its potency. With the breakneck growth in data and connectivity, AI has become the go-to technology for solving the world’s problems. AI can find friends for us, help address climate change, predict how the COVID-19 virus will evolve, improve cybersecurity, tell CPG organizations what to produce for us, change the way we learn and make driving safer. There is no part of life that AI has left untouched. With data generation and consumption forecast to grow from 64.2 zettabytes in 2020 to 180 zettabytes in 2025, AI will only get bigger. Analysts forecast that the AI market will grow from ~$327.5 billion in 2021 to $500 billion in 2024. Bigger is good, but the real question is whether we can make AI better. The answer lies in introducing humility into the science of AI. Armed with humility, AI will prove to be a long-term friend of businesses, consumers and society.

On the path to commercialization

For the moment, there is widespread debate around the social, ethical and political impact of AI. About a year ago, Genderify, an AI tool designed to identify a person’s gender by analyzing their name, username and email address, shut down after a backlash on social media: its learning algorithms relied on assumptions that reinforced gender stereotypes. In another incident, Harrisburg University in Pennsylvania retracted its study, “A Deep Neural Network Model to Predict Criminality Using Image Processing,” after a public outcry; AI researchers and scholars pointed out that the facial recognition methodology it proposed amplified certain forms of discrimination. IBM’s Watson for Oncology had to be withdrawn after medical specialists demonstrated that the AI was unsafe and was making incorrect treatment recommendations. And in one recent experiment, an AI system being tested with a fake patient in the healthcare industry went so far as to encourage suicide.

The reasons for these failures are manifold. They range from the absence of diversity in training data to shortcomings in the deep learning models themselves, inappropriate model choices, and gaps in industry knowledge. Unless these are set right, effectively commercializing AI may prove difficult.

Fortunately, for every failure of AI, there are several — perhaps hundreds — of successes. It is a powerful technology and the responsibility to make it work, to uphold the ethical standards and social benchmarks that define mankind, lies with organizations using the technology.

The coming regulatory tsunami

Organizations must take responsibility for developing safe and ethical AI. This means investing in training and testing AI models, creating uniform standards, monitoring the health and output of AI, and investigating the poor decisions AI makes (just as we do for humans).

Governments have been dithering over how to address AI, and the technology is therefore being developed without adequate oversight from regulators. The European Union (EU) is setting this right. The EU believes that consumers and society should not be forced to accept the outcomes of AI without having any control over it, and it has recently proposed a regulatory framework for AI, complete with suggested penalties. The goal is to “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.”

As many commentators have pointed out, AI is not just a technology but also a political battleground for global supremacy. It is worth reading the commentary on the EU proposal by Meredith Broadbent, Senior Adviser, Scholl Chair in International Business at the Center for Strategic and International Studies. She contends that, in light of the aggressive posture adopted by China, the US and Europe share common ground on AI and should agree “on a basic framework of topline, democratic, regulatory principles for AI that can be promoted with trading partners in the Asia-Pacific…” In plain terms: technology is creating political tensions, and a slew of AI regulations with worldwide impact can be expected in the coming months. If those regulatory principles encourage humility in AI, the technology will have begun its ascent to becoming the problem-solver the world is looking for.

The spirit of deference and submission

Regardless of regulations, there is a need for ethical AI, above and beyond the baked-in transparency and explainability that coming regulations will force AI to demonstrate. There is a deeper need to infuse humility into AI: a quality that helps it stay “aware” of its own limitations, work within them and appreciate the motivations and drivers of human actions. Humility is defined as the spirit of deference or submission. That is what AI must embody if it is to become trusted and useful.

How should organizations think about bringing humility to their AI so they can continue to use the technology without stumbling over yet-to-come policy? Humility in AI works to ensure that uncertainty in the behavior of decision-making systems is contained rather than ignored. When uncertainty arises, the system pauses and looks for alternative solutions: rather than pressing ahead with a probabilistic answer, it falls back on a deterministic model, forgoing some of the performance expected of it. It chooses caution. Such AI is built on the premise that it is preferable to consult a human (in society, this translates to the tremendously valued ability “to seek help and listen”) and to align with human values and motivations, instead of going ahead with “smart” decisions that could be detrimental to the user.
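To make this concrete, here is a minimal sketch of what such a “humble” decision layer could look like. It assumes a scikit-learn-style classifier that exposes predict_proba; the names HumbleClassifier, fallback_rule and ask_human are illustrative only, not part of any specific product or standard.

```python
# Minimal sketch of a "humble" decision wrapper (illustrative names only).
# Assumes a scikit-learn-style model exposing predict_proba() and classes_.
from dataclasses import dataclass
from typing import Any, Callable, Optional


@dataclass
class Decision:
    label: Any          # chosen outcome, or None if the system abstains
    source: str         # "model", "fallback", "human" or "abstain"
    confidence: float   # the model's top-class probability


class HumbleClassifier:
    def __init__(self, model, threshold: float = 0.9,
                 fallback_rule: Optional[Callable[[Any], Any]] = None,
                 ask_human: Optional[Callable[[Any], Any]] = None):
        self.model = model                  # trained probabilistic model
        self.threshold = threshold          # boundary of the "comfort zone"
        self.fallback_rule = fallback_rule  # cautious, deterministic rule
        self.ask_human = ask_human          # hook to route the case to a person

    def decide(self, x) -> Decision:
        proba = self.model.predict_proba([x])[0]
        confidence = float(proba.max())
        if confidence >= self.threshold:
            # Within the comfort zone: act on the model's prediction.
            return Decision(self.model.classes_[proba.argmax()], "model", confidence)
        if self.fallback_rule is not None:
            # Uncertain: forgo performance and apply a deterministic rule.
            return Decision(self.fallback_rule(x), "fallback", confidence)
        if self.ask_human is not None:
            # No safe rule applies: pause and seek human judgment.
            return Decision(self.ask_human(x), "human", confidence)
        # Nothing else available: explicitly decline to decide.
        return Decision(None, "abstain", confidence)
```

In practice, the threshold defines the comfort zone for a given use case, and every deferred or abstained case becomes raw material for the monitoring, auditing and retraining described earlier.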

This AI is humble; it knows its limitations. Its users understand those limitations and are comfortable with them. Humble AI is never forced to operate beyond its “comfort zone”. As humans, we will trust such systems.
