AI: The race for regulation

Microland
4 min read · Apr 27, 2023


There is so much good that AI is achieving that it can almost blind us to its evils. While skeptics argue that AI can only regurgitate known facts — after all, it looks up existing data to arrive at its inferences — AI has been instrumental in identifying facts that might otherwise have escaped our notice. In February, for example, researchers at the University of Georgia used AI to identify a new planet outside our solar system by sifting through data humans had already analyzed, pushing space exploration into new frontiers. AI is also helping doctors diagnose cancer more accurately than traditional methods and aiding treatment planning and monitoring, making life better on this planet.

But AI was also recently used to clone a teenager’s sobbing voice, claim she had been kidnapped, and demand a ransom of US$1 million from her parents (fortunately, the teenager’s mother figured out it was a hoax, though she was terrified nevertheless). In 2020, University College London (UCL) compiled a ranked list of 20 AI-enabled crimes (the research findings are published in Crime Science). Commenting on the findings at the time, Professor Shane Johnson, Director of the Dawes Centre for Future Crime at UCL, said, “We live in an ever-changing world which creates new opportunities — good and bad. As such, it is imperative that we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur.” Today, the race to regulate AI is well and truly on.

China has set in motion the race to regulate AI, a field of computer science that is creating new headlines — good and bad — every day. The Cyberspace Administration of China is making technology companies responsible for using legitimate sources of pre-training data that reflect the core values of socialism and do not allow for the subversion of state power, incitement to split the country, or the undermining of national unity. The regulations will also ensure that personal data cannot be used to train models, and they extend to AI applications that determine payments, control searches, and recommend videos.

The US hasn’t been far behind. In April, its National Telecommunications and Information Administration, which advises the White House on telecommunications and information policy, said it wanted to bring an “accountability mechanism” to AI and identify measures that provide assurance “that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.” In March, Italy became the first country to block ChatGPT until OpenAI, its maker, provided an age-verification system to prevent minors from being exposed to unsuitable material (a requirement OpenAI appears to be complying with). The European Data Protection Board has launched a task force on ChatGPT “to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.”

Japan is planning to discuss regulatory policy at the 2023 G7 summit it is hosting shortly, with the aim of building global consensus. Japan has been ahead of the game: in 2019, it published the Social Principles of Human-Centric AI, focusing on three philosophies: human dignity, diversity and inclusion, and sustainability. Facing the challenges of a declining birthrate, an aging population, and rural depopulation, Japan considers AI a key technology “to rescue society from these problems.” Similarly, Australia has eight voluntary AI Ethics Principles designed to ensure AI is safe, secure, and reliable. The race to shield users from the harm AI has the potential to cause is truly on, and it is not unreasonable to assume that the world will quickly align itself to contain that harm.

Each nation coming to the AI regulatory table has its own primary agenda, from protecting the state to protecting young people, consumers, and society at large. Doubtless, these are potent considerations that demand the time and effort being invested in regulating AI. The good news? The OECD.AI Policy Observatory lists 69 countries with over 800 AI policy initiatives, including regulations, platforms, and public awareness programs. Admittedly, regulatory activity has yet to match the pace of AI development, but this is a great start.

Alarmed by the pace of AI development, the Future of Life Institute recently published an open letter calling for a six-month pause on giant AI experiments. “Should we risk loss of control of our civilization?” the letter asked. Among the luminaries who signed it were Yoshua Bengio, Turing Award winner and professor at the University of Montreal; Elon Musk, CEO of SpaceX, Tesla, and Twitter; Steve Wozniak, co-founder of Apple; and Yuval Noah Harari, author and professor at the Hebrew University of Jerusalem.

As always, regulations tend to emerge only after we discover that we have been making mistakes in how we use a technology, and regulation carries the risk of killing the innovation that technology germinates. In the case of AI, regulation is both necessary and urgent, so we can expect it to be created in a hurry.

The next few months will test the wisdom of those creating the regulations. The regulations must continue to allow us to take humankind far beyond the limits of the known universe while making the planet a better place to live.

Written by Microland

Making Digital Happen. Find out more at www.microland.com.
