Looking Back, Looking Forward: The year in generative AI

Microland
4 min read · Dec 7, 2023

Over the last 12 months, generative AI has captured and dominated center stage in technology. Meanwhile, crypto, NFTs, and the metaverse, technologies once seen as ascendant, are still languishing on the back burner of business, despite having been around far longer. Bitcoin appeared in 2008, NFTs arrived in 2014 with Quantum, and the metaverse first appeared in Neal Stephenson’s 1992 sci-fi novel Snow Crash. OpenAI’s ChatGPT, the star in the generative AI stable, by contrast, celebrated its first birthday as recently as 30 November 2023. It must be conceded, though, that the modern history of generative AI rests on Ian J. Goodfellow’s work on generative adversarial networks dating back to 2014. Purists will go further back, past Alan Turing and John McCarthy, to Hidden Markov Models (1960s) and perhaps even further to Gaussian Mixture Models (around 1846). Whatever your view on the recency of the technology, its future is incontestable: Gartner predicts that more than 80 percent of enterprises will have used generative AI APIs or deployed generative AI-enabled applications by 2026.

An MIT symposium at the end of November 2023 offered two contrasting perspectives on the future of generative AI. The keynote speaker, Rodney Brooks, co-founder of iRobot and a former director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), cautioned enthusiasts against overestimating the capabilities of the technology. “No one technology has ever surpassed everything else,” he said. Conservatives will love this. On the other hand, Daniela Rus, the current CSAIL director, said the technology had blurred the distinction between science fiction and reality and stood “as a source of hope and a force for good.” Technology evangelists will sign up for her view immediately. The truth lies somewhere in between.

Over the last 12 months, we have seen why the unguarded optimism around generative AI is coupled with fearful caution. In March, I wrote about the ethical challenges around generative AI, which, if not addressed, could result in a flood of misinformation and reduce the value of intellectual capital to zero. By April, governments from China to Japan and from the US to the EU were racing to implement generative AI regulations. I wrote about that here, but looking back, it has become clear that the ethical challenges generative AI presents will consume the most intelligent minds in the coming months, and that an uncomfortable battle will erupt between technologists, legislators, and regulators.

By June of 2023, there was great excitement around the giant leap forward that generative AI was enabling, and it was time to step back and take a quick snapshot of the history of AI and the “science fiction” it was now making real. By August, we were looking for trends in generative AI, poking around, trying to decipher the shape of things to come. And in October, it seemed appropriate to ask how and when generative AI would plateau and what we could do to delay that eventuality. We now know that generative AI plateaus only when the data is exhausted and the talent available to exploit the technology cannot keep pace. But here is the interesting part: generative AI is about to throw the world into a next-level spin with Q* (pronounced Q-star). The shadowy Q* model reportedly being developed by OpenAI is said to solve mathematical problems it has not been trained on. Apparently, Q* can apply symbolic reasoning to abstract concepts, a key element of mathematics. All eyes will be on Q* in 2024.

Meanwhile, generative AI is changing everything. NASA is using it to develop lightweight space instruments, making space exploration cheaper and more effective. Microsoft’s Disability Answer Desk, working with Be My Eyes, a service set up to make products more accessible to an estimated 285 million people who are blind or have low vision, is using it too. Be My Eyes combines natural language conversations with GPT-4’s vision model, letting customers share their screens for the system to interpret. Law enforcement agencies could, potentially, reconstruct events by using generative AI to enhance MRI scans of witnesses’ brain activity long after a crime has occurred. These are just a few of the remarkable applications changing the world as we know it.
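To make the screen-sharing idea concrete, here is a minimal, illustrative sketch of how an accessibility assistant might ask a vision-capable model to describe a screenshot in plain language. This is not Be My Eyes’ actual implementation; it assumes the OpenAI Python SDK, a vision-capable model name from late 2023, an API key in the environment, and a hypothetical screenshot URL.

```python
import os
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x) is installed

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Hypothetical screenshot a customer wants described.
screenshot_url = "https://example.com/support-screenshot.png"

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # a vision-capable model; the exact name may differ
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe what is shown on this screen for a user who is blind.",
                },
                {"type": "image_url", "image_url": {"url": screenshot_url}},
            ],
        }
    ],
    max_tokens=300,
)

# The model's natural-language description of the screenshot.
print(response.choices[0].message.content)
```

The pattern is simple: the shared screen is reduced to an image the model can reason about, and the rest of the interaction continues as an ordinary natural language conversation.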

But ingenious methods are being developed to fight the indiscriminate, unethical, and illegal use of intellectual property to train generative AI models. Nightshade and Glaze, developed at the University of Chicago, can poison visual work in ways that humans cannot discern but that mislead image generation models such as DALL-E, Midjourney, and Stable Diffusion. Generative AI developers are themselves trying to ensure their models do not provide harmful responses to users; however, as Carnegie Mellon University researchers have shown, it is a losing battle for the moment. Oversight, regulation, and voluntary disclosure are other areas that will come in for scrutiny in 2024.

Given the developments around generative AI, could 2024 be the year that determines the path we take toward artificial general intelligence (AGI) and toward better ethical standards for the advancement of mankind?
