Wednesday, April 12, 2023

Generative AI should make haste slowly

Tech companies are adopting three different approaches to releasing generative AI models: the cautious, the clever and the possibly crazy. Whichever approach prevails may well determine who makes the most money out of the artificial intelligence revolution, but it may also have far broader implications.

For years, tech companies have experimented with powerful generative AI models, which can almost magically conjure up text, images and code when prompted. But the Silicon Valley giants have been wary of opening these models up to the public for fear of embarrassing blowback. Google, which probably boasts the most AI expertise, has been notably cautious in giving access to its technology, although it has now, somewhat clumsily, released its own Bard chatbot.

Last November, Meta pulled its Galactica AI service three days after launch. Users were quick to ridicule Galactica — designed to summarise academic papers and generate articles — for spewing out nonsense. Shortly afterwards, OpenAI, the San Francisco-based research company, had a storming success with the launch of its ChatGPT model. Within two months, it had been tried by 100mn users, one of the fastest ever take-ups of a consumer internet service. The move triggered an explosion of investor interest. OpenAI has attracted a further $10bn of investment from Microsoft, at a $29bn valuation.

OpenAI was able to launch its model with relatively little embarrassment because it had built in some basic guardrails to prevent it from being maliciously exploited. Before release, OpenAI had fine-tuned its model using a technique known as Reinforcement Learning from Human Feedback, in which human labellers rank the model's responses and that feedback is used to steer the model away from toxic content. The millions of users who have since experimented with ChatGPT have provided priceless real-world feedback, allowing OpenAI to pursue further iterative deployment.
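For readers curious about the mechanics, the sketch below illustrates the preference-learning step at the heart of that technique: a small reward model is trained so that responses humans preferred score higher than those they rejected. It is a toy illustration only; the architecture, dimensions and data are hypothetical stand-ins, not OpenAI's actual implementation.

```python
# Toy sketch of the reward-modelling step in RLHF (Reinforcement Learning
# from Human Feedback). Human labellers rank pairs of model responses; a
# reward model is trained so the preferred response scores higher, via a
# Bradley-Terry pairwise loss. Everything here is illustrative: the
# embedding dimension, network and random "data" are made-up stand-ins.
import torch
import torch.nn as nn

EMBED_DIM = 16  # stand-in for a real language-model representation


class RewardModel(nn.Module):
    """Maps a response embedding to a single scalar reward."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)


def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: maximise the probability that the
    # human-preferred response outscores the rejected one,
    # P(chosen > rejected) = sigmoid(r_chosen - r_rejected).
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()


# Toy batch: embeddings of responses labellers preferred vs. rejected.
chosen = torch.randn(8, EMBED_DIM)
rejected = torch.randn(8, EMBED_DIM)

model = RewardModel(EMBED_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    loss = preference_loss(model(chosen), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()

# In a full RLHF pipeline, the trained reward model then scores candidate
# outputs during a reinforcement-learning phase (typically PPO), nudging
# the language model away from responses humans judged toxic or unhelpful.
```

In the real pipeline this reward signal, learned from human judgments, replaces a hand-written rulebook; that is what lets the guardrails generalise beyond the specific examples labellers have seen.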

Some researchers argue that the reluctance of big tech companies to release generative AI models stems from these dominant firms wanting to preserve their competitive moats. They also claim that human guardrails on generative models, such as those instituted by OpenAI, amount to censorship. Far better, they argue, for free market competition and free speech if these models are released unencumbered into the world. Scores of start-ups are now doing precisely that in a burst of creative, and somewhat crazy, deployment. The investment firm NFX has already identified some 450 start-ups looking to commercialise the technology, which have collectively raised about $12bn of funding.

Regulators are watching this latest frenzy with bewilderment. Generative AI models can be used for many creative ends, but they can also pump out industrial quantities of disinformation. Then there was the disconcerting content Microsoft’s Bing chatbot produced in recent interactions with a New York Times columnist, in which it professed to love him and admitted it harboured desires to hack computers and break free of its parent company’s rules. Yet it is hard to know how regulators could intervene even if they so desired. Technology cannot simply be uninvented.

One sensible provision contained in the EU’s proposed Artificial Intelligence Act is to prohibit companies from passing off bots as humans. It should be made clear to users whenever they interact with an AI-enabled technology. Beyond that, it is important for users to recognise generative AI models for what they are: mindless, probabilistic bots that have no intelligence, sentience or contextual understanding. They should be viewed as technological tools and should never masquerade as humans.
