Big tech players are in the driver’s seat of artificial intelligence

Events over the past week or so show that AI is still advancing by leaps and bounds. They also confirm that the biggest tech companies are in the driver’s seat.

But given the potentially harmful uses to which the latest generation of AI could be put, the world might not feel more fair – or safer – if the technology were more widespread.

Two advances in artificial intelligence have brought this point home with renewed vigor. One is a new Google language model called PaLM. According to Percy Liang, an associate professor of computer science at Stanford University, it is the biggest advance in such systems since the release of OpenAI’s GPT-3, the automated text generator that took the world of artificial intelligence by storm two years ago.

PaLM’s main claim to fame is that it can explain why a joke is funny with a reasonable degree of accuracy. This feat suggests that machines are beginning to make progress on difficult problems such as common sense and inference—although, as has always been the case in artificial intelligence, designing a system that can pull off a clever party trick is no guarantee of progress on a broader front.

Another development, from OpenAI last week, represents a leap forward in the new field of “multimodal” systems, which work with both text and images. Microsoft has invested $1 billion in OpenAI and has the exclusive right to commercialize its technology.

OpenAI’s latest system, known as Dall-E 2, takes a text prompt (say, “an astronaut riding a horse”) and turns it into a photo-realistic image. So long, Photoshop.

Because of their obvious applications, developers are racing to push systems like these out of the research lab and into the mainstream. They could have an impact in any data-rich area where machines can make recommendations or provide suggested answers, Liang says. OpenAI’s technology is already being used to suggest code to software programmers.

Writers and graphic designers could be next. It still takes a human to cherry-pick the output and find what really works. But as spurs to creativity, these systems are unparalleled.

Yet these are in some ways the worst-behaved of all AI models, and they come wrapped in caveats. One problem is their effect on global warming: they require a tremendous amount of computing power to train. And they reproduce all the biases in the (very large) datasets on which they’ve been trained.

They are also natural disinformation factories, mindlessly producing their best guesses in response to prompts without any understanding of what they are producing. Just because they know how to put words together doesn’t mean what they produce is true.

Then there is the risk of intentional misuse. CohereAI, a startup that built a smaller version of GPT-3, reserves the right in its terms of service to cut off users for things like “sharing divisive content created to turn the community against itself.”

The potential harms of generative AI models like these are not limited to language. A company that built a machine learning system to aid drug discovery, for example, recently experimented with changing some parameters in its model to see whether it would come up with less benign molecules. As reported in Nature, the system promptly began designing chemical warfare agents, including some said to be more dangerous than any publicly known.

For critics of big tech, the thought that a handful of powerful and unaccountable companies control these tools may set alarm bells ringing. But it could be even more alarming if they didn’t.

Publishing models like these was once considered good practice, so that other researchers could test the claims made for them and anyone using them could see how they worked. But because of the risks, the developers of today’s largest models have kept them under wraps.

This is already fueling the search for alternatives, as open-source developers and upstarts try to wrest some control of the technology from the big tech companies. CohereAI, for example, raised $125 million in venture capital last month. It might seem prohibitively expensive for a startup to compete with Google, but Cohere has an agreement to use the search company’s most powerful AI chips to train its own models.

Meanwhile, a group of independent researchers has set out to build a system similar to GPT-3 with the express goal of putting it in the public domain. The group, which calls itself EleutherAI, released an open-source version of a smaller model earlier this year.

Moves like this suggest that the big tech companies won’t have this entire field to themselves. Who will guard the frontiers of this powerful new technology is another matter.
