The real reason to be nervous about AI


In recent weeks, an unlikely drama has unfolded in the media. At the center of this drama is not a celebrity or a politician but a sprawling computer system, built by Google, called LaMDA (Language Model for Dialogue Applications). A Google engineer, Blake Lemoine, was suspended after claiming on Medium that LaMDA, with which he interacted via text, was “sentient”. That claim (and a subsequent Washington Post article) sparked a debate between people who believe Lemoine is merely stating an obvious truth – that machines can now, or soon will, exhibit intelligence, autonomy and sentience – and those who reject the claim as naive at best and deliberate misinformation at worst. Before I explain why I think the opponents of the sentience narrative are right, and why that narrative serves the power interests of the tech industry, let’s define what we’re talking about.

LaMDA is a large language model (LLM). LLMs ingest vast amounts of text – almost always from internet sources such as Wikipedia and Reddit – and, by iteratively applying statistical and probabilistic analysis, identify patterns in that text. That is the input side. Once “trained” – a word freighted with meaning in artificial intelligence (AI) – such models can be used to produce plausible text as output. The ELIZA program, created in the mid-1960s by MIT computer scientist Joseph Weizenbaum, is a famous early forerunner. ELIZA had neither access to a vast ocean of text nor the high-speed processing that LaMDA enjoys, but the basic premise was the same. One way to get a better handle on LLMs is to note that AI researchers Emily M. Bender and Timnit Gebru have called them “stochastic parrots.”
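To make the statistical-pattern-matching point concrete, here is a minimal, purely illustrative sketch of the simplest possible “stochastic parrot”: a word-level bigram generator. The tiny corpus, function names, and parameters below are invented for illustration; real LLMs such as LaMDA use transformer neural networks trained on vastly larger corpora. But the underlying idea – that fluent-looking text can emerge from nothing more than statistics over an input corpus, with no understanding anywhere in the loop – is the same.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a word-level bigram model.
# Illustrative only -- LaMDA and other LLMs are transformer networks,
# not bigram tables. The corpus here is a made-up placeholder.
corpus = (
    "the model predicts the next word from the words it has already seen "
    "and the output can look fluent even though the model understands nothing"
).split()

# Count which words are observed to follow each word.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed_word, length=12):
    """Emit text by repeatedly sampling a statistically plausible next word."""
    word = seed_word
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:  # dead end: no observed continuation
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Run it a few times and it will emit different, superficially sentence-like strings – a crude demonstration of how output can be “plausible” without any comprehension behind it.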

The growing use of LLMs has many troubling aspects. Computing at LLM scale requires enormous amounts of electrical power, most of which still comes from fossil sources, adding to climate change. The supply chains that feed these systems, and the human cost of extracting the raw materials for computer components, are also causes for concern. And there are pressing questions about what these systems should be used for, and for whose benefit.

The goal of most AI (a field that began as a pure research aspiration announced at a Dartmouth College conference in 1956 but is now driven by Silicon Valley’s priorities) is to replace human effort and skill with thinking machines. So whenever you hear about self-driving trucks or cars, instead of marveling at the technical prowess, you should be noticing the contours of an anti-worker agenda.

Futuristic promises about thinking machines do not hold up. They are hype, yes, but also a propaganda campaign by the tech industry to convince us that it has created, or is about to create, systems that can be doctors, chefs, even life companions.

A simple Google search for the phrase “AI will…” returns millions of results, usually accompanied by images of ominous sci-fi-style robots, suggesting that AI will soon replace humans in a dizzying array of domains. What is missing is any examination of how these systems actually work and what their limitations are. Once you pull back the curtain and see the wizard pulling the levers, straining to maintain the illusion, you start to wonder: why are we being told these stories?

Take the case of radiologists. In 2016, the computer scientist Geoffrey Hinton, convinced that automated analysis had surpassed human insight, declared that “we should stop training radiologists now.” Extensive research since then has shown the statement to be wildly premature. And while it is tempting to treat this as a momentarily embarrassing overreach, I think we need to ask about the political economy behind such statements.

Radiologists are expensive and, in the United States, in high demand, making them what some would call a labor aristocracy. In the past, the resulting shortages were addressed by offering workers incentives. If the work could instead be automated, the skilled labor radiologists perform would be devalued, solving the scarcity problem while increasing the owners’ power over the staff who remain.

Promoting the idea of automated radiology, regardless of what the technology can actually do, is attractive to the owning class because it promises to weaken the power of labor and to increase profitability – via lower labor costs and greater scalability. Who wants robot taxis more than the owner of a taxi company?

I say promoting because there is a large gap between the hype and the reality. But that gap hardly matters to the larger goal of convincing the general population that their labor can be replaced by machines. The most important product of AI is not thinking machines – still a distant goal – but a demoralized population, subjected to a maze of brittle automated systems sold as better than the people who are forced to navigate life through them.

The AI debate may seem remote from everyday life, but the stakes are extraordinarily high. Such systems already help determine who gets hired and fired, who receives benefits, and what makes its way onto our roads, even though they are untrustworthy, error-prone, and no substitute for human judgment.

And there is an additional peril: unreliable as they are, such systems are being used, bit by bit, to deflect blame from the companies that deploy them by invoking the idea of “sentience”.

This escape from corporate accountability may be the greatest danger these systems pose.
