Transparency and Accountability, Not Apocalyptic Fear-Mongering, Are the Key to Regulating AI

Twice now, leaders of the tech industry and AI luminaries have warned us that the technology they created could inadvertently lead to dangerous outcomes that imperil humanity itself. The recently released single-sentence statement by the Center for AI Safety, signed by the likes of Sam Altman and Bill Gates, states that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This comes on the heels of another open letter, published in March after OpenAI released GPT-4. That letter, from the Future of Life Institute, called for a six-month pause on giant AI experiments more powerful than GPT-4 and was signed by similar luminaries, including Elon Musk, Steve Wozniak, and Yuval Noah Harari. The question is: are these fears warranted, or do they distract from the real dangers of AI?

Real Harms in the Real World

There are plenty of real, already existing harms that can be attributed to the development and use of artificial intelligence as it stands today. While hardly the existential threat the aforementioned petitions would have us believe, the risks posed to marginalised individuals and communities are no less serious.

In 2021, before the ChatGPT hype brought LLMs frenzied media attention, Dr. Emily M. Bender, Dr. Timnit Gebru, Angelina McMillan-Major, and Dr. Margaret Mitchell published a paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, pointing out the neglected harms associated with LLMs.

First, they argue that the financial and environmental costs imposed in the process of training LLMs must be integral to any analysis. In particular, the emissions and energy consumption required for such training disproportionately harm the Global South / Majority World.

Second, they critique the training of models on expansive, uncurated datasets, which inevitably creates safety risks. Contrary to a common assertion in the tech industry, the sheer size of these training datasets does not prevent racist, sexist, homophobic, and otherwise discriminatory language and talking points from taking hold in LLMs. In her book Algorithms of Oppression, Safiya Umoja Noble likewise shows that bias in data leads to algorithmic discrimination against people of colour, especially women.

Third, because LLMs lack access to the ‘meaning’ of a text, they simply churn out probabilistic responses to any input, including many of the harmful outputs mentioned previously. Yet we find tech platforms waxing eloquent about the possible use of AI in content moderation, potentially further entrenching the same harms.

In the case of generative image AI, concerns have been raised about models being trained on datasets gathered without artists’ permission. Given that many of those who earn a living by selling their artwork already suffer from the precarity and uncertainty of the market, it is legitimate to worry about their work being used without authorisation or adequate compensation, often by some of the wealthiest companies in the world.

The close relationship between a number of founders and CEOs of AI-oriented companies and the Far Right has been well documented at this point. In the case of Clearview AI (whose technology was used by Immigration and Customs Enforcement in the US), the racist intent behind developing a powerful facial recognition tool and collaborating with law enforcement and anti-immigration authorities has been evident from the start.

Marketing Through Fear

Two of the so-called ‘godfathers’ of AI have presented themselves as Robert Oppenheimer-esque figures who, upon realising what their creation is capable of, seek to spend the rest of their lives combating it.

Dr. Geoffrey Hinton’s resignation from Google after a decade-long collaboration, during which he received the prestigious Turing Award, drew headlines as he seemed poised to campaign against the products of his own creation. Hinton’s fears span a wide range of concerns, from killer robots to job losses in the services sector. His fellow ‘godfather’ Yoshua Bengio voiced similar worries, fearing that ‘bad actors’ could put AI to harmful ends, or that AI systems could even take such actions themselves.

In presenting AI as an enigmatic and unpredictable (though inevitable) form of technological development, Big Tech seeks to shield itself from responsibility for the harms already being observed. At the same time, it casts any governmental intervention or attempt at regulation as a bureaucratic hindrance imposed by an out-of-touch gerontocracy that cannot and does not understand the possibilities of technological advances.

AI itself needs to be understood as a marketing term rather than as a well-defined set of technological developments. The fear of missing out on a critical piece of technology is weaponised to drive the uptake of anything labelled AI, and tech giants have exploited this fear to expand their market share. Companies have been encouraged to retrofit AI into their products regardless of its applicability; indeed, monopolistic practices seem to be the driving force behind the current hype. In other cases, AI is invoked to justify the precaritisation of workers by raising the threat of labour obsolescence.

The media has been effectively deployed as an unwitting ally in generating what is being termed ‘AI hype’. Recent reports of a simulation conducted by the U.S. Air Force, in which an AI-enabled drone supposedly "killed" its human operator, are one such example (the story was later clarified to be a hypothetical thought experiment rather than an actual simulation). Another method of fuelling AI hype has been to claim that China is swiftly catching up with (or surpassing) the US in AI.

Critics have called out many of the ‘AI Doomers’ for essentially drawing attention away from documented and existing harms in order to promote fantastical and hypothetical dangers.

The impact of AI on the entertainment industry can be understood along similar lines. Streaming services initially offered freedom from traditional TV structures, allowing concise storylines told in fewer episodes. However, production houses used this as an opportunity to exclude writers from the production process. In the past, longer seasons meant writers collaborated with actors throughout production, working around 35-40 weeks a year. Today, production houses insist that writing be finished before shooting begins, leaving writers with only 8-10 weeks of work and reduced pay.

News headlines often oversimplify the issue of AI potentially replacing writers and actors. It is essential to recognise that behind this lies a more complex story of production houses and studios failing to compensate their workers fairly. While some companies, such as Netflix, Disney, and Sony, are hiring AI specialists, there is no concrete evidence that AI can or will replace human writers and actors. Employers may nevertheless promote this idea to deter workers from fighting for their rights.

In recognition of this dangerous trend, the Writers Guild of America (WGA) voted in April to go on strike over residuals from streaming media, demanding that "mandatory staffing" and "duration of employment" terms be added to their contract. When the Screen Actors Guild joined the strike in July, it marked the first such joint strike since 1960, when the actors' walkout was led by the then actor, and future US president, Ronald Reagan, who is known for the neoliberal austerity measures and anti-worker policies he introduced while in power.

Towards Effective Regulation

Calls for a regulatory framework to govern AI have emerged from all quarters. In his congressional testimony, OpenAI CEO Sam Altman called for a new government agency tasked with licensing AI models. Of course, a call for regulation can also be an effective means of legitimising a grift: one need only remember how Sam Bankman-Fried championed an ‘effective’ regulatory framework for cryptocurrencies.

Yet when the Italian regulatory authority insisted that OpenAI comply with the EU's GDPR, the company threw a minor tantrum, only falling in line after a month. Antonio Casilli points out that this was “reminiscent of old-time television tycoons trying to rally the audience against law enforcement”.

Effective regulation of AI cannot mean the appointment of a ‘regulator’ who can then be pushed around by Big Tech at will. A report published by the AI Now Institute provides an effective framework that goes beyond traditional policy recommendations. Relying on Big Tech to lead the conversation on its own regulation will only preserve its entrenched interests, leaving us with ‘checkbox’ regulations that let these companies disclaim all liability while taking no proactive measures to address the root causes of harm.

Finally, it is crucial for the media (old and new) to cast a more critical eye on the world of tech in its everyday reporting. Examining whether an AI tool is as effective as it claims to be should be an essential component of journalistic inquiry, rather than repeating the claims of tech founders and CEOs verbatim. Of course, in the age of algorithmically generated news articles, this level of rigour and reflexivity is easier said than done.

The Writers Guild of America also opposes AI-generated storylines or dialogue being regarded as “literary material”, the term in their contracts for scripts and other story forms a screenwriter produces. Given that production houses are likely to build their own generative AI tools, they would be able to syphon off credit (and wages) from writers made to ‘collaborate’ with AI.

What the current juncture demonstrates is precisely how AI is likely to be wielded as an instrument for serving capital and disciplining labour, rather than as a neutral technology. It is crucial to remember that there is nothing ‘inevitable’ about AI. Instead, certain people have an entrenched interest in fuelling a speculative bubble that can undermine workers’ rights in the long run. The push for AI uptake has more to do with the industry's need for a steady stream of data than with any actual benefit these technologies confer. It is from this position that we need to imagine collective bargaining.