AI: Promise and pitfalls

An introduction

TODAY’S advances in artificial intelligence (AI) may have seemed like the stuff of science fiction even just a few years ago – but would it be sci-fi of a utopian or dystopian bent?

To its cheerleaders, like billionaire venture capitalist Marc Andreessen, ‘AI is quite possibly the most important – and best – thing our civilisation has ever created’. Touted for its cutting-edge capabilities and near-boundless potential, it is seen as heralding a golden age of progress, productivity and creativity. The ‘boosters’ call for AI developers to be given free rein to work their magic, unshackled from overbearing regulation.

At the other end of the spectrum lie the ‘doomers’, who warn that, instead of saving the world, AI could very well end it. They flag the possibility that sufficiently advanced AI systems could gain autonomy from their human operators and proceed to wipe out everything – and everyone – that stands in their way. ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,’ stresses a statement published in May 2023 by the Center for AI Safety.

Neither the Pollyannas nor the Cassandras may be presenting the most accurate picture of AI, however. Consider the fact that the above statement on AI risk was signed by some of the leading lights of the AI community and technology industry, including the CEOs of top AI labs. Is this simply a case of the creators conscientiously sounding the alarm over their own creations, or could there be more to it than that?

After all, there are far more concrete and more immediate concerns around AI than the fear that it could one day go rogue and kill us all. Focusing on a hypothetical doomsday scenario diverts attention from the very real harms already being caused by AI. Tackling these harms would likely involve greater regulatory oversight, public scrutiny and expenditure of resources – all of which would not rank high on the AI industry’s wishlist. Yet, as the articles in this cover story of Third World Resurgence underline, these issues are too important to ignore and must be addressed, even as we continue making use of the myriad innovations AI brings.

In their article, Philip Di Salvo and Antje Scharenberg point to the existence of ‘AI errors’: as AI systems are ‘trained’ on human-collated datasets subject to bias and falsification, they can themselves perpetuate pre-existing forms of discrimination and bias. They give the example of a Black man in the United States who was wrongly arrested after being mistakenly identified by an AI-powered algorithmic policing system. Apart from such ‘errors’, AI could also be intentionally deployed to spread disinformation and propaganda, greatly skewing political and social discourse.

On the economic front, the longstanding worry of workers everywhere that they could be displaced by machines has now been amplified manifold with the rise of AI. As Garrison Lovely wrote in Jacobin: ‘Employers are already using AI to surveil, control, and exploit workers. But the real dream is to cut humans out of the loop.
… A compliant AGI [artificial general intelligence – AI with at least human-level capabilities in performing intellectual tasks] would be the worker capitalists can only dream of: tireless, motivated, and unburdened by the need for bathroom breaks.’

It is not just jobs that are set to be siphoned off by AI. The data centres which house the server computers running AI applications also consume energy and water voraciously, at the potential expense of environmental sustainability, as will be explained later within these pages. Besides the direct carbon footprint of AI infrastructure, even AI tools that are supposed to enhance energy efficiency can end up driving greater consumption and consequently increase total energy use, as Felippa Amanta points out in her article.

Above all, though, what an inordinate focus on AI’s supposed extinction risk obscures is who sits behind the technology’s controls. The recipe for success in AI creation comprises data, computing power, and engineer and developer talent – and all three ingredients have largely been hoovered up by just a handful of US tech companies, although their Chinese counterparts are not too far behind. With the barriers to entry so formidable, and with many AI programs being ‘black boxes’ whose inner workings are non-transparent and inaccessible, US Big Tech is well placed to retain its dominance.

Where does this leave the developing world, then? According to Cédric Leterme in his article in this issue, ‘countries in the South (with, of course, significant variations between them) tend to occupy the least enviable positions in AI value chains’. Many are reduced to supplying the minerals (like lithium and cobalt) and labour needed to assemble the material infrastructure of the AI and digital economy, with all the well-known problems of extractivism and exploitation this entails. Then there are the AI-era analogues of hewers of wood and drawers of water – ‘click workers’ in countries like Kenya employed in poorly paid and unstable jobs sorting and labelling the data used to train AI models. As for the data itself, that too is something the developing countries furnish in abundance – for nothing, most of the time. Computer scientist Kai-Fu Lee has remarked, as cited by Leterme: ‘If a country in Africa uses largely Facebook and Google, they will be providing their data to help Facebook and Google make more money….’

Instead of letting just a select few rake in all the profits and steer the course of AI research, ‘[s]ociety must discuss in democratic spaces whether and what type of AI should be developed, by whom, and for what’, as Cecilia Rikap contends in her piece in this issue. Such spaces include the United Nations, which could also, suggests Leterme, serve as the site for ‘a global digital governance architecture’. A more inclusive digital sphere would in turn facilitate more digital public goods such as open AI systems, open-source software and open data accessible to all.

Democratising AI won’t be easy, of course, but, like AI itself, it need not be confined to the imaginings of science fiction. – The Editors

*Third World Resurgence No. 359, 2024/2, pp 9-10