TWN  |  THIRD WORLD RESURGENCE |  ARCHIVE
THIRD WORLD RESURGENCE

Developing ethics for a demystified AI

By unravelling the false narrative of an all-powerful AI, it is possible to formulate a proactive ethics that orients the technology towards closing, instead of widening, the development gap between the global minority and the global majority.

Quito Tsui


IN ancient mythologies, creatures defying all sense of logic and possibility roamed the lands. They tamed seas, made mountains, and were responsible for the myriad of systems that governed our natural world. They could also be capricious and cruel, punishing and playing with humanity. At times, they were gods.

In many ways, this formula has been transferred to technologies. Echoing the genre of myth, discussions around emerging technologies are today infused with a sense of incomprehensibility – a fundamental inability to understand or audit the ‘decision-making’ of predictive tools – and an inviolable sense that these technologies defy our mortal ethical frameworks. In the pantheon of the technological gods, artificial intelligence (AI) would be Zeus.

It’s no wonder, then, that amid all these processes of AI myth-making it is increasingly difficult to know where one stands, or where we collectively ought to stand, in regard to AI technology. To figure this out, we have to undertake the parallel processes of demystifying AI and developing appropriate ethics.

This is easier said than done within an AI landscape that often defies simple communication. Storytelling around AI often overshadows sober discussion. Indeed, narratives surrounding AI pitch it as something beyond our wildest dreams – and therefore capable of remaking reality itself. Experts positing AI as a silver-bullet solution add to its dizzyingly elevated status, dwarfing attempts to articulate an ethical framework for the technology. The mythologisation of AI is a purposeful move: it serves to overwhelm, its scale seemingly beyond the reach of our pedestrian ethics. At the same time, we come to this question with a reactive ethics that is playing catch-up with AI – where AI, even though not fully formed, gets to shape our ethics.

The perceived ubiquity of AI, as both a solution and an inevitability, makes us feel like we do not have a scale of ethics to match. But it is vital to query this very scale of supposed incomprehensibility. This article seeks to explore how active querying of the realistic scale of AI can help us be proactive in developing an ethical framework that repositions AI well within our ethical grasp and intellectual comprehension.

Establishing the limits of AI

Understanding the limitations of AI technology can empower our ethical stance. But the semantic duality in discussions around AI can inhibit our ability to discern the limits of the technology. Particularly problematic are the linguistic distortions that result from the anthropomorphisation of AI. Champions of AI point to the ways in which its proximity to humanness – its supposedly self-aware consciousness, or its purported ability to mimic the cognitive functions of ‘perceiving, reasoning, learning … and exercising creativity’ – testifies to the technology’s power. Anthropomorphisation overstates the capacity of AI even as it excuses its shortcomings. We are told simultaneously that AI technologies possess the capability to existentially threaten humankind, and that the mistakes and limitations of AI are evidence of its ‘humanness’ – mere foibles that should endear us to the technology.

It is a heady mix to hold these two claims as simultaneous truths, and this unsustainable duality provides an entry point into unravelling the stories that make an AI ethics seem hard to grasp. Broadly, we can understand the key failings of AI thus: AI technologies struggle to grasp complexity and are hamstrung by the limits of their original training datasets. We would consider neither of these innovative, or even excusable, in humans, yet we are told to indulge the machines. Instead, we should see these failings for what they are: clear technological limitations that should inform how we use AI technology.

When AI’s defenders point to its supposed humanness, they are showing us exactly why AI cannot provide answers or solutions on the scale we are told it can. Once we have peeled away the layers of myth surrounding AI, we are left with a curiously limited technology – one we can begin to consider as an object within the scope of our ethical understanding, subject to established principles of social justice.

Ethics of action

So what, then, constitutes an ethics of AI? Discussions of AI ethics often focus on what we shouldn’t do, or on abstract goals such as transparency and accountability. All of this is important, but it does little to help us assess the utility and appropriateness of AI for the vast array of tasks we envision it fulfilling. Undoing the mythologisation of AI technologies requires closing the gap between AI’s technical capabilities and its governing ethics.

We should see this vagueness as a direct product of the ephemerality AI has been imbued with. In our ethics, much like our understanding of AI, we can instead seek tangible answers. What should we use AI for? Identifying and clarifying the use case, and by extension the purpose of developing AI, is fundamental to sober assessments of the technology.

In this article, I want to focus in particular on what a directive and proactive ethics of action for AI could look like in the context of development. A proactive ethics opens up room for us to consider how we might use AI to reimagine equitable development. In other words, how could AI be in service of something new and better, rather than AI in itself being seen as new and better? The development context allows us to embed technological progress into the question of shared economic upliftment.

Towards this end, what is the potential of an ethical approach to AI that prioritises AI for creative development? How might we use the process of demystifying AI to allow us to posit an AI ethics that is proactive: one that allows us to push against the historical determinacy of capitalism, and empowers us to seek collective forms of upliftment?

Reorienting the direction of AI

In the context of economic development, one key mechanism of mystification is at work: the creation of distance. AI is a technology fuelled by extraction, taking information and skills from one place and transferring them into a product that serves another. By severing the connection between producer and product, AI technologies make it more difficult to ensure that economic gain accrues to all the places that contribute to their production. Content moderators for social media platforms, or data labellers for machine learning, who are paid a pittance in the Global South very rarely have any connection to, or claim on, the Global North corporations profiting from their work. Indeed, data labellers in Kenya were not told whom their labelling served, or for what product or purpose, nor that their work was for the multi-billion-dollar company Scale AI, a supplier to some of the biggest names in AI. Both Scale AI and its competitors have spread this approach to AI labour across the Global South, further segmenting the production line of AI in the pursuit of cheap labour.

The diffused network that feeds AI’s expansion exacerbates this inaccessibility. AI takes the production line to new heights: where before, individuals placed a single bolt onto a car door ad infinitum, now they do not even know that the bolt is for a car door, let alone the car. With ever more diffused layers, the means of production of AI are obscured. As a result, ill-treated workers and communities living with the environmental impacts of building AI models have little capacity to challenge the systems wreaking havoc on their lives.

This distance is venerated by economists and other proponents of AI, who point to ways in which AI advances the supposed frontier of innovation. They point to abstract improvements in processing speed or to the efficiency gains AI technology can contribute to the economy. This framing is misleading; what they really point to is the way AI broadens the gap between human and machine, rendering humans mere cogs in the AI machine. In this environment, processes of mystification thrive.

I would argue that it is vital to reorient the direction of AI: instead of widening economic inequalities and extending the technological gap between spaces, AI should be used to close this gap.

The politics of development itself inhibits the potential economic utility of AI in the Global South. The historical lineage of the current model of development propped up by plunder, extraction and slavery – systems of white supremacy – has set up the global minority to enjoy the dividends of plunder in both the past and the present. Within this paradigm, the global majority has long experienced reduced opportunities, a differential that AI explicitly amplifies.

In fact, we can contextualise AI within a larger trend of technology widening the gap between the global majority and the global minority. The World Bank’s Identification for Development (ID4D) initiative, and the broader expansion of digital ID systems across the global majority world, means that profit and data both flow back to the global minority: the biometric technology underpinning these systems is often developed or provided by global minority companies. The use of technology by global minority donors, governments and corporations to entrench the spoils of ill-gotten gains is clear.

Rather than feeding the gap between global minority and global majority countries, AI should instead be aligned with trying to close this gap, to narrow the spaces in between, and reduce the margins on the edge of global development.

What does this mean in practice?

There is currently very little room to dream about alternatives to these models, or even about what capitalism might look like in a very different landscape to the one it emerged in. Those trying to tread a different path face high opportunity costs, in part because there are high barriers to entry to creating the kind of infrastructure and living standards that support economic growth and societal development. Additionally, trying new things is risky in an environment where incorrect choices can have substantial long-term consequences.

There are two clear ways in which a proactive AI ethics focused on reducing distance can be useful in addressing these central challenges:

1. Processing information we already have about infrastructure. A key facet of AI is the ability to process large amounts of data in a fairly mundane sense. Tapping into these more mundane abilities to process existing information on prior developmental and infrastructure decisions from urban planners, architects and policymakers can help paint a picture of how those decisions have panned out. The ability to then sift through this data and augment it with contextual and historical knowledge can allow new pathways to be mapped out, and previous pitfalls to be avoided. Rather than turning to AI technologies for answers, planners and policymakers can look to them as a synthesising tool that remains beholden to wider concerns and interests.

2. Supporting decision making about alternative futures. Imagining is often an expensive game and, at times, an impossible one given the strength of the status quo. But having the opportunity to dream, to explore different ways of growing, is critical to decision making. AI technologies can assist in visualising different scenarios, making alternatives more tangible for assessment and reflection. Using AI in this assistive rather than deterministic manner can reduce the start-up costs of undertaking economic development differently.

In this ethical paradigm, AI is confined to its utility in uncovering new and more contextually rooted pathways of development. By putting AI technologies functionally in service of larger imaginings, we can unpick the threads that have woven a narrative of AI as itself autonomously capable of imagining a new era.

Where to next?

A proactive approach to AI ethics posits that we should be explicit about the direction we want AI to face, the direction in which it should serve. By focusing the gaze of AI on responding to the needs of the global majority, and mobilising AI for those purposes in a directed manner, we can rein in its mystical status. In some ways this may seem limiting, and may at first glance appear insufficiently capacious for all that AI could be used for. And that is exactly the point. In developing an ethics of AI explicitly designed to limit AI’s reach, motivated by a desire to close economic and technological distance rather than increase it, we are forced to engage earnestly with what AI can realistically, meaningfully and ethically provide.

Saying no to the techno-solutionists’ voracious calls for the AI-ification of decision making is saying no to the whims of rudderless innovation. It is saying no to a surveillance capitalism under which environmental impacts and the loss of labour rights are seen as simply the cost of doing business. Though this approach does not reshape capitalism fundamentally, it is possible, in bending the capitalistic tendencies of AI to the majority will, to find a clarity of ethics rooted in a clear-eyed understanding of AI.

Perhaps the most pernicious myth of all is that of AI’s ungovernability – the tale that tells us we are incapable of either comprehending or curtailing AI’s power. An ethical framework that can concisely inform and direct when AI is used can serve as an important reminder that sometimes myths really are just stories.

Quito Tsui is a researcher coordinator at The Engine Room, where she works on technology in the context of conflict and humanitarian organisations, and the development of a technological environment rooted in human rights. Her other research work includes thinking about transitional justice, memory and how to go about the work of care. The above article was first published in Bot Populi (botpopuli.net) under a Creative Commons licence (CC BY-SA 4.0).

*Third World Resurgence No. 359, 2024/2, pp 23-25

