AI: More than Human exhibition advisor, Ramon Amaro explores some of the future impacts of data and artificial intelligence.
How is AI used?
It is widely known that AI is employed for a number of applications, such as online search engines, mobile apps, surveillance, weaponised and recreational drones, border-control systems, genetic sequencing, health diagnosis, financial markets, city planning, global logistics, manufacturing automation, and so on.
In many ways, it is hoped that these mathematically driven processes might materialise into new ways of understanding the world. The efficiency with which these processes operate was previously out of reach for most humans. However, inexpensive storage devices, powerful GPUs (graphics processing units), and more sophisticated mathematical techniques have extended our view of social, political, and ecological circumstances.
How is AI changing our view of the world?
World views have changed from those of sociality, relation, and experience to ones that rely on the simulation and fragmentation of life in the form of data analysis. Our current perceptions of the world are not only more granular, but have been replaced by models of a future we have yet to experience – which may or may not account for actual human needs or desires (or in some cases, might only account for the desires of some at the expense of others).
In other words, data is effective at, using Rob Kitchin’s words: ‘abstracting the world into categories, measures and other representational forms – numbers, characters, symbols, images, sounds, electromagnetic waves, bits – that constitute the building blocks from which information and knowledge are created’.
AI and the truth?
Put another way, data has been used throughout history to reduce our often-messy human interactions into one giant mathematical problem.
This implies that the production of data-driven knowledge is a collective process that emerges as a discordant symphony of humans, machines, violent and non-violent histories, symbols, and algorithms, not to mention our fantasies about the future. This is an important distinction since data is often thought to be isolated from human interaction.
Ezekiel Dixon-Román has written extensively on the relational aspects of data. While data are often measured or generated by human intervention, he writes, the human also inherits data under the premise of objective truth. According to Dixon-Román it is here that data needs to be investigated, as what is presented as ‘truth’ is rather ‘an ongoing process of the world trying to become intelligible to itself’.
AI as an extension of human discovery
It is clear that data are, in this way, performative. However, this performance is not necessarily rooted in the desire to find new forms of intelligences (however this might be defined), but lodged in a cycle of self-reflection. It might be claimed, then, that data as well as AI are important extensions of human discovery. Yet this empirical reality does not address the lived realities of race and discrimination, but instead distances humans and technology from the inherent inequitability in existing social relations. We are faced with a tension between a desire for something 'other than' and a commitment to what we've already become.
It is no secret that, in their present form, algorithms are susceptible to social discriminations, cultural biases, racisms, segregations, and other reductions of life chances. They echo an abundance of alienation. For some, this alienation necessitates the modelling, prediction, and regulation of social behaviour. Others aspire to engineer a future from limited points of view. The consequence is the production of a social body that is trapped between a past it seeks to return to (or escape from) and a future that is uncomfortably uncertain. Left unmitigated, AI can be overrun by this neurosis.
AI and social racisms in technology
While algorithms alone cannot comprehend the complexities of race or social racisms, they can replicate existing inequitable race dynamics. This is seen most readily in algorithms that produce racist and/or discriminatory outputs on widely recognised digital platforms, even when they are not designed with this intention.
For instance, racialised individuals are targeted for exploitation, discrimination, redlining, criminality and suspicion in credit card transactions, online payments, browsing habits, customer reward programmes, barcode scans, digital access points, biometric sampling, retinal scans, job applications, parole, drug testing, and other systems where AI is being deployed.
While the identification of social racisms in technology is nothing new, data enables an unprecedented penetration of discriminatory logics. The convergence of AI and control makes apparent that our over-dependence on data can, and does, arrest social conviviality.
Addressing algorithmic error and computational inefficiency with AI
Nonetheless, most AI algorithms remain hidden from public view while accountability for these operations is abstracted. Responsibility is directed away from what Sylvia Wynter describes as a recurrent substance of racial production – a human condition – to a problem of algorithmic error and computational inefficiency. As such, AI provides access to the interior architecture of race, while simultaneously prohibiting access to its own computational logics. These mechanisms do not merely reinstate systems of control. They are, as Wynter contends, a bioepistemic relation that flows through each of us, altering the nature of racial perception.
AI and the future of human relations
In terms of race and racial difference, the future of human relations has already arrived. It is one that blurs the lines between engineering, science, and accountability, while attempting to mask computationally aided forms of control. This is far from the promise of objectivity. Objectivity becomes, as it has always been, as elusive as it is dependent on the reduction of the world into a pre-determined set of categories.
Human difference, in this sense, is a projection of an already racialised imaginary enacted through technological solution – an imaginary that already understands the black, brown, criminalised, gendered, and otherwise Othered human as the principal site of exclusion, quantification, and social organisation.
Ramon Amaro is a lecturer in the Department of Visual Cultures at Goldsmiths, University of London and a researcher in the areas of machine learning, the philosophy of mathematics, black ontologies, and philosophies of being. He completed his PhD in Philosophy in the Department of Media, Communications and Cultural Studies (the former Centre for Cultural Studies) at Goldsmiths and holds a Master's degree in Sociological Research from the University of Essex and a BSc in Mechanical Engineering from the University of Michigan, Ann Arbor. Amaro was an advisor for AI: More than Human.
This essay was originally published in the AI: More than Human exhibition catalogue as 'AI and the Empirical Reality of a Racialised Future' by Ramon Amaro.
AI: More Than Human is a major exhibition exploring creative and scientific developments in AI, demonstrating its potential to revolutionise our lives. The exhibition takes place at the Barbican Centre, London from 16 May–26 Aug 2019.
Part of Life Rewired, our 2019 season exploring what it means to be human when technology is changing everything.