Definitions of AI
Themes: humanity’s Other and the repositories of normativity; the human mirror as a metaphor for making sense of the world; renegotiating values and rules in society; the relationship between science and technology; the role and agency of society and individuals in technological development.
The concept of artificial intelligence originates from the proposal for the Dartmouth Summer Research Project (held in 1956), written in 1955 by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. The summer project laid the foundations for the ideologies of AI as a field for decades to come.
As seen in the proposal, McCarthy and colleagues believed they could make considerable progress on AI by working with a few dedicated researchers over a single summer. Now, 70 years later, significant advancements have been made, although perhaps not as far-reaching as they imagined. The discursive hopes and high expectations, however, have remained the same. As we will see, this discursive projection of machines as replacements for humans is persistent throughout history, dating back several hundred, if not thousands, of years.
A page from the proposal (McCarthy et al. 1955; 2016). Red annotations added.
Today, AI usually signifies a form of machine learning and/or a combination of software, hardware, and data. For instance, in the AI special issue of Royal United Services Institute (RUSI) in 2019, Trevor Taylor defines AI as follows:
“AI refers to the output of three interacting elements:
- 1. computer hardware, [...]
- 2. software, often referred to as algorithms [...]; and
- 3. data [...] or statistics [comprising] collections of ‘facts’ that are seen to be sufficiently similar as to be meaningfully aggregable” (Taylor 2019, 73).
He continues by outlining three types of outputs (functions) of such a system:
- 1. probabilistic forecasts;
- 2. prescriptions or recommendations for action;
- 3. the capacity to act on or implement the prescription (Taylor 2019, 74).
The concept of AI is also associated with autonomy and decision-making. According to Kenneth Payne, “AI is a decision-making technology, rather than a weapon,” one that, in his view, bears no necessary connection to the nature or concept of intelligence (Payne 2021, 3).
Recent literature on AI and the military recommends distinguishing between the concepts of AI and autonomy (Scharre 2018; Taylor 2019; Payne 2021). In his book Army of None: Autonomous Weapons and the Future of War, Scharre argues that computer systems using AI technology are not necessarily autonomous, and vice versa: machine autonomy can exist without any AI. Similarly, Kenneth Payne, Professor of Strategy at King's College London, argues in his book I, Warbot that autonomy ought to be understood as a spectrum.
At the same time, ‘autonomy’ has been heavily criticized as an “intrinsically human term”: it is precisely through the use of this concept that “we attribute [machines] with human-like behaviour that they are not likely to possess in the near future” (Johansen 2018, 95, 90).
In public consciousness, AI is understood mainly through specific objects and functions. The fields of cybernetics, behavioural psychology and economic theory have also contributed to the historical understanding and development of intelligent systems and functions.