Thinking about the Future and the New AI
What are the issues?
Richard Harper Institute for Social Futures Lancaster University
The past few years have shown a society-wide interest in the remarkable developments within machine learning and associated techniques that are enabling what has been called the New AI. This is said to supplement and even substitute human reasoning, with its powers being amply demonstrated in the capacity of AI machines to beat humans at even the most complex rule-based activities, such as the game of Go. In the longer term, the New AI will be at the heart of self-driving cars, humanless factories and service industries ‘populated’ by artificial assistants.
The benefits that are seen in this are, of course, immense. But so are the concerns. If robots can do more work, will that mean unemployment for the humans who used to be required to do that work, for instance? In the long run, what will be the effect on human dignity if work is no longer the central currency of identity? More philosophically, if machines are able to reason more effectively than people, what will be the future of learning and further education? Why should society invest in people if machines are better learners?
Many of these claims are hyperbole; some are simply overexcited. It is worth noting that many of those making the grandest assertions are computer scientists, and while the excitement they feel about the advances of their field is justified, this does not mean their claims about the wider implications of the technology are well judged or accurate. Being able to build a Turing Machine does not necessarily qualify one to pronounce on morality. But by the same token, whilst many other disciplines have explored the implications of the New AI, very few have done so on the basis of careful examination of what the technology can actually do – the Turing-theoretic models that underpin it. Instead, they adopt the excitement exuding from computer science and mix it with their own topics and concerns, creating a melange of claims that are often well removed from algorithms and Turing. Meanwhile, government and policy makers hear this cacophony and, quite rightly, have sought to factor the New AI into decision-making – even though they are all too aware that it is not quite clear what impact the technology will have. Finally, the general public is informed on these various issues by journalists who do not always investigate the claims in question with great care: tales about a robot-controlled future make better copy than tales about more efficient production lines.
The result of all this is that the true impact of the New AI is unclear, the hyperbole surrounding it is making careful policy analysis hard, and the full range of social consequences that follow from what the technology will provide remain, in many respects, unexamined. The future of intelligence as a social phenomenon – one that affects not only how machines function and what those machines can do, but how, in turn, this alters their role in society more generally – is largely uncharted territory.
What is Intelligence?
This is not to say little has been written on the general topic of AI. A great deal has. But it has largely confined itself to a very narrow understanding of AI and its consequences. It presumes, for example, that the meaning of intelligence used in combination with the word artificial points towards ways of calculating – not so much the kinds of calculations that an abacus can do, but the calculating that is entailed when people play tightly ruled games: chess, for instance, and perhaps the most complex game of all, the one mentioned above, Go. These are not simply based on if/then choices, the summings of an abacus, but weigh different outcomes given different choices. Elaborate statistical techniques are used for this, most often Bayesian, named after Thomas Bayes, the eighteenth-century English clergyman who first formulated them.
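For readers curious about the mechanics being gestured at here, the Bayesian weighing of outcomes can be sketched in a few lines. The scenario and all the probability values below are invented purely for illustration; they do not describe any particular AI system.

```python
# A minimal sketch of Bayesian updating, the kind of probabilistic
# weighing of outcomes described above. All numbers are invented.

def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """Posterior belief via Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)."""
    return (likelihood * prior) / evidence

# Suppose a game-playing program believes a candidate move wins with
# prior probability 0.30, then observes a board feature present in 80%
# of winning positions and 40% of all positions. It revises its belief:
prior = 0.30        # P(move wins), before seeing the feature
likelihood = 0.80   # P(feature | move wins)
evidence = 0.40     # P(feature), overall
posterior = bayes_update(prior, likelihood, evidence)
print(round(posterior, 2))  # 0.6
```

The point, in the terms of the essay, is that the machine's "judgement" is exhausted by this arithmetic of revision; nothing more is going on.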
New techniques of engineering have resulted in state-of-the-art computers being able to perform these sorts of endeavours – calculative, probabilistic, choice-making – often better than people can when asked to do the same. On this basis, some commentators have come to the conclusion that all of life will be affected. It won’t only be when some calculation needs to be done; it will be whenever some rule-governed conduct that has game-like features needs to be undertaken.
There are many such activities, of course, even if any suggestion in everyday talk that they might be game-like is most often meant to be mocking rather than descriptive. Of stock trading, and all the activities related to it, for example, we say: ‘It’s all a game’. The vast sums that are made in this domain also suggest that this game is of a particular kind, a betting one, and hence naturally creates great excitement when undertaken, and great profits, vast prizes, when won. Our mockery might be mixed with envy.
Yet this – game-like in essence and game-like in purpose – is not the only way of treating the topic of intelligence, or even, more specifically, AI. For one thing, and as many of the engineers of AI would be the first to point out, while it might be true that their machines are calculating engines, what they do with their calculations is often quite removed from what is ordinarily thought of as calculation. AI systems can often see things in the visual field, for example, and thus can be used to identify objects, even persons. This happens every time someone goes through certain airports. The technology in question, computer vision, uses probabilistic techniques to interrogate data from digital cameras and thereby comes up with labels for what objects are – guesses calculated through various fairly clever statistics that allow aggregations of colour to be seen as edges, shapes, forms. However clever these techniques and however startling the power of the computer to label one shape over another, and hence distinguish one person from another, all it is doing is treating the task of identification as one that can be calculated. The machine is being tasked, via the code, to function like an amazingly sophisticated abacus, where the sums this abacus has to add and subtract, subdivide and combine, are determined by elaborate, game-like rules related to the task of recognition. These presuppose what the machine is to look for, how it might do this, and how it might know when it has seen the things its calculations are designed to recognise – adequate distinctions between John, Fred, Harry, Sandra and Carolina, who are queuing up at the passport gates and being seen by the computer at the same time.
To see, in this view, is not to know that it is Harry or Sandra or whoever; recognition is not familiarity, a cue to say ‘Hello!’; on the contrary, it is to behave like a stockbroker making gambles on the prices of stocks and shares; there is no interest in what is seen or why it is seen. It differs only in the topic in question, not in the mechanics of the process.
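The point that recognition, so conceived, is nothing but scoring can be made concrete with a toy sketch. The ‘feature vectors’ and names below are wholly invented, and real computer-vision systems are vastly more elaborate; but the logical shape is the same: the machine does not know Harry, it merely computes which stored template a new measurement is numerically closest to.

```python
# A toy illustration of recognition-as-calculation. The machine does not
# 'recognise' anyone in the human sense; it computes distances between
# numbers and picks the smallest. All vectors are invented.
import math

templates = {                       # hypothetical stored 'face' features
    "Harry":  [0.9, 0.1, 0.4],
    "Sandra": [0.2, 0.8, 0.5],
}

def identify(measurement):
    """Return the name whose template is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda name: dist(templates[name], measurement))

print(identify([0.85, 0.15, 0.35]))  # Harry – chosen by arithmetic alone
```

There is, note, no ‘Hello!’ anywhere in this process; only a comparison of sums.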
Others would come to this question of what intelligence is from another starting place but would end up on similar ground. Some cognitive psychologists (as well as some philosophers) believe that the mechanisms inside the body are to be thought of as behaving in this sort of way: calculatively, probabilistically, with rules as in a game. When a cell confronts another, in this view, its reaction is determined by probability – the cell would seem to calculate how to react. This vision is used to explain how ‘communication’ between and across cells occurs, and ultimately within any system of cells. In this vision, each cell makes a calculation based on some kind of rules; these are variously constrained (by the rules), with only certain outcomes possible, such that when cells are arranged in particular ways, as in a part of the body, one can be confident of how they will react when confronted with some new external factor, some change of state they need to deal with, such that there is stability through time – the outcomes of these probability processes are, if you like, predictable. From this, these commentators come to assert that the human ‘mind’, consciousness in particular, emerges; it is the outcome of a vast, intricate system of probabilistic calculations – and while the mind thinks of itself as singular – that is to say, you and I think of ourselves as such, that our minds are ours and ours alone – millions of little acts, little calculations, produce this sense of self.
Of course, in the body these calculations are undertaken by enzymes and chemical processes, whereas in an AI machine they are done by logical gates carved in silicon by light; but those who hold this view think of both as more or less the same – the machine and the body. The material of the ‘machine’ in question is irrelevant; ‘doing’ intelligent activities in this way is common to one and all. In essence, this is the argument that gets called the singularity. Intelligence, consciousness, choice-making: all these have common roots; if there is a measure of intelligence it relates to this; the latest computers function this way; the age of AI has arrived. We are no different from these machines, they no different from us.
Similarity, difference, science, the humanities
There is much dispute about this. But the dispute is not about whether cells react probabilistically, in a rule-governed way. It is whether saying that is the same as saying a person calculates. Most would agree that to do so is confused. What is meant in each case is not the same. To explain that cells behave this way is to account for the outcomes of their behaviour; it is not to say that they do, in and of themselves, calculate (if one can summarise this view with one word); it is to say that their functioning can be thought of as being like that. Whether they calculate or not is largely irrelevant; indeed, it can be unhelpful, particularly if it leads to inappropriate similes. One might say it is a way of describing cells that accounts for their behaviour. In contrast, when a person calculates they are very much aware that they do. It might even be that they have to neglect other tasks, for example. In short, they choose to; one cannot say the same of cells or the systems they are part of.
This might seem merely a question of words, of conceptual distinctions that seem minor. But there are important issues here to do with language and the relationship between scientific description and the phenomena being described. Such descriptions are intended to galvanise research agendas, to make scientists ask such things as ‘Is this the best description?’ Whether it is right or not, research will tell. In the desire to motivate, such descriptions come at a price, what one might say is the price that science pays when it decides how to proceed. It focuses on particular concerns and so puts aside other ones. Science does not answer ‘everything’ (as if that were a possible thing to do!) but breaks up questions into parts that can be dealt with by the current tools and techniques at hand. Of course, the goal is to produce ever bigger, more encompassing pictures, but this is an outcome of piecemeal research. Science is done in steps.
In relation to the question of intelligence and the New AI, this is a concern. For while some scientists have been properly focusing on particular issues – such as the similarities between how cells process and how one might devise an algorithmic machine to work, and hence how consciousness works – they have put aside important differences between the phenomena in question. Such differences relate not just to the different ways the word calculation can be used: one to characterise a feature of a system, the other a conscious act. The habit can also inhibit us from seeing differences that might be important features of the world. It is often the case that things are similar in surprising ways, but as often – if not more so – the surprise comes from the other direction: when the world is more diverse and more different than we expect. The scientific vision can be limited in these respects, especially if it is not handled carefully.
This is at the heart of the debates about the New AI. Not questions about whether machines can or cannot equal the human for certain tasks, but whether the way we explore what machines can do is relevant for the questions we might ask about what humans might do. Clearly some are, but many are not.
Thus far the discussion would seem to be about a kind of science, a form of engineering if you prefer, and how the vision associated with that might be overextending itself: explanations of and designs for machines come to be used to account for how the human mind works, for example. We suggested above that this can sometimes be the price paid in science, insofar as science seeks to explain some particular phenomenon and, out of desire, seeks to explain other phenomena in the same way. One has to be careful, we have been saying; one must guard against this desire. It is a habit of mind one can easily slip into – this desiring. After all, noticing how different something is from what one is familiar with is startling and demands our attention; seeing the same is easy, it allows one to relax – to look away. Seeing something new forces us to think, to examine what we see.
It seems to us that the excitement about new engineering possibilities encapsulated under the term New AI is allowing this narrowing of view, and is allowing the desire for similarity to overcome more pluralistic views. It focuses on a particular notion of intelligence but extends itself to making claims about many other things too, seeking similarities where a more balanced focus would see differences as well as similarities. What intelligence is, is too complicated a question for singular answers; what it is to be a human is far too subtle a topic to be imagined by building a computer that acts like one.
Culture and Science
It is important to bear in mind that, while one might observe this about the New AI and its science, one should not limit these remarks to this area alone. Most crucially, these debates have flooded the public consciousness – claims about the New AI are in the press, on the news, filling YouTube clips and infecting government policy. While one might want to clarify what these debates are about, the claims that underscore them and so forth, once there, they can gain a life of their own.
One reason for this echoes what we have said about desire in science – it has to do with similarity, with searching for the same. Consider, our cultural practices seem designed with the apparently reasonable goal of making our experiences common to one and all. We go to watch a movie along with thousands of others so as to share in a perspective on love, on war, on fear. We make our unique experiences subject to common experiences on these topics and as we do so we become as one, an audience.
This is to simplify things, of course, but if it is so, one might come to see that this desire for a shared currency in the cultural domain might feed on, and in turn be fed by, the limited currency of thinking about intelligence and the New AI. The one can reduce the prospects seen by the other and vice versa. So, while cultural activities like the movies, theatre, novels, the news, and of course the content of social media can be enormously entertaining, the stories they make for us – articulating and echoing those from science and its claims about intelligence – are narrow in their compass; and while this makes them tractable to the narrative-seeking audience, the world and its future that is thus produced is less than it might be, or even could be. If at the current time the stories we share about artificial intelligence are of one particular kind – emphasising, as we say, one particular view of intelligence, its calculativeness, its rule-like form, the similarity between how cells and machines act – is this how we have always thought of intelligence and the artificial?
Think of Mary Shelley. When she wrote Frankenstein: or The Modern Prometheus (1818), was she thinking of artificial intelligence and robots? Was she wondering about how to make a man, given that technologies were advancing so far that one might (at that time) have been on the cusp of being able to replicate one? Was Frankenstein a gothic version of the exquisite robot humans one sees in the latest blockbuster from Hollywood?
No. She was writing in reference to other things of concern at that time, one of which was changing attitudes to God. Then, crudely speaking, at the peak of the romantic period, the cultural imagination was focused on how the individual person was special, and how explorations of what they could do were more interesting than anything before. If, before Rousseau, individuals were made great by their performance of social roles – as King, say, as Pope, as priest or scholar – with romanticism the idea emerged that the individual had powers and capacities within. So great were these that the relationship between the world and the individual ought to change, so this view held. Hence The Rights of Man, written during this time, the romantic period (as an aside, democracy had nothing to do with this).
But this begged a question. If the power was now seen in the individual, what was the relationship between one man and another? What was the relationship between a person and the strange sense of specialness that all felt? Thus, when Frankenstein’s ‘creature’ awoke, it wasn’t his ugliness that upset him – not the vision of appalling scars that we have come to associate with the creature because of the Boris Karloff Hollywood movies; not at all. He was appalled when he discovered he had been made by another person. What rights of his were reflected in that manufacture? What kind of man could he be if he was made by another? Every man and woman was equal and unique, not something made as with putty in the hands of another. His murderous pursuit of Frankenstein was revenge for condemning him to being less than a man by being made by a man.
How different this view is from how we see human-made creatures today. Now it is emphatically not the shocking ugliness that we note. We are far more sophisticated than Frankenstein with his lightning-powered butchery; we use the almost magical powers of computers to make people who reason better than us. And we have the manufacturing skill, the materials, to make these new people even more beautiful than us. It is Alicia Vikander in Ex Machina – ethereal, perfect, glorious.
Of course, there is pathos – and, as with Shelley, the pathos is not in us, it is in the creature. In Ex Machina it is in the discovery by the cyborg, played by Vikander, that she might not be ‘real’; this is what she has been programmed to think, that she is ‘real’, but this is false. She is not. If Shelley’s monster was trapped by his romantic notion of human dignity – no man can make me – then the modern creature is trapped by programmes that don’t tell her everything. She is being made a fool of even by her own code. The humiliation – and our sympathy – is driven by her own self-awareness. And if ever there is a motif of our age, its zeitgeist if you like, it is this: a fear that one is not in control of oneself. This fear is related to all sorts of ways that one might lose control – in not having control in work or in our career, for example; in not being able to control our emotions; through not being able to discipline ourselves – to exercise as we might, to eat as we should, to restrain our desires when we know desire is an admission that we are not in control of everything about us – we are not a machine that can be governed, we are unruly.
Looking at the future in different ways
What we might learn from the example of Shelley’s work is that the future as seen from the past is different from the future we imagine now. While these differences might appear, at first glance, subtle, on closer inspection they can point towards profoundly different feelings. Consider what Frankenstein’s monster felt: indignation, righteous anger; this is surely not the same as the doubt and crushing insignificance that the cyborg is meant to feel in Ex Machina.
As we look at our future today, we might, perhaps, not only allow our contemporary fixations to conjure what that future might be. One might travel back and look at the future and its different shapes when viewed from points in the past. Doing so might not only be a piece of historiography; it might allow us to broaden our own visions, allowing us to see that what the future might be is, perhaps, more distinct, varied, strange and familiar than seems implied by the trajectories of our current thinking. Is the future of intelligence only to do with machines that calculate? Are the activities that matter only ones where that kind of calculation occurs? Or are we being enchanted by some state of the art computer science into thinking that this – theirs – is the only way to see a way ahead, to imagine what the future might be?
At the Institute for Social Futures, here at the University of Lancaster, we think the future is a richer and perhaps stranger place than some prognoses imply. One such imagined future relates to the New AI and what is seen from that point of view. Such views confine the future to what is related to the trend, and do not allow other elements of how a future is made to come into play – to jostle with each other, if you like, to make the future an outcome of numerous possibilities. And these possibilities can relate to that trend itself – in this case, to what machines can do, or more particularly computers, and how the things they do might allow us to reassess what we mean by intelligence and how we assess it, of ourselves as well as of the machines.
In our research, we are exploring some of those possibilities, laying them side-by-side so as to help us populate our imaginings. We don’t only explore scientific or technical views, instances of analysis based on, let us say, advances in computing or related business (like the manufacturing of silicon chips). We are also combining some aspects of this with elements of what one might call the cultural imagination. We draw, for example, on the images presented in paintings, literature, and cinema over the past one hundred years or so, that expressly seek to point out (or towards) what the future will be.
We also refer to other images which indirectly help frame those prospects. Modern spy movies are a resource for such inquiries – telling stories about the current world but showing how, in some hidden place, new technologies are being used to identify, track and trap people in ways that foreshadow a different future, one where our relationship with technology, and hence government, crime, travel, is quite different. Many older movies do this too – one need only think of Metropolis. And as we saw with Frankenstein, literature offers images as well, pictures of what the future might be.
Frankenstein also attests to how we will not confine our trawl of these cultural representations to only those that echo our current preoccupations – the use of computing, for example, the age of cyborgs. The future has been seen in many different ways, and it is not always technology that is the key or defining characteristic. Think of George Orwell’s Animal Farm (1945). In that book there is virtually no novel technology; it is the relations between the creatures – standing as proxy for people – that is novel. In the same author’s Nineteen Eighty-Four (1949) there is more technology, the echoing voice of Big Brother being relayed by speakers in every room and public space, for example. But there is no computer technology, not the technology we are fixated with now.
And we should bear in mind, too, that our current fixation is, relatively speaking, new, even if it does echo concerns about a future in which machines take over – concerns that have flickered into the centre of the popular imagination in prior years before fading, only to reassert themselves today. Consider Aldous Huxley’s Brave New World. This was written in the 1930s and was picked up and made popular again in the nineteen sixties. Why? In part because of the attitudes to sex that the book presented – sex was game-like, a physical exercise, something that certainly resonated with the period of free love. The book also alluded to how the mind can be managed and controlled; ‘conditioned’. This too spoke to the period, with its fear of the ‘Military Complex’ controlling the minds of the masses through television, for example. Today we don’t think in those terms. In the 1960s they did. Today, as we say, we have other notions in mind, other technologies. But these should not constrain how we think about the future, the images we bring to mind, how we might marshal them, create different visions of where we might go, what we might become.
Lancaster, November 2017