Transcript Learn@Lunch with Scientia Professor Toby Walsh
Our Artificially Intelligent Future | 9 May 2018

Welcoming remarks from Professor Mark Hoffman, Dean, UNSW Engineering. 

Welcome to this Learn@Lunch with Professor Toby Walsh. I'm Mark Hoffman, the Dean of Engineering at UNSW. It's a pleasure to have you with us today to reflect on some of the social, economic and moral issues associated with artificial intelligence.
This year marks 50 years since the science fiction writer Arthur C. Clarke published his classic 2001: A Space Odyssey. It was a book that foretold the idea of artificial intelligence. Through the book, and later the movie, we encountered what seemed an impossible fantasy.

In 2001: A Space Odyssey, HAL 9000, the computer aboard the spacecraft, was given the unbelievable powers of speech recognition, facial recognition, language processing, lip-reading and automated reasoning. HAL, as he was called, could speak with a reassuring voice, just like Google Maps, and was so smart he could even play chess.

Arthur C. Clarke's sole mistake was not to patent his ideas. In fact, he left the interpretation of his ideas up to the audience, and I quote from Arthur C. Clarke: "You're free to speculate as you wish about the film but I don't want to spell out a verbal road map for 2001."
Now I have an admission to make about this quote. Academics are supposed to always go to the original source, but this time I wasn't able to, because it's a quote from an interview in a 1968 edition of Playboy Magazine, which I could not find in the UNSW library. So I'm hoping there's someone out there in the audience who can get me out of this professional hole, but no need to put your hand up.

We can look at many professions today, banking being the most topical, and we can see how it is that our leaders in those professions do not understand the social and moral responsibilities of their work, just like HAL. HAL would be joined in popular culture by John Connor and Skynet, which foretold a world where humanity was linked, and eventually overthrown, by a common technological platform.

At the time, the idea of a global platform for AI was as probable as the lead actor in Terminator becoming the governor of America's most populous state. And so artificial intelligence is an area that touches both our hopes and our fears. It makes us question what part of decision making must always be human and what part can be shared with machines.
Recently Paul Daugherty and James Wilson, in their book Human + Machine, reflected on work in the era of AI. They argue that the AI jobs of the future will be in what they call training, explaining and sustaining, which is all about bridging the gaps between human and artificial intelligence.

Artificial intelligence does not stand alone; it can have social and moral consequences. This is a great topic for us to explore today because engineering, where I come from, is not just a discipline about theoretical problem-solving. It is, and I might add always has been, a discipline about social and moral implications. It solves the challenges of the world, but always to create a better world.
At UNSW today we have over 16,000 engineering students studying. We educate 17% of Australia's engineers. We teach within a profession that puts society at the centre, not one that puts our profession at the centre of society.

As Australia ramps up its need for engineers, and Australia already imports half of its engineering needs, a challenge is to provide engineers that are grounded not just in the science but particularly in shaping how that science is applied. As engineers, we seek to solve the challenges of our age and that has always had a moral dimension.

Now, engineering is a collaborative profession. No one achieves anything of substance alone in our profession, and our challenge as a profession, and as an engineering school, is to develop leaders who are thoughtful and willing to engage with, and maybe challenge, the collaborative work being undertaken by a team.

At times engaging with those discussions could mean having the willingness to be rejected and that takes courage, but such courage is always found in leaders that have a strong intellectual grounding in their work.
This brings us to today's speaker. It is the moral and social challenges of AI that Toby Walsh has been engaged in. Toby Walsh is a Scientia Professor of Artificial Intelligence at UNSW. He leads the Algorithmic Decision Theory group at Data61, Australia's Centre of Excellence for ICT research.

He's been elected a Fellow of the Australian Academy of Science, and has won a prestigious Humboldt Research Award, as well as the New South Wales Premier's Prize for Excellence in Engineering and IT. His work has appeared in New Scientist, American Scientist, Le Scienze, Cosmos, and Princeton University Press's The Best Writing on Mathematics. He has spoken at the UN in both New York and Geneva.

Toby is not only a brilliant researcher and thinker, he is an outstanding communicator as well. So much so that The Australian newspaper gave him one of the highest, most elusive titles for an academic: they called him "a rock star." And we need our rock stars, because their work must not leave the community or society behind.

As an engineering school and a university we want, in fact we're expected, to engage with the debates that are shaping our profession and future, and in Toby Walsh we have one of our university's greatest ambassadors.

Ladies and gentlemen, please welcome Toby Walsh.

5.55 Learn@Lunch presentation by Scientia Professor Toby Walsh, UNSW Engineering
Thanks Mark. Thank you Mark for that very kind introduction. The only person who finds it funnier than I do that I get called a rock star is my daughter, who's down in the audience there today, because she knows that daddy isn't a rock star. I don't even own a leather jacket.

It was interesting that Mark introduced me by talking about Arthur C. Clarke, because when I was my daughter's age that's who I was reading, and that's why I'm standing in front of you today: that's when I started to dream about the future that Arthur C. Clarke painted in his novels, and that Stanley Kubrick showed us in that wonderful space opera of a movie, 2001, a future that was full of robots and intelligent machines.

That future seems to be arriving as we speak. There's HAL, our friend HAL. For many people, though, I think it started to become mainstream, to become part of the everyday discourse, two years ago in March 2016, when for the first time the very best humans were beaten at the ancient Chinese game of Go.

This was undoubtedly a landmark moment. Don't just take my word that it was a landmark moment: this is one of the oldest, if not the oldest, strategy games on the planet. We've been playing it for several thousand years, much longer than we've been playing chess.

It was a landmark moment when computers got better than humans at playing this game, and they are now much, much better than humans. Twenty years before, in 1997, Garry Kasparov was the World Chess Champion, and probably the best chess player ever to have lived.

He became World Chess Champion at the youngest age anyone ever has, at the age of 22. When he retired from competitive professional chess he was still the world's highest-rated player.

He had the misfortune, though, to be alive and to be world chess champion in 1997, when computers got good enough to beat him. This was IBM's Deep Blue. In a consoling article at the time, the New York Times said it was okay: artificial intelligence, the idea of building machines as intelligent as, and maybe even more intelligent than, us still had a long way to go. We wouldn't be succeeding until computers could play the much more difficult game of Go.

In fact, some Go masters said we would never succeed. Some commentators in 2015 were saying it was still at least another decade away. So it really was a landmark moment when this game, which is much more challenging in some academic sense than chess, was played better by computers than by humans.

There are more possible games of Go than there are atoms in the universe. That's unlike chess, where the solution was rather brute force in some respects: the programme, the computer, was comparing all the possible moves you could make, and all the counter-moves that could be made in reply.
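That comparison can be sanity-checked with the commonly quoted back-of-the-envelope figures, which are approximations of my own and not from the talk: roughly 250 legal moves per turn over about 150 turns for Go, about 35 moves over 80 plies for chess, and around 10^80 atoms in the observable universe.

```python
# Rough back-of-the-envelope game-tree sizes, in powers of ten.
from math import log10

go_tree = 150 * log10(250)   # ~250 moves/turn over ~150 turns: ~10^360
chess_tree = 80 * log10(35)  # ~35 moves/turn over ~80 plies:  ~10^124
atoms = 80                   # ~10^80 atoms in the observable universe

print(f"Go    ~ 10^{go_tree:.0f}")
print(f"chess ~ 10^{chess_tree:.0f}")
print(f"atoms ~ 10^{atoms}")
```

Even these rough numbers show why exhaustive search, which was feasible enough for chess, is hopeless for Go.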

Go is a much more intuitive game, much more difficult to play. It takes a lifetime of playing to reach the level those players were playing at. Now the computer is playing at a level that the Chinese, who are the experts at this game, call a "Go god."
So why now? I've been thinking about, dreaming about this since I was a young boy; why, 30 or 40 years later, is it now making the newspapers, to the point where literally every day you open the newspaper and there's some article about some other skill that the machines are starting to do well?

It is because of this rather overused expression: we're living in exponential times. Now, there's a lot of mumbo-jumbo said about exponentials and about how exponentials are going to solve everything. They're not.

There are plenty of problems that exponential improvements alone won't let us tackle, but there have been four exponentials that have contributed to this success, which I just want to briefly tell you about.
The first, which you all know because it actually has a name, Moore's law, is the fact that every two years or so there's been a doubling in computer power. Technically it's a doubling in the number of transistors we can put on a chip, but that, roughly speaking, translates into a doubling of computer power.

This is a graph; you can see we're engineers and scientists here, we're already onto my second graph in five minutes. It shows Moore's law, which many of you will have seen, although the form you'll have seen it in would be different, because typically this graph is plotted with a compressed logarithmic scale: 1, 10, 100, 1,000, 10,000, 100,000.

I haven't done that. I've plotted it on a straightforward linear scale to really show you what exponential means, what doubling means: we go from 500,000 to a million transistors, then to two million.

We're going up on a linear scale, and you can see how the graph takes off at the end. The smartphone in your pocket now has more computer power than the computers that took us to the Moon and back. So that's the first exponential: we've got a lot more computer power, which means some of the things I dreamt about 20 years ago we can now just do.
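The doubling itself is easy to sketch; here is a minimal illustration, with a hypothetical starting figure, since the point is the two-year doubling rather than the absolute numbers.

```python
# Illustrative only: growth from a doubling every two years,
# starting from a hypothetical 1,000,000 transistors in year 0.
def transistors(years, start=1_000_000, doubling_period=2):
    return start * 2 ** (years / doubling_period)

print(transistors(2))   # one doubling
print(transistors(20))  # ten doublings: 1,024 times the start
```

Ten doublings in twenty years already gives a thousand-fold increase, which is why the linear-scale graph looks flat for so long and then takes off.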

The second one, it's another exponential, interestingly enough, again it's a doubling every two years. In fact, actually all the exponentials I'm going to show you today are doubling every two years. There's no reason why it should be every two years, it's just one of these interesting quirks that each of the doubling rates is every two years.

The second exponential, again doubling every two years, is in the amount of data we have. A lot of what we're doing these days is machine learning, things like the deep learning you read a lot about, and that needs a lot of data to learn from. So not only do we have the computer power, we also have the data to run the algorithms on. Those are the first two exponentials.
The third exponential is in algorithmic performance. This is a standard AI benchmark: image recognition. You're shown a picture and you have to say what's in it: "There's a cat. There's a dog. There's a lion. There's a tiger. There's a bicycle" or "There's a car."

We have been working on AI for the last 50-odd years, and through our hard work we have built better and better algorithms, and we're seeing some returns on that. Here's the error rate, the proportion of images misclassified. Back in 2010 it was 1 in 6. If your autonomous car misrecognises the bicyclist one time in six, that's going to be rather fatal. So that was no good, but now the error rate is better than 1 in 30; performance has doubled, the error rate halving, every two years.

There's a red line across the middle of that graph. The important thing about that red line is that it marks human performance on this particular benchmark set. So we can now recognise images at a superhuman level, which is obviously an important component if you're going to build an autonomous car that can see and understand what's in the picture. So those are the first three exponentials, all technical advances that we're making in the field.
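A doubling in performance every two years is the same thing as the error rate halving every two years; here is a small sketch of what that curve looks like, with illustrative numbers only, anchored to the 1-in-6 figure quoted for 2010.

```python
# Illustrative only: an image-recognition error rate that halves
# every two years, starting from 1 in 6 (about 16.7%) in 2010.
def error_rate(year, base_year=2010, base_rate=1 / 6, halving_period=2):
    return base_rate * 0.5 ** ((year - base_year) / halving_period)

for year in (2010, 2012, 2014, 2016):
    print(year, f"{error_rate(year):.3f}")
```

On this idealised curve the rate drops past 1 in 30 (about 0.033) around the middle of the decade, consistent with the trend described in the talk.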

The fourth exponential is nothing technical at all: it's money. The amount of money going into the field has, interestingly enough, been doubling every two years for the last five or six years. You put those four things together: more computer power, more data, better algorithms, more money, more people, and you have a recipe for making some significant progress, and we are making significant progress.

So what can we do, and what can't we do? Let's go back to HAL and 2001: "Hello, Dave." "Hello, Toby." Well, to a certain extent we already have that in our homes: "Hello, Google." You can already ask quite interesting questions of your smart speaker, your home assistant, your smartphone. We can already do that.

Perhaps we still can't open the pod bay doors, but we can turn the lights off by saying, "Turn the lights off, Alexa." So that's one movie where, in some respects, we're already starting to succeed. How about another movie? Total Recall. This is Johnny Cab, the driverless taxi in Total Recall.

Well, go out today and you can buy yourself a Tesla, which has an Autopilot. At least on the highway you have to keep your hands on the wheel, but nevertheless it will drive most of the time completely autonomously. And it's not just Tesla: most of the other major car manufacturers are claiming they'll be selling level 5, fully autonomous cars by 2025. That's only seven years away.

So within the next decade or two it seems likely most of us will be sitting in autonomous cars, and it's going to be a great benefit to society. One thousand people, five times the contents of this room, will die in road traffic accidents in Australia in the next year, caused almost entirely by the driver of the car.

Those road deaths will go away; we'll have almost zero road deaths once we have autonomous cars that are laser-focused on driving. They're not texting while driving. They're not driving drunk, or tired, or distracted, all the things we do wrong as drivers. And with far better sensors, they'll see the world at wavelengths we don't have senses for.

And so it's going to be a much brighter future, and it's going to change the nature of our cities, the landscape of our cities, in quite dramatic ways; we may come to that in the questions. Just as the car defined the United States, the autonomous car is going to redefine many aspects of our lives.

Okay. The next movie, The Hitchhiker's Guide to the Galaxy. Well it started out as a radio series of course, and a book, and a TV series, but then it did eventually become a movie. This is the famous Babel fish, this strange fish that you put in your ear that would translate between any language and your brainwaves.

Again, to a certain extent we already have that. You can call up Skype and do simultaneous translation between many pairs of languages. You can buy yourself earphones that will do that for you. So we almost have it. It's of course not good enough to translate legal documents yet, but it's good enough to have a conversation.

So many of those things are happening today. What about the other one, from Terminator: Skynet? Well, one thing I want you to do is sleep easy tonight: the machines are not taking over. The robots aren't suddenly going to decide that they want to rule the planet, that they could do a much better job than we can, which is true.

Machines have no desires of their own; they have no consciousness, no sentience. They do what we tell them to do, and they still only do very narrow, focused tasks. They play Go at a superhuman level. They read x-rays at a superhuman level. They do one task, and maybe do that one task very well, but they don't have the full breadth of ability, the adaptability, the flexibility, the creativity, the emotional intelligence, the social intelligence, all those things that still make us uniquely human.

There are still some things we do that they really struggle with. There's this interesting observation called Moravec's paradox, after the roboticist Hans Moravec: the easy things for us are hard for robots, and the hard things for us are easy for robots.
Take folding a towel. There is a robot that will fold a towel, developed at UC Berkeley. When they first had it folding towels, it took 20 minutes to fold one towel, something we do effortlessly in seconds.

The things we find hard, like playing Go, were of course some of the easiest things to get computers to do first. And there are many other gaps. Computers don't have any common sense. They don't have any really deep understanding of language. There's still a lot that we do that they can't. If you ask most of my colleagues, they'll say, "It's 50 or a 100 years before we can get machines to do those things."

Of course, in Australia we are world leading; in fact, not just Australia, UNSW is world leading in this field. With all due respect to the Socceroos, we are five-times world champions at robot soccer. Here you can see some of our team playing.

The ambition is, by 2050, to be playing at the level of the human world champions, to be able to beat the Germans at their own game. If you're interested in that, next year we're going to have the RoboCup world championship here in Sydney, so you'll be able to see a lot of robot soccer, and a lot of robots being used for other things as well. So keep your eyes peeled for that; it's in August next year, I think.
So hopefully I've convinced you that there are lots of interesting things happening now, and about to happen very soon, in artificial intelligence, and that some of the things Hollywood would have you worry about are things you shouldn't be worrying about. But that doesn't mean there aren't some things you should be worried about.

There are three things I think are really pressing today that we should be thinking about. I'm very happy to have conversations like this, because I think it's really important that people like myself help inform the conversation, but these are questions for the whole of society to be discussing and making decisions about.

The future's not fixed. The future's not something that we have to adapt to. The future is the product of the decisions we as a society make as to how we let the technology into our lives to make it better for all of us.

And so there are three areas where I think we really should be having this conversation, and in many cases we're already starting to have some of it. The first is work. The second is the impact it's having on our society in terms of the fairness and transparency of algorithms. The third, which was mentioned in the introduction and which I've spent a lot of time advocating on, is the impact it's going to have on war.

So let me just quickly talk about those three areas. There are a lot of misconceptions about the impact it's going to have on work, and some frightening numbers get put out. There was a report out of the University of Oxford that said 47% of jobs are at risk. The Chief Economist of the Bank of England said half of all jobs, 15 million jobs, are at risk in the United Kingdom in the next two decades.

There's a lot of scaremongering. We don't really know what the risks are; technology will create lots of new jobs too, and we don't know what the balance will be. I looked very carefully at some of these reports. In the University of Oxford report, one of the jobs predicted with 98% probability to be automated in the next two decades was bicycle repair person.
I can assure you there is a zero percent chance this job is going to be automated anytime soon. None of my colleagues are working on building robots to repair bicycles. None of them. A bicycle is a really difficult, fiddly thing, as anyone who's ever worked on one knows, and it would require a really expensive robot to repair one. And I'm afraid bicycle repair person is not a very well-paid job. We're not going to build a very, very expensive robot to replace a very cheap person.

I was saying this to a friend of mine who owns a bike shop and she said, "Well actually, you know the funny thing is we lose money repairing bicycles" I said, "Oh so why do you repair bicycles?" She says, "Well it's to get people in the shop to sell them kits. To talk about the latest rides. It's all about the social interaction."

Again, that's something we're not going to do with a robot, that's something we want people to do. But there are some jobs that will probably be replaced in the next decade or two.

This is Uber trialling autonomous taxis now. It's clear Uber is only going to scale like all the other internet businesses if they can get the most expensive thing out of the taxi, which is the driver. That's going to be great for the rest of us.

The price of Ubers is going to plummet. Taxis are going to be as cheap as buses. But that's not good news if you're a taxi driver. The irony here is that one of the newest jobs on the planet, being an Uber driver, is probably one of the most short-lived jobs on the planet.
By contrast, there are plenty of other jobs that I'm sure are going to be very safe. Take one of the oldest jobs on the planet, being a carpenter. We're going to value things that have been touched by the human hand for the foreseeable future, so being a carpenter is probably one of the longest-lived jobs, one of the oldest jobs that will continue to exist on the planet.

It's certainly happening today; we can already see the beginnings of this. NAB have announced that they're going to lay off 6,000 workers in the next year or so, and back in February they laid off the first 1,000 people, due to automation and the digitalization of the banking sector.

Rio Tinto in the same week announced that they'd automated the trucks in another one of their mines, where 200-odd people were employed as truck drivers. Rio Tinto, I will say, did exactly the right thing: they said they were going to reskill those truck drivers so that they could keep working.

That's the sort of conversation we should be having. We don't know what the net balance of jobs created and jobs destroyed will be, but we are pretty sure that the new jobs will require different skills to the old ones. And so that's the conversation we should be having.

I note that in the budget yesterday the government announced just shy of 200 million dollars to help older workers reskill. That's exactly the sort of conversation we need to be having: what are the right skills, and how do we support people to acquire them? So that was the first conversation I think we should be having, about the impact on work.

The second is the impact that algorithms, even quite stupid algorithms, are already having in terms of fairness and transparency. There are plenty of examples coming to light of some of the risks. We see this in the ongoing discussion around Cambridge Analytica. We see it in many other discussions about how our smartphones can recognise Caucasian faces but can't recognise black faces.

If we're not careful, because algorithms have no common sense, no ideas of their own, they will encapsulate values whether we like it or not. Algorithms are not unbiased. They are the products of how we design them and, increasingly, of the data we train them on, and there are lots of challenges there.

The data we train them on is frequently historical data, and there are undoubtedly many historical biases in our world, racial, sexual, age and other biases, that exist in that training data. If we're not very careful, we will bake into these algorithms the very biases we've spent the last 50 years trying to eliminate.
It's worse than having humans make the decisions, because at the moment the state of the art is that we can build algorithms that make good decisions, but they're mostly black boxes; we have very little way of asking those algorithms to explain their decisions. And so, unlike humans, they'll be giving you decisions without being able to explain the basis for them.
This again is a conversation that people need to wake up to, and we are starting to wake up, about the fairness, the transparency and the biases we may be baking into the algorithms that increasingly make decisions that impact our lives: who gets a loan, who gets insurance, who gets welfare, who gets locked up.

Increasingly we're handing these decisions over to algorithms, and we're discovering that, if we're not careful, those algorithms can reflect biases that already exist in our society. So that's the second area.

The third area, one where I've been a very passionate advocate over the last couple of years, is the impact it's having on warfare. People often talk about killer robots; it's a nice, evocative term that catches the media's attention, but it gives you the wrong picture.
The picture is not Terminator, not Arnold Schwarzenegger in some Hollywood movie; the picture is of technologies that are at best a few years away. This is an example of one of those technologies: BAE Systems' Taranis drone. It's a fully autonomous drone. Unlike the drones flying above Pakistan and Iraq today, there's no human in the loop. There's no soldier in a container making the final decision to set off the Hellfire missile; it's a computer making that decision. That crosses a moral line.
It will transform warfare in a way that has rightly been called the third revolution in warfare. It will make warfare a much more terrible, terrifying thing. It will lower the barriers to war. We don't know how to build systems today that can make the right moral judgements, that can follow international humanitarian law, that cannot be hacked by terrorists and rogue states to behave in vile ways. These would be the perfect weapons for terrorists: they would follow any order, however evil.

I'm pleased to say that the concerns that I and many of my colleagues, thousands of my colleagues, have expressed about weaponising this sort of technology have been listened to. The United Nations is discussing these issues. I was at the United Nations just a month ago talking to the diplomats, again warning them of the risks. There is growing momentum, small but nevertheless growing, behind this idea: 26 nations around the world have now called for a pre-emptive ban.
There are some technologies, like chemical weapons, biological weapons, cluster munitions and blinding lasers, that we as a society have decided are morally unacceptable to use, and have banned. My hope is that we will decide this, again, is a technology that's morally unacceptable to use, and that we will reserve the technology for all the good things.

The same algorithms will go into our autonomous cars and will save a million road deaths around the planet every year. The same technologies will be used to make us healthier, wealthier and happier. We get to choose how technologies get used. It doesn't have to be used for killing.

Oh, I said there were three things you should think about; I forgot, there's a fourth: China. In the last few months I've always ended my talks by telling people about the challenge posed by China.

China has made it very clear that artificial intelligence is a significant component of its plans for economic, military and other dominance of the planet moving forwards, and it's investing huge sums of money to achieve that. This is one headline from the Financial Times; it didn't even make the front page: "Alibaba investing 15 billion dollars over the next five years", most of it in AI, the Internet of Things and quantum computing.

I applaud the fact that in the federal budget yesterday the government put forward another 30 million dollars for AI. That's a beginning, let me be kind, that's a beginning, but when Alibaba alone is investing three billion dollars a year, 30 million dollars is only a down payment to be participating in this game.

So I hope to see government putting more money up, investing more in innovation, perhaps buying votes with fewer tax breaks, and investing more in the thing that will make Australia a great country going forwards, as it has been in the past: embracing innovation and investing in science and engineering.

I'm going to finish there and it will be far more interesting to answer your questions, but I just would say to help inform the conversation I've also written a whole book for the general public, available in all leading bookstores. So please do look that up. Thank you very much.

32.12 Q&A
Toby's very kindly left us some time for questions and is very willing to take them. We've got some microphones moving around, so do we have some questions for Toby? We've got a lady, wonderful. We'll start as we mean to go on, with the women asking the questions.

Speaker 3: Thank you for the talk. I've got two questions. Can you explain why Uber drivers will have such a short-lived job? Is it because they're not good drivers and they're getting knocked off by accidents? That's number one. And two, and this just shows my ignorance, but you said with the game of Go that the machines are more intelligent than us; don't we put in something initially from us so that they can work? So therefore-

Toby Walsh: A fantastic question, yes. So just quickly on Uber. Half the cost of an Uber is the cost of the driver, and a driver can only drive eight hours a day. So if we can have a computer do the driving it's going to be far cheaper, and also far safer, than having a human. That's great for the 98% of us who are Uber passengers, but it's not great news if you're an Uber driver, because that job is probably going to disappear. Then Uber can scale much more quickly; at the moment they're limited by the number of drivers.
There's actually been an exponential increase in the number of Uber drivers, a doubling every six months, but obviously there aren't that many people willing to drive Ubers for that much longer. So it will mean that it's far cheaper for the rest of us to take Ubers.

Now, to your question, which is a fantastic one: do we have to put something into the programme, some of our expertise, to get AlphaGo to play at a superhuman level? Well, in some logical sense, if we knew how to play Go better than a human we could just do that ourselves and play Go better than a human.

So the way the programmes have got so good is by learning. In fact, that's how we got to be intelligent. When we were born none of us could speak, none of us could read, none of us could write. We learnt those things; most of our intelligence is something we've learnt. Equally, we make programmes today that learn, and they learn in a very nice way: they play themselves.

The programme plays itself at the game of Go, and for whichever side wins, you say, "All those moves, those were good things; let's try and do those sorts of moves again." For all the moves on the losing side you say, "Those were bad moves; let's not repeat those moves in future games." And so, by playing itself millions and millions of times, it learnt to play better than any human.
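The self-play idea described here can be sketched in a few lines of code. To be clear, this is only a toy illustration, not DeepMind's actual method: AlphaGo combines deep neural networks with Monte Carlo tree search, whereas below a tiny game of Nim stands in for Go, and a simple score table rewards the winner's moves and penalises the loser's, exactly the "good moves / bad moves" update Toby describes.

```python
import random

def self_play_nim(games=20000, pile=10, seed=0):
    """Learn Nim (take 1 or 2 stones; taking the last stone wins)
    purely by self-play: after each game, moves made by the winner
    are rewarded and moves made by the loser are penalised."""
    rng = random.Random(seed)
    value = {}  # (stones_remaining, stones_taken) -> running score

    def pick(stones):
        moves = [m for m in (1, 2) if m <= stones]
        if rng.random() < 0.2:            # explore occasionally
            return rng.choice(moves)
        return max(moves, key=lambda m: value.get((stones, m), 0.0))

    for _ in range(games):
        stones, player, history = pile, 0, ([], [])
        while stones > 0:
            move = pick(stones)
            history[player].append((stones, move))
            stones -= move
            if stones == 0:
                winner = player
            player = 1 - player
        for key in history[winner]:       # "those were good moves"
            value[key] = value.get(key, 0.0) + 1.0
        for key in history[1 - winner]:   # "those were bad moves"
            value[key] = value.get(key, 0.0) - 1.0
    return value

value = self_play_nim()
# Greedy play from a pile of 4: taking 1 leaves the opponent a
# losing pile of 3, and self-play discovers this without being told.
best_move = max((1, 2), key=lambda m: value.get((4, m), 0.0))
```

With these (purely illustrative) settings the learnt table prefers leaving the opponent a multiple of three, which is the known winning strategy for this variant of Nim; nobody coded that strategy in, it emerged from the win/loss tallies alone.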

Now, if you had started playing Go the moment you were born and only played Go your whole life, you would not have played that number of games. So the computer's actually quite a slow learner, but it's just played so much Go, more than any human could; in fact, more than this whole room could play if you all collectively played Go together your whole lives.
It has now seen much more Go than any human possibly could, and is therefore playing moves that, in fact, humans have never played. The interesting thing is the Go masters say this is opening up the game. They think the same thing happened with chess 20 years ago, when chess programmes got better than humans.

It told us new things about the game of chess, and Go masters are quite excited that it's going to tell us new things about the game of Go. Moves that Go masters never thought were interesting to follow, it's making those moves and winning. And so, it will tell us new things about the game of Go.

The interesting thing about chess is that we now have chess programmes, even on your smartphone, that play as well as any human, and that hasn't actually diminished our interest in the game of chess. There are more professional chess players making a living from chess today than there have ever been.

It's actually made it much easier for people to play chess, for amateurs to play at a much higher level. So it's actually augmented the world of chess, rather than taken away from it.

Mark Hoffman: Now we've got lots of questions. Let's go one, two, three, four.

Chris Skinner: Chris Skinner. Excellent talk. Thank you. I'd like to ask a question on your second area of concern, which is fairness.
It's all very well having soccer or Go, or chess, where the rules are clear-cut and you know when you've won the game or lost it, but in many fields, let's say in international migration, just for an example, winners and losers are not quite so clear-cut. My question is, what should we be teaching children? Is coding the right thing to do, or is it more a philosophical approach that we should be imparting?

Toby Walsh: Another great question. I have actually been working just across the street with the New South Wales Department of Education, helping them think about how we reinvent the curriculum, because kids today are going to be working, in 30 or 40 years' time, in jobs that we can't even think of today.

Just like today people are working in things like social media, which they never learnt at university, because 10 years ago we didn't have social media, full stop. There wasn't social media to be taught because we didn't have it. So what are the things that we should be teaching?

I do think coding is useful, but there aren't going to be that many jobs in coding. We do need people to understand computational thinking. It's like calculus; I think it's probably more important today than calculus is, and we should be pushing it, otherwise it's all going to be magic.

If computers are black boxes whose principles people don't understand, then they're not going to be able to take advantage of the technology; even as users, they're going to be taken advantage of. So I do think we should be teaching the fundamental ideas of computational thinking so that people can be active participants in society.

I was asked, "If you could only have four subjects at school what would they be?" I said, "Well, they should be English, mathematics and philosophy, because we need to teach people about being good citizens and about values, and all the things that are important in society. Then the fourth subject would be comedy, because that would prepare them for everything in life, and give them the confidence to stand up and make something of themselves."

Speaker 5: Hi. I'm interested in you expanding on the discussions you're having at United Nations, et cetera. You seemed to suggest that a fully autonomous drone would be going too far, but do you have concerns with what's already happening with drones or other technologies, and the overlays, and whether we should be scaling back what is already part of international warfare?
Toby Walsh: Thank you. When you go into a forum like the United Nations you pick one target to go after; you don't go against all the things that you're worried about in life. I am also worried that semi-autonomous drones have perhaps lowered the barriers to war, dragging us into conflicts that we're not going to solve by raining terror from above. In fact, arguably we're deeper in these problems now.

My personal view is that we would have to make those painful decisions of putting boots on the ground and thinking about how we solve these problems: what are the ways for these countries to move forward, and how do we support them to do that, rather than just sending drones in?

That's a separate question from the broader one, the one I feel we could make progress on already, which is: let's not introduce this new type of weapon onto the battlefield.

Speaker 6: You mentioned that computers currently don't have a conscience and don't make decisions; they follow what we tell them, essentially. But of course the issue is that we can tell them to do bad things, or even tell them to do things like make a better paperclip, or make more paperclips, and who knows, we might actually end up doing that. My question is more about this initiative, OpenAI, you might have heard of it, and how they talk about AI being quite powerful and easily concentrated in large companies, the Googles, the Facebooks, and in nation states.

Essentially, because it can be so powerful, if it gets concentrated like that it could run away from us in some ways. What can we do to democratise AI and make it available to everyone equally? How do we make sure it's not just the select few with the resources who can advance it and control it?

Toby Walsh: Thank you. Again, another wonderful question. We should be having these sorts of conversations. I think people are waking up to the idea: the five largest companies by market cap in the world today are all technology companies, and that wasn't true 20 years ago.

We're discovering that these tend to be rather natural monopolies. There's only one dominant search engine on the planet, except in China, where they regulated that there would be competition. There's only one dominant social media company on the planet, except in China, where they regulated that there would be some competition. And there's pretty much only one Twitter-like messaging service on the planet, except in China, where they regulated that there should be one.

We are discovering some of the risks, some of the challenges: this whole debate around Cambridge Analytica and the impact it seems to be having on political discourse, the impact it almost certainly seems to have had on the Brexit vote, and the impact it almost certainly seems to have had on the election of Trump.

Yes, these are things that we should be very concerned about: the corrosive impact it's having on political discourse, the impact it's having on our privacy. So far, I think we've probably been a little too relaxed in how we've governed the space.
If we look at history we know that other big businesses had to be regulated to make them competitive: big oil, big telecoms. That was a wonderful example: Ma Bell was broken up, and the telecommunications industry in the United States is now far more competitive, and far more innovative, than it ever was in the days when there was a dominant player.

We are seeing worrying signs that the rapid period of innovation that happened over the last 10 or 20 years in the tech space may actually be slowing down, because it's very hard to compete against the companies that have all the data, the companies that have all the buying power, the companies that can buy out any competition.

It may very well require greater regulation to make sure that the benefits are shared around and that the market remains competitive. Markets are only competitive when they're properly regulated. We're discovering the worst excesses; as we found in the banking industry, and are rediscovering there now, there needs to be a careful interplay between the freedom to operate and the regulation of the market in which you operate.

So these are all really important conversations that we should have been having, I think, several years ago, and that we're only now having today. We do need government to wake up and take responsibility. We need to take responsibility as citizens and as consumers; as employees of companies and as company directors there's lots of responsibility that needs to be taken, and lots of change that needs to happen.

Think about it: we're going through a revolution, the fourth Industrial Revolution. In the second Industrial Revolution, the revolution where we introduced steam, mechanisation and electricity into our lives, we made some pretty radical changes to society.

We introduced unions. We introduced labour laws. We introduced the welfare state. We introduced universal education, so that all of us could contribute, and all of us could share in the benefits that the technology brought.

I believe we probably need to think about equally significant changes to our society and our institutions to make sure that all of us prosper from the technology. But the technology is the only hand of cards we have to deal with all the challenges that face us. This lady on the ...

Speaker 9: Thank you. Is it at all possible for a machine to learn to make moral decisions, or is that something like the speed of light, that we cannot surpass?

Toby Walsh: That's a lovely question. I think the challenge is we're working out today how we could build moral machines. We don't know today how to build moral machines. We're beginning to understand some of the things we can do, and as increasingly we hand over decisions to machines that impact our lives in significant ways, we are going to have to make machines that make moral decisions.

An autonomous car will have to make a split-second moral decision: when it's barrelling down the road and an accident is about to happen, it will have to decide what to do, and that may require deciding who to drive into and who to avoid.

So yes, we're going to have to build machines that make moral decisions. How do we do that? There are three ways, and you hinted at one of them. One is that we programme them: we literally code in the instructions. The challenge there is that we have to work out what those instructions are.

As a society there are many moral decisions that are not codified, and the problem with computers is that they're incredibly literal devices. They follow to the letter what you tell them to do. So we're going to have to be very precise if we're going to code morality in.
Another way is that they learn to be moral. That's certainly, I suspect, how much of our own morality as humans came about. As children we went through a period of education, we were exposed to various ideas, and our parents, hopefully, helped guide us.

The challenge there is that we can have machines watch what we do, try to copy it and learn those sorts of things, but we all know that our actions aren't what we would like them to be. All of us behave in ways that don't come up to the standards we hold ourselves to. So the challenge is, how could we build machines that have the right morality when lots of human actions are not very moral?

In fact, we should probably hold machines to higher moral standards, because we can: because they are machines they will do what we tell them to do, and they will be willing to sacrifice themselves at whatever cost for us. Because they're machines they don't have any suffering, or at least in some sense have less, so we don't have to worry about the moral quandary of them being destroyed.

Speaker 10: Thanks. Perhaps just to follow on from the last question: if an autonomous car crashes and someone's injured, someone might bring a civil suit against the Uber technology, but not against the car itself. In criminal law I need to do the act and I need to intend the act, and one of my defences might be that I was insane, that I didn't know what I was doing was wrong.

Is artificial intelligence at a point where it can escape the criminal law? Could we ever bring it back inside criminal law by applying the tests that are currently used to bring humans inside criminal law, as opposed to civil compensation?

Toby Walsh: These are the sorts of really important questions we need to answer really quickly, because you can already buy an autonomous car. It's not level 5 yet, but you can buy yourself a Tesla, and we're already seeing a few accidents happen and starting to worry about who is accountable. You can't lock a machine up. These are really interesting questions. As ever, the technology is way ahead of where the law is, and where society is, in answering them.
Volvo, for example, are developing autonomous cars, and they've tried to finesse the problem by saying they'll take responsibility for any accident involving their technology. That's fine for working out who to sue, but it doesn't help with criminal negligence.
Maybe we have to introduce a new type of actor into our legal system. We did that when we had the Industrial Revolution: we invented essentially the modern-day idea of a corporation, along with various legal structures around corporate responsibility that go with it. So maybe there will be a new type of actor that enters our legal frameworks, or maybe there won't. These are all really important questions we don't have the answers to today.

Speaker 11: Thank you. There are plenty of questions to ask, and we'll be here all day, but one question I'd like to ask is about the image of that drone aeroplane.

What happens when we have quantum computing and some terrorist can decrypt the codes that go between the aeroplane and the military headquarters, or between base and an autonomous train or an autonomous vehicle? Thank you.
Toby Walsh: Okay. Interesting question. First of all, the reason the military, from an operational point of view, are very keen to have autonomous drones is that the weak link is the radio link back to base. In fact, drones have been brought down in Afghanistan by hacking that radio link. If a drone is autonomous you don't need a radio link: it's flying itself, making its own decisions, and no communication is needed, which makes it a much more difficult device for your opponents to defend against.

So there's obviously a military advantage to that, but the broader question is a very important one: we are increasingly going to have to worry about people hacking these devices using artificial intelligence. Most hacking today is really quite stupid hacking. It's phishing, it's password breaking, very stupid ways of getting into systems.
Increasingly we're going to have to worry about smart ways in, people using very smart programmes to break into systems. Most major corporations are already being attacked in a cyber sense almost 24/7, and that is going to be the new nature of warfare.

In fact, interestingly enough (I've got two minutes), when I first went to the United Nations, where we were discussing kinetic autonomous weapons, I happened to see what was going on in the next room: they were discussing cyber war. I sat through the first half hour and said to my host, "Everything I'm going to say in this room about kinetic autonomous weapons, they could be saying about cyber weapons."

The world is merging into one; the difference between the physical and the cyber world is disappearing. I said, "You should be worried about all this in cyberspace." They said, "Yes, but it's already difficult enough to get legislation in one domain, so don't make it more difficult by introducing those challenges and problems as well."
But yes, we're going to be very challenged by people increasingly hacking these systems, which will be ever more important to our lives: the freshwater system, the electricity grid, the Triple O system, as we already saw recently. These are integral to the safety of our lives, and there are plenty of people who are going to attack us by taking those systems out.
Mark Hoffman: We've got lots of questions; I think we could turn this into an evening event and just keep going, but unfortunately we're out of time. There's no doubt, particularly with the answer to that last question, that Toby has given a great explanation of how artificial intelligence touches both our dreams and our fears. So would you join me in thanking Toby for a great presentation.