Welcome, everyone, to the Royal Geographical Society and this Imperial College debate on the ethics of AI. My name is Ian Sample, I'm the science editor of the Guardian, don't hold it against me. It's great to see so many of you here for what is surely one of the most important questions of our time: how do we ensure that artificial intelligence benefits us all? What makes it such a pressing issue? Well, I'll leave our panel of experts to explain all of that, but there's no doubt that AI is a powerful technology, and one that's already shaping our lives, and that alone, I think, makes it worthy of serious scrutiny. We've got a cracking panel of people to walk us through the issues. Starting from the far left, as you're looking at the stage: Maja Pantic, who organised this event and works on affective and behavioural computing at Imperial; Reverend Dr Malcolm Brown, who is on the Archbishop's Council at the Church of England; Joanna Bryson, who works on AI and AI ethics at the University of Bath; Andrew Blake, who is the research director at the Alan Turing Institute; and Reema Patel, who is at the Royal Society of Arts on the new RSA DeepMind project on the role of citizens in developing ethical AI. Can you still hear me on my lapel mic? I think the way the evening is going to work is that I'll be talking to these guys, we'll be discussing these issues for about an hour, and then it'll be over to you for questions, so please do get those ready.

I want to ask you all to start off with just a minute or so on who you are and why you're interested in the ethics of AI.

My interest really stems from when I was a young philosophy student, and it's essentially about engaging people, citizens, in grappling with complex and controversial policy problems. I can't think of anything less well understood: we don't actually know what the future looks like, so asking the public what they think isn't just about commissioning an opinion poll, it really requires engaging with the uncertainty of the question. The other reason is that it really affects choices, the trade-offs that we make as a society, and I'm sure we're going to have that discussion as the panel moves on, but there's a real reason why it's so important to find new ways of engaging citizens in an informed way.

I think I'm probably the historic member of the panel, for two reasons. First of all, I was born in the same year as AI; I won't tell you which year that was. And secondly, 35 years ago I think it is now, I got a PhD in AI, but that was really in the middle of the AI winter, and you didn't go around telling people that you had a PhD in AI; I used to say it was in computer science or something like that. In the last few years I've been able to come out, which is a great pleasure, and I'm, you know, unashamedly a techie. I love thinking about the principles of AI systems and building them, but of course I do recognise that when we're building things that are as powerful as AI systems, we've got to think very carefully and deeply, and from the beginning, about how they're going to be safe and how they're going to behave in the way that we want. So I absolutely welcome, and this is also the view that the Turing Institute takes, the kind of very broad participation of different disciplines to help us think through these very important things.

What I wanted to understand was intelligence: why are some species more intelligent than other species, and why do some people use intelligence more than other people seem to? Those were the questions I wanted to understand, but I also just happened to be a good programmer, so I went into AI as kind of a safety net: I thought, well, I want to go to a good university, and since I'm a good programmer, that'll help me get in. So that's why I do AI. How I got into AI ethics originally was because I just noticed people being weird around robots. I was actually a psychologist, so I noticed that people thought that if you piled up a bunch of motors together like a person and you put it in the MIT AI Lab, you had an ethical obligation to it. It didn't work; they'd say, you can't unplug that, and I'm like, it's not plugged in, and they're like, well, if you did plug it in, and I'm like, well, it doesn't work. They clearly had no idea what they were talking about. So for a long time I thought they didn't understand AI, and I tried to explain AI to people. I now realise the problem is we don't understand ethics, or what it means to be human, so that's another set of problems. Because I had those papers going back longer, I think, I got invited to a lot of tables like this, especially when it was policymakers, and I found out that because of the other work I was doing on human cooperation I actually had a lot to offer. So the reason I spend so much time doing this stuff now is because, as people have said, it's important, and the questions that policymakers are asking right now are really big questions, so I'm putting the time I can find for research into working on those problems. I'm very much in learning mode.

I'm an ethicist; my job is to support the church's leadership, and in fact the whole Church of England, in trying to make sense of things. Religions are about trying to make sense of the human condition and the nature of society, and here we have something happening very rapidly which people don't know very much about.

I dare say most of our church leaders come from a humanities background, which is why we're running a big programme with Durham University on scientific literacy for church leaders. It's quite clear, I think, that most ethicists like myself argue from analogy: we try and look at things we do understand and apply them to things we don't understand, and on this one I find myself really scratching my head. Which is the right analogy? What are the right analogies for understanding something that is really opening up the world in new ways to us? I want to try and help contribute a theological angle to what should, I think, be an interdisciplinary study. I think to understand the ethics of AI you need a bit of history, a bit of politics, some psychology, quite a mix of things, and I'm very much in learning mode.

My expertise is the machine analysis of human emotions, and of course this raises a lot of ethical issues and a lot of questions, so that's one of the things I'm interested in. But also, I live in this world and I use all these things that we all have here, the mobile phones and the iPads, and they all actually come with AI. But what they do is use the data, our data, our private data, and some of the companies use this data because we allow them to do so, and they're selling this data.

I think this is not ethically fine, so that's the second thing. The first thing: this audience is wonderful, because you have approximately fifty percent women and quite a lot of non-white faces; yet if you think about computer science, about who builds AI technology, it's white males: it's ten percent women, and an even lower percentage of non-white people. I don't think we should use technology that is built by such a small minority of the population, so that's another ethical concern.

Can we start properly by looking purely at the technology, not the companies making it, not the researchers making it, not the applications yet, just the technology? Because there may be people here who will benefit from hearing what we are actually talking about with AI. But crucially, is there anything about the technology, in ethical terms, that sets it apart?

What we build is different from other kinds of intelligence, and intelligence is only one small part of what it is to be human. I do think this is what people get confused about: if you just mean human when you say intelligent, then you're on the wrong track. We aren't building artificial humans, but we are increasing what we can act on and increasing what we can sense, and that's what AI is doing. So I guess there are two ways I would say AI is different, really different, from the rest of computer science, though arguably computer science is a subset of AI, just part of the way you do AI. The first is perception: we are able to perceive, using AI, things we couldn't perceive before, and companies can perceive things about us, and governments can perceive things about us, and we can perceive things about ourselves as well as each other. It's not just about uncovering secrets, it's about discovering regularities nobody knew before. So perception is one side of it. The other thing, which is what my PhD was about, the basic difference in software engineering, is that you have a system that has its own priorities. How do you set those priorities, how do you describe them? It's not a system that's just passive; what we call autonomous is when it actually acts without having to be told.

Is there something in how these systems, many of the modern systems at least, are created, how they work, that introduces particular ethical issues as well? I'm thinking about how machines learn from training: what does that introduce that you don't get from normal programs, where you just code exactly what's going to happen?

Sure, well, there's a lot to talk about there. I think actually it's worth a quick deviation to talk about the difference between machine learning and AI, because at one time the Economist, for example, one of my favourite sources of science insight, was saying that machine learning and AI were the same; now they've changed their view and decided they're a little different. For me, machine learning is a set of very specific algorithms that are designed to take in data and learn patterns and rules from that data. Artificial intelligence is a rather broad activity that encompasses machine learning as one of its components, but there are lots of other things going on there too.

My favourite example, and I think perhaps the most sophisticated AI system that has yet been built, is Watson; not so much the commercial offering that IBM talks about now, but the original Watson that won the game show Jeopardy, which was an incredible achievement. There's a rather broad set of skills that you need: you've got to be able to hear questions, you've got to be able to access a lot of general knowledge, you've got to be able to produce coherent speech, and even get the timing right, knowing when to jump in with the answer. So to me that is an incredible achievement, and when you look at how it's built, there's a lovely paper that the IBM people wrote about the overall design of the system, and you see it has many, many moving parts. Each individual moving part might be machine learning, this one might be about speech, this one about accessing knowledge, and there's also lots of duplication to make it robust: components that are supposed to be doing the same thing, and maybe voting on what the answer is. So I think AI is a sort of sophisticated engineering discipline that is pulling together many of these things.

Now, that wasn't your original question; what was it? Back to machine learning, yes.

It does seem to be the case, and I'm aware that there may be a media bias on this, but it does seem to be the case that machine learning is driving a lot of the current interest. The question I was interested in was: can you nail down whether there's anything specific about AI, as opposed to other kinds of coding, that has its own unique or particular ethical issues?

Well, I'd go back to what Joanna was saying about systems that make decisions; you talked about actions. Actions and decisions, and doing so autonomously: these are not machines that are operating under the control of humans, these are machines, if you like, that are taking a lot of initiative, so we really ought to be concerned about what sorts of decisions they take and how those sit in human terms.

One thing that hasn't come out yet: of course, if the data is biased in any way, those biases will be picked up and propagated into all the decisions. For example, say you're giving jobs to people. If the job was always given to people coming from a certain area, because it's a wealthy area, or it's the area where most of the people happen to have a certain degree, then the system will continue predicting that people from this area will be getting the job more regularly. So immediately you do introduce the bias, because somebody else, from another area, could also have a good degree and also be a wonderful candidate, but they're not from the area, so the prediction will go against them. So there are these biases in the data that get picked up, and you need to deal with them.

There is another thing, which is that when you do machine learning, and everything is done automatically, you do not know whether something has been built in to automatically collect certain kinds of data, whether for a cyberattack, or to actually collect certain things from you, or, for example, to make your mobile phone battery die after two years so that you need to buy a new phone. All of these kinds of things could be biases in the data, or in the way AI is employed.

What AI potentially could exhibit in the future, to the extent that it might be able to resemble human competencies, is really interesting. Think about an example a colleague gave me: she said her three-year-old daughter was speaking to Siri, and she said, Siri, do you love me, and Siri answered, you know, I love you. There was a really interesting moment there, where she was reflecting on the potential of this technology in future to pose some really challenging questions about what it is to be human in a world where this technology is developing, and I think that's a fundamental issue.

I agree that you were trying to get at the bias, but I'd like to say that I don't think the bias itself is different, because what we're saying is that human culture creates biased artifacts, and that's always been true; AI is no different.

I think the difference comes from, again, this weird over-identification. There are pretty major people in AI who do the hype and say, oh, you know, programming is over, now we use machine learning, and machine learning is all there is. Machine learning is one way we program AI, and people are using that magic dust as an excuse to go back to things that we had previously outlawed, like the persistence of these stereotypes in HR departments.

The difference is authority, surely. If we treat the AI program that has biases built into it as somehow overruling our human judgement, that is rather different from an HR director who can be challenged. It's maybe old politics, but I'm very conscious of something Tony Benn used to say: he asked these questions of anyone who had power. What power do you have? Who gave you that power? In whose interest do you use that power? To whom are you accountable? And how can we get rid of you? I think those are actually very interesting questions, and AI, maybe I'm reaching for the wrong metaphor here, but it seems to me that what makes this problematic is that it involves manifestations of power that we're not completely used to handling. Is it in the people who create the AI? Is it in the user? Where do responsibility and accountability lie, and how can we change it if it goes wrong? I think those are areas where we're floundering, because the lines of accountability, responsibility and authority, both how the innocent user attributes authority and how authority is built into the program, are all still very unclear.

There's something else that hasn't come out so far, which is, and I know this doesn't apply to all AIs, but there will be some where you will not be able to get a good understanding of how a decision has been made, and I know that can sometimes get the backs up of certain AI researchers, but it's true, isn't it, Andrew?

Yes, and it's something that is a very live issue at the moment, both for people interested in ethics and for the people who are building the systems.

I guess what's really brought it to a head is that in 2012 we had this real breakthrough, which is why we're all talking about AI now, which is deep networks. The networks were celebrated because they were so effective: they were three times as effective in vision recognition, they were three times as effective in speech recognition, but they are black boxes, even more so than previous technologies. Of course, if you have a black box that's deciding what the meaning of a word is, you perhaps don't worry too much about understanding the rule; but if that same black box is deciding whether the bank gives you credit or not, you then want to be able to challenge it, because it becomes much more important. So this has actually inspired researchers; there's a lot of rethinking going on now about how you break open these black boxes and design them from the beginning to be less black. Can you make the box less black, or can you pair up the black box system with another sort of shadow system that is more transparent? It's a tremendously interesting field of research, and the Turing Institute is absolutely all over this.

If you have a product and it's dangerous, do you sell it? When do you let it go? I feel like, because people are fooled, because they think intelligent means person, and people are so ready to say, you know, Siri, do you love me, a lot of companies are trying to get out of what would have been their responsibilities of due diligence before they release software. I don't think deep learning is the end of responsibility: you audit accounting departments without knowing how the synapses of the humans in them are connected, right? Even if deep learning were a complete black box, we could still do tests and characterise it, and have other processes that make a ring around what it's allowed to do. There are all kinds of ways we could handle that, and we've been doing it with more complicated things, which are people, for a long time; in fact there are ways to get at it and probe what it's doing. But the point is that a lot of this comes down to power, and a lot of it comes down to deception by those in power, trying to create, basically, shell companies without even people in them.

We've identified these issues, the issue around black boxes, the issue around bias, but I don't feel I've got a good sense of how big a deal those problems are. Are they side issues?

OK, so I think deep learning is definitely not the only way forward; however, what's going on now is that everybody's doing deep learning. If you go into any of the universities or companies, everybody's talking about deep learning, everybody wants to do deep learning; all my students want to do deep learning, never mind what I tell them. Deep learning does work really well, but we don't know much about why; we don't have the theoretical underpinning and all of these things, and still it's deep learning everywhere. So I do think it's a big problem how legislation and control can actually be done. This is something we need to discuss, and we need to discuss it with the government, and we need to find a way to do what we discussed some time ago: I believe we need something like auditing of the software, because a lot of things can go wrong, especially if we cannot have explicit machine learning programs that are open boxes and not black boxes. This auditing of machine learning is something that is absolutely not there; nobody talks about it, and that's one of the things we need to raise with the government, because in that way you can stop Apple taking all of our data all the time under, you know, claims like, you have to have it that way, otherwise we will get the virus.

I wondered where this accountability is going to come from, because it's amazing how often you can talk to academics who know all about these issues and are working on great ways of making black boxes transparent, and of getting rid of biases; and yet there are things out at the moment that are biased and inscrutable. Humans are biased and inscrutable as well, but this is extra stuff we're adding to affect lives: you can't just say, well, they're no worse than us, because there's just a load more decision-making going on through these systems. The work is academically fine, but when does it actually get better for people in the real world?

but when does it actually get better forpeople in the real world really something very interesting about thatthat people in the real world never have an agreed answer to a question there isno more consensus to a whole host of question so in a way what iscomplicating this situation is the ai is increasingly starting to make decisionsthat otherwise people would made and people would make those decisionsdifferently so so that's adds and yet another layer at the core complexity towhat is already challenging ethical landscape and so one of the things thatwe're doing at the rsa is we're convening citizen juries so essentiallyrandomly selected groups of people to


deliberate on particular ethical issuesso we're looking in this instance at criminal justice system you use in theeducation of ai in relation to criminal justice system to understand better theparameter for ethical applications and we're also looking at the issue the wayin which a is influencing democratic debate in this space to do that and thereason that when it's not just about well what does any citizen think or whatdo people think off the top of the head reason why we're doing this is becausefinding a moral consensus is going to be incredibly difficult and actuallyunderstanding what could create a moral consensus but in a particular socialcultural ethical context it has to


We're prototyping and experimenting; it's an extremely experimental space, and it's really, really crucial. But there's actually that whole other space where there's no agreed moral consensus, and then the question comes back: is this a decision a machine should make, and who should take responsibility?

Most engineering achievements were done with the purpose of improving the human lot: this fantastic bridge is in order to enable something to cross it. It strikes me that much of this conversation, insofar as I've heard it, has been about the achievement for its own sake, and the application seems to be thought of after the achievement. The other thing that worries me slightly, well, more than slightly, is that these fantastic achievements, and they are amazing stretchings of human capability, are hitting the ground at a time when our political and economic cultures are essentially dominated by an intense individualism that's utterly relaxed about wealth inequality, despite being singularly occupied with things like gender equality; we haven't quite squared those yet. And it feels as if the failings of our political culture are beginning to morph into something that returns to and retrieves the concept of the common good, but it hasn't got there yet.

I see the AI industry operating in that old paradigm of late capitalism, you might say, where the common good and moral consensus are actually written out of the equation; ever since Hayek, the possibility of moral consensus has been out of the discipline of economics. It's beginning to come back, but not yet in the context of these developments. So it feels to me as if we're seeing something fantastic happening, but into an old political culture that can't handle it, and the emerging political culture isn't yet talking about it either.

What I wanted to do was to come back and defend myself a little bit from the characterisation I got.

It's not that we've figured out how to handle accountants, more or less; I mean, there still is a lot of white-collar crime, and it doesn't mean we've figured out how to handle AI. I'm just saying it's no harder a problem. And I thought of a really great example of an ethical excess that was immediately recognised as such. There was a company that was absolutely evil and bad and everything, in America, called Enron, and when they went bankrupt, their email database was seen as an asset that could be given away; it just became property of the US government, and so they exposed the email. Probably all of us who do machine learning have used the Enron database at some time in our research. When that happened, you know, OK, it was supposedly your business email, but you all know there's other stuff in there, right? People's personal lives were destroyed, there were affairs revealed, all these things happened, and now nobody does that anymore. We no longer consider email something that, no matter how evil the company was, no matter how bankrupt, is an asset you just give away or sell off, and there are very strict rules about how you can access even government email, where everybody's supposed to know that it's accessible, and things like that.

So we can do that. And also with inequality: we had to solve inequality early in the twentieth century, and it took us way too long, and I hope we're faster this time.

Myself, I want just to say, we are attacking AI quite a lot, but you asked very beautifully when it will be useful for humanity. It is actually useful now. Think about it: how many research papers do we have just in cancer research? Something like 300 to 500 per day. Not a single doctor can go through that number of papers per day, so having AI give you a summarisation of the findings is wonderful, right? It actually increases your cognitive ability, and that's great. Another thing is vision, which is my field. I can currently measure, at 30 or 60 frames per second, any movement of the face. There are certain tremors of the muscles that are indicative of various diseases, such as Parkinson's, or depression, a different kind of movement, or dementia, again different kinds of movements. I can measure those from a single webcam. We, with the human eye, cannot do that: it happens at around 15 frames per second and we simply don't see it, but the camera can see it. It's the same as an X-ray: it tells you, I see something, right? So it's not replacing, really, it is enhancing.

It is the symbiosis between the AI and the humans. We are currently in the place where the technology really can help us and is helping us. The issue, however, is that it should not be monopolised: it should not belong to four companies, it should belong to society, and that goes to the greater good; it should not be individual.

You've brought us on to the next bit, which is about the people making these systems, the companies making them, and so on. We need to trust the big tech firms to behave responsibly, don't we? Because there is nothing else, there is no other mechanism, apart from existing product law. Or is that wrong?

Really, big tech might be less like another company, an unfair company, and more like another country we need to negotiate with, like at the UN, because they have more power than quite a lot of countries. And it is in their interest, because of the nature of their business, that we flourish; they want humanity to flourish. If you look at what's happened with Facebook in the last year, I think they really have realised that they made a mistake, and they're trying to figure out how to do a better business model, and they've realised that they can afford to lose some money while they figure it out. I believe other tech giants are also in similar places, although not all the same ones, but it's complicated; it's about treaties. I've had someone from Google say to me: we know something has to happen, we just don't want it to happen to only one of us in one place; they don't want you to shoot down one and have others come up. They want to figure out a way to change the playing field, but they're willing to sign treaties where everybody plays by the same rules.

One of the aspirations, certainly, of the British government, who've written a lot about this recently in their industrial strategy, is that this is going to be something that enriches productivity and our ability to do things in a much broader way.

Actually, the industrial strategy that came out at the end of November is rather fulsome on it: it has four main areas that it thinks will really transform productivity, and AI is one of them. I think it is important that the technology is broadly accessible, and I think also, while we're thinking about all of the pitfalls and the minefield, if you like, that AI presents, just to keep in mind the huge benefits. Maja was already talking about that in the context of health; I just wanted to mention an interview on the BBC with John Bell, who's a senior physician in Oxford, saying he thinks AI will actually be essential to save the NHS: that we won't be able to afford the NHS unless we can mobilise the efficiencies, in many domains, that AI will bring.

Andrew, you're maybe closer to the industry, the companies, or certainly have been, than other people on the panel. Do you think those companies have earned public trust?

You know, that's a very complicated question. I mean, we're in a position where we do trust them for some essential services. You can switch off your mobile phone and give it up if you want; I don't think many people are making that choice. Of course, a few people do; they switch off their Facebook accounts because they find that it's not what they want.

But I think we're in a position where we are thinking about the trade-offs between, let's say, giving away our data, knowing that we're releasing it and worrying about that, and also seeing huge benefits. You know, I'm not very good at remembering routes to places, and I love having the phone take me around the city and guide me in my car. So we see huge benefits from the technology, and we're going to engage in this struggle; I don't think we're just going to come down on one side and say the technology is too dangerous. It's a bit like going back to your bridges: we could say, well, bridges may fall down, so we shouldn't do bridges; but bridges are so important to us that we're willing to engage with the hazards, and of course we expect the engineers, and people who build AI systems are a species of engineer, to take safety very, very seriously. And in this kind of ethical context, safety has a rather broad reach.

On these companies behaving socially responsibly: the big tech firms have set up the Partnership on AI, yes, to benefit society; but pretty much all of those big companies have also been criticised for aggressive tax avoidance, and it's quite hard for me to square that in my head.

I mean, let's take Microsoft, which, you know, moved huge sums through Ireland, and Ireland is now being forced by the EU to take back unpaid taxes from other companies; yet I need to think of that same company, in that same group of companies, as setting the rules, through the Partnership, for how AI will be beneficial to society. I think a lot of us would say that taxes are beneficial to society, because they help build everything. How do I square that?

Well, you know, we're ranging far and wide now, into politics and way beyond technology. I suppose there are many powerful organisations, not just companies but, let's say, governments, that we trust to look after our interests, and these organisations are seldom unalloyed. They have a complex job to do; sometimes they make good decisions, sometimes they make decisions that are not so good, and I think these powerful companies are like that. I know a lot of the people in Microsoft that I work with; there are a lot of very thoughtful and well motivated people, and sometimes things don't get done right. When you're entering a very complex arena, where the stakes are high, where things are happening that we really care about, then you should expect that some things will be done well and some things will be done not so well.

But personally I do have a lot of confidence that these big companies we're talking about are very serious about making good systems. Just as you gave the example of Facebook: they hadn't appreciated, perhaps, the consequences of the services they were offering; once it became clear, it seems like they really want to do something about it.

They have said recently that they're willing to pay more tax, so, like I said, I think they've realised; they are starting to make the right noises. I was at the UN's Internet Governance Forum, and they were sitting there right next to the countries and the NGOs, so I think this is sort of happening. And I want to go back to something you said about bridges falling down. One of the metaphors I use, because I had the pleasure to talk to some architects, is this: it used to be that any rich person could build a building, and it would fall on people with some predictable probability, and some people would die, and whatever. Now you have to get planning permission, you go out and you figure out where the building belongs, everybody involved has been licensed, they all know how to go out and get that, and the building gets inspected. That's where computer science was: it was a toy, and we could build lots of stuff.

We built pretty cool tools, and those tools weren't just toys; but now that it's become infrastructure, and now that it's falling down on a few people and has maybe killed some people, we need to think about licensing and inspection.

We've talked about these big companies. You see a lot of really good people go to the same small number of really big companies for pretty big salaries. Is there an issue, and we've talked about ethics, so that's probably why it feels like we're beating them up a little bit, is there an issue of concentrating intellectual wealth, intellectual capital, in a small area? Is there actually a financial inequality issue, with all these people not only going to this small number of big companies, but getting a lot of money compared to everyone else?

So yes, and let me answer first on that: yes, we have a problem, because we currently have an inequality based purely on the knowledge of AI and machine learning. People who are experts in the field are able to get salaries which are currently five to ten times the average salary in London, which is really huge; and five is the minimum. That's one problem. The second problem is the taxes and how the companies are made: there is no geopolitical border for these companies, they are global, they can go anywhere, and they don't have to pay the taxes. They can make deals with a government, saying that they will employ people, hence the government gives them tax breaks; this gives them more money, and this is how they buy more people. The result is that you will have so-called intellectual capital concentration in these few companies, meaning further that they will have monopolies in the future, because everything is about AI; so we will not have a free market, we will have monopoly. Do we want that? I think regulation is really important: we need to regulate these companies.

The fact that they are global and don't have to pay taxes could also become true of their people. This doesn't happen now, but it could easily happen: anybody can live anywhere; we are talking about programming and machine learning and AI, so they can work from anywhere for a company which is in the States, wherever. So it's really important to understand that governments stand to lose hugely in their taxes. This is the disturbance for governments: they need to do something if they want to survive, simple as that. So that's one part. The second part is what Andrew mentioned, and I really would like to go to that point. Many people say, I will give my data because there is this greater good that will help me. Sure, give the data, but get the money for it: it's your data. That's my issue. We are the owners of this data, so if somebody else wants to profit from it, why don't we get a piece of that?

It's more than just governments losing control: when governments lose control, people lose control. At the moment, at least in democracies, governments are our mode of being able to keep these things under some sort of control. And I think you touched on something really important there: whose is this data? Again, I'm struggling to find metaphors that work.

Are we looking at something here that is so ubiquitous, or potentially so ubiquitous, to the way we shall live in the future, that it is more like language than product? Language, you know, is owned collectively; it does evolve, it changes; you can control it to some extent. I mean, if I'm called McDonald and open a restaurant, I can't call it McDonald's, because it's a brand. But even so, the Académie française has found real difficulty trying to control the evolution of the French language, because it's owned by everybody who speaks French. Now, is this really something in AI that is going to be so ubiquitous that the idea of monetising it, turning it always into a product rather than seeing it as a collective possession for the benefit of everybody, for the good of all, misses the point? If we started working with that metaphor of language, would we think of it differently? Would we find more creative ways of handling the fact that I own my data, but if I can't access my data without someone else's product, in what sense have I got any control?

On wealth concentration in a small number of companies: are we worrying too much about that? You know, what did Theresa May say, and I think it came out in Wendy Hall and Jérôme Pesenti's review as well: a new startup in London every week, or every month, or some good rate.

Are we worrying too much? Is there not an ethical issue in this concentration of smart people into a small number of companies? How do you see it?

I think it's a call to action; that's how I see it. In many respects the success of these relatively few companies is inspiring: they've invented things that simply didn't exist before. Apple invented the smartphone; we hadn't conceived of anything like that. So I think the positive way to react to this is to be inspired to do likewise, and this of course is what the small companies are doing. There's the challenge of how you get the small companies bigger; I think that's one of the big challenges. Actually, the UK is very good at startups, but getting startups to grow big is harder: ARM is the biggest computer company that we've grown in recent times; of course that's now Japanese-owned, but it had a very good long run as a British company. So I think we should be inspired, and I think we should spread the goodness. I think training is very important: if we want to have a vigorous ecosystem in the UK innovating these technologies, we need to train more people, and we need to think about how those people might also benefit small companies, rather than simply going to the highest bidder; we need to think about those mechanisms.

And I just want to say one thing going back to data. Jaron Lanier is a very interesting writer, and he has a whole book which is pretty much about this, about data concentrating in fewer and fewer hands, and one of the things he suggests is the idea of micropayments for data. It's not so much about the cash, I suppose, as acknowledging that the data has value, and building that into the kind of infrastructure we make. Of course, we do sometimes get that back when we get valuable services for free: if we go online and we do a search, what's happening there is not trivial; this is access to all the world's knowledge, and let's not forget that we didn't have that 20 years ago. That is a pretty big benefit, and so that, in a rather indirect and inexact way, is some of what we're getting back.

We are not giving away our data, we are bartering our data for services, and one of the things bartering entails is ducking out of taxes. I was supporting the big companies a few times tonight, but one of the things that made me really unhappy recently: one of them, I don't remember which, said, we're willing to pay more taxes, and there's evidence of this, we've just opened up a research branch in Paris and we'll be paying a lot of tax there. No, that's not tax, right?

When we have all these free services, that means we haven't denominated the value, both of the data going out and of the service coming back, and that means we can't tax it; it's really hard to denominate, it's not a traditional product. I think this is one of the reasons productivity is supposedly stagnant: we can't even measure productivity right now. But what we can do is see how much money a company is getting. We can see its valuation change, and then we could say what proportion of its users are, say, in the EU, and the EU is big enough to say, even to a Chinese or an American company, hey, if you want to do business here, we need to see a proportional amount of tax coming into the EU. That's the way I would solve that problem: I wouldn't even try to denominate the transactions, we can't do that, except by seeing how rich the companies become.

Let's talk about the products, the actual things that we're using day to day that have AI in them. Maja, do you think the products that are AI-driven at the moment, the kind of day-to-day stuff, are affecting our psychology in any way that has ethical implications?

OK, so, in principle, Facebook is using AI. It's a simple version of AI currently, but it does use AI, and it uses things like tagging, for example, which is not simple any more, right? It's recognising your pictures and who you are, so that's quite advanced. However, the problem with all of this is that you can see it everywhere: you go to a restaurant and you see people sitting on a date, and why do they date the phone, why do they have this other person on the other side? It's unbelievable. I have seen whole families, kids and parents, everybody with the phone. We forget to communicate with each other; we are hiding behind these phones and behind this kind of technology, so this is a big issue.

Related to that, and it's an awful thing to tell: I had a call from Colorado state; they had an epidemic of suicides among their teenagers, and the reason is exactly this. They found that on Facebook, whenever somebody killed himself, somebody put up the picture, and these kids became celebrities, and other kids got the idea: well, I will become a celebrity if I kill myself. They had something like 35 suicides in less than a year; it's a horrible thing. Then there is a lot of bullying through the social side, a lot, and whoever has teenage kids knows about that: Instagram is currently the way to bully other people in the high schools. That's horrible. This is not the way we envisaged the usage of this technology.

Think about relationships: currently in the UK, 35 percent of all relationships are made through online dating. I don't say that's bad, I just say it has changed us.

It's my turn to be positive about AI now. Well, we're talking about ethics, so I think we've picked mostly the bad stuff, right? AI is also helping good things happen with relationships. I think we've got the key point: there are these hazards, and they're real hazards, and there are also the benefits. If you move to a new town and you know nobody, you've got far more tools for meeting people and making friendships, and that is all going to happen much faster. If you've got family in Australia, and you came to Europe to study, let's say, now you've got Skype and you can communicate with them, and that is a huge thing. So I think it's high stakes all around.

I think in our society, yes, there's the polarisation of society, but also the polarisation of narrative, and the challenge with something like AI is that it has had the effect of creating, as you've mentioned already, networks of like-minded people who share similar values, but it has also had the effect of creating echo chambers, virtual bubbles, and in many ways creating contexts in which misinformation and disinformation are often used and applied.

That's a distinct issue, but it's very closely connected to the way AI is developing. What we are seeing is a really interesting kind of polarisation of narrative, both online and offline, though I think very much perpetuated by the polarisation of narrative online. A really interesting example is the range of elections that we've just witnessed: many people didn't really expect the outcome of the general election to be the way it was, many people didn't really expect the Trump result, and lots of people, I think, were surprised by the use and application of tech within those contexts in order to polarise narrative. Our perception of what other people think is increasingly becoming shaped by the echo chambers and the contexts that we are part of. I think that's really interesting as an ethical issue, because we are faced with a challenge, and it speaks to the conversation we were having just earlier: how do we create a conception of a common good when you've essentially used technology to create very disparate groups in society, disconnected from each other?

OK, so: polarisation is incredibly highly correlated with inequality, in fact.

This has happened before: in United States politics you get very clear examples of it coming and going, waxing and waning, so technology is not necessary for it. And every time people actually do look at the role of social media, it doesn't seem to be a major factor: the proportion of time you spend on social media is not one of the determinants. Actually, it seems that most people don't get that much of a filter bubble, because most people are Facebook friends with the people they went to high school with and the people who live around them, and that's it. It's the elite who tend to have filter bubbles, so it's a problem for the elite, who do have a lot of impact, but it's not as much of a problem for most people; or at least it's not coming from the social media. Anyway, the interactions are complicated. Oh, I was going to say another thing: something that is correlated with high inequality and high political polarisation is these 50/50 elections that things come down to. And then you come to the list of problems, but the other one sitting there is: when you've got a 50/50 election, what if you can tweak just a few voters, can you throw it in a way people didn't expect? And it really did look like, in both the Brexit and the Trump cases, that the winning side expected to lose.

they had their concession speech nigelfarage gave his trump's was written so it wasn't just that the winners weresurprised i mean the losers were surprised to lose it was that thewinners were surprised not to lose yeah so that did make it look like obviouslyit's a i don't think it's straightforward in terms of i'm notsaying that they're the cause or relationship but i think that what'sreally interesting about the technology in the applications use of ai is it hasthe potential to perpetuate polarization we have quite polarized societies--along very different lines and so oursocieties are interestingly and


increasingly polo i on other lineseconomic social you know demographic such as thatcher but the reallyinteresting in intervention of technology and in particular ai is thepotential it has to perpetuate that inequality that we were making which isthe extent to which a i potentially could address help address inequalitybut it also had the potential the potential and the and i don't knowenough about it but i can see no reason in principle why i might not be designedin such a way as to reinforce community but in practice what it seems to bedoing is privileged enjoys in a way that is actually quite undermining ofcommunity in the sense that we now see


We unfriend people. If I can introduce just a moment of Christian theology: love thy neighbour. We tend not to actually choose our neighbours, in the geographical sense, beyond a certain point; human community means living with people you don't necessarily get on with and haven't chosen. It's the balance between the chosen and the given which I think is part of, dare I say it, the human condition, and we are moving, for better or worse, and it can be for better, the boundary to make more things chosen than given. But I think in the end, if I can again introduce a theological concept, at some point we die, and we tend not to choose that. Choice has its limits, and so the given, and I'm serious about this, the given is part of being human, and if we move that boundary too far into imagining that we can choose the people around us in all these ways, we're actually denying something quite important about ourselves.

Before we go to the audience questions, which we'll do after this, I wanted to ask, and the techie three on the panel are probably going to make more sense of this, but the rest of you pitch in: I get a sense we're in the foothills here. What do you see in your labs, in your interactions with the people in the area? Do you see products, applications, coming down the line? What do you think is coming?

One thing I would like to say is that we're getting a very interesting debate here from the arts and humanities crowd, and I think technology needs the arts and humanities crowd in at a much earlier stage, not coming in after the products have been built, when I'm not sure it's really going to do what we want. The closest we get so far in the technology companies, and maybe this is the right way to go, is the discipline of design. I was very keen, when I was running the Microsoft lab, to get designers in, and you do get a completely different thinking. Just because you're good at coding doesn't mean you should be the person who decides what these systems look like in practice, and pragmatically that happens a lot; actually we need a much broader reach, in design, in philosophical thinking, and in legal thinking.

I notice you haven't mentioned Terminator robots.

A very rich field for that kind of question is a company, FiveAI, that is building autonomous cars; I'm a scientific advisor and I'm getting quite involved in that. Of course, that is a whole new set of issues from the ones we've been talking about. I guess the best way to characterise it is that this, for the first time, is safety-critical AI. We talked about bridges, and they're safety-critical; the other systems we've talked about are mostly social systems, although sometimes safety-critical too.

Autonomous cars are very obviously safety-critical, and that's provoking a whole new kind of thinking. There are unique vulnerabilities in machine learning vision systems: generally, an artificial perception system, and I think Maja will probably agree with me on this, is not at the level of reliability we need. If you read the scientific papers, you find some perception system that is right 99 times out of 100, something like that; let's be generous and say 99.5. But that's not nearly where we need to be for cars. The industry calls that two nines of reliability, 99 percent; we need something like seven nines to get to this roughly human level of performance and safety in driving. That is a fascinating challenge, and there's a very direct safety imperative behind it.

First of all, I'd disagree with some of the things that you're attributing to artificial intelligence; I would say they are at least as much just ICT, communication technology, because humans are sticking their brains together, right? In a way it doesn't matter if it's a machine or a human: you're getting more ideas, faster, and from more sources. Why I mention that is that if we don't really know how much just our ability to communicate is making these political and economic changes, then how will it change when we get real-time translation?

Real-time translation will just change the way things feel, and it is already changing things. I would love to see something like a real-time translation of institutions, so that if you're a migrant you suddenly plug into the local tax system and don't just wind up being an illegal alien somehow. But anyway, the other thing that I really worry about, and that I wrote a paper about in September, is something from the European Parliament: this proposal about artificial intelligences being persons. People say, well, it's legal personhood for artificial intelligence, like you have for corporations, and they say, oh, don't worry, it's just a legal instrument, it's called a legal fiction; you know, they mansplain it to you, it's called a legal fiction, it's going to be convenient, it'll help with the contracts. Look, the legal fiction which is legal personhood for corporations only works to the extent that the people who are the decision-makers for those corporations don't want those corporations to disappear, that they would go to jail, that there are consequences they would actually not want; and that's why you can have shell companies, which actually allow you to do unethical things. A completely synthetic legal person, there would be no dissuading it, and human law would not work on it; it just doesn't make sense.

And again, there's this attractor, which is, you know, companies trying to get out of their obligations, and I won't name one, but car companies, say, trying to get out of their obligations, and the futurists who want to believe that AI is going to come as these benevolent aliens that will make the world better. And what I really worry about is despotic leaders who don't want to accept that death is inevitable; we keep reading about tech billionaires trying to upload their minds and live forever. What if some bad Putin chatbot sits there on some computer spouting nonsense, and people keep following it, and it creates anarchy for hundreds of years? That would be awful. That's one of the things I worry about: not piercing this veil, and not getting people to see the human responsibilities at the core of human justice.

I really do want to go to the audience questions, but before that, Maja: some of the work you're doing is actually going to have interesting ethical implications if it comes to fruition, because you're working on systems that can pick up our emotions, and there's other work going on, maybe you're doing it as well, on systems that can display, mimic, emotions. We were saying earlier: you only have to draw two spots on a box and people give it empathy. What are the consequences of having very convincing systems?

you only have to draw two two spots on abox and people give it empathy i mean what are the consequences of having veryconvincing systems people think okay so this is this is really importantyes we can build robots that that a lot of people built robots i mean handsrobotics is one of those that build robots that look as humans and they'revery realistic and they can make expressions that are very realistic andthey can maybe recognize your expressions in their react appropriatelyand so please remember everything is programmed so i don't think it'd takeyou long to build a system that would pick up on people's emotions and sellthem things when they were most likely


Oh yeah, but that's a separate issue; that's Facebook. They have filed a patent for a camera carrying a face-recognition engine that will be connected to Facebook. They have 2.1 billion profiles; they can recognise people in shops, and they will know from our searches how much we will pay for something, and hence each and every shop could give us custom prices for each and every thing.

Unless anyone wants to jump in, let's go to the audience for questions. We should have a bunch of roving mics, or at least one will be arriving when you start putting your hands up. It's quite hard to see an awful lot of you, especially on the balcony, so you may need to shout; I don't know if we can do anything with the lights to make that easier. If you've got questions, get your hands up, and let's start here. If we can keep our questions short, we can have a lot of them, and we don't have a whole load of time.

I'm a clinical neuroradiologist at Charing Cross Hospital. Mine is probably the medical specialty which is going to be most affected by AI, with systems that can automatically read scans. Now, given the issues surrounding black-box algorithms, do you think it would be immoral for us to implement these systems until we actually unravel some of the mysteries that surround a black-box system, given the fact that they can directly affect patient care?


One problem is that if the black boxes are performing well enough, it might be immoral not to use them, rather in the way that, you know, clinical studies sometimes get cut short because it becomes absolutely obvious that such-and-such a drug is such a lifesaver that it's no longer reasonable to have the control group.

I think the best way out of the dilemma would be to advise the doctor: give him what the AI system finds, and the doctor can take this into account or not. But just cutting the human out, I don't think that is good. So think always about this symbiosis with AI; that's the best.

I've heard people who run HR say they are just really happy about how AI is helping them see things that their normal processes weren't helping them see before. But if you tried to replace the humans with AI, then you'd have a whole bunch of other failures. So it's about trying to use mutual strengths.

Absolutely, it's about keeping humans in and trying to use mutual strengths. But for a practising doctor, isn't there a potential legal issue, where going against a decision by your AI assistant could have legal implications if things go wrong, because they'll say, well, you didn't do what it recommended?


Ideally, as I said before, justice has to stay so that the human is the one who's responsible, and so ideally the AI would only be advisory, which was the point being brought up. But I see what you're saying: somebody could be very upset. This is the thing they used to say about driverless cars, though I haven't heard it much recently: OK, a tenth as many people will die, but it's going to be a different tenth, and everyone's going to be up in arms about that. In practice most people seem to realise that driving is Russian roulette, and they don't want to be out on that. So I haven't actually heard that complaint lately, but it's good that we've identified the issue now, and I hope that we set a really nice precedent and make good law about it, because otherwise it could go horribly wrong.

OK, more questions. We might work around a bit so that we get everyone, and don't just have the people with the mics running around like crazy. So can we go... oh, I thought we had loads of mics. The woman here in the front row, blue top with stripes.

Great, thank you so much; this has been fascinating. A couple of you mentioned in the beginning the issue of what it means to be human, and that there are disagreements around that. I'd love to hear more about that.


As we face this potential system, or even, as some people have argued, a life form, that has totally different motivations from ours, that has intelligence without consciousness: what does it mean to be human, faced with this?

The basic idea is that the A in AI stands for artefact. When we build it, we are responsible for it, and that's the biggest difference between it and life. Now some people say, oh, but you choose to have a child. OK, in half the cases that's true, but it's different: you don't get to say, am I going to use lidar, how many end effectors am I going to have, which kind of CPU am I going to put in. It's not like having a baby; it's like writing a novel. You have complete control, within the parameters of, you know, the laws of physics. So if we create things like that, and I'm not saying I think we can create things like that, but even if it is possible to create something that is just like a person, what would we be doing? Why would we, you know, give up our responsibilities? Again, I think our justice system would just sort of fall apart. People don't like that answer, because they don't like to be limited in what they're allowed to do. But basically I think, first of all, that our justice system would fall apart.


And secondly, because of the things we've all been talking about, I'm sure that we would believe we had created something that was conscious and needed something from us long before we actually had, because we're so easily fooled. So I just think we shouldn't go there.

I want to focus on the I. The rules among humans tend to be made by those who value intelligence very highly, but being intelligent, or having a particular level of intelligence, isn't the measure of being human. So again it comes down to the point that has cropped up again and again: who makes the rules, and on which principles? And are we actually using artificial intelligence as a tool, or are we likely to get into a situation, as the previous question almost suggested, where we trust it more than we trust ourselves or people like us? We tend to defer to human authorities when people know more than we do, but not necessarily because they're more intelligent than us. I think there could be a question here about the way we're talking about this, about the narrative around AI that leads us to place it on a pedestal as something that in many respects will exceed human performance. That doesn't mean we should trust it more than we trust our own judgements, because human judgement is a mix of consciousness and all kinds of things that are not just about intelligence.


So the church has no problem with researchers pursuing this idea of what's always referred to as human-level intelligence, as if human-level intelligence were the pinnacle of intelligence; the church has no issue with that. Pursuing human capacities and capabilities is entirely what we are called to be and to do. It's all about application, and application in ways that make us more deeply human.

We don't know, though; that's the main problem. What is human? We have no idea. Each and every one of our brains is completely different, and what we know about the brain is maybe 5 per cent of what there is to know. We have no clue what's in the brain, and each and every one of these brains is different.

I have a point about social licence and social risk. In the past we have designed systems, car insurance is one of them, that allowed us to distribute risk and essentially compensate people for risks that are, I suppose, inherently shared in the system. So it's a question, and I'm just waving at it: if AI, and the development of AI, is now going to pose some kind of special risk, because there will be questions people don't agree on and we're going to have to come up with an answer, in relation to autonomous cars for example, how do we redesign systems around that, systems that ensure that social risk is distributed, or that people are at least compensated in some way, shape or form, in a way that maintains the second thing, which is the social licence of these approaches to operate?


I think that's really key, and it really underpins a lot of the discussions that we've had today about why companies might want to engage. I like to call this a sort of "decide, announce, defend" model: often people make a decision, they announce something, and then they may have to face a massive backlash against their initiative, however well developed it was. I think companies, but also wider society as a whole, have to really think about moving away from "decide, announce, defend" towards "engage, deliberate, decide", and that requires a massive cultural shift. We're not ready for it yet, but we have to be.


Mostly, I am not confident that we will always be the best. When you get a recommender system like Amazon's that tells you what book you should be reading, that's just done by a lookup; it's not very complicated machine learning. You just find another person who bought a bunch of the same books and you make a projection. I think in the next ten or twenty years we're all going to be able to, you know, find our best mate, or Google our next best move. So that's why, when I think about human judgement being special, I think, well, it sort of is human judgement, but it's just data based on other people.
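The "lookup" being described here is user-based collaborative filtering; a minimal toy sketch of it follows, with invented purchase data purely for illustration.

```python
# Toy user-based collaborative filtering, as described on stage:
# find another person who bought many of the same books, then
# project their other purchases onto you. Data is invented.

purchases = {
    "you":   {"Sapiens", "Superintelligence", "Thinking, Fast and Slow"},
    "alice": {"Sapiens", "Superintelligence", "Gödel, Escher, Bach"},
    "bob":   {"Wolf Hall", "Middlemarch"},
}

def recommend(user: str) -> set[str]:
    mine = purchases[user]
    # Pick the other user with the largest overlap in purchases...
    neighbour = max(
        (u for u in purchases if u != user),
        key=lambda u: len(purchases[u] & mine),
    )
    # ...and recommend whatever they bought that you haven't.
    return purchases[neighbour] - mine

print(recommend("you"))  # {'Gödel, Escher, Bach'}
```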


But the point is, I think we have a big challenge for the humanities when we are suddenly faced with the fact that we could use AI to figure things out better than we can do ourselves. I think that's the coming challenge.

Let's get some more questions. Could we get some from around here, and then I'll move over to these guys, and then we'll hit the top crowd. OK... I can't see who you're pointing at; I can see everyone's pointing.

She said that she wants to make herself better; can she get better than what Hanson Robotics programmed?

Yes, I will repeat the question. The girl is asking about Hanson Robotics' robot Sophia, who said that she will better herself, and that she would like to have a family, and that she will love the family and care about the family, and better everything, and so on. The issue is that this is a robot, and you can program a robot to say whatever you like, so this has nothing to do with reality. Hanson Robotics really like to make splashy stories that fascinate us and amaze us and probably anger us, because then we will talk a lot about them, and any publicity is better than no publicity. So do not trust these things.


Trusting this is the same as trusting the Terminator movie; it is fiction we cannot do. When you look at robots today, the robots are really funny: they usually fall, they fall in all possible ways, they cannot move very well, let alone do real stuff, let alone understand real stuff. Making herself better? In terms of knowledge, yes, she may: the robot can acquire knowledge through the internet, for example, and recognise a lot of stuff. But it's not about really making herself emotionally better, making somebody a better person, let alone care. Care is an emotion which you show and feel; robots cannot feel, we can just program them.

It's, you know, like NASA: they're a bunch of scientists, and they have a Twitter account that is the rover or whatever, and it's just somebody typing, pretending to be the spacecraft so that kids can imagine being a satellite or whatever. But it's not OK when it gets to the point where it's right in front of us and it's confusing everybody. The UK was the first country that had a sort of national-level document, the EPSRC Principles of Robotics, and the fourth principle was that you shouldn't make a robot seem more human than necessary: its machine nature should be transparent.


Now, the reason for that: there's Kant, some German philosopher guy, who was really smart, and he figured out that it was wrong to kick dogs whether or not dogs were something you needed to worry about for their own sake, because we identify with them. The people who kick dogs often hurt people too, and so he said you need to take care of a dog because you think of it as being like a person, regardless of whether or not the dog itself is something that's important. Some people have used that to argue that we have to treat robots really well too, because they remind us of ourselves, and therefore it degrades us not to treat them correctly. But what the people who came together and wrote the Principles of Robotics said (and I'm one of them, I'm biased) was no: that means we should make it more obvious that it isn't a person. And so all these people that think it's really cute to make their robots look like people and to tell children that they're alive and whatever... no.

Let's get some more questions. The... pashmina? Sorry, I don't have a very wide vocabulary.

The pashmina it is. I work at a software company, and I happen to sit next to the marketing department, so whenever I hear companies making grandiose claims for social good, like saving lives, all sorts of alarm bells go off in my head.


As Sir James Lighthill said in 1975, you've got to remember these guys are trying to sell you something. Should we be allowing these software companies to get away with claiming that whatever we do there are unanticipated consequences, and using that as a get-out-of-jail-free card?

Yes... I'm not sure I've got a good answer to that. You and I may be the only people old enough here to know who Sir James Lighthill is. I'm going to pass that one on.

On saving lives, I actually disagree with that. Think about drones: they could go into a fire, tell you which kind of fire you have, and in which rooms you can pass through, so you either let firemen in or you say it's safe enough to go in. You don't think that's wrong, really. Or think, for example, about these things that I'm talking about: we would be able to diagnose certain things much earlier, just based on the AI. It's the same as a CT scan or MRI scan or X-ray, right? It helps you do things that otherwise you would not be able to do; I don't see anything bad in that. So saying "saving lives" is not wrong in that sense: there are certain technologies able to do certain things to save our lives, and that's great. But of course we should not make promises which are not possible; we should not do what Hanson Robotics does.


That gives us a completely wrong perception of the technology. We should not call the technology miraculous, because it's not: it's awesome, but it's not miraculous.

OK, so there are going to be trade-offs, you know, like the unintended consequences, and there will be some mistakes. But we should figure out what the standards are about what you do before you release something, and we should absolutely hold people to account: did they properly follow software engineering practice, can they demonstrate, through the logging and accounting of the software, that this was really something they couldn't have handled in the lab and caught beforehand? I just think it's like any other product, and we will establish this. It's because we're thinking of it as people that we think it's something different, but it's not people; it's a product.

There's a Silicon Valley sort of issue here: the whole mantra of "move fast and break things". When it comes to autonomous cars...?

Yeah, I think it depends to some extent on how high the stakes are. I mean, safety-critical AI, autonomous cars, that's an unfamiliar space, largely. Now, if you're recommending movies and you get the recommendation wrong, well, you probably won't even notice; the consequences are not very bad, so why not get it out there?


And it's certainly true, in terms of entrepreneurship, that releasing new software every two years, fully tested and all that, is very conservative, and it may not be very creative. So yes, there's certainly some virtue in that mantra, but I think we've just got to be aware of how high the stakes are.

We do know that actors and characters are different, and yet we still enjoy movies. So actually my group is doing research right now on whether people can both find a robot really cute and humanoid, and also understand that it's a machine and turn it off, and whether we can introduce that duality.

Let's have some more questions. It's... I'm so paranoid about guessing what clothes are called and getting it wrong.

My name is Justina, and I wanted to ask a question around governance and prejudice. I read a report by ProPublica, who were talking about how in the US they are using a type of machine, or software, to see how likely you are to reoffend if you were arrested for a crime. What they found over a long period of time was that black and Latino members of society were more likely to be rated as higher-risk offenders when in reality they had never offended, while a white counterpart who actually had, like, ten years of history was rated as a 3. So I know we've said that these are core issues we need to be thinking about.


But my question is: this software is being used right now, it's affecting judges' judgment, and it's an immediate issue. So what are some areas around governance that we can start working on today, rather than waiting a few years down the line?

So what you're saying is that there is a system, it's called COMPAS, and it's made by a company called Northpointe, and some interrogation of this by ProPublica has shown how this software is used to look at reoffending rates for people in prison. The Ministry of Justice here has a system called OASys, which isn't the same kind of thing, but we do have a system that aims at the same end. The ProPublica investigation found, over the long term (and I'm not explaining this as well as I should), that black offenders and other non-white offenders were more likely to be rated as likely to reoffend.

Right, yeah. So there was a bias in the judgments it was handing down, essentially, on people's reoffending risk, and this has been challenged in court as well. One of the problems is that it's a black box, and the company says it's proprietary, so you cannot inspect it. But people have gone out and tested it: they've shown with Mechanical Turk that most people can do better than it, and they've also built better decision-tree software, which is something you can actually open up and see what it's doing, and it does a better job.
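As a minimal sketch of what "software you can open up and see what it's doing" means in practice, here is a tiny decision tree printed as human-readable rules; the features and data are invented for illustration, not taken from the studies mentioned on stage.

```python
# A tiny, auditable decision tree on invented risk-assessment data.
# Unlike a proprietary black box, the learned rules can be printed
# and inspected line by line.
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative features: [age, number_of_prior_offences]
X = [[19, 3], [45, 0], [23, 5], [52, 1], [30, 0], [21, 4]]
y = [1, 0, 1, 0, 0, 1]  # 1 = reoffended, 0 = did not (invented labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The whole model, as readable if/else rules anyone can audit:
print(export_text(tree, feature_names=["age", "prior_offences"]))
```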


So absolutely, something we could say right now is: no, you cannot have unauditable software making these decisions, and in fact most of the criminal justice system should work that way.

I think there's a really interesting question there about what a decision is, because arguably one of the reasons we have historically taken a long period of time over such decisions is legitimacy, and we're simply saying we're going to replace that with something of a very different nature. The other issue is the trade-off between efficiency and, I guess, what that system needs to do in order to get there, and this is the problem with something that might end up happening without the outcome being quite what we wanted. Firstly, there is a decision that we need to make as a society as to whether we're going to hand this over to a criminal justice system that will make a decision on the spot, or whether we're actually going to say, well, we want to take time and think about this properly and deliberate about it. There's a cost there, but there's also a cost on the other side as well: you may reduce the time it takes to make a decision, but with issues around the implications.


This is something that's being used to help judges, and it's already being done; it's been plugged into this sort of future scenario of how far the application and use of AI can go, and so that's like one extreme. But obviously there's a really interesting point about the fact that the use and application of AI in the criminal justice system can help assist judges, and can help assist juries, in making their decisions. And then there's a really interesting question, which is: what does it do? Is it there to provide a recommendation, in which case there's a power dynamic that definitely needs to be examined, or is it there to help interpret data? And again, the power question.

OK, so in the UK right now some police departments are already taking this kind of advice, and if you suspected that the system was as bad as this Californian system, what would you do?

So, I don't know; I don't do British law. But if you found out that your community's police system was using an AI system, and maybe you don't even know whether it's biased, and you just want to find out whether it's biased, what are the steps you take? There's a lot of great innovation going on around that.

I guess with COMPAS I'm very curious to know... there are two ways you could approach that. You could train the system on previous decisions, in which case, if previous decisions have been biased, then you'd bake the bias in. Alternatively you could train the system on previous outcomes, that is: did the person reoffend, or did they not?


Even then, you may not find that the result is socially acceptable: maybe because the data set was too small, or for whatever other reason, it may still appear to be using a principle that is socially unacceptable. But what I find fascinating (I'm, sorry, an unreconstructed techie) is that people are finding ways both to monitor much more effectively what these systems are doing, and to kind of bake in principles as well as data, so that you simply cannot use colour, shall we say, as a criterion for making the decision, and even to disallow the indirect use of colour. It's easy enough just to strike the colour field out of your data if it happens to be there, but if you use, let's say, postcode data, it might turn out to be an indirect proxy. So even those kinds of biases can be sniffed out, and something can be done about them, which I think is absolutely great.
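A minimal sketch of how an indirect proxy like postcode can be "sniffed out": check how well each remaining feature predicts the protected attribute you struck out. The data here is invented for illustration.

```python
# Striking out the protected field is not enough if another feature
# (here, an invented "postcode") still encodes it. One simple audit:
# check how well each remaining feature predicts the removed field.
from collections import Counter, defaultdict

records = [  # invented data: (postcode, income_band, protected_group)
    ("N1", 3, "A"), ("N1", 2, "A"), ("N1", 4, "A"),
    ("E9", 3, "B"), ("E9", 1, "B"), ("E9", 4, "B"),
]

def predictability(feature_index: int) -> float:
    # Fraction of records explained by each feature value's majority group:
    # 1.0 means the feature is a perfect proxy for the protected field.
    by_value = defaultdict(Counter)
    for rec in records:
        by_value[rec[feature_index]][rec[2]] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_value.values())
    return correct / len(records)

print("postcode as proxy:", predictability(0))  # 1.0 -> perfect proxy
print("income as proxy:  ", predictability(1))  # ~0.67 -> weak signal
```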


My exasperation about this issue, though, is that the systems are out there affecting people's lives and their freedoms. It's great to have this academic work going on. But what we haven't talked about is: what is the pressure outside of Europe? Because we know there are some things coming through in Europe in a few months' time that may change things. What is the pressure in the US, where that system is used on people every day, if that system is deemed to be bad? Because you were talking about cool stuff we could do to make it better, but in fact there's a bad system out there.

And it's really important to realise that the system looks like it was a badly handcrafted decision tree or something. It's like somebody just built a bad system and sold it to California, and now California, or some part of California, not the whole state, won't admit they made a mistake. Anyway, one system like that which has actually been to court, successfully, was in Idaho. In this case it was about allocating benefits: there were people who suddenly got less money for their disability, and Idaho said, we've outsourced that, it's this guy, you know, Fred, I don't remember, Fred's system. And Fred says, that's my IP, you can't look at it. The ACLU took him to court, won, and they looked, and it's a mess: it's not AI, it's a giant Excel spreadsheet, you know, with macros and stuff. It just made no sense. The point is, probably what happened there was that Idaho was running out of money, they didn't want to raise their taxes, and they said to this guy: save money, and make sure that nobody can tell how it was done. That is still AI; you don't have to use the sophisticated machine-learning systems, unfortunately.


So I think it's really essential that we have better answers to your question than we've been able to generate here. Citizens need to know how to go and complain if they think that something is going wrong, and of course, as with government in general, there will be people who believe the most perfect system in the world is doing something wrong. But there need to be ways to audit, and there need to be clear ways to go and deal with these problems, because yes, this is a present problem. As I said, the ACLU was able to win in Idaho, and I believe Texas has actually found something similar: they said that's not due process, because due process involves being able to audit the code. So some states are making that decision, but it hasn't been regularised yet, and, as people were saying before, it's an international problem, so I hope this is something we can come to terms with, for example through the UN's Internet Governance Forum.

I'd really love to get a question from the balcony, if there's anyone... can we take this one here, and, yeah, I wonder if we might actually just get you guys to give your questions, each one really quickly, and then we'll take the three and respond. The guy knows where to put the microphone. OK, here, one on either side of you.


OK, yeah, go ahead.

OK, I'm a PhD student in the data privacy group at Imperial, and I was wondering: AI is all very data-dependent, right? Any machine learning is going to be super data-dependent, and we're seeing research come out that says, for example, that you can figure out what my gender is from my location, et cetera, et cetera. On the other hand there are people, like a senior editor of the Economist, who argue that we should stop worrying about privacy and give AI all the data it needs, because, you know, we can cure cancer. So I was wondering if you had any input on how we should make these decisions, in any of the fields where this could be deployed.

Can we just grab the other question that was up there, super quick?

Just a question, pretty briefly: what kind of social, political, economic system do you see existing in 50 years' time, when AI can do most things?

OK. And I'm Harry Berg, and I wanted to ask a question directly to Malcolm, who I think raised some of the most thought-provoking points: how do you think that the religious community will adopt, or be affected by, the use of artificial intelligence?

The first question was a bit quick for me; I didn't catch it.


It was about data privacy, and whether we should keep data private, because a lot of people say, well, if you give us your data, we will be able to do some amazing things, like cure your cancer; but on the other side, you know, we do have these kinds of biases.

Right. I believe, on data privacy, that it is something you should actually be responsible for, so it's your own decision. Currently it's not your decision, unfortunately; this choice is taken away from you, and I believe that this choice must come back to us. So I believe it's on us, it's on our plate scientifically, to find a way to protect each and every datum that is ours, and once you can do that, to tag your data with your own tag. Then you will be empowered to give the data to whomever you find it useful, or want to, or for whatever other reason, including money. Currently we are not able to do that, and I don't think this is OK, no matter what the cause.
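As a minimal sketch of what per-datum consent tagging could look like, the record structure and purposes below are entirely invented to illustrate the idea; this is not an existing standard or product.

```python
# Invented illustration of per-datum consent tags: each datum carries
# its owner's terms, and access is only granted when the requested
# purpose matches what the owner consented to.
from dataclasses import dataclass, field

@dataclass
class TaggedDatum:
    owner: str
    value: str
    allowed_purposes: set = field(default_factory=set)

    def release(self, purpose: str) -> str:
        # Refuse any use the owner has not explicitly consented to.
        if purpose not in self.allowed_purposes:
            raise PermissionError(f"{self.owner} did not consent to '{purpose}'")
        return self.value

scan = TaggedDatum("maja", "ct-scan-2018.dcm", {"medical-research"})
print(scan.release("medical-research"))   # allowed by the owner's tag
# scan.release("targeted-advertising")    # would raise PermissionError
```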


That's a really interesting intersection between the issue of data and consent, in particular consenting to give over your data, and again this question about AI and its competency, its ability, presumably in future and indeed now, to gather data that we might not necessarily consent to giving. I think there's a kind of cumulative problem there that we haven't cracked, which is that we seem to think as a society that data ethics is governed by this principle of consent, meaningful consent. There's a lot of discussion about ethics as being, well, I consent to give my data away, and a lot of debate about consent no longer being meaningful, but maybe we need to really rethink this principle of consent as the basis on which we give our data away, particularly if we never really consent to giving our data in a meaningful way. You can imagine AI systems in the future that gather a huge amount of information about you: imagine I'm walking through a park every single day; there could be a system there that gathers a lot of data just by being around me, and is able to act and respond on it, but actually I've never consented to give it.

And given to your phone, this is already happening. The other day I went to a hotel, using maps software, and seven days later I was asked by the phone to give a review, and I had never actually said to my phone, ooh, you know, I'm really happy that you know I'm staying at such-and-such a place. Yet it asked me for information. We're seeing that now, and that could potentially be taken to quite extreme limits in future.


We do need to end on time, but before that, I think the last two questions come together for me rather interestingly. How will religious communities respond to AI? Well, communities of faith are not separate bubbles; they're everywhere. There will be people with faith, Christians, Muslims, all sorts, doing AI stuff at the cutting edge. I expect they may not make a big deal of it, but they'll be there. So I expect that we'll deal with this just like pretty much everyone else does, although, given that most religious communities in the West have a slightly higher age profile than the population as a whole, we'll be slightly late adopters. And just as the Church of England is using digital in getting its message out, we will use AI in so far as it works. But what is it about being religious, what is the distinction? I think this is where it gets really quite interesting, and this bears on the second question, about 50 years from now. I think one of the reasons why religious communities in general are standing out more as odd and non-mainstream is that most religious traditions have serious problems with the atomised individualisation of society. Being religious means seeing yourself as part of a historical community that's been around for quite a long time, and living it.


It's not just propositional knowledge; it's how you live as a person with faith, and that, I think, means relationships are privileged, community is a privileged concept. We see a lot of social trends undermining that, and that's part of the marginalisation of religion in the West. Now, the key question about AI is: is it going to exacerbate that trend, or can it be used to challenge it? If we see AI driving more and more wedges between people, their interaction, and their sense of belonging to one another, there could be, and I don't say I predict it, but there could be not only a religious pushback but a pushback from all sorts of people who find the erosion of community deeply threatening. If, on the other hand, AI can do things that lead to healing the sick, strengthening community, making us actually work better together, I can see religious communities embracing it gladly. So I think, again, it takes us back to where we've been for a lot of this evening: the political, economic, historical context is absolutely crucial to how this pans out.

You said 50 years in the future; I don't think this question is possible to answer. Think about it: 25 years ago we did not have the internet; 25 years ago we did not have mobile phones, definitely not smartphones.


But the internet is really important: look how much it connected us with each other, how much disruption it introduced. Just five years ago we had the rise, again, of deep learning. I mean, neural networks were introduced back around 1970, but only now do we have the processing machines that can actually run them, and hence things are absolutely unpredictable. I proposed a project three years ago, and I have had to redefine it because the methodology has completely changed. So it's really important to understand that the speed, the acceleration, of the technology is phenomenal, and we have no clue where we will really be. However, it's really important, in my opinion, to talk about these kinds of issues and to try to regulate certain things, like, for example, who will be responsible if the driverless car kills somebody. These kinds of things we have to start thinking about. What are we going to do if we have no countries, but we have companies that own us? Is that a good thing? These are the kinds of things that we have to think about. But really predicting 50 years out? I don't think it's possible.

I don't think it's possible either, because, as cool as all the stuff we've been talking about is, I don't think it's that much bigger a deal than writing was, or than telephones were: telephones, telegraphs, rail, right?


And that is not to say those weren't really big deals. But I think there are two fundamental problems. One is sustainability: how do you live on a planet, how do you live in a space? And the other is this inequality issue you've been talking about: how do you distribute things between people? Those are the issues we have to deal with, well or badly. I think right now the big issue is that there are people who want things to be more unequal, so that they have even more power, and they don't realise that that makes their position even more fragile as well. That's what happened in America after World War One and the crash of '29: then even the most elite said, OK, we've got to fix this. Unfortunately Europe had been left in such a bad state after World War One that it was only after World War Two that they did things like outlaw the extraction of wealth from countries. In 1945 they sat down and made illegal things that are happening now, when people transfer wealth out of countries, which is one of the things that keeps Greece, Afghanistan and Russia where they are. So those are the kinds of challenges: how we handle this in the next few years, and whether we damp down the inequality without wars, that's the big question. But I think in 50 years we'll have got used to AI.


used to writing and you know writingused to be seen as magic and witchcraft and you know like you're actually at adistance that was really freaked out - it will be used to itfifty years and we'll be back to the basic human problems again we're gonnabe kicked out very very soon but andrew will remove if you don't add anythingmore on the the fiftieth the efficient please do i think that's a splendid cometo rest on i think that's um i certainly have great optimism along with mattridley a writer that i greatly admire i have optimism for the kind of ingenuityof humankind and you know we have many problems and a society to face but ithink ai and actually the use of data i


think that we haven't emphasized quiteenough as a means of making good decisions and these are fantastic toolsthat we will use to craft very innovative solutions to the the problemsthat we face for example sustainability i think we just need to understandbetter this sense of common good if it did what does that look like and part ofour challenge with ai is that we have never really managed to forge that andwe have found it particularly difficult in recent years to forge that and so aiis developing rapidly and not only do we now need to forge that sense of commongoods or what i get michael stan will called the sites for the cultivation ofa common citizenship we also need to


We need to understand and interpret how they apply to the changes that are happening in the interaction between technology and citizens. So I think sites for the cultivation of common citizenship have to be created, but more generally we've got a huge challenge, which is understanding how they apply to the effects of AI itself. I actually think spaces like this one are really crucial; we need many more events like it if we are going to realise that vision. We don't have enough, but I think it's fantastic that we've made a good start.

We absolutely are, on the dot, about to be kicked out. So thank you to you all for coming along on a Friday night, and thanks very much to the panel; I'm sure they'll be around if you want to grab them.

