Dr. Vint Cerf on AI
Until technology finds a way to trace their origins, one way to tackle the dangers of deepfakes could be to teach people to question motives, according to Internet pioneer Vint Cerf. He suggested one thing to teach everybody: critical thinking. "Ask yourself, where did this come from? What's its purpose? Am I trying to be persuaded of something that I shouldn't be persuaded of?" People need to be taught that not everyone has "your best interest at heart".
Also see - Vint Cerf Infographic from Google [PDF]
Shortened transcript -
Q: Everyone is talking about the future of AI, but let me ask you the big question that so many people have on this. From a technology standpoint, are we getting to a point where AI systems could become self-aware?
A: Well, I hope the answer is no. I hope the answer is that we will understand how these things work sufficiently to manage their peculiarities. I'm sure that you've heard this term before, but the large language models are sometimes referred to as hallucinating models because they conflate factual information and produce counterfactual output. Nonetheless, they have also shown a capacity to do things that surprise many of us, including writing software. Of course, any piece of software that a large language model writes I would view with great suspicion and analyze very carefully before I put it to work, but I would say that we are at the beginning of understanding how these systems can be used. I would also like to point out that they are a manifestation of a more general notion of machine learning, which is something that has evolved pretty dramatically. Remember, artificial intelligence has been a subject of research since the 1960s, but the more modern version of it is called neural networks, with many, many layers, and those systems have shown a remarkable ability to perform tasks that humans might not be able to do. We have some that are helping to keep our data centers cool using machine learning methods. Another example is the folding of proteins that can be generated by DNA interpretation; a library of 200 million of those proteins has been computed using machine learning. So there are tremendously powerful and useful things that can be done, and we're exploring them.
Q: What about the conversations which...we've seen engineers, for example, working on AI systems have with machines...some of the answers that we've seen indicate what some would say is empathy, understanding and an ability to self-recognize...or are we getting ahead of ourselves?
A: No, not really. Let me agree with you that we may be getting ahead of ourselves, in the sense that we are putting these things to work without considering all of the consequences. But with regard to what you just said...you know, the sort of apparent empathy and other understanding of the exchange...this is only the verisimilitude of human discourse. Now why do I say that? Remember that the way these large language models are assembled is to ingest huge amounts of text, literally billions of lines of text, and then to generate output word by word from the resulting model. What you're seeing, or possibly even hearing, because text-to-speech is also feasible, is essentially assembled out of the content that the system has ingested. So you're seeing a reflection, a manifestation, of what a human would say or type that sounds empathetic, but it is a construct...it's an artificial construct, not understanding. It is simply reflecting in text form what it has ingested, and so this can be quite misleading because the text can be beautifully well formed. It's astonishing: sometimes you can ask one of these large language models to produce a text in iambic pentameter...you know, Shakespeare, because the system has ingested all of Shakespeare's plays. This is very surprising, especially for...you know...the lay person like you and me, but even to the scientists that are putting these systems together. It should not be conflated with real human intelligence.
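To make the "word by word from the resulting model" idea concrete, here is a minimal, hypothetical sketch: a toy bigram table built from a few lines of "ingested" text, sampled one word at a time. It illustrates the general idea Cerf describes and is not how production large language models are actually built or trained.

```python
import random
from collections import defaultdict

# Toy "ingestion": record which word tends to follow which in a tiny corpus.
corpus = "the model reads text and the model writes text one word at a time".split()
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, length=8):
    """Emit output word by word, each choice based only on the previous word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:      # no observed continuation; stop early
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

The output is stitched together entirely from patterns in the ingested text, which is the point Cerf is making about apparent empathy being a construct rather than understanding.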
Q: You're hardly the lay person, Vint, but that said, is there a real fear that without global regulation the pace of AI development can result in unequal access at best and potential weaponization in a worst-case situation?
A: Well, there you covered quite a bit of territory with that one sentence, so let me unpack it. First of all, I think that it is correct to be concerned about what one could do with these large language models, and the reason for that is that we have increased our ability to interact with computer-based systems using, you know, speech as well as text over the past decade or so. These systems, let me use the word "understand", or maybe the better word is "recognize", speech, and therefore can interact with us; they can generate speech as well. So we get into this very interesting situation where appliances become manageable not by knobs and buttons but by simple vocal interactions. Why do I care about that? Well, the large language model is capable of generating text or sound or speech and could interact with those devices, or it could interact with other programs that have what we call application programming interfaces that will accept commands to do things. For example, Google has a product called Google Home, and it has the ability to take voice commands and control the lighting, turn on your television or turn on the security system and so on. So you can imagine a large language model interacting with this Internet of Things, possibly deciding to do things that you wouldn't want it to do. For example, a person walks up to the front door, it's not anybody that anyone knows, maybe it's a robber, and it says "open the door"...you don't want the system to respond to a command like that unless it knows who is asking it to do that. So there are a bunch of important edge cases and nuances that programmers should pay attention to. These are the kinds of things that worry people like me and others about using these things without considering a lot of the consequences.
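As a hedged illustration of the "don't open the door for an unknown voice" edge case, here is a hypothetical sketch of a command handler that refuses sensitive actions unless the speaker has been identified. The command names, speaker list and function are made up for illustration; this is not how Google Home or any real smart-home product actually works.

```python
# Hypothetical sketch: gate sensitive smart-home commands on speaker identity.
SENSITIVE_COMMANDS = {"unlock_front_door", "disarm_security_system"}
KNOWN_SPEAKERS = {"alice", "bob"}  # enrolled household members (illustrative)

def handle_command(command: str, speaker_id: str | None) -> str:
    """Execute a voice command only if it is safe for this (possibly unknown) speaker."""
    if command in SENSITIVE_COMMANDS:
        if speaker_id not in KNOWN_SPEAKERS:
            return f"refused: '{command}' requires a recognized household member"
        return f"executed: {command} (authorized for {speaker_id})"
    # Low-risk commands (lights, TV) can be more permissive.
    return f"executed: {command}"

print(handle_command("turn_on_lights", None))
print(handle_command("unlock_front_door", None))      # the stranger at the door
print(handle_command("unlock_front_door", "alice"))
```

The design point is the one Cerf raises: the risk tier of the command, not just the words spoken, should decide whether the system acts.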
Q: What about concerns that some of the technology which is absolutely cutting edge is developed in one part of the world...countries or governments gain the benefits of that and aren't, for reasons of IP or otherwise, willing to share it with other parts of the world. So, the question on regulation and free access: how important is that in your mind?
A: Well, certainly at Google we are great believers in open source. The Android operating system is an example of that; so is the Chrome browser. So we're very interested in trying to make things broadly available so people can build on those technologies and make new products and services to benefit everyone. It is fair to say, though, that countries see various potential hazards; they worry about their citizens and they want to protect them, and so you hear a sort of background drumbeat asking for regulations. Even the technologists are saying that there ought to be some constraints on the way in which these technologies are used, especially if they might have high risk. So, as an example, you might use one of these large language models, chatbots in common parlance, for entertainment. I asked one to write a little story about an alien that invaded my wine cellar and was consuming my Cabernet Sauvignon to stay alive, and it wrote a very funny little story about how this alien got into my wine cellar that was perfectly entertaining and perfectly risk-free. On the other hand, if I were to ask one of these large language models, "I'm planning my retirement and I would like you to work out what my portfolio should look like"...that's probably a little more risky, and maybe we should not engage in using these systems for such things, or medical diagnosis or recommended treatment; those are high risk. I think we would want to grade the various applications by their risk and say don't do these things without further analysis.
Q: I must ask you about something that's making the news all over the world: Sam Altman returning to OpenAI after leaving OpenAI to join Microsoft. He's back right now. Was this a storm in a teacup? You know, we're talking about artificial intelligence and obviously his role in all of this is crucial, so your thoughts about what's transpired in the last couple of days and his role going forward in developing all of this?
A: Only in America! Yeah, actually this is really weird. The board fires the CEO, followed by several days of turmoil, followed by the board fires itself and hires a new board. I frankly was sending emails this morning saying...wait a minute...how in the heck did the board get fired? I'm not an insider in all of this, so I'm afraid I can't reveal something either amusing or important, but I just have to say, as a bystander, it's been quite a crazy thing. I think you'll notice that there are a number of crazy things that happen here in the US in the corporate world. In terms of the longer term for AI, though, I am still very positive about this whole technology.
I've seen some really astonishing things that we can achieve. For example, language translation is part of the large language model space, and it's astonishing how much that capability has improved, literally over the last four or five years. So these kinds of things empower people to do that which they could not do, or couldn't do as quickly, and I'm very positive about that. But I also recognize that with big, powerful tools you have to be cautious about how they get used. You have to think your way through how to prepare people to use these tools safely, just as you would any other power tool; you wouldn't want to turn a three-year-old loose with a chainsaw or something like that. So we want to be thoughtful about how we use these things.
Q: What sort of dangers do deepfakes represent, not just in terms of violating personal privacy, for example, but how dangerous can deepfakes conceivably be as a concept, as a notion, with the technology evolving going forward?
A: These are fantastic questions, by the way, so congratulations! I don't know that my answers are very good, but your questions are terrific.
So with regard to deepfakes, you can imagine the entertainment industry is going crazy, because this is an opportunity to create that which might otherwise be impossible, especially if an actor has died or is too old to perform a part that the entertainment industry would like them to perform. There's a recent Indiana Jones film that I think used various special techniques, not deepfakes, in order to make the main actor look a little younger, but that doesn't make him younger; it just makes him appear to be younger. So there is a concern, of course, among the actors and screenwriters and so on...you probably know about this long dispute that lasted for months and was recently settled...about the abusive use of these techniques inhibiting their ability to make a living. So from the purely entertainment point of view, and from the point of view of protecting intellectual property and privacy, deepfakes are a potential hazard. So we may need to do several things. On one side, I think we have to work our way through what the intellectual property protections should be. I'm not sure what they are, but I can understand an actor or an actress or a writer wanting to maintain their ability to make a living. At the same time, we can misunderstand that which we are seeing: these things could be made to say things that we didn't say, right? This interview, for example, could probably be repurposed using suitable technology to have questions and answers that you didn't ask and I didn't answer, and then what would we say about that? So the question that we have...that I have, anyway, as a technologist...is how do we make the origin of these systems, the provenance of these products, visible? How do we make parties accountable for creating these things? I wish I had a better answer for you.
At Google, we have developed a technique for fingerprinting songs, for example, so if they're inappropriately uploaded to YouTube we can detect that and block them. We might need to do a similar kind of thing with video, for example, or fixed imagery. We as a society, I think, are getting to the point where we have tools that can mislead in ways that are hazardous, and we need to figure out how we hold parties accountable for bad behavior and how we identify the origins of things so you know whether to accept them or not. I would suggest one thing we can teach everybody, and that's critical thinking: ask yourself, where did this come from? What's its purpose? Am I trying to be persuaded of something that, you know, I shouldn't be persuaded of? We should be thinking critically about what we see and hear.
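Google's actual audio-fingerprinting and matching system is not described in this interview, so what follows is only a rough, hypothetical sketch of the general idea: hash fixed-size chunks of a reference recording into an index, then flag an upload whose chunk hashes overlap heavily with that index. Real systems use perceptual features that survive re-encoding and editing, which this simple byte-hash version does not.

```python
import hashlib

CHUNK = 4096  # bytes per chunk (illustrative)

def fingerprint(data: bytes) -> set[str]:
    """Hash fixed-size chunks of a byte stream into a set of fingerprints."""
    return {
        hashlib.sha256(data[i:i + CHUNK]).hexdigest()
        for i in range(0, len(data), CHUNK)
    }

def looks_like_copy(upload: bytes, reference_index: set[str], threshold: float = 0.5) -> bool:
    """Flag an upload if most of its chunk fingerprints appear in the reference index."""
    prints = fingerprint(upload)
    if not prints:
        return False
    overlap = len(prints & reference_index) / len(prints)
    return overlap >= threshold

reference = b"some copyrighted recording bytes..." * 1000
index = fingerprint(reference)
print(looks_like_copy(reference[:20000], index))            # True: largely the same bytes
print(looks_like_copy(b"original content" * 1000, index))   # False: no matching chunks
```

The same fingerprint-and-match pattern is what would have to be generalized to video or still imagery, which is the harder problem Cerf points to.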
Q: Governing the internet is obviously a huge challenge. There is democracy as defined by internet companies like Twitter versus government regulations, which often come with takedown requests. Who's winning this battle, and do you believe that governments should be the ultimate arbiter of what stays on or goes?
A: Well, I have an interesting task undertaken at the request of the Secretary-General of the UN. I'm the Chairman of the Leadership Panel of the Internet Governance Forum, which has been meeting since 2006 to discuss that very thing: how should we govern the internet? And the conversation continues. The next meeting will be in Riyadh in 2024; the last one was in Kyoto, Japan.
So the answer to your question is that we need a multi-stakeholder view of what that governance should look like. The private sector, the individual, civil society, the government and the technical community should be getting together and asking what the properties of a properly governed internet should be, for everyone's benefit. There are discussions going on at this very moment about cybercrime treaties that could be adopted on an international basis in order to identify badly behaving parties and to hold them accountable. So there is a lot of effort going on here on a collective basis. It's not just the government that should decide; we need a multi-stakeholder perspective in order to develop a framework for governance in a situation where these things are so potentially hazardous.
Q: Just a couple of days back, India's Prime Minister Narendra Modi made the point that, you know, if there are deepfakes then there needs to be a protocol established where a video, for example, or an image is identified as a fake, or one that's not real. From a technology standpoint, is that something that is viable, and can technology companies actually ensure that that happens?
A: Well, the answer is partly yes, because we have something called digital signatures. That's a way of taking digital content and doing a computation on it in such a way that we can identify where it came from, and someone cannot fake that digital signature. So there are techniques that could identify the origin of something...if the party is willing to assert that. This raises a really interesting question that you might want to go find out for yourself, and that is: how do we create a society where it is considered a sign of civic responsibility to identify the origins of content? Rather than having anonymity be a primary goal of the society, I think accountability and identifiability should be the goal. You should get a gold star, you know, for identifying yourself as the origin of something, and we don't quite have a society like that right now, but maybe that's where we need to head in order to encourage people to identify sources and origins. They can be proud of their deepfake as long as we know where it came from and the fact that it is a deepfake.
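A minimal sketch of the digital-signature idea Cerf describes, using the third-party `cryptography` Python package (my choice for illustration, not something named in the interview): the content creator signs the content with a private key, anyone can verify the origin with the published public key, and a tampered copy fails verification.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The content creator holds the private key; the public key is published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"This video was produced by Studio X on 2023-11-22."  # illustrative content
signature = private_key.sign(content)

# Anyone can check provenance against the published public key.
try:
    public_key.verify(signature, content)
    print("signature valid: content came from the key holder, unmodified")
except InvalidSignature:
    print("signature invalid")

# A doctored copy no longer verifies.
try:
    public_key.verify(signature, content + b" (edited)")
except InvalidSignature:
    print("tampered copy detected")
```

As Cerf notes, the mathematics only helps if the originating party is willing to sign; the social question of making that the norm is the harder part.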
Q: I must sort of roll back the decades to ask you this question. Now, you led the effort to develop and deploy MCI Mail, the first commercial email service. Did you at that time...because, as I understand it, you were a technologist first, you were an engineer first, you were working through various protocols and you loved all of that, but you had an idea as well...did you at that time foresee a scenario where email would dominate our lives in the way that it now does?
A: Well, first of all, MCI Mail was not the first commercial email service. There were others that preceded it, and there were other non-commercial services even before that. Electronic mail was invented in 1971, on a project that was funded by the Defense Advanced Research Projects Agency here in the US; a man named Ray Tomlinson, who recently passed away, was the creator of that electronic mail instance. We all used it because it was so convenient and it covered time zones...you could coordinate with people without both of you being awake at the same time, like we have to be on this call, so it was very exciting. I was part of the team that was using it back in the early 1970s and have been ever since. The MCI Mail system was a commercial email service that was turned on in 1983. It was the first commercial email service to be connected to the internet, and I am very proud of that interconnection because it led to all of the other commercial email services also getting connected to the internet, after which they discovered, to their surprise, that all of their customers who had been on these little isolated islands could suddenly talk to all their competitors' customers through the internet, and that was a big surprise. So at the time I was persuaded that email was going to be a very, very useful tool. It seems to be the rule of my life: I get up in the morning and I have a hundred emails waiting to be responded to, but it is so convenient to stay in touch with thousands of people. I have something like 14,000 people in my Rolodex, and all of them seem to send me email every day, so I still consider it one of the most useful tools invented. It's one of the most transformative technologies ever.
Q: I'm broadcasting to you from here in India...we've got some wonderful centers of excellence, the IITs, the Indian Institutes of Technology. You've been a keynote speaker with them in the past, and we've got some incredible IIT minds working on, for example, computing and AI and tech around the world, not least of all in the US. Could you tell us a little bit about the role of the IITs and the engineers and minds from these institutes that have made a real impact?
A: Well, first of all, you probably know our CEO at Google, Sundar Pichai, is from one of the IIT schools; many of his colleagues are as well, here at Google and elsewhere. I think Satya Nadella at Microsoft is probably another example. I have to say, from the American point of view, there's been this spectacular invasion of Indian talent which has risen to the top and is responsible for some of the most valuable companies in the world. What I find very interesting is that the transposition from India to the US has had a very interesting effect, because these great minds have been planted in very fertile soil here...especially in Silicon Valley, or up in the Seattle area, or in Cambridge, Massachusetts, or Austin, Texas, places where venture capital is available and people are willing to take risks. These great minds from India have taken advantage of that and done very, very well, not only for themselves but for their employees and investors. So, a tip of the hat to the IIT group, because they have produced such good talent and we have taken advantage of it here.
Q: You think very closely about a safe internet. We face a huge problem of internet scams in this country. Financial scams: people losing a lot of their hard-earned money. Are you really worried about the future of a safe internet, where there often seems to be a scammer around every corner?
A: This is especially true when it comes to disasters. For example, people are often very good-hearted; they want to be helpful in a disaster, and so the scammers come along and say "send money". This gets back to this question of authentication: strong authentication, strong identification of origins, of provenance. If you get a request for money or some other kind of help and it has an urgency built into it and everything else...the red flag of suspicion should be flying. You should be asking questions: where did this come from? Can I corroborate the request in any way? I think that we would like to make it hard for people to carry out those scams. You probably have heard the term fishing...but spelled "p-h-i-s-h-i-n-g"...people send these emails and try to get people to click on links that will either take them to a place they shouldn't be, maybe download malware, which makes it even worse, or maybe look like they are a legitimate party when they have just copied the web pages from the legitimate party and are taking your money someplace else. I think a healthy degree of suspicion is important.
I've been postulating that we should have an internet driver's license...you know how teenagers, at least here, are eager to get behind the wheel of the car...we make them take classes in school before they're allowed to take the keys to the car. Why don't we have an internet driver's license?...you have to take a class where you learn about what the hazards are and how to defend yourself...like defensive driving, we want defensive internet driving. I think we just need to teach our populations that there are other people that don't have your best interest at heart, and this is how they manifest that, and this is how you defend yourself.
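As a small, hypothetical illustration of one phishing pattern Cerf mentions above (a link that looks like a legitimate party but points somewhere else), here is a sketch that compares the domain shown to the reader with the domain the link actually targets. The function names are invented for this example, and real mail filters are far more sophisticated than this.

```python
from urllib.parse import urlparse

def domain_of(url: str) -> str:
    """Extract the host of a URL (simplified; no public-suffix handling)."""
    host = urlparse(url if "://" in url else "http://" + url).hostname or ""
    return host.lower().removeprefix("www.")

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag links whose visible text names one domain but whose target is another."""
    shown, actual = domain_of(display_text), domain_of(href)
    return bool(shown) and shown != actual

print(looks_suspicious("www.mybank.com", "https://mybank.example-login.net/verify"))  # True
print(looks_suspicious("www.mybank.com", "https://www.mybank.com/verify"))            # False
```

A check like this is exactly the kind of "defensive internet driving" habit, automated, that the driver's-license idea is trying to teach people to do by eye.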
Q: One of the most inspirational ideas to have come about in India over the last few years has been to develop a digital backbone, and we have it everywhere. I mean, on my phone I've got something called DigiLocker where I've got...and it's protected, it's encrypted, so it's safe...a lot of my details: my car insurance, my medical insurance, my national identity, driver's license, etc. So much of commerce, digital commerce, in India, which benefits people in the smallest villages, actually comes through their phones and through apps and through this digital backbone. Do you see a real link between economic growth and a digital backbone? Can digital transformation change the lives of countless people?
A: Indeed. You know, I have one of these things too (shows his smartphone), and it feels like it's in charge of me a lot of the time. The one thing I do worry about, though, is if it doesn't work for some reason...if I don't have connectivity, or the battery is dead, or I'm in a place where, you know, it's not usable...all kinds of things could happen, you know...cascade failures. If you try to get logged into your bank account and the bank says, "I need to talk to your mobile phone to confirm who you are", but the mobile phone isn't available, then you can't complete the transaction, and then other side effects will happen. So we're awfully dependent on these things (phones), and I would like to see a broader range of devices, including laptops and pads, that could be backups for what we do on the mobile phone. I do believe that the apps on the mobile and the connectivity that it confers are extremely beneficial...we don't have the connectivity that we should have everywhere in this country. In the US, we're spending $42 billion to push internet access out into the rural parts of the country, and I'm sure that Mr Modi is also seeking to make the internet and communications available everywhere...a big challenge indeed for India as well as the US. I do worry about our dependence on these things, and so your question should be posed not only to me in this conversation but to others who have a responsibility for creating a safer and more secure environment, so people can take advantage, as you suggest, of the economic potential.
Q: A final question...and it's sort of linked to the word connectivity, which you just mentioned...from the beginning of the internet, you know, and phone modems and connections through phone modems, to internet speeds now which are much faster...5G is the universe many of us operate on presently, and then 6G is just around the corner. What would that mean for us?
A: Well...6G has a really interesting architecture, because it includes what's called Mobile Edge Computing. What does that all mean? Well, imagine that you have your laptop and you have your mobile and you have cloud services that are distant, somewhere in the world...the idea of putting some computing power between you and the cloud is actually quite helpful because it could reduce latency. So if you're doing an application that requires rapid response, putting computing power between you and the cloud system can introduce a low-latency component, which could be quite helpful. So 6G has that architecture, and as that gets rolled out, it may turn out to be a very useful part of the low-latency space. So we're very excited about watching that happen and participating in the development of applications that can use this intermediate computing capability.
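To illustrate the low-latency argument with rough numbers (all figures below are assumptions chosen for illustration, not measurements from any 6G deployment), here is a short sketch comparing a round trip to a nearby edge node with a round trip to a distant cloud region for an application that needs a fast response.

```python
# Illustrative latency budget: every figure here is an assumption, not a measurement.
EDGE_RTT_MS = 5     # assumed round trip to a nearby mobile-edge node
CLOUD_RTT_MS = 80   # assumed round trip to a distant cloud region
COMPUTE_MS = 10     # assumed processing time, the same work either way

def response_time(rtt_ms: float, compute_ms: float = COMPUTE_MS) -> float:
    """Total time the user waits: network round trip plus processing."""
    return rtt_ms + compute_ms

BUDGET_MS = 50  # e.g. a hypothetical interactive control application
for name, rtt in [("edge", EDGE_RTT_MS), ("cloud", CLOUD_RTT_MS)]:
    total = response_time(rtt)
    verdict = "within" if total <= BUDGET_MS else "over"
    print(f"{name}: {total} ms total -> {verdict} a {BUDGET_MS} ms budget")
```

With numbers like these, only the edge path fits the interactive budget, which is the point of placing compute between the user and the cloud.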
Q: Well, wonderful speaking to you, Vint, internet pioneer and a hero for so many of us who have a bit of an interest in technology. Great speaking to you, and I would love to be that alien that raids your wine cellar sometime in the future. Thank you very much for being with us.
A: Thank you so much. I hope the next time it's in person. India is one of my favorite countries and my wife loves Indian art and furniture...so we have a household full of that...so next time in person in New Delhi...absolutely.