
GTC EUROPE OPENING KEYNOTE 2016

Watch and relive NVIDIA CEO Jensen Huang’s first day keynote. Press play on the GTC Europe 2016 Keynote video and explore the topic tags below.

Intelligent Voice automatically transcribes audio and video to text at ultra-high speed using NVIDIA GPU technology. Intelligent Voice’s patent-pending JumpTo™ system provides instant insight into, and access to, the content that you want to find.

Please note, we will be adding GTC Europe recorded sessions to this library soon, bringing you a better understanding of how Artificial Intelligence and Deep Learning can be applied across industries, ranging from medical to retail, life sciences and autonomous vehicles. Please subscribe to our GTC Europe newsletter to be notified when new sessions are available.

JumpTo™ Topics

High Performance Computing (2, 3)
Artificial Intelligence (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21)
Computing Technology (2)
Sensory Information (2)
Autonomous Vehicle (2)
Particle Physics (2)
Exponential Growth (2)
Speech Recognition (2, 3, 4, 5, 6, 7, 8, 9)
Computing Platform (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
Image Recognition (2, 3, 4, 5, 6)
Computer Graphics (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13)
Three Dimensional (2, 3, 4)
Startup Companies (2)
Construction Site (2)
Learning Platform (2, 3, 4)
Energy Efficiency (2)
Consumer Services (2)
Software Company (2)
Computer Science (2)
Hundred Thousand (2, 3, 4, 5)
Next Generation (2, 3)
High Definition (2, 3, 4)
Computer Vision (2, 3, 4, 5, 6, 7)
Virtual Reality (2, 3, 4, 5)
Steering Wheel (2)
Single Version (2)
Data Structure (2)
Twenty Twelve (2, 3, 4)
United States (2)
GPU Computing (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21)
Three Pillars (2)
Million Miles (2)
Open Platform (2, 3, 4)
Car Companies (2)
The Question (2, 3, 4, 5, 6, 7)
Number Three (2)
Car Platform (2)
Internet Era (2, 3)
Eiffel Tower (2, 3, 4, 5)
Live Video (2, 3, 4, 5, 6, 7, 8)
This Network (2, 3, 4, 5)
Data Centers (2, 3, 4, 5, 6, 7, 8)
World Around (2)
In Real Time (2)
That’s Right (2)
A Little Bit (2, 3, 4, 5, 6, 7, 8, 9)
Single Time (2, 3)
Driving Car (2, 3, 4)
Twenty Five (2, 3, 4)
Years Later (2)
Near Future (2)
Mini Cooper (2)
Forty Years (2)
Three Times (2, 3)
Jeff Hinton (2)
The Reason (2, 3, 4, 5, 6, 7, 8)

GTC EUROPE OPENING KEYNOTE 2016

Published on Sep 29, 2016

Jensen Huang, CEO and co-founder of NVIDIA, speaks about the future of Deep Learning, Artificial Intelligence and Autonomous Vehicles at Europe’s first GPU Technology Conference. (2:02:05)


gtc_2016_keynote_audio.wav (2:01:34)


Voiceover: From a single origin, a unique point in space and time, this is the spark of innovation, fueling your most amazing breakthroughs with the power of AI. It’s a passion for discovery that unveiled the genesis of all that exists in the universe. Today, deep learning is helping farmers feed the world, and marine biologists save our most precious resources by analyzing in one month what used to take ten years. Everyday devices translate even the most complex languages from voice into text, and images into words. AI is now present, helping the visually impaired recognize an old friend, or letting a blind woman read to her child for the first time. Autonomous vehicles give us the freedom to reimagine our city streets and deliver relief to those who need it most, even under the harshest conditions. Robots tap into the power of deep learning to separate trash from treasure, and give us amazing new ways to explore other planets. And today, a twenty-five-hundred-year-old game meets its match as a computer competes with one of the greatest human champions of all time, and wins. GPU deep learning is the breakthrough that sparked this AI revolution, and fuels your most amazing discoveries yet to come.
Announcer: Ladies and gentlemen, please welcome NVIDIA co-founder and CEO Jen-Hsun Huang.
Jen-Hsun Huang: Thank you. Welcome to GTC. GTC is about GPU computing, a form of computing we invented ten years ago. GPU computing has come a long way in just the last ten years. It has enabled amazing new applications, solved problems that were impossible before, and now it’s in the process of completely revolutionizing industries. GPU computing is a specialized form of computing. It solves problems and does things that normal forms of computing simply cannot. We’ve got some pretty exciting things to show you today, so let’s get started. Well, GPU computing, this thing that we’ve been working on for ten years, is at the beginning of something very, very important. A brand new revolution: what people call the AI revolution, the beginning of the fourth industrial revolution. However you describe it, we think something really, really big is around the corner. About twenty years ago, in nineteen ninety-five, the PC Internet revolution started. Several things came together that made the PC Internet era really exciting: the availability of the microprocessor, the CPU, a standardized operating system, and a standard document exchange system that made it possible for us to share information all over the world. The PC Internet era put computers in the hands of a billion people. Ten years later, in two thousand and six, two simultaneous things happened. The mobile-cloud revolution was started by the iPhone and Amazon AWS; for some reason it happened about ten years after the PC Internet era. All of the combined, cumulative innovation that had been built up in the industry made it possible to put computing technology in the hands of nearly three billion people and make computing capability available to you wherever you are: from a computer in every home to a computer in every hand. Well, it is now ten years after that, and we’re at the beginning of something very, very big we call the AI revolution. In this new era of computing, something pretty amazing happens. Software writes software. Machines learn. And soon machines will build machines. In this new era of computing, the type of software that’s written by the computer is impossible for humans to write, and it’s therefore able to solve problems that we’ve never imagined before. In each era of computing, a new computing platform was developed, and that new computing platform made it possible for these new capabilities to emerge. The CPU and the standardized operating system called Windows; then the ARM low-power SoC, the Android operating system, and the cloud platform made the second era possible; and now, in the third, a brand new type of processor is necessary to make this type of software development possible. That puts NVIDIA, puts GPU computing, squarely in the center of this revolution. It happened in twenty twelve. The Big Bang of GPU deep learning was twenty twelve, even though great work had been done in deep learning before that; in fact, Jürgen Schmidhuber’s lab, the Swiss AI lab, had already started to work with GPUs, with deep learning. It wasn’t until twenty twelve that the Big Bang happened, and it’s part serendipity, part destiny.
The serendipitous part, of course, is that the researcher in this case, Alex Krizhevsky, working in the laboratory of Geoff Hinton, was trying to develop a new type of deep learning network, an incredibly deep neural net. A neural net, as you know, is in form inspired by the human brain and has the ability to learn features from very complicated data by itself, and it does it, in the case of deep learning, hierarchically, meaning that if you were trying to recognize a human, it might detect edges first from the image; after that it might detect small features: eyebrows, eyelashes, pupils, nose, ears; and from that it learns that this is a human head, and from that it recognizes that this is a human. And it has the ability to generalize incredibly well, generalize meaning that although it learns from just a few examples, and those few examples could be thousands of examples, it is able to generalize that all of you are humans. And so the ability to learn features hierarchically and to generalize, representation learning, was a very powerful idea. It had one enormous handicap. The idea had been around for several decades, in fact, but the one enormous handicap it had is that it required a large number of examples to learn from; it required an enormous amount of data for the software to be written. That handicap of being computationally exhausting, that handicap of requiring massive computers for the software to be written so that it could be useful, lasted two decades. A handicap that lasted two decades, and then one day, because we had caused our GPU to become general purpose, Alex Krizhevsky was able to discover our GPU and develop a deep neural net on that GPU. Serendipity met destiny, and in twenty twelve he wrote a paper, a milestone paper, and this milestone paper chronicled and described his deep neural network, and he submitted it to a competition. And this young man, who had little experience in computer vision, created a neural net that was able to recognize images at large scale, learning from one and a half million images, and, put into a contest of recognizing over a hundred thousand images, it won. This deep neural net, software that learned by itself on GPUs, won the contest, and it beat every computer vision expert and every hand-engineered computer vision algorithm developed over decades. One young man’s paper, one neural network, the now-famous AlexNet, beat everybody. Unbelievable results. The results of his achievement rang through the industry. We have the benefit of working with companies and industry leaders and scientists all over the world, and the results of that paper, the results of that singular achievement, are probably the most exciting moment in computer science that we’ve experienced in the last twenty-five, thirty years. And the reason for that is that the achievement in itself is significant, but the extrapolation of the achievement is daunting: what it means now, what it means to computer science, what it means to computer programming, what it means to the computer industry, what it means to all the problems that we’re trying to solve. How is it possible that a piece of software learns by itself, creates such amazing results, and beats every human-engineered algorithm that has been developed? Well, the stage for the AI revolution has been set.
Since then, in just the last four years, there’s not one week that goes by where some deep learning paper hasn’t been produced, some lab has new groundbreaking results in deep learning, some company has been founded, some new breakthrough has been achieved. There are three very important milestones in the last four years that I want to highlight. The first is a collaboration between NVIDIA Research and the AI lab at Stanford. Andrew Ng, a world-famous pioneering AI researcher, worked with our laboratory to create essentially a large-scale GPU deep learning system that has the ability to simulate enormously large brains. We can now allow computers to write software for very large problems, and the reason that’s important is because we don’t want AI to be a toy. We want it to solve real problems, and real problems are large, and we need large computers that are incredibly scalable so that we can train enormous models. Enormous models. That singular breakthrough, that paper, has put GPUs in the hands of literally every serious researcher and every serious software company to solve very serious problems. A big breakthrough in the year twenty twelve, and it turbocharged literally everything since then. Some other achievements: this is ImageNet, the competition that happens every single year, and it has just completed yet again in twenty sixteen, and the new winner is deep learning again, based on GPUs. Notice first of all the discontinuity between the last black dot and the first green dot. The black dots are human-engineered, expert-engineered computer vision algorithms; expert engineering was able to achieve seventy-four percent and stayed around the low seventies for quite a long time, in fact. And then of course deep learning came along, AlexNet came along, and we took a big jump. Since then, the models get larger, the architectures of these networks become more complicated, the computational intensity of the networks continues to grow, and one day last year we achieved superhuman levels. I am pretty certain not one of us in the audience today has the ability to beat this deep neural net at large-scale image recognition, and it’s very, very likely that even all of us together, working as one team, could not beat this network at large-scale image recognition. Image recognition using deep neural nets has achieved superhuman levels. It was deep learning, too, that inspired us at NVIDIA to apply this technology to many of the things I want to talk to you about, and in fact one of the areas of such great importance is, of course, autonomous vehicles, self-driving cars. It is inconceivable to us, anyway, that we could achieve the level of safety and the level of capability of self-driving cars using traditional computer vision approaches to object detection, and finally now we have, if you will, Thor’s hammer, this incredible magical hammer that fell from the sky to help us solve this great challenge: image recognition at superhuman levels. Just a couple of weeks ago, our friends at Microsoft. X.D.
Huang is Microsoft’s chief speech scientist. Speech recognition, as you guys know, is one of the most researched areas in artificial intelligence, and the reason for that is because if we can understand speech, we can read and understand language, we can learn. The ability to understand speech will not only change how people interact with computers, it will also change what computers can do. Deep learning has recently made enormous achievements here. This is from a paper that X.D. published several years ago, and those who worked at the dawn of speech recognition are some of the finest artificial intelligence researchers that we know in the world. Geoff Hinton has made enormous contributions in this area; Li Deng of Microsoft; X.D. Huang, of course; and here in Europe, one of the most significant AI researchers has made enormous contributions to deep learning used for speech recognition: Jürgen Schmidhuber’s laboratory in Switzerland has done amazing pioneering work. They were really, quite frankly, the first to use deep learning with long short-term memory as a force for understanding, for improving speech recognition. Just a few weeks ago, Microsoft announced that after all of these decades they have achieved quite a significant breakthrough: a six point three percent word error rate. Speech recognition is really hard, for some very obvious reasons. For example, vocabulary: the larger the vocabulary, the higher the word error rate. Spontaneous versus read speech: you know, when we’re talking, we have lots of ahs and umms. Speech is hard because everybody talks in different ways, and, as it turns out, there’s a whole bunch of letters, you know, the B’s, C’s, D’s, that sound very similar to computers; the E’s, V’s, P’s, Z’s, a bunch of E-sounds. So the English language, as it turns out, is relatively hard for computers to understand; it’s fairly hard for us to understand. And so speech recognition is something of great difficulty, not to mention surrounding environments, where you’re inside a car, inside a bar, inside a train station, inside a pub here in Amsterdam where everybody’s talking. All kinds of different environments create enormous complexities for speech recognition. We have now achieved six point three percent; by the way, humans don’t achieve zero percent, and that suggests that these computers, with deep learning, have achieved quite significant levels of capability. Well, these three achievements are highlighted for a particular reason. We now have the ability to simulate very large brains. We now have the ability to recognize images: that is computer sight. And we now have the ability to recognize speech: that is a computer’s ability to understand what we say. Sight and sound, and the ability to learn. The ability to perceive and the ability to learn are the foundations of artificial intelligence. That is the reason why the world has become so excited about AI: we now have the three pillars necessary to solve very large-scale artificial intelligence. Lots of research is happening in this area, and it has really shaped the industry that we know today, and it surely has shaped us.
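A note on the metric behind the six point three percent figure: word error rate (WER) is the word-level edit distance between the reference transcript and the recognizer’s output, divided by the reference length. A minimal illustrative sketch, not from the talk:

```python
# Hedged sketch: word error rate (WER), the metric behind the 6.3% figure.
# WER = (substitutions + insertions + deletions) / words in the reference,
# computed here with a standard edit-distance dynamic program.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("speech recognition is really hard",
                      "speech recognition is real hard"))  # 0.2 (1 error in 5 words)
```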
NVIDIA, you know, is a GPU company. We invented the GPU, and ten years ago we invented GPU computing. We’re basically in three fields of endeavor that are unified by one concept. Our first field of endeavor is high performance computing. We created GPU computing as a specialized form of computing, and it solves problems that normal forms of computing can’t. If you want to simulate weather, if you want to create virtual reality, if you want to simulate the brain, this style of computing, this way of computing, is what made that possible. Almost every brand-new supercomputer in the world is accelerated. We started that trend; we represent about seventy percent of the world’s high performance computers that are accelerated, and I think that in the future every single high performance computer will be accelerated. The second area of endeavor for us is computer graphics, the simulation of virtual reality. This is obviously a very exciting area for us. We love computer graphics; it has fueled enormous amounts of innovation and R&D effort for our company. And in this area, visual computing, it’s a little bit like computing human imagination: we take what’s in your mind and we translate it into computer graphics so that all can enjoy it. And then there’s this new field that we found ourselves in, and that we have been propelling the last six years: artificial intelligence, or otherwise, computing human intelligence. Some people have said that we’ve become the AI computing company. Whether we’re computing human imagination or computing human intelligence, we’ve become the AI computing company, but I kind of still think that we are the fun computing company: we get to solve all of the world’s fun problems, and we’re, if you will, the computing-of-the-future company. In fact, most of the work that we are doing is leading to a pretty exciting future. In the movie Iron Man, Tony Stark is interacting with holographic computer graphics floating in front of him. He has Jarvis in the ambient; Jarvis is helping him fetch information; in fact, Jarvis, as you know, is rendering that computer graphics. Jarvis is talking to him, answering questions, collaborating with Tony Stark. And one of my favorite parts is when Tony Stark puts his hands into the Iron Man suit that he was designing and interacts with it for the first time, merging simulation, virtual reality, augmented reality, and of course powered by artificial intelligence. If you will, that one scene captures what NVIDIA is working on. This is the future that we’re trying to create, and we’re super excited about it, and as we come to GTC every year, we’re going to take giant leaps toward that future. This is a pretty exciting time. I think all of the pieces are starting to come together, and GPU computing is at its core. GPU computing is what GTC is about, and if there’s any doubt whatsoever that GPU computing is becoming more important than ever and more central to one industry after another, this chart should surely change your mind. The number of attendees has grown, and we are requested to take GTC to just about every country in the world these days; we are on a world GTC tour for the first time in the history of our company, after ten years. Developers all over the world
have asked us to go to literally every single country, because GPU computing is now used in every single country; GPU computing is touching every single industry. There’s not one software company that I know of today that is not using GPU computing, either a little or a lot. GPU computing is at the core of computing as we know it today. The number of developers has grown tremendously, and it has grown three times, not in ten years; it has grown three times in two years. Three times in two years; I believe that’s exponential growth. It’s incredible: a hundred and twenty thousand to four hundred thousand. But this one’s just shocking: the number of deep learning developers has grown twenty-five times in two years. It’s probably doubled since I started traveling. It’s absolutely incredible, the number of deep learning developers, and they’re touching just about every single industry. And so the question is: why? Why have AI researchers all over the world discovered the GPU? Now, I’m not going to offer you a scientific reason for it; this is a little bit of a cartoon reason, but I think it might inspire you; it might give us maybe a little bit of understanding for why it is that AI researchers all over the world have adopted the GPU. Suppose I were to ask the audience to think. I would like to now ask you to think about an iconic image in Europe. I would like you to think about the Eiffel Tower; I think it’s a fairly good choice for everybody. Let’s think about the Eiffel Tower. Now, as it turns out, when I ask you to think about the Eiffel Tower, it’s very likely that most of you created a mental image of the Eiffel Tower. Your brain performed computer graphics. If I said think about Ferraris, it is very likely your brain performed computer graphics; some probably chose the 458, maybe some chose the 430, maybe some chose LaFerrari, who knows, but it’s probably red. Your brain performs computer graphics, and your brain performs computer graphics in color, when you think. It’s also very likely because our GPU is designed like a brain. Your brain, as you know, is not one super processor; your brain is a whole bunch of neurons, billions of neurons, connected by tens of thousands of synapses each. Each one of these neurons doesn’t perform much work, but together it’s able to think; together it’s able to achieve something that only we can achieve, something that trillions of dollars of R&D in the computer industry has not yet accomplished. The GPU is maybe a little bit like a brain: we have a whole bunch of processors, thousands of processors, working in parallel to solve a problem. Thousands and thousands of processors; in the case of a supercomputer, the largest supercomputer in the United States is powered by NVIDIA Tesla and has sixteen or so, almost eighteen thousand, GPUs, and those GPUs have thousands of processors inside; altogether, about thirty-six million processors are working together to solve a problem. Our GPU computing approach is a little bit like a brain, and so maybe those two reasons inspire us, give us some evidence of why researchers all over the world have jumped onto GPU computing as the fundamental processing approach for AI advancement.
Well, GPU deep learning is a new computing model. Now, before I go on to tell you about the products that we’re going to announce today and the new initiatives and our new partners, let me first describe why this new approach is different. In computing as we’ve known it, engineers sit in front of computers, usually Visual C++, developing essentially recipes, incredibly complicated recipes that are followed step by step by step, written by engineers. What the engineers wrote is what it does; what was written is what it does. And when you’re done, you compile it, you test it to make sure that it performs according to your expectations, and then you release it to the world. Software engineers write the software, QA engineers test the software, and we release the software into production, and the software does exactly what we expect, what we wrote it to do. If there’s a bug, we eventually find it, we fix it, we test it, we release it; we find a bug, we fix it; back into that loop. GPU deep learning is a little bit different. There are several different elements of GPU deep learning. The first part is training. It’s about this deep neural net learning from an enormous amount of data; it’s learning from digital experience, which is what data is, and because the world has an abundance of data today, we have an abundant amount of experience to train the neural net with. This is the computationally intensive part of deep learning; it’s incredible, and I’ll illustrate some of it in a second. The output of that is a deep neural net, and then you infer: you now apply that network to infer, and then you have intelligent devices. Now let me go around the loop one more time. In the case of training, we have billions of trillions of operations; that’s a fairly large number, and that’s one of the reasons why it takes so long to train a network. But what you have done is train large models, and your goal is to accelerate your time to market. You’ve created a network, and this network is a neural net with hundreds of what are called hidden layers, meaning layers on layers and layers, and as a result we can generalize; we can learn and generalize representations that are based on hierarchies of features. There could be edges, eyebrows, eyes, head, body, human; all of this data underneath could eventually be abstracted and represented with a vector, a piece of information that says ‘human.’ Lots and lots of raw data: human. The ultimate form of compression. Lots and lots of images; output feature representation: human. Amazing, amazing results. Our brain has the ability to do that: to take raw information and somehow extract from it the essential pieces of data, the essential nuances, so that we can abstract that data into a higher-level representation called ‘human.’ We put that network in data centers all over the world; these are hyperscale data centers populating all over the world, so every time you make a query, say ‘find an image of a human,’ it is very, very likely, with near certainty now, that it goes through an artificial intelligence network.
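To make the contrast with hand-written software concrete, here is a minimal, illustrative sketch of the two phases he walks through (toy data and sizes, not NVIDIA’s code): a training loop that writes the ‘software’ (the weights) from examples, followed by inference on a new query.

```python
# Hedged sketch (not NVIDIA's code): the two phases described above.
# Training: adjust weights from many examples. Inference: apply the frozen
# network to new inputs. The task and sizes are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))                  # "raw data": 1000 examples
y = (X.sum(axis=1) > 0).astype(float)            # toy label to learn

W1 = rng.normal(scale=0.1, size=(16, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1));  b2 = np.zeros(1)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)               # hidden layer, ReLU
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))         # output probability
    return h, p.ravel()

# --- Training: the computationally intensive phase ---
lr = 0.1
for step in range(2000):
    h, p = forward(X)
    g = (p - y)[:, None] / len(X)                # gradient of cross-entropy
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (h > 0)                      # backprop through ReLU
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# --- Inference: the trained network answers new queries ---
x_new = rng.normal(size=(1, 16))
_, p = forward(x_new)
print("query result:", "human" if p[0] > 0.5 else "not human")  # toy labels
```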
And it searches its entire library of images and finds the images that best represent your query, such as ‘human.’ When you do a voice query, a voice command, ‘OK Google,’ it goes through a similar type of process: it goes into the network, and the network infers from you what you need it to do. This is going to be a huge market, and the reason for that is this: almost every single query in the future is going to be AI based. Every single time you touch your phone, every single time you use the Internet, it will be routed through an AI network. Billions and billions of queries: they could be video, they could be voice, they could be music, they could be text, they could be requests for commands, they could be things like ‘help me book a trip to Monaco.’ GPU inference makes this response time incredibly fast, and as a result it improves the throughput of your data center, i.e., it reduces cost. We’re going to put these networks on devices as well. This is the era of the intelligent device. Your vacuum cleaner is already relatively intelligent; it has the ability to be much more intelligent. Your toaster, your coffee maker, your house, the cameras that watch the outside of your house, a little microphone that’s connected to a speaker otherwise known as Amazon Echo: these devices are going to be infused with artificial intelligence so that they can be much, much more intelligent. Deep learning, AI, is the technology for IoT. This is going to be pretty exciting. So what we just went around is basically how GPU deep learning works. It’s this new model of computing, and notice: very little coding, a lot of computation. Very little coding, an enormous amount of computation, and the amount of computation is going to grow. I’m showing you three pieces of work from three very important AI research organizations in the world. The first one is Google; this comes right out of Jeff Dean’s PowerPoint slides, and he basically says the important property of neural nets is that the results get better when there’s more data and when there are bigger models, i.e., bigger brains, and as a result you need more computation. A bigger brain with more experience, lots and lots of opportunity to learn, with computation, makes for better results, a higher-quality network. Second, I’m showing you here Microsoft’s progress in their image recognition network. This is AlexNet, and it’s just tiny by today’s standards; only four years old, it’s eight layers, it performs one point four billion operations, and it achieved an error rate of sixteen percent. And literally three years later, Microsoft announced ResNet, a super-deep neural network: a hundred fifty-two layers. And recently, SenseTime announced that they broke this record with a network that is four times deeper: a several-hundred-layer deep neural network. These networks are getting larger and larger and larger, and as they get larger they can recognize more and more subtle details, and as a result their accuracy goes up. Baidu went from Deep Speech 1 to Deep Speech 2, from eighty gigaflops of total processing to train that network to something that was ten times larger. Unbelievable advances in the computational demands of deep learning; as you can see, these numbers are growing far, far faster than Moore’s law.
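The jump from eight layers to hundreds is easy to put rough numbers on. A hedged back-of-envelope (layer shapes chosen for illustration, not the actual AlexNet or ResNet configurations):

```python
# Hedged back-of-envelope (illustrative shapes, not the real AlexNet/ResNet
# configurations): why deeper networks mean far more arithmetic.
# A convolution layer costs roughly
#   out_h * out_w * out_channels * (kernel_h * kernel_w * in_channels)
# multiply-accumulates per image.
def conv_macs(out_hw, out_ch, k, in_ch):
    return out_hw * out_hw * out_ch * (k * k * in_ch)

# One mid-network 3x3 conv on a 28x28 feature map, 256 -> 256 channels:
per_layer = conv_macs(out_hw=28, out_ch=256, k=3, in_ch=256)
print(f"{per_layer / 1e6:.0f} million MACs per layer")  # ~462 million

# Stack 8 such layers (AlexNet-era depth) vs. 152 (ResNet-era depth):
for depth in (8, 152):
    print(f"{depth:>3} layers: ~{depth * per_layer / 1e9:.1f} billion MACs per image")
```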
And that is the handicap of deep learning, at some level. This new computing approach, if it were to advance, needs the conviction of an industry, or at least the conviction of a company, to push computing technology at a pace that is so much greater than Moore’s law. Well, we thought: why not us? We think deep learning is amazing. It is this incredible hammer that fell from the sky. It has the ability to turbocharge AI. It could be the foundation of the next generation of computing, and it can solve problems that we only dreamt of solving our whole lives. And for me personally, I want to do it before I retire. I’ve been doing my job for almost twenty-five years, and I want to dedicate my next forty years to this endeavor, and so I’d better get going. Well, we decided, after we started to understand deep learning, that in fact the rate of change has to grow, not diminish. We recently rolled out Pascal, and the thing that was incredible was this: the first customer of Pascal, the first customer of this incredible new processor that is sixty-five times the performance of what we were able to achieve four years ago. Sixty-five times. I mean, I love every GPU we’ve ever built; you know, a parent has to love every child, but what is that about? That is the picture of underachievement. And yet Kepler, this GPU, was the GPU that Alex found. This was the GPU that accelerated deep learning by a factor of forty over the CPU, and look at it. A picture of underachievement. So Kepler to Maxwell to Pascal: you can see we are incredibly serious about the advancement of this field. The first customer of the DGX-1 is an open laboratory called OpenAI. Their mission is to democratize, to advance this field, and they have gathered some of the world’s finest researchers in AI to democratize this technology. It’s an open industry laboratory, and they were the first email that I received as soon as I announced this product; by the time I walked off stage, they had asked, with a great deal of urgency: they need a machine like this to advance their science. So NVIDIA’s DGX-1 is the system that embodies the Pascal processor that is sixty-five times faster than what we were able to achieve just four years ago. And the thing that’s really great is that our platform is so accessible. You can get it in a gaming PC, you can get it in a laptop, you can get it in a server, you can get it in a supercomputer, you can get it in clouds, you can get it in DGX-1s. You can build it yourself, you can buy it, you can rent it. You can get NVIDIA’s GPU computing platform in literally every country, everywhere. As a result, every single framework that has been developed for AI has been optimized for the NVIDIA GPU platform. If you are an AI researcher, this is your platform, and we’re committed to continuing to advance it at a rate that is incomparable to the rate of computing advances in, frankly, the last thirty years. All of the curves we have seen, the progress of Moore’s law, have to be broken. We can’t slow down; we’ve got to hypercharge it. This is our first example of hypercharging Moore’s law.
To bring this capability to the world, we need a whole lot of partners as well, and I’m super, super pleased and super proud to announce that IBM is a great partner of ours in this new era of computing. As you guys have heard, IBM talks about cognitive computing; cognitive computing is the future of their company. Cognitive computing has the ability to solve some very, very large problems, and underneath that cognitive computing services stack called Watson needs to be a supercomputer, and that supercomputer needs to have super capabilities, super capabilities for artificial intelligence. Working with the IBM team, which we announced, I guess, a couple of years ago, we worked together to create a technology called NVLink. The POWER8, which is the fastest microprocessor in the world today, is connected to our GPUs directly through the fastest interconnect that humanity has ever created between a POWER8 and an NVIDIA Tesla GPU: this interconnect called NVLink. When you connect all of them together, you have this network of fast processors, fast CPUs and fast Pascal GPUs, and it can be dedicated to solving AI problems. A partnership with IBM. Well, today we’re really excited to announce a new partner. I’m going to show you some amazing applications of AI, some amazing applications of GPU deep learning, and the breadth and the reach of our platform in just about every industry. But there’s one area of research that is of great importance to us, and I think this is an area where we can really move the needle for society, and it’s in applying AI to the work of companies all over the world. Today we’re announcing that SAP and ourselves are working together: one of the world’s largest enterprise software companies will integrate with the NVIDIA DGX-1 and our GPU deep learning platform so that we can bring AI capability to enterprises all over the world. We’re partnering with their teams in Germany and Israel, amazing teams working on this now, and when we’re successful, hundreds of thousands of customers of SAP will have the benefit of AI computing so that they can turbocharge their business. Let’s give SAP a round of applause, please. The DGX-1 is an instrument of AI, just like the Large Hadron Collider is an instrument of particle physics. Without that scientific instrument, you can’t reasonably advance particle physics. The R&D budget of the DGX-1 was two billion dollars. This is the most expensive, most ambitious modern computing endeavor in recent history. Ten thousand engineering man-years went into it. We’re now shipping the DGX-1. This incredible and important instrument of AI research should be put into the hands of the world’s best AI scientists. We put it in the hands of OpenAI, the laboratory at Stanford, Pieter Abbeel’s laboratory at Berkeley, Yoshua Bengio’s laboratory in Toronto, or Montreal, excuse me, Yann LeCun’s laboratory at NYU. All of the world’s most important AI laboratories must have access to the most capable instrument of AI research that the world has ever known: the DGX-1. And so today we’re super proud, super excited to announce that the German Research Center for Artificial Intelligence and the Swiss AI lab, where Jürgen is located, will be the two designated research centers of NVIDIA here in Europe. They’ll have access to our DGX-1 supercomputer.
They’ll have access to our resources so that we can advance important areas of research in AI that otherwise wouldn’t move along as fast, and of course we have all kinds of opportunities to collaborate to move AI into society in a good way. Okay, so let’s recognize these two research centers here in Europe; they are truly pioneers in AI, and I’m so delighted to partner with them. Thank you very much. So, that was AI training, GPU deep learning training; now let’s talk about data center inferencing. This is a massive market. You train the software, you train the network, so that this network can be as great as possible. You train with enormous amounts of data; it takes billions of trillions of operations; it takes months and months and months. This is what researchers now do in the software development cycle of their new services, their new software. When the network is complete, when it’s ready to be deployed, to be enjoyed, you put it into a large-scale hyperscale data center. There are millions, tens of millions, of servers in the world today that support cloud computing and all of the Internet services that we enjoy. Tens of millions of nodes of hyperscale data centers: this is a brand-new market for us. And now that these networks are trained, they’re ready to be deployed into production, and if we were to design exactly the right accelerator, exactly the right GPU, we can make it possible for these networks to be inferenced, meaning when you ask a question, what is this image, what is this song, what did I say, it would respond instantaneously. And when billions of us make queries simultaneously, and some of the queries are extraordinary queries, we need to be able to make these queries and have them responded to instantaneously, and for these data centers to be able to support literally a million times more workload without having data center costs go up by a million times and energy consumption go up by a million times, we need a special new accelerator. We call them the Tesla P4 and P40. They are two brand-new accelerators; one of them is for large-scale processing, but the second one, the P4, is designed, thank you very much, this is designed for GPU servers. And this cute little thing, if you can call a GPU cute, is the cutest GPU that has ever been invented. The Mercedes S600. The Mini Cooper. This little thing, the P4, fits into an OCP hyperscale server that’s 1U. It consumes, depending on your configuration, anywhere from fifty watts to seventy-five watts. The thing that’s really amazing is this: fifty watts here, yet two hundred and fifty watts there; this is forty times, forty times, faster than the fastest CPU at AI computing, at GPU deep learning. So you plug one of these things in and you replace forty nodes. Plug one in, replace forty nodes; forty nodes is basically three or four racks of servers; replace it with one of these. Incredible amounts more performance, incredible savings. You plug it into this little tiny server, and it’s forty times the energy efficiency of a CPU. Forty times. What used to be a thousand-watt CPU node would be forty times less. Incredible. Okay, so Tesla P4 and Tesla P40. Thank you.
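As a back-of-envelope check on those figures, taking the numbers as quoted on stage (round keynote numbers, not datasheet values):

```python
# Back-of-envelope from the figures quoted on stage; treat them as
# illustrative keynote numbers, not datasheet values.
p4_watts = 50            # one Tesla P4, low end of the quoted 50-75 W range
cpu_node_watts = 250     # the CPU figure quoted for comparison
nodes_replaced = 40      # "plug one in, replace forty nodes"

total_cpu_watts = nodes_replaced * cpu_node_watts
print(f"forty CPU nodes: {total_cpu_watts} W vs. one P4: {p4_watts} W")
print(f"power ratio for the same inference work: ~{total_cpu_watts // p4_watts}x")
# The "forty times the energy efficiency" line compares the P4 against a
# single CPU node's performance per watt; the rack-level ratio above is
# what the node-replacement claim would imply if all forty nodes were retired.
```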
Forty times the energy efficiency and forty times the performance. Well, that’s just the GPU. We optimized this new generation of GPUs with new architectures for deep learning, and new instructions and new numerical formats that are optimized for deep learning; as a result, we get a huge boost in performance. But on top of that, every new architecture essentially needs a new optimizing compiler, and we’re announcing for the very first time a runtime called TensorRT. TensorRT is a performance-optimizing inferencing engine. It’s software that goes along with the P40 and P4, and when you put the software on top, when you run the software, because it supports all these different numerical formats that our GPUs support, and has the ability to smartly fuse operations that are vertical in the network or horizontal across a layer of the network, reducing and eliminating work, fusing operations together so that you can do them in a cycle, and it does a whole bunch of auto-tuning, the network that you trained on our GPU computing platform is now optimized for runtime on our GPU computing platform. We already support GoogLeNet, ResNet, AlexNet, these networks, and all of the custom layers that you guys want to do in between, and we’re going to support, in the future, of course, all the networks. Okay, so TensorRT: a really, really important innovation. I’m super excited about it; congratulations to the engineers who worked on it. It’s available today; go to our website to download it. Well, let’s take a look at all of this. What I’m going to show you is this: imagine you’re a hyperscale data center and you’ve got videos that are streaming, that are being uploaded, and you know that live video today is one of the most frequently shared forms of social content. And the thing about live video is that it’s not recorded, and so if you don’t enjoy it, it’s gone. And if there’s live video that you would like to share with your friends, it would be nice if, as you’re uploading the live video, it already knows which of your friends or which of your family members or relatives would want to enjoy that live video. And so what we need to do is make it possible for the data center to literally look at every single live video stream that’s going by, every single mobile user, everybody who uploads live video in the future, and we’re going to be uploading a lot of live video in the future. For every single one of those videos, every single moment of it, we’re going to apply artificial intelligence to figure out: is there something of importance in here? What is being shown, and who would be interested in seeing it? So in this demo, ninety videos are streaming into our server, into one node of our server. Edward, why don’t you go ahead and run the video. First, let me show you, this is the video, for example, one stream at a time; there are ninety different streams all running at 720p. We’re loading these; they could be, for example, YouTube Live, it could be Facebook Live, it could be Periscope. These are all live videos, and what we need to do, number one, is figure out what it is that we’re looking at. Now, all of us can probably tell what we’re looking at: that’s probably somebody doing pushups. Okay. That’s probably somebody playing guitar. That’s probably two people making dinner.
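Returning for a moment to the fusion idea behind TensorRT: here is a hedged, conceptual sketch (my illustration, not TensorRT’s implementation) of one ‘vertical’ fusion, folding a batch-norm into the preceding convolution so that conv, batch-norm, and bias collapse into a single operation at inference time.

```python
# Conceptual sketch of "vertical fusion" as described (an illustration, not
# TensorRT source): fold a batch-norm's scale/shift into the preceding
# convolution's weights so conv+BN+ReLU becomes a single fused operation.
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """W: (out_ch, in_ch, kh, kw) conv weights, b: (out_ch,) bias.
    Returns fused weights/bias computing BN(conv(x)) exactly."""
    scale = gamma / np.sqrt(var + eps)           # per-output-channel scale
    W_fused = W * scale[:, None, None, None]
    b_fused = (b - mean) * scale + beta
    return W_fused, b_fused

# Check on random data: fused path == separate conv -> BN (then one ReLU).
rng = np.random.default_rng(1)
out_ch, in_ch = 4, 3
W = rng.normal(size=(out_ch, in_ch, 3, 3)); b = rng.normal(size=out_ch)
gamma, beta = rng.normal(size=out_ch), rng.normal(size=out_ch)
mean, var = rng.normal(size=out_ch), rng.uniform(0.5, 2.0, size=out_ch)

x = rng.normal(size=(in_ch, 3, 3))               # one 3x3 patch for simplicity
conv = lambda Wt, bt: np.array([np.sum(Wt[o] * x) + bt[o] for o in range(out_ch)])
separate = (conv(W, b) - mean) / np.sqrt(var + 1e-5) * gamma + beta
Wf, bf = fold_bn_into_conv(W, b, gamma, beta, mean, var)
fused = conv(Wf, bf)
print(np.allclose(np.maximum(separate, 0), np.maximum(fused, 0)))  # True
```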
And this is somebody playing a formidable ping-pong opponent; it just always seems to come back. Okay, so let’s go ahead, can you go ahead and label it? Let’s figure out what the computer sees. So the computer thinks that they’re playing table tennis, that this is sumo wrestling, that there’s somebody on a swing, that this is rope climbing, that they’re rowing, and tai chi. So the computer was able to learn from looking at all the videos that it was taught with, and now when we load new videos into it, it is able to recognize them. And suppose I had a service and I said, look, you know what, I’m only interested in people playing music. So it’s got to be smart enough to recognize that these are people playing music. Or what if it’s people playing sports, for example? Okay, so it would recognize people playing sports. And suppose I just wanted water sports. Okay. And so the amazing thing is, in the future, as we stream video, an artificial intelligence network is able to recognize images, and artificial intelligence networks can recognize the meaning of the image, what is known as semantics, what is the context. By understanding the meaning of the image, we can now have more information by which we can filter, search, or recommend to people. Now, this next one I’ll show you is something that’s kind of cool. Thank you, Edward, that was great. So we know we can teach a neural network how to recognize things. How do we teach something that we’ve always thought is the domain of humans, which is creativity? Artistic capability, something that defines what a human is. Is it possible for us to teach a neural network artistic capability, artistic flair? So what we did is we took a whole bunch of art, a whole bunch of different art by different artists, for example Picasso and others, and we trained a network to recognize the style of Picasso, or to recognize the style of a traditional pencil artist in Asia. We teach it the styles, and then what we’re going to do is show it an image. In this case, this is London, right, the flag is flying, and this is the London Eye. We show it an image, and we repaint it. Not filtered: repainted. We take this image and we say, paint it again with a different style, with an artistic style; it could be Monet, it could be Picasso, it could be Van Gogh. Okay, so take an image and repaint it. And suppose we could do it so fast, suppose we could do it so fast with our GPUs, that we could do it on live video. And so let’s take a look at some examples; let’s show some video. These are some beautiful places in Europe. Okay, so we have some live video footage. Now let’s take these live videos and redraw them with an artistic sensibility. For every single frame, this neural network, this artificial intelligence network, redraws it. It’s not a filter; I guess it could be thought of as a filter, but we’re redrawing every single frame; it’s being redrawn one frame at a time. Beautiful. This is what Picasso would have done to a movie. An incredible neural-net artist, repainting it all.
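For readers curious how a network ‘repaints’ a frame: the demo is in the spirit of neural style transfer, where style is captured as feature correlations (Gram matrices) and the output is optimized to keep the content while matching the artwork’s correlations. A hedged conceptual sketch (not NVIDIA’s demo code; random arrays stand in for CNN feature maps):

```python
# Hedged conceptual sketch of the style-transfer idea (in the spirit of
# Gatys et al.; not the demo's actual code). Style is captured as feature
# correlations (Gram matrices); the output image is optimized to match the
# content's features and the artwork's Gram matrices. Random arrays stand
# in for CNN feature maps here, purely to show the two loss terms.
import numpy as np

def gram(features):
    """features: (channels, height*width) CNN activations for one layer."""
    return features @ features.T / features.shape[1]

def style_transfer_loss(out_feats, content_feats, style_feats, alpha=1.0, beta=100.0):
    content_loss = np.mean((out_feats - content_feats) ** 2)          # keep the scene
    style_loss = np.mean((gram(out_feats) - gram(style_feats)) ** 2)  # borrow the brushwork
    return alpha * content_loss + beta * style_loss

rng = np.random.default_rng(0)
C, HW = 64, 32 * 32                      # one layer: 64 channels, 32x32 map
content = rng.normal(size=(C, HW))       # features of the London Eye frame
style = rng.normal(size=(C, HW))         # features of, say, a Picasso
output = content.copy()                  # start from the content frame

print("initial loss:", style_transfer_loss(output, content, style))
# In the live demo this objective is driven down per frame (or a feed-forward
# network is trained to do it in one pass), fast enough to repaint live video.
```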
We’ve now seen all kinds of interesting neural networks. We know that we can take art of a certain style or a certain timeframe and learn it into a neural network, and the neural network can actually generate new art, completely generate new art, images that would most likely be done by an artist but have never been painted before. This is one example, done in real time. Thank you very much, Edward. What you were looking at was the Tesla P40 with a network that was trained with everything I was describing, this new artistic flair, trained to be an artist; video was coming in, and it was regenerating the video into this new art form. And previously you were looking at a large number of streams, ninety different streams of video, and we were recognizing what’s in the video, the semantics of the video, what’s happening: not just that there’s a person, but semantically what they’re actually doing, and we’re labeling and detecting those videos based on the semantics. Well, the applications for GPU deep learning are really broad, and the reach, over the last several years, of our GPU deep learning platform is really quite daunting and quite amazing, and the new services that are being run on our GPUs are growing every day. In fact, if you look at Jeff Dean’s presentation at Google: just literally three years ago, they had some twenty different applications internally at Google that were using deep learning; it has now grown to nearly three thousand. That’s exponential growth in just two to three years. Facebook talks about deep learning literally all the time. We have a great partnership with Baidu. Yelp uses deep learning for recommendation and for recognizing which of the images are best to show customers. Microsoft’s Cortana uses deep learning for its speech recognition. Netflix uses deep learning for movie recommendations. The list goes on and on. Pinterest uses deep learning so that you can take an image you like, say you want to know where to buy it, and it recognizes what’s inside the image and recommends things that you can buy that are similar, not necessarily the same, because you might not be able to buy that exact thing, but similar, and points you to those websites. The number of AI-powered, GPU-deep-learning-powered consumer services is literally everywhere in the world; I frankly don’t know of one consumer deep learning service yet that doesn’t rely on NVIDIA’s GPU platform. It’s also available on cloud services: it’s available on Alibaba, it’s available on Amazon, it’s available on Microsoft’s cloud, it’s available on IBM’s cloud. If you’re an enterprise customer, we have enterprise partners who configure servers that are ready for GPU deep learning: it could be Dell, it could be HP, it could be IBM, it could be Cisco, it could be Lenovo. We are able to literally reach every corner of the world with GPU servers designed and configured for deep learning, and now, with the partnership of SAP, we will soon have applications running on these servers to serve the world’s largest enterprises. And if you want to build your own deep learning supercomputer and have a special need or a special configuration you would like to build, we also have ODM partners, GPU server makers and GPU server builders, in Taiwan and all over the world, that can have them ready for you.
Tens and tens of configurations, just about every single version, from 1U to 2U to 3U to 4U, from one to two to four to eight GPUs: whatever configuration and size you would like to have, every single version of our GPUs is supported. And so, as you can see, whether it’s services that use NVIDIA GPUs, cloud services that rent the platform to you, server companies that can offer you servers designed and optimized for GPU deep learning, or ODMs and server builders that can help you build it however you would like to have it, NVIDIA’s GPU deep learning platform is available to you. That is one of the most important works we’ve done in the last several years: to democratize this platform and make it available to literally everybody. Well, as a result, startups are cropping up all over the world. We now know of fifteen hundred startups around the world that are deep-learning-based startup companies, using deep learning to solve some very important problems. In the case of Deep Instinct, they’re using it for cybersecurity: if we can recognize very, very subtle differences in how somebody rummages through our files, those subtle differences would indicate an intrusion, and so they can use AI, use deep learning, to identify the subtlest of differences and the subtlest of intrusion patterns. Deep learning for genomics: if we can read and understand the human genome, and we can do it fast enough; we have twenty thousand genes in our body, but that’s not the hard part; the hard part is, first of all, understanding what those twenty thousand are, and then, as they mutate, getting in front of it before it spreads, so that we can get ahead of it. Deep learning for self-driving cars: a really tough problem, and we’ll come back to that. Deep learning for advertising: this company, Nervve, has the ability to recognize a logo, a brand, a trademark in live video, and they can literally, in a fraction of a second, recognize it across an hour of video on large-scale GPU servers. As a result, they can show advertisers all of the places where their brand has been exposed and presented to customers: great for companies who are advertisers. AI startups here in Europe, some really amazing stories: BenevolentAI. There are thirty million medical papers in the world, and every thirty seconds a new medical breakthrough is happening and a new medical paper is being published. Now, I don’t know how much time doctors have to spend reading papers, but it’s just impossible to stay on top of the torrent of new breakthrough medical research and the papers that are being written. There are over a hundred million chemical compounds that our bodies react to, positively or otherwise; there are tens of millions of patients with various forms of disease. They would like to use deep learning to sort through, rummage through, understand and process all of that unstructured data, to discover insights, to advise doctors on how best to discover and how best to invent the next cure, the next drug.
Well, they were using an Amazon cloud service to process all of that data, and they estimated that it would have taken longer than a year to process the data that I just described. And so here in Europe, BenevolentAI is the first customer of the DGX-1. With the DGX-1, their researchers can help doctors process that enormous volume of data in about a week. From over a year to a week; a year is basically impractical, nobody is going to use that approach. But now, with the DGX-1, this instrument of AI, this AI supercomputer, we can do that processing in basically a week or two. Small Art has the ability to recognize faces, and they’re used in video surveillance systems, so they can look for people who are lost or look for people who are wanted. Deep learning for facial recognition. What’s really special is this: most of the time, when we’re looking at faces, it’s not like getting a driver’s license or a passport photo; it’s very unlikely that the person you’re trying to find is looking at you straight on. It’s very likely they’re somehow looking away; they might be occluded, they might be in shadow, their hair might be down, they might be wearing a hat, they may have aged a little bit, gained a few pounds, gotten a tan. They might have changed just a little bit, and Small Art has the ability to recognize that in just a fraction of a second. Intelligent Voice: insurance companies have call centers, and it basically recognizes the speech of everybody on the phone, and not only that, it detects their emotions, so that you can figure out a way to understand whether the person who’s on the phone with you may be deceiving you, for example in an insurance claim or a financial trade or something like that. Intelligent Voice: really interesting. This one’s really cool: Sadako has trained an AI network to recognize what is plastic versus what is trash. It has, as a result, automatically picked up sixty thousand tons of plastic, by itself, and now that plastic can be transformed into something else instead of being put into landfills. Sadako: really, really exciting. And of course, that work is very complicated, as you can imagine, because the objects coming through are all very different. Inferencing in data centers: the Pascal P4 and P40 for inference, and TensorRT, for data centers all over the world, a brand-new market for us. Now that these networks are being developed and are ready for production, we can put them on these servers and put them into the hands of customers all over the world, service providers and startups all over the world. Now let’s talk about intelligent devices, what some people call IoT; if we infuse them with artificial intelligence, these devices can be rather interesting and be used to solve all kinds of interesting problems. You know, whereas the PC era introduced computers to a billion people, and the mobile cloud introduced computers to three billion people, I believe the AI era will put tens of billions of intelligent devices connected to the Internet, and these devices and these machines, these autonomous machines, could come in all kinds of interesting sizes and shapes. Only our imagination limits us: whether it’s a camera that only records when something interesting is happening,
or a little tiny camera and display and microphone and speaker, a little tiny agent called Jibo that's just talking to you and knows who you are. It might tell you a story; it might turn into a video conferencing system when you call Mom; it might tell you what the weather's like today. It might be something like Echo, essentially an artificial intelligence device connected to an artificial intelligence cloud. It might be a drone that's completely autopiloted and flies around looking for people, say, delivering drugs or medicine to somebody who's in harm's way. It could be something as simple as a grocery delivery robot that delivers groceries to you from around the neighborhood, or delivers pizza.

These Internet-connected artificial intelligence machines are going to start cropping up, and what they need is an AI supercomputer. They need a computer that's battery powered; they need a computer that has the capability of AI and the ability to respond instantaneously to the circumstances around it. And so we created this embedded AI supercomputer called the Jetson TX1, running TensorRT, the inferencing optimization software that I talked about earlier. This little tiny computer has the ability to recognize images and sound, and learn, and do amazing and wonderful things. A little tiny AI supercomputer.

Now, having the system is one thing, but one of the greatest challenges right now is: what does this computing mean to software developers, and how do software developers take advantage of this amazing capability called deep learning? And so we've created the platform for it. If you come to NVIDIA.com, our SDK is rich with all kinds of software and algorithms for your supercomputers. But something that we've done that I'm super excited about is we're starting an institute to teach applied deep learning: how do you take the problems that you would like to solve, what kind of tools do you have available to you, and how do you use those tools to create, essentially, an embedded system or a service that you can deploy this network into in a really optimized way? We call it the NVIDIA Deep Learning Institute, and people are so excited about it. I think there are three hundred people attending it today. This Deep Learning Institute that we've rolled out has been offered all over the United States; it's been offered in Japan; it's been offered in China; it's been offered in Taiwan; and it sold out every single time. And so, as a result, we decided to partner with three very large and incredibly successful digital education platforms: Coursera, Microsoft, and Udacity. We're super excited to partner with them, and we would like to spread this new way of doing computing all over the world. We want to democratize deep learning; we want to democratize AI computing. So: the NVIDIA embedded computing platform.

Now I want to talk about something that's really, really important. This industry is not only large; the challenge of creating autonomous vehicles and bringing AI to this industry is not only extraordinary as a technical challenge, it's also extraordinary in its societal benefits: fewer accidents, more utility for the vehicle, lower-cost access to mobility, making mobility accessible to people who otherwise wouldn't have it, maybe even a complete redesign of how our cities are.
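As a sketch of the kind of embedded inference loop a device like the Jetson TX1 runs, grab a frame, run an optimized network, act only when something interesting appears, here is a toy version. The classify() and grab_frame() functions are hypothetical stand-ins, not the TensorRT API:

    # embedded_loop.py -- illustrative sketch only; not NVIDIA's TensorRT API.
    import random
    import time

    def grab_frame():
        """Stand-in for a camera driver; returns a dummy 224x224 frame."""
        return [[0.0] * 224 for _ in range(224)]

    def classify(frame):
        """Stand-in for an optimized network; a real deployment would run a
        trained CNN compiled for the device's GPU here."""
        return random.choice([("nothing_interesting", 0.95), ("person", 0.91)])

    for _ in range(90):                      # ~3 seconds of a 30 fps loop
        label, confidence = classify(grab_frame())
        if label != "nothing_interesting" and confidence > 0.8:
            print("event detected:", label)  # e.g., start recording
        time.sleep(1 / 30)

The design point is that the network runs locally, on battery power, every frame; the device only needs the cloud for things it cannot do on its own.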
AI for transportation is a very, very large industry, and that's one of the reasons why so many people are focused on it. The thing that I've mentioned, that we've talked about for some time, and that is becoming more obvious to everybody, is that autonomous vehicles are not only about smart sensors. It needs smart sensors, it needs lots of them, but it is, if you will, a robotics problem. It is an AI computing problem. You have to, number one, perceive what is happening. If I were to show you this particular scene, the first thing you have to do is perceive what is happening: what are the things in it, and what are the semantics of what's happening? There are cars, there are workers; that's obviously a construction site. And this construction site wouldn't have been obvious to you if all we did was detect the cones and lanes and cars and signs. This particular scene is different. Somehow, a school bus that's driving down the road and a school bus that's parked by the sidewalk are very different conditions that we have to think about very differently. Number one: perception. We have to perceive the world around us, sense the world around us. Number two: we have to reason. Reasoning is one of the most important things that we do as humans, and it's one of the most important things we have to do in AI computing. And number three: we have to plan. What do I do now? As it turns out, in this particular case the plan is not to stop, even though it's got red all over it. The plan is simply to drive more cautiously. Stopping is simply the wrong answer; it would create all kinds of congestion. Let the people work, and we should drive through. We have to reason, and we have to plan, and we have to drive accordingly. At the foundation of all of this is learning, and that's one of the reasons why deep learning has an opportunity to help us solve all of these challenges that we've been waiting a long time to solve, and that's one of the reasons why NVIDIA has jumped in with both feet to work with the automotive industry to create a scalable platform for self-driving cars, so that together, as an industry, we can help revolutionize transportation.

Our platform is called Drive PX 2, and it came with some amount of questions about why Drive PX 2 is built this way. Well, it turns out that autonomous vehicles come in all kinds of sizes and shapes, and it is now becoming clear that having a scalable platform with one architecture is really the best way to go. The reason is this: different car companies, different segments of the industry, different applications, and different countries have a different vision, or a different time scale for their vision, of autonomous vehicles. They range all the way from somebody who would like to create an AutoCruise highway-cruising capability, an incredibly safe highway-cruising capability (and it's not just about sensing: it's sensing, it's localizing, it's reasoning, it's planning, it's acting; it's AI computing); to somebody who would like to be able to say "take me home," AutoChauffeur, and actually take you onto the highway, off the highway, destination to destination, which obviously requires a different amount of computational capability; and number three, somebody who would like to build a fully autonomous vehicle where there are no drivers at all.
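The perceive, reason, plan sequence described a moment ago can be sketched as a simple loop. The function names and decision rules below are invented for illustration; this is not NVIDIA's DriveWorks code, just the shape of the argument:

    # av_loop.py -- illustrative skeleton of the perceive -> reason -> plan cycle.
    from dataclasses import dataclass

    @dataclass
    class Scene:
        objects: list      # detected cars, cones, workers, signs...
        context: str       # the semantics, e.g. "construction_site"

    def perceive(sensor_frame) -> Scene:
        # A real system fuses camera/lidar/radar through trained networks.
        return Scene(objects=["cone", "worker", "car"], context="construction_site")

    def reason(scene: Scene) -> str:
        # Semantics matter: a parked school bus differs from a moving one,
        # and a construction site means "slow down", not "stop".
        if scene.context == "construction_site":
            return "proceed_cautiously"   # stopping would just cause congestion
        return "normal_driving"

    def plan(decision: str) -> dict:
        speed = {"proceed_cautiously": 15, "normal_driving": 50}[decision]
        return {"target_speed_kmh": speed, "lane": "keep"}

    frame = object()                      # stand-in for real sensor input
    print(plan(reason(perceive(frame))))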
For that no-driver case, you have to be ninety-nine point nine nine nine nine nine nine percent accurate, and the reason is that if something happens that the car doesn't know what to do about, it stops, and it remains stopped. For a long time. Maybe forever. And so, if you have a fleet of these cars, they're eventually going to find something they don't recognize, and you're going to have a fleet of cars that are all stopped, just all over the world. Full autonomy requires even more ability to detect the very corner of corners of corners of conditions. All of these platforms are designed so that we can address different segments of the autonomous vehicle spectrum and everybody's different visions.

We call it Drive PX 2. It allows us to do perception, reasoning, and driving, and it consists of basically three parts: the AI computing part, which is the processors; the computing system; and the operating system that takes the sensor fusion in, does all of the AI processing, connects to all the algorithms, and does it so fast that the car can actually respond to adverse conditions in time to take the appropriate action. You have to do that fast enough, so performance matters, and it's scalable. When we launched Drive PX 2, we showed everybody the large version of it, because our earliest customers wanted everything. We're working with some seventy, eighty different partners around the world: startup companies, taxi-as-a-service companies, shuttle companies, trucking companies, branded car companies, mapping companies. This ten-trillion-dollar industry is quite large, as you can imagine, and there are important players in the ecosystem all over. And so we started with the highest-performing configuration, and all kinds of people are using it today.

This is the smallest configuration of Drive PX 2, this little tiny computer. This little tiny computer connects to a couple of front cameras and has the ability to recognize what's in front of you, to localize where you are, to connect with a high-definition map, and to update that high-definition map. Because it can recognize the surroundings, it can do SLAM, and it can update the high-definition map. And it's a nice, small, fully integrated computer. So this is Drive PX 2 AutoCruise.

We recently announced that this computer, in order to create an autonomous system, needs to include not just the computer and the algorithms inside, but also to be connected all the way to the cloud. The proper autonomous vehicle platform is a cloud-to-car platform: from the HD map all the way down to the AI supercomputer in the car, and all the software in between. We announced in China a partnership with Baidu: Baidu has selected the NVIDIA platform for their mapping cars and for their self-driving vehicles, for the ones that will be offered to OEMs as well as their self-driving taxis. This one computing platform will be used across that variety of use cases, and it will be the same architecture: Drive PX 2, the operating system called DriveWorks, connected to the Baidu cloud.

Well, today I'm super excited to announce that we're partnering with one of the world's largest, most pervasive mapping partners, TomTom. TomTom has selected the NVIDIA Drive PX 2 to be incorporated into their mapping cars, and together we're going to create
a cloud-to-car platform, for the Western market. TomTom has mapped a very large part of the world. As you know, they're one of the world's great mapping companies; their services are used by nearly everybody. And we're going to work on basically three things. As TomTom maps the world, they're collecting video. That video has to be processed and turned into a high-definition map, and the data processing is enormous. It is the grand challenge of supercomputing to literally record the world and turn it into an HD map. It is a computational challenge of extraordinary proportions: recognizing lanes, recognizing objects, recognizing structures, recognizing what is a car and what isn't and rejecting it, turning the whole thing into a three-dimensional map, registering and fusing it so that it is coherent, and it has to be accurate to within a few centimeters. All of that processing is done in their cloud. The first thing we'll do is work with them to accelerate that processing so that it is absolutely super real time, so that we can collect video as fast as we want in the future and continuously crunch it and turn it into HD maps. The second thing is that Drive PX 2 will now be their in-car HD mapping system: not only will it map, it will continuously collect and update differences to be fused into the HD map. And then, thirdly, together we will have a cloud-to-car platform: the HD map in the cloud, AI algorithms, localization algorithms, and an AI supercomputer for the car.

Ladies and gentlemen, let's welcome Alain De Taeye, who is a board member of TomTom and the head of HD mapping. Alain, come on up.

Alain De Taeye: Hi. Thank you.

Jen-Hsun Huang: Thanks for coming. We're really only fifty meters away from TomTom here, and we chose this venue for that very reason, so that we could announce our partnership. So, first of all, I think it would be great if the audience had the ability to appreciate the magnitude of the problem of mapping the world. I know it sounds nice when people just say "HD mapping" as such a short phrase. The magnitude of the problem: tell us a little bit about it. How much of the world have you mapped?

Alain De Taeye: Well, at this moment there are forty-seven point one million kilometers of roads in our map database, but only a hundred and twenty... actually, tomorrow at the Paris Motor Show we will announce a bit more, but I can't speak about that yet... a hundred and twenty thousand kilometers is mapped in HD.

Jen-Hsun Huang: That kind of gives you the picture: forty-seven million kilometers, of which only about a hundred thousand has been mapped in an HD way.

Alain De Taeye: A hundred and twenty thousand.

Jen-Hsun Huang: So you're almost there.

Alain De Taeye: Yeah, it's a small endeavor; we're almost there.

Jen-Hsun Huang: Now, the forty-seven million: how much of society does that represent?

Alain De Taeye: Forty-seven point one million is about seventy percent, seventy percent of the developed world.

Jen-Hsun Huang: And out of the forty-seven million kilometers, you've surely driven more than a hundred thousand.
Alain De Taeye: Absolutely. So we have basic information available, and by the way, that's one of the reasons why we use your platform: we have the basic information to grow much faster. The whole problem of making an HD map is that people believe, just as they believed in the old days that making a navigable map was unaffordable, that HD maps, which are very detailed and very accurate, are unaffordable. It is not unaffordable. You need to be clever about it, and you need to use AI platforms...
Jen-Hsun Huang: Yup.
Alain De Taeye: ...as a tool to automatically create the map and maintain it, because maintaining it is the bigger problem.
Jen-Hsun Huang: Because the world's changing.
Alain De Taeye: Absolutely.
Jen-Hsun Huang: The world's changing. The car detects differences, we send them up to your cloud, and you've got to find a way to filter out all the junk...
Alain De Taeye: Yup.
Jen-Hsun Huang: ...figure out what the major differences are, and fuse them into the new map. Isn't that right?
Alain De Taeye: Right. We get images in, we get traces in. To give you an idea, I think we get over seven billion traces in a day. It's a lot of information, and that's why we need platforms like that. It's the same with localization: if you want to localize your car with centimeter accuracy, and you don't want to miss by twenty centimeters when you have a self-driving car, then you need that information, which we have as RoadDNA, and then you do the localization, also on your platform.
Jen-Hsun Huang: And so, between the two of us, we are going to endeavor to map the forty-seven million kilometers of civilization's known roads, everything that's drivable.
Alain De Taeye: Let's make that sixty; then we have the whole world.
Jen-Hsun Huang: Sixty million, and we're one hundred thousand into it.
Alain De Taeye: One hundred and twenty. That's twenty percent more.
Jen-Hsun Huang: And so it's already even more. So what we need to do... of course it sounds like a lot, but the fact of the matter is this: once we get going, we need a computing platform that can process it all at super real time. And what that means is, if we collect video for a day, we need to process that video in an hour. Once we can do that, and fuse it into an HD map, we'll get through sixty million kilometers in a hurry.
Alain De Taeye: I'd go even a step further.
Jen-Hsun Huang: You're going to go to Mars? We didn't know that.
Alain De Taeye: That's another guy. But we've put this in mobile mapping vans, and you can think about putting it in all kinds of commercial vehicles, things like that, putting it in cars, and then actually maintaining the HD map in the car itself. That's kind of the Holy Grail, right?
Jen-Hsun Huang: Incredible. Well, the future of autonomous vehicles, the future of self-driving cars, requires a very high quality HD map. We're all counting on you; we're going to build it. Thank you. Congratulations. Okay, now let me show you something new. Thank you, Alain. Every time I hear Alain De Taeye's name, I actually think of Alain Ducasse as well. Alain De Taeye, Alain Ducasse... are there only engineers in the audience? Is there nobody in the audience who enjoys food? Yeah, that's right: Alain Ducasse. Thank you. Sorry. This is fifteen thousand watts I'm standing in front of; you know, I'm getting a tan on my back.

Okay, so I want to show you something new. You guys know that in order to create a self-driving car, the computing platform has to process information fast enough, and the sensor information is coming in from all over. You've got, of course, the simple stuff like GPS and the IMU and the rotation of your tires and your steering wheel. You also have the cameras coming in all around you; you have lidars; you have radars; and now, as you heard from Alain, we also have the HD map. All of this information is fused together, the important information goes into your car's operating system, and this information is being changed and updated and ingested in real time. This is the ultimate real-time, high-throughput supercomputing problem, and you've got to get the job done. Best effort is not enough. It is not enough that when you hit something, the answer is: "I tried, I just wasn't able to detect it in time; next time, when the car is not traveling so fast, I will be able to." It has to be high throughput, and it's got to be mission critical.

The algorithms for driving self-driving cars, the basic functionality I described: there's sensing, there's localization, there's planning, and there's the action-taking. What I want to show you today is this. This is our operating system. The operating system is much, much more than this, but I want to show you the algorithms part of the operating system, and give you an update on where we are. Basically, the way it works is this: the sensor information is coming in on the left, and we have three artificial intelligence networks running on this car, three deep learning algorithms. The first one is just detecting things. We detect cars, we detect lanes, we detect signs, we detect cones, we detect things. It's called DriveNet. We detect things. As it turns out, none of us drives like that. None of us drives by going: no cow, no dog, no person, no car, no cone, no tree, no lake, no house, no truck... the list can be pretty long. It would be an exhausting way to drive. We detect these things for two reasons. One, we want to continuously update our HD map in the cloud, and there are several different markers we see that can help us figure out where we are. Number two: as a backup. Just in case. But the real way of driving is this: you detect where it's safe to drive. When you're driving, you're detecting that part of the road is open. The absence of things is open road. There's a network that is doing segmentation and figuring out where it is safe to drive. Not what not to hit, but where is it safe to drive.
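A toy illustration of that free-space idea: a segmentation network labels every pixel, and the open road is simply what remains. The tiny label grid below is invented for the example, not output from the actual network:

    # freespace_sketch.py -- toy illustration of drivable free-space segmentation.
    import numpy as np

    # Pretend per-pixel class labels from a segmentation network on a 6x8 image:
    # 0 = road, 1 = car, 2 = pedestrian, 3 = curb/other.
    labels = np.array([
        [3, 3, 3, 3, 3, 3, 3, 3],
        [0, 0, 0, 1, 1, 0, 0, 3],
        [0, 0, 0, 1, 1, 0, 0, 3],
        [0, 0, 0, 0, 0, 0, 2, 3],
        [0, 0, 0, 0, 0, 0, 2, 3],
        [0, 0, 0, 0, 0, 0, 0, 3],
    ])

    free_space = (labels == 0)          # safe to drive wherever the label is "road"
    print(free_space.astype(int))
    print("fraction drivable:", free_space.mean())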
Now, it turns out there's even a third network, and it's really the way that we drive. When we're driving, we're not thinking at all, unfortunately. When we're driving, it's just a behavior, like playing tennis. It's a behavior: you simply grab the steering wheel and you start driving. All of the sensory information comes in; your brain is thinking, it's working, but it's doing it completely by reflex, and we just drive. That's one of the reasons why we can listen to a book while we drive; that's one of the reasons why we can have a conversation while we drive. When we're driving, it is a behavior. We have a third neural network called PilotNet, and it is simply a behavior network. Okay: a detection network, a behavior network, and segmentation.

So the first thing is, in that first section we detect everything around our car, and we create one of the most important data structures of a self-driving car, called the occupancy grid. The occupancy grid is a three-dimensional grid that's being updated completely in real time with all of the sensor information and all the detection networks, and it creates this three-dimensional mesh of your world. It's creating a virtual world: the virtual, mathematical world based on all of the sensor information that has accumulated. It's called the occupancy grid, and that occupancy grid is, if you will, the primary asset, the database, of the self-driving car. That occupancy grid is then tested against what the car would like to do. The car is driving by itself using PilotNet: it's going to start driving based on just what it sees. We don't tell it "go in that direction"; it's going to drive based on everything that we taught it, and that is tested against the future prediction of everything around the car. And if the path would lead to a collision in the near future, we would of course override it. And then, because we would like to test that our understanding of the world is consistent with the understanding of the car, that what we see is consistent with what the car understands, we visualize the occupancy grid in a virtual reality, if you will: computer graphics of what the car, the artificial intelligence car, sees.

So let me first show you the upper left-hand corner: just detecting things. Justin, roll it, please. This is in California; we're detecting things. And with our latest network, we're going to detect three-dimensional objects, because, as you know, a car is not a flat image, so we have to detect what we think is the volume around the car. All of that information now gets included into our occupancy grid. We would also like to know where the lanes are; those are the easy things. We would also like to know where it's safe to drive. Now this is really cool. Look at this: it's looking at the scene, and it's saying, all the yellows and the reds, not safe; blue, safe. You see this? Red: it recognizes the pedestrians. Not safe. Please do not go there.
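Here is a minimal sketch of the occupancy-grid test just described: a grid of occupied and free cells, with a proposed path checked for future collisions and overridden if it would hit something. The grid size, cell meaning, and override rule are assumptions for illustration, not the production data structure:

    # occupancy_sketch.py -- toy occupancy grid with a path collision check.
    import numpy as np

    # A 2D slice for simplicity (the real grid is 3D and updated in real time).
    # 0 = free, 1 = occupied. Say one cell is half a meter.
    grid = np.zeros((20, 20), dtype=int)
    grid[8:11, 12] = 1                  # a detected obstacle a few meters ahead

    def path_is_safe(path_cells):
        """Test a planned path (list of (row, col) cells) against the grid."""
        return all(grid[r, c] == 0 for r, c in path_cells)

    # The behavior network proposes a path; the occupancy grid vetoes it if
    # the predicted future positions would collide with anything detected.
    proposed = [(r, 12) for r in range(0, 15)]      # straight ahead, column 12
    if not path_is_safe(proposed):
        print("override: predicted collision, re-plan")
    else:
        print("path accepted")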
So it makes sense: in the video, red is pedestrians, yellow is cars. And it's simply looking at every single image and saying, where is it okay to drive? Not necessarily where I would drive, but first of all, where is it okay to drive? When I'm driving, of course, this is what I would see. Now, I would never go into the other lane, because I know the rules of the road, but otherwise I can pretty much drive in the areas where it is safe to drive. Let's put it all together, Justin. And so we've got detecting the cars, detecting the lanes, all running on the Drive PX computer, with the operating system called DriveWorks. It's detecting these things, and these things are all getting put into the occupancy grid, as I mentioned earlier. Let's go back to the slides, please.

Okay, so you saw all of these network detections. What I'm going to show you now is the occupancy grid. However, before you can do the occupancy grid, you have to have the HD map, in this case from TomTom. It tells us where the lanes are, it tells us where the signs are, and based on our ego motion, which is the motion of the car, we figure out where we are on the road. It's called localization. It's a very simple version of reasoning: the car reasoning about "where am I in the world?" And so let's show people what the car sees.

Okay, so this is our digital dashboard. This comes off of Drive PX 2. As you know, one of the things we do very well is computer graphics. I've had people look at this and go, "Wow, it's amazing, the graphics you guys can do." I know; thank you. It turns out that the computer graphics you're seeing is actually quite remarkable. There's a team of young engineers over here. Justin, why don't you... come on, guys, stand up, take a bow. I know you haven't slept. Thanks, guys. It's amazing what you can do on caffeine alone. You know, whereas deep learning is the fuel of artificial intelligence, caffeine is the fuel of deep learning engineers.

And so what you're looking at is the occupancy grid. This is the mind of the computer; this is literally the data structure of the computer. It's figured out where it is in the lane; these are the cars next to it. You can see this: we're detecting the cars and tracking them, in three-sixty. Justin, does it make sense to show the front video? Do you want to do that? Sure, yeah. We can also enter into an auto cruise mode, where the car is driving itself, like that. So when we enter auto cruise mode, the digital dash changes pretty dramatically, to let us know that it is in auto cruise mode. It's got to be a big change. Yes. So now these are all the cars that we detect; notice the car in front of us, we're detecting these cars. You see the cars. And so what's happening here is we're detecting, we're localizing, we're connecting to the HD map, we create an occupancy grid, and this is the visualization of the occupancy grid. But if you look out the window and it's inconsistent with this, then you know that something is wrong. I would go back to manual mode and file a bug with NVIDIA. We'll get on it right away. And it's so much more sophisticated and difficult to represent this full 3D
space, rather than just some bounding boxes. So this really represents really advanced deep learning: to be able to identify the exact location and, rather than just the bounding box, know the extent of that car in the occupancy grid. It could be a little tiny Mini Cooper; it could be a super, super long truck. That's right. Okay, so: visualization of the occupancy grid. Now let's come back to the slides, please.

So what you saw earlier is detection, and you didn't see the occupancy grid itself, but you saw the visualization of the occupancy grid, which is, if you will, the UX, and I think it's going to be pretty important to the future of self-driving cars. You want the driver, you want the passengers, to know that the car has the situation in hand, and you want to be able to calibrate where it wants to drive against where you think it ought to drive.

Now let's talk about PilotNet. The fact of the matter is, when we're driving, we don't do any Newtonian physics. When we're driving, we're not doing any calculus. When we're driving, we just drive. And so the question is, how do we get a car to just drive? What I'm going to show you is this. This is BB8. This is the latest version of our network, and we've been teaching it how to drive. Now remember this: BB8 has no detection networks inside. When BB8 is driving, we didn't tell it: detect cone, detect lane, detect post, detect fence, detect cars, detect the absence of roads. We didn't tell BB8 anything. We just drove. We drove and drove and drove and drove, and BB8 is imitating us. What BB8 is going to do is imitate us. So the question is this: when we're driving, what do we see, and what do we know? It turns out we don't have to describe it using algorithms; we don't have to describe it using equations. We only have to do it over and over and over again, until BB8 generalizes, detects the features, and generalizes what driving behavior is. Ladies and gentlemen: BB8.
BB8 Test Driver: Oh yeah. Good. The universal sign for auto. There are no lanes here, ladies and gentlemen. That's barely a road. That is incredibly strange, that the car did that. No map, just imitation. And it learned how to drive in the dark. Got it.
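The training setup behind a behavior network like this can be sketched as plain supervised imitation: logged camera frames in, the human's steering command out. The tiny linear model and random stand-in data below are for illustration only; the real PilotNet is a convolutional network trained on real driving video:

    # imitation_sketch.py -- toy behavioral cloning: image in, steering angle out.
    import numpy as np

    rng = np.random.default_rng(0)

    # Logged training data: flattened "camera frames" and the human's steering.
    X = rng.normal(size=(1000, 64))     # 1000 frames, 64 "pixels" each
    w_true = rng.normal(size=64)
    y = X @ w_true                      # pretend human steering labels

    # Fit by least squares: the model imitates whatever the demonstrator did,
    # with no hand-written rules about lanes, cones, or roads.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    new_frame = rng.normal(size=64)
    print("predicted steering angle:", new_frame @ w)

The point of the demo is exactly this shape of learning: nobody enumerates what matters in the scene; the network infers it from repetition alone.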
Jen-Hsun Huang: What do you guys think? Autonomous machines. The thing that's really curious... I want to show you what's in BB8's AI mind, if that's okay. The question is, what did it see? We started driving, and eventually it learned how to drive just like us. The question is: what did it see? What are the things that it saw, that it learned and generalized, and eventually decided matter to driving? I think you'll be kind of surprised. Those sparkles are BB8's AI neurons; that's what's firing in this video. And look, it's kind of interesting: it's looking at the corner, because that car in particular is kind of close to us. Every now and then it checks the car in front, and checks the car to the right, and it's always looking at the lanes, making sure it's staying inside. Everything else is unimportant; wherever there are no sparkles, that's non-information to BB8. These are the things it figured out were important to its driving behavior. And as a result, it just stayed in the middle of the lane, all by itself, and did no calculations, because it knew that, largely, when we drove, we stayed in the middle of the lane. It knew that when we drove where there were no lanes, we stayed in the middle of the road. And it learned how to discern a mud road, a dirt road, in the dark, next to bushes: that we don't seem to drive over bushes; we seem to drive over the dirt road. Okay. Ladies and gentlemen: BB8.

So, today we're announcing... we talked about, at the beginning of the year, the enormous progress we've made, and we're now packaging it all up. Our strategy with DriveWorks is this: DriveWorks is an open platform. It's an open platform where Tier 1s, OEMs, and car companies have the ability to pick the pieces they would like to use, or the pieces they would like to replace, and together we will work as an industry to move autonomous driving forward. DriveWorks is an open platform, and we have reached Alpha 1. As you know, this is going to be an area of research and development for years to come, and even though we're going to see some self-driving cars get on the road in no more than the next couple of years, we're going to continue to enhance the software. Today we're announcing that DriveWorks Alpha 1 will be released to our early partners in October, after which they can use the pieces they like and replace the pieces they'd like to replace, and after that we will update it every two months. Just as we continue to learn as humans, our car will continue to learn, and the network will get better and better and better. Consider what we've achieved in just one year's time, and imagine where we'll be in another couple to three years. DriveWorks Alpha 1.

Well, the number of autonomous vehicles on the road is increasing, and we're working with some eighty companies, eighty partners, around the world; one after another, over the next several months and the coming year, they're going to be revealed. I think that the vision of having an AI computing platform on top of which cars can be built, with the ability to do perception and sensing,
localization and reasoning, planning and driving, the ability to have all of that on top of a deep learning platform, is really quite the right answer at this point, and we're seeing really, really rapid development across all of our partners.

Well, the work that you've seen so far is really at an intersection. Whether it's the company slide that I showed you or all of the different examples I've shown you so far, they share one thing in common: the work that NVIDIA is doing sits at the intersection of visual computing, which in our brain consumes the vast majority of our neural cortex, AI, and high-performance computing. The work that we do has to be achieved very, very quickly. This area of computing, which we call AI computing, will enable the future of intelligent machines. We're super excited about this area, so much so that several years ago we decided the world needs a processor designed specifically for this intersection. We started working on a project internally called Project Xavier, and today I'm announcing Project Xavier to you.

Project Xavier is basically an AI supercomputer SoC: seven billion transistors. To put seven billion transistors into context for you: seven billion transistors is equivalent to the largest CPU the world has ever made. The highest-performance server CPU, with the largest number of cores you can find, is just about seven billion transistors. This is the largest processor endeavor that I know of that we have ever done. Not only is it large, it is also multifunctional, and not only is it multifunctional, the throughput requirements are really quite tremendous: the ability to support HD cameras all over, lidars and radars, and the ability to do three fundamental things: deep learning, computer vision, and high-performance computing. These three fundamental elements of computing, we think, are going to be a super exciting area for us to innovate in, and we've taken an enormous chance to create Xavier. I'm just so incredibly excited. Seven billion transistors. Eight high-performance CPU cores inside. Five hundred and twelve of our next-generation GPU cores. A brand new computer vision accelerator. A video processor that handles 8K, processing it in full HDR, and the reason for that is, in the case of autonomous driving, we need a very, very precise black box inside the car, recording all the time, recording in HDR. And it's designed for ASIL C functional safety. This is the greatest SoC endeavor I have ever known, and we have been building chips for a very long time. So this is Project Xavier, our next-generation SoC. We'll have samples next year.
Well, let me show you what it looks like when you have Xavier. This is Drive PX 2. The Drive PX 2 motherboard includes two next-generation Tegra SoCs and two discrete GPUs; the Drive PX 2 in its full configuration is two Parkers and two Pascal GPUs. It performs twenty trillion deep learning operations per second, twenty TOPS, and a hundred and twenty SPECint, all in about eighty watts. So this configuration: twenty TOPS, a hundred and twenty SPECint, eighty watts. Well, that's approximately equal to a hundred and fifty MacBook Pros, in one little tiny computer that sits inside a car. Xavier is exactly the same architecture: it's twenty TOPS and a hundred and sixty SPECint, at twenty watts, on a little tiny board like this. This is Xavier. So just imagine what autonomous vehicles can do in the near future with Xavier. We're super excited by Xavier. We have plenty of time before next year, and I'll give you more details as we go.

Let me quickly summarize our announcements today. We have introduced an end-to-end deep learning platform. I introduced the new P40 and P4 today; they open up a brand new market for GPU deep learning. We have great partners for the enterprise, IBM, and now we're really excited to announce SAP. We now have the ability to take GPU deep learning from training all the way to inferencing in data centers: from training the networks, to applying the networks to the incredible number of queries that consumer applications have, to actually finding insight for your business. So the first thing is the announcement of the P40 and P4. The second set of announcements has to do with our self-driving car initiative. We are today announcing DriveWorks Alpha 1: all of the capabilities we talked about will be released to our partners, first in October, and then every two months after that. We announced a partnership with TomTom: together we will enable an autonomous driving platform from cloud to car. And then, lastly, the future of AI computing: the processor we call Xavier, one of the greatest endeavors of our company.

Well, I think you can get a sense now of why I think AI is going to be so important for the future of the computer industry and, frankly, for the future of the world. There are so many ideas now for applications of AI, new problems that we're able to solve that we weren't able to solve before. Whether it's AI for transportation, a ten-trillion-dollar industry where we have an opportunity to make a big difference; AI to revolutionize medicine; or AI to completely and utterly revolutionize society, with intelligent machines among us helping us do things that are mundane, helping us do things that are dangerous, or even helping us do things that we simply have no possibility of doing ourselves. This is a great new era of computing. I welcome all of you to GTC, and I look forward to seeing all of you today.
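As a rough back-of-the-envelope check on the figures quoted above (a sketch from the keynote numbers, not official benchmarking), the Xavier claim works out to about four times the deep learning throughput per watt of the full Drive PX 2:

    # efficiency_check.py -- rough perf-per-watt comparison from the quoted figures.
    drive_px2 = {"dl_tops": 20, "specint": 120, "watts": 80}
    xavier    = {"dl_tops": 20, "specint": 160, "watts": 20}

    for name, chip in [("Drive PX 2", drive_px2), ("Xavier", xavier)]:
        print(name, "->", chip["dl_tops"] / chip["watts"], "TOPS per watt")

    # Same deep learning throughput at one quarter of the power:
    print("efficiency gain:",
          (xavier["dl_tops"] / xavier["watts"]) /
          (drive_px2["dl_tops"] / drive_px2["watts"]), "x")   # -> 4.0 x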