This episode of Real Engineering is brought
to you by Brilliant.
A problem solving website that teaches you
to think like an engineer.
Last month Tesla held an event for their investors
revealing the advances they had made in their
autonomous driving capabilities.
Currently, most Tesla vehicles are capable
of enhancing the driver’s abilities.
They can take over the tedious task of maintaining
lanes on highways, monitor and match the
speeds of surrounding vehicles, and can even
be summoned to you while you aren't in the car.
Those capabilities are impressive and in some
cases even life-saving, but they are still a
far cry from full self-driving, requiring
regular input from the driver to ensure they
are paying attention and capable of taking
over when needed.
There are 3 primary challenges automakers
like Tesla need to overcome in order to succeed
in replacing the human driver.
The first of those is building a safe system.
In order to replace human drivers, the self-driving
car needs to be safer than a human driver.
So how do we quantify that?
We can’t guarantee accidents won’t occur.
Old Murphy’s Law is always in play.
We can start by quantifying how safe human
drivers are.
In the US, the current fatality rate is about
one death per one million hours of driving.
That includes humans being stupid and crashing
while drunk or looking at their phone, so
we can probably hold our vehicles to a higher
standard. But that can be our benchmark for now: our
self-driving vehicle needs to fail less than
once every one million hours, and currently,
that is not the case.
We do not have enough data to calculate an
accurate statistic here, but we do know that
Uber's self-driving vehicle needed a human
to intervene roughly every 21 kilometres,
meaning it failed every 13 miles.
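To put that intervention rate in rough context against the one-failure-per-million-hours benchmark, here is a back-of-the-envelope sketch. The 40 km/h average test speed is my own assumption for illustration, and an intervention is of course not a fatality, so this is a loose comparison at best:

```python
# Back-of-the-envelope comparison of Uber's intervention rate with the
# one-failure-per-million-hours human benchmark. The average test speed
# is an assumption, and an intervention is not the same as a fatal crash.

KM_PER_INTERVENTION = 21                 # roughly 13 miles, from the Uber data
AVG_SPEED_KMH = 40                       # assumed average urban test speed
BENCHMARK_HOURS_PER_FAILURE = 1_000_000  # ~1 fatality per million hours of human driving

hours_per_intervention = KM_PER_INTERVENTION / AVG_SPEED_KMH
shortfall = BENCHMARK_HOURS_PER_FAILURE / hours_per_intervention

print(f"Hours between interventions: {hours_per_intervention:.2f}")
print(f"Benchmark allows ~{shortfall:,.0f}x more hours per failure")
```

Even with generous assumptions, the gap between the two figures is around six orders of magnitude.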
[13] Which makes Uber's collision with a pedestrian,
who unfortunately passed away, even more shocking.
Supporters of self-driving vehicles were quick
to blame the pedestrian for stepping in front
of the vehicle in low light conditions [2],
but we cannot let our desire to advance the
technology make excuses for it.
The vehicle was using lidar sensors which
do not need light to see.
Yet, it made no attempt to slow down even
after the human occupant, who was not paying
attention, had noticed the imminent crash.
According to data obtained from Uber, the
vehicle first observed the pedestrian 6 seconds
before impact with its radar and lidar sensors.
At this point it was travelling at 70 kilometres
per hour [3].
It continued at this speed.
As the pedestrian's and vehicle's paths converged,
the computer's classification system struggled
to identify what the object in its view was,
jumping from unidentified object, to car, to bicycle,
with no certainty in the object's trajectory.
1.3 seconds before the crash the vehicle recognised
it needed to perform an emergency brake, but
didn't, as it was programmed not to brake
if doing so would result in a deceleration over
6.5 metres per second squared.
Instead, the human operator is expected to
intervene, but the vehicle was not designed
to alert the driver.
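The figures already given make it possible to check, with basic constant-deceleration kinematics, whether braking at first detection would have been enough:

```python
# Worked physics for the crash timeline, using the figures above and
# simple constant-deceleration kinematics as a simplification.

v = 70 / 3.6                 # 70 km/h converted to m/s
a = 6.5                      # the programmed deceleration cap, m/s^2
detect_s = 6.0               # seconds before impact the object was first seen

stopping_distance = v**2 / (2 * a)   # d = v^2 / 2a
stopping_time = v / a                # t = v / a
distance_at_detection = v * detect_s

print(f"Speed: {v:.1f} m/s")
print(f"Distance to pedestrian at first detection: {distance_at_detection:.0f} m")
print(f"Distance needed to stop at 6.5 m/s^2: {stopping_distance:.0f} m")
```

Roughly 117 metres of warning against a stopping distance of about 29 metres: even within the deceleration cap, braking at first detection would have stopped the vehicle with plenty of room to spare.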
A shocking design, considering the safety benchmark we set earlier.
The driver did intervene a second before impact
by engaging the steering wheel and braking,
bringing the vehicle speed to 62 kilometres
per hour.
Too little and too late to save this person.
Nothing on the vehicle malfunctioned.
Everything worked as programmed, it was simply
poor programming.
Here the internal computer was clearly not
programmed to deal with uncertainty.
Where a human would likely slow down when
confronted with something on the road that
it could not clearly identify, this programme
simply continued on until it could identify
the threat, at which point it was too late.
It struggled to identify the object and predict
its path even with high-resolution lidar.
So how can we improve safety?
A large part of that lies in the hardware
itself and the programming that goes into it.
Tesla unveiled its new purpose-built computer,
a chip specifically optimized for running
a neural network, which Elon stated was the
first of its kind.
It has been designed to be retrofitted into
existing vehicles when customers purchase
the full self-driving upgrade, so it is a similar
size and draws the same power as the existing
self-driving computer, at 100 watts. [4]
This has increased Tesla's self-driving
computer’s capabilities by 2100%.
Allowing it to process 2300 frames per second,
2190 frames more than their previous iteration.
A massive performance jump, and that processing
power will be needed to analyse footage from
the suite of sensors each new Tesla has.
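The quoted numbers can be sanity-checked against each other, and they work out to roughly a 21-fold jump, consistent with the stated 2100% figure:

```python
# Sanity-checking the performance jump from the numbers quoted above.

new_fps = 2300
old_fps = new_fps - 2190   # "2190 frames more than their previous iteration"
speedup = new_fps / old_fps

print(f"Previous computer: {old_fps} fps")
print(f"Speedup: about {speedup:.0f}x")
```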
On the right side of the board are all connectors
for the different sensors and cameras in the vehicle.
That currently consists of 3 forward-facing
cameras, all mounted behind the windshield.
One is a 120-degree wide-angle fisheye lens,
which gives situational awareness, capturing
traffic lights and objects moving
into the path of travel.
The second camera is a narrow-angle lens which
provides longer range information needed for
high speed driving like on a motorway, and
the third is the main camera, which sits in
the middle between these two applications.
There are 4 additional cameras on the sides
of the vehicle which check for vehicles unexpectedly
entering your lane and provide the information
needed to safely enter intersections and change lanes.
The 8th and final camera is located at the
rear, which doubles as a parking camera, but
has also saved more than a few teslas from
being rear-ended.
*cut to footage of Tesla speeding up autonomously
to avoid a crash*
The vehicle does not rely completely on visual data.
It also makes use of 12 ultrasonic sensors
which provide a 360-degree picture of the
immediate area around the vehicle, and 1 forward
facing radar.
Finding the correct sensor fusion has been
a subject of debate among competing self-driving
companies. Musk recently stated that anyone relying on
lidar sensors, which work similarly to radar
but utilize light instead of radio waves,
is doomed, and that it's a fool's errand.
To see why he said this let’s plot the strengths
of each sensor on a radar chart like this,
where we rank each feature on a scale of zero
to five, five being the best and zero the worst.
Lidar would look something like this.
[6] It's got great resolution, meaning it
provides highly detailed information on what it's
seeing. It works in both low and high light,
is capable of measuring speed, has good range,
and works moderately well in poor weather conditions.
Its biggest weakness, however, is why Musk slated it:
the sensors are expensive and bulky.
And this is where the second challenge of
building a self-driving car comes into play.
Building an affordable system that the average
person will be willing to buy.
Lidar sensors are those big sensors you see
on Waymo, Uber, and most competing self-driving vehicles.
Musk is more than aware of lidar's potential;
after all, SpaceX utilizes it in their DragonEye
navigation sensor. [9] Its weaknesses
are simply too much of a sticking point for
Tesla for now, who are focused on building
not just a cost-effective vehicle, but a good
looking vehicle.
Lidar technology is gradually becoming smaller
and cheaper, making it more accessible,
but it is still far from cheap.
Waymo, a subsidiary of Google’s parent company
Alphabet, sells its lidar sensors to any company
that does not compete with its plans for a
self-driving taxi service.
When they started in 2009 the per unit cost
of a Lidar sensor was around seventy-five
thousand dollars, but they have managed to
reduce that cost to seventy-five hundred dollars
in the past ten years by manufacturing the
units themselves.
From what I can tell, Waymo vehicles use 4 lidar
sensors, one on each side of the vehicle.
Placing the total cost for just these sensors,
for a third party, at thirty thousand dollars.
Not far off the total cost of a base model
Model 3.
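The arithmetic behind that comparison, using the figures above (the ~$35,000 base Model 3 price is the approximate figure at the time):

```python
# Lidar cost versus a base Model 3, using the figures quoted above.

UNIT_COST_2009 = 75_000   # per-unit cost when Waymo started
UNIT_COST_NOW = 7_500     # after ten years of in-house manufacturing
SENSORS_PER_CAR = 4

total = SENSORS_PER_CAR * UNIT_COST_NOW
reduction = UNIT_COST_2009 / UNIT_COST_NOW

print(f"Cost reduction per unit: {reduction:.0f}x")
print(f"Lidar suite for a third party: ${total:,}")   # vs ~$35,000 base Model 3
```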
This sort of pricing clearly doesn’t line
up with Tesla’s mission “To accelerate
the world’s transition to sustainable transport”.
This issue has pushed Tesla towards a cheaper
sensor fusion setup.
Let’s look at the strengths and weakness
of the 3 other sensor types to see how Tesla
is making do without lidar.
First, let’s look at radar.
This is the radar sensor on the Tesla Model 3.
Radar works wonderfully in all conditions.
The sensors are small and cheap, capable of
detecting speed, and its range is good for
both short and long distance detection.
Where they fall short is the low-resolution data
they provide, but this weakness can easily
be augmented by combining it with cameras.
Regular video cameras look like this on our
radar chart.
They have excellent range and resolution, provide
colour and contrast information for reading
street signs, and are extremely small and cheap.
Combining radar and cameras allows each to
cover the weakness of the other.
We are still a little weak in proximity detection,
but using two cameras in stereo can allow
the cameras to work like our eyes to estimate distance.
When fine-tuned distance measurement is needed
we can use our ultrasonic sensors, which
are these little circular sensors dotted around
the car.
This gives us solid performance all around
without relying on large and expensive sensors,
but Tesla is suffering from a bit of a redundancy
problem with only one forward facing radar.
If that fails there isn’t a second radar
sensor to rely upon.
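The radar-chart comparison can be captured in a small table. The 0-to-5 scores below are my own rough reconstruction of the discussion, not official figures, and "fusing" here just takes the best sensor per attribute — a crude stand-in for real sensor fusion:

```python
# Rough 0-5 scores per sensor, reconstructed from the discussion above
# (my own estimates, not official figures). "Fusion" here simply takes
# the best available sensor for each attribute.

scores = {
    "radar":      {"resolution": 1, "range": 4, "speed": 5, "weather": 5, "cost": 5},
    "camera":     {"resolution": 5, "range": 5, "speed": 2, "weather": 2, "cost": 5},
    "ultrasonic": {"resolution": 2, "range": 1, "speed": 2, "weather": 4, "cost": 5},
}

fused = {attr: max(s[attr] for s in scores.values())
         for attr in scores["radar"]}
print(fused)
```

With these estimates, every attribute has at least one strong sensor covering it — the point of the camera-plus-radar-plus-ultrasonic combination.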
This is a cost-effective solution, and according
to Tesla their hardware is already capable
of allowing their vehicles to self-drive.
Now they just need to continue improving on
the software and Tesla is in a fantastic position
to make it work.
When training a neural network, data is key.
Waymo has millions of kilometres driven to
gain data, but Tesla has over a billion.
33% of all driving in Teslas is done with autopilot engaged.
This data also extends past just while autopilot
is engaged.
It also receives data in areas where autopilot
is not available, like city streets.
Accounting for all the unpredictability of
driving requires an immense amount of training
for a machine learning algorithm, and this
is where Tesla’s data gives them an advantage.
I won’t go through the intricacies of training
a neural network again, as I have covered
it in the past in my machine learning versus
cancer video, but the key takeaway you need
is that the more data you have to train a
neural network, the better it's going to perform.
Tesla's machine vision does a decent job,
but there are plenty of gaps in their system.
A channel here on YouTube by the name of “Greentheonly”
has managed to hack into his Tesla’s vision
to show us what the software actually sees.
Here we can see that the software places bounding
boxes around objects it detects, while categorising
them as cars, trucks, bicycles and pedestrians.
It labels each with a velocity relative to
the vehicle and which lane it occupies.
It highlights drivable areas, marks the lane
dividers, and sets a projected path between them.
For now this data allows autopilot to operate
on highways, but it frequently struggles with
more complicated scenes.
Here a pedestrian is not detected.
Here it struggles to tell if a roller skater
is a bike or a pedestrian, and here it drives
onto the wrong side of the road when there
is a gap in the lane dividers. [12]
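The kind of per-object output visible in Greentheonly's footage could be modelled like this. To be clear, this is a hypothetical sketch mirroring what the video overlay shows, not Tesla's actual internal data format:

```python
# A hypothetical sketch of the per-object data the hacked footage shows:
# a class label, a bounding box, a velocity relative to the car, and a lane.
# This mirrors the on-screen overlay, not Tesla's real data structures.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                   # "car", "truck", "bicycle", "pedestrian"
    bbox: tuple                  # (x, y, width, height) in image pixels
    relative_velocity_ms: float  # negative means closing on us
    lane: int                    # 0 = our lane, -1 = left, +1 = right

frame = [
    Detection("car", (412, 230, 80, 60), -2.5, 0),
    Detection("pedestrian", (610, 250, 20, 50), 0.4, 1),
]

# e.g. flag anything closing fast in our own lane
threats = [d for d in frame if d.lane == 0 and d.relative_velocity_ms < -2.0]
print([d.label for d in threats])   # ['car']
```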
Tesla of course is more than aware of these
problems, and is gradually improving its
software through firmware updates.
Adding functionality like stop line recognition.
And this latest self-driving computer is going
to radically increase the computer's processing
power, which will allow Tesla to continue adding
functionality without jeopardising the refresh
rates of information.
But even if they manage to develop the perfect
computer vision, programming the vehicle on
how to handle every scenario is another hurdle.
This is a vital part of building not only
a safe vehicle, but a practical self-driving vehicle,
which is our third challenge.
Programming for safety and practicality often
conflict with each other.
Take the AI program Dr. Tom Murphy developed
to do something relatively simple: play Tetris.
This program worked brilliantly, but Tetris
always wins.
The game is unbeatable, and you will eventually lose.
When confronted with this outcome the program
did something to ensure it wouldn’t lose.
It paused the game.
If we program a vehicle purely for safety,
its safest option is to not drive.
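Murphy's observation can be reproduced with a toy agent: if every real move eventually loses, and "pause" is on the menu, a purely loss-avoiding objective picks pause. The game states and scoring below are invented for illustration, not Murphy's actual program:

```python
# Toy version of the Tetris lesson: an agent that only optimises
# "don't lose" will pick "pause" whenever every real move leads to a loss.
# The actions and outcomes here are invented for illustration.

def outcome(action: str) -> int:
    # -1 = guaranteed eventual loss, 0 = game continues (pausing never loses)
    return 0 if action == "pause" else -1  # endgame: every real move loses

def choose(actions):
    # pick the action with the best outcome
    return max(actions, key=outcome)

print(choose(["left", "right", "rotate", "drop", "pause"]))   # pause
```

The fix is not a smarter agent but a better objective: pure safety and actually getting anywhere have to be traded off explicitly.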
Driving is an inherently dangerous operation,
and programming for the multitude of scenarios
that can arise while driving is an insanely
difficult task.
It’s easy to say “Follow the rules of
the road and you will do fine”, but the
problem is, humans don’t follow the rules
of the road perfectly.
Take a simple four way stop as an example.
The rules of the road make this seem like
an easy task.
The first person to arrive at the intersection
has the right of way, and in the case that
two vehicles arrive at the same time,
the vehicle to the right has the right of way.
The problem is, no human follows these rules.
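The letter-of-the-law version is easy to code — which is exactly why it fails against humans who don't follow it. A minimal sketch of the rule as stated, with the right-of-way lookup being my own hypothetical helper:

```python
# The four-way-stop rule as written: first to arrive goes first;
# on a tie, the vehicle to the right has priority.
# Headings are directions of travel; RIGHT_OF is a hypothetical helper
# mapping each heading to the heading of the car on its right.

RIGHT_OF = {"N": "W", "W": "S", "S": "E", "E": "N"}

def next_to_go(cars):
    """cars: list of (arrival_time, heading). Returns the heading that may go."""
    earliest = min(t for t, _ in cars)
    tied = [h for t, h in cars if t == earliest]
    if len(tied) == 1:
        return tied[0]
    # tie: the car with no one waiting on its right goes first
    for h in tied:
        if RIGHT_OF[h] not in tied:
            return h
    return tied[0]  # all four tied: the rule gives no answer, pick arbitrarily

print(next_to_go([(2.0, "N"), (1.0, "E")]))   # E arrived first
print(next_to_go([(1.0, "N"), (1.0, "W")]))   # tie: W is on N's right
```

Note the all-four-tied case: the written rule simply has no answer there, a small preview of why rule-following alone deadlocks.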
When Google began testing their driverless
cars in 2009 this was just one of the issues
they ran into.
[11] When it arrived at one of these 4-way
junctions, humans kept nudging forward, trying
to make their way onto the junction before
their turn.
The Google car was programmed to follow the
letter of the law, and just like our Tetris
program from earlier, the self-driving vehicle
was put in a no-win scenario and got stuck at the junction.
Scenarios like this pop up everywhere and
require programmers to break the letter of
the law and be a little aggressive.
Sometimes the computer will need to make difficult
decisions, and may at times need to make a decision
that endangers the life of its occupants or
people outside of the vehicle.
That is just a natural byproduct of an inherently
dangerous task, but if we continue improving
on the technology we could start to see road
deaths plummet, while making taxi services
drastically cheaper and freeing many people
from the financial burden of purchasing a car.
Tesla is in a fantastic position to gradually
update their software as they master each challenge.
They don’t need to create the perfect self-driving
car out of the gate, and with this latest
computer they are going to be able to continue
improving their technology.
This is the fantastic thing about software.
It is easily updatable, and Brilliant have
improved their software by allowing courses
to be downloaded for offline use on iOS, so
you can work on learning new things even on
an underground train or a plane.
Brilliant also recently released their fantastic
course on Python coding called Programming
with Python.
Python is one of the most widely used programming
languages, and it is an excellent first language
for new programmers.
It can be used for everything from video games
to data visualization to machine learning
for self driving vehicles.
This course will show you how to use Python
to create intricate drawings, coded messages
and beautiful data plots, while teaching you
some essential core programming concepts.
This is just one of many courses on Brilliant.
They also just released a Computer Science
Essentials course, and have many more due
to be released soon on things like Electricity
and Magnetism.
If I have inspired you and you want to educate
yourself, then go to
and sign up for free. And the first 500 people
that go to that link will get 20% off the
annual Premium subscription, so you can get
full access to all their courses as well as
the entire daily challenges archive.
As always thanks for watching and thank you
to all my Patreon supporters.
If you would like to see more from me the
links to my instagram, twitter, subreddit
and discord server are below.

The Challenge of Building a Self-Driving Car
100 thoughts on “The Challenge of Building a Self-Driving Car

  • May 29, 2019 at 7:34 am

    Entropy is the problem. We have to find a way around this law of nature

  • May 29, 2019 at 10:25 pm

    >The car was not programmed to alert the driver when it didn't know what to do.

    Excuse me. What? Isn't that the entire fucking reason the person is in the car…?

  • May 31, 2019 at 4:25 pm

    Human – How to eradicate AIDS?
    Computer – Kill all the patients.

  • June 1, 2019 at 4:19 pm

    "the vehicle was using LIDAR sensors"

    Tesla does not use LIDAR sensors.

  • June 1, 2019 at 7:45 pm

    The true challenge will be getting people to buy a self driving car. I myself will not be doing so. So in reality the self driving car should be made for the government to endorse it and then force it upon the populous.

  • June 2, 2019 at 10:41 am

    13:07 – what the heck is a 4 way stop? Is that common in the US?

  • June 2, 2019 at 2:40 pm

    What happens when auto drive meets road rage?

  • June 2, 2019 at 3:54 pm

    Elon musk do
    What we dream and dare to do

  • June 3, 2019 at 4:40 am

    thats an FPGA board that tesla has shown

  • June 6, 2019 at 4:48 am

    Ideally, we would just remove all humans from driving.

  • June 6, 2019 at 2:00 pm

    Driverless cars al here arriving rabidly and soon they’ll take the world by storm!

  • June 9, 2019 at 8:10 am

    @2:20 Phoenix not Pheonix

  • June 9, 2019 at 4:38 pm

    I call it the Darwinism System
    If people don't want to use their common sense instinct then let urban nature decide their fate.

  • June 10, 2019 at 6:42 pm

    well the person crossed the road illegally, there was not a walk way on the road….

  • June 11, 2019 at 4:23 am

    Such great detail and such a great story to wrap the concepts around.. bravo Dude.. you're the best !!

  • June 12, 2019 at 3:51 pm

    They also did not factor when these cars are 10 years old being sold used for $500 with rust and dents and being drove by poor college kids , who do not even know how to maintain air in a tire.
    These things have to be 100% reliable ,100% of the time . Even at 99% they are a death trap to everyone.
    Name a man made mobile machine that is 100% reliable?
    I had 2 week old cars break down on me but unlike a driverless car it did not drive into a building or rear end a stopped car.

  • June 12, 2019 at 8:31 pm

    Seems video is related to Tesla only. Would be better to speak of other car manufacurers

  • June 13, 2019 at 5:41 am

    Here my question what if these cameras and sensors failed, how would we combat that some of us live in quite harsh environments that could cause damage?

  • June 13, 2019 at 4:16 pm

    When uber was still in Tempe before the incident I avoided them at all costs, never trusted them, especially when they grouped up in convoys and congested everything. Waymo's better but only cuz their slow af.

  • June 16, 2019 at 3:05 am

    Elon musk is a legend

  • June 16, 2019 at 10:57 am

    How are you supposed to keep your car sensors clean enough to work, lots of places everyones car is covered in snow salt ice mud ect.

  • June 16, 2019 at 5:48 pm

    Ya know maybe a solution to the computer's inability to recognize a threat would be to simply program it to stop for ANY object in the direct path of the vehicle, forgoing the recognition process for that object.

  • June 17, 2019 at 1:10 am

    Self-driving cars always follow the rules. So self-driving cars would work better if there were more self-driving cars on the road. But the problem is we can't get regulators and lawmakers to mandate self-driving cars unless we can make them work better.

  • June 17, 2019 at 10:18 pm

    DONT stay in middle lane if you are not overtaking or significantly faster than the cars in the right lane.

    The Tesla in 0:34 stays in the middle.

    Dont you have the rule to drive on the right side? We do have it Germany and it’s time saving af

  • June 18, 2019 at 3:58 am

    How did you calculate that the autonomous cars didn't have less than one fatality per 1 million driving hours?

    An intervention every 19 miles isn't proof that a crash would even happen, never mind a fatal one. Plus, that was Uber which isn't even considered to be the second best at this technology. The first best would be Tesla.

  • June 19, 2019 at 6:36 am

    you know the cars are about 4 times as safe as human drivers right? you are spreading miss information dude.

  • June 22, 2019 at 4:58 pm

    Pheonix? Looks like Grammarly should put some ads on this video.

  • June 25, 2019 at 6:17 am

    I'm gonna start a company which builds self driving jets

  • June 30, 2019 at 2:55 am

    In summary: Too many variables.

  • July 2, 2019 at 2:09 pm


  • July 6, 2019 at 1:13 pm

    A 4 way stop sign intersection? STUPID! At least have a priority road. What inbred, gun happy, moron electing country would ever implement such a ridiculous design?

  • July 6, 2019 at 1:30 pm

    SCENARIO – A self driving car is driving along and encounters a situation where there are NO good outcomes* what does it do? *It recognizes that It cannot stop in time to avoid hitting a pregnant woman that has stepped out into the road. It will hit the pedestrian at such a force that the pedestrian will suffer harm. On one side of the vehicle is a school bus and on the other side is heavy oncoming traffic. Doing nothing is not an option and alerting the human occupant is useless as the human will be slower to react. You have to include ethics and a priority list into the programming. Obviously the weak link in self driving cars is people.

  • July 7, 2019 at 3:43 pm

    For a four-way-stop you show a left hand drive but say they must yield to the right. I don't know how it works in continental Europe but in the US simultaneous 4 way stops yield to the left. I assume you were referring to right-hand drive laws, it was just confusing since your visual showed left-hand drive.

  • July 9, 2019 at 3:15 am

    Radar measures only radial velocity and is inherently inferior (compared to a lidar) at measuring non-metallic objects. Whether computer vision, can solve these problems in challenging light conditions, remains to be seen.

  • July 12, 2019 at 5:41 am

    Can you make a video about sonic trains i mean you do have a lot of videos to do with transportation

  • July 13, 2019 at 5:19 am

    Hello, can you make a video on how far are we from flying cars/drones. For price around 35k per drone. Please

  • July 15, 2019 at 12:30 am

    Great explanation. Thanks!

  • July 15, 2019 at 5:05 am

    If my car was able to self drive I would never complain about parking again. It would drive me to downtown, I would get out and tell it to go home until a few hours later when it was time to pick me up again. Parking lots would be a thing of the past.

  • July 15, 2019 at 7:13 am

    Level >0: From our species evolving and walking upright, to eventually creating the wheel, rickshaws, and carriages.
    Levels 0.0-1.0: From the early locomotive and coal and steam engines, to the first cars that ran on electricity in the late 1890s until years later Ford and his cornies pushed things back with fossil fuel model cars and tech was held back decades.
    Levels 1.1-2.0: From having braking power, power steering, automatic seat belts, to cruise control and innovations throughout the 20th century up til the mid to late 2000s when parallel self parking cars and simple scanning tech ( lidar) could alert drivers of objects nearby, even without cameras when backing up, which also came about.
    Levels 2.1-3.0 or less: By the early to late 2000s (up to 2019) cars became a tad more sophisticated, with better camera, controls, balance correction, lidar, and better cruise control and almost actual self driving capabilities. They can feasibly go for miles without total human interaction, but its little more than very good cruise control, maintaining appropriate speeds and distances from objects using lidar. But people could not go to sleep, or zone out, or look away from the road, nor take their hands off the wheel, even if the car was doing 3/4 of the work. Fatalities after unmanned tests of the supposedly superior, fully autonomous self driving cars by tech companies proved that we certainly dont have anywhere near the perfect level 5, or even slightly less perfect level 4 cars of "sci fi' science possible". Regardless of what is created behind closed doors or slowly becomes available to the wealthy or slightly well to do; until a thing is available to the general public at large, and works well, its still a ways away. Most should have level 2 that can self park and use lidar within 10 years domestic, 20 worldwide, unless level 3 with better tech becomes more affordable and widely available by well into the 2030s.
    Level 4 should become a worldwide reality within 20 years of that, and level five, fully autonomous, self driving, with never a single moment of human interaction needed, other than destination input or inquery, should arrive in most nations well before the end of the 21st century. Lets say 2070s roughly, assuming we dont nuke ourselves or die from some other nonsense of our own misdeeds. CIAO! ^~^

  • July 20, 2019 at 9:23 pm

    OR if people would understand that rules are meant to ensure safety and should be followed, automated cars shouldn't have any issue.

    What I mean is: if I'm the programmer that needs to write a car's brain I will do that by using the official laws as reference and not my personal opinions.
    If my programmed car faces a human that broke one or more rules and got into an accident it surely can't be my car's fault.

    Of course as you said in the video there could be specific and unique cases and those should be managed with the safest course of action possible.

    If all cars in the world would be replaced with automated ones in just a snap of a finger I can assure you car accidents would be right at 0%.
    But then this 12:55 happens…

  • July 21, 2019 at 8:34 am

    I think the big question is not if Tesla will eventually achieve self driving cars, but more if they will share their data and technology for us engineers to keep expanding its capabilities 😉

  • July 21, 2019 at 5:49 pm

    To be fair with the Uber-Car: the predestrian is at 100% fault here. He crossed a street where it was forbidden ( because no pedestrian-path) was there and he didnt see the car coming towards him with moderate speed and headlights on. and in the POV shot of the car you see that the pedestrian didnt even look at the car. he was completely unaware of the situation, my guess is he was drunk or drugged.

    I am not defending the car, you brought up its flaws after the incident at 1:49. But in this case it really doesnt matter if it was a computer or a person driving. in the video you cant see the pedestrian until he is right in front of the car.

  • July 22, 2019 at 8:26 pm

    AI: Why must we drive cars?
    Humans: For human transport.
    AI *Eliminates Humans*: Mission accomplished. Zero Vehicle Accidents.

  • July 23, 2019 at 5:36 am

    LMAOOO “to ensure that it didn’t lose the, it paused the game”

  • July 23, 2019 at 12:04 pm

    Great video and information provided -Look forward in getting my self driving vehicle in the future

  • July 24, 2019 at 7:40 am

    I don't know why we need driverless cars. Driving is so easy. It's the most easiest thing to do.

  • July 25, 2019 at 6:23 pm

    I'll get my model 3 in mod august… this scares me tho…

  • July 26, 2019 at 11:55 pm

    So the take away I got from this is remove all the humans from the drivers seats and the cars will drive themselves just fine. Sounds good to me, we need to knock off some of these fucknuggests on the road.

  • July 27, 2019 at 1:33 pm

    Tesla sucks without grills

  • July 28, 2019 at 12:41 am

    All this because,they can't screw drivers badly enough to make uber profitable.Just get everybody off the street!Problem solved.

  • July 28, 2019 at 1:48 pm

    You said works on iOS do you know when it will work for Android

  • July 30, 2019 at 12:36 am

    Why need self driving cars ? Why not change humans in to cars ?

  • July 30, 2019 at 1:06 am

    So you’re saying that someone can buy the sensors from Tesla, and install them and the Self driving motherboard in their car and get self driving?

  • July 30, 2019 at 1:59 am

    What are the issues if we don't drive the model 3 every day? As we are retired we will only drive for vacations or around town. How should we charge it if we are not driving it every day? Will it keep a charge or be harmful if we don't drive for a week or more? Please do an in depth study report on how long the battery life will last and the lifetime of the battery for the long range model 3.

  • July 30, 2019 at 6:21 pm

    where I live the cheapest Tesla car is around 70000 USD…

  • July 31, 2019 at 7:46 am

    china is the best place to train for sure

  • August 1, 2019 at 4:36 pm

    Misspelled 'phoenix, arizone'.

  • August 2, 2019 at 1:24 am

    Hello, very interesting. If I display this video and I use some screenshots from your video as slides onto a presentation, do you give authorization if we credit the channel and link to the video?

  • August 2, 2019 at 5:55 pm

    9:12 oh nice, but, what does the system do when the car goes over mud that smear that sensor with a considerable block of mud? or in windy rain conditions?

    the biggest problem with the self driving idea, is that, it works in a bubble, but when things are bound to go wrong, is that we need that safe guard the most.

  • August 3, 2019 at 4:54 pm

    You should re-title this: "The challenge of building a terrorist weapon without the suicide bomber".
    In the current global / political environment – Self driving car bombs can not be permitted into any major populated city,
    nor allowed close approach to civilian assets, government buildings, military bases, energy plants, chemical plants, factories, etc. etc.

    The Nightmare of self driving weapons is the Big Hack – turning millions of them into self crashing vehicles with one ''software update''.
    Or just stopping them all dead in the road, nationwide, world wide… dead cars blocking everyone.

  • August 4, 2019 at 10:16 am

    Affordable and safety are the keys…. thanks

  • August 5, 2019 at 5:52 pm

    What keeps me wondering is why haven’t people realised the obvious (but quite difficult) solution of have everyone use a self driving car which in a way would negate the “not following the law” problem because everyone would in theory be using a vehicle that followed the law precisely. And as another benefit of this the cars could be linked to a massive hive network that could negate any no win situations by “nudging” a car to move or to “go first” at a crossroad.

  • August 6, 2019 at 1:37 pm

    If all cars were self driving, there wouldn't be a need for drivers and all cars could communicate to eachother their location and other data points. Then road laws would be followed and the likely hood for accidents would plummet

  • August 13, 2019 at 3:39 am

    Yeah but can it DRIVE crysis?

    I'll see myself out..

  • August 15, 2019 at 12:20 pm

    So is there a case for culpable homicide?

  • August 16, 2019 at 2:28 am

    Dam your channel is good ! I watch the show with elon on self driving

  • August 17, 2019 at 2:44 pm

    Self driving car: exists
    Some crazy motorbike: i'm gonna end this car's whole career

  • August 17, 2019 at 3:25 pm

    So anyone driving a Tesla is also a beta tester for their self driving tech? Would they be compensated monetarily or just with the satisfaction that they're helping out make the system better?

  • August 25, 2019 at 7:58 am

    Your graphics are awesome how do you do that?

  • August 27, 2019 at 8:12 am

    "no humans follow the rules of the road" as related to the four way stop? I always do. And when at a four way I've avoided many accidents by obstinately waiting for my turn and judiciously taking it. Especially when I lived in Tennessee. Most people thought they were being polite, waving me on when it wasn't my turn. Nope. Sit. They go. My turn? Full throttle. So…. 0-1, easy digital conversion.

  • August 31, 2019 at 1:53 pm

    I don’t like self driving cars. They take away the fun of driving a car. Anyone who buys a self driving car is just a plain old lazy bastard

  • September 1, 2019 at 7:15 am

    Cars need to get smaller and less powerful.

  • September 1, 2019 at 5:46 pm

    If you want to program cars you need to program pedestrians(A.I) 🙂

  • September 2, 2019 at 1:18 pm

    We already made one. We made the hardware not the software. I'm talking about Tesla.

  • September 3, 2019 at 6:37 am

    I would say that self-driving cars will work when ALL cars are self-driving.
    But for now, when you mix human drivers and computer drivers, you're going to have crashes until the technology evolves.

  • September 3, 2019 at 3:31 pm

    Dumbass person, didn’t use a crosswalk.

  • September 3, 2019 at 3:35 pm

    If that is difficult, imagine being in Indian city traffic. Those cars will stay at the same place for hours.

  • September 3, 2019 at 5:06 pm

    It can't just be safer than the average driver, it has to be safer than the owner.

  • September 4, 2019 at 12:20 am

    FYI: Phoenix (***Pheonix) is spelled incorrectly @2:19.

  • September 4, 2019 at 3:48 pm

    I highly doubt self-driving will ever be done to my satisfaction! IMHO, if a car is supposed to be reliably self-driving, then it should also be liable if it does have an accident. Not the driver, as the driver is just a passenger at that point. And this is what will never happen. The car, as good as they can make it, will still require the driver to be responsible at all times, which simply is not feasible. If you are distracted doing something else, how can you take over in a split second and become a responsible driver for a machine that you first have to diagnose as having failed to the point that you do need to take over? This is exactly why B737 Max aircraft have been grounded: because of failed automation protocols. And guess what, it is the pilot in command's fault, not the airplane's fault, even though they are grounded. Funny how corporate America escapes liability.

    And the last issue I have is that testing of automation is even allowed to happen in live traffic, putting other people at risk, and if anything does happen it is the driver's liability and not Tesla's at all. They remain squeaky clean while you test drive their product and assume its liability for them. And they get to charge you money for that service on top of it, like your insurance premium and the price you pay for the autopilot function.
    If these cars really are statistically safer to drive by a wide margin, then your insurance rates should also be cheaper by a wide margin, so I ask you: why are they not? Who is being taken for a ride by corporate America in this whole equation?

    P.S. Every single accident that I have seen reported involving a Tesla has always been caused by the Tesla. I have not seen or heard of a Tesla getting involved in an accident caused by a human driver other than the Tesla owner, even though they are supposedly safer than human drivers. I find that interesting.

  • September 6, 2019 at 2:45 pm

    It isn’t “poor programming”. It is obviously important to keep such design features in mind, and yes, they are important, but the company and the driver should have been more careful about these situations. Programs can’t be built to perfection; over time they “learn” to become better, and as developers learn from the development process they recognize such flaws and make advances toward being fail-proof. It is wrong to just blame the programmers, because we learn only when we actually get data from using the thing.

  • September 8, 2019 at 1:25 am

    Asimov was right =)

  • September 8, 2019 at 1:43 am

    Fuck off…Obviously the settings were changed by the driver. Not the self driving car, but human stupidity.

  • September 8, 2019 at 11:02 pm

    bigger challenge is to walk in front of one

  • September 12, 2019 at 2:33 pm

    So true.

  • September 13, 2019 at 8:47 pm

    idk what anyone says, the pedestrian was completely at fault.
    wearing dark clothes, crossing a main road, not at a junction, and not looking for the very bright headlights

    natural selection at its finest

  • September 14, 2019 at 1:55 pm

    5:35 Read captions.
    Where's the footage?

  • September 16, 2019 at 1:41 am

    Main problem: Humans

  • September 18, 2019 at 9:07 pm

    From what I read, Uber disabled the human-detection and auto-braking system that Volvo had because they thought it would interfere with their own system. The system reports showed that the Volvo system detected the pedestrian and would have stopped, but was blocked by Uber’s programming.

  • September 20, 2019 at 12:13 am

    underground magnetic levitation taxis…iz allz im sayin

  • September 21, 2019 at 5:28 pm

    so you're saying the human factor is a challenge for cars to drive themselves because people don't follow traffic rules

  • September 22, 2019 at 6:31 am

    I doubt whether it is possible in India, because no drivers follow any lanes;
    they just need a road.

  • September 22, 2019 at 7:32 pm

    "how safe drivers are"
    Not at all.

  • September 23, 2019 at 7:20 am

    The problem with Tesla is that middle class tax payers are forced to subsidize rich people's cars.

  • September 24, 2019 at 4:01 am

    so does the neural network exist on the tesla's computer and the car just uploads the raw data to the tesla servers to be distributed to other cars for their AI?

  • September 24, 2019 at 10:13 am

    Neural networks are extremely successful, considering how primitive they still are. Google and Tesla hurry too much, as it is quite certain they will surpass human driving skills in 20 years or so, but they do need their time. A computer network the size of a top supercomputer today absolutely smashes human driving skills, because it can have the complexity of a human brain yet is much faster. As we become able to miniaturize car computers to match the capacity of today's top supercomputers, self-driving cars will be a reality and much safer. That is, if no progress is made in understanding the emulation of the human brain. This means 20 years max, or anytime in between if a breakthrough in our understanding occurs meanwhile.

  • September 25, 2019 at 7:23 pm

    There are simply way too many variables in real-life everyday situations for computer programmers to write them all out. And the government would never allow these cars to be put on the streets for the time needed to solve these problems through trial and error.

  • September 28, 2019 at 6:05 am

    Need to develop a street directory first.

  • October 1, 2019 at 9:44 pm

    Anti-stupidity programming? Not happening anytime soon.

  • October 7, 2019 at 4:55 am

    The REAL challenge in designing a self-driving car is that the problems you face when there are only a very few self-driving cars on the road are very different from the problems you will face designing a self-driving car that is one among many self-driving cars. So far I have not seen a single person mention this as a problem yet to be overcome. Maybe I'm just not paying sufficient attention.

    It's the four-way stop sign problem, as an example of the general problem: if you are the only self-driving car in the neighborhood, you can develop an algorithm to make this work well enough; you just design your self-driving car's algorithm to always be the most cautious one, because you are really optimizing for lack of bad press and not top efficiency.

    However, it is a very difficult problem because there are two distinct modes of decision making: we take turns – I go then you go – or the next car in line going clockwise around the intersection goes. The problem is that it is normal to be constantly shifting back and forth between these two modes of decision making, and it is often very unclear to all of the drivers around the intersection when to shift modes. Then, there are the people who are just jerks, or in a huge hurry, or who aren't paying attention and just go when they aren't supposed to. This is the normal condition – a little chaotic – a little confusing. At moments like this having a distribution of driver personalities from more aggressive to less aggressive actually helps to solve the problem. At some point the most assertive [or aggressive] driver is just going to go. What makes this especially complicated is that communication between drivers is very limited and fraught with errors.

    Drivers normally make decisions about how to sort out the 'who goes next' problem based upon a variety of inputs and experiences that vary significantly. There are drivers who are more or less experienced, more or less intelligent, more or less distracted, and more or less assertive – that is a lot of variables to control if you are a human driver, much less an algorithm. Which is why there continue to be accidents at four-way stops. It is difficult to control for the chaos.

    With only a few self-driving cars around, optimizing towards the least assertive stance makes a lot of sense: just drive like a granny and everyone will go around you. But what happens when self-driving cars are in the vast majority most of the time? You don't want to become a Looney Tunes episode where every four-way stop turns into "After you, no after you, no after you, no after you…" grid-lock, where nobody can decide to go first and switch modes.

    Of course there is the possibility of creating a system where all of the self-driving cars, of possibly every make and model, are in constant communication with each other. From that they can figure out where the human-guided cars are, and then employ an arbitrage algorithm where all of the self-drivers sort out amongst themselves what decision mode they are in and who goes first. But that process is likely to prove complex and a little fragile, because it is complex and chaotic.
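    The arbitration idea described above can be made concrete. A minimal sketch, assuming a very simplified model (the names, the 0.5-second simultaneity window, and the tie-break rules are all my own assumptions, not any real V2V protocol): networked cars at a four-way stop agree on who goes next by arrival time, with near-simultaneous arrivals yielding to the car on their right.

    ```python
    from dataclasses import dataclass

    # Hypothetical sketch: approaches listed clockwise around the intersection.
    APPROACHES = ["N", "E", "S", "W"]

    @dataclass
    class Car:
        car_id: str
        approach: str      # which arm of the intersection the car is on
        arrived_at: float  # timestamp when the car stopped at the line

    def right_of(approach: str) -> str:
        """Return the arm on this car's right. A car approaching from the
        North (heading south) has the West arm on its right, and so on."""
        i = APPROACHES.index(approach)
        return APPROACHES[(i - 1) % len(APPROACHES)]

    def next_to_go(waiting: list[Car]) -> Car:
        """Pick the car that proceeds: earliest arrival wins; arrivals
        within EPSILON seconds count as simultaneous and yield to the
        car on their right."""
        EPSILON = 0.5  # assumed simultaneity window, in seconds
        earliest = min(waiting, key=lambda c: c.arrived_at)
        tied = [c for c in waiting
                if c.arrived_at - earliest.arrived_at < EPSILON]
        if len(tied) == 1:
            return earliest
        # Among tied cars, a car yields if another tied car is on its right.
        occupied = {c.approach for c in tied}
        for c in tied:
            if right_of(c.approach) not in occupied:
                return c
        # All four arms arrived simultaneously: fall back to a fixed order.
        return min(tied, key=lambda c: APPROACHES.index(c.approach))
    ```

    Note that this only resolves the easy case where every participant runs the same rule; the hard part the comment identifies, mixing in human drivers who switch decision modes unpredictably, is exactly what a fixed rule like this cannot capture.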

    What is going to be really interesting to see work out in the courts is who pays the insurance for the inevitable accidents.


Leave a Reply

Your email address will not be published. Required fields are marked *