In the piece — titled “Can You Fool a Self Driving Car?” — Rober found that a Tesla car on Autopilot was fooled by a Wile E. Coyote-style wall painted to look like the road ahead of it, with the electric vehicle plowing right through it instead of stopping.
The footage was damning enough, with slow-motion clips showing the car not only crashing through the styrofoam wall but also plowing into a mannequin of a child. The Tesla was also fooled by simulated rain and fog.
And the president is driving one of these?
Maybe we should be purchasing lots of paint and cement blockades…
The president can’t drive by law unless on the grounds of the White House and maybe Camp David. At least while in office. They might be allowed to drive after leaving office…
This isn’t true at all. I can’t tell if you’re being serious or incredibly sarcastic, though.
The reason presidents (and generally ex-presidents, too) don’t drive themselves is that the kind of driving needed to escape an assassination attempt requires a higher level of skill and training than the vast majority of people ever get. There’s no law saying presidents are forbidden from driving.
In any case, I would be perfectly happy if they let him drive a CT and it caught fire. I’d do a little jig, and I wouldn’t care who saw.
Current and past presidents are prohibited from driving.
you’re gonna have to drop a source for that.
because, no, they’re not. the Secret Service provides a driver specially trained for the risks a president might face, and very strongly insists, but they’re not “prohibited” from driving simply because they’re presidents.
to be clear, the secret service cannot prohibit the president from doing anything they really want to do. Even if it’s totally stupid for them to do that. (This includes, for example, Trump’s routine weekend round of golf at Turd-o-Lardo)
to be clear, the secret service cannot prohibit the president from doing anything they really want to do
Was Trump lying when he said the SS wouldn’t take him back to the capital on Jan 6?
I could definitely see him lying about that so he doesn’t look like he abandoned his supporters during the coup, but I could also see the driver being like “I can’t endanger you, Mr. President” and ignoring his requests.
Was Trump lying when he said the SS wouldn’t take him back to the capital on Jan 6?
Definitely not. There is no way in hell the Secret Service would have taken the president to that shit show. Doesn’t mean that they would have physically arrested him if he insisted on going on his own, however.
I don’t think Trump can drive. As in, he doesn’t even know what the pedals do.
clearly knows what he is doing
He’s going to fall out of the cab on the next right turn.
He looks like he’s making the siren sounds and having a great time
I imagine when he’s driving around his golf course he makes voom voom noises
The real question is, in a truly self-driving car, (not a tesla) are you actually driving?
When he was in the Tesla asking if he should go for a ride I was screaming “Yes! Yes Mr. President! Please! Elon, show him full self driving on the interstate! Show him full self driving mode!”
The bar set for self-driving cars: Can it recognize and respond correctly to a deliberate optical illusion?
The bar set for humans: https://youtu.be/ks11nuGGupI
For the record, I do want the bar for self-driving safety to be high. I also want human drivers to be better… Because even not-entirely-safe self-driving cars may still be safer than humans at a certain point.
Also, fuck Tesla.
I mean it also plowed through a kid because it was foggy, then rainy. The wall was just one of the tests the tesla failed.
Right, those were the failures that really matter, and Rober included the Looney Tunes wall to get people sharing and talking about it. A scene painted on a wall is a contrived edge case, but pedestrians and obstacles in precipitation are common.
It’s no longer an edge case if faulty self driving becomes the norm.
Want to kill someone in a Tesla? Find a convenient spot and paint a wall there.
Doesn’t even have to be an artificial wall, for example take a bend on a mountain road and paint the rock.
Next test I would love is what is the minimum amount of false road to fool it.
Have you ever seen examples of how the features that AI picks out to identify objects aren’t really the same as what we pick out? You can generate images that look unrecognizable to people but have clearly identifiable features to AI. It would be interesting to see someone play around with that concept for interesting ways to fool Tesla’s AI. Like, could you make a banner that looks like a barricade to people, but the cars think looks like open road?
This isn’t a great example for this concept, but it is a great video. https://youtu.be/FMRi6pNAoag?t=5m58s
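The idea above can be sketched with a toy linear classifier (numpy only; the “pixels”, weights, and labels here are all made up for illustration, and this is a stand-in model, not Tesla’s actual vision system):

```python
import numpy as np

# Toy stand-in for a vision model: a linear "obstacle vs. open road"
# classifier. The FGSM-style trick: a perturbation that is tiny per pixel
# but aligned against the model's weights flips the prediction, even
# though a human would see essentially the same image.
rng = np.random.default_rng(0)

w = rng.normal(size=256)               # hypothetical weights over 256 "pixels"

def predict(x):
    return "obstacle" if x @ w > 0 else "open road"

x = rng.normal(size=256)
if predict(x) != "obstacle":           # start from an image seen as an obstacle
    x = -x

eps = 0.5                              # max per-pixel change, visually subtle
x_adv = x - eps * np.sign(w)           # nudge every pixel against the weights

print(predict(x))                      # obstacle
print(predict(x_adv))                  # open road
```

Real attacks do the same thing against deep networks using the gradient instead of the raw weights, which is why a banner can look like a barricade to us and like open road to the car.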
I was thinking of something where the AI would think the road turns left while humans see it turns right.
A better trick would be to paint the road going straight when there’s a cliff. Much easier to hide the evidence that way.
E. Lon Musk. Supah. Geenius.
MEEP MEEP
“They only paid me to say it once…”
Wank E. Cuckyote
Will he Ket o’ won’t he?
Painted wall? That’s high tech shit.
I got a Tesla from my work before Elon went full Reich 3, and get this:
- break on bridge shadows on the highway
- start wipers on shadows, but not on rain
- break on cars parked on the roadside if there’s a bend in the road
- disengage autopilot and break when driving towards the sun
- change set speed at highway crossings because fuck the guy behind me, right?
- engage emergency break if a bike waits to cross at the side of the road
To which I’ll add:
- moldy frunk (short for fucking trunk, I guess?), no ventilation whatsoever, water comes in, water stays in
- pay attention noises for fuck-all reasons masking my podcasts and forcing me to rewind
- the fucking cabin camera nanny - which I admittedly disabled with some chewing gum
- the worst mp3 player known to man, the original Winamp was light years ahead - won’t index, won’t search, will reload USB and lose its place with almost every car start
- bonkers UI with no integration with Android or Apple - I’m playing podcasts via low rate Bluetooth codecs, at least it doesn’t matter much for voice
- unusable airco in auto mode, insists on blowing cold air in your face
Say what you want about European cars, at least they got usability and integration right. As did most of the auto industry. Fuck Tesla, never again. Bunch of Steve Jobs wannabes.
It’s brake, the car brakes.
It probably breaks as well, but that’s not relevant right now.
I bet the real reason he doesn’t want LiDAR in the car is that he thinks it looks ugly, aesthetically.
It’s also very expensive.
Sorry, but I don’t get it. You can get a robot vacuum with lidar for $150. I understand automotive lidars need more reliability, range, etc., but I don’t understand how it’s not even an option on a $30k car.
IIRC robot vacuums usually use a single Time of Flight (ToF) sensor that rotates, giving the robot a 2D scan of its surroundings. This is sufficient for a vacuum, which only needs to operate on a flat surface, but self-driving vehicles need a better understanding of their surroundings than just a thin slice.
That’s why cars might use over 30 distinct ToF sensors, each at a different vertical angle, all placed in the rotating module, giving the system a full 3D scan of its surroundings. I would assume those modules are much more expensive, though still insignificant compared to the cost of a car sold on the idea of self-driving.
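A rough sketch of the difference (plain Python; the scan data and the 31-channel count are made up for illustration): one vertical angle gives the vacuum’s flat slice, and stacking many vertical angles gives the car-style 3D point cloud.

```python
import math

def tof_hits_to_points(hits, vertical_angle_deg=0.0):
    """Convert one rotation of ToF readings into Cartesian points.

    hits: list of (horizontal_angle_deg, distance_m) pairs from the
    spinning sensor. A robot vacuum effectively uses a single vertical
    angle (a flat 2D slice); an automotive unit stacks many such slices
    ("channels") at different vertical angles to get a 3D point cloud.
    """
    v = math.radians(vertical_angle_deg)
    points = []
    for h_deg, d in hits:
        h = math.radians(h_deg)
        points.append((
            d * math.cos(v) * math.cos(h),  # x: forward
            d * math.cos(v) * math.sin(h),  # y: left
            d * math.sin(v),                # z: up
        ))
    return points

scan = [(0, 2.0), (90, 1.5), (180, 3.0)]
flat_slice = tof_hits_to_points(scan)                 # vacuum-style 2D scan
cloud = [p for ch in range(-15, 16)                   # 31-channel 3D scan
         for p in tof_hits_to_points(scan, vertical_angle_deg=ch)]
print(len(flat_slice), len(cloud))  # 3 93
```

Same rotation, but each extra channel multiplies the number of points (and the hardware cost) while adding the vertical coverage a car actually needs.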
Your car isn’t driving indoors at 1 mph with the maximum damage being tapping, but not marring, the wall or vehicle.
You need high speed, bright lasers, and immense computation to handle outdoor, fast, dangerous work
It costs too much. It’s also why you have to worry about panels falling off the swastitruck if you park next to them. They also apparently lack any sort of rollover frame.
He doesn’t want to pay for anything, including NHTSA crash tests.
It’s literally what Drumpf would have created if he owned a car company. Cut all costs, disregard all regulations, and make the public the alpha testers.
The guy bankrupted a casino, not by playing against it and being super lucky, but by owning it. Virtually everything he has ever touched in business has turned to shit. How do you ever in the living fuck screw up steaks at Costco? My cousin with one good eye and a working elbow could do it.
And now it’s the country’s second try. This time unhinged, with all the training wheels off. The guy is stepping on the pedal while stripping the car for parts and giving away the fuel. The guy doesn’t even drive; he just fired the chauffeur and is dismantling the car from the inside with a shotgun… full steam ahead onto a nice brick wall and an infinity cliff ready to take us all with him. And Canada and Mexico and Gina. Three and three quarters of a year more of daily atrocities and lawbreaking. At least Hitler boy brought back the astronauts.
It did cost too much at the time, but now he doesn’t want to do it because he would have to admit he’s wrong.
The panels are glued on. The glue fails when the temperature changes.
I can’t believe that this car is legal to drive in public.
Right? It’s also got a cast aluminum frame that breaks if you load the trailer hitch with around 10,000 lbs of downward force. Which means that the back of your Cybertruck could just straight up break off if you’ve frontloaded your trailer and hit a pothole wrong.
If you own a tesla or a cybertruck you deserve it.
To be fair, if you were to construct a wall and paint it exactly like the road, people will run into it as well. That being said, tesla shouldn’t rely on cameras
Edit: having just watched the video, that was a very obvious fake wall; you can see its outlines pretty well. I’m also surprised it failed other tests when not on autopilot; seems pretty fucking dangerous.
Yeah, the Roadrunner could easily skip by such barriers, frustrating the Coyote to no end. Tesla is not a Roadrunner.
in the world of coyotes, be a roadrunner.
MEEP MEEP.
Watch the video; it’s extremely obvious to a human driver that there is something wrong with the view ahead. It’s even pointed out in the video that humans use additional visual cues when a situation is ambiguous.
The cars don’t have deduction and reasoning capabilities, so they need additional sensors to give them more information to compensate for their lack of brains. So it’s not really sensible to compare self-driving systems to humans: humans have limited sensory input, but it’s compensated for by reasoning abilities; self-driving cars do not have reasoning abilities, but that’s compensated for by enhanced sensory input.
To be fair, if you were to construct a wall and paint it exactly like the road, people will run into it as well.
this isn’t being fair. It’s being compared to the other, better, autopilot systems that use both LIDAR and radar in addition to daylight and infrared optics to sense the world around them.
Teslas only use daylight and infrared. Neither a LIDAR nor a radar system would have been deceived.
So out of interest I looked it up.
The new BYD cars that are coming out also have self-driving probably to directly compete with Tesla.
However, they do use lidar, and radar, and cameras, and infrared cameras, and ultrasonic sensors. All have to be working or the car won’t go into self-drive. So other companies consider even one failing sensor enough to disable self-driving capabilities, yet Tesla claims it’s perfectly safe to drive around with those features not even installed, let alone functional.
So yeah that’s a real bad look.
The video does bring up human ability too with the fog test (“Optically, with my own eyes, I can no longer see there’s a kid through this fog. The lidar has no issue.”) But, as they show, this wall is extremely obvious to the driver.
The Tesla would lose its shit if it saw this.
They already have trouble enough with trucks carrying traffic lights, or with speed limit stickers on them.
and fire trucks. and little kids. and, uh, lots of things really.
I have seen trucks with landscape scenes painted on the side and I’ve never crashed into one of those thinking that it was a portal to a random sunlit field.
I’d take that bet. I imagine at least some drivers would notice something sus’ (due to depth perception, which should be striking as you get close, or lack of ANY movement or some kind of reflection) and either
- slow down
- use a trick, e.g. flicking lights or driving a bit to the sides and back, to try to see what’s off
or probably both, but anyway, as others already said, it’s being compared to other autopilot systems, not human drivers.
It was super annoying how scared he acted when he knew it was styrofoam and it wasn’t even going to leave a scratch on the car. I would have liked it much better if the car had crashed into an actual wall and burst into flames.
Instinctively, human brains generally don’t like large objects coming at them unbidden at high speed. That isn’t going to help things, even if you’re consciously aware that the wall is relatively harmless.
Lol yeah they’re “furious”
My $500 robot vacuum has LiDAR, meanwhile these $50k pieces of shit don’t 😂
Holy shit, I knew I’d heard this word before. My Chinese robot vacuum cleaner has more technology than a tesla hahahahaha
A vacuum doesn’t run outdoors, and accidentally running into a wall doesn’t generate lawsuits.
But, yes, any self-driving car should absolutely be required to have lidar. I don’t think you could find any professional in the field who would argue against lidar being the proper tool for this.
…what is your point here, exactly? The stakes might be lower for a vacuum cleaner, sure, but lidar - or a similar time-of-flight system - is the only consistent way of mapping environmental geometry. It doesn’t matter if that’s a dining room full of tables and chairs, or a pedestrian crossing full of children.
I think you’re suffering from not knowing what you don’t know.
Let me make it a bit clearer so you can give a fair answer.
Take a 0.25 mW lidar sensor off a vacuum, take it outdoors, and scan an intersection.
Will that laser be visible to the sensor?
Is it spinning fast enough to track a kid moving into an intersection when you’re traveling at 73 feet per second?
You’re mischaracterizing their point. Nobody is saying take the exact piece of equipment, put it in the vehicle and PRESTO. That’d be like asking why the vacuum battery can’t power the car. Because duh.
The point is if such a novelty, inconsequential item that doesn’t have any kind of life safety requirements can employ a class of technology that would prevent adverse effects, why the fuck doesn’t the vehicle? This is a design flaw of Teslas, pure and simple.
But they do, there are literally cars out there with lidar sensors.
The question was why can’t I have a lidar sensor on my car if my $150 vacuum has one. The lidar sensor for a car is more than $150.
You don’t have one because they are expensive at that size and update frequency. Sensors that are capable of outdoor mapping at high speed cost the price of a small car.
The manufacturers suspect and probably rightfully so that people don’t want to pay an extra 10 - 30 grand for an array of sensors.
The technology readily exists; Rober had one in his video that he used to scan a roller coaster. It’s not some conspiracy that you don’t have it on cars, and it’s not like it can’t be done, because Waymo does it all the time.
There’s a reason Waymo doesn’t use smaller sensors: they use the minimum of what works well. Which is expensive, and people looking at a mid-range car don’t want to take on the extra cost, hence it’s not available.
Good God it’s like you’re going out of the way to intentionally misunderstand the point.
Nobody is saying that the lidar on a car should cost the same as the lidar on a vacuum cleaner. What everyone is saying is that if the company that makes vacuum cleaners thinks it’s important enough to put lidar on, surely the company that makes cars should think it’s important enough to put lidar on too.
Stop being deliberately dense.
Stop being deliberately dense.
It’s weaponized incompetence.
I bet they do the same shit with their partner when it comes to dishes, laundry, and the garbage.
You’re either talking to a fanboy or Elon on ket. You ain’t gettin’ through.
I’m not being deliberately dense; it’s just a seriously incomplete analogy. At worst I’m being pedantic, and if that’s the case, I apologize.
I agree with the premise that the cars need lidar radar whatever the f*** they can get.
Saying “if a vacuum company can see that a vacuum needs lidar (which is a flawed premise, because half the f****** vacuums use vslam/cameras), then why doesn’t my car have lidar” ignores that none of the consumer car companies are using it (yet, anyway). It’s great to get the rabble up and ask why vacuum companies are doing it when car companies can’t, but when nobody’s doing it, there are reasons. Ford, Chevy, BMW, f***, what about Audi, what about Porsche? What about these luxury brands that cost an arm and three fucking legs?
Let’s turn this on its head: why do people think they’re not including it in cars? And let’s discount Musk for the moment, because we already know he’s a fucking idiot who never had an original idea in his life, and answer why it isn’t in any other brand.
Is it just that none of these companies thought about it? Is it a conspiracy? What do people think here? If I’m being so dense, tell me why the companies aren’t using it.
Whether lidars are reliable enough to run on autonomous cars has nothing to do with whether they are cost-efficient enough to run on vacuum cleaners, though. The comparison is therefore completely irrelevant. Might as well complain that jet fighters don’t let you share your location on Instagram, because your much cheaper phone does.
It’s a cost-benefit calculation.
- For a vacuum, at the speeds it travels and the range it needs, LiDAR is cheap and worth doing. Meanwhile, its computing power is limited.
- My phone is much more expensive than the robot vacuum, and its LiDAR can range to about a room, at the speeds humans normally travel. It works great for almost-instant autofocus and as a passable measurement tool.
- For a car, at the speeds it travels and the range it needs, LiDAR is expensive, large, and ugly. Meanwhile, the car already needs substantial computing power.
So the question is whether they can achieve self-driving without it: humans rely on vision alone, so maybe an AI can. I’m just happy someone is taking a different approach rather than the follow-the-pack mentality: we’re more likely to get something that works.
Edit: everyone talks about the cost-benefit, but I imagine it makes things simpler for the AI when all sensors can be treated and weighted identically. Whether this is a benefit or a disadvantage will eventually become clear.
https://techcrunch.com/2019/03/06/waymo-to-start-selling-standalone-lidar-sensors/
Waymo’s top-of-range LiDAR cost about $7,500… Insiders say those costs have fallen further thanks to continuous advances by the team. And considering that this short-range LiDAR is cheaper than the top-of-range product, the price is likely under $5,000 a unit.
This article is six years old, so I wouldn’t be surprised if they’re even cheaper now.
Only Tesla does not use radar with their control systems. Every single other manufacturer uses radar control mixed with the camera system. The Tesla system is garbage.
The self-driving system Uber was working on also went downhill after they went visual-only.
yeah, you’d think they’d at least use radar. That’s cheap AF. It’s like someone there said “I have this hill to die on: I bet we can do it all with cameras.”
10 - 30 grand
Decent LIDAR sensors have gotten a lot cheaper in the last 5 years or so, here’s one that is used in commercial self-driving taxis: https://www.alibaba.com/product-detail/X01-36020021-Nev-Auto-Parts-for_1601252480285.html
So that one sensor is $700. Waymo has 4 LIDAR sensors (all of which are physically larger and I would imagine fancier than the Alibaba ones, but that’s speculation), so just in the scanner hardware itself you’re looking at $2,800. Plus the computer to run it, plus the 6 radar receivers, and 13 cameras, I could absolutely see the price for the end user to be around $10k worth of sensors.
But to be clear, I don’t think camera only systems are viable or safe. They should at minimum be forced to use radar in combination with their cameras. In fact I actually trust radar more than lidar because it’s much less susceptible to heavy snow or rain.
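The back-of-envelope above can be written out; note that only the $700 lidar price comes from the Alibaba listing, while the radar, camera, and compute prices are placeholder guesses, not Waymo’s actual bill of materials:

```python
# Rough sensor-suite cost sketch. Only the $700 lidar figure is from the
# thread; the other unit prices are placeholder assumptions.
lidar = 4 * 700        # 4 lidar units at the Alibaba listing price
radar = 6 * 200        # assumed $200 per radar receiver
cameras = 13 * 100     # assumed $100 per camera
compute = 2000         # assumed onboard computer to process it all

total = lidar + radar + cameras + compute
print(total)  # 7300, before integration, wiring, and manufacturer margin
```

Which lands in the same ballpark as the ~$10k end-user figure once margin and installation are added.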
Shit, that’s pretty decent. That looks like a ready-fit car part; I wonder what vehicle it’s for. Kind of sucks that it only faces one direction, but at that price four of them would not be a big deal.
You’re bending over backwards to miss the point huh
So be clear about the point.
Older teslas HAD lidar. They were removed on newer models to cut costs.
They did not. They had radar, which was removed.
The price of lidar sensors has dropped by like 50 times since Musk decided to cut costs by eliminating them from their cars.
Yeah, looks like it; Chinese sensors are down to $700 a pop. Even if it’s a few grand, it’s decent. Looks like Chevy offers it on 7 models.
I think you’re suffering from not knowing what you don’t know.
and I think you’re suffering from being an arrogant sack of dicks who doesn’t like being called out on their poor communication skills and, through either a lack of self-awareness or an unwarranted overabundance of self-confidence, projects their own flaws on others. But for the more receptive types who want to learn more, here’s Syed Saad ul Hassan’s very well-written 2022 paper on practical applications, titled Lidar Sensor in Autonomous Vehicles, which I found also serves as a neat primer on lidar in general.
Wow, what’s with all the hostility against him?
Maybe it’s because I also know a bit about lidars that his comment was clear to me (“ha, try putting a vacuum lidar in a car and see if it can do anything useful outside at the speeds & range a car needs”).
Is it that much of an issue if someone is a bit snarky when pointing out the false equivalence of “my $500 vacuum has a lidar, but a tesla doesn’t? harharhar”?
(“ha, try putting a vacuum lidar in a car and see if it can do anything useful outside at the speeds & range a car needs”).
Because no one suggested that.
So someone saying “why does my $500 vacuum have a lidar but not the car” isn’t suggesting that?
I guess in some technical way you’re right, but it for sure is the implication…
Well, look at you being an adult and using big words instead of just insulting people. I’m not even going to waste time on people like you; I’m going to block you and move on, and hope everyone else does the same so you can sit in your own quiet little world wondering why no one likes you.
You’re an idiot.
jesus man, how many alts do you have?
It’s a highly questionable approach that has raised concerns over Tesla trying to evade guilt by automatically turning off any possibly incriminating driver assistance features before a crash.
So, who’s the YouTuber that’s gonna test this out? Since Elmo has pushed his way into the government in order to quash any investigation into it.
It basically already happened in the Mark Rober video: it turns off by itself less than a second before impact.
As Electrek points out, Autopilot has a well-documented tendency to disengage right before a crash. Regulators have previously found that the advanced driver assistance software shuts off a fraction of a second before making impact.
This has been known.
They do it so they can evade liability for the crash.
If it knows it’s about to crash, then why not just brake?
So, as others have said, it takes time to brake. But also, generally speaking, autonomous cars are programmed to dump control back to the human if there’s a situation they can’t see an ‘appropriate’ response to.
What’s happening here is the ‘oh shit, there’s no action that can stop the crash’ case, because braking takes time (hell, even coming to that decision takes time; activating the whoseitwhatsits that activate the brakes takes time). The normal thought is, if there’s something it can’t figure out on its own, it’s best to let the human take over. It’s supposed to make that decision well before, though.
However, as for why Tesla is doing that when there’s not enough time to actually take control?
It’s because liability is a bitch. Given how many Teslas are on the road, even a single ruling of “yup, it was Tesla’s fault” is going to start creating precedent, and that gets very expensive, very fast. Especially for something that can’t really be fixed.
For some technical perspective, I pulled up the frame rates on the camera system (I’m not seeing a frame rate for the cabin camera specifically, but it seems to be either 36 fps in older models or 24 fps in newer).
14 frames @ 24 fps is about 0.6 seconds; @ 36 fps, it’s about 0.4 seconds. For comparison, the average human reaction to just seeing a change and clicking a mouse is about 0.3 seconds. If you add in needing to assess the situation… that’s going to be significantly more time.
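Written out, the frame arithmetic is just frames divided by frame rate (the 24/36 fps figures are the ones cited above; the ~0.3 s reaction time is a commonly cited ballpark for a simple visual stimulus):

```python
def frames_to_seconds(frames, fps):
    """Wall-clock time represented by a number of camera frames."""
    return frames / fps

print(round(frames_to_seconds(14, 24), 2))  # 0.58 s on 24 fps cameras
print(round(frames_to_seconds(14, 36), 2))  # 0.39 s on 36 fps cameras

# Either way, the handover window is on the order of a bare reaction
# time (~0.3 s), with nothing left over for assessing the situation.
```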
AEB was originally designed not to prevent a crash, but to slow the car when an unavoidable crash was detected.
It’s since gotten better and can also prevent crashes now, but slowing the speed of the crash was the original important piece. It’s a lot easier to predict an unavoidable crash, than to detect a potential crash and stop in time.
Insurance companies offer a discount for having any type of AEB as even just slowing will reduce damages and their cost out of pocket.
Not all AEB systems are created equal though.
Maybe disengaging AP if an unavoidable crash is detected triggers the AEB system? Like, maybe for AEB (which should always be running) to take over, AP has to be off?
Brakes require a sufficient stopping distance given the current speed, driving-surface conditions, tire condition, and the amount of momentum at play. This is why trains can’t stop quickly despite having brakes (and very good ones at that, with air brakes on every wheel): there’s so much momentum at play.
If autopilot is being criticized for disengaging immediately before the crash, it’s pretty safe to assume it’s too late to stop the vehicle and avoid the collision.
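As a sketch of the physics (constant deceleration assumed; the 0.7 g deceleration is a rough dry-asphalt ballpark and the 0.3 s reaction time a common estimate, neither a measured Tesla figure):

```python
G = 9.81  # m/s^2

def stopping_distance_m(speed_mps, decel_g=0.7, reaction_s=0.3):
    """Reaction distance plus braking distance: d = v*t + v^2 / (2*a)."""
    a = decel_g * G
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * a)

# 73 ft/s (the speed cited elsewhere in the thread) is about 22.25 m/s (~50 mph)
print(round(stopping_distance_m(22.25), 1))  # ~42.7 m to come to a stop
```

Tens of meters of stopping distance versus a sub-second handover: disengaging right before impact cannot possibly avoid the crash.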
This autopilot shit needs regulated audit log in a black box, like what planes or ships have.
In no way should this kind of manipulation be legal.
Because even braking can’t avoid the crash. Unavoidable crash means bad juju if the ‘self driving’ car image is meant to stick around.
Any crash within 10s of a disengagement counts as it being on, so you can’t just do this.
Edit: added the time unit.
Edit2: it’s actually 30s not 10s. See below.
10n what
Oops haha, 10 seconds.
Where are you seeing that?
There’s nothing I’m seeing as a matter of law or regulation.
In any case, liability (especially civil liability) is an absolute bitch. It’s incredibly messy and likely will not ever be so cut and dried.
Well it’s not that it was a crash caused by a level 2 system, but that they’ll investigate it.
So you can’t hide the crash by disengaging it just before.
Looks like it’s actually 30 seconds, not 10. Or maybe it was 10s once upon a time and they changed it to 30?
The General Order requires that reporting entities file incident reports for crashes involving ADS-equipped vehicles that occur on publicly accessible roads in the United States and its territories. Crashes involving an ADS-equipped vehicle are reportable if the ADS was in use at any time within 30 seconds of the crash and the crash resulted in property damage or injury
https://www.nhtsa.gov/sites/nhtsa.gov/files/2022-06/ADAS-L2-SGO-Report-June-2022.pdf
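In code, the quoted 30-second rule amounts to a simple check (the field names here are illustrative, not NHTSA’s actual reporting schema):

```python
REPORTING_WINDOW_S = 30  # ADS in use within 30 s of the crash => reportable

def is_reportable(seconds_since_disengagement, damage_or_injury):
    """seconds_since_disengagement: time between the ADS switching off
    and impact; 0 means it was still engaged at the moment of the crash."""
    return damage_or_injury and seconds_since_disengagement <= REPORTING_WINDOW_S

print(is_reportable(0.5, True))   # True: cutting out half a second early doesn't hide it
print(is_reportable(45, True))    # False: disengaged well outside the window
```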
Thanks for that.
The thing is, though, the NHTSA generally doesn’t make a determination on criminal or civil liability. They’ll make the report about what happened, keep it to the facts, and let the courts sort out who’s at fault. They might not even actually investigate a crash unless it comes to that. It’s just saying “when your car crashes, you need to tell us about it,” and they kinda assume everyone complies.
Which, Tesla doesn’t want to comply, and is one of the reasons Musk/DOGE is going after them.
I knew they wouldn’t necessarily investigate it, that’s always their discretion, but I had no idea there was no actual bite to the rule if they didn’t comply. That’s stupid.
The self-driving equivalent of “Jesus take the wheel!”
Not sure how that helps in evading liability.
Every Tesla driver would need superhuman reaction speeds to respond in 17 frames, i.e. 680 ms (I didn’t check the recording framerate, but 25 fps is the slowest reasonable), less than a second.
It’s not likely to work, but them swapping to human control after it determined a crash is going to happen isn’t accidental.
Anything they can do to mire the proceedings they will do. It’s like how corporations file stupid junk motions to force plaintiffs to give up.
They’re talking about avoiding legal liability, not about actually doing the right thing. And of course you can see how it would help them avoid legal liability. The lawyers will walk into court and honestly say that at the time of the accident the human driver was in control of the vehicle.
And then that creates a discussion about how much time the human driver has to have in order to actually solve the problem, or gray areas about who exactly controls what when, and it complicates the situation enough where maybe Tesla can pay less money for the deaths that they are obviously responsible for.
They’re talking about avoiding legal liability, not about actually doing the right thing. And of course you can see how it would help them avoid legal liability. The lawyers will walk into court and honestly say that at the time of the accident the human driver was in control of the vehicle.
The plaintiff’s lawyers would say, the autopilot was engaged, made the decision to run into the wall, and turned off 0.1 seconds before impact. Liability is not going disappear when there were 4.9 seconds of making dangerous decisions and peacing out in the last 0.1.
They can also claim with a straight face, in public, in ads, etc., that Autopilot has an artificially lowered crash rate, without it technically being a lie.
The plaintiff’s lawyers would say, the autopilot was engaged, made the decision to run into the wall, and turned off 0.1 seconds before impact. Liability is not going disappear when there were 4.9 seconds of making dangerous decisions and peacing out in the last 0.1.
these strategies aren’t about actually winning the argument; they’re about making it excessively expensive to have the argument in the first place. Every motion requires a response by the counterparty, which requires billable time from the counterparty’s lawyers, and delays the trial. It’s just another variation on “defend, depose, deny”.
If the disengage-to-avoid-legal-consequences feature does exist, then you would think there would be some false-positive incidents where it turns off for no apparent reason. I found some with a search, which are attributed to bad software. Owners are discussing new patches fixing some problems and introducing new ones. None of the incidents caused an accident, so maybe the owners never hit the malicious code.
If it randomly turns off for unapparent reasons, people are going to be like “oh, that’s weird” and leave it at that. Tesla certainly isn’t going to admit that their code is malicious like that. At least not until the FBI is digging through their memos to show it was. And maybe not even then.
That makes so little sense… It detects it’s about to crash, then gives up and lets you sort it out?
That’s the opposite of my Audi, which detects when I’m about to hit something and either gives me a warning or just actively hits the brakes if I don’t have time to handle it.
The point is that they can say “Autopilot wasn’t active during the crash.” They can leave out that Autopilot was active right up until the moment before, or that Autopilot directly contributed to it. They’re purely leaning into the technical truth that it wasn’t on during the crash, whether it’s a courtroom defense or their own next published set of data: “Autopilot was not active during any recorded Tesla crashes.”
Even your Audi is going to dump to human control if it can’t figure out what the appropriate response is. Granted, your Audi is probably smart enough to be like “yeah, don’t hit the fucking wall,” but eh… it was put together by people who actually know what they’re doing, and care about safety.
Tesla isn’t doing this for safety or because it’s the best response. The cars are doing this because they don’t want to pay out for wrongful death lawsuits.
If this is true, this is so fucking evil it’s kinda amazing it could have reached anywhere near prod.
It’s Musk. He’s fucking vile, and this isn’t even close to the worst thing he’s doing. Or has done.
I hope some of you actually skimmed the article and got to the “disengaging” part.
As Electrek points out, Autopilot has a well-documented tendency to disengage right before a crash. Regulators have previously found that the advanced driver assistance software shuts off a fraction of a second before making impact.
It’s a highly questionable approach that has raised concerns over Tesla trying to evade guilt by automatically turning off any possibly incriminating driver assistance features before a crash.
I’ve heard that too, and I don’t doubt it, but watching Mark Rober’s video, it looks like he’s death-gripping the wheel pretty hard before the impact, which seems more likely to be what’s disengaging it. Each time, you can see the wheel tug slightly to the left, but his death grip pulls it back to the right.
Yeah but that’s milliseconds. Ergo, the crash was already going to happen.
In any case, the problem with Tesla Autopilot is that it doesn’t have radar. It can’t reliably see objects, and there have been many instances where a Tesla crashed into a large, visible object.
That’s what’s confusing me. Rober’s hypothesis is that without lidar the Tesla couldn’t detect the wall. But claiming that Autopilot shut itself off before impact means the Tesla detected the wall and decided impact was imminent, which undercuts his point.
If you watch the in-car footage, Autopilot is on for all of three seconds, and by the time it’s on, impact was already going to happen. That said, Teslas should have lidar and probably do something other than disengage before hitting the wall, but I suspect their cameras were good enough to detect the wall through lack of parallax or something like that.
Don’t get me wrong, autopilot turning itself off right before a crash is sus and I wouldn’t put it past Tesla to do something like that (I mean come on, why don’t they use lidar) but maybe it’s so the car doesn’t try to power the wheels or something after impact which could potentially worsen the event.
On the other hand, they’re POS cars and the autopilot probably just shuts off cause of poor assembly, standards, and design resulting from cutting corners.
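The lack-of-parallax idea a couple of comments up can be sketched in a few lines. This is a purely hypothetical toy model, not Tesla’s actual vision stack: for a camera moving straight forward, the optical flow at a pixel r pixels from the focus of expansion is roughly r·v·dt/Z, so the recovered depth Z should vary across a real 3D scene but come out constant everywhere on a flat painted wall.

```python
import numpy as np

def depth_from_forward_motion(flow_mag, px_from_foe, cam_speed, dt):
    # For a camera translating straight forward, a point Z metres ahead that
    # projects r pixels from the focus of expansion (FoE) produces optical
    # flow of magnitude |flow| = r * (v * dt) / Z pixels per frame,
    # so Z = r * v * dt / |flow|.  (Toy model; assumes pure forward motion.)
    return px_from_foe * cam_speed * dt / flow_mag

# In a real 3D scene, recovered depths differ across the image.
# A flat wall painted to look like the road produces flow proportional to r
# everywhere, so every pixel recovers the *same* depth: a flat plane ahead.
depths = depth_from_forward_motion(
    flow_mag=np.array([2.0, 4.0, 8.0]),           # measured flow, pixels/frame
    px_from_foe=np.array([100.0, 200.0, 400.0]),  # pixels from the FoE
    cam_speed=20.0,                               # m/s (~72 km/h), assumed known
    dt=1 / 30,                                    # 30 fps
)
print(depths)  # all ~33.3 m: constant depth across the image => flat surface
```

Whether production camera-only stacks actually do this reliably at highway speed is exactly what the video calls into question.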
I see your point, and it makes sense, but I would be very surprised if Tesla did this. I think the best option would be to turn off the features once an impact is detected. It shutting off beforehand feels like a cheap ploy to avoid guilt.
… It shutting off beforehand feels like a cheap ploy to avoid guilt
that’s exactly what it is.
Rober seems to think so, since he says in the video that it’s likely disengaging because the parking sensors detect that it’s “parked” because of the object in front, which shuts off the cruise control.
if it can actually sense a crash is imminent, why wouldn’t it be programmed to slam the brakes instead of just turning off?
Do they have a problem with false positives?
If it were European-made, it would slam the brakes or swerve in order to at least try to save lives, since governments there attempt to regulate companies out of doing evil shit. Since it’s American-made, it’s designed to maximise profit for shareholders.
I don’t believe automatic swerving is a good idea, depending on what’s off to the side it has the potential to make a bad situation much worse.
I’m thinking like, kid runs into the street, car swerves and mows down a crowd on the sidewalk
It’s the car’s job to swerve into a less dangerous place.
Can’t do that? Oops, no self-driving for you.
I’ve been wondering this for years now. Do we need intelligence in crashes, or do we just need vehicles to stop? I think you’re right; it must have been slamming the brakes on at unexpected times, which I’m sure is unnerving while driving.
So they had an issue with the car slamming on the brakes at unexpected times, caused by misidentifying cracks in the road or glare or weird lighting or w/e. The solution was to make the cameras ignore anything they can’t recognize at high speeds. This resulted in Teslas plowing into the back of firetrucks.
As the article mentioned, other self-driving cars solved that with lidar, which Elon himself is against because he says AI will just get so good and 2D cameras are cheaper.
This is from 6 years ago. I haven’t heard of the issue more recently.
https://www.washingtonpost.com/technology/interactive/2023/tesla-autopilot-crash-analysis/
The Tesla did not consistently detect that the thing in front of it was a truck, so it didn’t brake. The article also describes a lot of similar cases.
I remember a YouTuber doing similar tests, where they’d try to run over a fake pedestrian crossing or standing in the road, first at low speed and then at high speed. It would often stop at low speed, but very rarely stopped or swerved at high speed.
Wouldn’t it make more sense for autopilot to brake and try to stop the car instead of just turning off and letting the car roll? If it’s certain enough that there will be an accident, just applying the brakes until there’s user override would make much more sense…
Normal cars do whatever is in their power to cease movement while staying upright. In a wreck, the safest state for a car is to be stationary.
It’s a highly questionable approach that has raised concerns over Tesla trying to evade guilt by automatically turning off any possibly incriminating driver assistance features before a crash.
That is like writing that Musk “made an awkward, confused gesture” at a time and place a few people might call questionable.
It always is that way; fuck the consumer, it’s all about making a buck.
That’s so wrong holy shit
Beep beep
Does anyone else get the heebies with Mark Rober? There’s something a little off about his smile and overall presence.
Did you know he used to work at NASA? He very rarely mentions it. /s
Yeah, he’s over-positive, it’s unnerving.
Still, that video is good anti-musk press.