
Level 3 Autonomy Is Confusing Garbage


The Society of Automotive Engineers has done many wonderful things in its 117 years of existence, like making everybody’s horsepower drop when they went from SAE gross to SAE net power ratings in the early 1970s, or forming the Fuels and Lubricants committee in 1933. More recently, we’re talking about the SAE because of its classification of driving automation levels, which range from 0 to 5 and are used to classify automated vehicles. The problem is that this classification system really isn’t well understood, and as the first cars that claim to be SAE Level 3 arrive on the market, it’s becoming important to highlight that some levels, Level 3 especially, are severely, perhaps even dangerously, confusing.

The SAE Automated Driving Levels

One crucial problem that I’ve discussed before is that fundamentally, the classification system used by the SAE is flawed. The level system used suggests some sort of qualitative, hierarchical ranking, with higher levels being more advanced, when this really isn’t the case at all. For example, here is the current SAE J3016 Levels of Driving Automation Chart:

[Image: SAE J3016 Levels of Driving Automation chart, 2021 revision]

If you actually look at the criteria for each level, you’ll realize that these are not gradations of a machine’s ability to drive, but rather descriptions of how the driving task is shared between the human and the car.
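To make that concrete, here’s a little sketch of my own (a paraphrase of the J3016 chart, not SAE’s official wording) that treats each level as the answer to three questions: who performs the driving, who watches the road, and who is the fallback when the system can’t cope.

```python
from dataclasses import dataclass

@dataclass
class SaeLevel:
    """One row of the J3016 chart, paraphrased as a division of responsibility."""
    level: int
    name: str
    drives: str     # who steers, accelerates, and brakes while the feature is on
    monitors: str   # who has to watch the road
    fallback: str   # who must respond when the feature hits its limits

LEVELS = [
    SaeLevel(0, "No Driving Automation",          "human",                   "human",  "human"),
    SaeLevel(1, "Driver Assistance",              "human, with system help", "human",  "human"),
    SaeLevel(2, "Partial Driving Automation",     "system",                  "human",  "human"),
    SaeLevel(3, "Conditional Driving Automation", "system, within its ODD",  "system", "human, on request"),
    SaeLevel(4, "High Driving Automation",        "system, within its ODD",  "system", "system"),
    SaeLevel(5, "Full Driving Automation",        "system, everywhere",      "system", "system"),
]

for lvl in LEVELS:
    print(f"L{lvl.level} {lvl.name}: drives={lvl.drives}, monitors={lvl.monitors}, fallback={lvl.fallback}")
```

Notice that nothing in there says how well the car drives; it only says whose job the driving is.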

In Levels 0 through 2, the human is always, unquestionably in charge. This is obvious in L0 and L1, because if the human is not in charge for any period of time, then you’re going to crash, which makes it pretty obvious who should have been running the show.


Level 2 has the human in charge, but the car can be performing so many aspects of the task of driving that the human’s role changes from one of actively controlling the machine to what is known as a vigilance role, where the human is monitoring the behavior of the car, and must be ready at any moment to take over should anything go wrong.

My opinion on L2 is that it is inherently flawed, not for any specific technical reason, but because conceptually it puts humans into roles they’re just not good at, and there’s the paradox that the more capable a Level 2 system is, the worse people tend to be at remaining vigilant, because the more it does and the better it does it, the less the human feels the need to monitor carefully and be ready to take over. So the fundamental safety net of the system is, in some ways, lost.

I’m not the only one who feels this way, by the way; many people far smarter than I am agree. Researcher Missy Cummings goes into this phenomenon in detail in her paper Evaluating the Reliability of Tesla Model 3 Driver Assist Functions from October of 2020:

Drivers who think their cars are more capable than they are may be more susceptible to increased states of distractions, and thus at higher risk of crashes. This susceptibility is highlighted by several fatal crashes of distracted Tesla drivers with Autopilot engaged (12,13). Past research has characterized a variety of ways in which autonomous systems can influence an operator’s attentional levels. When an operator’s task load is low while supervising an autonomous system, he can experience increased boredom which results in reduced alertness and vigilance (ability to respond to external stimuli) (14).

I’ve written about my problems with L2 plenty, though. I’d like to grow a little in my feckless griping and complaining and move onto Level 3, which is, excitingly, even more confusing and deeply flawed than Level 2!


The Conceptual Problems with Level 3

Level 3 is worth talking about now because cars that claim to have L3 automation, like the Mercedes-Benz EQS with its Drive Pilot system, are set to come to America this year, so the peculiar traits and issues associated with Level 3 should be widely discussed before cars operating under L3 parameters actually start to be deployed on roads in quantity.

According to the SAE standard, L3 is actually an automated driving system, which means that the person in the driver’s seat does not always need to pay attention to the driving task. The SAE calls this “Conditional Driving Automation,” and the chart states:

“You are not driving when these automated driving features are engaged – even if you are seated in ‘the driver’s seat.’”

The chart then adds this qualifier block of text for L3:

“When the feature requests, you must drive.”

My problem with L3 is that the SAE has set up a standard for carmakers that is extremely vague and woefully incomplete. The difference between zero driver demand (“you are not driving”) and absolute attention (“when the feature requests, you must drive”) is vast, and the circumstances that trigger this transition are quite unclear.
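To show just how much is left open, here’s a hypothetical sketch of the state machine L3 implies. Every name and number in it is mine, not the SAE’s; the standard gives you the boxes, but not how long the transition is allowed to take or what belongs in the last one.

```python
from enum import Enum, auto

class L3State(Enum):
    AUTOMATED = auto()           # "you are not driving"
    TAKEOVER_REQUESTED = auto()  # "when the feature requests, you must drive"
    HUMAN_DRIVING = auto()       # handoff succeeded
    FAILOVER = auto()            # handoff failed; what happens here is not specified

# How long does the driver get to respond? The standard doesn't say; this number is invented.
TAKEOVER_GRACE_S = 10.0

def step(state: L3State, driver_has_taken_over: bool, seconds_since_request: float) -> L3State:
    """One tick of a hypothetical L3 supervisor."""
    if state is L3State.TAKEOVER_REQUESTED:
        if driver_has_taken_over:
            return L3State.HUMAN_DRIVING
        if seconds_since_request > TAKEOVER_GRACE_S:
            return L3State.FAILOVER  # doing what, exactly? That's the open question.
    return state

print(step(L3State.TAKEOVER_REQUESTED, driver_has_taken_over=False, seconds_since_request=12.0))
```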


For something to be considered an L3 system, how much time must elapse between a “request” to drive and getting an inattentive driver to realize they need to start driving, assess the situation around them, and actually take over? What if the person in the driver’s seat, who was informed they do not need to do anything related to driving, is asleep? What if they’re drunk? What if they’re making out or wrestling with an unruly ferret or have their hands full of sloppy chicken wings?

It does not seem reasonable at all that you can tell a human in a car that they do not need to drive or even pay attention to what the car is doing, as they must in Level 2, and then simultaneously expect them to take over upon request while the car is in motion on an active road.

I’ve just asked a lot of questions there, so it makes sense to try to find someone who can answer them; luckily, we know someone — an automated driving engineer with years of experience who currently works for a major automaker. Though this automated driving engineer’s current position at the OEM isn’t directly part of its AV program, he prefers to remain anonymous at this time; I can assure you we have confirmed his credentials.

I relayed my questions about L3, and asked him if my assessment of the clarity and specifics of this level were accurate, at least as he understood them.

“Absolutely. Level 3 is hot garbage, and potentially incredibly dangerous.”


While I can’t say I’m surprised, it’s not exactly fun to hear that from an expert.

The Level 3 Standard Is Not Clear About How or When to Hand Off Control to a Human

Our source agreed that the basic conditions of Level 3 are confusing at best, and confirmed that there really are no clearly defined parameters for what a successful handoff should be, how long it should take, or when and how it should happen.

He did mention that some potential failure situations do have a good degree of built-in failovers, and used the example of bad lane markings. If the lane markings disappear, most AV systems are able to extrapolate at least 50 to 120 meters ahead, giving about three seconds of continued safe driving, even with only vision data.
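As a quick sanity check on that three-second figure (my arithmetic, not the engineer’s), the time you buy is just the extrapolation range divided by your speed:

```python
def extrapolation_seconds(range_m: float, speed_kmh: float) -> float:
    """Seconds of lane geometry the system can coast on after the markings vanish."""
    return range_m / (speed_kmh / 3.6)

for range_m in (50, 120):
    for speed_kmh in (100, 130):
        t = extrapolation_seconds(range_m, speed_kmh)
        print(f"{range_m} m of extrapolation at {speed_kmh} km/h buys about {t:.1f} s")
# 120 m at 130 km/h works out to roughly 3.3 seconds, which lines up with "about three seconds."
```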

Other sensors that are fused into the overall perception system can help, too, such as GPS data and even following a car in front, and these together can buy up to 30 seconds of safe driving. Sensors can also self-report as they become obscured or marginalized in some way, so ideally there should be some degree of warning before they are determined to be inoperable.
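That self-reporting idea can be sketched as a simple per-source health check: each source reports a confidence, and the system asks the human to take over while it still has degraded-but-usable data rather than waiting for total failure. The source names and thresholds here are invented for illustration, not taken from any production system.

```python
# Hypothetical confidence scores for the fused sources mentioned above.
SOURCE_CONFIDENCE = {"camera_lanes": 0.45, "gps_plus_map": 0.80, "lead_vehicle_follow": 0.90}

DEGRADED = 0.60  # below this, a source is struggling but still usable
FAILED = 0.20    # below this, treat the source as gone

def handoff_urgency(confidence: dict) -> str:
    """Decide how urgently to request a takeover, based on self-reported source health."""
    if any(score < FAILED for score in confidence.values()):
        return "take over immediately"
    if any(score < DEGRADED for score in confidence.values()):
        return "request takeover now, while fused sources can still buy time (up to ~30 s)"
    return "nominal, keep driving"

print(handoff_urgency(SOURCE_CONFIDENCE))
```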

This is all good to hear, but it assumes that the takeover request happens because of some criterion or situation that is recognized and processed by the car’s sensors and related computing hardware. If that’s the case, that’s the best possible situation for a handoff. In reality, though, there is absolutely no guarantee a given automated driving system will even know it needs help in every situation.


While poor road markings and sensor obscuring are well-understood examples that are being compensated for, the real world is full of unexpected situations that can confuse an AV. Sometimes, even expected things, like, say, the moon can throw an AV for a loop, and in those cases where the need for handoff isn’t recognized by the car, there really aren’t any good clear procedures for how to best respond.

Maybe you’ll realize something is going wrong in time, if you’re paying some kind of attention. Maybe you won’t. If I had to guess, chances aren’t great that you’d be paying attention because, remember, an L3 system tells you you don’t have to.

Simultaneously telling a human in the driver’s seat that they do not need to pay attention unless, you know, they do, is a recipe for trouble. There’s no way this contradictory approach can end well, especially because the L3 specification makes absolutely no mention of what sort of failover procedures should be in place when the human is not able to take over in time–and that amount of time, again, isn’t clear at all.

Everyone is Just Guessing

I asked our automated driving engineer about this, and while he said that “processes are being designed,” he also added that the fundamental parameters of Level 3 are “poorly designed,” and as a result “everyone is just guessing” when it comes to how best to implement whatever L3 actually is.

“There is absolutely a lot of guessing going on,” he added, noting that there simply isn’t any real well of L3 driving data to work with at this time.


What Should Happen When A Handoff Fails?

Ideally, an automated vehicle should be able to get out of active traffic lanes and park itself somewhere as safe as possible when it needs a human to take over but the human does not do so.

At this point in automated vehicle development, no car on the market does this; the only currently implemented solution is for the car to slow to a stop in an active traffic lane and sit there, which is something nobody in their right mind would do.

This isn’t a viable solution moving forward.


I asked our automated driving engineer about this as well. He told me that while engineers have, of course, been talking about getting cars off the road in situations where a successful handoff doesn’t happen, he had not encountered any serious work on actually making that happen. That doesn’t mean no one is trying (I suspect there are efforts to figure it out), but our engineer felt that currently

“There isn’t an aggressive, industry-wide push to figure it [getting the car to a safe shoulder] out, because most companies don’t care about failover states as much as getting the cars to just stay in their lanes and not crash into anything.”

The vagueness of L3 and the lack of any serious development work on what AVs should do when they are no longer functioning properly reveal a fundamental flaw in how all automated vehicle development is currently progressing: most development effort is being put into how these systems should work, when the truth is that we’re at the crucial moment where effort needs to be directed toward determining how these systems will fail.

That’s what really matters right now; we can’t make any real progress towards vehicles that never require human intervention until we can make vehicles, or, significantly, overall systems for vehicles, that can manage them when they fail.

The Regulations That Allow L3 Cars on the Road Are Very Limiting

Previously, Mercedes-Benz had told me that UN regulations prevented them from doing anything but coming to a controlled stop in an active lane of traffic, citing this regulation:


5.1.5. If the driver fails to resume control of the DDT during the transition phase, the system shall perform a minimum risk manoeuvre. During a minimum risk manoeuvre, the system shall minimise risks to safety of the vehicle occupants and other road users.

[…]

5.5.1 During the minimum risk manoeuvre the vehicle shall be slowed down inside the lane or, in case the lane markings are not visible, remain on an appropriate trajectory taking into account surrounding traffic and road infrastructure, with an aim of achieving a deceleration demand not greater than 4.0 m/s².

Okay. I don’t think parking in an active traffic lane on a highway should be considered a “minimum risk manoeuvre,” even if they do spell “maneuver” all weird. It seems the German government would agree, as its recent automated driving regulations address this in a far more viable way: Level 3 systems are only permitted for highway travel, and then only at speeds of 60 km/h (37 mph) or less, which essentially means high-traffic, low-speed situations, like driving on the 405 in Los Angeles at pretty much any time.
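For a rough sense of what that deceleration cap means in practice (my arithmetic, not anything in the regulation), here’s what a stop at the maximum 4.0 m/s² looks like from the German 60 km/h limit versus ordinary highway speed:

```python
DECEL = 4.0  # m/s^2, the maximum deceleration demand in the UN regulation quoted above

def stop_profile(speed_kmh: float) -> tuple:
    """Return (seconds, meters) needed to stop from speed_kmh at the 4.0 m/s^2 cap."""
    v = speed_kmh / 3.6
    return v / DECEL, v * v / (2 * DECEL)

for speed in (60, 130):
    seconds, meters = stop_profile(speed)
    print(f"From {speed} km/h: about {seconds:.1f} s and {meters:.0f} m, in whatever lane you happen to be in")
# From 60 km/h: roughly 4.2 s and 35 m. From 130 km/h: roughly 9 s and 163 m.
```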

The 2021 law does require that faster automated driving systems, and those above L3, actually deal with non-reacting humans in a more reasonable way:

If the driver or the technical oversight does not react, the system is able (unlike in level 3) to autonomously place the motor vehicle in a minimal risk condition (e.g. stop the vehicle on a hard shoulder).

See the difference? One is a minimum risk maneuver: the car doesn’t take risks in the process of coming to a stop, but after that maneuver, all bets are off, because the car is potentially sitting there, blocking a lane on the highway. The new law specifies a minimal risk condition, which means the car ends up effectively out of harm’s way. That’s so much better.
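The distinction sounds like hair-splitting, but in engineering terms a minimal risk maneuver is an action while a minimal risk condition is an end state the system has to actually verify. Here’s a hypothetical sketch of the difference, with names that are mine and not the regulation’s:

```python
from enum import Enum, auto

class FailoverOutcome(Enum):
    STOPPED_IN_LANE = auto()        # maneuver completed, but the car may still block live traffic
    PARKED_OUT_OF_TRAFFIC = auto()  # condition reached: the car is out of harm's way

def describe(outcome: FailoverOutcome) -> str:
    # The maneuver wording only constrains how the car slows down;
    # the condition wording constrains where the car ends up.
    if outcome is FailoverOutcome.STOPPED_IN_LANE:
        return "minimum risk maneuver satisfied, hazard remains: a parked car in an active lane"
    return "minimal risk condition satisfied: out of active traffic"

print(describe(FailoverOutcome.STOPPED_IN_LANE))
```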

So, at this moment, Mercedes-Benz is able to release a vehicle it can claim L3 automation on because that automation is limited to low-speed, low-complexity situations where the inherent flaws of L3 itself aren’t likely to be too pronounced.

But it’s important to remember that they are still very much there. L3 is not a well-thought-out level, period. The human-machine interaction it specifies is at best vague and confusing, and at worst has the potential to put people in dangerous situations.


Our engineer also had a lot to say about the 37 mph limit that’s currently the law. He explained that such a limit effectively restricts these L3 systems to traffic jams, and mentioned that he felt that in these situations

“The driver doesn’t get to relax. They have to monitor constantly if the system is off or on–going over 37 mph will turn it off, for example–and the opinion of the engineers I work with is that no one will want this, given the cost.”
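His complaint is easy to picture: the feature’s availability is gated on speed, so the “you may relax now” state flickers on and off with traffic. Here’s a hypothetical sketch, with the 60 km/h threshold taken from the German rule and everything else invented:

```python
SPEED_CAP_KMH = 60.0  # the German L3 limit discussed above, roughly 37 mph

def l3_available(speed_kmh: float, on_approved_highway: bool) -> bool:
    """Hypothetical availability gate: L3 only stays engaged below the cap, on highways."""
    return on_approved_highway and speed_kmh <= SPEED_CAP_KMH

# Traffic drifts above and below the cap, so the driver's "allowed to look away" status
# toggles along with it, which is exactly the constant monitoring the engineer describes.
for speed_kmh in (25, 55, 62, 40):
    print(f"{speed_kmh} km/h -> L3 {'available' if l3_available(speed_kmh, True) else 'off'}")
```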

If there were any real overarching, industry-wide plan for automation, this would be the point where the technology, procedures, and regulations for letting these systems fail safely and in as controlled a way as possible would be under development. But they’re not.

Why Is It Like This?

The engineer we reached out to offered a bit of an explanation for why there was no overarching plan to figure out these hard but crucial issues at the beginning, and he said it had to do with how the industry tends to use something known as agile software development.

He described this as an “iterative approach to get to a minimum viable product, then improve the software with updates after the fact.” He also noted that a lot of the sticky safety and failover/handoff issues had yet to be resolved because so much of the industry is “still working on smooth lane changes.”


This agile approach may work great for the rapid development of software for our phones, but the difference between your phone crashing and your car crashing I think is generally agreed to be pretty significant.

Our automated driving engineer does think that some level of automated driving that approximates what the Level 3 standard describes is coming, but, realistically, he says that it’s at least “five years out. That’s for highway speeds, on highways only. I don’t see it sooner than that.”

It’s not like doing the hard, unglamorous work of managing AV failure situations is impossible, because it’s not. The auto industry works because of an incredible number of standards, practices, and procedures governing how cars are built, sold, licensed, and inspected, and the infrastructure they operate in.

We have traffic laws and road markings and traffic signals and safety equipment on cars, from bumpers to airbags to seat belts to those taillights I fetishize. If we’re really serious about automated driving, it’s time to figure out how we want to deal with these machines when they go wrong, and part of that process may very well be taking a hard look at automation levels like Level 3 and acknowledging the harsh reality that any system that can both demand nothing and then everything from a human is doomed to failure.

 
