Tesla Model 3 Autopilot: The good, the bad, and the ugly
4 April 2018 - Autoblog
Real problems obscure some of the promise of a compelling system
Recently, we had the opportunity to test out the Tesla Model 3 in Silicon Valley. You can read our entire Model 3 first drive review here, but we thought that Autopilot was important enough to deserve its own discussion.
Here, as in other Teslas, Autopilot is essentially the combination of two driver assist systems. The first, which Tesla calls "Traffic-Aware Cruise Control," is mostly indistinguishable from other adaptive cruise control systems. The second is Autosteer. The hardware is robust: forward radar, eight cameras and 12 ultrasonic sensors detect vehicles all around the car. The system can see cars the driver can't, including, at times, two vehicles ahead.
Autopilot is a little different in the Model 3 than it is in the Tesla Model S or Model X. For one thing, there is no display directly in front of you to keep tabs on what the car is doing. That information is instead displayed on the Model 3's 15-inch central touchscreen. Same goes for the controls for maximum speed and following distance. The right stalk on the steering column — the same used as a gear selector — is used to turn Autopilot on and off. Press it down once to turn on cruise control, twice for Autosteer. That part's simple enough.
The rest is not intuitive, or easy to use on the fly. You can adjust maximum speed and following distance, but you have to go into the Autopilot menu on the central touchscreen to do that. Then you have to find the plus and minus icons, which are just about as big as the tip of one's finger. Because these controls are located off to the right, and since they're not physical buttons you can't operate them by feel, it's tricky to make these adjustments while under way.
Also on that screen is the display that shows you what the car's sensor suite sees, as well as the little blue steering wheel icon that lets you know when Autosteer is active. This is far less ideal than having that imagery directly in front of the driver, as it is in the Model S and Model X's digital instrument panel.
We fumbled through our time with the Model 3's Autopilot system. With no display directly ahead of us, we had to take our eyes off the road to take stock of what the system was and wasn't handling. We're still the one in control, the one responsible if the car leaves its lane and hits something. Technically, we're still the one driving the car.
Once we were used to it, operating and keeping tabs on Autopilot got a little easier, but the system still let us down a couple of times. Holding the turn signal down will initiate an automatic lane change if the Model 3 detects it's safe.
The problem is, after requesting an automatic lane change, the Model 3 didn't seem to be able to determine whether it was really safe to move over. One time we signaled to move over, but the traffic in that lane had slowed down. Instead of waiting or aborting on its own, the Model 3 braked far harder than we were comfortable with while trying to make its way into a gap in the traffic. We aborted that one manually rather than see if we'd end up in someone's tailgate, or have the same happen to us. Another time, we radioed to colleagues in another car and had them pull into our blind spot. They had warning of what we were trying, thankfully, so when the Model 3 tried to merge directly into their vehicle they were able to brake and avoid it.
On numerous occasions, Autosteer disengaged for whatever reason, and we missed the quiet chime indicating it. When we loosened our grip on the steering wheel and signaled, expecting an automatic lane change, the Model 3 just drifted in the lane until we realized the little blue icon wasn't lit. A more distinctive and obvious audio or visual cue would have gone a long way toward letting us know when the car decided it was our turn to take over completely.
This is not a self-driving vehicle. Far from it. It seems misleading, or even dangerous, to call it a semi-autonomous system considering how vague a level of automation that descriptor implies. Even so-called self-driving cars with safety drivers behind the wheel as backup have failed tragically, as we saw with Uber in Tempe. Being lulled into the sense that the car can drive you is a looming catastrophe until the technology has been sufficiently tested, proven and publicly deployed. Even Tesla's owner's manual calls Autopilot a "beta" system, essentially making owners guinea pigs in a rolling laboratory.
Autopilot is assisted driving at best — a tool that helps the human operator drive better with less effort. In that capacity, Autopilot should be the safest, easiest tool to use it can possibly be. In this iteration, it fails. There's simply too much guesswork involved, and too little information readily accessible (i.e., in front of your face, or via better audio cues) to let you know what functions the car is currently performing.
Until the car can be fully relied upon to take over driving duties without interruption, it simply needs to provide more information more readily to the driver, and make its limitations crystal clear — and often.
For contrast, let's take a look at Cadillac's Super Cruise semi-autonomous system. It has its limitations, but there's a clear break where the car's duties end and the driver's begin. The sounds and flashing, colored lights on top of the steering wheel are almost impossible to ignore, and the warnings come early. Plus, the fact that the car actually monitors your attention with cameras pointed at your face is a huge difference here. We simply felt more aware of our surroundings, our duties and our vehicle's self-driving capabilities with Super Cruise than with Autopilot, despite the freedom to go hands-free.
We're convinced that current and near-future technology is capable of making self-driving cars a reality. It's even close to being safe and practical, despite the mix of completely meat-driven, dinosaur-burning cars piloted by distracted teenagers out there. There's an evolution that has to happen to educate rider-drivers sufficiently, to get them to learn how to work with an automated vehicle. Ideally, that will mean keeping humans in complete control (with some assistance) until Level 4 automation is truly ready. For now, that still means being held responsible for the vehicle, even when it's out of your hands. If a vehicle is sold with partial automation — which is problematic in its own right — it needs to have an interface that provides simple, readily available information with seamless, early transitions between human and robot driving.
Software updates could provide solutions to a lot of the Model 3's Autopilot problems. Better audio cues and simpler, more intuitive controls (maybe move some of them to the steering wheel's scroll wheels [update: Tesla did just that earlier this week in an over-the-air update, though we haven't had the chance to test it out]) would go a long way. But the problem remains that there is no easily seen display in front of the driver. And there is no attention management system currently in place. The current hardware can't provide the former, and it's unclear whether it can provide the latter. There is that one little camera by the rear-view mirror, but Tesla hasn't been forthright about its uses or capabilities.
Regarding the usability of Autopilot in the Model 3, software can't deliver the physical changes this car truly needs, chiefly a head-up display, if not an instrument panel. That would require a design refresh from Tesla. An aftermarket HUD could help, but it likely couldn't incorporate the visualization of Autopilot functions as well as a factory setup. We ought to also consider that putting a "beta" system with a name that implies autonomy in an untrained customer's car is a questionable move. As it currently exists, approach Autopilot with caution. You are still the driver.