The Road to Autonomy: Impact of human factors on automated driving

By Lisa Dorn
Associate Professor of Driver Behaviour, Cranfield University

Lisa's main research interests relate to the transactional component of driver behaviour and the design of behavioural interventions. She is currently principal investigator on a Horizon 2020 project to develop telematics-based driver coaching interventions, and on a UKRI grant, running until 2024, to investigate behavioural adaptation in response to autonomous systems. In an exclusive webinar with QBE, she explored the impact of human factors on automated driving and the implications for the insurance industry.

Human vs. driver behaviour

People often say that ‘personality changes behind the wheel’, but personality is a stable characteristic of the individual and hardly changes over a lifetime. Moods, however, can change quickly in response to what is going on in your life, to traffic situations and to other road users. Drivers often feel comfortable expressing their feelings behind the wheel in a way they are less likely to do in other settings. One reason for this is that there are few social consequences for emotional outbursts when driving.

One of the most interesting patterns of behaviour I've observed in the driving space is how human biases operate. When we think about our own driving, we naturally tend to forgive and forget the things we’ve done badly, like getting in the wrong lane or pulling out in front of another road user we failed to see. The complete opposite applies when drivers think about other people’s driving: they pay far more attention to the mistakes of others, and this can lead to stereotyping. Generally, we underestimate risk and overestimate our own ability, and this needs to be tackled if we want to improve driver safety.

Levels of autonomy

The Society of Automotive Engineers classification system (SAE International, 2018) outlines the different levels of autonomy within vehicles, from Level 0 (no driving automation) to Level 5 (fully autonomous vehicle). Level 1 vehicles provide driver assistance and are now widespread. Level 2 vehicles are already here too, equipped with partial automation systems that combine two or more advanced driver assistance systems. They include Tesla Autopilot, Mercedes-Benz Drive Pilot and Volvo Pilot Assist, with automated functionalities such as Autonomous Lane Keeping (ALK), Adaptive Cruise Control (ACC) and Autonomous Emergency Braking (AEB).

Level 3 vehicles take the technology one step further: they are capable of driving from a starting point to a destination without human involvement, but only in certain conditions, and the human must be ready to intervene at any time. The Audi A8 is arguably a Level 3 vehicle. The next step is a Level 4 vehicle, which is considered fully autonomous but may not be able to operate in severe situations such as very bad weather. No vehicle at this level of autonomy is currently available to consumers. At Level 5, the fully self-driving vehicle can operate autonomously in every situation and condition, and human input is removed altogether. The expectation is that automated systems will reduce human error and crash rates, but autonomy at Levels 2 and 3, and possibly 4, is likely to be a dangerous intervening stage on the way to full autonomy. Level 5 vehicles will start to appear many years further down the road.
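
To make the taxonomy easier to scan, here is a minimal sketch of the levels as a Python enum. The level names and the helper function are my own paraphrase of the descriptions above, not SAE J3016's wording.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels (paraphrased, for illustration only)."""
    NO_AUTOMATION = 0           # human performs the entire driving task
    DRIVER_ASSISTANCE = 1       # a single assist feature, e.g. ACC alone
    PARTIAL_AUTOMATION = 2      # combined steering + speed control; human supervises
    CONDITIONAL_AUTOMATION = 3  # system drives in limited conditions; human is the fallback
    HIGH_AUTOMATION = 4         # no human fallback, but limited operating conditions
    FULL_AUTOMATION = 5         # drives in every situation and condition

def human_role(level: SAELevel) -> str:
    """Rough summary of what the human must do at each level, per the text above."""
    if level <= SAELevel.PARTIAL_AUTOMATION:
        return "drive and/or continuously monitor the system and the road"
    if level == SAELevel.CONDITIONAL_AUTOMATION:
        return "be ready to intervene at any time"
    return "no driving role required"

print(human_role(SAELevel.PARTIAL_AUTOMATION))
```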

Impact of human factors

The webinar considered the effect of automated driving on driver behaviour with reference to a fatal crash in Mountain View, California, which involved a Level 2 Tesla Model X using the ‘Autopilot’ function. Tesla’s ‘Autopilot’ is able to “steer, accelerate and brake automatically within its lane”, with responsibility for the driving task split into sustained lateral and longitudinal vehicle motion control (performed by the vehicle) and Object and Event Detection and Response (performed by the operator). ‘Autopilot’ requires the operator to monitor both the system and the road ahead and to be prepared to take over full control at any time. This shifts the driver’s role from being actively engaged in the driving task to that of a passive, out-of-the-loop supervisor. Human factors research has repeatedly shown that the human ability to manage critical situations when automation fails is much more impaired than during manual driving. The detrimental effect on the quality of driving immediately following a handover can be seen for up to 40 seconds. Research has also demonstrated a ‘startle’ response to automation failures, leading to an impaired ability to respond to a takeover request. And what do drivers do when they are not monitoring the system and the road? The Mountain View crash data suggest that the driver was playing a mobile phone game with their hands off the steering wheel for extended periods before the crash happened (NTSB, 2018). If the human in the driving seat is not monitoring the vehicle or the road, how far can they be expected to intervene when required?
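
To make the division of responsibility concrete, here is a minimal, hypothetical sketch in Python of a Level 2 session: the vehicle owns sustained motion control, the operator owns OEDR and supervision, and a takeover request shifts the whole task back to the human. The class and method names are invented for illustration and do not correspond to any real vehicle API.

```python
import time

# Human factors finding cited above: driving quality is degraded for up to
# roughly 40 seconds after control is handed back to the driver.
POST_HANDOVER_IMPAIRMENT_S = 40.0

class PartialAutomationSession:
    """Hypothetical model of the Level 2 division of the driving task."""

    def __init__(self):
        # The vehicle performs sustained lateral and longitudinal motion control.
        self.vehicle_tasks = {"lateral_control", "longitudinal_control"}
        # The operator performs Object and Event Detection and Response (OEDR)
        # and supervises the automation.
        self.operator_tasks = {"OEDR", "system_monitoring"}
        self.handover_at = None  # timestamp of the last handover, or None

    def takeover_request(self):
        """Automation reaches its limits: the whole task shifts to the human."""
        self.operator_tasks |= self.vehicle_tasks
        self.vehicle_tasks.clear()
        self.handover_at = time.monotonic()

    def driver_still_recovering(self) -> bool:
        """True while the driver is inside the post-handover impairment window."""
        return (self.handover_at is not None and
                time.monotonic() - self.handover_at < POST_HANDOVER_IMPAIRMENT_S)
```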

Role of technology

Technological innovation in vehicle automation is progressing faster than standards or regulatory bodies can keep up with. A concern for the insurance industry is that certification of one vehicle's technology would not necessarily transfer to another, even one classified at the same automation level. For example, different partially automated systems vary in their ability to read lane markings and perform with differing degrees of success in challenging conditions, such as bright sunlight or rain. There is also a lack of consensus about which type and modality of alerts and warnings are most effective, no governance on the inclusion of these features, and no assurance that manufacturers would comply even if evidence-based guidelines were in place. In the Mountain View crash, the driver received two visual alerts and one auditory alert some time prior to the fatal incident. Even if these captured the driver’s attention at the time, the fact that the driver subsequently crashed suggests the effect was short-lived. It also appears that the vehicle provided no collision avoidance alerts in the seconds leading up to the crash. This illustrates the combined vehicle and human factor weaknesses inherent in automated driving. The road to autonomy could be extremely bumpy as the number of automated vehicles on our roads increases.
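
The escalation pattern just described, visual alerts first and then an auditory alert, can be sketched as a simple hands-off monitor. Everything below, from the thresholds to the alert sequence, is invented for illustration and does not reflect Tesla's or any other manufacturer's actual logic.

```python
from dataclasses import dataclass, field

# Invented thresholds: two visual alerts, then one auditory alert, echoing the
# escalation pattern described for the Mountain View crash. Real systems'
# timings are proprietary and vary by manufacturer.
ESCALATION = [(5.0, "visual"), (10.0, "visual"), (15.0, "auditory")]

@dataclass
class HandsOffMonitor:
    """Escalates alerts the longer the driver's hands stay off the wheel."""
    hands_off_s: float = 0.0
    issued: list = field(default_factory=list)

    def tick(self, dt: float, hands_on_wheel: bool) -> list:
        """Advance time by dt seconds and return any newly triggered alerts."""
        if hands_on_wheel:
            self.hands_off_s = 0.0
            self.issued.clear()
            return []
        self.hands_off_s += dt
        new = [(t, kind) for t, kind in ESCALATION
               if self.hands_off_s >= t and (t, kind) not in self.issued]
        self.issued.extend(new)
        return [kind for _, kind in new]

monitor = HandsOffMonitor()
for _ in range(16):  # one tick per second, hands off the wheel throughout
    for alert in monitor.tick(1.0, hands_on_wheel=False):
        print(f"{monitor.hands_off_s:.0f}s: {alert} alert")
```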

Impact on insurance industry

Whilst we are a long way from seeing Level 4 and 5 vehicles on our roads, the webinar showed that there are already major risks associated with Level 2 vehicles, and Level 3 vehicles are just around the corner. Underpinning the problems of driver distraction and inattention is the extent to which a driver trusts the technology. As trust develops with increasing autonomy, the human operator's ability to maintain attention may become more impaired and engagement in secondary tasks more likely. Drivers must develop the necessary competencies prior to and during use of automated systems, given their new role in the vehicle as a supervisor. Driver assessment and training interventions are the gold standard that fleet-based companies use to mitigate the risks of their workforce driving primarily Level 0 and 1 vehicles, but there is currently no regulation mandating training for automated vehicles at Level 2 and beyond. The onus may be on the insurance industry and fleet-based companies to put the right systems in place to ensure drivers develop the competencies they need to drive automated vehicles.

References

  • National Transportation Safety Board. (2018). Collision Between a Sport Utility Vehicle Operating With Partial Driving Automation and a Crash Attenuator, Mountain View, California. Washington, DC: NTSB.
  • SAE International. (2018). Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (SAE Standard J3016).

Lisa Dorn is an Associate Professor of Driver Behaviour at Cranfield University and has been a principal investigator on research projects funded by the Home Office, EU, EPSRC, ESRC and industry for over 20 years. Her main research interests relate to the transactional component of driver behaviour and the design of behavioural interventions. Lisa is currently principal investigator on a Horizon 2020 project to develop telematics-based driver coaching interventions, and on a UKRI grant, running until 2024, to investigate behavioural adaptation in response to autonomous systems.