A few bumps in the road
So great is the noise being generated around the subject of autonomous ships that it feels as though robot vessels could be plying the seas any day. The truth is that the shipping industry is still many years from such a development.
Even their champions point out that autonomous control systems are likely to appear first on small ships in coastal trades and most agree that the change is more likely to be transition than big bang. The recent successful testing of a COLREGS-compliant prototype by US researchers is an interesting development but the amendment of SOLAS to allow an ‘unmanned’ ship does not seem to be on the IMO’s agenda.
And as the complexity of computer-based safety systems grows, the nagging concern is whether the humans who operate them have the skills they need and an organisational structure that understands the implications for loss prevention, hull insurance and environmental protection.
Advocates of autonomous vessels often cite the airline industry as the model to follow thanks to its impressive safety record. But it also provides sobering parallels of what happens when things go wrong.
When BA pilot Peter Burkill made an emergency landing in a Boeing 777 at Heathrow in 2008, he was initially praised as a hero. On finding he did not have enough power from the engines to land the plane, he quickly realised he would have to break with procedures in order to minimise the risk to life.
Using his experience, skill and judgement, he managed to crash-land the plane without serious injury. He subsequently admitted to thinking that had he been wrong and disaster ensued, he would be blamed because of that decision.
Anyone who imagines that, in the aftermath of a serious maritime casualty, the master and officer of the watch would be given similar benefit of the doubt has not been keeping up with the news. But then, the outcome for Burkill was not much better.
In 2010, UK media reported that Burkill had been ‘forced out’ of BA by rumours he had ‘frozen’ moments before the crash. He was turned down, without a single interview, for jobs with 10 airlines and told that being involved in the crash had left a permanent stain on his reputation as a pilot.
It’s typical of the problems that face navigators in a world where the human-machine interface is becoming more complex, according to Dr Mark Nicholson, Research and Training Fellow in the Department of Computer Science at York University.
“In that situation, what we should be asking is what was it about the procedures that made him decide to do the ‘wrong’ thing for the right reason. He purposely violated procedures and was right to do so. Subsequent simulator tests found that every pilot who followed established procedures crashed the plane.”
Nicholson says this change in behaviour is one the industry will have to come to terms with as man-machine interaction becomes more complex and nuanced.
“There is a realisation now that safety is not just about one element, it’s about getting the technology and the human beings to work together and having the organisational structures in place to support them,” he says. This is causing ‘a huge change in the way companies work’, but the experience of high reliability industries such as nuclear power shows that the collapse of hierarchical models and their replacement with systems of interaction is inevitable.
If indeed shipping is going to move from human control to use of autonomous systems, he argues the key issue is to try and work out which elements we want to keep control over. And long before then, humans will already be coping with a changed relationship to machines and will be required to make many more decisions about the information they are presented with – as well as the reliability of the system behind it.
Navigators will increasingly be told what to do by machines, he says, even though such systems are far from perfect. Learning how to spot errors and over-ride systems will become critical.
“It’s a big change in the job of a navigator. They are still going to be controllers, based on the data and information they see. But if the installation is incorrect they could be swayed to do the wrong thing. It will no longer be good enough just to blame the operator,” he says.
Nicholson admits that understanding the interaction of the three elements is a complex task as each has the potential to contribute to a poor decision. But as ships talk to each other more and more, the over-riding need is to give the right information to the human and let them make decisions, even when machine errors occur.
Two of the most important characteristics of a safe system are trust and reliance. The former is a decision to accept information as presented, thereby opting out of being the controlling entity. The latter is where the user lacks the capability to check information and so becomes reliant.
“In an extreme example you completely abdicate responsibility. When you talk about automated systems and vehicles, that is what you are doing. But how far along the line do we want and need to go, and does the organisation understand what it is asking? Has it given users the right level of authority to discharge that responsibility?” he asks.
The issue for the near term is coping with the increased level of trust users are going to have to give machines and how much reliance is going to have to be in place. Nicholson thinks that levels of both will change and navigators and organisations alike will have to learn to deal with an evolving scenario.
“The more a safety-related information system is used, so the level of reliance and trust changes. It is not a one-time deal,” he says, adding that an organisation has to build flexibility into the system in order to understand when users have moved away from procedures, just as the BA pilot did. “It will change through time and users have to adjust to that. Do organisations look at those systems and their interactions? I would suggest not.”
It’s at this point, if not before, that the three elements come together: through building smarter systems, increasing education for the human, changing the organisation’s working practices – or a combination of all three. As the processing power of technology increases and this type of human-machine interaction rockets, so the need becomes more urgent.
The immediate challenge is how to turn ‘analogue people’ into ‘digital people’. This may not be hard for the Nintendo generation but Nicholson says their mental models of what safety systems are there to do may need to be adjusted. As a new generation of e-navigators comes on-stream, their experience needs to be rapidly turned into new practices, procedures and methodologies within training regimes and organisational structures.
“E-navigators will need to learn about how systems and functions work, which we call mode awareness; what a system is doing, why and whether it can be changed,” says Nicholson. “If you don’t understand how it works and what mode it’s in, you can’t interact with it and it rapidly gets out of control.”
It’s a lesson that shipping has started to take onboard, and Nicholson says the critical point is to understand how the nature of the job is changing and that the knowledge and skills needed are changing too. Education and organisational structures have to change to reflect that. The alternative is that navigators stick to what they know and hope for the best.
“The old pilots’ joke is that they don’t fly anymore, but they can type really quickly, because all they need to do is type the route into the computer and it does the rest for them. The experience of the BA captain shows how wrong that thinking is.”