Human factors and moral dilemmas in the rush towards big data
A combination of technological advances, including faster computer processing speeds, denser and more capable storage, and higher-bandwidth connectivity, has opened up a new world for big data: the application of smart algorithms.
Such data processing techniques enable users to infer patterns and preferences, to identify opportunities and risks for businesses and projects, and to bring ‘artificial-intelligence-lite’ to more and more users.
It’s a thrilling prospect, though as Alison Holt, founder of consultancy Longitude 174 points out in a recent white paper for the London School of Economics, it also requires good governance, a sound ethical framework and an understanding of human factors.
The benefits are not necessarily one-sided in favour of the organisations commissioning the data analysis. As a producer of data, you might find that, in exchange for knowing your location, a provider points out directions and waypoints, hazards and opportunities.
In the public sphere, some providers might pay to promote their services, such as a ship chandler that wants you to know it has the spare parts you need. Many container line customers can already track shipment status using data-enabled services developed and deployed at low cost.
There have been some great wins for society from altruistic work with big data; for example, applying these techniques to neonatal mortality has resulted in a breakthrough in treating infections in premature babies, saving lives.
But what if these big data techniques fall into the wrong hands and instead of being used to drive new business lines or save lives, they are put to more nefarious purposes? In our daily lives on the internet we leave a trail of digital footprints, digitally kicking over information collection points on the way.
It’s possible for a company to purchase this information, check out our friends and contacts, judge our political or personal preferences, and use the data to redirect, confuse, disillusion or generally discourage us from doing anything from shopping to voting.
In the maritime industry, the ability to misdirect or obfuscate is hardly new; in fact, it has been used for commercial advantage for centuries. The tools may change, but the risk remains that data users are directed away from real sources of information and opportunity towards illusory ones.
“It’s the same data, used for different purposes, and with different value propositions for the person-in-the-street and the organisation commissioning the data analysis. Big data techniques are neutral, but how we apply these techniques and their underpinning technology determines whether we are a force for good or a force for bad. It is the human element that determines whether the techniques are applied in a way that benefits society,” says Holt.
The other human element that must be considered is how good or smart the algorithms actually are. Analysts know that ‘average behaviour’ can mask or distract attention from outliers, and that linking datasets together carries the risk of jumping to incorrect conclusions.
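The point about averages masking outliers can be illustrated with a small sketch (the numbers and the five-MAD threshold here are hypothetical, chosen only to show the effect): a single anomalous reading drags the mean well away from typical behaviour, while a robust measure such as the median still reflects it.

```python
import statistics

# Hypothetical sensor readings: five normal values and one anomaly.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 55.0]

mean = statistics.mean(readings)      # 17.5 -- pulled far from typical behaviour
median = statistics.median(readings)  # 10.05 -- robust to the single bad value

# Flag values far from the median, using the median absolute deviation (MAD)
# as a simple robust yardstick; the 5x multiplier is an arbitrary choice.
mad = statistics.median([abs(x - median) for x in readings])
outliers = [x for x in readings if abs(x - median) > 5 * mad]
print(outliers)  # [55.0]
```

Relying on the mean alone, an analyst might conclude the typical reading is around 17.5 and never notice the anomaly; comparing against a robust baseline surfaces exactly the value that deserves scrutiny.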
Smart algorithms are only as smart as the humans who develop them; if software developers could simply be paid to produce perfect software, there would be no failing billion-dollar IT-enabled projects around the world. It is the human element that determines our success or failure.
As a species, says Holt, we get very excited about new technology, game-changing opportunities presented by state-of-the-art technology platforms and innovative techniques for the development or delivery of services. Businesses, governments and organisations all want to make a difference in the world and for customers, citizens and users. However, Holt thinks they should proceed with caution.
She points to the excitement about the potential for driverless cars and publicity suggesting they will be safer than cars driven by a human. “Eventually that is very likely to be the case. However, for now we should take a moment to remind ourselves that the software written to drive the car and the algorithms created to make decisions come from humans working from a list of likely events and working to their own moral and ethical codes.”
Holt believes industry and individuals alike should make the most of the opportunities of the big data revolution. However, if we ignore the fact that new technologies are created, programmed, deployed, run and maintained by human beings, who are not advancing at anything like the rate of the technology itself, then we are doomed to disappointment and failure.
“At the very least, we need to apply standards for the governance of data used across our organisations and we need to think through the moral and ethical implications of what we are doing. If we don’t, we face a global data crisis where sufficient devices across our inter-connected digital world are working from untrusted, unreliable or unsuitable data as to cause serious harm to our businesses and institutions,” she warns.
It doesn’t have to be this way. If the necessary governance and ethical frameworks can be developed, there is the potential to come up with solutions for problems that were previously out of reach and to deliver data-enabled products and services that delight and inform, enhance work productivity and business sustainability.