As those who are following us in GRASP know, one of our primary interests is the future of work, and particularly how it is being transformed by the increasingly immaterial forces in our lives. One earlier blog entry, for example, dealt with this subject. Another blog entry dealt with the technology known as machine learning, a branch of Artificial Intelligence (or simply “AI”) that is making impressive strides in many areas today. In fact, the strides are so impressive that the European Union has begun to worry. Is this all going to spin out of control? What effect will it have on society? What effect will it have on the future of work? The leading countries (for now) in this technology – the United States and China – don’t seem to be particularly worried, but Europe is. After recently establishing leadership in the protection of privacy with its General Data Protection Regulation, Europe now has the ambition to establish leadership in the ethics of artificial intelligence – a “kinder and gentler AI”, so to speak. So it’s an appropriate time for us to revisit the question of AI and the future of work for a quick update.
The subject here is a kind of manifesto that was recently published. An independent High-Level Expert Group (HLEG) set up by the European Commission issued a document in April of 2019 entitled “Ethics Guidelines for Trustworthy AI”. The document immediately positions itself by evoking the concept of “human-centered AI”, and one thread that runs throughout the exposition is the idea of transparency. We have already touched upon this in a narrower context in a previous blog post: machine learning technology produces programs and behaviors that could be dangerous to rely on in critical applications like self-driving vehicles on highways, because nobody understands what they do – or, even more problematic, what they’ll do next. That is: what will the car do if it finds itself faced with a choice between running over, say, the old person who suddenly appeared in front of it, or swerving off the road into a ditch and risking the lives of all the passengers? This is not just a technical issue, but also an ethical issue – and a surprisingly old one, too. More than 50 years ago, the Trolley Problem was posed, and it became essentially the genesis of current efforts to confront the ethical problems of decision-making in the context of autonomous driving.
Here is another example of the issue of transparency in AI. We’re all well aware of the big headlines in the newspapers (as well as speeches by political candidates) warning that AI could put us out of work by taking over our jobs. That is, some types of work that in the past could only be performed by humans – intellectual work, not just manual labor – could be carried out by devices equipped with artificial intelligence: customer-care robots, automated legal assistants, robo investment advisors. But not many have considered another aspect – that AI might cost you your job because it may well be a robot who is interviewing you for that job. There are recruiting apps out there that take care of tasks like reading job applications and deciding which ones deserve further study. It’s bad enough that an AI application might decide not to give you a job – especially since there is some evidence that machine learning programs can absorb biases against underprivileged groups from the training data they ingest. But it’s even worse when you don’t even know that it was a robot who denied you that chance.
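To see how a screening program can absorb bias without anyone programming it in, here is a minimal sketch with invented data: a naive model that simply learns the historical hire rate for each applicant attribute will faithfully reproduce whatever bias those past decisions contained. The feature names and numbers below are hypothetical, not drawn from any real recruiting system.

```python
from collections import defaultdict

# Hypothetical training data: past hiring decisions, where an attribute
# (here, which school the applicant attended) acts as a proxy for group
# membership. Past decisions favored school_A applicants.
history = [
    ("school_A", True), ("school_A", True), ("school_A", True), ("school_A", False),
    ("school_B", True), ("school_B", False), ("school_B", False), ("school_B", False),
]

def train(records):
    """Learn a 'score' for each attribute: its historical hire rate."""
    counts = defaultdict(lambda: [0, 0])  # attribute -> [hires, total]
    for attribute, hired in records:
        counts[attribute][0] += int(hired)
        counts[attribute][1] += 1
    return {attr: hires / total for attr, (hires, total) in counts.items()}

model = train(history)
# The model ranks school_A applicants higher purely because past
# decisions did - the bias is inherited, not designed.
print(model["school_A"])  # 0.75
print(model["school_B"])  # 0.25
```

Real recruiting systems are of course far more sophisticated, but the mechanism is the same: a model trained on biased outcomes learns the bias as if it were a pattern worth predicting, even when the sensitive attribute itself is hidden behind a proxy.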
This is where the “human-centered” guidelines of the European Commission’s expert group step in. These guidelines state that we have a right to know when we are being “processed” by AI rather than by a real human being. That is, we have a right to transparency about whether AI is involved in the decision-making process. Furthermore, the guidelines admonish us to “… pay particular attention to situations involving more vulnerable groups such as children, persons with disabilities and others that have historically been disadvantaged or are at risk of exclusion, and to situations which are characterized by asymmetries of power or information, such as between employers and workers.”
There will be a lot of AI in the future of work. Overall, the balance is probably positive – AI has much to offer. But the European Commission has done something very necessary by expanding the scope of oversight beyond the previously narrow boundaries. Stay tuned for further developments.