The legitimacy of a decision made by a human being with the contribution of Artificial Intelligence is increasingly questioned.
Could this be a matter of vocabulary?
Artificial: “made or produced by human beings rather than occurring naturally, especially as a copy of something natural”, with the synonyms “feigned, insincere, false, affected, mannered, unnatural, stilted, contrived, pretended, put-on, exaggerated, actorly, overdone, overripe, forced, strained, hollow, spurious …”
Yet artificial objects are usually seen as a mark of man’s superiority over nature. When it comes to a lake, a flower, a lawn or even a heart, an artificial object is meant to make our life easier, and it carries the idea of security, efficiency and mastery. When it comes to Intelligence, on the contrary, “Artificial” conveys the opposite idea, along with most of the synonyms found in the dictionary, to which one can add all the myths of some of the most successful blockbusters: humans dominated by machines that acquired their power by smart trickery.
We tend to pit Human Intelligence against Artificial Intelligence, casting AI as a competitor to its human partner rather than as an efficient accomplice. Media sell more copies with AI threatening humans than with stories of human intelligence boosted by AI. The very few Tesla accidents under automated driving generate more buzz than any analysis of the safety improvements brought by self-driving cars.
On the other hand, we are fascinated and amazed by a fridge that can reorder the missing Haagen Dazs, or by a robot that does a backflip in a warehouse; but we like it much less when it does the warehouseman’s job, and we recoil in disgust when we are offered the assistance of AI at a conceptual level.
Paradoxically, it is precisely where AI can bring the most obvious benefits that business users demand more proof and more guarantees than ever before, more than they ever asked of data visualization or of predictive analytics. All of a sudden, users want to apply the Precautionary Principle. They reject black boxes; black boxes make them nervous. They feel they are losing control. And guess what? THEY ARE RIGHT! So they demand “Glass Boxes”. Hmm! Would that really solve the problem? Isn’t transparency a lure designed to mute the legitimate demand for justification?
Let’s put ourselves in the business user’s shoes!
What would access to the algorithms change in the trust relationship between the user and the machine? Nothing! First of all, because most of them are public and understandable only by a happy few: data scientists and those who published them. Does your grandmother care about the electronic air suspension of your car, provided you can drive her to the shopping mall in comfort? You do not care much either, but you can feel the difference and you know the basics of air suspension, and that is enough. Let’s keep the comparison going. Before buying, you read experts’ advice detailing the benefits of that particular technology, and you feel good about having it on your car. Did you ask for the middleware that regulates the pressure in the air bags before making your decision? Surely not. The same goes for Artificial Intelligence. A “Glass Box” is not the answer that will make the user comfortable with AI.
From our experience, the justification of a decision is strongly related to the traceability of the rules that led to it. Algorithms do not make mistakes, and they must be made easy to manipulate. That is the most obvious and easy part. But the most important part, what is likely to build trust between AI systems and users, what will bring legitimacy and justification, is the process of building the decision. First of all, the system must allow users to bring in their own experience and constraints: those few little things that are not in the database and that make all the difference! Moreover, a decision is never a solitary process. A business professional needs to share, exchange, let others play with his model and confront it with other experience. This is Collective Intelligence. How could a contributor challenge a decision when he has been part of the very process that shaped it?
At the end of the day, the user is right: no legitimate decision can come from a Black Box, but neither can it come from a “Glass Box”. A good decision is one that is both legitimate and justified. It always comes from a team that shared a common tool to elaborate it. This is why a decision can never be made by a machine alone. Only the human factor can turn a recommendation into a decision. Whatever capabilities a machine can show, they should never be a pretext to disempower the humans.
The contribution of collective and human intelligence is the only way to avoid the Black Box effect, because it merges with AI so deeply that all three end up so interwoven that they can no longer be separated. The inputs are experiences, intuitions, teamwork, the confrontation of expertise, and massive computation. With such an implementation, Artificial Intelligence does not divide people. On the contrary, it cements the organization by allowing people to share experience and expertise and to interact on established facts.
We have seen how the collective use of AI could open a new era of management, in which decisions are legitimated by a strong justification of the process that led to them: Management 4.0.
We also know that some decisions have to be left to a “Black Box” or to “Grey Boxes”, where the nuances depend mainly on the time frame of the decision. Even if AI helps people decide faster, a strategic decision built with AI will take enough time to let the stakeholders bring their input, understand the recommendations and agree on the implementation. At the other end of the spectrum, the contribution of AI to an automated car must be delivered in a fraction of a second. Between these two extremes, credit scoring, rolling mill tuning, lead generation systems, human resources absenteeism management and the like cover the whole range from a few minutes to several days, and it is commonly admitted that the shorter the time to make a decision, the more dominant AI will be.
Last but not least, AI is not static. Machine Learning makes AI an evolving proposition that draws lessons from experience. The shorter the time frame, the higher the contribution of Machine Learning’s continuous improvement process. This return on experience legitimates the decisions that are made.
Augmented Intelligence is the combination of Human Intelligence, Collective Intelligence and Artificial Intelligence. The legitimacy of its decisions, based on a strong collective justification, makes them easier for business professionals to adopt, because it puts the human factor back at the center of the organization, served by the machines.