Human Centric AI — Time for AI ‘Naturalization’?

Devendra Bharadwaj
12 min read · Nov 20, 2020
Image Credit: Freepik.com

AI now pervades almost every walk of our lives, and we seldom question the way it behaves or generates results. For some time, however, a strong need has been felt to make AI more responsible and mature, a need for more Human Centric AI. Because AI is still a growing field, we never found the time to review its characteristics and performance paradigms in general. Lately, though, the topic of Human Centric AI has attracted a lot of attention, for two reasons: flaws in AI-generated outcomes are now evident, and, more importantly, so is the way those outcomes have impacted people.

The debate on HAI gained attention when an investigative journalism report claimed that a computer program used by US courts for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was much more prone to mistakenly label black defendants as likely to reoffend, wrongly flagging them at almost twice the rate of white defendants (45% versus 24%). Another program, PredPol, which predicts for police departments the hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighborhoods because it was learning from previous crime reports. Many other episodes of AI bias, or so-called "technical glitches", have been reported lately: Google's image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in CEO searches; and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.

But how could such sophisticated AI systems flounder? To answer this, we'll have to step back and understand how AI systems have evolved.

Image Credit: Interactions.acm.org

Industries have so far focused on bringing more automation into their processes. With the advent of Industry 4.0, the approach is shifting towards Cyber Physical Systems (CPS): smart machines, storage systems and production facilities capable of autonomously exchanging information, triggering actions and controlling each other independently. The human interface started to be factored in while designing these smart systems. As they were deployed and began interfacing with humans, gaps at certain levels, such as human-like intelligence (including emotional intelligence), rationality, common sense and fairness, became evident. The proliferation of AI systems at ground level in all walks of life exposed their biggest shortcomings: a lack of empathy and an absence of societal behavioral awareness.

This led to a need to rethink how AI systems are built and what their performance parameters, or KPIs, should be.

Image Credit: University of Maryland

To address these gaps and give AI more human-like intelligence, researchers across the globe have been working to define Human Centric AI. Notable institutions like Johns Hopkins, Stanford and the Massachusetts Institute of Technology have already initiated research and are building prototypes to identify the parameters responsible for human-like intelligence in AI. A very concrete step in this direction has been taken by the European Union (EU), which formed the "Humane AI" community to develop the scientific and technological foundations for artificial intelligence that is beneficial to humans and humanity, in accordance with European ethical, social, and cultural values. While these institutions, corporations and governments have pursued Human Centric AI separately, the key characteristics they all agree on are: RST (Reliable, Safe & Trustworthy), empowering humans with new abilities for creativity, and being ethical, fair, reasonable and explainable.

Let's understand what led to expecting these key features from new-age AI.

Reliable, Safe and Trustworthy

This covers a wide variety of AI applications, from recommender systems suggesting movies to life-critical systems in planes and medical devices. To understand the Human Centric approach, we first have to uncover how current AI systems perform on the RST front. Most AI systems today function with a certain degree of failure, even the life-critical ones.

Take, for example, the smartphone that detects your face and unlocks itself. Suppose you are flustered, and as a result your phone's AI doesn't recognize you. The boundary of current AI in this case is restricted to identification and to acting according to its inherent rules, which don't cover the cognitive state of the user. Another example would be an L5 autonomous car: if, due to some issue with its autonomous functions, the car cannot determine an action for a scenario, it may end up crashing (as reported in self-driving car crashes involving Waymo, Tesla and others). In both cases the reliability of the AI system is limited, while in the latter safety and trustworthiness have gone for a toss.

If, on the other hand, these systems were built with a more mature Human Centric AI, the results would be starkly different. On top of face detection techniques, an HAI system could use bio-sensing to determine the true state of your face and probably unlock the phone, thereby not only being reliable but intelligently identifying you as the rightful owner. The HAI in your car would understand that it is unable to take decisions on its own and hand control over to you within a safety window. In both cases, HAI showcases both collaboration and delegation the moment it realizes it is not capable of the desired action or result, eventually ensuring RST.
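
As a minimal sketch of this collaborate-or-delegate behaviour (all thresholds, and the `bio_signal_match` input, are assumptions made up for illustration):

```python
# Hedged sketch: degrade gracefully instead of failing hard.
def unlock_decision(face_confidence: float, bio_signal_match: bool) -> str:
    if face_confidence >= 0.95:
        return "unlock"          # high confidence: act autonomously
    if face_confidence >= 0.70 and bio_signal_match:
        return "unlock"          # cross-check against bio-sensing
    if face_confidence >= 0.40:
        return "ask_for_pin"     # collaborate: fall back to the human
    return "stay_locked"         # delegate: refuse to act on weak evidence

print(unlock_decision(0.82, bio_signal_match=True))  # -> unlock
```

The same shape applies to the car: below a confidence floor, the only safe action is handing control back to the driver within the safety window.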

Empowering Humans

An evolutionary goal for HAI systems is to empower humans rather than replace them. AI systems have always been thought of as adversaries to humans when it comes to jobs. Future HAI systems will instead augment human capabilities, making people more efficient and helping them solve complex problems, which in turn may open new avenues. The other aspect is that HAI will learn how humans behave and mature itself to support humans in their activities. A good example is depicted in the movie "TAU", where the AI named TAU, when provided new information about humans, their history and their thinking, acquires human-like intelligence and behavior. This may be a long road for HAI, but it is definitely an evolutionary step.

Ethical

Current AI systems have been built to adhere to strict rules, so an outcome can be correct as per policy yet questionable from an ethical standpoint. This is particularly applicable where AI systems must deal with human well-being and protection. For example, an AI system may reject your admission or your appointment with your GP because you do not have enough medical cover, even though you may be in dire need of treatment, while approving the same for those who have cover. Though the AI followed its basic criteria, it did not consider other factors, such as recent medical history or the criticality of the treatment, before arriving at an outcome, thereby harming the end user. And though the outcome was good from the AI's 'perspective', it is unethical from the human angle. How, then, would an HAI system decide whether its outcome is ethical and ensure that the outcome is appropriate for the situation? This has been a topic of debate for some time, and a controversial one. Building an HAI system essentially means teaching it to behave morally, or as though moral. But who will decide what is moral and what is not? And, more importantly, will AI eventually be responsible for its outcomes?
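
As a toy illustration of the GP-appointment example (the fields, the 0-10 criticality scale and the threshold are all assumptions, not a real triage policy):

```python
# Hedged sketch: a policy rule that can be overridden by well-being factors.
def approve_appointment(has_cover: bool, criticality: int,
                        recent_emergency: bool) -> bool:
    """criticality: assumed clinician-assessed urgency on a 0-10 scale."""
    if criticality >= 8 or recent_emergency:
        return True       # human well-being overrides the coverage rule
    return has_cover      # otherwise fall back to the policy rule
```

The point is not this specific rule but that the ethical factors become explicit inputs to the decision rather than being absent from it.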

The EU Humane AI community has summed up the ethics expectations for HAI as follows:

i.) AI is aligned with ethical principles and human values

ii.) AI is responsible for any outcome that does not fit under the prevailing rule of law.

Though there have been some arguments against ethical AI, governments and corporations have started investing in building HAI with an ethical component. The UK has been at the forefront, setting up the Centre for Data Ethics and Innovation. Similarly, Google was one of the first companies to vow that its AI will only ever be used ethically, i.e. it will never be engineered to become a weapon. In April 2019, the European Commission published a set of guidelines for the ethical development of artificial intelligence, chief among these being the need for consistent human oversight.

Fair

Fairness is a very subjective term, and though multiple examples of AI behaving unfairly have been observed, researchers are still scratching their heads over how to define a fairness rule for AI.

AI typically has inherent biases not because of the way it is designed, but because of bias in the data fed to it. That data is mostly generated and labelled by humans, and bias creeps in unknowingly. So when a loan is granted to a 'White' applicant and rejected for a 'Colored' one (as depicted in a bank's case study), we blame AI for the outcome without having looked at the data that was fed in. AI is merely a tool that processes the data and identifies the 'hidden' pattern. The problem is magnified when AI is rewarded for such an outcome through a feedback loop.

Organizations have so far dealt with unfair AI outcomes through a different route: legal defenses and disclaimers. This is where the need for HAI was felt. A first step toward building a fair HAI is to exclude protected attributes such as sexuality and ethnicity from the data processed by AI, removing the possibility of inherent biases based on a human trait.
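
A minimal sketch of this "fairness through unawareness" step (the column names are assumptions), with the well-known caveat that correlated proxy features, such as a postcode, can still leak the protected information:

```python
import pandas as pd

# Hedged sketch: drop protected attributes before any model sees the data.
PROTECTED = ["ethnicity", "gender", "sexuality", "religion"]

def strip_protected(df: pd.DataFrame) -> pd.DataFrame:
    return df.drop(columns=[c for c in PROTECTED if c in df.columns])
```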

Explainable and Reasonable

It is still a difficult question to answer why an AI system chooses a certain outcome. An example usually quoted is why an AI system returns the picture of a black man when one searches for "gorilla". Most AI systems, especially those using deep learning, are something of a black box: it is very difficult, or nearly impossible, to put a finger on how the AI will process an input and yield a result. Among the thousands of parameters it generates, which are responsible for the unreasonable outcomes? Finding them is not only like finding a needle in a haystack but also futile, as these parameters change over the course of the learning process. The other aspect is AI being reasonable: able to comprehend the human thinking process, the nuances of language, and the perception of the environment the way humans do.
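
There are, nevertheless, model-agnostic ways to get at least a partial explanation. Here is a hedged sketch using scikit-learn's permutation importance on a synthetic stand-in dataset: features whose shuffling hurts accuracy most are the ones the model leans on.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for any tabular task.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in validation score.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```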

The guiding principle of HAI is not only having explainable and accountable AI, but also that humans should be able to seamlessly interact with it, guide it, and enrich it with uniquely human capabilities, knowledge about the world, and the specific user’s personal perspective.

A significant role in this process of learning to understand human thinking will be played by NLP (Natural Language Processing), through transformer-based models. These models process each word in relation to all the other words in a sentence, rather than analyzing words individually. Add the semantics of behavioral science, and this will lead to more nuanced psychological insights. The closest example is the chatbot Replika, which claims to interact with humans at the level of emotional intelligence.
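
For a flavor of what such models already expose, here is a hedged sketch using the Hugging Face `transformers` pipeline to read the emotional tone of a sentence (the default model the library downloads is an implementation detail, and this is not a claim about how Replika works):

```python
from transformers import pipeline

# The pipeline downloads a default pretrained sentiment model on first use.
analyzer = pipeline("sentiment-analysis")
print(analyzer("It's been a really tough day at work."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]
```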

Now that we have determined the key expectations from HAI, the next big question is how to convert them from concepts into a working framework and implementation. The figure below showcases an approach to HAI design. As discussed so far, HAI will need humans in the feedback loop, plus a thinking process and guidelines, as its key components to be able to understand and then interact with humans.

The above design will draw on machine learning, deep learning and reinforcement learning in varying degrees of implementation. Moreover, AI techniques alone will not suffice; the HAI system will also have to rely on the rules of behavioral science and cognitive engineering.

Let's try to understand the design through an example. Imagine you are riding home from the office in your fully autonomous self-driving car. It's been a really tough day, and you are a bit down. As you step into the car, it senses your mood and initiates a dialogue, asking about your day. Simultaneously, it is sensing the outer environment for detours and traffic snarls to ensure that your emotional state doesn't deteriorate further. And while it reads your tone and facial expressions, it recalls from its past records that certain genres of music may lift your mood. It engages you by playing those soothing songs and dimming the cabin lights, acting not as a mere machine but as a companion that cares for your well-being. To an extent, some of these elements are already present in the AI of luxury-segment cars in the form of personalization. HAI, however, goes one step further: rather than latching onto fixed personalized configurations, it learns from your everyday behavior and keeps enriching its knowledge about you.
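
Compressed into code, the scenario is a sense-infer-act loop. In this sketch every class and method is a hypothetical stand-in for a real subsystem:

```python
class Sensors:
    def estimate_mood(self) -> str:
        return "low"  # stub: a real system would fuse camera, voice and bio-signals

class Cabin:
    def play(self, genre: str): print(f"playing {genre}")
    def dim_lights(self): print("dimming lights")
    def chat(self, line: str): print(f"car: {line}")

def companion_loop(sensors: Sensors, cabin: Cabin, best_genre: dict) -> None:
    mood = sensors.estimate_mood()
    if mood == "low":
        cabin.play(best_genre.get("low", "ambient"))  # learned from past rides
        cabin.dim_lights()
        cabin.chat("Tough day? I'll take care of the ride.")

companion_loop(Sensors(), Cabin(), {"low": "soft jazz"})
```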

Inputs:

Just like current AI systems, HAI will have to rely on various inputs such as speech, vision and the environment. In addition, bio-sensing that provides an array of human emotional and physical states will be needed for HAI to take empathetic decisions.
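
One possible shape for such a multimodal input bundle; every field name here is an assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class HAIInput:
    speech_text: str            # transcribed utterance
    face_embedding: list        # visual features from the camera
    ambient_context: dict       # traffic, weather, location, ...
    heart_rate: float           # bio-sensing: physical state
    skin_conductance: float     # bio-sensing: arousal / stress proxy
```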

Processing and Temporal Storage:

An HAI system will not be able to perform based only on its ongoing learning. Rather, just as human memories are consolidated in the temporal lobe, it will have to depend on a dedicated store where references to verbal and non-verbal interactions, with their "good" and "bad" outcomes, are kept. Also, since HAI will eventually be judged on how well it understands humans, an Emotional Quotient index will supplement its decision making.
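
A hedged sketch of such an interaction memory; the structure, the outcome labels and the EQ field are assumptions:

```python
from collections import deque

class InteractionMemory:
    """Bounded episodic store of interactions and how they turned out."""
    def __init__(self, capacity: int = 10_000):
        self.episodes = deque(maxlen=capacity)

    def record(self, context: dict, response: str,
               outcome: str, eq_score: float) -> None:
        # outcome is "good" or "bad"; eq_score supplements later decisions
        self.episodes.append({"context": context, "response": response,
                              "outcome": outcome, "eq": eq_score})

    def bad_precedents(self, key: str) -> list:
        # retrieve past failures relevant to the current context
        return [e for e in self.episodes
                if e["outcome"] == "bad" and key in e["context"]]
```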

Action/Result:

The results can fall into any category: prediction, recommendation, inference or actuation of external systems. HAI will ensure that all these actions are guided by human-centric rules. An important path the results follow is feedback from humans; this path is a key HAI design factor and plays an important role in enriching the HAI experience.

To support this design, three main components have to be taken care of:

1.) Unbiased data: AI systems rely on huge volumes of data, and for HAI the bias in that data is of utmost importance. As discussed above, the data is collected, labelled and engineered by humans, making it susceptible to prejudices, assumptions and perceptions; this lets unconscious bias seep into the data. In current AI systems, if a class or category is not uniformly represented, resampling or class-weighting techniques are applied so that the model doesn't favor one class over another (the first sketch after this list shows the idea). Similarly, bias has to be taken care of at the input. How? One approach is to collect data based on public rather than private attributes; another is to collect data from multiple sources instead of a single one; and, most importantly, to ensure the data is collected using objective rather than subjective methods. This also applies to the knowledge repository that acts as a reference for maturing your HAI.

2.) Human Loop Training: Since the goal of HAI is to achieve characteristics more human-like than ever, a close partnership with humans is required for its training. Current AI systems are mostly trained in an open-loop mode, where models are trained and tested against a static set of data. With HAI, training will have to happen with humans in the loop: it will be more dynamic in nature and will rely on human feedback as one of its key inputs (the second sketch after this list shows one such loop).

3.) Human Index Validation: Lastly, the way models are tested for accuracy and relevance will need new dimensions. Instead of relying only on conventional accuracy metrics, new test KPIs will have to evolve, e.g. an Emotional Quotient scale to determine how well the HAI is acquiring human understanding (the third sketch after this list illustrates one).
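
For the class-imbalance point in component 1, a minimal sketch using scikit-learn's built-in class weighting on a synthetic, deliberately skewed dataset:

```python
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A 90/10 class skew stands in for an under-represented group in real data.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print(Counter(y))  # roughly Counter({0: 900, 1: 100})

# "balanced" reweights classes inversely to their frequency.
model = LogisticRegression(class_weight="balanced").fit(X, y)
```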
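
For component 2, one established pattern that approximates human-in-the-loop training is active learning: the model asks a human to label only the inputs it is least sure about. In this hedged sketch, `ask_human` is a placeholder for any real review interface, and the synthetic data stands in for a real stream:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=300, random_state=0)

def ask_human(i: int) -> int:
    return int(y_true[i])  # stand-in for a real human annotator

labeled = list(range(20))  # small human-labeled seed set
for _ in range(5):         # five feedback rounds
    model = LogisticRegression().fit(X[labeled], [ask_human(i) for i in labeled])
    proba = model.predict_proba(X)[:, 1]
    uncertain = np.argsort(np.abs(proba - 0.5))  # least confident first
    labeled += [int(i) for i in uncertain if i not in labeled][:10]
```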
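
And for component 3, as an illustration only, such a KPI could blend conventional accuracy with a human-rated EQ score; the 0.7/0.3 weighting is an arbitrary assumption:

```python
def hai_score(accuracy: float, eq_rating: float, w_acc: float = 0.7) -> float:
    """Both inputs on a 0-1 scale; eq_rating would come from human raters."""
    return w_acc * accuracy + (1 - w_acc) * eq_rating

print(hai_score(accuracy=0.92, eq_rating=0.60))  # -> 0.824
```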

The design and discussion of scalable, rational HAI systems has gained significant traction in the last two years, transcending not only reinforcement learning but also advanced neural-network concepts such as associative memories. It's only a matter of time before we see a true HAI system as a vital part of our lives.

References:

1. https://www.itpro.co.uk/technology/30736/what-is-ethical-ai

2. https://chattermill.com/blog/human-centered-ai-can-transform-your-customers-experience/

3. https://epublications.marquette.edu/cgi/viewcontent.cgi?article=1678&context=dissertations_mu

4. https://www.akin.com/

5. https://www.sciencedirect.com/science/article/pii/B9780444887405500165

6. https://www.researchgate.net/publication/49597039_A_Study_on_Associative_Neural_Memories

7. https://www.humane-ai.eu/research-roadmap/

8. https://medium.com/@mark_riedl/human-centered-artificial-intelligence-70b019f956d1

9. https://www.fujitsu.com/sg/vision/human-centric-ai/

10. https://interactions.acm.org/archive/view/july-august-2019/toward-human-centered-ai

11. https://developers.google.com/machine-learning/crash-course/static-vs-dynamic-training/video-lecture

12. https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/

13. https://thenextweb.com/neural/

14. https://www.strategy-business.com/article/What-is-fair-when-it-comes-to-AI-bias?gko=827c0

15. https://hcil.umd.edu/human-centered-ai-trusted-reliable-safe/

16. https://hbr.org/2020/08/how-to-fight-discrimination-in-ai
