Group research: Human-Centric AI
Topic tags: Annotation platform, Datasets, Explainable AI, Knowledge graphs, Multilingual systems, Qualitative evaluation

The need for explainable human-centric AI
AI systems perform tasks that would normally require human intelligence and can accomplish them faster by rapidly consuming and analysing large amounts of complex data. That’s the theory. In practice, many instances of applied AI fail spectacularly. In some cases, we don’t even notice!
How does AI think?
AI is typically a black box: in many cases we don’t understand why AI systems make the decisions or predictions they do. This opacity becomes more pronounced as AI decision-making grows more complex. Without this understanding, we won’t know when AI gives us a wrong answer. Harnessing AI’s full potential requires human-AI collaboration, which combines the best of both worlds: the profound analysis and pattern-recognition capabilities of AI with empathetic human decision-making.
To accomplish this, NEC Laboratories Europe is developing explainable human-centric AI that will allow these systems to explain themselves to human users, empowering us to make better and more informed decisions.
The need for explainable AI
To deploy AI systems safely, especially in medium-risk and high-risk scenarios, we need explainable human-centric solutions that enable us to understand what a system learnt and why it decided or predicted what it did.
Harnessing human-AI collaboration
First, let’s examine what we mean by human-AI collaboration with an example of a biomedical researcher collaborating with an AI system (see Figure 2). The researcher wants to predict the side effects a patient may experience when two or more medical substances are taken at the same time.
The biomedical researcher’s first question is: “What happens if someone takes the drug paliperidone and the chemical element calcium at the same time?” A conventional AI system might provide a correct prediction, saying, “It will cause pain,” but would stop there. A noncollaborative AI system can’t answer the researcher’s next question, “Why do you think it will cause pain?”
A collaborative AI system would then explain that each substance activates a particular protein: paliperidone activates LPA, and calcium activates MMP. When the researcher asks about the relationship between those proteins, the AI system would also explain that LPA further increases the activation of MMP, which is already activated by calcium. Thus, paliperidone indirectly exacerbates the upregulation of MMP already caused by calcium, which leads to increased pain.
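As a purely illustrative sketch (not NEC’s implementation), such an explanation can be thought of as a path through a knowledge graph of substance-protein-effect facts. The triples and function below are hypothetical stand-ins that simply restate the relationships in the example:

```python
# Hypothetical facts from the example, stored as (subject, relation, object) triples.
FACTS = [
    ("paliperidone", "activates", "LPA"),
    ("calcium", "activates", "MMP"),
    ("LPA", "increases_activation_of", "MMP"),
    ("MMP", "upregulation_causes", "pain"),
]

def explain(start, target, visited=None):
    """Depth-first search for a chain of facts linking start to target."""
    visited = visited or {start}
    if start == target:
        return []
    for subj, rel, obj in FACTS:
        if subj == start and obj not in visited:
            rest = explain(obj, target, visited | {obj})
            if rest is not None:
                return [(subj, rel, obj)] + rest
    return None

# "Why do you think taking paliperidone and calcium together will cause pain?"
for substance in ("paliperidone", "calcium"):
    path = explain(substance, "pain")
    print(" -> ".join(f"{rel}({subj}, {obj})" for subj, rel, obj in path))
```

Running the sketch prints one explanation chain per substance, e.g. activates(paliperidone, LPA) -> increases_activation_of(LPA, MMP) -> upregulation_causes(MMP, pain), mirroring the dialogue above.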
Collaborative AI systems can increase user productivity when they interact naturally with humans and explain their predictions in a way that helps users accomplish their tasks.
Benefits of human-AI collaboration
Human-AI collaboration enhances human decision-making and delivers three major benefits:
- Increased human efficiency: By performing low-effort tasks, AI frees up time for other human activities.
- Better decision-making: AI provides requested information quickly to support actionable insights.
- Improved processing capabilities: AI detects big data patterns impossible for humans to discern, leading to AI-assisted breakthrough discoveries.
Why are human-centric AI explanations necessary?
A typical AI setup that does not provide explanations works like this: Given some training data, a machine learning process produces a learned function. That function provides a decision or recommendation to a user and the interaction ends. Important user questions go unanswered, such as: “Why was this decision or prediction made?” and “How sure is the model about it?”
In contrast, given some training data, an explainable AI machine learning process returns a model that provides explanations along with its decisions and predictions.
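To make the contrast concrete, here is a minimal, hypothetical sketch (not the article’s system): the same training data yields either a bare decision or a decision bundled with a confidence score and per-feature reasons. The feature names, toy data and function names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: whether a side effect was observed for a given dose,
# patient age and kidney function (all values are made up for illustration).
feature_names = ["dose_mg", "age", "kidney_function"]
X_train = np.array([[5.0, 70, 0.4], [1.0, 30, 0.9], [6.0, 65, 0.5], [0.5, 25, 1.0]])
y_train = np.array([1, 0, 1, 0])  # 1 = side effect observed

model = LogisticRegression().fit(X_train, y_train)

def predict_black_box(x):
    """Conventional setup: return only the decision, then the interaction ends."""
    return int(model.predict([x])[0])

def predict_with_explanation(x):
    """Explainable setup: decision plus answers to 'why?' and 'how sure?'."""
    proba = model.predict_proba([x])[0, 1]                       # "How sure is the model?"
    contributions = dict(zip(feature_names, model.coef_[0] * np.asarray(x)))
    return {"decision": int(proba > 0.5),
            "confidence": float(proba),
            "why": contributions}                                # "Why this prediction?"

patient = [4.0, 68, 0.45]
print(predict_black_box(patient))          # e.g. 1
print(predict_with_explanation(patient))   # decision, confidence and per-feature reasons
```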