Abstract
Explainable AI (XAI) is a field that develops methods for explaining, or at least justifying, the recommendations and actions of AI systems in ways similar to how humans do. However, current XAI methods tend to produce non-interactive results that are mainly (or only) understandable to AI engineers themselves. Social XAI (sXAI) is proposed as a name for XAI functionality that, at least to some degree, mimics how humans explain and justify their actions and opinions. sXAI methods should be interactive and should adapt their explanations to the explainee's background knowledge, interests and preferred way of receiving explanations, as well as to the pace of the interaction. The presentation shows how sXAI can be implemented using the Contextual Importance and Utility (CIU) method, and gives some reasons why we probably won't see sXAI happening anytime soon.
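As background for readers unfamiliar with CIU, the sketch below illustrates its two core quantities under simplifying assumptions (a single scalar output with a known global range, and one feature varied at a time): Contextual Importance (CI) measures how much varying a feature over its value range can change the output relative to the output's global range, and Contextual Utility (CU) measures how favourable the feature's current value is within that contextual range. The code, the loan-scoring toy model and all names in it are hypothetical illustrations, not the author's implementation.

    import numpy as np

    def ciu_one_feature(model, x, i, feat_min, feat_max,
                        out_min=0.0, out_max=1.0, n_samples=100):
        """Estimate CI and CU for feature i of instance x by sampling
        that feature over [feat_min, feat_max] while keeping the other
        features fixed. Assumes a scalar model output whose global
        range is [out_min, out_max] (e.g. a probability in [0, 1])."""
        samples = np.tile(x, (n_samples, 1))
        samples[:, i] = np.linspace(feat_min, feat_max, n_samples)
        outputs = model(samples)
        cmin, cmax = outputs.min(), outputs.max()
        y = model(x.reshape(1, -1))[0]
        # CI: how much of the output's global range this feature can cover.
        ci = (cmax - cmin) / (out_max - out_min)
        # CU: where the current output sits within the contextual range
        # (0.5 by convention if the feature has no effect here).
        cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
        return ci, cu

    # Hypothetical toy model: a "loan approval" score from income and debt.
    def model(X):
        income, debt = X[:, 0], X[:, 1]
        return 1 / (1 + np.exp(-(2.0 * income - 1.5 * debt)))

    x = np.array([0.8, 0.3])
    for i, name in enumerate(["income", "debt"]):
        ci, cu = ciu_one_feature(model, x, i, 0.0, 1.0)
        print(f"{name}: CI={ci:.2f}, CU={cu:.2f}")

A high CI with a high CU would support an explanation such as "income is important here, and this applicant's income is favourable", which is the kind of building block an interactive sXAI dialogue could adapt to the explainee.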
Speaker
Kary Främling is Professor of Data Science at Umeå University, with emphasis on data analysis and machine learning. He also heads the Explainable AI (XAI) team, and his core research focus is on Explainable Artificial Intelligence, notably so-called "outcome explanation", i.e. explaining and/or justifying results, actions or recommendations made by any kind of AI system, including neural networks (deep or not).