Recent advances in Artificial Intelligence (AI) hold promise for significant benefits to society in the near future. However, before AI systems can be deployed in social environments, industry, and business-critical applications, several challenges related to their trustworthiness must be addressed. Most recent AI breakthroughs can be attributed to the subfield of Deep Learning (DL), but, despite their impressive performance, DL models have drawbacks, among the most important being a) lack of transparency and interpretability, b) lack of robustness, and c) inability to generalize to situations beyond their past experiences. Explainable AI (XAI) aims to remedy these problems by developing methods for understanding how black-box models make their predictions and what their limitations are. The call for such solutions comes from the research community, industry, and high-level policy makers, who are concerned about the implications of deploying AI systems in the real world with respect to efficiency, safety, and respect for human rights.

EXAIGON will advance the state of the art in XAI by conducting research in four areas:

1. Supervised learning models
2. Deep reinforcement learning models
3. Deep Bayesian networks
4. Human-machine co-behaviour

Areas 1-3 involve the design of new algorithms and will interact continuously with Area 4, to ensure the developed methods provide explanations understandable to humans. The developed methodologies will be evaluated in close collaboration with 7 industry partners, who have provided the consortium with business-critical use cases, including data, models, and expert knowledge. The consortium also includes two international partners, from the University of Wisconsin-Madison and the University of Melbourne, who have conducted and published outstanding research in relevant areas over the last few years.
Project leader: Anastasios Lekkas
Institution: Institutt for teknisk kybernetikk (Department of Engineering Cybernetics)