Infusing public services with AI solutions can contribute to efficiency and effectiveness improvements, but it may also come with an increase in opaqueness. This opaqueness can limit the involvement of humans in shaping, operating and monitoring the arrangements that ensure meaningful human control. The responsible use of AI entails ensuring algorithmic intelligibility and accountability. Intelligibility means that algorithms in use must be intelligible as to their state, the information they possess, and their capabilities and limitations. Accountability means that it is possible to trace and identify responsibility for the results of algorithms. Both are required for using algorithms under human oversight.

The AI4Users project will contribute to the responsible use of AI through the design and assessment of software tools and the formalisation of design principles for algorithmic accountability and intelligibility. The project takes a human-centred perspective, addressing the needs of the different groups implicated in AI-infused public services: citizens, case handlers at the operational level, middle managers and policy makers. The novelty of AI4Users is that it specifically targets non-experts, extending the reach of research beyond AI experts and data scientists. The use cases employed by the project will address different oversight scenarios, including human-in-the-loop, human-on-the-loop and human-in-command.

The User Organisation will be NAV, and the project will be associated with NAV's AI lab. The project research will be linked to NAV's ongoing AI work and to specific AI solutions under deployment. The project will seek access to case handlers in local NAV offices and to NAV's permanent local and national user committees (NAV Brukermedvirkning lokalt & nasjonalt). The overall aim is to advance public infrastructures and contribute to introducing human-friendly and trustworthy artificial intelligence in practice.
Project leader: Polyxeni Vasilakopoulou
Institution: Institutt for informasjonssystemer