People who are unable to communicate verbally need augmentative and alternative communication (AAC) to be understood and to interact with their surroundings. To improve the quality of life of these people, and to facilitate the work of guardians and other caregivers, there is a strong need for a solution that can translate non-verbal communication, consisting of sounds, facial expressions and body gestures, into something understandable. Today's methodology for interpreting the expressions of persons in need of AAC is based on written notes describing expressions and signs (indexical signs), together with hypothesis-based interpretation and response. Caregivers often assist many different people, making it extremely difficult to learn the expression repertoire of each individual client. This leads to misinterpretation, which results in frustration, often violent behavior, and resignation. Looking up written notes takes time, and the caregiver's response time affects how the communication is perceived. This innovation aims to develop a system that records sounds, facial expressions and body gestures and uses machine learning to interpret the compound expression in real time. The goal is to enable a quick and adequate response from the caregiver. The innovation shall be usable in everyday situations, both indoors and outdoors. This will ease the everyday life of caregivers, improve the quality of life of the person in question, and most likely contribute to a positive development of cognitive skills and an extended expression repertoire. The solution will thus support the UN Convention on the Rights of Persons with Disabilities. The innovation is unique in the market and challenges the state of the art in image sensor usage and machine learning, and hence requires extensive research. The final product will have a great impact on the company's growth and financial development. The project will be run in close cooperation with Sintef and Norsk Regnesentral.
Project leader: Vidar Solli
Institution: LIFETOOLS AS