About

Project Overview

The emergence of the new European Artificial Intelligence (AI) Act underscores the importance of taking a “Human Rights by Design” (HRbD) approach when developing AI-based systems. For instance, among its requirements, the AI Act stipulates the use of Fundamental Rights Impact Assessments (FRIAs) carried out at the onset of high-risk digital initiatives (e.g., health and social care, law enforcement, and education). However, a significant gap exists between the legal and software engineering worlds, making it challenging to operationalise a human rights-centred strategy that goes beyond mere legal compliance and truly supports companies in embedding human rights throughout the design process, from software requirements to deployment. Today, FRIA methodologies operate only at a “policy level” of assessment, which is too vague and abstract to communicate effectively with software teams of developers and architects. This superficial approach often results in such assessments being carried out by a legal team in complete isolation, without truly influencing the software design.

The project aims to investigate engineering strategies that help organisations and software professionals operationalise human rights in software design by adapting and refining existing methodologies. Here, we adhere to a broad definition of the term “design”, comprising many phases of the AI software lifecycle, from data collection and verification to the deployment and maintenance of systems. Specifically, existing methods for FRIAs, risk assessments, and other threat modelling techniques can be employed at the earlier stages of digital health initiatives and evaluated in practice, ensuring that they can meaningfully inform software practitioners and other stakeholders.

We foresee three main goals that will guide the work activities in this project:

This multidisciplinary project brings together experts from Computer Science (CS) and the Centre for Gender Studies (CGF) at Karlstad University (KAU), assembling a strong consortium to co-produce research on operationalising human rights in AI-based digital health, with an optimal composition for addressing the research goals. The consortium includes a highly innovative technology company (Mavatar), a law firm specialising in AI and data protection (inTechrity), and a multinational pharmaceutical organisation (Pfizer). We also include a public sector institution connected to the digitalisation of social care in Sweden, i.e., the Swedish Association of Local Authorities and Regions (SALAR).

As a final underlying goal of the OPEHRA project, we aim to translate research findings into guidelines that industry partners can use internally and disseminate broadly across the industry sector, benefiting digital health players in Sweden. Our research findings will be openly available in line with open science best practices (i.e., data will be made FAIR – Findable, Accessible, Interoperable, and Reusable). A dedicated online repository will archive guidelines, industry briefs, and other research artefacts (e.g., data, documentation, games, and other learning tools), enhancing the research impact.