About

Project Overview
The emergence of the new European Artificial Intelligence (AI) Act underscores the importance of taking a “Human Rights by Design” (HRbD) approach when developing AI-based systems. For instance, among its requirements, the AI Act stipulates that Fundamental Rights Impact Assessments (FRIAs) be carried out at the onset of high-risk digital initiatives (e.g., health and social care, law enforcement, and education). However, a significant gap exists between the legal and software engineering worlds, making it challenging to operationalise a human rights-centred strategy that goes beyond mere legal compliance and truly supports companies in embedding human rights throughout the design process, from software requirements to deployment. Today, even FRIA methodologies work only as a “policy-level” assessment, which is too vague and abstract to communicate effectively with teams of software developers and architects. This superficial approach often results in such assessments being carried out by a legal team in complete isolation, without truly influencing the software design.
The project aims to investigate engineering strategies that help organisations and software professionals operationalise human rights in software design by adapting and refining existing methodologies. Here, we adhere to a broad definition of the term “design”, comprising many phases of the AI software lifecycle, from data collection and verification to the deployment and maintenance of systems. Specifically, existing methods for FRIAs, risk assessments, and other threat modelling techniques can be employed at the early stages of digital health initiatives and evaluated in practice, ensuring that they meaningfully inform software practitioners and other stakeholders.
We foresee three main goals that will guide the work activities in this project:
- Goal 1 - Landscape Mapping of Human Rights Engineering Approaches for AI in Digital Health – As part of the initial steps of the project, we plan to systematically map the existing approaches for integrating human rights into the design of AI-based digital health solutions. A taxonomy of engineering approaches (i.e., theories, methods, tools, and techniques) will be the primary synthesis of results. We also plan to map this taxonomy to current standards on AI risk management and algorithmic bias (e.g., NIST AI RMF and ISO/IEC 42001, 23894, 22989, and 23053), thereby providing a holistic “engineering view” of human rights for AI-supported systems. The taxonomy will also help us identify research gaps, such as areas of the AI lifecycle that are under-researched or lack rigorously evaluated human rights-enhancing approaches.
- Goal 2 - Operationalising HRIAs in the Design Process – This activity will apply Fundamental and Human Rights Impact Assessments (FRIAs/HRIAs) to key case studies provided by the industry partners. FRIAs and HRIAs are emphasised in the OPERAH project because they can be leveraged as rigorous assessment methods to identify (i) risks of harm related to a system and (ii) suitable mitigating controls (i.e., drawing on the taxonomy proposed in Goal 1).
- Goal 3 - Co-Producing Training and Awareness Programmes with Industry – AI literacy has become a legal obligation (Article 4 of the EU AI Act). Thus, to complement HRIAs, we envisage employing other methods and techniques, such as human rights threat modelling and gamified card-based tools for human values and responsible AI (e.g., PLOT4AI and the DDC Digital Ethics Compass). All these methods and tools can support training and awareness-raising activities within organisations and can even serve as in-depth risk and threat modelling exercises conducted as part of an HRIA. We can co-produce and validate our materials by conducting industry workshops on human rights-centred approaches in AI-based digital health for awareness raising and training.
This multidisciplinary project brings together experts from Computer Science (CS) and the Centre for Gender Studies (CGF) at Karlstad University (KAU), assembling a strong consortium to co-produce research on operationalising human rights in AI-based digital health, with an optimal composition for addressing the research goals. The consortium includes a highly innovative technology company (Mavatar), a law firm specialising in AI and data protection (inTechrity), and a multinational pharmaceutical organisation (Pfizer). We also include a public sector institution connected to the digitalisation of social care in Sweden, i.e., the Swedish Association of Local Authorities and Regions (SALAR).
As a final underlying goal of the OPERAH project, we aim to translate research findings into guidelines that industry partners can use internally and disseminate broadly across the industry sector, benefiting digital health players in Sweden. Our research findings will be openly available (i.e., following open science best practices, data will be made FAIR: Findable, Accessible, Interoperable, and Reusable). A dedicated online repository will archive guidelines, industry briefs, and other research artefacts (e.g., data, documentation, games, and other learning tools), enhancing the research impact.