About the project

RISK-AI is a Nordic–Baltic research project exploring how artificial intelligence can be used responsibly in healthcare. We focus particularly on AI for medical imaging and other clinical applications, where both the potential and the risks are significant. 

Building on the EU AI Act and international frameworks for trustworthy AI, the project introduces a new concept: Responsible Risk. This means aligning technical, clinical and regulatory perspectives on risk, so that AI can be used in ways that are safe, transparent and socially acceptable – for patients, healthcare professionals and society at large. 
 
In RISK-AI we will: 

  • Conceptualise “Responsible Risk” as a way to connect responsible AI principles with risk-based regulatory requirements in healthcare. 
  • Map and analyse governance models in the Nordic–Baltic region to understand how organisations respond to the AI Act and manage AI-related risks.  
  • Define thresholds for acceptable clinical risk and identify coping strategies for healthcare professionals and organisations when implementing AI tools in practice. 
  • Extend and update the MAS-AI framework (Model for Assessing the value of AI) by integrating Responsible Risk into its ethical and legal dimensions.  
  • Pilot the updated MAS-AI framework in real-world clinical use cases in Norway and Latvia, assessing AI technologies at different stages of their lifecycle. 
  • Build capacity and competence among policymakers, clinicians and other stakeholders through training, co-design workshops and practical tools for AI risk assessment and governance. 

Our ambition is to support trustworthy adoption of AI in healthcare by providing actionable guidance on how to understand, assess and manage AI-related risks in a responsible way, ultimately contributing to better and safer patient care in the Nordic–Baltic region.

RISK-AI is a three-year research project funded by NordForsk under the Nordic Programme on Health and Welfare. The project runs from January 2026 to December 2028 and brings together five core partners from Norway, Denmark, Finland and Latvia, including universities, e-health research centres and university hospitals.

The total requested funding for the consortium is approximately 25 million NOK.

Background 

The project builds its research on existing work in AI ethics, risk management and digital health, drawing in particular on international frameworks for trustworthy AI (e.g. the EU AI Act and the NIST AI Risk Management Framework) and on the validated MAS-AI framework for assessing the value of AI in healthcare. It further leverages extensive experience from Nordic–Baltic initiatives in digital health governance, health technology assessment and capacity building for healthcare professionals.

MAS-AI (Model for Assessing the value of AI) is a health technology assessment framework designed specifically for evaluating artificial intelligence in healthcare. It provides a structured way to assess not only the technical performance of an AI tool, but also its clinical, organisational, economic, ethical and legal implications. 

The framework consists of two steps covering nine domains and five process factors: 

Step 1 focuses on describing: 

  • the patient group  
  • how the AI model was developed
  • initial ethical and legal considerations 

Step 2 supports a multidisciplinary assessment of outcomes across five domains: 

  • Safety  
  • Clinical aspects  
  • Economics  
  • Organisational aspects  
  • Patient aspects

MAS-AI was developed by a multidisciplinary group of experts and patients at Odense University Hospital and the University of Southern Denmark. It has been validated and used in several international contexts, including: 

  • a Canadian evaluation of AI in healthcare  
  • prospective radiology studies in Denmark  
  • a clinical trial on AI-based management of operating theatres in Italy  
  • European-level modelling projects testing AI in clinical cases

In RISK-AI, we will update and extend MAS-AI by integrating the concept of Responsible Risk into its ethical and legal dimensions, and by piloting the updated framework in clinical use cases in Norway and Latvia.  

The EU AI Act 

The EU AI Act is the European Union’s comprehensive regulation on artificial intelligence, introducing a risk-based approach to governing AI systems. It aims to ensure that AI used in the EU is safe, transparent, non-discriminatory and subject to human oversight, while still supporting innovation and societal benefit.

Under the AI Act, AI systems are categorised by risk level, with the strictest requirements for high-risk systems – including many applications in healthcare, such as medical imaging, diagnostic support and clinical decision support. For these systems, organisations must comply with obligations related to: 

  • risk management and documentation  
  • data quality and bias mitigation  
  • transparency and explainability  
  • human oversight and accountability  
  • robustness, security and ongoing monitoring

RISK-AI responds directly to this regulatory context by examining how healthcare organisations in the Nordic–Baltic region can understand, assess and manage AI-related risks in line with the AI Act, and by integrating these requirements into the updated MAS-AI framework through the concept of Responsible Risk.
