An Australian initiative is working to produce an online tool to help classify different types of artificial intelligence (AI) solutions used in Australian healthcare.
The dynamic, interactive tool will be a first-of-its-kind initiative, highlighting specific risks associated with bias, explainability, and robustness of AI within healthcare. Its aim is to support the safe and responsible use of AI.
The project is a collaboration between the Digital Health Cooperative Research Centre (DHCRC), the Department of Health and Aged Care, and two specialist AI teams within the University of Technology Sydney – UTS Rapido and the UTS Human Technology Institute.
DHCRC said the project complements efforts by governments, peak organisations, and clinical professional and safety bodies to ensure AI is deployed safely into healthcare settings.
The online tool will adapt the Organisation for Economic Cooperation and Development (OECD) AI Classification Framework to the Australian healthcare context and incorporate recent government policies, including proposed mandatory guardrails for AI across the Australian economy.
Endorsed by 46 countries, including Australia, the AI Classification Framework provides an internationally recognised baseline for classifying AI systems and, in turn, for assessing the effectiveness of national AI strategies.
The Australian project will road test the localisation of the framework with developers, deployers, and end users of AI solutions in healthcare.
The project team is aiming to have a basic web tool ready for testing by mid-2025.
DHCRC CEO Annette Schmiede said the “availability and adoption of AI is without doubt moving at a rapid pace across all sectors, including healthcare”.
“The challenge is building clear and consistent guidance and tools, and ensuring these are effective for the diverse range of audiences and AI solutions across healthcare, including developers, healthcare providers, and consumers.”
Professor Adam Berry, Deputy Director of the UTS Human Technology Institute, said the “tremendous promise” of AI “depends upon responsible practice”.
“A critical first step to realising that practice is to be consistent in documenting how individual AI systems are used, function, and deliver impact across diverse stakeholders. That consistency helps us build common approaches for assessing and addressing risk, and enables everyone to talk clearly and consistently about the use of AI, preserving the integrity of the tool.”