“AI systems must be carefully designed to reflect the diversity of socioeconomic and health-care settings and be accompanied by training in digital skills, community engagement, and awareness-raising. Systems based primarily on data of individuals in high-income countries may not perform well for individuals in low- and middle-income settings,” WHO wrote in its Ethics and governance of artificial intelligence for health (PDF) report. “Country investments in AI and the supporting infrastructure should therefore help to build effective health-care systems by avoiding AI that encodes biases that are detrimental to equitable provision of and access to healthcare services.”

WHO added that if “appropriate measures” are not taken when developing AI-based healthcare solutions, the result could be “situations where decisions that should be made by providers and patients are transferred to machines, which would undermine human autonomy”, and healthcare services could end up being delivered in “unregulated contexts and by unregulated providers, which might create challenges for government oversight of health care”. The report also points to other risks, including the unethical collection and use of health data; bias encoded in algorithms; and threats to patient safety, cybersecurity, and the environment.

The report is the product of 18 months of consultations by a panel of experts in ethics, digital technology, law, and human rights appointed by WHO. It noted that AI has enormous potential to improve healthcare and medicine worldwide, but only if ethical considerations and human rights are placed at the centre of its design, development, and deployment.
“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. “This important new report provides a valuable guide for countries on how to maximise the benefits of AI, while minimising its risks and avoiding its pitfalls.”
The report explained that AI could potentially improve diagnosis and clinical care, enhance health research and drug development, and assist with the deployment of public health interventions such as disease surveillance, outbreak response, and health systems management.
At the same time, using AI-based tools could help governments extend health care services to underserved populations, improve public health surveillance, and enable healthcare providers to better attend to patients and engage in complex care, the report said.
As part of the report, WHO has developed six principles that it hopes will serve as a basis for governments, technology developers, companies, and civil society organisations when developing AI for health. These include ensuring humans retain autonomy over healthcare systems and medical decisions, with privacy and confidentiality protected; maintaining human wellbeing and safety; maintaining transparency over the design, development, and deployment of AI technology; designing AI for health to encourage inclusiveness and equity; making AI applications responsive and sustainable; and holding the stakeholders involved in the design, development, and deployment of AI technologies responsible and accountable.
At the end of last year, the World Economic Forum released a report detailing how organisations can take an ethical approach to designing technology and using it responsibly.
The report, Ethics by Design – An Organizational Approach to Responsible Use of Technology, outlined three design principles that can be integrated to promote ethical behaviour in creating, deploying, and using technology: paying timely attention to the ethical implications of technology by building awareness through training and internal communication channels; developing organisational “nudges” such as checklists and due-diligence reminders; and weaving values and ethics into the company culture.
