The world has changed

Authors

  • Guillermo Schor-Landman, Fundación Iberoamericana de Telemedicina, Buenos Aires, Argentina; Universidad de Buenos Aires; Universidad Internacional de Valencia

DOI:

https://doi.org/10.5377/alerta.v5i1.13209

Abstract

Information and communications technologies (ICT) and the COVID-19 pandemic have transformed the way we live and coexist.

ICT have modified habits, customs, and practices, facilitating access to more modern and efficient services in the new scenario of the information and knowledge society. They have transformed the field of health with technological solutions for research, medical services, patient care, and administration, thanks to the volume of data now collected, which can be analyzed in a timely manner through computer applications, especially those based on artificial intelligence (AI).

AI was developed in the 1950s, but only in recent years have sensors been created that are capable of collecting the volume of data available today. The COVID-19 pandemic, a global health emergency, has broken down the cultural barriers that had delayed the intensive use of technological tools in the field of health.

The need to understand, prevent, and provide quality care has naturally driven the adoption of digital health, defined by the World Health Organization (WHO) as the use of ICT for health.

Electronic medical records, teleconsultations, electronic prescriptions and, above all, AI for research and health care are now common terms.

Given the benefits they provide to the population, governments have an obligation to digitize health care quickly. But since every human creation generates risks, and only the best practices should be retained once the pandemic is overcome, governments must also regulate digital health as soon as possible, especially AI, establishing ethical limits on its applications.

Despite the important benefits that technological development provides, it also creates exposure to risk, which may affect fundamental rights and the safety of people through errors in development, discriminatory bias, or bad faith. It must therefore be regulated, especially regarding the proper use of data and the attribution of liability for damages.

Regulation is not prohibition. It should not be a brake on innovation, but should establish limits that support the proper use of technology.

Artificial intelligence

Artificial intelligence is a term that encompasses computer systems capable of perceiving their environment, reasoning, learning, and making decisions; that is, they can operate autonomously based on the data they receive and on objectives defined in algorithms1. AI enables machines to learn from experience, adjust to new contexts, and act similarly to how a human would.

AI is expected to be as transformative as the advent of the Internet a few decades ago, since all productive sectors are being influenced by this technology: health, agriculture, industry, commerce, services, education, culture, entertainment, etc.

A drastic reduction in the years nations need to double the size of their economies is also expected, depending on each country's capacity to implement AI in its infrastructure2.

However, just as AI can be a fundamental tool for development, it can also widen the gaps between countries and people. Implementing ethical principles is therefore essential.

Ethics of artificial intelligence

The ethics of artificial intelligence is a branch of ethics that analyzes and evaluates the moral dilemmas arising from the deployment of this technology in society. The adoption of applications capable of making decisions on their own continues to raise numerous ethical questions. Public administrations must anticipate and prevent potential future harm through a culture of responsible innovation, developing and implementing AI systems that are fair, safe, and, consequently, reliable.

Potential risks

Risks associated with the adoption of AI systems include the destruction of jobs; the manipulation, insecurity, and vulnerability of software and hardware; intrusions into privacy; a widening of the digital divide between countries and people; and the erosion of civil society through the manipulation of information, among others.

Biases can appear in AI programming, and this is where ethics acquires a crucial role. Just as every person has biases, so do the developers of intelligent applications, though these biases may not become apparent until errors accumulate in the system.

Possible solutions

For AI to be trusted and to meet the challenge of these risks, the following aspects must be defined:

  • AI life cycle. The stages of the life cycle of these systems range from research, conception, and development to deployment and use, passing through maintenance, operation, marketing, financing, monitoring and evaluation, validation, end of use, disassembly, and termination3.
  • Actors in the AI life cycle. A person who participates in at least one stage of the life cycle of an AI system may be a researcher, programmer, engineer, data specialist, end user, company, university, or public or private entity, among others. Defining these concepts is essential to clearly identify those responsible, whether natural or legal persons, in the event of damage, in order to generate the trust that the development of AI requires.
  • Human supervision. An essential requirement for reliability is that, throughout their life cycle, AI systems be subject to permanent monitoring by actors, users, institutions, and governments, as appropriate. When a decision is understood to have an irreversible or difficult-to-reverse impact, or may involve matters of life and death, the final decision should not be handed over to AI systems and should always be made by a human being.

The European Commission proposes an approach aimed at excellence and trust based on the level of risk that AI may carry, establishing four categories with different levels of control for each: «unacceptable risk», whose applications are prohibited; «high risk», which requires approval prior to use and regular internal and external controls; and «limited risk» and «no risk», which are subject to voluntary codes of conduct4,5.

Finally, the words of Audrey Azoulay, Director-General of UNESCO, are worth recalling: «The world needs rules so that artificial intelligence benefits humanity»6.


Published

2022-01-27

How to Cite

Schor-Landman, G. (2022). The world has changed. Alerta, Revista Científica del Instituto Nacional de Salud, 5(1), 3–5. https://doi.org/10.5377/alerta.v5i1.13209

Section

Editorial