Artificial intelligence (AI) offers benefits across many industries, but in medicine it raises serious legal and ethical concerns. Physicians and researchers who share AI predictions with patients must recognize the risks and proceed with caution.
FREMONT, CA: The use of artificial intelligence in medicine is generating great excitement and hope for future treatments.
AI generally refers to a computer's ability to imitate human intelligence and to learn. Using machine learning, for example, scientists are working to develop algorithms that help guide decisions about cancer treatment. They hope computers will be able to analyze radiological images and distinguish cancerous tumors that will respond well to chemotherapy from those that will not.
AI in medicine also gives rise to significant legal and ethical challenges, most of them concerning privacy, discrimination, psychological harm, and the physician-patient relationship. The following are some of the problems AI in medicine can create.
Potential for Discrimination
AI involves analyzing large amounts of data to discern patterns, which are then used to predict the likelihood of future events. In medicine, the data sets can come not only from electronic health records and health insurance claims but also from more surprising sources: AI can gather information about an individual's health from income data, social media, criminal records, and even purchasing records.
Researchers have already begun using AI to predict a multitude of medical conditions, including stroke, suicide, diabetes, heart disease, and cognitive decline. AI's predictive potential raises vital ethical concerns for the healthcare system: when AI generates a prediction about a person's health, that prediction can also be included in the person's electronic health record.
Therefore, if a person has cognitive decline or opioid abuse, or AI has merely predicted it, anyone with access to the health record can see it. Moreover, a patient's medical records are viewed by dozens of clinicians and administrators over the course of treatment. In some cases, patients themselves authorize others to access their records, for instance when applying for life insurance or employment.
Disclosures in medical records can lead to discrimination. Employers, for example, prefer employees who are healthy and productive, both to limit medical leave and to save on medical costs; seeing specific diseases in a candidate's medical record, they may reject that candidate. Lenders, life insurers, and landlords may likewise be reluctant to lend money or rent property to individuals with AI-predicted diseases.
Many data-broker giants are also mining personal data and engaging in AI activities. Such companies can sell medical predictions to third parties, including employers, life insurers, marketers, and many others. Because these businesses are not healthcare providers or insurers, they do not need a patient's permission to acquire the information, and they can disclose it freely.
Lack of Protections
AI predictions can also cause psychological harm. People can be traumatized, for example, by learning that they may suffer cognitive decline in the future. There is a strong chance that individuals will receive health forecasts directly from the commercial entity that bought their data. Few things could be more disturbing than discovering one is at risk of dementia from an electronic advertisement urging the purchase of memory-enhancing products.
By contrast, in the case of genetic testing, people are urged to seek genetic counseling so that they can make an informed decision about whether to be tested. Counseling also helps patients better understand their test results.
The concern with these predictions is that they are often far from reality, as many factors can contribute to errors. If the medical records used to train the algorithms are flawed, the algorithms' output will be flawed as well. Patients may then suffer needless psychological harm or discrimination when, in reality, they do not have the predicted ailments.
A Call for Caution
Physicians who provide their patients with AI predictions must educate them about the strengths and limitations of those forecasts. Experts have a responsibility to counsel their patients and protect them from mental distress and societal discrimination.