Wednesday, February 8, 2023

Machines, Like the Devil, Are in the Details

A well-known English proverb says that the devil is in the details, and something similar happens with artificial intelligence: it can base its decisions on subtleties, anecdotes, or details that are irrelevant "in our eyes", and sometimes do so better than we can. Let me explain.

The most common form of machine learning is based on a training data set that serves as an example of what is to be learned. In supervised learning, the training set is made up of pairs consisting of an input (say, a picture of a cat) and the output the system should produce once trained (the category "cat"). Thus, to design a system capable of diagnosing COVID-19 from chest X-rays, the training set can consist of thousands of X-rays from healthy and sick people, labeled accordingly.
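As a minimal sketch of this idea, a supervised training set is just a collection of (input, label) pairs. The feature vectors below are hypothetical stand-ins for chest X-rays, not real data:

```python
# A supervised training set: (input, label) pairs.
# Inputs are hypothetical pixel-feature vectors standing in for X-rays;
# labels are the answers the trained system should reproduce.
training_set = [
    ([0.1, 0.2, 0.9], "healthy"),
    ([0.8, 0.7, 0.1], "sick"),
    ([0.2, 0.1, 0.8], "healthy"),
    ([0.9, 0.6, 0.2], "sick"),
]

inputs = [x for x, _ in training_set]   # what the model sees
labels = [y for _, y in training_set]   # the supervision signal
print(labels)
```

The learner's only job is to map the first element of each pair to the second, and to keep doing so for inputs it has never seen.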

Algorithms learn from these data sets by adjusting the values of thousands upon thousands of parameters in their complex mathematical models, so that, after training, they can adequately reproduce the expected response for the training examples. And, above all, so that they give the correct answer for new cases that are more or less similar to the training set, though not identical to it. Following the previous example, if the system has learned the task correctly, it will be able to diagnose new patients from their chest X-rays.
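To make the generalization step concrete, here is a deliberately tiny sketch: a 1-nearest-neighbour classifier that labels a new case by the most similar training example. The feature vectors are hypothetical; real systems fit millions of parameters instead, but the goal is the same — answer correctly for inputs that resemble, without matching, the training set:

```python
import math

# Hypothetical feature vectors standing in for chest X-rays.
training_set = [
    ([0.1, 0.2, 0.9], "healthy"),
    ([0.2, 0.1, 0.8], "healthy"),
    ([0.9, 0.7, 0.1], "sick"),
    ([0.8, 0.6, 0.2], "sick"),
]

def predict(x):
    """Label a new case by its nearest training example (1-NN),
    a stand-in for the parameter fitting of real models."""
    nearest = min(training_set, key=lambda pair: math.dist(x, pair[0]))
    return nearest[1]

# A new case that is similar, but not identical, to the training examples:
print(predict([0.85, 0.65, 0.15]))  # -> sick
```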

We learn in a similar way, but only up to a point. Our ability to perceive and act in the world, including in professional settings, is incomparably superior to that of machines, and ours is also a general-purpose intelligence. However, machines properly designed to solve specific problems can outperform us: they beat us at chess and at detecting tumors in medical images, to name just two well-known examples. Of course, depending on how the training set is chosen, we can be in for some surprises.

Going back to the COVID-19 example, a system designed at the San Raffaele University in Milan learned to diagnose the disease by, among other things, exploiting a clear correlation between the date printed on the chest X-ray and the probability that the patient had COVID-19. Most patients admitted to the university hospital in 2020 had COVID-19, so the admission date became a good diagnostic signal for the machine, however circumstantial and unimportant that information would be to a human specialist. What is more, the machine can learn to detect COVID-19 on X-rays better than a radiologist without knowing what lungs are or what they do.
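This kind of shortcut is easy to reproduce on synthetic data. In the sketch below (invented records, not the Milan data), the label is strongly correlated with the admission year while the "image" field is pure noise, so a one-rule model that never looks at the lungs at all still scores high:

```python
import random

random.seed(0)

# Synthetic patient records: during the 2020 wave most admissions were
# COVID-positive, before it few were. "image_noise" carries no signal,
# so any accuracy must come from the admission date.
def make_record(year):
    covid = random.random() < (0.9 if year >= 2020 else 0.1)
    return {"year": year, "image_noise": random.random(), "covid": covid}

records = [make_record(random.choice([2018, 2019, 2020]))
           for _ in range(1000)]

# A one-rule "model" that diagnoses from the date alone:
def shortcut_model(record):
    return record["year"] >= 2020

accuracy = sum(shortcut_model(r) == r["covid"] for r in records) / len(records)
print(f"accuracy from the date alone: {accuracy:.0%}")
```

A learning algorithm rewarded only for accuracy has no reason to prefer the lungs over the date; it simply latches onto whichever feature predicts the label best.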

Among the hundreds of attempts to design COVID-19 diagnostic systems from chest X-rays, there have been other cases in which the systems learned to pay attention to features that are only circumstantially relevant, such as the patient's posture (the most severe patients are more often X-rayed lying down), or the handwritten annotations on X-rays taken at reference hospitals during the pandemic, where the number of admissions of sick people was high.

Machines, too, sometimes turn the anecdote into the category, all the more so when they use mathematical models capable of latching onto any detail that might be useful for the objective we set them, whether or not that detail makes logical sense for solving the problem. The fact is that, at this point, we have machines that can know a lot but think very little.

Nation World News Desk