I came across a reference to a paper by researchers at Cambridge University's department of artificial intelligence applied to medicine, describing a method for selecting which patients should receive donated organs using an artificial intelligence algorithm. It improves on the current standard applied by humans, which, of course, is also based on various mathematical models.
The bottom line: the researchers found that their machine learning algorithm (called OrganITE) produced better results in simulations that included a mix of real data and carefully balanced hypothetical data from the past 26 years. These decisions are complicated, both for ethical reasons and because there are several factors to weigh: the type of organ, how rare it is, how many years of life it can give the recipient, the likelihood of rejection, and so on. The new algorithm sometimes makes selections that look peculiar, but they are the ones that maximize the total number of years of life gained by the recipients, which seems like a good idea and one that is socially acceptable without much controversy.
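To make that objective concrete, here is a minimal sketch in Python of a greedy policy that offers each newly available organ to the compatible candidate with the largest predicted gain in life-years. This is not the method from the paper: the patient fields, the compatibility check, and the benefit estimate are all simplified assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Patient:
    name: str
    blood_type: str
    life_years_with_transplant: float     # hypothetical model prediction
    life_years_without_transplant: float  # hypothetical model prediction

@dataclass
class Organ:
    blood_type: str

def is_compatible(organ: Organ, patient: Patient) -> bool:
    # Grossly simplified: real matching also considers HLA typing, size, etc.
    return organ.blood_type == patient.blood_type

def expected_benefit(patient: Patient) -> float:
    # Life-years gained if this patient receives the transplant now.
    return patient.life_years_with_transplant - patient.life_years_without_transplant

def allocate(organ: Organ, waitlist: List[Patient]) -> Optional[Patient]:
    """Offer the organ to the compatible patient with the largest predicted gain."""
    candidates = [p for p in waitlist if is_compatible(organ, p)]
    if not candidates:
        return None
    return max(candidates, key=expected_benefit)

# Example: patient "A" gains 12 life-years versus 3 for "B", so "A" is chosen.
waitlist = [Patient("A", "O", 14.0, 2.0), Patient("B", "O", 9.0, 6.0)]
print(allocate(Organ("O"), waitlist).name)
```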
What they observed is that the algorithm learned to offer newly available organs based not only on who could benefit the most from them, but also on the probability that a similar organ would become available again soon (a factor known as organ scarcity). This benefits patients who need an organ with certain "rare" characteristics, when it is likely that other compatible organs without those characteristics will turn up for the remaining patients, who do not need them even though they might otherwise have priority.
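As a rough illustration of that idea, reusing the types and helpers from the previous sketch, the expected gain can be discounted by how likely a comparable organ is to arrive again soon for each candidate, so rare organs flow toward the patients who specifically need their characteristics. The scarcity estimate p_similar_soon and its use as a simple multiplicative discount are my own assumptions, not the paper's actual model.

```python
def scarcity_adjusted_score(patient: Patient, p_similar_soon: float) -> float:
    # p_similar_soon: estimated probability (0..1) that a comparable organ
    # becomes available again before the patient's condition deteriorates;
    # assumed to come from some external forecasting model.
    # If a similar organ is likely to reappear soon, the patient loses little
    # by waiting, so this particular organ is worth less to them right now.
    return expected_benefit(patient) * (1.0 - p_similar_soon)

def allocate_scarcity_aware(
    organ: Organ,
    waitlist: List[Patient],
    p_similar_soon: Callable[[Patient, Organ], float],
) -> Optional[Patient]:
    """Offer the organ to the candidate with the best scarcity-adjusted gain."""
    candidates = [p for p in waitlist if is_compatible(organ, p)]
    if not candidates:
        return None
    return max(
        candidates,
        key=lambda p: scarcity_adjusted_score(p, p_similar_soon(p, organ)),
    )
```

With this score, a patient who can only use an organ with rare characteristics (low p_similar_soon) keeps almost their full expected benefit, while a patient likely to be offered another compatible organ soon is effectively deprioritized for this one.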
The work is from two years ago, but the point still stands. In fact, OrganITE is not the only one: there are others, such as OrganSync and OrganBoard, some of them focused on a single area of medicine.
It seems to me that if a mathematical model is already in use, and this type of algorithm is an improvement on or extension of the previous one, why not leave the decision in its hands? After all, it can be argued that the outcome is better for society as a whole. Of course, this also shows the importance of explainable artificial intelligence (XAI), so that these systems can give a good account of the reasons behind their decisions.