Double coffee
March 31, 2023 - 3 min

Artificial Race

I think it is time to stop and think about the ethical and moral issues that arise when the decision maker is not a person.


Half jokingly, half seriously, a few weeks ago I used ChatGPT to write a column in this same space. It is true that I had to fix several things, improve the flow of the text, and cut down on choppy sentences and repeated words, so that it would not look like a robot wrote it (no pun intended). Still, the result was more than acceptable, and my intervention was more about giving it "my touch" than about correcting, say, the substance of the piece. In any case, this proves nothing: for other kinds of tasks its performance remains quite poor, and it still fails to weigh sources that contradict one another. For the time being.

Although this type of experimentation has continued (copycats abound), reducing the debate on Artificial Intelligence to ChatGPT may lead the public to believe that the two are synonymous and that future development is all about the problems I described in the previous paragraph. Nothing could be further from the truth. Just as blockchain technology is not only about cryptocurrencies, Artificial Intelligence is not just a text generator. And the race unleashed to be the first to launch the next big tool has raised a number of questions and debates that have not been given sufficient depth, or even the opportunity to take place. Take one example: in a fortuitous event where an accident is unavoidable and third parties could be affected, whom should an autonomous vehicle choose to save?

For this reason, a group of world leaders (and junior economists at the end of the world), brought together by the Future of Life Institute, signed an open letter asking that the great artificial intelligence experiments be put on hold for at least six months. A pause that, according to the authors, should be public and verifiable and, if it cannot be enacted quickly, should be imposed by governments. The reasons are clear: to develop a set of protocols that make AI safe beyond reasonable doubt; to work with governments and authorities to build robust institutions capable of supervising AIs and monitoring their progress and growth; to create "watermarks" that distinguish the real from the synthetic in the event of model leaks; and, last but not least, to generate funds to minimize the social and economic impact of the drastic changes that the rapid deployment of AI may bring to people's lives and jobs, even threatening democracy.

On a personal scale, I think it is time to stop and think about the ethical and moral problems that arise when the decision maker is not a person. Or rather, a person was involved once: whoever designed and programmed the AI. But that person does not necessarily represent the sentiment of society, let alone was elected (again, in a democracy) to do so. Let's go back to the question of the autonomous vehicle. Some manufacturers already have an answer: the vehicle will always try to minimize the negative impact on its occupants. It may seem obvious, but let me poke at this idea a bit: what if the occupants are drug dealers fleeing a crime and, after hitting a slippery patch, the car heads straight for a group of pregnant women returning from a prenatal yoga class? Perhaps too far-fetched, so let's try an easier one: in the car is a scientist carrying the cure for cancer, but the vehicle has lost control and will run over (barring a turn of the wheel that would send it into a ravine) the person you love most in the world. Is the decision easier now? Is there a right answer to these dilemmas? Should these decisions be made by whoever programmed the AI, with their biases and experiences, or by a body elected by the citizens? Which citizens? Is there a global ethical standard, or should it vary by culture, nation, or lifestyle? Remember, this is just one example: just as AI is not only ChatGPT, neither is it only autonomous driving.

Six months may not be enough time, but it is better than nothing. We won't be able to solve everything, it's true, but it will allow us to think about how to sort out what we already have and where we want all this to go. Because I, at least, do not want a tyranny of those who run AI companies today, even if they do not seek it, let alone a Skynet.


Nathan Pincheira

Chief Economist of Fynsa