Deadbots and the ethical limits of technology

Photo: Owen Gent

Machine learning systems [systems of machine learning, essential to the development of artificial intelligence] increasingly seep into our daily lives, challenging our moral and social values and the rules that govern them. Today, virtual assistants threaten the privacy of the home, news recommendation systems shape the way we understand the world, risk-prediction systems guide social workers in protecting children from abuse, and data-driven recruitment tools rate your chances of landing a job. Yet the ethics of machine learning remains a mystery to many.

While searching for articles on this topic for young engineers in the Ethics and ICT course at UCLouvain, Belgium, I was particularly struck by the case of Joshua Barbeau, a 33-year-old man who used a website called Project December to create a chatbot that would simulate conversation with his deceased fiancée, Jessica.


Chatbots that imitate dead people

Known as a deadbot, this type of chatbot allowed Barbeau to exchange text messages with a synthetic “Jessica”. Despite the morally controversial nature of the case, I rarely found material that went beyond the bare facts and examined it through an explicit normative lens: why would it be right or wrong, morally desirable or reprehensible, to develop a deadbot?

Before tackling these questions, some context: Project December was created by Jason Rohrer, a game developer, to let people customize chatbots with the personality they want to interact with, provided they pay for it. The project was built on the API of GPT-3, a text-generating language model from the artificial intelligence research firm OpenAI. Barbeau's case sparked a dispute between Rohrer and OpenAI, because the company's guidelines expressly prohibit the use of GPT-3 for sexual, romantic, self-harm, or bullying purposes.
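To make the mechanics concrete, here is a minimal sketch of how a persona-conditioned chatbot can be built on top of a text-completion API such as GPT-3's. This is an illustration only, not Project December's actual code: the persona text, conversation format, and model name are assumptions, and the call shown uses the legacy OpenAI Python SDK.

```python
# Minimal sketch of a persona-conditioned chatbot on a text-completion API,
# in the spirit of Project December. Illustrative only: not the actual
# implementation; persona text, model name and prompt format are assumptions.
import openai  # legacy OpenAI Python SDK (pre-1.0 Completion endpoint)

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# The "personality" is plain text prepended to every prompt: a description
# of the person to imitate, optionally with sample exchanges in their voice.
PERSONA = (
    "The following is a conversation with Jessica. "
    "Jessica is warm, witty, and loves astronomy.\n"
)

history = []  # running transcript, so the model sees the whole conversation

def reply(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = PERSONA + "\n".join(history) + "\nJessica:"
    completion = openai.Completion.create(
        model="text-davinci-003",  # one GPT-3-family completion model
        prompt=prompt,
        max_tokens=150,
        temperature=0.9,           # higher temperature = more varied replies
        stop=["User:"],            # stop before the model writes the user's turn
    )
    answer = completion.choices[0].text.strip()
    history.append(f"Jessica: {answer}")
    return answer

print(reply("Hi, it's me. How have you been?"))
```

Even this sketch makes the crux of the dispute visible: the imitated “person” is nothing more than text prepended to a general-purpose model's prompt, so the provider's usage guidelines are effectively the main guardrail on what such a bot can be made to say.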

Calling OpenAI's position hyper-moralistic and arguing that people like Barbeau were “consenting adults”, Rohrer shut down the GPT-3 version of Project December. While we may all have intuitions about whether developing a machine learning deadbot is right or wrong, spelling out its implications is no easy task. That is why it is important to address the ethical questions raised by the case, step by step.

Is Barbeau's consent enough to develop Jessica's deadbot?

Since Jessica was a real (albeit dead) person, Barbeau's consent to create a deadbot imitating her seems insufficient to me. Even when they die, people are not mere things that others may do with as they please. This is why our societies consider it wrong to desecrate or disrespect the memory of the dead. In other words, we have certain moral obligations toward the dead, insofar as death does not necessarily imply that people cease to exist in a morally relevant way.

Likewise, the debate is open on whether we should protect the fundamental rights of the dead (such as privacy and personal data). Developing a deadbot that reproduces someone's personality requires large amounts of personal information, such as social network data (see what Microsoft or Eternime have proposed), which has been shown to reveal highly sensitive traits about people.

In “Be Right Back” (2013), an episode of the popular British science fiction series Black Mirror, Martha loses her husband but discovers she can virtually “recreate” him by uploading all of his online communications and social media profiles.

If we agree that it is unethical to use people's data without their consent while they are alive, why should it become ethical to do so after their death? In this sense, when developing a deadbot, it seems reasonable to request the consent of the person whose personality is mirrored – in this case, Jessica.

When the imitated person consents

The second question is thus: would Jessica's consent be enough to consider the creation of her deadbot ethical? What if it degraded her memory?

The limits of consent are, indeed, a controversial issue. Take as a paradigmatic example the “Rotenburg cannibal”, who was sentenced to life imprisonment even though his victim had agreed to be eaten. In this regard, it has been argued that it is unethical to consent to things that can be harmful to ourselves, whether physically (selling one's own vital organs) or abstractly (alienating one's own rights).

In what specific terms something can harm the dead is a particularly complex issue that I will not analyze in full. It is worth noting, however, that even if the dead cannot be harmed or offended in the same way as the living, this does not mean they are invulnerable to bad actions, nor that such actions are ethical. The dead can suffer damage to their honor, reputation, or dignity (for example, posthumous smear campaigns), and disrespect toward the dead also harms their next of kin. Moreover, a lack of respect for the dead leads us toward a society that is more unjust and less respectful of people's dignity in general.

Finally, given the flexibility of machine learning systems, there is a risk that the consent given by the imitated person (while alive) means little more than a blank check on the deadbot's potential paths.

With all this in mind, it seems reasonable to conclude that if the development or use of a deadbot does not correspond to what the imitated person agreed to, their consent should be considered void. Moreover, if the deadbot clearly and deliberately violates their dignity, even their consent should not be enough to consider it ethical.

Who is responsible?

The third question is whether AI systems should aspire to simulate any kind of human behavior (regardless of whether that is possible).

This has been a long-standing concern in the field of artificial intelligence, and it is closely related to the dispute between Rohrer and OpenAI. Should we develop artificial systems capable of, say, caring for others or making political decisions? There seems to be something in these abilities that makes humans different from other animals and from machines. Hence, it is important to note that instrumentalizing AI for techno-solutionist ends, such as replacing loved ones, may lead to a devaluation of what defines us as human beings.

The fourth ethical question is who bears responsibility for the outcomes of a deadbot, especially in the case of harmful effects. Imagine that Jessica's deadbot autonomously learned to behave in a way that degraded her memory or irreversibly damaged Barbeau's mental health. Who would take responsibility? AI experts answer this slippery question through two main approaches: first, responsibility falls on those involved in designing and developing the system, insofar as they do so according to their particular interests and worldviews; second, machine learning systems are context-dependent, so the moral responsibility for their outputs should be distributed among all the agents interacting with them.

I place myself closer to the first position. In this case, since there was an explicit co-creation of the deadbot involving OpenAI, Jason Rohrer, and Joshua Barbeau, I consider it logical to analyze each party's level of responsibility.

First, it would be hard to hold OpenAI responsible after it explicitly prohibited the use of its system for sexual, romantic, self-harm, or bullying purposes. It seems reasonable to attribute a significant level of moral responsibility to Rohrer, because he: (a) explicitly designed the system that made it possible to create the deadbot; (b) did so without anticipating measures to avoid potentially negative outcomes; (c) was aware that he was failing to comply with OpenAI's guidelines; and (d) profited from it. And because Barbeau customized the deadbot based on particular features of Jessica, it seems legitimate to hold him co-responsible in the event that it degraded her memory.

Ethical, under certain conditions

So, returning to our first general question of whether it is ethical to develop a machine learning deadbot, we could give an affirmative answer on the condition that:

  1. Both the person imitated and the person customizing and interacting with the deadbot have given their free consent to as detailed a description as possible of the system's design, development, and uses;
  2. Developments and uses that do not respect the imitated person's consent, or that go against their dignity, are prohibited;
  3. The people involved in its development, and those who profit from it, take responsibility for its potential negative outcomes – both retrospectively, to account for events that have happened, and prospectively, to actively prevent them from happening in the future.

This case illustrates why the ethics of machine learning matters. It also illustrates why it is essential to open a public debate that can better inform citizens and help us develop policy measures to make AI systems more open, socially fair, and compliant with fundamental rights.

