The unforeseen acceptance of deepfakes
admin
2022-01-17
Publication year: 2022
Language: English
Country: France
Field: Earth Science; Resources & Environment
Full text (English)
Rapid improvements in deepfake technology, which makes it possible to modify a person's appearance or voice in real time, call for an ethical review at this still early stage of its use. Researchers working in the field of cognitive science shed some light on the public’s perception of this phenomenon.

Video conferencing, live streaming, and platforms such as Zoom and TikTok have grown in popularity. Algorithms embedded in these services make it possible for users to create deepfakes, including in real time, that change their appearance and voice. In the very near future, it could become difficult to tell whether the person we are chatting with online is really who we see on the screen, or whether they are using a voice or face changer.
 
Despite the media frenzy surrounding deepfakes, most notably after a particularly realistic video of Barack Obama calling Donald Trump an “idiot”, no study had yet measured their ethical impact on the public at large. The director and comedian Jordan Peele used artificial intelligence software to match Obama’s lip movements in an old video to a new audio dub, creating the clip to warn of the technology's potential for mass manipulation. But could other uses of deepfakes, in medicine or remote sales for instance, be deemed acceptable under certain conditions?

Famous deepfake of a video of Barack Obama, created by Jordan Peele (right) in 2018 to emphasise the dangers of this technology in terms of mass manipulation.

To find out, we conducted a survey whose findings were recently published in Philosophical Transactions of the Royal Society B. The study presented different hypothetical applications of expressive voice transformation to a sample of some three hundred predominantly young, urban French students, potential future users of these technologies.

“Ethically acceptable” uses

These scenarios, styled to resemble Black Mirror, a dystopian science fiction television series, included, among others, a politician trying to sound more convincing, or a call centre employee able to modify the voices of unhappy customers in order to ease the upset caused by aggressive behaviour. Situations where the technology was used in a therapeutic context were also presented: a patient suffering from depression transforming their voice to sound happier when speaking to family, or a person experiencing stress listening to a calm version of their own voice. These situations were inspired by previous research in which we studied manners of speaking perceived as honest, as well as by the work of the start-up Alta Voce.1 What started out as simple laboratory experiments has taken on a much more concrete dimension since the health crisis and the development of video conferencing tools.
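The article does not detail how such transformations are implemented. Purely by way of illustration, one of the simplest acoustic cues associated with a "happier"-sounding voice is a slight upward pitch shift, which can be approximated offline in a few lines of Python. The librosa library, the file names, and the one-semitone shift below are assumptions chosen for this sketch, not the tools actually used by the researchers or by Alta Voce.

    # Toy sketch: raise the pitch of a speech recording by one semitone,
    # one of the cues listeners tend to associate with positive affect.
    # File names are hypothetical; requires the librosa and soundfile packages.
    import librosa
    import soundfile as sf

    # Load a mono voice recording at its native sampling rate.
    voice, sr = librosa.load("voice_sample.wav", sr=None, mono=True)

    # Shift the pitch up by one semitone while preserving duration.
    happier = librosa.effects.pitch_shift(voice, sr=sr, n_steps=1.0)

    sf.write("voice_sample_happier.wav", happier, sr)

Real-time expressive transformations of the kind described in the study combine several cues, such as pitch inflection and spectral filtering, and are considerably more involved than this offline sketch.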
 
The results showed that participants found most of the scenarios ethical, including, most surprisingly, those in which listeners were not previously informed of the voice modification. As expected, this applied to therapeutic contexts, but also to uses that enhance human capacity despite bordering on transhumanism. In the end, the only situation deemed problematic was one in which users were unaware that their own voice was being algorithmically modified.

The researchers’ platform changes the voices and faces of participants without their knowledge, and in particular their smile and tone of voice, in order to study their impact on social interactions.

Provided they were told of such modification beforehand, our participants showed no difference in acceptance whether the speech transformations applied to themselves or to other people they listened to. Yet with technologies such as self-driving vehicles, the subjects of our study would prefer not to buy a car that would spare a pedestrian rather than the driver in the event of an accident, despite agreeing that this would be the most ethically acceptable option. Our survey also shows that the more familiar people are with these new uses of emerging technologies, through books or science fiction programmes for instance, the more morally justifiable they find them. This creates the conditions for their widespread adoption, despite the "techlash" (a contraction of "technology" and "backlash") observed in recent years, including growing animosity towards big tech.

A clear risk of manipulation

Use of these technologies can be perfectly harmless, if not virtuous. In a therapeutic context, we are developing voice changers aimed at PTSD sufferers, or at studying how patients in a coma react to voices made to sound “happy”.2 These techniques, however, also present obvious risks, such as identity theft on dating apps or on social media, as in the case of the young motorcycle influencer known as @azusagakuyuki, who turned out to be a 50-year-old man. Wider societal threats include increased discrimination: companies could, for example, use masking technology to neutralise their customer service employees’ foreign accents.
 
The pace of these innovations therefore requires ongoing and difficult adaptation on the part of users, decision-makers, and experts. It is essential to take ethical considerations about the potential applications of such breakthroughs into account from the very early stages of our research. As we are doing with Alta Voce, we must provide academic support during their industrial development. At a time when public debate seems to sway between wholehearted acceptance of widespread technology and generalised distrust, the human and social sciences are more than ever necessary for studying our relationship to new technologies and for developing a methodology that involves their users.

The points of view, opinions and analyses expressed in this article are the sole responsibility of the author and do not in any way constitute a statement of position by the CNRS.

Footnotes
  • 1. From our ERC Proof of Concept Activate project.
  • 2. In collaboration with Professors Tarek Sharshar and Martine Gavaret of the Psychiatry and Neuroscience University Hospital (GHU) in Paris.
Source platform: CNRS News
Document type: News
Item identifier: http://119.78.100.173/C666/handle/2XK7JSWQ/344579
Topics: Earth Science; Resources & Environmental Science
Recommended citation (GB/T 7714):
admin. The unforeseen acceptance of deepfakes. 2022.