We need more bias in artificial intelligence
admin
2021-04-21
Publication year: 2021
Language: English
Country/Region: Europe
Field: Resources and Environment
Full text (English)

This opinion piece is forthcoming in Il Sole 24 Ore.


The Müller-Lyer optical illusion consists of two lines of equal length that differ only in the direction of the arrowheads at either end. Yet, to most observers, the line with arrowheads pointing outwards looks longer than the other. If you grew up in and among buildings with straight walls and 90-degree angles, you have learned to perceive lines according to geometric patterns. Your view of the Müller-Lyer lines is biased.

Artificial intelligence developers sometimes fall into similar traps. They build life-changing applications into which they project biases. Algorithms have led judges to be harsher on Black offenders when assessing the likelihood that defendants will reoffend. Machine-learning applications favour male over female job applicants (Amazon scrapped its recruiting tool after the system learned to attach a lower score to applications that mentioned the word ‘woman’). Automatic translation programs replicate gender stereotypes. For example, when translating from a language without gendered pronouns (such as Finnish) into English, the algorithm could suggest using ‘he’ when the action is ‘to work’ and ‘she’ when the action is ‘to wash the laundry’.

We should be very concerned about bias embedded in artificial intelligence. Efforts by public authorities to curb it, such as the European Commission’s proposed Artificial Intelligence Act, are welcome.

Often, however, biases are not only embedded in the design of the algorithm. They are also external to it, originating in societal biases. Amazon’s recruiting tool inherited its bias from a dataset covering a decade during which most job applications came from men (a symptom of the strong gender-power asymmetry in the technology sector). Similarly, automated translation applications learn gender stereotypes from the thousands of books used to train them: discrimination against women and minorities is well reflected in literature.

No matter how objective we try to be, the mere decision to adopt artificial intelligence solutions has profound implications. That decision is inherently subjective and thus comes with political responsibility, which goes beyond simply regulating the use of artificial intelligence. Algorithms learn to be as discriminatory as the society they observe. They then suggest decisions that are themselves discriminatory, and thus contribute to exacerbating discrimination within society. Policy can break this vicious circle.
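
The vicious circle can be made concrete with a toy simulation; the minimal sketch below is purely illustrative, and every quantity in it (the 60/40 starting imbalance, the scoring rule, the pool sizes) is invented. A screening model learns a group prior from historical hires, that prior tilts each new round of selections, and the imbalance in the training data grows generation after generation.

```python
# Toy simulation of the feedback loop: a selector trained on skewed history
# re-amplifies the skew. All numbers are invented for illustration.

import random

random.seed(0)

history = ["man"] * 60 + ["woman"] * 40  # initial societal imbalance

for generation in range(5):
    p_man = history.count("man") / len(history)
    print(f"generation {generation}: share of men in training data = {p_man:.2f}")

    # A balanced pool of 200 applicants, each with an underlying quality score.
    applicants = [(random.choice(["man", "woman"]), random.random())
                  for _ in range(200)]

    # The "learned" model adds a group prior to each quality score,
    # mirroring the imbalance it observed in the training data.
    scored = [(quality + (p_man if group == "man" else 1 - p_man), group)
              for group, quality in applicants]

    # Hire the top 50; the prior pushes men over the bar more often, so the
    # next generation's training data is more skewed than this one's.
    hired = [group for _, group in sorted(scored, reverse=True)[:50]]
    history.extend(hired)
```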

If public policy aims to improve decision-making and build a more inclusive society, it should deal explicitly with the question of the role of artificial intelligence in achieving that end goal. If artificial intelligence amplifies society’s biases, policy may well need to intervene, either prohibiting its use or embedding counterbalancing biases. For example, algorithms that automatically rank subjective content in online chats could be compelled to attach lower weights to discriminatory comments. This would, in effect, distort the sentiments of a community: perhaps in the collective image of a community populated by men, women are not associated with intellectual work. But the algorithm would then yield a representation of the world closer to the one we would like it to be. In medical research, desirable biases could be used to correct gender imbalances. Coronary heart disease is a leading cause of death for women, but men are overrepresented in clinical trials: artificial intelligence could favour women’s enrolment over that of men.
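
To make the down-weighting idea concrete, here is a minimal sketch, assuming a hypothetical Comment type, an invented DISCRIMINATION_PENALTY factor and a keyword stub in place of a real classifier. It illustrates the mechanism only, not any actual platform’s ranking system.

```python
# Purely illustrative sketch of a "counterbalancing bias" in content ranking.
# The Comment class, the penalty value and the keyword stub are hypothetical;
# a real system would use a trained classifier, not a word list.

from dataclasses import dataclass

DISCRIMINATION_PENALTY = 0.2  # hypothetical down-weighting factor


@dataclass
class Comment:
    text: str
    engagement: float  # e.g. likes and replies, assumed already computed


def is_discriminatory(comment: Comment) -> bool:
    """Stub classifier: flags comments containing placeholder terms."""
    flagged_terms = ("placeholder slur", "placeholder stereotype")
    return any(term in comment.text.lower() for term in flagged_terms)


def rank(comments: list[Comment]) -> list[Comment]:
    """Order comments by engagement, down-weighting flagged ones."""
    def adjusted(c: Comment) -> float:
        weight = DISCRIMINATION_PENALTY if is_discriminatory(c) else 1.0
        return c.engagement * weight
    return sorted(comments, key=adjusted, reverse=True)
```

The policy lever in this sketch is the single penalty constant: choosing its value, and who gets to choose it, is precisely the political responsibility the text describes.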

This does not mean that politicians should systematically interfere with technology markets, micromanaging technology development and adoption. But an overall political vision is needed to set the direction of travel, if the aim is to live in a better world.

We often already call for the introduction of desirable biases through affirmative action. Gender quotas address discrimination against women in the selection for positions of power. Quotas, however, do not simply correct bias. They are also a political statement: gender equality is a tool to change the system structurally. Male-driven decision-making in companies or public institutions could perpetuate itself indefinitely, with those in charge continuing to select those who match their male-oriented vision of the world. Imposing quotas is tantamount to introducing a bias against that; it means rejecting one way of doing things and instead supporting a different vision that aims to correct historic marginalisation.

Similarly, the discussion on how to improve the use of artificial intelligence in Europe should not be separated from its structural implications.

In the 1960s, anthropologists realised that members of Zulu tribes in South Africa did not fall for the Müller-Lyer illusion. Unlike their peers from Western societies, they saw immediately that the lines were of the same length. Their interpretation of the information provided was different. Zulus live in rounded huts, in an environment where the sharp angles of European buildings are absent; their geometric vision is different. Of course, a Zulu might find herself less at ease estimating distances in a European city.

Ultimately, what makes one vision more desirable than another is not its neutrality, but whether it better serves one’s goals in the context in which those goals are pursued.

This was produced within the project ‘Future of Work and Inclusive Growth in Europe’, with the financial support of the Mastercard Center for Inclusive Growth.


Republishing and referencing

Bruegel considers itself a public good and takes no institutional standpoint.

Due to copyright agreements, we ask that you kindly email requests to republish opinions that have appeared in print to [email protected].

URL: View original
Source platform: Bruegel
Document type: News
Item identifier: http://119.78.100.173/C666/handle/2XK7JSWQ/323938
Topic: Resource and Environmental Science