
Artificial Inclusion is the problem, not Artificial Intelligence

Text of my keynote at the VSNU Digital Society Conference 2019. Based on this VN essay.


Artificial Intelligence (AI) is changing our society at an ever-increasing pace. Data has become one of the most valuable resources of our society, and AI technologies reveal patterns in data that offer new insights and new creative solutions. Smartphones, smart cars, smart homes and smart cities determine what we do and how we interact with each other. And AI is increasingly helping governments, hospitals, courts and companies to make decisions for and about us. How will AI further influence our society?

Many experts predict that AI will change society for the better. They point, for example, to AI's enormous potential for the global economy, and to how AI is already helping farmers improve crops, doctors diagnose diseases faster, and the elderly combat loneliness. However, other experts predict that AI will lead to massive unemployment, growing economic disparities and social unrest, including populist uprisings. There is even fear of losing human autonomy, freedom of choice, and free will.

Regardless of their utopian or dystopian vision, most AI experts agree on one thing. The answer to the question of how AI will affect our society depends very much on how well AI will be able to embrace, anchor and enrich human values and diversity: values such as equality, justice, and dignity. The call by experts around the world to develop ethical and inclusive AI deserves our full attention and collective action.

Take the growing concern that AI reinforces exclusion and discrimination in society. Several examples have recently attracted a lot of media attention: Amazon's recruitment algorithm that discriminated against women, the COMPAS system in the U.S. that assigns black defendants a higher risk of recidivism than white defendants with the same criminal history, and the medical risk-profiling system that gives millions of black patients, who are just as ill as white patients, less access to healthcare. An example closer to home is SyRI, the Dutch AI system for detecting social security fraud, which mainly picks out poor people and people with a migration background because they happen to live in certain neighbourhoods.

All these examples concern algorithms that recognise patterns in large amounts of data and learn from them without human supervision. If these data contain historical or contemporary social biases and inequalities, these will be adopted and reproduced by the algorithms. For example, if algorithms are trained on historical police data, conscious or unconscious biases in arrests carry over into current predictive policing systems. And if, in the past, black patients had to be sicker than white patients to be admitted to health care, algorithms trained on that historical health data will also discriminate against black patients.
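To make this mechanism concrete, here is a minimal, hypothetical sketch in Python with scikit-learn (not part of the original keynote; the variable names, numbers and the "postcode" proxy are purely illustrative). A model is trained on historical admission decisions in which one group had to be sicker to be admitted; even though the group label itself is never given to the model, a feature that merely correlates with group membership is enough for the model to reproduce the old inequality.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with the same underlying need for care (synthetic data).
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
severity = rng.normal(0.0, 1.0, n)         # identical distribution for both groups

# A proxy feature that correlates with group membership (e.g. postcode).
postcode = group + rng.normal(0.0, 0.3, n)

# Historical decisions: group B patients had to be sicker to be admitted.
admitted = (severity > 0.8 * group).astype(int)

# Train on the historical decisions using only severity and the proxy,
# never the group label itself.
X = np.column_stack([severity, postcode])
model = LogisticRegression().fit(X, admitted)

# Two equally ill patients, differing only in the proxy feature.
patient_a = [[0.4, 0.0]]   # severity 0.4, postcode typical of group A
patient_b = [[0.4, 1.0]]   # same severity, postcode typical of group B
print(model.predict_proba(patient_a)[0, 1])  # higher admission probability
print(model.predict_proba(patient_b)[0, 1])  # lower, despite equal severity

The point of the sketch is that removing the sensitive attribute from the data does not remove the bias: the historical decisions and a correlated proxy are enough to reproduce it.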

National and international institutions and companies respond to the emergence of biased algorithms with measures aimed at controlling and regulating them, as if they were aliens that need to be tamed. For example, the Ethics Guidelines published by the European Commission in 2019 note that collected data may contain social prejudices, inaccuracies, errors and mistakes, which must be addressed before an algorithm is trained on these data. The Dutch parliament even wants to go a step further and install an "algorithm watchdog" that screens algorithms in the public domain for bias and inequality.

However well intended, these measures are not sufficient for creating AI with human values and respect for diversity. For one, it is sometimes impossible to ensure that the datasets on which algorithms are trained are truthful and unbiased, or to develop algorithms that detect and eliminate harmful biases and inequalities. More importantly, the emphasis on controlling and regulating data and algorithms, and thus on technical problems, diverts attention from the root causes of biased algorithms: historical and contemporary inequalities in society and the lack of diversity in private and public organizations.


According to the AI Now Institute in New York, 80% of university professors in AI worldwide are men. At Facebook, 85% of AI experts are men; at Google, 90%. Ethnic diversity is even worse: only 2.5% of Google's employees are black, while Facebook and Microsoft are each at 4%. The state of AI diversity in the Netherlands is not clear: there is little data available about it. But there is no reason to assume that the diversity of AI experts in Dutch organisations differs much from these statistics.

A lack of diversity is problematic for any organisation that operates in a diverse society. For organisations dealing with AI, however, this lack is far more consequential because of the boundless reach and rapid impact of AI technologies. Billions of people use Facebook, Google and Twitter, most of whom do not resemble the white male developers of these technologies. The risk that large groups of people face discrimination, exclusion or unfair treatment is real and widespread. So is the algorithmic rise of white nationalism, Holocaust denial, and conspiracy theories such as the great replacement. These phenomena will adversely affect not only certain population groups, but entire societies.

The lack of diversity of people in AI and the problem of biased algorithms have largely been dealt with separately. They are, however, two sides of the same coin: a lack of diversity in the workplace is interwoven with biased algorithms, and one leads to the other. But it also means that a workplace of people with different backgrounds and expertise makes it possible to tackle or prevent biased algorithms.

Government agencies, universities and technology companies recognize the problem of biased algorithms and the importance of diversity in AI. They continuously emphasize the importance of diversity of data, diversity of people and perspectives, diversity of processes and platforms. And yet diversity in AI does not yet seem to work out, as in many other fields. It is as if all the emphasis on and attention to diversity is there to create an illusion of involvement, commitment and inclusion.

For example, Google announced that it would implement seven principles for ethical AI through internal education and the responsible development of AI. It also set up an external advisory board of diverse members to guarantee the responsible development of AI. Only a few weeks after its establishment, the board was abolished. And Google is still confronted with protests and walkouts over the way the company deals with sexual discrimination and harassment. Women do not feel safe in the male-dominated company.

Another example is Stanford University's renowned Human-Centered AI Institute, which was launched at the beginning of this year with the aim of having a diverse group of people research the impact and potential of AI. The President of Stanford said at the inauguration: "Now is our chance to shape that future by bringing humanists and social scientists together with people who develop artificial intelligence”. Shortly after its inception, the institute became embroiled in diversity concerns: of its 120 staff members, the vast majority are male, and none are black.

Organizations often state that they are unable to find diverse talent that meets preset job requirements. Time and again, however, the problem turns out not to be the absence of diverse talent, but the inability to create an environment in which diverse talent can be recognised, attracted and retained. After their appointment, talented people with a migration background often say that they are treated as tokens of diversity and that their talents remain underused, undervalued and underpaid. This artificial inclusion is the real problem that needs attention and action if we are to develop ethical and inclusive AI systems.


True ethical and inclusive AI understands, values and builds on differences between people in terms of socio-cultural values, knowledge, skills and lifestyle. Tech companies such as Facebook, Google and Twitter cannot be expected to work on ethical and inclusive AI, because they put their own business model first. Nor can ethical and inclusive AI be expected through control and regulation alone, as long as public organisations, such as governments, municipalities and universities, and their elected representatives, laws and regulations are not sufficiently diverse and inclusive to start with.

If we want to make AI ethical and inclusive, at least two things are needed. First, AI should be taught to citizens from all walks of life, including and perhaps especially those at the margins of society. This increases and enriches the talent pool. Initiatives such as Codam Coding College in Amsterdam understand this and take bold steps towards free and fundamentally new forms of peer-to-peer education. Second, public AI should be created by diverse and interdisciplinary AI teams in collaboration with citizens, communities and civil society. This enables grassroots AI technologies that serve citizens and communities better and more fairly. Initiatives such as my own Civic AI Lab are trying to pursue this.
