Canada

Organization

University of Toronto

Feminist or not? Canada’s challenges as it races to become a leader in artificial intelligence

Introduction

The Government of Canada, led by Prime Minister Justin Trudeau, declared itself a “feminist government” in 2015,[1] pursuing policies and funding initiatives intended to promote gender equality and protect human rights, such as the Feminist International Assistance Policy[2] and a CAD 160 million investment in women’s and indigenous organisations.[3] Canada also committed itself to being a key player in artificial intelligence (AI), publishing the world’s first national AI strategy in 2017,[4] backed by CAD 125 million in federal funding.[5]

The government is clearly committed to the development of the AI field, and the use of AI in the public sector is therefore expected to increase. Canada has been experimenting with automated decision making in its immigration system since 2014, while the Toronto Police Service has been using facial recognition technology since 2018. Yet the negative implications of AI-backed systems and tools are only beginning to surface, with research showing that the use of AI in government decision making can violate human rights and entrench social biases, especially against women.[6] This report discusses the challenges that must be confronted to ensure that the implementation of AI, especially in immigration and policing, does not contradict Canada’s commitment to human rights and gender equality.

Context

Following the civil rights movement in the 1960s, the “rights culture” in Canada “evolved from simply prohibiting overt acts of discrimination to ensuring substantive equality.”[7] This led previous Canadian governments to implement a range of policies and initiatives, including the establishment of Status of Women Canada[8] (1976) and the Canadian Human Rights Commission[9] (1977), as well as the passage of the Canadian Human Rights Act[10] (1977) and the Canadian Charter of Rights and Freedoms[11] (1982). Canada has also ratified international human rights treaties, including the Convention on the Elimination of All Forms of Discrimination against Women[12] (CEDAW) and the International Covenant on Civil and Political Rights[13] (ICCPR), and has endorsed the Universal Declaration of Human Rights[14] (UDHR). However, the government has failed to protect the rights of historically marginalised groups, particularly indigenous women, and struggles to eliminate systemic inequality and discrimination against minority groups.[15]

The government has actively supported the development and deployment of AI, despite the risk that these technologies may exacerbate pre-existing disparities and lead to rights violations. For example, it invested CAD 950 million in the Innovation Superclusters Initiative in early 2018,[16] which included the SCALE.AI Supercluster. It also launched a fast-track visa programme for tech talent in 2017,[17] and established the Advisory Council on Artificial Intelligence, composed of researchers, academics and business executives, to build on Canada’s AI strengths and identify opportunities.[18] Canada is also engaged in AI efforts globally. It is a party to the G7’s Charlevoix Common Vision for the Future of Artificial Intelligence, which highlights the importance of supporting gender equality and preventing human rights abuses by involving “women, underrepresented populations and marginalized individuals” at all stages of AI applications.[19] While these steps, along with investments from companies such as Uber, Google and Microsoft, signal Canada’s growing leadership in AI, the government has been criticised for not doing enough to address the field’s gender diversity problem.[20]

Women account for only 25% of technology jobs in Canada and about 28% of scientific occupations, despite representing half of the workforce.[21] Women also remain poorly represented on the executive teams and boards of Canadian technology companies.[22] Canada’s Minister of Science Kirsty Duncan has said that getting more women into high-ranking scientific positions is a priority,[23] but more work needs to be done to ensure gender diversity in science and technology, since who designs and builds these systems has a tremendous effect on how they impact human rights and equality.

Human rights challenges in the age of AI: A look at immigration and policing

The lack of diversity in science and technology is exceptionally pronounced in the AI field, which is predominantly white and male. For instance, more than 80% of AI professors are men.[24] The gender pay gap compounds the diversity problem: women working in the Canadian tech industry with a bachelor’s degree or higher typically earn nearly CAD 20,000 less a year than their male counterparts, and the gap is wider for visible minorities. Black tech workers in particular not only have the lowest participation rates in tech occupations, but also experience a significant pay gap relative to white and non-indigenous tech workers in Canada.[25]

A study by the AI Now Institute found that this lack of diversity results in AI systems and tools with built-in biases and power imbalances. This finding is consistent with the feminist critique of technology, which posits that existing social relations and power dynamics are manifested in and perpetuated by technology.[26] This is particularly true of gender relations, which are “materialised in technology, and masculinity and femininity in turn acquire their meaning and character through their enrolment and embeddedness in working machines.”[27] An example is AI-powered virtual assistants, such as Apple’s Siri, which are predominantly modelled on feminine likenesses and stereotypically feminine characteristics.[28] These assistants have been criticised as products of sexism because they replicate stereotypes of what is considered “women’s work” (e.g. service-oriented roles) and expected feminine behaviour (e.g. servile, helpful, submissive).[29]

Even if diversifying the technology field leads us to build AI systems and tools that are more neutral, the data fed into them might still contain bias. For example, research shows that facial recognition algorithms trained on mug shot photos exhibited racial bias, deriving a spurious relationship between skin colour and the rate of incarceration.[30] These algorithms are also more likely to fail at correctly identifying dark-skinned women than light-skinned women.[31] Given these indications that AI systems may replicate patterns of racial and gender bias, and solidify and/or justify historical inequalities, the deployment of such tools is cause for concern, particularly for any government, like Canada’s, that has committed itself to protecting human rights and achieving gender equality.
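
To make this concrete, audits of the kind cited above disaggregate a system’s error rate by demographic group rather than reporting a single overall figure. The following is a minimal illustrative sketch of that evaluation step; the group labels, toy data and resulting numbers are invented for this example and are not drawn from any system discussed in this report.

```python
# Minimal sketch: disaggregating a classifier's errors by demographic group.
# All data below is invented for illustration; real audits (e.g. of facial
# recognition systems) use large benchmarks labelled by skin type and gender.

from collections import defaultdict

# (group, true_label, predicted_label) for a hypothetical face matcher
predictions = [
    ("light-skinned men",   1, 1), ("light-skinned men",   0, 0),
    ("light-skinned women", 1, 1), ("light-skinned women", 0, 0),
    ("dark-skinned men",    1, 1), ("dark-skinned men",    0, 0),
    ("dark-skinned women",  1, 0), ("dark-skinned women",  0, 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, truth, predicted in predictions:
    counts[group][0] += int(truth != predicted)
    counts[group][1] += 1

# The overall error rate here is 12.5%, which hides the fact that all of the
# system's mistakes fall on a single group.
for group, (errors, total) in counts.items():
    print(f"{group}: error rate {errors / total:.0%}")
```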

AI’s deployment in Canada is well underway. As noted above, Canada has been experimenting with automated decision making in its immigration mechanisms since 2014.[32] More recently, the federal government has been developing a system of “predictive analytics” to automate certain activities conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. Immigration, Refugees and Citizenship Canada (IRCC) confirmed that it launched two pilot projects in 2018 that use algorithms to identify routine Temporary Resident Visa applications from China and India for faster processing. The Canada Border Services Agency (CBSA) has also implemented automated passport processing that relies on facial recognition software in lieu of initial screenings performed by a CBSA officer.[33] In May 2019, an investigation revealed that the Toronto Police Service (TPS) has been using facial recognition technology that compares images of potential suspects captured on public or private cameras to approximately 1.5 million mug shots in TPS’s internal database. Privacy advocates have criticised TPS’s use of facial recognition technology over concerns about potential discrimination, as well as infringements of privacy and civil liberties.[34]

Canada is not alone in using AI technologies for migration management. Evidence shows that national governments and intergovernmental organisations (IGOs) are turning to AI to manage complex migration crises. For example, big data is being used by the United Nations High Commissioner for Refugees (UNHCR) to predict population movement in the Mediterranean,[35] while iris scanning is being used in Jordanian refugee camps for identification.[36] At the US-Mexico border, a tweak to Immigration and Customs Enforcement’s (ICE) “risk assessment” software has led to a stark increase in detentions, indicating that immigration decision making is also relying increasingly on technology.[37] A 2018 report by the University of Toronto’s Citizen Lab and International Human Rights Program shows that despite the risks of using AI for decision making in immigration and policing, these experiments often proceed with little oversight or accountability and could therefore lead to human rights violations.[38]

Research from as early as 2013 shows that algorithms can produce discriminatory and biased results. Google searches, for example, have yielded discriminatory ads targeted on the basis of racially associated personal names and shown lower-paying job opportunities to women,[39] while machine-learning models trained on photos have perpetuated stereotypes based on appearance (e.g. associating “woman” with “kitchen”).[40] Claims have also been made that facial-detection technology can discern sexual orientation.[41] If these biases are embedded in emerging technologies used experimentally in immigration, they could have far-reaching impacts. At airports in Hungary, Latvia and Greece, a pilot project called iBorderCtrl has introduced an AI-powered lie detector at border checkpoints: passengers’ faces are monitored for signs of lying as they answer a series of increasingly complicated questions, and if the system becomes “sceptical”, the person is selected for further screening by a human officer.[42] Canada has tested a similar lie detector, which relies on biometric markers such as eye movements or changes in voice, posture and facial gestures as indicators of untruthfulness.[43] The deployment of such technology raises a number of important questions: When a refugee claimant interacts with these systems, can an automated decision-making system account for trauma and its effect on memory, or for cultural, age and gender differences in communication? How would a person challenge a decision made by an AI-powered lie detector? These questions matter because negative inferences generated by AI will influence the final decision made by a human official.
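
The “woman”/“kitchen” association mentioned above arises because models learn word or image representations from large corpora and absorb the statistical regularities of that data, stereotypes included. The sketch below illustrates the measurement idea with tiny hand-made vectors; the numbers are invented for this example, whereas real studies probe embeddings trained on web-scale text or photo collections.

```python
# Minimal sketch: how learned vector representations can encode stereotypes.
# The 3-dimensional vectors below are invented for illustration; real studies
# measure such associations in embeddings trained on web-scale data.

import math

def cosine(u, v):
    """Cosine similarity: near 1.0 means strongly associated."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

vectors = {
    "woman":   [0.9, 0.1, 0.3],
    "man":     [0.1, 0.9, 0.3],
    "kitchen": [0.8, 0.2, 0.2],  # skewed toward "woman" by biased training data
    "office":  [0.2, 0.8, 0.2],  # skewed toward "man" in the same way
}

# A model using these vectors would "read" kitchens as feminine and offices
# as masculine, reproducing the stereotype present in its training data.
for word in ("kitchen", "office"):
    print(f"{word}: ~woman {cosine(vectors[word], vectors['woman']):.2f}, "
          f"~man {cosine(vectors[word], vectors['man']):.2f}")
```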

Migrants who identify as women or gender non-binary also confront challenges that AI technologies do not yet adequately account for. For example, women and children generally have a different expectation of privacy than adult men, given the risks to their personal safety if their data is shared with repressive governments. The use of AI-backed technologies in immigration thus raises a number of concerns: infringement of domestically and internationally protected human rights, including freedom of movement, freedom from discrimination, and the individual rights to privacy, life, liberty and security;[44] an inappropriate reliance on the private sector to develop and deploy these technologies; an unequal distribution of technological innovation that exacerbates marginalised communities’ lack of access to the justice system; and an overall lack of transparency and oversight mechanisms. Finally, affected migrant communities, such as refugees and asylum seekers, are not sufficiently included in conversations around the development and adoption of AI technology.[45]

Women in particular have reason to fear the spread of AI technology, because it can be weaponised against them. “Smart home” technologies that automate various facets of household management (e.g. appliances, temperature, home security), for example, have enabled gender-based violence (GBV) and domestic abuse. In more than 30 interviews with The New York Times, domestic abuse survivors described how abusers remotely controlled everyday objects in their homes, sometimes just to watch or listen, and other times to demonstrate power over them.[46] As these technologies become more affordable and internet connectivity becomes more ubiquitous, technology-facilitated violence and abuse is likely to continue. In Canada, immigrant women have claimed refugee status or asylum due to GBV, while indigenous women, many of whom live in large urban population centres,[47] experience violence at a rate six times higher than non-indigenous women.[48]

Conclusion

Technology is neither inherently neutral nor democratic; it is a product of existing social relations and power dynamics, and it can therefore perpetuate them.[49] When technological experiments are introduced into the provision of public services, border management or the criminal justice system, they can exacerbate social divisions, strengthen unequal power relations and result in far-reaching rights infringements. These concerns are particularly acute when diverse representation and human rights impact analyses are missing from AI’s deployment. Without oversight to ensure diversity and proper impact assessments, the benefits of new technologies like AI may not accrue equally.

Canada must ensure that its use of AI-backed technologies is in accordance with its domestic and international human rights obligations, which is especially important given that Canada aims to be one of the world’s leaders in AI development. As such, Canada’s decisions to implement particular technologies, whether developed in the private or the public sector, can set a standard for other countries to follow. The concern for many human rights advocates and researchers is that AI-backed technologies will be adopted by countries with poor human rights records and weak rule of law, which may be more willing to experiment and to infringe on the rights of vulnerable groups. For example, China’s mandatory social credit system – which aims to rank all of its citizens according to their behaviour by 2020 – punishes individuals by “blacklisting” them, thereby creating “second-class citizens”.[50]

If Canada intends to be a leader in AI innovation while maintaining its commitment to human rights and advancing gender equality as a “feminist government”, it must confront the challenges associated with the development and implementation of AI. These include the gender and racial imbalance in science, technology, engineering and mathematics (STEM) education and employment, as well as the lack of accountability and transparency in the government’s use of emerging technologies across the full life cycle of a human services case, including in immigration and policing. Steps that can be taken include ensuring that the Advisory Council on Artificial Intelligence is diverse and representative of Canadian society,[51] enabling civil society organisations that work on behalf of citizens to exercise oversight of current and future government uses of AI, and supporting further research and education to help citizens better understand the current and prospective impacts of emerging technologies (e.g. AI-backed facial recognition or lie detectors) on human rights and the public interest.

Action steps

The following action steps are suggested for Canada:

  • Push for better diversity and representation in the Advisory Council on Artificial Intelligence.
  • Develop a code of ethics for the deployment of AI technologies to ensure that algorithms do not violate basic principles of equality and non-discrimination. The Toronto Declaration, for example, can serve as guidance for governments, researchers and tech companies dealing with these issues.[52]
  • Advocate for accountability and transparency in the government’s use of AI, including algorithmic transparency, for example by creating legislation similar to Article 22 of the European Union’s General Data Protection Regulation (GDPR) on automated individual decision making, which gives individuals the right not to be subject to decisions based solely on automated processing, including profiling.[53]
  • Work with affected communities, such as refugees and asylum claimants, to understand the purpose and effects of the government’s use of AI.
  • Push for digital literacy and AI-specific education to ensure that Canadians understand new technologies and their impact.
  • Utilise gender-based analysis plus (GBA+)[54] and other human rights impact assessments to evaluate AI tools.
  • Conduct oversight to ensure that technological experimentation by the government complies with domestic and internationally protected human rights.

Footnotes

[1] www.canadabeyond150.ca/reports/feminist-government.html

[2] Global Affairs Canada. (2017). Canada’s Feminist International Assistance Policy. https://www.international.gc.ca/world-monde/issues_development-enjeux_developpement/priorities-priorites/policy-politique.aspx?lang=eng

[3] Women and Gender Equality Canada. (2019, 12 June). Government of Canada announces investment in women’s organizations in Ottawa. https://www.canada.ca/en/status-women/news/2019/06/government-of-canada-announces-investment-in-womens-organizations-in-ottawa0.html

[4] Natural Sciences Sector. (2018, 22 November). Canada first to adopt strategy for artificial intelligence. United Nations Educational, Scientific and Cultural Organization (UNESCO). www.unesco.org/new/en/media-services/single-view/news/canada_first_to_adopt_strategy_for_artificial_intelligence

[5] Canadian Institute for Advanced Research. (2017, 22 March). Canada funds $125 million Pan-Canadian Artificial Intelligence Strategy. Cision. https://www.newswire.ca/news-releases/canada-funds-125-million-pan-canadian-artificial-intelligence-strategy-616876434.html

[6] Molnar, P., & Gill, L. (2018). Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System. University of Toronto. https://citizenlab.ca/wp-content/uploads/2018/09/IHRP-Automated-Systems-Report-Web-V2.pdf

[7] Clément, D., Silver, W., & Trottier, D. (2012). The Evolution of Human Rights in Canada. Canadian Human Rights Commission. https://www.chrc-ccdp.gc.ca/eng/content/evolution-human-rights-canada

[8] https://cfc-swc.gc.ca/abu-ans/who-qui/index-en.html

[9] https://www.chrc-ccdp.gc.ca/eng

[10] https://www.chrc-ccdp.gc.ca/eng/content/human-rights-in-canada 

[11] Ibid.

[12] https://www.ohchr.org/en/hrbodies/cedaw/pages/cedawindex.aspx

[13] https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx 

[14] https://www.ohchr.org/en/udhr/documents/udhr_translations/eng.pdf

[15] National Inquiry into Missing and Murdered Indigenous Women and Girls. (2019). Reclaiming Power and Place: The Final Report of the National Inquiry into Missing and Murdered Indigenous Women and Girls. https://www.mmiwg-ffada.ca/final-report

[16] Innovation, Science and Economic Development Canada. (2018, 15 February). Canada’s new superclusters. Government of Canada. www.ic.gc.ca/eic/site/093.nsf/eng/00008.html

[17] Employment and Social Development Canada. (2017, 6 December). Global Skills Strategy. Government of Canada. https://www.canada.ca/en/employment-social-development/campaigns/global-skills-strategy.html

[18] Innovation, Science and Economic Development Canada. (2019, 14 May). Government of Canada creates Advisory Council on Artificial Intelligence. Cision. https://www.newswire.ca/news-releases/government-of-canada-creates-advisory-council-on-artificial-intelligence-838598005.html

[19] https://www.international.gc.ca/world-monde/international_relations-relations_internationales/g7/documents/2018-06-09-artificial-intelligence-artificielle.aspx?lang=eng

[20] PwC Canada, #movethedial, & MaRS. (2017). Where’s the Dial Now? Benchmark Report 2017. https://www.pwc.com/ca/en/industries/technology/where-is-the-dial-now.html

[21] Information and Communications Technology Council. (2018). Quarterly Monitor of Canada’s ICT Labour Market. https://www.ictc-ctic.ca/wp-content/uploads/2019/01/ICTC_Quarterly-Monitor_2018_Q2_English_.pdf

[22] Ontario Centres of Excellence. (2017, 15 November). Bridging the female leadership gap in Canada’s technology sector — The Word on the Street in the World of Innovation. Medium. https://blog.oce-ontario.org/bridging-the-female-leadership-gap-in-canadas-technology-sector-9923bf90babf

[23] Mortillaro, N. (2018, 8 March). Women encouraged to pursue STEM careers, but many not staying. CBC News. https://www.cbc.ca/news/technology/women-in-stem-1.4564384

[24] Paul, K. (2019, 17 April). 'Disastrous' lack of diversity in AI industry perpetuates bias, study finds. The Guardian. https://www.theguardian.com/technology/2019/apr/16/artificial-intelligence-lack-diversity-new-york-university-study 

[25] Lamb, G., Vu, V., & Zafar, A. (2019). Who Are Canada’s Tech Workers? Brookfield Institute for Innovation and Entrepreneurship. https://brookfieldinstitute.ca/wp-content/uploads/FINAL-Tech-Workers-ONLINE.pdf

[26] Wajcman, J. (2010). Feminist theories of technology. Cambridge Journal of Economics, 34(1), 148-150.

[27] Ibid.

[28] Sternberg, I. (2018, 8 October). Female AI: The Intersection Between Gender and Contemporary Artificial Intelligence. Hackernoon. https://hackernoon.com/female-ai-the-intersection-between-gender-and-contemporary-artificial-intelligence-6e098d10ea77

[29] Chambers, A. (2018, 13 August). There’s a reason Siri, Alexa and AI are imagined as female – sexism. The Conversation. https://theconversation.com/theres-a-reason-siri-alexa-and-ai-are-imagined-as-female-sexism-96430

[30] Garvie, C., & Frankle, J. (2016, 7 April). Facial-Recognition Software Might Have a Racial Bias Problem. The Atlantic. https://www.theatlantic.com/technology/archive/2016/04/the-underlying-bias-of-facial-recognition-systems/476991

[31] Myers West, S., Whittaker, M., & Crawford, K. (2019). Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.pdf

[32] Keung, N. (2017, 5 January). Canadian immigration applications could soon be assessed by computers. The Toronto Star. https://www.thestar.com/news/immigration/2017/01/05/immigration-applications-could-soon-be-assessed-by-computers.html

[33] Dyer, E. (2019, 24 April). Bias at the border? CBSA study finds travellers from some countries face more delays. CBC News. https://www.cbc.ca/news/politics/cbsa-screening-discrimination-passports-1.5104385

[34] Lee-Shanok, P. (2019, 30 May). Privacy advocates sound warning on Toronto police use of facial recognition technology. CBC News. https://www.cbc.ca/news/canada/toronto/privacy-civil-rights-concern-about-toronto-police-use-of-facial-recognition-1.5156581

[35] Petronzio, M. (2018, 24 April). How the U.N. Refugee Agency will use big data to find smarter solutions. Mashable. https://mashable.com/2018/04/24/big-data-refugees/#DNN5.AOwfiqQ

[36] Staton, B. (2016, 18 May). Eye spy: biometric aid system trials in Jordan. The New Humanitarian. www.thenewhumanitarian.org/analysis/2016/05/18/eye-spy-biometric-aid-system-trials-jordan

[37] Oberhaus, D. (2018, 26 June). ICE Modified Its 'Risk Assessment' Software So It Automatically Recommends Detention. Motherboard, Tech by VICE. https://www.vice.com/en_us/article/evk3kw/ice-modified-its-risk-assessment-software-so-it-automatically-recommends-detention

[38] Molnar, P., & Gill, L. (2018). Op. cit.

[39] Sweeney, L. (2013). Discrimination in Online Ad Delivery. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2208240

[40] Simonite, T. (2017, 21 August). Machines taught by photos learn a sexist view of women. WIRED. https://www.wired.com/story/machines-taught-by-photos-learn-a-sexist-view-of-women

[41] Murphy, H. (2017, 9 October). Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine. The New York Times. https://www.nytimes.com/2017/10/09/science/stanford-sexual-orientation-study.html

[42] Picheta, R. (2018, 2 November). Passengers to face AI lie detector tests at EU airports. CNN. https://edition.cnn.com/travel/article/ai-lie-detector-eu-airports-scli-intl/index.html

[43] Daniels, J. (2018, 15 May). Lie-detecting computer kiosks equipped with artificial intelligence look like the future of border security. CNBC. https://www.cnbc.com/2018/05/15/lie-detectors-with-artificial-intelligence-are-future-of-border-security.html

[44] Myers West, S., Whittaker, M., & Crawford, K. (2019). Op. cit.

[45] Molnar, P. (2018, 14 December). The Contested Technologies That Manage Migration. Centre for International Governance Innovation. https://www.cigionline.org/articles/contested-technologies-manage-migration

[46] Bowles, N. (2018, 23 June). Thermostats, Locks and Lights: Digital Tools of Domestic Abuse. The New York Times. https://www.nytimes.com/2018/06/23/technology/smart-home-devices-domestic-abuse.html

[47] Arriagada, P. (2016, 23 February). First Nations, Métis and Inuit Women. Statistics Canada. https://www150.statcan.gc.ca/n1/pub/89-503-x/2015001/article/14313-eng.htm

[48] https://www.canadianwomen.org/the-facts/gender-based-violence

[49] Wajcman, J. (2010). Op. cit.

[50] Ma, A. (2018, 29 October). China has started ranking citizens with a creepy 'social credit' system — here's what you can do wrong, and the embarrassing, demeaning ways they can punish you. Business Insider. https://www.businessinsider.com/china-social-credit-system-punishments-and-rewards-explained-2018-4 

[51] Poetranto, I., Heath, V., & Molnar, P. (2019, 29 May). Canada’s Advisory Council on AI lacks diversity. The Toronto Star. https://www.thestar.com/opinion/contributors/2019/05/29/canadas-advisory-council-on-ai-lacks-diversity.html

[52] Access Now. (2018, 16 May). The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems. https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems 

[53] European Union. (2018, 25 May). Art. 22 GDPR Automated individual decision-making, including profiling. https://gdpr-info.eu/art-22-gdpr

[54] “GBA+ is an analytical process used to assess how diverse groups of women, men and non-binary people may experience policies, programs and initiatives. The ‘plus’ in GBA+ acknowledges that GBA goes beyond biological (sex) and socio-cultural (gender) differences.” Source: Gender Based Analysis Plus (GBA+), Status of Women Canada. https://cfc-swc.gc.ca/gba-acs/index-en.html

Notes:
This report was originally published as part of a larger compilation: “Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development”
Creative Commons Attribution 4.0 International (CC BY 4.0) - Some rights reserved.
ISBN 978-92-95113-12-1
APC Serial: APC-201910-CIPP-R-EN-P-301
ISBN 978-92-95113-13-8
APC Serial: APC-201910-CIPP-R-EN-DIGITAL-302