Immigration algorithms and similar AI-driven tools have been rolled out across the Western world over the past few years. As politics becomes increasingly polarised over the acceptance and integration of migrants (particularly refugees), governments have turned to technology as an ostensibly objective and faster means of decision-making. But will these decisions actually benefit migrants, or subject them to discrimination? The answer seems to vary from nation to nation.
Let’s start with the UK.
The UK Home Office recently announced it would stop using an algorithm to process visa applications. This was in response to a judicial review of the system launched by the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove. The campaigners claimed that the algorithm brought ‘entrenched bias and racism’ into the immigration system.
From 2015, the Home Office algorithm employed a “traffic-light system” whereby applicants were graded green, amber or red according to their assessed level of risk, with risk assigned largely on the basis of the applicant’s nationality. Flagged applications were then subjected to increased scrutiny by officials, making them much more likely to be refused. Those refusals in turn fed back into the statistics, prompting the algorithm to categorise certain countries as suspect nationalities. Unsurprisingly, African migrants were the biggest victims of this vicious feedback loop.
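The feedback loop can be illustrated with a toy simulation. Everything here is hypothetical: the band thresholds, refusal probabilities and tallies are illustrative stand-ins, not the Home Office's actual model. The point is only to show how refusals driven by a risk band feed back into the very statistics that set the band.

```python
import random

def risk_band(refusal_rate: float) -> str:
    """Map a nationality's historical refusal rate to a traffic-light band.
    Thresholds are invented for illustration."""
    if refusal_rate >= 0.5:
        return "red"
    if refusal_rate >= 0.2:
        return "amber"
    return "green"

# Hypothetical refusal probabilities per band: extra scrutiny on
# red-flagged cases makes refusal far more likely.
REFUSAL_PROB = {"green": 0.1, "amber": 0.3, "red": 0.7}

def simulate(nationality_stats, rounds, apps_per_round, rng):
    """nationality_stats: {nationality: [refused, total]} running tallies.
    Each round, every nationality's band is set from its past refusal
    rate, and new applications are refused at that band's rate -- so
    early refusals entrench themselves."""
    for _ in range(rounds):
        for nat, tally in nationality_stats.items():
            refused, total = tally
            band = risk_band(refused / total if total else 0.0)
            for _ in range(apps_per_round):
                tally[1] += 1
                if rng.random() < REFUSAL_PROB[band]:
                    tally[0] += 1
    return nationality_stats
```

Starting two nationalities with different initial refusal rates, the one that begins "red" stays red: the band causes refusals, and the refusals justify the band.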
Algorithms are also not always accurate. In 2017, over 7,000 international students were expelled from the UK after an algorithm wrongfully accused them of cheating on an English proficiency test.
It is clear that certain algorithms only create an illusion of objectivity and precision. If programmed in a biased way and fed with biased statistics, algorithms only reinforce the prejudices of government officials who ultimately decide a migrant’s fate. In many cases, this could be the difference between life and death, or success and failure.
However, there are examples of algorithms being successfully integrated into immigration systems. Switzerland uses an algorithm during the resettlement of refugees, aiming to allocate them among cantons in a way that maximises their probability of employment and successful integration. The algorithm identifies baseline characteristics associated with finding a job in a given location and matches them against each asylum seeker’s own attributes.
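The matching idea can be sketched as a simple greedy assignment: each person gets a predicted employment probability per location, and people are placed where their predicted probability is highest, subject to each location's capacity. All names, scores and the greedy strategy itself are illustrative assumptions; the real Swiss system's model and data are far richer.

```python
def assign(predictions, capacity):
    """predictions: {person: {location: predicted employment probability}}
    capacity: {location: remaining slots}
    Returns {person: location}, processing the highest-scoring
    person-location pairs first."""
    scored = sorted(
        ((p, person, loc)
         for person, locs in predictions.items()
         for loc, p in locs.items()),
        reverse=True,  # best predicted probabilities first
    )
    placement = {}
    for p, person, loc in scored:
        # Skip anyone already placed, and any full location.
        if person in placement or capacity.get(loc, 0) <= 0:
            continue
        placement[person] = loc
        capacity[loc] -= 1
    return placement
```

For example, with hypothetical scores `{"A": {"Zurich": 0.6, "Geneva": 0.3}, "B": {"Zurich": 0.5, "Geneva": 0.4}}` and one slot per canton, A takes Zurich (the single best score) and B is placed in Geneva.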
Since the Swiss algorithm was launched in 2018, refugee employment has risen from 27% to 37%, according to the State Secretariat for Migration (SEM). The SEM does note, however, that much of that increase is down to the introduction of pre-apprenticeship courses. It is also noteworthy that most employed refugees work in low-paid jobs that do not allow them to become financially independent.
Overall, immigration algorithms can be implemented for the good of migrants or to their detriment. Many are calling for such technologies to be standardised and regulated on a global scale. Canadian human rights lawyer Petra Molnar advocates for the creation of an independent algorithmic control structure that would handle both algorithmic content and implementation. She also suggests the creation of a task force bringing together governments and academics to discuss ethical issues surrounding immigration algorithms.
Both initiatives would enable algorithms to help more refugees find homes in new countries and escape the perils of war and persecution. From an economic perspective, robust immigration algorithms could be a harbinger of even greater growth in international trade, travel and business.