
Outsourcing of Facial Recognition Technology and its Consequences

Artificial intelligence (AI) technologies have seen rapid uptake in government applications and across sectors. In India, arguably one of the most significant and contentious uses of AI is the ubiquitous deployment of facial recognition technology (FRT), especially by local law enforcement agencies. Given the paucity of independent scholarship on these issues, especially in the Indian context, we decided to examine some unique aspects of this debate through doctrinal and empirical methods of research.

 

This working paper is the final one in our three-part series examining different aspects of how FRT figures in Indian law enforcement. The first working paper presented a deep dive into the legal and constitutional challenges, and the ethical and design risks, in the use of this technology in the criminal justice system (specifically policing and surveillance). The second working paper was an empirical study of the impact such technology is likely to have on already vulnerable populations, through a case study of its use by the Delhi Police.

AI in law enforcement has multiple uses. Basu and Hickok (2018) have compiled the range of such applications in India and show that these primarily fall under the categories of predictive analytics and speech or facial recognition. For instance, Delhi Police’s Crime Mapping, Analytics and Predictive System (CMAPS) is an example of the deployment of predictive analytics for law enforcement.

FRT, a form of AI application, has found increasing use in Indian law enforcement. Several states have at least one FRT system, most commonly one linked to a network of CCTV cameras and used for local policing. Its uses range from finding missing children and monitoring busy marketplaces to profiling protestors and, ostensibly, “protecting women”.

The most overt expression of the intent to deploy FRT for law enforcement came in 2018, when the National Crime Records Bureau (NCRB) floated a request for proposal (RFP) for a pan-Indian FRT system for law enforcement. After considerable concerns were voiced against it, including a legal notice being served, the tender was modified so that the technology would not be linked to multiple databases.

The proliferation of FRT in law enforcement is concerning on various fronts, which we discussed in detail in the first working paper of this series. Briefly, it poses an obvious risk to privacy, as people are subjected to surveillance and analytical systems that track their everyday activities.

In the absence of a data protection law, there is little, if any, protection against privacy harms caused by the use of this technology. Surveillance using FRT can also lead to small crimes or infractions being punished disproportionately, as these become easier to track. The use of FRT can produce significant bias against marginalised people, especially those who already face bias from the police. We have covered these risks in detail in the previous two papers in this series.

However, a crucial aspect of the debate around the use of FRT in law enforcement, both internationally and in India, is conspicuously absent: how the private sector works in tandem with local and federal governments and law enforcement agencies to design these surveillance systems. This absence is emblematic either of the lack of publicly accessible data on such public-private contracts or of a lack of real consideration of their innate risks and challenges.

Whatever the reason, state governments across India continue to procure such technologies while circumventing public scrutiny and accountability. Therefore, as the concluding working paper in our series, this paper focuses on the involvement of the private sector in developing and implementing FRT solutions for law enforcement in India.

The paper is structured as follows. First, we discuss the methodological issues we faced while conducting our empirical research on the private sector’s participation in this area in India; these difficulties are themselves symptomatic of the opacity surrounding these contracts. Following the methodology, we provide an overview of the private sector’s contribution to facial recognition systems in law enforcement worldwide, and then do the same for India.

Private-sector technology provision and law enforcement: An overview

In January 2020, the New York Times broke a story about an American company called Clearview AI. Clearview AI scrapes data from social media and other publicly available sources across the Internet to create a powerful facial recognition tool, which it has licensed to domestic law enforcement agencies in the US and several other countries, among other entities. Using the app, police personnel can take a picture of a person and, from their face alone, uncover a large part of their digital footprint. The company provides its technology to at least 600 police agencies in the United States, and at least 2,000 American public agencies in total. In May 2021, privacy activists filed legal challenges against Clearview AI in several European countries. Despite its scale, Clearview AI is only one of the companies providing FRT to law enforcement agencies in the United States.

Over the last few years, employees of large technology companies, along with civil society activists, have forced these companies to desist from providing FRT to governments for surveillance. Amazon and Microsoft have both announced moratoria on providing FRT to the police, while IBM has ended its general facial recognition programme altogether. Most recently, Facebook (now rebranded as Meta) announced the discontinuation of its facial recognition programme.

However, it is pertinent to note that these self-regulatory measures came belatedly. For instance, before it was forced to end these partnerships, Amazon’s subsidiary Ring had made deals with 1,300 law enforcement agencies to use CCTV camera footage for surveillance. Similarly, before Microsoft was forced to enact its moratorium, it had pitched its FRT system to the Drug Enforcement Administration, an agency that has often come under heavy criticism for its racist and brutal policies. Microsoft and Amazon have not extended their moratoria to cover federal law enforcement agencies in the US.

Aside from this loophole, there are several other large and small private companies that still provide FRT to the police. In 2012, IBM entered into an agreement with the government of Davao in the Philippines to provide a surveillance system for the city. The system, provided by IBM till 2016, reportedly assisted in the extrajudicial killings carried out by President Rodrigo Duterte in his “war on drugs”. While IBM denies that the system included FRT, promotional material reveals that IBM advertised FRT as being part of the system. The system also enabled the over-criminalisation of petty crime like loitering. FRT does not have to be directly provided to the police for it to be used for law enforcement functions.

At times there is monitoring by non-state actors that also indirectly feeds into arbitrary surveillance. For instance, real estate companies use FRT in many parts of the world to restrict entry to, or detect crime in, their premises. In China, such use has been particularly prevalent – both for security and advertising purposes.

A survey revealed that over eighty per cent of people in China wanted more control over their data collected by such systems. Subsequently, several provinces and cities in China have banned or are considering banning the use of FRT for security purposes. China also released draft standards for the use of FRT, limiting its use to identification rather than prediction, discouraging its use on minors, and recommending a search for alternatives to the technology before implementation.

There are also examples of real estate companies in Canada and the United States using FRT to screen residents and visitors. In Brazil, public schools use FRT to track attendance and ostensibly assure safety. Much of Brazil’s surveillance equipment is provided by Huawei and operated by the private telecom company Oi Soluções.

The European Parliament has passed a resolution calling for a ban on both facial recognition technology and predictive policing, as well as a ban on private facial recognition databases; the resolution is, however, non-binding. In India, it is primarily news reports that provide information on the involvement of private companies in providing FRT to the police. The law enforcement agencies involved make no data or documented records available to the public, and, as our Right to Information (RTI) requests revealed, the process remains opaque.

News reports reveal that a few Indian startups, such as Staqu, Innefu Labs and FaceTagr, provide FRT to the police, as do foreign companies like Japan’s NEC and Israel’s Cortica. Other private companies, including EY, Idemia, Tech5, Thales, Anyvision and Vara Technology, have been present at pre-bid conferences organised by the NCRB for developing a national facial recognition system.

We have been able to glean the following types of uses of FRT by the police in India:

  1. Real-time monitoring of public places: Police use FRT to monitor public places to identify “blacklisted” people among the crowd. Staqu, for instance, has stated that its FRT can identify people based on a low-resolution video feed as well.
  2. Investigation: Police also use FRT to narrow down a list of suspects, track down suspects, and potentially use this information as evidence at trial. For example, in April 2021 the Bihar police floated a tender requesting an integrated surveillance system that could, among other capabilities, match faces of suspects against various databases. In a case involving sandalwood smuggling, the Karnataka police claimed that a month-long investigation could have been completed in a week had they been able to use FRT.
  3. Smartphone-based instant search and verification: FaceTagr claims that it has helped the Chennai police use an app that can instantly match a person’s face against a database, using a picture taken on the spot (a minimal illustrative sketch of this kind of database matching appears after this list). In 2017, it was reported that the database included 12,000 offenders and was to be expanded by a further 40,000. The police collected the initial database themselves.
  4. Conflict area monitoring: FRT is also reportedly being used by the Indian armed forces. Staqu reportedly conducts aerial imaging analysis for the Indian Army. Army documents show a requirement for AI-based field monitoring using legacy cameras, so that resources are freed up from the time-consuming task of monitoring camera feeds. They also show that, as of 2019, no real-time FRT was deployed by the Indian Army. The Indian Army has published problem statements to encourage research and development by both public and private actors in these technologies.
  5. Covid-related monitoring: FRT was used to verify beneficiaries at vaccination centres as part of Covid-19 prevention measures in India. Since vaccination was age-restricted and tracking the number of shots per person was important, identity verification served a law enforcement purpose. FRT was used to match people’s photos at the vaccination site with their Aadhaar photos, although details of this use are scant.
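
The instant-matching workflow described in item 3 can be made concrete with a short sketch. The snippet below is a hypothetical illustration using the open-source face_recognition Python library; the file names, labels and matching threshold are assumptions for illustration only, and nothing here describes FaceTagr’s or any police force’s actual system.

```python
# Minimal sketch of matching a photo taken "on the spot" against a small
# database of enrolled photographs. Illustrative only: file names, labels and
# the threshold are assumptions, not any vendor's actual configuration.
import face_recognition

# Hypothetical enrolled database: a few labelled reference photographs.
enrolled = {
    "person_a": "db/person_a.jpg",
    "person_b": "db/person_b.jpg",
}

# Compute one face encoding (a 128-dimensional vector) per enrolled photo.
known = {}
for label, path in enrolled.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos in which no face was detected
        known[label] = encodings[0]

# Encode the probe photo captured on the spot and compare it to every entry.
probe = face_recognition.load_image_file("probe.jpg")
probe_encodings = face_recognition.face_encodings(probe)

if probe_encodings and known:
    labels = list(known)
    distances = face_recognition.face_distance(
        [known[l] for l in labels], probe_encodings[0]
    )
    # Smaller distance means more similar; 0.6 is the library's conventional default cut-off.
    matches = [(l, round(float(d), 3)) for l, d in zip(labels, distances) if d < 0.6]
    print("Possible matches:", matches or "none")
```

The cut-off in the last step directly trades false matches against missed matches, which is why claims about accuracy mean little without knowing how such thresholds are set and on what data the system was tested.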

This list is not exhaustive, as there can exist uses of FRT that are not revealed in the public domain. However, it is indicative of the broad contours of usage by law enforcement agencies in India and how the private sector designs these for them.

The data on standards followed by, or required from, private providers of FRT for law enforcement is also scant. In its revised tender for a national facial recognition system, the NCRB required bidders to have participated in an evaluation by NIST, the National Institute of Standards and Technology of the US, which conducts a Face Recognition Vendor Test assessing FRT systems for accuracy and demographic bias.

To our knowledge, information about the accuracy standards required of FRT during the tender process for law enforcement applications is not public. It is also pertinent to note that there is an argument to be made about the suitability of an American test for an FRT tool that is to be designed and deployed in India using Indian datasets.
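
To make concrete what such an evaluation measures, the sketch below computes the two headline error rates used in vendor tests of this kind, the false match rate (FMR) and the false non-match rate (FNMR), broken down by demographic group. The comparison records are invented purely for illustration; they are not drawn from NIST, the NCRB tender, or any real system.

```python
# Illustrative computation of false match rate (FMR) and false non-match rate
# (FNMR) per demographic group. The records below are invented; real vendor
# tests use millions of comparisons, but the arithmetic is the same.
from collections import defaultdict

# Each record: (group, whether the pair is the same person, similarity score).
comparisons = [
    ("group_1", True, 0.91), ("group_1", True, 0.55),
    ("group_1", False, 0.40), ("group_1", False, 0.72),
    ("group_2", True, 0.88), ("group_2", True, 0.93),
    ("group_2", False, 0.30), ("group_2", False, 0.35),
]

THRESHOLD = 0.6  # pairs scoring at or above this are declared a "match"

stats = defaultdict(lambda: {"fm": 0, "impostor": 0, "fnm": 0, "genuine": 0})
for group, same_person, score in comparisons:
    s = stats[group]
    if same_person:
        s["genuine"] += 1
        if score < THRESHOLD:   # a genuine pair the system failed to match
            s["fnm"] += 1
    else:
        s["impostor"] += 1
        if score >= THRESHOLD:  # two different people wrongly declared a match
            s["fm"] += 1

for group, s in stats.items():
    print(f"{group}: FMR={s['fm'] / s['impostor']:.2f}, "
          f"FNMR={s['fnm'] / s['genuine']:.2f}")
```

Large gaps in these rates across demographic groups are what “demographic bias” means in this context, and error rates measured on American datasets say little about how the same system would behave on Indian faces.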

The following section elaborates on the list of problems caused or likely to be caused by the use of privately provided FRT by law enforcement.

Issues caused by private sector involvement in FRT for law enforcement

There are both legal and governance questions raised by the involvement of the private sector in providing FRT for law enforcement. Some questions relate to privacy, particularly the indiscriminate use of various datasets, the outsourcing of surveillance functions to private entities, and the security of data used in FRT processes. Some relate to the thorny issues of liability that arise in the use of AI applications. Others relate to the blending of private incentives with public power in security operations. This section elaborates on all of these issues.

Privacy Risks

There are serious concerns about the informational autonomy and privacy of individuals. FRT uses sensitive and unique information, facial prints, which are akin to biometrics and certainly qualify as personal data or information as laid down by the Supreme Court of India in the Puttaswamy judgment. While India awaits the enactment of formal data protection legislation, it is incorrect to assume that private and public entities have carte blanche over how they collect, process and use the personal data of individual citizens.

At the same time, informational autonomy has been recognised as a key manifestation of the right to privacy under Article 21 of the Constitution. This has significant implications for the current practice of surreptitiously engaging private corporations to design, deploy and oversee the use of FRT in Indian law enforcement.

The most significant issue is what datasets these companies use to design the underlying FRT algorithms. For instance, the NCRB’s original request for proposal for a pan-Indian FRT system gave a loosely worded description of the datasets that could be used, and immediately drew a sharp response from lawyers and privacy activists.

A legal notice subsequently compelled the NCRB to define the usable datasets much more clearly in its revised RFP. For most states where companies have designed such algorithms, it is entirely unclear whether the sensitive personal data of Indians is being tapped without their informed consent, in violation of the right to privacy. This also conflicts with the notion that an individual must have control over what personal information is collected, used, or shared further, as established by the Supreme Court in Puttaswamy.

On the other hand, if datasets comprising Indian faces are not used to train the algorithm, there are other risks, such as inherent design flaws leading to inaccurate outcomes. We have covered these in detail in the first working paper of this series. Beyond the question of datasets, there is a significant concern about whether surveillance activities are being directly or indirectly outsourced to these private corporations.

Surveillance, even when conducted by the state, is exceptional and governed by law. Without commenting on the merits of state surveillance, it is unquestionable that private entities are not empowered to assume this role, nor is this a responsibility the state may freely delegate. The opacity that shrouds the current engagements between state police forces or governments in India and select private entities, their roles and scope of engagement, and the access these entities can arguably continue to have to the underlying algorithm, raises serious questions about a plausible and dangerous merger of state functions with private entities.

This has pernicious ramifications from both a democratic and a privacy standpoint, the latter because the power of mass surveillance is effectively handed to opaque private sector entities. The third infraction of an individual’s right to privacy is the potential for data breaches precipitated by the private entities designing such algorithms.

The draft Personal Data Protection Bill, 2019 (PDP Bill) proposes to place a considerable onus on data processors by prescribing stringent measures for data collection, storage, access controls, and so on. However, in the absence of this legislation, and given the lack of public visibility or scrutiny of how private sector entities operate in designing surveillance systems, there is a legitimate concern about whether the datasets being used are safeguarded by appropriate mechanisms.

In fact, in India there have been reported instances of data leaks from FRT applications used by local police agencies. Data breaches can be both accidental and deliberate.

The example of Clearview AI helps illustrate this point. Because Clearview AI is used by so many police departments in the United States, it now has a database of all the people the police search for. This database can be viewed in many different ways depending on one’s understanding of police work: as a list of vulnerable people, of criminals, or of people prone to crime.

What happens when a database of such people is used for other purposes? Suppose the police run searches on a person’s face because they consider him a suspect, but later decide he is not one. It is evidently inappropriate for this search information to end up with a credit rating agency, a housing society, child adoption centres, or employers.

The risk of unfair discrimination is clear and acute. The search data does not even have to be tied to a particular name to be used maliciously – aggregate data always carries a risk of de-anonymisation, and entities like credit rating agencies use locality-based data to rate entire neighbourhoods. In India, the national-level facial recognition system to be developed for the NCRB would connect to the Crime and Criminal Tracking Network and Systems (CCTNS) database.

The private FRT vendor would have access to this database in some form, or at the very least to inferences made from it. There is no clear mechanism to prevent the private vendor from using this database jointly with FRT data for unrelated purposes, such as selling it to a credit rating agency in circumvention of legal restrictions against such sharing.

As another example, Staqu now provides its software for private security uses, such as in real estate. There is no transparency on what kind of data is used to help private security providers meet their ends, and Indian law is woefully inadequate at limiting such practices.

The ongoing unregulated and invisible collaboration between governments and private corporations, thus, poses serious risks to individual privacy and informational autonomy.

More transparency in this partnership, and the establishment of oversight through clear checks and balances, are necessary if such an arrangement is to continue in a fair and accountable manner. We discuss this further in the final section of this paper.

Issues Surrounding Unclear Legal Liability

The use of FRT in law enforcement poses several risks, each of which warrants deeper consideration from a liability standpoint. Arguably, the most insidious risk is to an individual’s freedom and liberty, as inaccurate FRT results can lead to the apprehension, detention, prosecution and even potential conviction of an individual. These are not merely theoretical conjectures – there have been instances of misidentification with detrimental results for innocent individuals.

The legal liability of AI (and its specific manifestations, such as ML algorithms) is highly debated. While the nuances of where and how liability can be imputed are a theme for more detailed scholarship, we aim here simply to flag the polycentric nature of the law enforcement ecosystem within which a flawed FRT algorithm can create a legal cause of action against different actors. From a liability perspective there are three main concerns: first, the liability of the private corporation or developer of the FRT algorithm; second, the liability of the state for deploying a flawed algorithm; and third, the possibility of holding the algorithm itself liable.

Beyond the debate about whom to hold accountable, the manner in which law enforcement agencies in India have pursued FRT leaves little recourse for judicial action. Any litigation would require substantial evidence to establish the liability of both private and state actors. For private actors in particular, the secrecy offers de facto immunity from legal liability, as it is nearly impossible to build a proper case in the absence of even the most fundamental details of the arrangement.

As researchers, the authors have struggled to piece together even rudimentary aspects of how state governments have engaged private corporations, the governing norms or terms of reference for such arrangements, and whether any liability provisions govern these relationships.

For a potential accused, this information will be utterly inaccessible. The result is an arbitrary ecosystem in which, despite high risks of surveillance, privacy infringement, transgressions against due process, and real threats to constitutional and legal rights, there is no meaningful recourse. Depriving an individual of proper access to justice in itself violates the sacrosanct ethos of the Indian Constitution and would certainly fall short of the idea of “constitutional morality” that has become the cornerstone of AI ethics in India.

Private Goals Drive Public Priorities

When a significant portion of facial recognition technology is provided to public agencies by the private sector, the particular motives of private actors affect the development and use of this technology. Consequently, private motives affect public outcomes. In this section, we use the case of Clearview AI, along with other examples, to illustrate that these motives are often incompatible with public welfare and can distort public outcomes.

  1. Profit, free trials and proliferation: When Clearview AI started out, it followed a model that is by now quite familiar: deep discounts for customers to create and capture a market. It offered police departments free trials and cheap licences at the outset to demonstrate the value of its technology. This made many policing agencies de facto brand ambassadors for the technology.

While there may be nothing wrong with offering discounts to public agencies, this practice, in a legal vacuum and in the absence of public deliberation, reflects a mispricing of the technology. In effect, the technology was deployed not only without consultation with the public or a social welfare assessment, but also at an artificially low price. The low price is motivated by a private interest in the proliferation of this technology. Due to the opacity of the use, this private interest was never balanced against the public interest.

In India, several FRT providers are also funded by venture capital and are presumably able to provide the technology at deep discounts. Mispricing leads to use well above the socially optimal level. In other words, policing activities are increased and sharpened to unjustifiable levels because the technology is made available cheaply in the short term.

Such over-policing can lead to the over-criminalisation of society. In 2017, the Times of India reported that the Chennai police approached a “quarrelling group” and used the FaceTagr FRT app on them, revealing that one of the people had pending cases against them. It is unclear why this example was considered a success story by the police, or how it would clear any test of necessity and proportionality.

Similarly, Staqu’s principal FRT product, christened Jarvis, can be used to detect “unnecessary loitering and suspicious activity”. The description on Staqu’s website is accompanied by a picture of a homeless man, indicating the usual targets of such surveillance. Since the phenomenon of over-criminalisation has been discussed in detail in the previous papers in this series, we will not cover it further here.

  2. Conflicts of interest: Clearview AI scrapes images from social media websites to develop its database, and Facebook is naturally a major source of image data for it. In February 2020, Facebook demanded that Clearview AI stop using its data, as this activity violated Facebook’s policies. However, a prominent investor in Clearview AI, Peter Thiel, also sits on Facebook’s board. Notably, Facebook, unlike Twitter, did not send a formal cease-and-desist letter to Clearview AI. Serious conflict-of-interest concerns arise when the same people or entities are invested both in FRT for law enforcement and in data gathering for non-law-enforcement purposes.

In the Indian context, the private entities that currently design, or may potentially design, such algorithms for law enforcement agencies will raise similar questions. It is also pertinent to note that the proposed PDP Bill is unlikely to resolve such conflicts of interest. It contains several clauses that enable the central government to exempt entities, including private corporations deemed necessary for law enforcement, from numerous substantive provisions of the draft bill. It is therefore legally conceivable that entities engaged in designing such surveillance technologies for the state are afforded such exemptions for their data processing.

  3. Manipulation of data: The New York Times article that first broke the news about Clearview AI’s activity also showed that the company could manipulate its database and block search results for certain faces. The relevant quote from the article is below:

“After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches.”

For an activity as sensitive as law enforcement, the ability of a private actor to manipulate the results of a search seems perilous. In the context of India, when the partnerships between private technology providers and police agencies are so opaque, there is very little opportunity to examine the level of control the police exerts over the database, and the transparency with which the technology functions.

The incentives in the use of privately provided FRT in law enforcement are similar to those of private prisons in the United States. Private prisons have been shown to be primary drivers of mass incarceration and the heavily racially biased “war on drugs” in the US. The profit incentive in imprisonment – along with other factors – ends up promoting government action in increasing imprisonment and criminalisation of activities that are either harmless or best reduced through non-punitive measures.

In fact, there is now an entire for-profit industry of bail bonds that traps people in debt once they are arrested. The effects extend to foreign policy as well – as of 2016, nearly three-fourths of US federal immigration detainees were held in private prisons. The toll of the war on drugs in terms of life years and lives lost is staggeringly high and should give anyone pause while considering the promotion of private interests in law enforcement.

To be clear, there are cases of the positive involvement of private providers in public services. But the involvement of private interests in security provision is a special case because it directly affects the coercive actions of the state, which in ordinary circumstances are subject to democratic control. There is rich literature about the consequences of private incentives driving security provision. Perhaps the most well-known expression of this general relationship is Eisenhower’s phrase “military-industrial complex”.

Eisenhower pointed not only to the profit motive and the monopolisation of defence markets but also to the difficulty of cutting back spending on such endeavours once it has increased and become entrenched. Without undue alarmism, the takeaway for us should be that, at the very least, a clear and public examination of the costs and benefits of privately provided technology for law enforcement should be carried out before its use is entrenched.

Recommendations

  • Transparency of agreements
  • Algorithmic standards and regulation
  • Stricter legal restrictions on surveillance
  • Public involvement in decision-making

 

Ameen Jauhar and Jai Vipra work at the Centre for Applied Law and Technology Research at Vidhi Centre for Legal Policy.