In praise of the private Indian
Ananth Padmanabhan | 03 Aug, 2017
HAVING STARTED IN 2010 with the objective of identifying residents using biometric markers, the Aadhaar experiment has grown in size and scale over the years. Not only has it ignited our collective imagination, it has been contested at the Supreme Court as an intrusion into our privacy. Its technology design, which permits private entities to read its database for the authentication of customers, and its supporting legal framework, which places the Unique Identification Authority of India (UIDAI) in a unique position as the custodian and regulator of the Aadhaar database, have come under the judicial scanner.
At first glance, Aadhaar appears to renew the classic debate between the ‘constructive’, largely represented by engineers and technocrats, and the ‘critical’, as civil society actors, NGOs and public-interest-motivated lawyers often get branded. Can constitutional principles be sacrificed at the altar of development to achieve the greater common good? Should an experiment that is particularly promising for a country where data is currently sparse, and pilferage through duplicate identities extensive, be shot down on principle, in the name of indeterminate values such as privacy and personhood? Many stakeholders have previously been caught in similar crossfires of public reasoning, involving trade-offs between outcomes and principles. Think of big dams, infrastructure and mining projects, and other growth and developmental initiatives. But when contending with control over personal data—an intangible, non-rivalrous, inexhaustible commodity—our perceptions of rights, wrongs, benefits and harms stand altered. The conflict, in reality, is between two designs for dominance: the design of privacy and the solution architecture of Aadhaar. To examine this conflict, we must first identify the design offered by the idea of privacy, before turning to the substantive concerns that Aadhaar invites.
An influential early work, The Right to Privacy (1890), co-authored by Samuel Warren and Louis Brandeis, was a response to the immediate technological advances of the time that strengthened an intrusive press. The authors, building upon the psychological insight that an individual’s self-image suffers greatly when private and personal information is disclosed, distilled the individual’s general right to be let alone as being at stake across diverse factual scenarios. In a perfect illustration of how law, much like technology, chisels its own unique design, the authors introspected on possible limitations of this right too, including the publication of matter that is of ‘public or general interest’.
William Prosser’s four-way classification followed in 1960, leading to a re-design of the broader ‘right to be let alone’ into four categories of wrongs: intrusion upon seclusion; public disclosure of private facts; presentation in false light; and wrongful appropriation of a person’s identity or attributes. Though this model, based on individual control over personal information, has come under strain from the technological onslaught, particularly the rise of online social media companies and ‘big data’-driven business models, the robustness of privacy lies in its ability to adapt to changing circumstances. Privacy by design (PbD) serves as a wonderful example of the flexibility and dynamism inherent in the very concept. Developed in the mid-1990s as part of a Canadian response to informational privacy challenges, particularly within the technology context, PbD offers important guidelines for building privacy into a product’s solution architecture. A critical feature of this design is the emphasis on privacy as a default position: the purpose for which personal information is collected is made clear right when the data is gathered; its collection is limited to the extent necessary for achieving this underlying purpose; and user privacy is guaranteed through multiple mechanisms, including prior consent, an accurate and up-to-date database, transparent practices that inform the user about data use, sharing and disclosure, and effective grievance redressal. In addition, PbD prioritises an ex-ante preventive strategy to safeguard private data over ex-post remedial fixes, thereby mandating inbuilt systems that anticipate privacy invasion beforehand and strong security measures through the entire data lifecycle.
Privacy has thus evolved well beyond a philosophical construct to offer, by way of design principles, a rather clear set of dos and don’ts. The privacy challenge to Aadhaar now compels us to reframe the debate from ‘constructive’ versus ‘critical’ into a tussle between two competing, concrete and equally plausible designs: Aadhaar’s solution architecture and non-negotiable privacy design features. Because the Indian Constitution enshrines privacy as a basic right, for reasons advanced here, the technology design must necessarily fit within the parameters of the legal and architectural design mandated by its values.
Before examining the place that privacy occupies within India’s constitutional set-up, it is important to dispel doubts regarding its relevance to India’s cultural ethos. Recent frontal attacks on the grounds that privacy is an imported and ambiguous concept warrant this diversion. A few hypotheticals suffice to refute this specious argument. Consider the state declaring that certain communities are more inclined to a life of crime and must therefore have cameras installed in their living rooms; or directing a corporation or venture fund to share contractual and client information so that smaller players could learn and develop new business models. Regardless of the factual basis behind such moves, they would shock reasonable nothing-to-hide Indians precisely because we, like any other society, cherish a conception of the inviolable private sphere. The real dilemma, therefore, is not whether we understand or value privacy, but whether the legal design for safeguarding privacy, outlined above, is a constitutional barrier stopping the Government from deploying a particular technology design.
But the Union, and some states, have relied on earlier Supreme Court verdicts to contend that privacy protection is not an independent fundamental right. The decisions cited, MP Sharma vs Satish Chandra (1954) and Kharak Singh vs State of UP (1962), were outcomes of a judicial philosophy espousing a narrower construction of rights—one largely abandoned after the experience of Emergency excesses. The former case involved a search and seizure operation, pursuant to a legally obtained warrant, conducted on various business locations of a promoter group suspected of embezzling funds. The specific constitutional violation alleged, infringement of the right against self-incrimination, was far removed from the privacy claims emerging from the right to life under Article 21 of the Constitution. It is within this context that the Court noted the absence of a fundamental right to privacy, to which the search and seizure regulations would otherwise be subject. The latter case involved a dacoity suspect against whom there was no evidence for a conviction, but the police nonetheless opened a ‘history-sheet’ and carried out surveillance operations and domiciliary visits. Interestingly, the Court, while negating the right to privacy and upholding surveillance as a mechanism to gather data, held that domiciliary visits were unconstitutional. The majority reasoned that ‘personal liberty’ under Article 21 was broad enough to prevent ‘an unauthorised intrusion into a person’s home and the disturbance caused to him thereby’—a classic application of the very right denied in the context of surveillance. When these decisions are read alongside subsequent apex court verdicts that accepted privacy values more liberally, the picture that emerges is not one of blanket denial of the right, but of acceptance of some of its design features and rejection of others, depending on the facts at hand.
Yet, this still does not address why certain privacy design features, even those recognised in the OECD guidelines, should find constitutional sanctity in India. There are two responses. First, restrictions imposed on a right ought to be reasonable, and when gauging this, both substantive and procedural aspects of the law in question must be examined from the perspective of the nature of the right alleged to be infringed, the underlying purpose of the restrictions imposed, the extent and urgency of the targeted problem, the disproportion of the imposition, and prevailing socio-economic conditions. This classic exposition of reasonableness, laid down in State of Madras vs VG Row (1952), continues to guide courts. From the vantage point of this balancing exercise, privacy’s design features act as sufficient checks to ensure that inroads into the right do not breach it unreasonably.
Additionally, privacy, at its core, entails a set of values which its design features are meant to safeguard. One such foundational value is the distinction between private and public zones; ‘a man’s home is his castle’, as they say. As a corollary, the state has, as a general matter, no business in an individual’s private dealings until situations that import ‘public-ness’—the commission of crimes is an oft-mentioned example—arise in the course of such dealings. Individuals must also be empowered to take suitable measures to protect their privacy, as this forms an integral part of the very concept. When particular solution architectures, broadly conceived to include the technology design, its functioning and supporting legal frameworks, fail to meet privacy’s design features, there is cause for veritable apprehension that the values themselves are at risk. Both these concerns—a misalignment with specific design requirements and challenges to deeper privacy values—show up on a close examination of Aadhaar’s solution architecture.
As matters stand, anyone prying open the Aadhaar black box—strictly a reference to the promised manna of deep-learning tools for governance—is immediately confronted with two objections. The first is the ‘fait accompli’ objection, which draws strength from high enrolments, resources already expended and, as a last-minute throwaway, the recent addition of a legal framework with a soon-to-come data protection law. The second is the ‘go after Google’ objection, which tries to divert citizen ire towards private technology giants that use our data for predictive analytics and product suggestions.
The first objection is strong in terms of practicality, though never in principle. I will, however, take Aadhaar’s present solution architecture for granted, lay to rest past violations, alleged and actual (in all fairness conceding that some have now been addressed), and contend only with persisting privacy challenges. The second objection is unimaginative and deployed to obfuscate graver threats. Google lacks the might of the state or a sanctions regime to lend heft to its diktat. Even setting this Hobbesian conception aside for a moment, when Google goes bad, we turn to the state; when the state goes bad, our options are limited, if we have any at all. Having addressed these objections, let us now superimpose Aadhaar’s solution architecture over privacy’s grand design and examine if it passes muster. I will focus on three serious issues. The first two are not factual realities at the moment, but their very real possibility of occurrence is reason enough for the Supreme Court to compel important modifications to Aadhaar’s solution architecture. The third is true as of now, and deserves immediate rectification.
1. The digital panopticon: Our digital selves, both converging with and diverging from our real selves, offer deep insights into who we are, our personal preferences and choices and, most importantly, how we think. The ‘panopticon’, conceived by Bentham as a physical structure and made famous by Foucault, has much to offer this virtual world and its design. The Aadhaar technology and architecture document (2014) was alive to the dangers of an all-seeing, all-knowing big brother and accordingly advocated a ‘minimalistic approach to data’ and a ‘federated model’ with one-way linkage. In simple terms, existing identities such as passport and PAN numbers would not be captured within Aadhaar. Instead, these systems would add the Aadhaar number to their databases and link it ‘one way’ to Aadhaar.
On the assurance this design provides, Aadhaar numbers have been ‘seeded’ into multiple databases today. The legal framework (Aadhaar Act, 2016) also assures that the UIDAI shall protect authentication information, with some national security exceptions. However, this is rendered meaningless because the higher the seeding, the more the ‘requesting entity’ databases in which authentication records exist, and the less the need for the state to approach the UIDAI at all. In fact, the Aadhaar (Authentication) Regulations mandate that in case of any investigation involving authentication-related frauds or disputes—a far cry from national security—the requesting entities shall provide access to their records to ‘any authorised investigation agency’. What this does, in practice, is make the state’s job a lot easier when it wants to track individual behaviour. All it needs is the one identity tag present in all these databases: the Aadhaar number. Unidirectional linkage then does nothing to prevent the state from forming an accurate picture of the individual through a mosaic of digital authentication crumbs drawn from multiple sources. And a state that reveals its awareness of where one goes to buy rations, attend to medical needs and school one’s children, all at one go, is far more fearsome than one bumbling with assorted files.
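To make the mosaic concern concrete, here is a minimal illustrative sketch (the databases, field names and records are hypothetical, not Aadhaar's actual schema or code) of how logs held by entirely separate requesting entities can be joined on nothing more than a shared identifier:

```python
from collections import defaultdict

# Hypothetical authentication logs held by three separate requesting entities.
# Each entity stores only its own events, linked "one way" to the Aadhaar number.
ration_shop = [{"aadhaar": "XXXX-1234", "event": "rations purchased", "place": "Fair Price Shop 14"}]
hospital = [{"aadhaar": "XXXX-1234", "event": "OPD authentication", "place": "District Hospital"}]
school = [{"aadhaar": "XXXX-1234", "event": "admission verified", "place": "Govt School No. 3"}]

def build_profile(*databases):
    """Join records from otherwise unconnected databases on the shared identifier."""
    profile = defaultdict(list)
    for db in databases:
        for record in db:
            profile[record["aadhaar"]].append((record["event"], record["place"]))
    return profile

# Anyone with access to all three logs can reconstruct an individual's trail
# without ever querying the central repository.
print(build_profile(ration_shop, hospital, school)["XXXX-1234"])
```

Run on these toy records, the join returns the person's ration, hospital and school trail in one shot, precisely the composite picture that one-way linkage was meant to prevent.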
To compound this fear, upon declaration of a public emergency, or on the subjective opinion that UIDAI has been unable to discharge its functions, the Central Government can simply supersede the authority (Section 48 of the Act).
2. Leaky data dam: Right from its Strategy Overview (2010), the Aadhaar project has revealed a preference for building revenue models around our private data by providing authentication services to private entities. While such optimisation is generally good, it is risky business when biometric data—which, unlike passwords, one cannot change—is involved. The number of Authentication User Agencies (AUAs) and their sub-agencies spread across the country that digitally communicate with the UIDAI’s Central Identities Data Repository (the Aadhaar data dam) is a source of much worry, as is their hectic scaling up of such services.
The state’s response has been threefold: consent architecture, suitable contractual arrangements with AUAs, and strong security measures. The first and second are irrelevant because systemic risks are seldom factored in by individual consent-givers or contractors. The AUA Standard Template Agreement (Clause 7.2) waves a monetary penalty stick at AUAs, but waives any responsibility on the part of UIDAI itself (Clause 5.1 read with Recital E), thus leaving the policing of the system, in effect, to private entities. The third is not convincing enough when the communicating endpoints are too many, with no real background checks on sub-agencies or control over the endpoint gadgets floating around. As a rule of thumb, the larger the number of endpoints, the higher the risk of malware being injected into the system. When state benefits and subsidies are closely tied to data stored in that system, the gains to startup innovators from Aadhaar authentication are not comfort enough.
3. The regulatory custodian: The Aadhaar Act envisages UIDAI as both the custodian of Aadhaar data and the effective response mechanism for data breaches. Indeed, it is UIDAI, not the citizen, that gets notified of data breaches and then decides how best to proceed. No grievance redressal mechanism exists for individuals affected by such a breach. The individual is virtually a nonentity in the statutory scheme, as there exists no mechanism even for sharing information on data breaches with the aggrieved. Given UIDAI’s mandate as custodian, its incentives are hardly aligned with admitting security vulnerabilities in public. This makes the ‘consent architecture’ redundant, because meaningful consent can never be built upon imperfect information, particularly when such information can realistically be known only to the Authority and errant AUAs, and not to the end-user.
Revisiting the ‘fait accompli’ objection, one is pushed to ask whether the Supreme Court can achieve anything meaningful using the ‘privacy’ hook when so much water has flowed under the Aadhaar bridge. The answer, refreshingly, is that the Court can still mandate important design modifications that make Aadhaar fit within the constitutional scheme. First, the Court must consider reading down Section 7 of the Aadhaar Act, which mandates Aadhaar for any ‘service’ for which funds are drawn from the Consolidated Fund of India. The mandate must extend only to specific subsidies and welfare programmes that rely on programmatic identification cards, not to services generally made available to taxpayers. As a corollary, the number must be directed to be delinked from other records such as PAN cards. Second, the Court must provide an opt-out mechanism for all individuals who do not wish to avail of such subsidies. Third, the Court must direct that an independent regulator, with constitutional status and no supersession risks, be constituted immediately, along with strong grievance redressal measures and breach notification mechanisms that empower and inform the public. Fourth, the Court must immediately freeze all authentication services offered to private entities. Fifth, the Court must insist on a detailed framework to regulate the examination of authentication records at the service provider’s end when the Aadhaar number is seeded into specific welfare programmes.
The Silicon Valley mantras of ‘too big to fail’ and ‘lean startups’ do not always work—especially not for a national identification project, where nothing can be left to scale, chance or improvisation. Privacy’s grand design must kick in to force changes to Aadhaar’s solution architecture. Its ability to do so has implications well beyond our biometric data, shaping the role and (un)acceptability of state presence in our private affairs.