Biometrics programs for the developing world could put data in the wrong hands #Aadhaar #UID


Privacy for the Other 5 Billion

Western-backed biometrics programs for the developing world could put data in the wrong hands.

By  and 

Posted Friday, May 17, 2013, at 11:51 AM

An Indian villager looks at an iris scanner for a pilot project of the Unique Identification Authority of India, or UIDAI, in the village of Chellur, northwest of Bangalore, on April 22, 2010. Photo by Dibyangshu Sarkar/AFP/Getty Images

Move over, mobile phones. There’s a new technological fix for poverty: biometric identification. Speaking at the World Bank on April 24, Nandan Nilekani, director of India’s universal identification scheme, promised that the project will be “transformational.” It “uses the most sophisticated technology … to solve the most basic of development challenges.” The massively ambitious project, known as Aadhaar, aims to capture fingerprints, photographs, and iris scans of 1.2 billion residents, with the assumption that a national identification program will be a key ingredient to “empower poor and underprivileged residents.” The World Bank’s president, Jim Yong Kim, effusively summed up the promise as “just stunning.”

Although few can match Nilekani’s grand scale, Aadhaar is but one example of the development sector’s growing fascination with technologies for registering, identifying, and monitoring citizens. Systems that would be controversial—if not outright rejected—in the West because of the threat they pose to civil liberties are being implemented in many developing countries, often with the support of Western donors. The twin goals of development and security are being used to justify a bewildering array of initiatives, including British-funded biometric voting technology in Sierra Leone, U.N. surveillance drones in the Democratic Republic of the Congo, and biometric border controls in Ghana supported by the World Bank.

This vigorous adoption of technologies for collecting, processing, tracking, profiling, and managing personal data—in short, surveillance technologies—risks centralizing an increasing amount of power in the hands of government authorities, often in places where democratic safeguards and civil society watchdogs are limited. While these initiatives may be justified in certain cases, rarely are they subject to a rigorous assessment of their effects on civil liberties or political dissent. On the contrary, they often seek to exploit the lack of scrutiny: Nilekani recommended in another recent speech that biometric proponents work “quickly and quietly” before opposition can form. The sensitivity of the information gathered in aid programs is not lost on intelligence agencies: Pulitzer Prize-winning journalist Mark Mazzetti recently revealed that the Pentagon funded a food aid program in Somalia for the express purpose of gathering details on the local population. Even legitimate aid programs now maintain massive databases of personal information, from household names and locations to biometric information.


Humanitarian organizations, development funders, and governments have a responsibility to critically assess these new forms of surveillance, consult widely, and implement safeguards such as data protection, judicial oversight, and the highest levels of security. In much of the world, these sorts of precautions are sorely lacking: For example, despite the success of information technology in Africa, only 10 countries on the continent have some form of data protection law on the books (and even those rarely have the capacity or will to enforce them).

Kenya is a good example of how these programs can go wrong. In the country’s recent election, a costly biometric voting scheme flopped, adding widespread uncertainty to an already fragile situation. The problems were manifold, from biometric scanners that couldn’t recognize thumbprints to batteries that failed and servers that crashed. As journalist Michela Wrong put it, “almost none of it worked.” With limited resources, why support expensive and often ineffective technologies like biometric voting when traditional systems often suffice? While biometrics could help clean up electoral rolls, they may very well serve to obfuscate the electoral process, as information is passed through proprietary applications and technologies, closed to public scrutiny and audit.

But the worries in Kenya extend beyond technological failure. Like many low-income countries, Kenya has historically lacked a robust program of birth registration, a gap that makes public health work notoriously difficult and stymies the provision of education services and cash transfers to vulnerable populations. To rectify this, the Kenyan state has sought to enroll all adults in a biometric national identification scheme that aims to interoperate with various other databases, including the tax authority, financial institutions, and social security programs. According to the director of this Integrated Population Registration System, George Anyango, the government now has “the 360 degree view of any citizen above the age of 18 years.” The Orwellian language is particularly worrisome given Kenya’s lack of data protection requirements and its history of political factionalism, including the ethnic violence in the aftermath of the 2007 election that resulted in the deaths of more than 1,000 Kenyans.

The Aadhaar project in India—a country with a history of ethnic unrest and social segregation, widespread political and bureaucratic corruption, and with no effective legislative protection of privacy—should raise similar, magnified fears. Furthermore, it’s doubtful the program could help bring about the social equality it promises. Proponents of these state registration schemes argue that a lack of ID is a key reason why the poor remain marginalized, but they risk mistaking a symptom for the cause. The poor are marginalized not simply because they lack an ID, but because of a complex history of discriminatory political, economic, and social structures. In some cases a biometric identity scheme may help alter those structures, but only if coupled with broader, more difficult reforms.

One of Aadhaar’s biggest promises is the opportunity to open bank accounts (which require identification). Yet poor, marginalized Indians, even with an ID, find formal banks unfriendly and difficult to join. For example, the anthropologist Ursula Rao found that the homeless in India—even after registering for Aadhaar—were blocked from banking, most frequently for lack of proper addresses, but more fundamentally because, as she notes, biometric identification “cannot establish trust, teach the logic of banking, or provide incentives for investing in the formal economy.” Bank managers remain suspicious and exclusionary, even if an identity project is inclusive. Without broader reforms—including rules for who may or may not access identity details—novel identification infrastructures will become tools of age-old discrimination.

Another, more practical drawback is that biometric technology is particularly ill-suited for individuals who have spent years in manual labor, working in tough conditions where their fingerprints wear down or they may even lose full fingers or limbs. Even with small authentication error rates—say, the 1.7 percent that recent estimates from Aadhaar suggest—the number of failures in a population the size of India’s can be enormous. Aadhaar has already enrolled 240 million people, with plans to reach all residents. You do the math.
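Doing that math gives a sense of the scale. Below is a minimal back-of-the-envelope sketch in Python, assuming (as a simplification) that the cited 1.7 percent error rate applies uniformly to every enrollee on a given authentication attempt:

```python
# Back-of-the-envelope estimate of how many people a "small" biometric
# error rate affects at Indian scale. Assumes the 1.7 percent figure
# cited above applies uniformly to every enrollee; real-world rates
# vary by population and by type of biometric.
failure_rate = 0.017                  # ~1.7% authentication errors
enrolled_today = 240_000_000          # people already enrolled in Aadhaar
target_population = 1_200_000_000     # all Indian residents

print(f"Failed authentications among current enrollees: {enrolled_today * failure_rate:,.0f}")
print(f"Failed authentications at full enrollment:      {target_population * failure_rate:,.0f}")
# Roughly 4 million people today, and over 20 million at full enrollment,
# could be wrongly rejected on any given attempt.
```

Even under these simplifying assumptions, a seemingly small error rate translates into millions of rejected authentications.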

The growth of these systems is due in part to the lack of public education and consultation, as well as the paucity of technical expertise to advise on the risks and pitfalls of surveillance technologies. But certainly the international donors and humanitarian organizations that support these initiatives have a responsibility to critically assess and build in safeguards for these technologies. Given the enormity of the challenge facing these organizations, it is perhaps easy not to prioritize issues like privacy and security of personal data, but the same arguments were once made against gender considerations and environmental protections in development. Aid programs that involve databases of personal information—especially of those most vulnerable and marginalized—must adopt stringent policies and practices relating to the collection, use, and sharing of that data. Best practices should include privacy impact assessments and consider the scope for “privacy by design” methodologies.

As the rhetoric around Aadhaar makes clear, the promise of a quick technical solution to intractable social problems is alive and well. However, it is time to recognize that human development involves the protection of civil liberties and individual freedoms, and to stop blindly rushing into the creation of surveillance states in the name of development and poverty alleviation. Donors and aid organizations need to remember that the other 5 billion deserve privacy, too.

 

SOURCE- slate.com

What Twitter’s New Censorship Policy Means for Human Rights



Twitter dropped quite the shocker last week when it announced a new policy of removing tweets in certain countries to comply with those countries’ national laws. While a tweet will remain visible to the rest of the world, specific messages will disappear in the targeted country, typically following requests by its government.

The ensuing backlash saw a lot of people screaming “censorship” (ironically, on Twitter). While the first wave of criticism has quickly calmed down, for a human rights watchdog, the announcement is quite alarming:

As we continue to grow internationally, we will enter countries that have different ideas about the contours of freedom of expression… Until now, the only way we could take account of those countries’ limits was to remove content globally. Starting today, we give ourselves the ability to reactively withhold content from users in a specific country — while keeping it available in the rest of the world.

A new policy for old-school repression

Twitter claims that this isn’t a dramatic shift in policy, but rather a clarification of existing policy, with a “fix.” Previous removals of content were global: when Twitter removed a tweet, no one could see it anywhere. Now Twitter can block content country by country, tailored to each government’s demands. By this bizarre logic, tighter control of information in response to government demands means, according to Twitter, less ‘censorship.’

One may incredulously respond that country-specific removal further disadvantages people who saw Twitter as a means of circumventing illegal restrictions on their speech and expression, the very people who have turned to the service to empower themselves through voice, assembly, and access to information.

Though there has been an outpouring of anger in response, some are quite pleased. Today, Thailand became the first government to publicly endorse Twitter’s decision. China and Iran haven’t made any statements (China’s state-run newspaper did praise the move), but I suspect they’re pleased, as are several other governments that have sought to shut down Twitter at the first sign of dissent.

As an aside, I should note that, as with any attempt to control information (see my post on SOPA/PIPA), there are already easy ways — five at last count — to bypass Twitter’s blocks.

Outrage and tough choices

I’ve appreciated the outrage, given the importance (not to be confused with value) of Twitter. I have no doubt that information posted on Twitter — and any other large public networking platform — has resulted in all manner of things, from the terrible to the great.

We know that information spread via Twitter has saved countless lives, from natural disasters such as in Japan or in humanitarian crises, such as in Cote d’Ivoire. Twitter has contributed to regime change in repressive places. It has even helped free a prisoner in Kashmir and has become a valuable network for citizen journalists and concerned citizens, such as in Mexico. It is a medium by which human rights advocates carry forward their work, such as our Eyes on Syria project (look for #EyesonSyria — but maybe not if you are in Syria), or Amnesty’s own Twitter account.

But for all of these goods, information on Twitter has surely created harm. In crisis, it can become a dangerous medium for rumors or misinformation (or “terrorism” charges). Al-Shabab’s recent banning of the International Red Cross (a violation of international law of the highest order) was communicated via Twitter. Indeed, Kenya’s military has been fighting Al-Shabab on the ground, as well as in the twitterverse.

Importantly, information has no inherent value; it is the effect of the content that lends it moral weight.

Twitter has never had to make difficult decisions about that content, however. Twitter has never had to be responsible for controlling content in the manner its new policy will require of it. And Twitter will be called on by governments around the world to censor. The cat is out of the bag, and the decisions that will need to be made by Twitter lawyers and staff should give them sleepless nights. At some point — somewhere — harm will be done by those choices. Voices will be silenced. Lives will be lost. Twitter will inevitably make mistakes, and the world will be different as a result. It is a power it would have been wise to deny having.

The stark fact is that — like traditional media, housing, agriculture, or any of the other sectors upon which humanity’s ability to fully enjoy its human rights depends — profit motivates great innovations in the digital world. Profit also motivates consolidation and control.

The immense outrage over the policy says more about our collective confusion over digital networking tools than about Twitter’s policy itself. Twitter is seen as a public good. But it is not. Twitter is a (private) company, one that probably made over $100m in profit in 2011 — though its profit potential may be an order of magnitude higher. It is a company like any other, with motives. As with other companies, we — as consumers — have leverage.

But far from suggesting a boycott, let’s start with the basics.

#International Law

I appreciate Twitter’s appeal to the rule of law. Let me make my own.

We have an international body of law that protects the rights of people, and sets forth the obligations of governments, businesses, and the everyday person. Amnesty International and other human rights organizations spend an exceptional proportion of their resources monitoring compliance with the law, and calling out those who violate human rights law. Not just governments, but businesses as well, from Shell and Dow Chemical, to cell phone manufacturers, mortgage banks, and private security firms.

Allow me to offer a word of advice to Twitter: Laws often clash. In the U.S., there were laws on the books in the southern states that were ruled unconstitutional long before they were finally scrapped. And there are surely domestic laws in countries that will be cited by governments or security elements as a basis for denying speech via Twitter that will clash with international human rights law. They will be illegal domestic ‘laws’ in contravention of established international human rights laws. They will be unjust laws.

What will Twitter do?

At some point, Twitter will be pressured by governments to change its terms of service so that the workaround for accessing blocked tweets becomes a terms-of-use violation. Twitter does in fact know where you are tweeting from, and it can deny your ability to change your location to circumvent information blackouts.

At some point, user information and location will be demanded by a repressive regime with a cheap and, by international standards, meaningless veneer of a court order. They will demand it, and they will appeal to domestic ‘law’.

What is abundantly clear is that human rights monitors and advocates — for all the immense power Twitter and other digital networking tools have given them — have an entirely new domain to monitor. As with other sectors, business decisions in the digital world have human rights implications. For all its immense value, Twitter’s policy announcement only brings into focus what we’ve known for some time: human rights monitors and advocates have a lot more work to do since the digital revolution. Our collective vigilance is needed more than ever, however we choose to communicate.

We will be watching you, Twitter. Take it as a measure of your importance.

Scott Edwards is Director of International Advocacy for Africa and Director of the Science for Human Rights program at Amnesty International USA.
