Kamayani aka kractivist
In what The New York Times labeled the biggest security breach in the history of the atomic complex, three anti-nuclear protesters broke into the Y-12 National Security Complex on July 28, 2012 and defaced a uranium processing plant.
The Y-12 facility has been in operation since 1943 as part of the Manhattan Project, and today is responsible for both the production and maintenance of all uranium parts for the entire US nuclear weapons arsenal. Over the years, the facility has also been the target of nonviolent anti-nuclear protests.
Now, a jury in Tennessee has convicted the three protesters of sabotaging the plant, along with a second charge of damaging federal property.
Defense attorneys for the three activists – Sister Megan Rice, 57-year-old Greg Boertje-Obed and Michael Walli, 64 – maintained that the prosecution had overreached.
“The shortcomings in security at one of the most dangerous places on the planet have embarrassed a lot of people,” defense lawyer Francis Lloyd said.
“You’re looking at three scapegoats behind me,” he added.
Defense attorneys also noted that, once the three refused to plead guilty to trespassing, which carries a maximum sentence of 10 years’ imprisonment, the prosecution introduced the charge of sabotage, which carries a maximum prison term of 20 years. They argued the higher charge should have been dismissed.
According to the Associated Press, which provided details of the court proceedings, the three activists expressed no remorse for their actions and were pleased to have reached one of the most secure areas of the facility.
Prosecutor Jeff Theodore noted that the trio’s fate could have been far worse, as that area of the facility allowed guards to use deadly force.
“They’re lucky, and thank goodness they’re alive, because they went into the lethal zone,” said Theodore.
The three defendants spent two hours inside Y-12, during which time they hung banners, cut through security fences, strung crime-scene tape and sprayed “baby bottles full of human blood” on the exterior portion of the facility.
Boertje-Obed, who is a house painter from Duluth, Minnesota, explained why they sprayed the blood.
“The reason for the baby bottles was to represent that the blood of children is spilled by these weapons,” he said.
While inside the most secure portion of the facility, the three activists managed to hammer off what is described as a “small chunk” of the Highly Enriched Uranium Materials Facility.
During cross-examination, Sister Rice stated that she wished she had not waited so long to stage a protest within the plant.
“My regret was I waited 70 years,” she said. “It is manufacturing which can only cause death.”
Prosecutors argued that the breach of security was serious, and caused the plant to shut down for two weeks as security staff were re-trained and defense contractors replaced.
Meanwhile, federal officials maintain that there was never any danger of the three activists reaching materials that could be detonated or used to construct an improvised bomb.
A Supreme Court bench said that withdrawing red beacon lights from the vehicles of ‘so-called’ VIPs would instill confidence among the people. Stressing that beacon lights should not be allowed to be flaunted as a status symbol, it said there was no hindrance to withdrawing the privilege straight away and sending a signal that everybody is equal.
While hearing a special leave petition (SLP) questioning the continuation of Z-category security to an MLA from Uttar Pradesh without a review of threat perception, the SC had earlier decided to enlarge the scope of the matter in the public interest and put under its scanner the criteria for permitting beacon lights.
The court had directed all states to file affidavits giving details on the proportion of policemen involved in providing security to VIPs, the criteria for providing security and the amount of money spent on providing security to VIPs, among others.
You should enable two-factor authentication everywhere you can. Facebook previously added and then removed mobile phone numbers as two-factor authentication ‘Login Approvals’ because “Facebook’s reverse lookup feature can be abused to search for thousands of sequential phone numbers in order to find any Facebook profiles associated with them.” Facebook has another two-factor authentication mechanism known as Social Authentication (SA), which adds another layer of security to help fight stolen account passwords. When a suspicious login is detected, Facebook shows you a series of up to seven photos of your friends and asks you to identify them in order to verify your account. At the Annual Computer Security Applications Conference (ACSAC 2012), on Friday, Dec 7, researchers will present “All Your Face Are Belong to Us: Breaking Facebook’s Social Authentication,” by Jason Polakis, Marco Lancini, Georgios Kontaxis, Federico Maggi, Sotiris Ioannidis, Angelos D. Keromytis, and Stefano Zanero.
Because Facebook presents authentication challenges based on user-related social information, it is supposed to stop an attacker who would not know which name goes with which Facebook friend’s photo. Some people have complained that photos can be tagged with a friend’s name even when they do not actually show a person. Previous research has shown that we can maintain stable social relationships with a maximum of about 150 “friends,” so another issue is that some people have friended so many others that they don’t really “know” their friends well enough to identify them in photos. Still, the researchers noted that Facebook’s SA, “to the best of our knowledge, is the first instance of an authentication scheme based on the ‘who you know’ rationale.” The researchers then set out, successfully, to create an automated system capable of breaking Facebook’s Social Authentication.
All Your Face Are Belong to Us: Breaking Facebook’s Social Authentication [PDF]:
We implement a proof-of-concept system that utilizes widely available face recognition software and cloud services, and evaluate it using real public data collected from Facebook. Under the assumptions of Facebook’s threat model, our results show that an attacker can obtain access to (sensitive) information for at least 42% of a user’s friends that Facebook uses to generate social authentication challenges. By relying solely on publicly accessible information, a casual attacker can solve 22% of the social authentication tests in an automated fashion, and gain a significant advantage for an additional 56% of the tests, as opposed to just guessing. Additionally, we simulate the scenario of a determined attacker placing himself inside the victim’s social circle by employing dummy accounts. In this case, the accuracy of our attack greatly increases and reaches 100% when 120 faces per friend are accessible by the attacker, even though it is very accurate with as little as 10 faces.
The security researchers said in their paper, “71% of Facebook users expose at least one publicly-accessible photo album.” An attacker “needs access to the victim’s friends list” so he or she can “see the photos and try to befriend the victim’s friends, further widening the attack surface.” The victim “must have at least 50 friends and the ‘user’s friends must be tagged’.” Next, an attacker can extract the tags of people’s faces and keep the photos. They explained the image below as an “Overview of our automated SA-breaking system. It operates in four steps:”
In Step 1, we retrieve the victim’s friend list using his or her UID. Then, in Step 2 (optional), we send befriend requests so that we have more photos from which to extract faces and build face classifiers in Step 3. In Step 4, given a photo, we query the models to retrieve the corresponding UID and thus match a name to a face.
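The four steps above can be sketched in miniature. The following is an illustrative Python simulation, not the researchers’ actual system: the real 128-dimensional face encodings produced by face-recognition software are replaced here with tiny hand-made vectors, and the per-friend classifier of Step 3 is reduced to a simple nearest-centroid model.

```python
import math

def centroid(vectors):
    """Step 3: average a friend's face encodings into one model vector."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def build_models(tagged_faces):
    """Steps 1-3: from {friend_uid: [encodings extracted from tagged
    photos]} build one classifier (here, a centroid) per friend."""
    return {uid: centroid(faces) for uid, faces in tagged_faces.items()}

def identify(models, challenge_face):
    """Step 4: match the face in a challenge photo to the closest
    friend model, recovering the UID (and thus the name)."""
    return min(models, key=lambda uid: math.dist(models[uid], challenge_face))

# Stand-in "encodings" an attacker could extract from a victim's
# publicly accessible, tagged photo albums (hypothetical data).
tagged_faces = {
    "uid_alice": [[0.1, 0.9], [0.2, 0.8]],
    "uid_bob":   [[0.9, 0.1], [0.8, 0.2]],
}
models = build_models(tagged_faces)
print(identify(models, [0.15, 0.85]))  # nearest to Alice's centroid
```

With real face-recognition encodings in place of these toy vectors, this nearest-match step is what lets the attacker answer a photo challenge automatically instead of guessing.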
When asked about the “All Your Face Are Belong to Us” research, Stefano Zanero, from the Department of Electronics and Information, Polytechnic of Milan, Italy, told me: “Social authentication is an interesting proposal to make it easier for users to log into a website without having to remember complex passwords. Our research aims to show what are the pitfalls in designing such a system, and what level of security can be achieved with it.“
Is Facebook aware of the SA vulnerability, and, if so, then what did the company reply?
Zanero: First of all – as you may have glimpsed – our work is somehow broader than “Facebook’s SA is broken.”
Our research aims to point out the potential flaws in the concept of “social authentication”, using Facebook’s specific one (the most widely deployed and thus the most interesting example!) as a case study.
We made Facebook aware of the work recently. We contacted our POC at Facebook, and this is the answer that Alex Rice of Facebook allowed us to forward to you:
Thanks so much for reaching out to us and recognizing that keeping the internet safe is a collaborative effort, and that people, like yourself, around the world can make valuable contributions. We encourage security researchers who identify security problems to embrace the practice of notifying security teams of problems and giving us the opportunity to address the vulnerability. In this case, your research has provided deeper insight into characteristics of our authentication system that we have been aware of during its evolving development.
We employ multiple layers of security and Social Authentication is only one of several potential responses that our authentication systems may trigger when suspicious activity is detected. While Social Authentication is not designed to stop small-scale or targeted attacks, it has proven incredibly effective at stopping large-scale phishing attacks. It is also important to keep in mind that users are only enrolled in Social Authentication after they have provided the correct password to the account.
For those who want to take additional steps to secure their account, we have provided true two-factor authentication with our Login Approvals product.
We remain confident in the ability of Social Authentication to combat the current threat presented by large-scale phishing attacks. As we move forward, we will continue to improve these systems to become more sophisticated and configure our protections to be more robust against any emerging threat that seeks to compromise user accounts.
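One-time login codes of the kind used by “true two-factor” products such as Login Approvals are typically generated with the time-based one-time password (TOTP) algorithm of RFC 6238, built on HOTP (RFC 4226). A minimal sketch, making no claim about Facebook’s actual implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed by the current 30-second time window."""
    now = time.time() if for_time is None else for_time
    return hotp(secret, int(now // step), digits)

# RFC 6238 test vector: secret "12345678901234567890" at time 59
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the code depends on a shared secret plus the current time window, a stolen password alone is not enough to pass the check, which is the property that distinguishes this from knowledge-based schemes like Social Authentication.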
When I commented that their research was interesting, but also a bit scary, Zanero said, “We would not see the result as scary, but definitely we do agree it’s interesting. The point is to explore how robust this form of authentication can be made, without making it too difficult for users. It’s a bit like the CAPTCHA you find on forms and websites: designed to tell humans from computers, they are becoming so annoying that humans are getting turned aside as a result.”
Pic courtesy – The Hindu
I. With the initiation of national programmes like the Unique Identification Number (UID), NATGRID, CCTNS, RSBY, DNA profiling, Reproductive Rights of Women, Privileged Communications and brain mapping, most of which will be implemented through ICT platforms, and with increased collection of citizen information by the government, concerns have emerged about their impact on the privacy of persons. Information is, for instance, beginning to be collected on a regular basis through statutory requirements and through e-governance projects. This information ranges over data related to health, travel, taxes, religion, education, financial status, employment, disability, living situation, welfare status, citizenship status, marriage status, crime record, etc. At the moment there is no overarching policy governing the collection of information by the government. This has led to ambiguity over who is allowed to collect data, what data can be collected, what the rights of the individual are, and how the right to privacy will be protected. The extent of personal information held by various service providers, and especially the enhanced potential for convergence that digitization carries with it, is a matter that raises serious issues.
II. Global data flows today are no longer the result of a file transfer initiated by an individual’s action for point-to-point transfer, as they were over 30 years ago. As soon as a transaction is initiated on the Internet, multiple data flows take place simultaneously, via phenomena such as Web 2.0, online social networking, search engines, and cloud computing. This has led to the ubiquity of data transfers over the Internet and to the enhanced economic importance of data processing, with the direct involvement of individuals in trans-border data flows. While this exposes individuals to more privacy risks, it also challenges businesses, which collect data either entered directly by users or generated by their actions without their knowledge – e.g. web surfing, e-banking or e-commerce – and correlate it through more advanced analytic tools to generate economic value out of data. The latter are accountable for data collection and its use, since data has become one of the drivers of the knowledge-based society and is becoming even more critical to business than capital and labor. The private sector, on the other hand, uses personal data to create new demands and build relationships for generating revenue from its services. Individuals are putting their data on the web in return for useful services at almost no cost. But in this changed paradigm, the private sector and civil society have to build legal regimes and practices which are transparent, which inspire trust among individuals, and which enhance their ability to control access to their data, even as economic value is generated out of such data collection and processing for all players. In order to understand these concerns and identify interventions for effectively addressing these issues, a brainstorming session on privacy-related issues was held in the Planning Commission under the chairmanship of Justice A P Shah, former Chief Justice of the Delhi High Court. The meeting was presided over by Dr. Ashwani Kumar, MoS (Planning, S&T and MoES) and attended by representatives from industry, civil society NGOs, voluntary organizations and government departments.
III. During the meeting it was decided to constitute a small Group of Experts to identify key privacy issues and prepare a paper to facilitate the authoring of the Privacy Bill, keeping in view the international landscape of privacy laws, global data flows, and the privacy concerns accompanying rapid technological advancement. Accordingly, a Group of Experts was constituted under the chairpersonship of Justice A P Shah. The constitution and the terms of reference of the Group are at Annex 1. The Group held several meetings to understand global privacy developments and challenges and to discuss privacy concerns relevant to India. The Group was divided into two sub-groups: one for reviewing privacy regimes around the world with a view to understanding prevalent best practices relating to privacy regulation, and the other for reviewing existing legislation and bills to identify prevalent privacy concerns in India. However, the committee did not “make an in-depth analysis of various programs being implemented by GOI from the point of view of their impact on privacy.” This report, which is the result of the work of both sub-groups, proposes a detailed framework that serves as the conceptual foundation for a Privacy Act for India.
IV. This report proposes five salient features of such a framework:
1. Technological Neutrality and Interoperability with International Standards:
The Group agreed that any proposed framework for privacy legislation must be technologically neutral and interoperable with international standards. Specifically, the Privacy Act should not make any reference to specific technologies and must be generic enough that its principles and enforcement mechanisms remain adaptable to changes in society, the marketplace, technology, and government. To do this, it is important to closely harmonise the right to privacy with multiple international regimes, to create trust and facilitate co-operation between national and international stakeholders, and to provide equal and adequate levels of protection to data processed inside India as well as outside it. In doing so, the framework should recognise that data has economic value, and that global data flows generate value for the individual as data creator and for the businesses that collect and process such data. Thus, one focus of the framework should be on inspiring the trust of global clients and their end users, without compromising the interests of domestic customers in enhancing their privacy protection.
2. Multi-Dimensional Privacy:
This report recognises the right to privacy in its multiple dimensions. A framework on the right to privacy in India must address privacy-related concerns around data protection on the internet and the challenges emerging therefrom; appropriate protection from unauthorised interception and from audio and video surveillance; the use of personal identifiers; and bodily privacy, including DNA, as well as physical privacy. These are crucial in establishing a national ethos for privacy protection, though the specific forms such protection will take must remain flexible to address new and emerging concerns.
3. Horizontal Applicability:
The Group agreed that any proposed privacy legislation must apply both to the government and to the private sector. Given that the international trend is towards a set of unified norms governing both the private and public sectors, and that both sectors process large amounts of data in India, it is imperative to bring both within the purview of the proposed legislation.
4. Conformity with Privacy Principles:
This report recommends nine fundamental Privacy Principles to form the bedrock of the proposed Privacy Act in India. These principles, drawn from international best practices and adapted suitably to the Indian context, are intended to provide a baseline level of privacy protection to all individual data subjects. The fundamental philosophy underlying the principles is the need to hold the data controller accountable for the collection, processing and use to which data is put, thereby ensuring that the privacy of the data subject is protected.
5. Co-Regulatory Enforcement Regime:
This report recommends the establishment of the office of the Privacy Commissioner, at both the central and regional levels. The Privacy Commissioners shall be the primary authority for enforcement of the provisions of the Act. However, rather than prescribe a purely top-down approach to enforcement, this report recommends a system of co-regulation, with equal emphasis on Self-Regulating Organisations (SROs) being vested with the responsibility of autonomously ensuring compliance with the Act, subject to regular oversight by the Privacy Commissioners. The SROs, apart from possessing industry-specific knowledge, will also be better placed to create awareness about the right to privacy and to explain the sensitivities of privacy protection both within industry and to the public in their respective sectors. This recommendation of a co-regulatory regime will not derogate from the powers of the courts, which will remain available as a forum of last resort in cases of persistent and unresolved violations of the Privacy Act.
Privacy Matters: Analyzing the “Right to Privacy Bill”
Privacy India, in partnership with the Centre for Internet & Society, the International Development Research Centre, the Indian Institute of Technology Bombay, the Godrej Culture Lab and the Tata Institute of Social Sciences, is organizing “Privacy Matters,” a public conference focused on discussing the challenges and concerns around privacy in India.
Event dates and times
Saturday, January 21st, 2012, from 10:00 am – 6:00pm
Open & Free to the public.
16th Jan 2012
The government is likely to sort out differences between the home ministry and Planning Commission over data collection for UID cards this week.
The Nandan Nilekani-led UID project has been touted as the world’s largest, most advanced, biometric database of personal identities. And many believe, according to reports, that the UID is meant to be more secure than the US’ Social Security Number (SSN).
In the absence of a coherent privacy law, Indian CISOs aren’t buying that. “Even SSNs have been misused by criminals for years. The flaw of any personal identification project is that when you input data into a database, there must be an assured mechanism in place. Fingerprints have inherent inaccuracies as a proof of identification and retina scans make data storage requirements much higher,” says security and privacy expert Deepak Rout. “If you don’t provide enough security, then chaos is inevitable.”
Though reports suggest that Nilekani has said use of UID cards will be voluntary, the possibility of their becoming mandatory cannot be ruled out. Once all transactions are linked to a single number, that number may be used by various state agencies to monitor citizens’ activities, which may interfere with an individual’s right to privacy. “Even if owning an Aadhaar card is made compulsory, I’ll stay out of it as long as I can,” says Rout.
Pawan Kumar Singh, CISO at Tulip Telecom agrees. “I am still insecure with the idea of entrusting my data to the government. Would I go for a UID card? No, thanks. The government may lay down stringent rules but where is the enforcement mechanism? UIDAI’s security policy will remain like our constitution–on paper–if citizen awareness is not brought up.” Singh believes that India isn’t ready to consolidate its entire citizens’ personal data on a single card.
Both Singh and Rout have reason to worry. In October last year, the UID project saw its first privacy breach: a citizen from Maharashtra lodged a complaint stating that his address proof had been compromised. The incident raised concerns about the vulnerability of the personal data being collected by UIDAI. And that’s just one of many instances of security breaches.
Even those close to the UID project are raising questions on the loopholes that may exist in the project. Sanjay Deshpande, CEO and CIO at Uniken Technologies–a security firm that was involved in initial talks with the UID project team–says that UID could be vulnerable to insider attacks. “How are they (the government) going to ensure that the systems aren’t vulnerable to insider threat? How trustworthy are the people handling a citizen’s personal identity? Also, are the biometric devices used by the government foolproof? You might have heard of losing your e-mail ids and passwords at an Internet café owing to malicious software in public computers. How is the government ensuring that the data capture device by itself is not malicious?” asks Deshpande.
Application-level security is another major concern. “My problem as an Indian citizen – once the UID project starts collecting biometric data everywhere – is how we would prove our disassociation from a wrong UID and a crime we have not committed,” says Deshpande.
While the cabinet decides the fate of the government’s ambitious UID project, it seems Indian CISOs have already written its destiny. The question now remains: do you trust the government with your data?
For any queries, you can contact the author at: firstname.lastname@example.org