ACLU is suing the US gov for blocking airport facial recognition probe


The American Civil Liberties Union (ACLU) is suing the US government
for blocking a probe into the use of facial recognition in airports.

Homeland Security announced in December that it was ditching plans to
scan the faces of every person arriving at airports, limiting the scans
to travellers who are not US citizens or permanent residents.

Homeland Security has given very little information about what it is doing with the facial recognition data it collects.

Human rights groups have already made several legal attempts to get
Homeland Security to disclose information, but each has been
stonewalled.

In a lawsuit filed in a federal district court in New York City on
Thursday, the ACLU says it submitted freedom-of-information requests to
Homeland Security, Customs and Border Protection (CBP), Immigration and
Customs Enforcement (ICE), and the Transportation Security
Administration (TSA).

The agencies had 20 days to respond to the requests under US
freedom-of-information laws (with an additional 10 days available under
certain circumstances), but none of them did so. Furthermore, none of
the agencies explained why they did not respond.

“That’s why today we and the New York Civil Liberties Union filed a
lawsuit asking a federal court to order the Department of Homeland
Security, CBP, TSA, and ICE to turn over records about the
implementation of face surveillance at airports, and their plans to
subject travelers to this technology in the future,” the ACLU said in a
statement.

“Our lawsuit seeks to make public the government’s contracts with
airlines, airports, and other entities pertaining to the use of face
recognition at the airport and the border; policies and procedures
concerning the acquisition, processing, and retention of our biometric
information; and analyses of the effectiveness of facial recognition
technology.”

Last time we covered
the ACLU, the civil rights group was highlighting the inaccuracy of
Amazon’s facial recognition algorithm, especially when identifying
people of colour and women.

In the UK, the Equality and Human Rights Commission (EHRC) called this week for the public use of facial recognition to be halted, after trials so far have been nothing short of a complete failure.

An initial trial by the Met Police, at the 2016 Notting Hill
Carnival, failed to identify a single person. A follow-up trial the
following year produced no legitimate matches but 35 false positives.

An independent report into the Met Police’s facial recognition
trials, conducted by Professor Peter Fussey and Dr Daragh Murray last
year, concluded that it was verifiably accurate in just 19 percent
of cases.

Serious concerns remain about the use of facial recognition
technology and its impact on our civil liberties. Those concerns are in
no way eased when the current technology has been proven to be
dangerously inaccurate time and time again.
