Stalking-horse report on facial recognition in Indian airports draws arrows

A report out of India about responsible AI has quickly drawn criticism for being incomplete. The 74-page report in question, Responsible AI for All by the Indian government think tank NITI Aayog, is framed as a starting point for discussion among all those with a stake in AI's development. On that count, the paper, published last month, has succeeded.

It leans hard on Digi Yatra, India's growing airport digitalization project. Digi Yatra, which began deployments in three airports this month, is meant to deliver paperless travel and contactless check-in and boarding in a way that is both routine and responsible.

NITI Aayog, literally the National Institution for Transforming India policy commission, does well not to put too heavy a thumb on the scale. The authors clearly see much to like in facial recognition in and out of transportation. They briefly note design-based risks and the rights challenges the government and industry face in using face biometric systems. But for some, they do not go far enough in examining those risks.

Executives at the Internet Freedom Foundation, an indigenous digital liberties advocacy group, posted what they feel are soft spots in the report's policy and implementation considerations. (They also had problems with Digi Yatra itself that are not discussed here.)

The foundation found that NITI Aayog failed to assess the harms that police could create by using facial recognition software. Law enforcement roles are "the most harmful" uses of the technology, the foundation argues, and face biometrics will be integrated into the 30 or so facial recognition projects that police are already running. Very much related, the discussion report offers no suggestions for preventing function creep, regardless of who is wielding the algorithms.

The report also misses explainability, which increasingly is seen as a bedrock attribute of trust in AI. The foundation notes that everyone from national government leaders to police officers on the street needs to know how deep learning works in order to best interpret face-match results.

Emotion recognition also was given short shrift, foundation executives felt. There is little consensus on how, and how well, it works, but it is not sensible to argue that this role will not soon be folded into government facial recognition systems.

Another rights advocate in India, the Software Freedom Law Center, also chimed in quickly on NITI Aayog's invitation to comment. Executives in this group came down hard on what they felt was the report's neglect of explicit informed consent.
