
UK wants criminal migrants to scan their faces up to five times a day using a watch


In brief The UK’s Home Office and Ministry of Justice want migrants with criminal convictions to scan their faces up to five times a day using a smartwatch kitted out with facial-recognition software.

Plans for wrist-worn face-scanning devices were discussed in a data protection impact assessment report from the Home Office. Officials called for “daily monitoring of individuals subject to immigration control,” according to The Guardian this week, and suggested such migrants should be fitted with ankle tags or smartwatches at all times.

In May, the British government awarded a contract worth £6 million to Buddi Limited, maker of a wristband used to monitor older folks at risk of falling. Buddi appears to be tasked with developing a device capable of photographing migrants and sending the images to law enforcement for scanning.

Location data will also be beamed back, with up to five images sent every day, allowing officials to track known criminals’ whereabouts. Only foreign-national offenders who have been convicted of a criminal offense will be targeted, it is claimed, and the data will be shared with the Ministry of Justice and the Home Office.

“The Home Office is still not clear how long individuals will remain on monitoring,” commented Monish Bhatia, a lecturer in criminology at Birkbeck, University of London.

“They have not provided any evidence to show why electronic monitoring is necessary or demonstrated that tags make individuals comply with immigration rules better. What we need is humane, non-degrading, community-based solutions.”

Amazon’s multilingual Alexa Teacher Model

Amazon’s machine-learning scientists have shared some details of their work developing multilingual language models that can take themes and context learned in one language and apply that knowledge in another language without any extra training.

For this technology demonstration, they built a 20-billion-parameter transformer-based system, dubbed the Alexa Teacher Model or AlexaTM, and fed it terabytes of text scraped from the internet in Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu.

It’s hoped this research will help Amazon add capabilities to models like the ones powering its smart assistant Alexa, and have that functionality automatically supported in multiple languages, saving time and energy.
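To make the general idea concrete without Amazon’s model, below is a minimal sketch of cross-lingual transfer, assuming the Hugging Face transformers library and the community checkpoint joeddav/xlm-roberta-large-xnli (neither is part of the AlexaTM work): a classifier fine-tuned for natural-language inference is applied to German text with English labels, with no extra training for German.

from transformers import pipeline

# Zero-shot classification via a multilingual encoder fine-tuned for
# natural-language inference. This community checkpoint stands in for
# the cross-lingual transfer idea; it is not AlexaTM.
classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

# German input, English candidate labels: both are mapped into the same
# multilingual representation space, so knowledge gained in one language
# carries over to another.
result = classifier(
    "Der neue Sprachassistent versteht mich erstaunlich gut.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], round(result["scores"][0], 3))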

Talk to Meta’s AI chatbot

Meta has rolled out the latest version of its machine-learning-powered chatbot, BlenderBot 3, and put it on the internet for anyone to chat with.

Traditionally this kind of thing hasn’t ended well, as Microsoft’s Tay bot showed in 2016 when web trolls figured out how to make the software pick up and repeat offensive language, including Nazi sentiments.

People just like to screw around with bots to get them to say something controversial – or the software simply goes off the rails all by itself, even when used as intended. Meta is prepared for this and is using the experiment to try out ways of blocking offensive material.

“Developing continual learning techniques also poses extra challenges, as not all people who use chatbots are well-intentioned, and some may employ toxic or otherwise harmful language that we do not want BlenderBot 3 to mimic,” it said. “Our new research attempts to address these issues.”

Meta will collect information about your browser and your device through cookies if you try out the model; you can decide whether you want your conversations logged by the Facebook parent. Be warned, however, that Meta may publish what you type into the software in a public dataset.

“We collect technical information about your browser or device, including through the use of cookies, but we use that information only to provide the tool and for analytics purposes to see how individuals interact on our website,” it said in a FAQ. 

“If we publicly release a data set of contributed conversations, the publicly released dataset will not associate contributed conversations with the contributor’s name, login credentials, browser or device data, or any other personally identifiable information. Please be sure you are okay with how we’ll use the conversation as specified below before you consent to contributing to research.”
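BlenderBot 3 itself runs on Meta’s demo site, but for a feel of how such conversational models are driven programmatically, here is a minimal local chat loop, assuming the Hugging Face transformers library and the earlier, publicly released facebook/blenderbot-400M-distill checkpoint – a much smaller predecessor, not BlenderBot 3.

from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer

# Load a small, publicly released BlenderBot predecessor; this is not
# BlenderBot 3, whose demo is hosted by Meta.
model_name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

# Simple chat loop with no conversation history: each user line is
# encoded, a reply is generated, and the decoded text is printed.
while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    inputs = tokenizer([user_input], return_tensors="pt")
    reply_ids = model.generate(**inputs, max_new_tokens=60)
    print("Bot:", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])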

Reversing facial recognition bans

More US cities have passed bills allowing police to use facial-recognition software, reversing earlier ordinances that limited the technology.

CNN reported that local authorities in New Orleans, Louisiana, and in the state of Virginia are among those that have changed their minds about banning facial recognition. The software is risky in the hands of law enforcement, where the consequences of a mistaken identification are harmful. The technology can misidentify people of color, for instance.

Those concerns, however, don’t seem to have put officials off using such systems. Some have even voted to approve their use by local police departments despite previously opposing it.

Adam Schwartz, a senior staff attorney at the Electronic Frontier Foundation, told CNN “the pendulum has swung a bit more in the law-and-order direction.”

Scott Surovell, a state senator in Virginia, said law enforcement should be transparent about how it uses facial recognition, and that there should be limits in place to mitigate harm. Police may run the software to find new leads in cases, for example, he said, but should not be able to use the results to arrest someone without first conducting an investigation.

“I think it’s important for the public to have faith in how law enforcement is doing their job, that these technologies be regulated and there be a level of transparency about their use so people can assess for themselves whether it’s accurate and/or being abused,” he said. ®
