🇳🇱The Dutch get rid of an algo used on the poor, China uses drones to tell people to put a mask on

🐙Exploring the unknown Arctic sea ice with robots

The Good

A Dutch court just ruled that an automated surveillance system for detecting welfare fraud in the Netherlands violates human rights and needs to be halted immediately. The Guardian explains how it worked and my internal alarm bells go off at every line.

Deployed primarily in low-income neighbourhoods, it gathers government data previously held in separate silos, such as employment, benefit and personal debt records, and education and housing histories, then analyses it using a secret algorithm to identify which individuals might be at higher risk of committing benefit fraud.

The court found that there were insufficient safeguards for privacy and that there was a “serious lack of transparency” on how the algo worked. The ruling also stated that the lack of info obscured whether the tech was discriminating against poor and immigrant communities.

In the wonderful book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks gets to what is so insidious about these algos watching our financial lives.

But that’s the thing about being targeted by an algorithm: you get a sense of a pattern in the digital noise, an electronic eye towards you, but you can’t put your finger on exactly what’s amiss. There is no requirement that you be notified when you are red-flagged. There is no sunshine law that compels companies to release the inner details of their digital fraud detection system. With the notable exception of credit reporting, we have remarkably limited access to the equations, algorithm, and models that shape our life chances.

And just in case it’s easy to scroll by this, as you, dear reader, will never be on welfare and most definitely will never commit welfare fraud, Eubanks also includes an exchange with someone on food stamps who explains her purchases are monitored with the help of an algo.

I must have looked shocked, because she explained that her caseworker routinely looked at her purchase records. Poor women are the test subjects for surveillance technology, Dorothy told me. Then she added, ‘You should pay attention to what happens to us. You’re next.’

The Bad

This story headline kills me: “Drones creatively used in rural areas in battle against coronavirus”

Certainly, limiting health risks by using remote technology is great. But this video is just icky on a lot of different levels. Firstly, it shows the importance of optics for this kind of tech. Picking an older person who looks flummoxed by the curt commands and then seems to run away just doesn’t look great.

I think there is also an underlying threat in this footage: it makes it easy to imagine that the next time we see a drone like this, it will have been equipped with some sort of remote arresting or subduing “feature.”

More News

A look into how robots are the new Arctic explorers, and all the human labor that makes it possible.

In more coronavirus news, New York’s Bryant Park got a robot that was supposed to detect the virus in people (by asking them questions), but it was shut down after only an hour because it lacked a permit (and probably for being creepy).

Surprise, Russia is rolling out real-time face rec tech on Moscow’s cameras.

A profile on the first school in New York to roll out face rec tech. It’s not being used on students (yet), just on a curated list of potential threats like known sex offenders.

I might be partial to this story because of one particular street I bike on near my apartment, but AI is spotting potholes and getting them filled faster.

Google tells Clearview AI to beat it: the photos on its platforms are for its own face rec algos only (Clearview’s CEO thinks differently). Oh, also, Clearview AI wants to sell the tech to authoritarian governments.

AI upscales old footage and makes it look very hi-res and fancy.


Until we have autonomous bikes that go around potholes,