🗑️ AI joins the fight against ocean plastic and Google doesn't want naughty words anywhere near it
💃 Dance dance PR moves
The Good
At least eight million tons of plastic end up in our oceans every year, collecting in gyres and trash islands bigger than some states. There are plenty of projects trying to do something about it, including a new one using a drone and AI to count and characterize the plastic in the far reaches of the ocean (fun fact: it’s all technically one ocean, even though we usually break it into five).
Having a drone that can fly far from land or boats extends the range of research that can be done, and once the rules for autonomous drones loosen up, this sort of monitoring could do even more. I’m not sure this is better than the satellite options, but it’s probably a good idea to have multiple independent measurements.
The AI also needs work: the model these researchers trained is only about 80% accurate. That’s a good start, but it needs to get closer to 100% before it really frees up humans to spend their time on other parts of the ocean plastic problem.
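To make that 80% figure concrete, here’s a toy simulation (my made-up numbers, not the researchers’ pipeline) of what that accuracy does to a debris count when most of the water in each image is empty:

```python
# Toy simulation: a detector that labels image patches as plastic /
# not-plastic with 80% accuracy, over mostly empty ocean.
import random

random.seed(0)
ACCURACY = 0.80        # roughly the figure reported for the trained model
N_PATCHES = 10_000
PLASTIC_RATE = 0.05    # assumption: only 5% of patches actually contain plastic

truth = [random.random() < PLASTIC_RATE for _ in range(N_PATCHES)]
# Each prediction matches the truth with probability ACCURACY, otherwise flips.
predicted = [t if random.random() < ACCURACY else not t for t in truth]

print("actual plastic patches:   ", sum(truth))      # ~500
print("detector's plastic count: ", sum(predicted))  # ~2,300
print("mislabeled patches:       ", sum(p != t for p, t in zip(predicted, truth)))  # ~2,000
```

Because almost every patch is just water, the 20% of wrong answers are mostly false positives, so the count comes out several times too high. That asymmetry is why a human still has to double-check everything the detector flags.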
Much of the plastic in the ocean first gets washed into rivers, so there are earlier and easier places to stop it (shout out to Professor Trash Wheel!). And it’s not necessarily litterbugs dropping stuff out their car windows: a windy day knocking over some trash cans can do a lot of damage.
The Bad
Google filters what gets pulled into its language models with a wordlist called the List of Dirty, Naughty, Obscene, and Otherwise Bad Words (shortened, somewhat, to LDNOOBW). That filtering covers the data behind the company’s new language AI system, which has a trillion (!!) parameters (the learned numbers inside the model; more of them make it more computing-intensive to run, but usually also smarter). The next-largest language model, OpenAI’s GPT-3, has only 175 billion.
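For a sense of what a “parameter” actually is: each one is a single learned number (a weight), and the headline figure is just every weight in the model added up. A minimal sketch with a made-up toy network, nothing to do with Google’s actual architecture:

```python
# Counting the parameters of a tiny two-layer network.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 512-dim input -> 1024 hidden units -> 10 outputs.
W1, b1 = rng.normal(size=(512, 1024)), np.zeros(1024)
W2, b2 = rng.normal(size=(1024, 10)), np.zeros(10)

n_params = sum(a.size for a in (W1, b1, W2, b2))
print(f"toy network parameters: {n_params:,}")  # 535,562
```

A trillion-parameter model is the same bookkeeping with vastly bigger (and many more) matrices, which is why running one takes so much computing muscle.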
Back to the wordlist, though: some folks thought that sparing your delicate sensibilities wasn’t worth editing out chunks of the human experience:
Striking out pages featuring obscenities, racial slurs, anatomical terms or the word sex regardless of context would remove abusive forum postings—but also swaths of educational and medical material, news coverage about sexual politics, and information about Paridae songbirds.
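It’s easy to see how that happens. Here’s a minimal sketch of a context-blind wordlist filter (my illustration, not Google’s actual code), using two words the quote above tells us are effectively on the list:

```python
# A context-blind blocklist filter: any page containing a listed word
# gets dropped, no matter what the page is actually about.
BAD_WORDS = {"sex", "tit"}  # tiny stand-in for the full LDNOOBW list

def keep_page(text: str) -> bool:
    """Keep the page only if it shares no words with the blocklist."""
    return not (set(text.lower().split()) & BAD_WORDS)

pages = [
    "how to file your taxes online",
    "field guide to Paridae songbirds like the great tit",
    "sex education resources for teachers",
    "news analysis of the sex discrimination ruling",
]
for page in pages:
    print(keep_page(page), "|", page)
# Only the taxes page survives; the songbird guide, the sex-ed material,
# and the news coverage all get filtered out along with the actual smut.
```

The filter sees tokens, not context, which is exactly the complaint.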
Google’s Ethical AI team tried to take on these issues in a research paper, which eventually led to one of its leads being fired and the other put on leave, a saga I’ve written about a couple of times recently.
Creating a blacklist is an inherently human judgment call, and these lists have had fraught results in other arenas. Google had a sentiment analysis system in place that scored phrases like “gay black woman” and “i’m a jew” as negative. This kind of tech could blacklist or demonetize sites, and that’s not theoretical: YouTube demonetized videos that covered LGBTQ subjects.
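The mechanics of that failure are easy to sketch. If a model learns word-level sentiment from text where identity terms mostly appear in hostile contexts, the terms themselves inherit negative scores. A toy version (the scores here are invented to show the failure mode, not pulled from any real system):

```python
# Toy lexicon-style sentiment scorer with hypothetical learned scores.
LEARNED_SCORES = {
    "great": 0.8, "terrible": -0.9,
    # Neutral identity terms can pick up negative scores if the training
    # text mostly used them in hostile contexts.
    "gay": -0.3, "jew": -0.4,
}

def sentiment(text: str) -> float:
    words = text.lower().replace("'", " ").split()
    return sum(LEARNED_SCORES.get(w, 0.0) for w in words)

print(sentiment("what a great day"))  # 0.8
print(sentiment("i'm a jew"))         # -0.4, for a perfectly neutral sentence
```

Real systems are fancier than a word lookup, but the bias arrives the same way: the scores come from the data.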
More News
Check out these cool dance moves and don’t consider what the gutting of our ethics team means for the future of everything. (h/t @Kneelconqueso)
Google revealed a new AI-powered heart and breathing monitor. Pretty neat! Too bad their ethics team isn’t robustly looking into the many, many pitfalls this creates.
A new AI-powered tool can tell when Fido is being a good boy and rewards him with a treat.
A member of Germany’s Pirate Party sued over an EU-funded, AI-powered lie detector test, calling it “snake oil.”
Three space robos will be hanging out together, zooming around on Mars, pretty soon.
There is so much low-hanging fruit for AI, like this project at the Treasury to read PDFs and get money to agencies faster.
Customs and Border Protection used face rec tech on 23 million+ travelers in 2020 and found zero fakers.
Got a bunch of items to count up? There is a computer vision app for that.
Someone filmed their encounter with a cop, who took a moment to put on Sublime’s Santeria (it’s a SoCal cop, so of course), probably hoping that if the video got uploaded, the algo would take it down for copyright infringement.
***
Until we can all get back doing whatever the LDNOOBW we want,
Jackie