🙅 Emotion-detecting AI should be banned and a new Deepfake Detection Challenge
🔔 More Ring Camera news, which is somehow still getting worse?
Jackie Snow | Dec 13, 2019
There has been a lot of handwringing about what deepfakes could do to society. Funny Nicolas Cage GIFs notwithstanding, the possibilities range from bad (putting people into porn videos) to full-blown crisis (political attack ads that sway elections). A group of tech giants including Google, Twitter, and Facebook is looking for a solution with the Deepfake Detection Challenge.
Facebook seems to be leading the charge, compiling a huge dataset and putting up more than $10 million in awards and grants for competitors. This week, the full deepfake dataset was released and the challenge began, with the wrap-up scheduled for the end of March. The challenge joins other efforts, including a DARPA project to find a way to detect faked media and the AI Foundation's quest to build a plugin that can spot them.
There is a long history of tech being released without safeguards: Microsoft shipped Windows without security, and auto manufacturers put out cars without seatbelts. Let's hope we get a solution for deepfakes before we end up with President Stephen Miller.
A new report from the AI Now Institute calls for a ban on emotion-detecting AI, an industry already worth as much as $20 billion and growing. It's being used for tasks like hiring, assessing pain, and tracking students' attention levels. Criticism of this use of AI hasn't reached the same level as with face rec tech, likely because these are all still small use cases and not in public settings (yet).
The problem, besides the ongoing existential question of letting potentially biased AI make important decisions that affect a person's livelihood or liberty, is that emotions and their contexts have endless possibilities. AI programs are being trained on six basic emotions read from facial expressions, with minimal, if any, contextual information.
Could it possibly be good for people with autism who have a hard time recognizing emotions? Yes! Should it be used to judge how well a job applicant is doing, especially across cultures with different behavioral norms? Hard pass!
Ring camera news corner: a radio program hacked Rings on-air, and the show's hosts followed up by harassing the owners for lulz.
AI is helping researchers automate the study of animal behavior by making it easier to track a whole bunch of slight movements.
Haven’t had a chance to read it, but this year’s AI Index report is out with a bunch of stats about the state of the tech’s progress.
Lip reading AI is getting better. Great for the hearing impaired and repressive governments alike.
Australia wants to use AI-powered surveillance cameras to catch drivers playing with their phones.
AI is helping scientists map ten billion cells from the human body. It could help with everything from figuring out how life emerges from the embryo to how diseases like cancer occur.
San Diego is suspending its face rec tech program used by police after a campaign by the EFF.
Just waiting for the day an emotion-detecting AI tells me to smile,