🩺 AI brings down a hospital's readmission rates and NYC gets an Algo officer
👄 Plus lip-reading for both good and bad!
Hi hi,
Reinventing the newsletter idea into a format about AI for good(ish) and AI for bad. Lets me skip all the in-between stuff (sorry, self-driving cars ((unless you kill someone))) and focus on people and impact rather than the tech. Also, please forgive any grammar mistakes; I rushed this since the idea came to me late this week.
The Good
A network of hospitals in Utah and Idaho switched to value-based pricing, where healthcare providers are paid for patient outcomes rather than for how many tests and services a patient receives. That means procedures that usually pay out well, like surgery, were no longer (necessarily) lucrative. With budgets tighter, the company wanted to make more data-driven decisions wherever possible.
So Utah-based Intermountain Healthcare developed an AI system trained on large volumes of clinical information from the hospitals going back to the 1950s and used it to offer suggestions for treatment. And so far, as The Wall Street Journal reports, it's going well! The article leads with the cost savings ($90 million), but the patient outcomes are pretty amazing:
In addition to cost reductions, patient care improved thanks to the AI system. Since January 2017, hospital readmission rates for total hip and knee replacements have fallen by 43%; complication rates have dropped by 50%; the length of a hospital stay for total-joint-replacement patients has been shortened by 35%, and the number of opioid pills prescribed to them at discharge is down 44%.
There is no discussion in the piece of the ethical considerations, like the different levels of care we see when it comes to women, people of color, or rural areas. Because the system is value-based, it might remove some of the inherent issues we see in the fee-for-service model (I don't know enough about this, so feel free to send along any links to get me up to speed). Medicine is an area that needs careful consideration when it comes to applied AI, but done right, lives could be saved.
The Bad
There was a lot of attention last year when NYC announced its Automated Decision Systems Task Force. The city signed up some good board members, seemed like it was getting ahead of changing technology, and the government appeared ready to take the report seriously. The honeymoon lasted a year before tensions arose over the city's commitment to the project and its transparency about the automated tech it was already using.
Well, the report is out, and it is pretty useless. The three suggestions the paper puts forth for automated systems: build capacity for an equitable, effective, and responsible approach; broaden public discussion; and formalize management functions. No city needs a task force to tell it to do this! The one solid step that might lead to something more tangible: Mayor Bill de Blasio issued an executive order creating a position for an “algorithms management and policy officer,” as recommended elsewhere in the report.
This seems like a missed opportunity at best and, at worst, a white-paper washing of cities' and law enforcement's use of tech that citizens need to know about. Meredith Whittaker, a member of the Task Force (also a co-founder of the AI Now Institute and an organizer of the Google walkouts), tweeted out a long thread about parts of the process she said were left out of the report. New York is already using a lot of these tools, and if it wants to keep using them, the city should make a real case for the tech's fairness and usefulness.
More News
A research paper claims to have beaten the previous best accuracy rates for lip reading. Good for deaf and hard-of-hearing folks, but also a tool for surveillance, which is not acknowledged in the paper. An ethical-considerations section might just be lip service (zing!), but it's better than nothing. (via Jack Clark's newsletter, and gif from the Bad Lip Reading series)
AI-powered translation apps have become so good that fraudsters are now going after Icelanders, who had mostly been left alone because of their complicated language and are now falling for scams at crazy levels!
Twitter wants to hear from you about what it should do for its synthetic media (aka deepfakes) and manipulated media policy. It only takes a few minutes!
A Denver group doesn't want face rec tech for the city. Utah doesn't want it for its driver's license database. No one wants face rec tech, Sam I am.
The computer vision company Hikvision marketed an AI camera as being able to automatically identify Uyghurs, then took the marketing down after a site raised questions. (via ChinAI Newsletter)
Great breakdown of questions to ask to see if a company is selling you AI snake oil.
Microsoft brought on former US Attorney General Eric Holder to investigate whether AnyVision, a facial recognition company the tech giant invested in over the summer, violates Microsoft's ethics guidelines.
An algo can predict seizures up to one hour before they occur with a 99.6% accuracy rate. This is based on a very small sample set, but it could help people prepare for seizures and make the condition easier to live with.
Algos are deciding whether a person will be a good renter, using predictive screening that looks at spending habits.
Quote of the week
"The people producing this software are producing something that, in a small way, makes people like me miserable and keeps us miserable.”
- “AI software defines people as male or female. That's a problem” by Rachel Metz, CNN Business
***
Until my business-class flight purchases make me look like a very responsible renter,
Jackie