Himabindu Lakkaraju designed an artificial intelligence program that serves as a bias check for decision makers like judges and doctors.
Machine learning and AI are increasingly used in law enforcement to make decisions about which defendants get bail, in health care to determine medical treatments, and at financial institutions to determine who gets loans. But automated decision making has pitfalls—software can miss the nuance that a human may catch when looking at a criminal, medical, or credit record. Humans, however, can also miss nuances and have their own biases, especially when they’re pressed for time and have to make life-altering decisions.
Lakkaraju’s system doesn’t rely solely on human choices or on machine learning but uses a combination of the two. Most of her work deals with data sets in which she can see the expected outcomes from both AI and the human decision makers, and spot where bias might occur.
Her work is now being used by schools in Montgomery County, Maryland, to help them identify at-risk students and predict the likelihood that a child might need extra tutoring or mentoring.
“School districts are often limited in their resources—so knowing this likelihood will help the school districts assign those students to interventions who are most likely to benefit from them,” Lakkaraju says.