AI can’t predict a child’s future success, no matter how much data we give it
According to the Princeton team’s research paper:
We investigated this question with a scientific mass collaboration using the common task method; 160 teams built predictive models for six life outcomes using data from the Fragile Families and Child Wellbeing Study, a high-quality birth cohort study.

In the end, despite being given a trove of data gathered over the 15-year-long "Fragile Families" study of the children's lives, no team produced an accurate prediction.
Per the Princeton team's research paper:

In other words, even though the Fragile Families data included thousands of variables collected to help scientists understand the lives of these families, participants were not able to make accurate predictions for the holdout cases.

This is further confirmation that predictive AI – whether it's Palantir's intentionally misleading predictive-policing technology or the demonstrably racist algorithms that power the US judicial system's sentencing software – is hogwash when it directly affects human lives.