Many US state courts offer judges algorithm-based risk scores that predict an offender's odds of reoffending and can thereby inform the severity of the sentence awarded. However, these algorithmic scores show a distressing bias, inherited from the data used to train the models. Courtroom judgments are not the only area rife with algorithmic bias; it has also surfaced in search engine results and in the hiring processes of industry giants, among many other instances. The big question is: how do we tackle algorithmic bias? Srinidhi Rao, Senior Partner at TheMathCompany, and Saurav Ghosh, Senior Associate at TheMathCompany, share their views in an exclusive talk at Cypher 2019.
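To make the bias concrete, one common way auditors probe a risk-scoring model is to compare its error rates across demographic groups. The sketch below is purely illustrative and is not from the talk: the data, groups, and numbers are hypothetical, and it shows only one simple fairness check (false positive rate disparity).

```python
# Illustrative sketch: measuring one form of algorithmic bias by
# comparing a risk model's false positive rates across two groups.
# All data and group names here are synthetic and hypothetical.

def false_positive_rate(labels, predictions):
    """Fraction of actual non-reoffenders (label 0) flagged as high risk."""
    flags_on_negatives = [p for y, p in zip(labels, predictions) if y == 0]
    if not flags_on_negatives:
        return 0.0
    return sum(flags_on_negatives) / len(flags_on_negatives)

# Synthetic outcomes (1 = reoffended) and model flags (1 = high risk).
group_a = {"labels": [0, 0, 0, 1, 1, 0], "preds": [1, 0, 1, 1, 1, 0]}
group_b = {"labels": [0, 0, 0, 1, 1, 0], "preds": [0, 0, 0, 1, 1, 1]}

fpr_a = false_positive_rate(group_a["labels"], group_a["preds"])
fpr_b = false_positive_rate(group_b["labels"], group_b["preds"])

# A large gap between the groups' rates is one signal that the model
# penalises non-reoffenders in one group more often than the other.
print(f"FPR group A: {fpr_a:.2f}, FPR group B: {fpr_b:.2f}")
```

Here group A's non-reoffenders are flagged high-risk twice as often as group B's, the kind of disparity that audits of real-world risk-score systems have reported.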