General takeaways:

  • The use of computer models for decision making has become prevalent across industries
  • We need to guard against unconsciously baking pre-existing biases into these models, to prevent perpetuating injustices at a large scale
  • A model is a simplified representation of the world.
  • It is hard to quantify human values like “trust” into numbers, so data scientists often rely on proxies such as the number of likes a piece of content received. This is where flaws often get introduced into the system.
  • Be wary of how you structure the reward system, especially its second- and third-order implications. These may lead to unintended and often undesirable outcomes.
  • Be careful about the features used to train models. Using an applicant's ZIP code to train a loan-application model can be just as racially discriminatory as using ethnicity, because ZIP code often acts as a proxy for it.
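The ZIP-code point above can be sketched with a toy simulation. This is a hypothetical illustration, not anything from the book: all the numbers (the two ZIP codes, the 90% correlation, the biased historical approval rates) are made up for the example. A "model" that only computes approval rates per ZIP code still reproduces the disparity against group B, even though ethnicity never appears as a feature:

```python
import random
from collections import defaultdict

random.seed(0)

def make_applicant():
    """Simulate one historical loan record (all parameters are assumptions)."""
    zip_code = random.choice(["10001", "60629"])
    # Assumed correlation: 90% of residents in 60629 belong to group B.
    group = "B" if (zip_code == "60629" and random.random() < 0.9) else "A"
    # Biased historical decisions: group B was approved far less often.
    approved = random.random() < (0.8 if group == "A" else 0.3)
    return zip_code, group, approved

data = [make_applicant() for _ in range(10_000)]

# "Model": approval rate per ZIP code -- ethnicity is never used as a feature.
counts = defaultdict(lambda: [0, 0])  # zip -> [approvals, total]
for zip_code, _, approved in data:
    counts[zip_code][0] += approved
    counts[zip_code][1] += 1

rates = {z: approvals / total for z, (approvals, total) in counts.items()}
print(rates)  # 60629 (mostly group B) scores far below 10001
```

Because the model learns from biased historical labels, the ZIP code carries the discrimination forward: applicants from 60629 are scored low regardless of their individual merit.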

Book listings: