Every representation is an interpretation
In the age of artificial intelligence and machine learning, Silicon Valley entrepreneurs and military- and corporate-funded academics promise that algorithms can extend our vision into the future, tame uncertainty, and overcome human analytical limitations. Truth, we are told, will emerge from big data.
However, machine learning isn’t a neutral pipeline. In recent years, image recognition software has misread photos of Asian faces as people blinking and has struggled to recognise people with dark skin. Commercial gender classification algorithms were found to perform significantly better on lighter-skinned males, and AI tools used to assess the risk of recidivism were found to be biased against black defendants.
In the name of statistical objectivity, it is easy to forget that predictive algorithms mask a series of subjective judgments made by their designers. It is people who formalise the problem in question and measure the error between predicted and actual values. People choose the (often non-inclusive) datasets, decide which examples count as representative, how to weight the data, and which evaluation metrics to use. As a result, those choices shape people’s access to healthcare, credit, jobs and education at scale.
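To make this concrete, here is a minimal sketch on synthetic data (no real system, all names and numbers are invented for illustration) of how one such choice, the metric used to set a decision threshold, determines how a model’s errors fall on different groups:

```python
# Sketch: the same "risk score" model, two designer choices for the decision
# threshold. Synthetic data only; the shift for group B stands in for
# under-representation in training data.
import numpy as np

rng = np.random.default_rng(0)

n = 5000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
label = rng.binomial(1, 0.3, n)               # ground truth (30% positive)
score = rng.normal(label.astype(float), 1.0)  # noisy model score
score += 0.4 * group * (1 - label)            # negatives in B score higher

def false_positive_rate(threshold, mask):
    flagged = score >= threshold
    negatives = (label == 0) & mask
    return flagged[negatives].mean()

def accuracy(threshold):
    return ((score >= threshold) == label.astype(bool)).mean()

thresholds = np.linspace(-2, 3, 501)

# Choice 1: pick the threshold that maximises overall accuracy.
t_acc = thresholds[np.argmax([accuracy(t) for t in thresholds])]

# Choice 2: pick the lowest threshold that keeps group B's false
# positive rate at or below 5%.
t_capped = min(t for t in thresholds
               if false_positive_rate(t, group == 1) <= 0.05)

for name, t in [("max accuracy", t_acc), ("capped FPR for B", t_capped)]:
    print(f"{name:>18}: accuracy={accuracy(t):.2f}, "
          f"FPR A={false_positive_rate(t, group == 0):.2f}, "
          f"FPR B={false_positive_rate(t, group == 1):.2f}")
```

Both thresholds are defensible on paper; they simply optimise for different things, and the people who get wrongly flagged under each are not the same.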
The choice of what to put on our algorithmic maps surfaces the epistemological problems of symbolisation, generalisation and classification. Machine learning is a feedback loop. It is also a new platform on which we can prototype new ways of philosophising and produce alternative forms of participation. AI-first societies are not exclusively the responsibility of the technically educated, mostly male, probably middle-class and white venture capitalists and engineers. It is everyone’s responsibility to question how our “optical” instruments and infrastructures affect the social and political context of the present.
And to remind ourselves that algorithms don’t exercise power over us. People do.