Demystifying AI

According to Google Trends¹, in April 2018 web-search interest in the term “Machine Learning” reached its all-time high. That same month, Mark Zuckerberg, founder of Facebook, testified before the US Congress. When asked about moderating hate speech, he said AI would fix it. Terrorist content? AI. Russian propaganda and fake accounts? AI again. Earlier, in 2017, the prominent AI researcher Andrew Ng said² that “AI is the new electricity”, Wired magazine suggested that AI may discover the next Higgs Boson³, and Google’s DeepMind, less modestly, declares that its mission is to “solve intelligence, and then use that to solve everything else.”⁴

Artificial Intelligence has been around since the 1950s and has undergone several boom-and-bust cycles. Since 2012, due mainly to the availability of more data and more powerful computational machines, a new kind of AI, called Deep Learning, has emerged. Deep Learning is indeed a disruptive technology that is changing the way we think, act, and learn. However, its true potential has been submerged in a sea of “technophoric cyberdrool,” to borrow Judith Squires’s phrase⁵. Inflated statements by companies and researchers in search of funding, the ambiguity of the term AI, and sensationalist articles in the mass media make it hard for people to grasp the situation.

The purpose of this essay is to provide a critical reflection on the mythologies surrounding AI today. In the first part, I will briefly go through AI history, geopolitics and the main schools of thought. In the second part, I will re-contextualise AI in the current sociopolitical frame and examine some types of societal harms. In the third part, I will examine the deeper epistemological implications of AI and the ways that classification and reduction affect how we perceive the world. In the last part, I will try to provide an optimistic account of an AI-driven future, which embraces complexity and interdisciplinarity.

AI: The view from somewhere

The historical context

Artificial intelligence is bound to be a slippery notion, because any claim about it implies something about what human intelligence is. It is not a homogeneous field. To this day, the very notion of “artificial” intelligence is typically “based on the behavior of a few, technically educated, young, male and probably middle-class, probably white, college students working on a set of rather unnatural tasks in a US university.”⁶

The seeds of AI were planted by philosophers who tried to describe thinking as the manipulation of symbols. In the 1940s, the invention of the digital computer inspired scientists to begin researching the possibility of an electronic mind⁷. By the 1950s, the idea of a thinking machine had risen from the amalgamation of several ideas: research in neurology showed that the brain comprises a network of firing neurons, Norbert Wiener’s cybernetics described control in electrical networks, Claude Shannon’s information theory described digital signals, and Alan Turing’s theory of computation showed that any computation can be described digitally.

The AI field kicked off at a workshop held at Dartmouth College in the summer of 1956. The reductionist view of intelligence can be seen in the workshop proposal, which says⁸ that “the study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

In terms of methodology, symbolic, logic-based AI was the initial focus. This group of practitioners studied inverse deduction, which starts from some premises and conclusions and works backwards to fill in the gaps. DARPA provided three million dollars a year to MIT until the 1970s⁹, which shows how early research interests were shaped by military agendas. Eventually, everyone realised that they had massively underestimated the difficulty of the venture. In 1973, the US and British governments stopped funding research into artificial intelligence, ushering in a period later known as the “AI winter”.

In the 1980s, the connectionist approach, centered on neural networks and other statistical tools known as machine learning, started growing. Connectionists believe that intelligent behavior arises from interconnected networks of simple units. In the late 1980s, further advances in neuroscience and cognitive science spurred the rise of a new approach to AI based on robotics. These researchers believed that artificial agents need to be embodied and to have sensorimotor skills in order to interact with the world. They proposed building intelligence from the bottom up, reviving ideas from cybernetics and control theory.

The Deep Learning revolution of today is a resurrected connectionist approach. Deep learning is a continuous dynamic system that may use artificial neural networks, but its architecture does not resemble biological neurons. Today, connectionism and behaviorism have also been linked in the form of reinforcement learning, an approach in which agents learn from reinforcement signals in their environments. Google’s DeepMind attributed the success of its Atari system to a combination of deep learning and reinforcement learning. As they note in their paper published in Nature¹⁰, a big challenge is to “avoid superstitious behaviour in which statistical associations may be misinterpreted as causal”.

The geopolitical context

It’s often ignored that computational power and the hardware landscape (GPUs) have driven the development and implementation of AI since its early years. Tim Hwang shows¹¹ in detail how competition among nations in the AI arena and the international semiconductor supply chains determine the speed at which machine learning models are developed.

Machine learning also depends on access to training data and, importantly, access to talent. A report¹² by McKinsey found that AI investment is concentrated geographically: in 2016 the U.S. received 66% of external investment (through venture funding and M&A) and China 17%, with China’s share growing fast. These investment hubs become key locations for tech giants to set up their research and development labs, and big companies consequently create monopolies on talent. We are also seeing big companies move to attract local talent, such as Google at the University of Montreal, Intel at Georgia Tech and Facebook in Paris. These locations will benefit not only from the creation of highly skilled jobs but also through knowledge and innovation spillovers and AI products developed for local markets.

Additionally, government industrial and trade policies directly impact the development of the machines needed to train and run AI models. Carlos E. Perez notes that recent AI advances in the West have sparked a new AI Sputnik moment¹³. In 1957, the Soviet Union’s launch of the first satellite, Sputnik, created an urgency for the US to upgrade its technical infrastructure. Similarly, Asia is now pushing AI development after DeepMind’s AlphaGo victory over Go world champion Lee Sedol in 2016. Seeing an AI win at a traditionally human endeavour shocked Asian publics. Soon after the event, Korea created an 860-million-dollar fund for AI research (on top of 138.8 billion won committed in 2016)¹⁴. In July 2017, the government of Tianjin, near Beijing, said¹⁵ it would give $5 billion to support the AI industry.

Computational power, data, and talent define the speed at which machine learning is deployed, thus shaping the overall pace of technological advance and, consequently, its economic impact. They also determine who has control over and access to the benefits of the technology, and who the governing actors are. It seems that current AI research and funding will benefit a few companies and a few nations, while the rest of the world remains unaware of the potential of AI.

How -isms creep into AI today

Having explained where AI comes from, in this section I would like to show how AI researchers have traditionally failed to recognise that “there is no unmediated photograph or passive camera obscura in scientific accounts of bodies and machines”, to borrow Donna Haraway’s words¹⁶. To this day, a very specific and normative “view” of gender, race, and class has shaped the embodiment of cognition, thought and action. AI researchers have ignored the historical and cultural specificities of their practice and have been mostly uninterested in the sociopolitical implications of their work. Let’s examine how.

Looking into the machine

The pipeline of machine learning is deeply affected by the specific “eyes” that construct it. Machine learning systems rely on a large set of examples and pattern recognition to uncover relationships in the data that might be overlooked by humans. These are then used as a basis for prediction and decision-making in areas such as fraud detection, insurance pricing, credit scoring and e-recruiting, often without any human intervention. For the first time in history, automated decision-making systems are deployed at such a massive scale.

In the case of supervised learning, a subset of machine learning, the set of discovered relationships is called a “model”. Three concepts are important here: the dataset used, the target variables and the class labels. The dataset contains annotated data. The “target variables” are the outcomes of interest (what the developers are trying to predict) and the “class labels” divide all possible values of the target into mutually exclusive categories¹⁷.
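To make these three concepts concrete, here is a minimal, purely illustrative sketch. The dataset, the feature names and the trivial “income threshold” model are all invented for illustration; a real system would use far richer data and an actual learning algorithm.

```python
# The dataset: annotated examples. Each row pairs features with a label.
dataset = [
    # (years_employed, income_k) -> repaid loan? (the target variable)
    ((2, 35), 0),
    ((8, 60), 1),
    ((5, 48), 1),
    ((1, 22), 0),
]

# Class labels: mutually exclusive categories the target variable can take.
class_labels = {0: "defaulted", 1: "repaid"}

def train(examples):
    """A deliberately trivial 'model': learn a single income threshold."""
    repaid = [x[1] for x, y in examples if y == 1]
    defaulted = [x[1] for x, y in examples if y == 0]
    # Split halfway between the mean incomes of the two classes.
    return (sum(repaid) / len(repaid) + sum(defaulted) / len(defaulted)) / 2

def predict(model, features):
    """Map new features to a class label using the learned threshold."""
    years_employed, income_k = features
    return 1 if income_k >= model else 0

model = train(dataset)
print(class_labels[predict(model, (3, 55))])  # prints "repaid"
print(class_labels[predict(model, (1, 20))])  # prints "defaulted"
```

Even in this toy, the developers’ choices are everywhere: which features are recorded, what counts as the target, and how the threshold is drawn.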

Figure 1. Network topology of a convolutional neural network, Terence Broad

To start, data miners must translate a problem into a question that can be formalised. The translation is by nature subjective, and it requires a well-defined problem. They also need to pick a formal method of measuring the error between the predicted and actual value of the target; this is the mathematical description of the problem. However, many human problems are “wicked”¹⁸, meaning that they have many parts, some of which are contradictory.

Developers, then, might unintentionally parse the problem in a way that systematically disadvantages some population. By coercing the problem so that it is “describable” under the logic of the model, developers might, for example, drop or minimise a property of the question to be solved that is associated with a population group. Or the algorithms can unintentionally learn which attributes serve as proxies for protected classes.

Additionally, the fact that we only account for measurable outcomes creates categorical problems. In the case of an automated hiring algorithm, for example, “good” must be quantitatively defined as a function of education, higher sales, production time, etc., neglecting other social, emotional or behavioral qualities that are difficult to measure and model.
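To see how such a definition of “good” narrows the picture, consider this deliberately crude, hypothetical scoring function; every feature and weight is invented. Whatever is not in the formula (empathy, mentoring, collaboration) simply cannot affect the score.

```python
# Hypothetical hiring score: "good" reduced to three measurable proxies.
# All weights and feature names are invented for illustration.
def candidate_score(education_years, sales_in_thousands, avg_production_days):
    return (0.3 * education_years
            + 0.5 * sales_in_thousands
            - 0.2 * avg_production_days)

# Two made-up candidates: B mentors colleagues and defuses team conflicts,
# but none of that is measured, so it cannot influence the score.
candidate_a = candidate_score(6, 120, 10)  # ≈ 59.8
candidate_b = candidate_score(6, 110, 10)  # ≈ 54.8
print(candidate_a > candidate_b)  # prints True: A "wins" on the proxies alone
```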

Data-driven harms and algorithmic glitches

People would like to think that algorithms are not biased: that they don’t discriminate against certain races, nationalities, genders, religions or types of disability. However, algorithmic biases are caused by many factors, including choices in the coding of information, incomplete, incorrect or outdated data, reliance on unrepresentative samples, and the exclusion or inclusion of particular features in the model.

For example, humans produce the labels in datasets, yet the identity of those humans is usually considered irrelevant. Many datasets are labeled by workers on Amazon’s Mechanical Turk platform, so their assumptions creep into the process. As Lilly Irani argues¹⁹, the predominantly U.S.-based employers prefer U.S. AMT workers because, among other things, “they are likelier to be culturally fluent in linguistic and categorization tasks”. Additionally, Jonas Lerman argues²⁰ that there has been a systemic “omission of people on big data’s margins, whether due to poverty, geography, or lifestyle, and whose lives are less ‘datafied’ than the general population’s”. This creates underrepresentation of certain populations in our training data early on.

These biases result in harms related to a) the allocation of resources, such as loans or credit, or b) representation, such as stereotyping, failure to recognise, or denigration of a person or a particular group of people²¹. Our algorithms perpetuate systemic bias against groups who have historically suffered from discrimination.

Figure 2. Coded Gaze mini documentary by Joy Buolamwini

For example, Latanya Sweeney showed²² that Google searches on typically African-American names are likely to bring up ads for arrest records. This creates a negative feedback loop: more employers click on those records, which makes them appear more often in search engines, and so on. A study of a criminal sentencing algorithm by ProPublica showed²³ that black defendants were twice as likely as white defendants to be incorrectly classified as high-risk. Another example is an AI-judged beauty contest that exhibited biased standards, favouring white contestant winners.

We’re not doing very well with women or low-income populations either. One study demonstrated that women are less likely than men to be shown advertisements for executive jobs, YouTube’s speech-to-text has failed to recognise female voices, and Google Translate renders ungendered Turkish sentences about nurses and doctors in a gendered manner (“she is a nurse”, “he is a doctor”), perpetuating gender biases. Even in a game such as Pokémon Go, Urban Institute researchers²⁴ found an average of 55 PokéStops in majority-white neighborhoods versus 19 in majority-black neighborhoods.

Making accountable machines

Comprehension and inspection of the systems described above become even more difficult when the rules of the models are complex and interdependent, rendering the systems inscrutable. In some cases this opacity is intentional; in others it is not. Burrell²⁵ distinguishes three types of opacity: a) opacity as intentional corporate concealment, b) opacity due to the fact that understanding code is a specialist skill, and c) opacity as a result of the high dimensionality of machine learning versus human-scale reasoning.

To address this opacity, we want the “right thing” to be the outcome of a social process, not something made up on the spot from individual intuitions. That is the point of laws and institutions. Explanations of technical systems are not enough to achieve law and policy goals; we also need ways to evaluate the decision-making process against the rest of our due processes (e.g. anti-discrimination law).

Communities of scholars and practitioners who have identified these challenges are trying to make our algorithmic systems transparent, explainable and thus accountable. There are specific sociotechnical tools to observe, access and audit these algorithms. The AI Now Institute²⁶ and Andrew Selbst²⁷ have proposed Algorithmic Impact Assessments for public agencies. We also have methods for algorithmic auditing²⁸, explainable machine learning interfaces and even statistical methods against discrimination²⁹.

Finally, it’s important to remember that we shouldn’t just rest here. Diakopoulos³⁰ says that “the notion that algorithms exercise their power over us is misleading”. The focus on algorithmic accountability may hide the power structures behind those decisions and give the illusion that it’s only a matter of de-biasing the “black box”. We need to remind ourselves that this is not a technical problem but a sociotechnical one: people develop and benefit from these algorithms, which are merely mirrors of our societies.

Machinic Epistemologies

Many of the systems we examined above have been criticised for operating on correlations rather than causations, and for producing predictions rather than explanations. This is not a new criticism. Across the history of computation, from early cybernetics to AI and modern algorithmic capitalism, critical responses have highlighted the underlying determinism and positivism of this technological rationality.

The AI vocabulary we use today comprises certain visual and linguistic metaphors. In all cases, there is a person who observes some event or process and arrives at some “conclusion”. He or she then computes, experiences or predicts, using words or numbers that “mean”, “refer to” or describe this event, which occurs if the person’s reasoning is correct. Craik defines three essential processes in this reasoning³¹:

1. “Translation” of external processes into words, numbers or other symbols.

2. Arrival at other symbols by a process of “reasoning,” deduction, inference, etc.

3. “Retranslation” of these symbols into external processes (as in building a bridge to a design) or at least recognition of the correspondence between these symbols and external events (as in realizing that a prediction is fulfilled).

Let’s use the AI lens to examine those processes from a philosophical point of view.

Human ↔ Machine Translations

The first part of this reductive thinking is the “translation of external processes into words, numbers or other symbols”. The 20th century saw the rise of several technical discourses and communities, all organised by an assemblage of metaphor and mathematics: control theory, information theory, chaos theory, general systems theory and so on. These communities are embedded in a larger culture, but the local configurations of their language correspond to certain world-views. The technical language they use re-describes human and natural phenomena from a specific technical perspective in order to achieve rational control.

Figure 3. Adam Harley’s 3D Visualization of a Convolutional Neural Network

Even if it is frequently said that technical practice uses a precise and well-defined language, this is not necessarily true for AI. Technical AI discourse continually tries to reduce the totality of human life to a small vocabulary. For example, in AI research terms like “knowledge”, “memory” and “reasoning” are described as computational processes and become as precise as mathematics. However, artificial agents don’t “imagine”³², “dream”³³ or create “secret languages”³⁴. In cognitive science, those exact terms have a different meaning. If we use them as descriptions of human life, they become rather imprecise and contextual. Unless scientists can explain how they navigate the potential meanings of a metaphor (e.g. intelligence) in their research, the discourse collapses into relativism.

Additionally, the reduction of human phenomena to numbers vectorises reality. Apart from their relative meaning, machine learning systems have lost their indexicality to the world; that is, they have lost the capacity to use signs that point to some object in the context in which it occurs. Take, for example, image captioning, the process of generating textual descriptions of images. A dataset of input images and their corresponding output captions is used, together with computer vision and natural language processing, to generate the captions. These captions provide descriptions of website content, frame-by-frame video descriptions, or descriptions of videos for people with visual impairments.

These networks map images and captions into the same multi-dimensional space and then learn a mapping from the image to sentences that in many cases are semantically and grammatically correct. What is not taken into account in this process is the context of the images. To illustrate this, I processed some photos with Google’s Show and Tell, a neural image caption generator. While the neural network correctly identifies objects, these were some of the other captions that it generated:

Figure 4. Captions generated by Google’s “Show and Tell” deep neural network. Image credits: Chip Somodevilla/ Getty Images
Figure 5. Captions generated by Google’s “Show and Tell” deep neural network. Image credits: Felipe Dana, AP
Figure 6. Captions generated by Google’s “Show and Tell” deep neural network. Image credits: Shane Bauer

These photographs were chosen to show how historical context shapes the interpretation of a scene, and that more training on labeled images won’t prepare the system for something like an Iraqi boy riding his bike in a neighborhood that used to be a war zone. This is not to negate the technical progress displayed by our machines, but rather to question how their interpretation of the world is based on reductive cost functions over visual-semantic pairs that lose their contextual meaning.

Neural reasoning and control

The second process is the arrival at other symbols by a process of “reasoning”, such as deduction, inference, etc. If we define intelligence as including the invention of new rules, then current artificial intelligence is more of a sophisticated pattern-recognition system, as neural networks calculate a form of statistical induction.

Peirce notes that inference (induction and deduction) doesn’t invent new ideas, but rather repeats quantitative facts. It starts with a theory and then calculates how much the theory agrees with the facts. Only abduction (hypothesis) can break into new world-views and create new scientific and social theories.

He says: “By induction, we conclude that facts, similar to observed facts, are true in cases not examined. By hypothesis, we conclude the existence of a fact quite different from anything observed, from which, according to known laws, something observed would necessarily result. The former is reasoning from particulars to the general law; the latter, from effect to cause. The former classifies, the latter explains.”³⁵ The logic of neural networks is inductive.

Moreover, a training dataset represents categories of the world, creating a closed cybernetic universe. A neural network is considered trained when it can generalise its results to unknown data with low error. Pasquinelli elaborates³⁶ on how this logic then becomes control. He says that “within neural networks (as according also to the classical cybernetic framework), information becomes control; that is, a numerical input retrieved from the world turns into a control function of the same world. More philosophically, it means that a representation of the world (information) becomes a new rule in the same world (function), yet under a good degree of statistical approximation.”

Below is a set of examples showing how our neural machines are stuck in our normative labeling and cannot jump between categories.

Here is a neural network developed by Yahoo that detects offensive, adult (“Not Safe for Work”) images using computer vision and deep learning. The network takes in an image and outputs a probability (a score between 0 and 1) that is used to filter NSFW images. Scores below 0.2 indicate that the image is likely to be safe with high probability; scores above 0.8 indicate that the image is highly likely to be NSFW. I processed several photos (they can be found here) with this neural network, including a Renaissance painting and an activist performance, and these were the results:
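The thresholding logic around the network’s score can be sketched as follows. Only the decision rule mirrors the text; the file names and scores here are hypothetical stand-ins for the real network’s output.

```python
SAFE_BELOW = 0.2  # below this score: likely safe
NSFW_ABOVE = 0.8  # above this score: likely NSFW

def filter_decision(score):
    """Turn the network's 0-1 probability into a filtering decision."""
    if score < SAFE_BELOW:
        return "safe"
    if score > NSFW_ABOVE:
        return "nsfw"
    return "needs human review"  # ambiguous middle band

# Hypothetical scores; a real pipeline would obtain them from the network.
for name, score in [("landscape.jpg", 0.03),
                    ("renaissance_painting.jpg", 0.87),
                    ("performance_photo.jpg", 0.55)]:
    print(name, "->", filter_decision(score))
```

Whatever the network has learned to associate with “NSFW” is simply cut off at these thresholds, with no room for context such as art history or protest.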

Figure 7. NSFW score calculated by Yahoo’s deep neural network. Image credits: Three Graces by Raphael
Figure 8. NSFW score calculated by Yahoo’s deep neural network.

Another example is gender classification. The authors of this neural network claim a classification test accuracy of 96%. Again, I deployed their algorithms to test different expressions of gender, such as transgender communities in Pakistan or the androgynous Grace Jones, only to find how normative the view of the algorithm was:

Figure 9. Gender Classification by B-IT-BOTS robotics team neural network, Photo credits: Krista Anna Lewis
Figure 10. Gender Classification by B-IT-BOTS robotics team neural network
Figure 11. Gender Classification by B-IT-BOTS robotics team neural network, Grace Jones by Jean Paul Goude

This raises the important question of how far our neural networks can escape the categorical ontologies we have put them in, to accommodate the complexity of modern life.


Retranslation into the world

The last part is the “retranslation” of these symbols into external processes. Agre³⁷ critiques AI practitioners’ habit of reducing all descriptions of human experience to technical proposals and specifications of computing machinery. Today, AI research balances between scientific and engineering output. Science and engineering pull “planning”, “reasoning” or “knowledge” in different directions.

Science pushes those terms toward human knowledge. The objection here is not against the institution or methodology of science itself, but against the overgeneralisation of scientific thinking and the omission of other kinds of situated knowledges. When you have a hammer, everything looks like a nail; in our case, everything looks like it can be solved by some AI. Wittgenstein³⁸ articulated this, referring to two features of scientific thinking: its focus on causal explanations and its aspiration to generality in explanations. There are many areas of human study, such as gender expression, where the search for general laws may not be appropriate.

Engineering, on the other hand, builds new things. It takes a pragmatic view of AI and pushes its development toward whatever can profitably be realised to serve some instrumental goal. In the engineering mindset, philosophical ideas about AI are not ends in themselves. They are true if they are useful, and they are useful if we can use them to build things that work. If a particular stance doesn’t work, or can’t be physically realised in some form of code or machine, it is abandoned. This seems to be the logic that prevails in AI research these days.

Unfortunately, the majority of the AI community has by and large found these arguments incomprehensible or obscure, and has failed to engage in conversation with the social sciences and other critics.

AI as a technology for the Imagination

So far, we have examined how AI came to be as a field, how our modern machine learning systems work and cause intentional or unintentional harms, and what the philosophical implications of this work are. In this last part, I would like to show how AI, as a tool and a constellation of techniques, can help us imagine and build better societies: societies in which the flow of ideas, values and resources is configured in a more connected, transparent and fair way.

A new lens is needed to examine and practice AI, for without a framework to understand the role that humans play in the behaviour of our AI systems, legal, ethical and metaphysical questions arise. I would like to focus on a few cultural suggestions, and for this I will bring together different ideas, some of which were expressed almost 20 years ago.

Having explained the impact and potential harms caused by AI, I think we need more of what Sengers³⁹ defines as socially situated AI. This is a methodological framework for “researching and evaluating AI in which any artificial agent can only be evaluated with respect to its environment and the dynamics of the agent with respect to this environment”. This includes the objects with which it interacts, its creators and its observers. Agents are not limited to their code; rather, they form a social network and a physical, social and cultural environment around them, which must be analysed to meaningfully judge the agent. Artificial agents are intelligent with reference to a particular constitution. This cannot be viewed separately from the goals of the project, its sources of funding and its developers, all of which must be taken into account when deciding which agent to build and how.

We also need more Machine Understanding. Our AI and ML systems should not be operationalised merely in the service of predicting some futures, because that makes it difficult to break the foundations of existing power structures. Instead, they should be used to detect covariates of the causal model that uncovers the structural problems of society.

Additionally, we don’t want to just build a glorious machine for the recognition of the Same. We shouldn’t forget that machines could be understood as independent participants, instead of just mirrors, and can contribute to and augment our socialities. Machine participants can help us understand human and social dynamics and shouldn’t be considered our mere cyborg extensions. If we are to rethink them accounting for their participatory contributions to our communities, then we need to abandon the thought that agency is an exclusively human characteristic.

We also need to rethink our technical practice, towards what Philip Agre⁴⁰ defines as “a critical technical practice”. A critical technical practice allows AI people to converse with different intellectual traditions (dialectical, phenomenological, feminist, biological, etc.). Such a technical practice becomes aware of its own workings as a historically specific practice. It uses the tools of critical thinking, such as critical re-examination, deconstruction, decoding and re-interpretation, to question the underlying narratives of society and of AI practice itself.

Last but not least, I think we need more Art, with and about AI, in order to imagine new ways that it can help us evolve in an increasingly mechanized society. Art can evoke emotional responses and explore how AI changes our daily lives. Art can highlight how neurocomputational processes cannot capture our experiences, our complicated semantics, dreams, conversations, gut feelings and other psychological, intellectual, and spiritual processes that we can’t even articulate. Art can highlight our humanness rather than our intelligence.

Figure 12. im here to learn so :)))))) video still by Jemima Wyman and Zach Blas

As I was imagining what a world governed by opaque, inscrutable AI processes in the service of capitalism would look like, I was reminded of Jorge Luis Borges’ “The Lottery in Babylon”⁴¹. The Lottery Company takes responsibility for all chance in Babylon and grows into a complex and secretive institution. “The Company, with godlike modesty, shuns all publicity”. Only a few specialists can understand the complexities of the system.

Over time, every citizen of Babylon is forced to participate. All aspects of their lives gradually become subject to the decisions of the Lottery. People lose or gain a job, love, a place among the nobles, honor and their lives. Prizes and awards become random, but “the poor saw themselves denied access to that famously delightful, even sensual, wheel.”

The artificial layer we allow companies and governments to build on top of our societies is “profoundly altering both its spirit and operations”. If it is reductive and unintelligible, we won’t be able to judge its outcomes. We won’t understand if and how we harm, discriminate or perpetuate structural societal problems. In our societies, AI is gradually affecting all things and, as in Borges’ story, is becoming “omnipotent”. AI can help us tackle intractable challenges, re-imagine the future and revolutionise how we understand the world. This can happen only if we take thoughtful, collaborative and sustainable action.

After all, Babylon is just one possible world. One we should not settle for.

Acknowledgments: Thanks to Aleksis Brezas, Panos Tigas, Zac Ioannidis, Dr. Hamed Haddadi, Professor Peter Childs, Dr. Thrishantha Nanayakkara, Franc Camps-Febrer, Nikolaos Sarafianos, Kostas Stathoulopoulos, Dionysis Zindros, Eva Sarafianou and Andrew Earl for the helpful discussions and comments.


  1. Google Trends, web-search interest time series for “machine learning” and “artificial intelligence”.
  2. ‘An Artificial Intelligence Expert Explains Why “AI Is the New Electricity”’ <andrew-ng/why-artificial-intelligence-is-the-new-electricity.html> [accessed 24 May 2018]
  3. ‘Job One for Quantum Computers: Boost Artificial Intelligence | WIRED’ <story/job-one-for-quantum-computers-boost-artificial-intelligence/> [accessed 24 May 2018]
  4. ‘The Superhero of Artificial Intelligence: Can This Genius Keep It in Check? | Technology | The Guardian’ <deepmind-alphago> [accessed 24 May 2018]
  5. Squires, J., 2000. Fabulous feminist futures and the lure of cyberculture. The cybercultures reader, pp.360–373.
  6. Adam, A., 2006. Artificial knowing: Gender and the thinking machine. Routledge.
  7. Russell, S.J. and Norvig, P., 2016. Artificial intelligence: a modern approach. Malaysia; Pearson Education Limited
  8. McCarthy, J., Minsky, M.L., Rochester, N. and Shannon, C.E., 2006. A proposal for the dartmouth summer research project on artificial intelligence, august 31, 1955. AI magazine, 27(4), p.12.
  9. Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks
  10. Schölkopf, B., 2015. Artificial intelligence: Learning to see and act. Nature, 518(7540), p.486.
  11. Hwang, T., 2018. Computational Power and the Social Impact of Artificial Intelligence. arXiv preprint arXiv:1803.08971.
  12. Artificial Intelligence, The Next Digital Frontier — McKinsey, McKinsey Global Institute, Discussion paper, June 2017
  13. Perez, C., 2017. The Deep Learning AI Playbook. Lulu. com.
  14. shock-1.19595
  15. ‘Beijing Wants A.I. to Be Made in China by 2030 — The New York Times’ <https://> [accessed 21 May 2018]
  16. Haraway, D., 1988. Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist studies, 14(3), pp.575–599.
  17. Barocas, S. and Selbst, A.D., 2016. Big data’s disparate impact. Cal. L. Rev., 104, p.671.
  18. Churchman, C.W., 1967. Guest editorial: Wicked problems.
  19. Irani, L., 2015. Difference and dependence among digital workers: The case of Amazon Mechanical Turk. South Atlantic Quarterly, 114(1), pp.22.
  20. Lerman, J., 2013. Big data and its exclusions. Stan. L. Rev. Online, 66, p.55.
  21. The Trouble with Bias — NIPS 2017 Keynote — Kate Crawford, Youtube
  22. Sweeney, L., 2013. Discrimination in online ad delivery. Queue, 11(3), p.10.
  23. Angwin, J., Larson, J., Mattu, S. and Kirchner, L., 2016. Machine bias: There’s software used across the country to predict future criminals. and it’s biased against blacks. ProPublica, May, 23.
  24. Colley, A., Thebault-Spieker, J., Lin, A.Y., Degraen, D., Fischman, B., Häkkilä, J., Kuehl, K., Nisi, V., Nunes, N.J., Wenig, N. and Wenig, D., 2017, May. The geography of Pokémon GO: beneficial and problematic effects on places and movement. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 1179–1192). ACM.
  25. Burrell, J., 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), p.2053951715622512.
  26. automation-in-public-agencies-bd9856e6fdde
  27. Selbst, A., 2017. Disparate Impact in Big Data Policing.
  28. Sandvig, C., Hamilton, K., Karahalios, K. and Langbort, C., 2014. Auditing algorithms: Research methods for detecting discrimination on internet platforms. Data and discrimination: converting critical concerns into productive inquiry, pp.1–23.
  29. Žliobaitė, I., 2015. A survey on measuring indirect discrimination in machine learning. arXiv preprint.
  30. Diakopoulos, N., 2014. Algorithmic-Accountability: the investigation of Black Boxes. Tow Center for Digital Journalism.
  31. Craik, K. (1943). The nature of explanation. Cambridge: Cambridge University Press.
  32. ‘Agents That Imagine and Plan’, DeepMind <> [accessed 24 May 2018]
  34. ‘Facebook AI Experiment Did NOT End Because Bots Invented Own Language’ <https:// language.html> [accessed 24 May 2018]
  35. Charles S. Peirce. “Deduction, Induction, and Hypothesis” (1878). Op cit. 1992. 194.
  36. ‘Machines That Morph Logic, by Matteo Pasquinelli’, Glass Bead <article/machines-that-morph-logic/> [accessed 20 May 2018]
  37. Agre, P. and Agre, P.E., 1997. Computation and human experience. Cambridge University Press.
  38. Wittgenstein, L., 1980. Philosophical remarks. University of Chicago Press.
  39. Sengers, P., 2002. Schizophrenia and narrative in artificial agents. Leonardo, 35(4), pp.427–431.
  40. Agre, P., 1997. Toward a critical technical practice: Lessons learned in trying to reform AI. Social Science, Technical Systems and Cooperative Work: Beyond the Great Divide. Erlbaum.
  41. Borges, J.L. and Fein, J.M., 1959. Lottery in Babylon. Prairie Schooner, 33(3), pp.203–207.
