Artificial Intelligence and Post-Work Futures

When human atoms are knit into an organisation in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood.

Norbert Wiener

Discussions about AI and the future of work often focus on a jobless future and on whether automation decreases the demand for human labour as it increases productivity. But the post-labour condition driven by AI is not the end of labour. It is rather a new condition¹ in which work, technical knowledge and social contracts are re-evaluated.

Machines and software automate existing tasks, or parts of them, and also create new ones. The introduction of AI into the workplace thus raises the structural-economic problem of reallocating the division of labour in society. Within capitalism, the labour saved by this automation takes the form of profit appropriated by the owners of the technology. AI in the workplace also raises issues around employer-employee relations, power dynamics, liability and even the role of work in human life.

So, does the development of AI mean a future with rapidly declining employment, increasing income inequality, automation and wage stagnation? Or a future free of repetitive tasks? Who will suffer most from this future and how can we ensure a more equitable distribution of resources?

Below, I’ve sketched ten impact areas of artificial intelligence and the future of work. I will be using the term AI broadly, mostly referring to machine learning and deep learning technologies, and John Danaher’s definition of work: “The performance of some act or skill (cognitive, emotional, physical etc.) in return for economic reward, or in the ultimate hope of receiving some such reward”. As I’m more interested in discussing alternative work narratives, for each of these areas I will propose a set of questions that I personally find interesting:

  • Mixed initiative systems
  • AI + Human skill competition
  • AI assisted reskilling
  • AI Polanyi’s paradox
  • Micro micro-tasking
  • AI workforce panopticon
  • The API is the boss
  • New job creation
  • Self-worth redefinition
  • Post-labour upheavals


We will increasingly see mixed-initiative creative interfaces in which humans and computers work in tight interactive loops, each suggesting, producing, evaluating, modifying and selecting creative outputs in response to the other. In such a large, distributed working infrastructure, how do we structure the interaction between AI systems, helpers and users? Which functions do we automate, and where is human input most relevant? How do we assign value to the different parts of this system? How do we design HCI interfaces that facilitate these interactions?

Specifically, when it comes to the human labour behind machine learning algorithms and data labelling, how do we design the ownership relationship between workers and the AI model itself? How can workers market and sell trained machine learning systems? How can workers fairly divide the earnings from a model trained by many people with various ML algorithms, and how do we ensure the validity of the underlying data?
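To make the earnings question concrete, here is a minimal sketch of one possible scheme: paying annotators in proportion to the number of validated labels each contributed to the training set. The `split_earnings` helper and the worker names are hypothetical, and this is only the simplest of many candidate rules (a Shapley-value split, for instance, would instead weigh each worker's marginal contribution to model quality).

```python
def split_earnings(label_counts: dict[str, int], revenue: float) -> dict[str, float]:
    """Split model revenue among annotators in proportion to the
    number of validated labels each one contributed."""
    total = sum(label_counts.values())
    if total == 0:
        return {worker: 0.0 for worker in label_counts}
    return {worker: revenue * n / total for worker, n in label_counts.items()}

# Three annotators contributed 600, 300 and 100 validated labels
# to the same training set, which earned 1000 units of revenue.
payouts = split_earnings({"ana": 600, "bo": 300, "cy": 100}, revenue=1000.0)
```

Even this toy exposes the open design questions from above: it rewards volume rather than label quality, and says nothing about who owns the resulting model.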


Machines and software will automate, augment or amplify human skills, and even help redesign work processes and workflows. Workers will need to adapt as their occupations evolve alongside increasingly capable machines. Given this distributed control, how do we define professional responsibility? How will healthcare professionals, for example, negotiate with the diagnostic systems they work with? Who is the expert in that case? How do we legally and socially assign accountability?

Also, repetitive work that can, at least in theory, follow a specific ruleset is more likely to be automatable, contributing to job polarisation between low- and high-income occupations independently of the education or expertise required. Certain skills will increasingly become obsolete. As part of The Endeavour project, Karikis compiled a list of 1,600 obsolete professions that have vanished in the UK over the last 150 years.

And while automation is thought to relieve humans from tedious, repetitive tasks so that they can develop more important skills, research² has shown that this is not necessarily the case. In aviation, for example, increased automation leads to the atrophy of certain skills, such as pilot situational awareness. Is AI in the workplace, then, de-skilling workers, or shifting their skillset towards “machine management”? How is automation changing our definitions of professional identity and expertise? How will workers feel as their skills are substituted by AI?


Increasing automation will directly impact education and raise the question of how the goals of primary, secondary and postsecondary education will adapt to it. On the other hand, I think AI has the potential to assist in re-skilling workers, displaced or not. This will not happen only through advances in personalised learning. Machine learning could be used to create internal career-mobility platforms, so that employees can focus on career growth rather than promotions, or even to help curate employee-to-employee learning programmes.

O*NET OnLine³, for example, which is sponsored by the U.S. Department of Labor, has extensively mapped the occupational skills we use in the workplace. Research by the University of Oxford⁴ and McKinsey⁵ has shown how susceptible some of these skills are to AI-driven automation. Machine learning could help workers, companies and governments identify more quickly which areas are most likely to be automated, so that the workforce can decide whether to re-skill or change jobs. AI could also be used to predict which skills will be most useful for the workforce of the future.
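The idea of scoring occupations by the automatability of their skills can be sketched in a few lines. Everything below is illustrative: the skill weights and occupations are invented for the example, not taken from O*NET or from the cited Oxford and McKinsey studies, which use far richer models.

```python
# Assumed (made-up) probability that each skill can be automated.
AUTOMATABLE = {
    "data entry": 0.95,
    "routine inspection": 0.85,
    "driving": 0.70,
    "negotiation": 0.15,
    "creative writing": 0.10,
    "caregiving": 0.05,
}

def exposure(skills: list[str]) -> float:
    """Mean automatability of an occupation's skills (unmapped skills count as 0.5)."""
    return sum(AUTOMATABLE.get(s, 0.5) for s in skills) / len(skills)

clerk = exposure(["data entry", "routine inspection"])  # skill mix dominated by routine work
counsellor = exposure(["negotiation", "caregiving"])    # skill mix dominated by interpersonal work
```

Even this crude average reproduces the qualitative finding of the research above: routine-heavy occupations score far higher exposure than interpersonal ones, which is exactly the kind of signal a re-skilling platform could surface to workers.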


In the past decade we’ve seen a shift in the labour force: the growth of the gig economy, contract-based work, zero-hour contracts, the on-demand workforce, platforms for “creativity on demand” such as Fiverr, and so on. We don’t even call these workers “workers” any more; we call them Rabbits and MTurkers. These platforms are characterised by the crowdsourcing of work and by breaking large tasks into micro-tasks so that they can be parallelised across many workers. Micro-tasks are small units of work designed to be completed individually, eventually contributing to a larger goal.

I claim that reCAPTCHA is the ultimate form of micro micro-tasking, and that we might be seeing more of it in the future. Every time Google asks you to identify a tree or a street sign, you are training the company’s computer vision systems. Google took the task of data labelling and fragmented it, then fragmented it some more, until suddenly the job gets done without anyone ever working on it for more than five seconds. So, how will workers interact with these containerised tasks in the future? How will such tasks affect workers’ performance and their ability to connect to the larger context of their work? What will crowd work for AI systems look like in the future? And, importantly, as this alienated, distributed and globalised workforce grows, how will its members get the benefits of community and collaboration?


The recurrence of future-of-work concerns suggests that researchers have not yet found a good frame for judging the susceptibility of occupations to automation by artificial agents specifically. Moravec’s paradox describes one such shortcoming: we assume tasks are computationally difficult when they require significant human focus (such as proving theorems or playing Go), and yet we undervalue computationally hard tasks that seem to require little human effort (such as creativity, social interaction and hand-eye coordination).

Another way I think about the mismatch between human reasoning and AI learning is the so-called Polanyi’s paradox. The polymath Michael Polanyi summarised this cognitive phenomenon by observing that often “we know more than we can tell”. Contemporary computer science seeks to overcome Polanyi’s paradox by building machines that learn from human examples, thus inferring the rules that we tacitly apply but do not explicitly understand⁶. Are we creating machines that perform tasks better than we do yet cannot explain why? How do we ensure explainability in AI systems in the workplace?


AI is facilitating a new corporate panopticon infrastructure: workforce surveillance (also known as people analytics), automated hiring processes, reputation profiling, electronic performance management and platform-work interface management. New technologies offer the possibility of measuring emotional labour, mood and subjectivity, reactions to situations, tone of voice and other gestures that reflect people’s emotional states. “There are over 15,000 traits that can be used to identify top performers,” says George Clark, an executive at HireVue, a company that offers automated interview-screening services. “These include your choice of language, the breadth of your vocabulary, your eye movements, the speed of your delivery, the level of stress in your voice, your ability to retain information, your ‘valence’ (emotion), and 14,993 others.”

In Wisconsin, a technology company called Three Square Market offered employees an opportunity to have microchips implanted under their skin in order, it said, to be able to use its services seamlessly. Amazon recently was awarded a patent for a device that would attach to its warehouse workers’ wrists and track their movements using ultrasonic waves. If the worker’s hand moves in the wrong direction, a slight vibration in the wrist would let them know. In this future, what is the impact of AI on the power dynamics between employers and employees? Is it providing new mechanisms of control over workers, leaving them more vulnerable to exploitation? How can we assess this impact?


There is increasing use of AI technologies that replace and augment employee management. Shift management in particular is being transformed in both scale and precision. Remote and disembodied management shifts power from frontline managers to those whose view is shaped by collected employee data. Power also shifts to the engineers of these systems, whose specifications are set months or years in advance, potentially without knowledge of the context in which they will operate.

What would AI systems for worker management look like if they were co-designed by workers and other stakeholders? How might such systems set more balanced goals between employers and workers? But a more provocative question arises here: do we really need the people? Even if we still need human beings to perform certain specialised tasks, can we remove the management from the equation instead?


Some economists argue that the demand for labour will decline with increasing automation because there is only a finite amount of work to be done. Others disagree and refer to this argument as the “lump of labour” fallacy: as productivity rises in one industry (due to automation and other factors), new industries will emerge with new labour demand. In 1900, for example, agriculture employed 41% of the US workforce; by 2000, only 2%. Likewise, the advent of self-driving cars may raise demand for urban planners and designers to redesign the everyday travel landscape.

Cognizant, in their report, propose 21 new jobs that will emerge over the next 10 years, such as AI-Assisted Healthcare Technician, Cyber City Analyst, Man-Machine Teaming Manager, AI Business Development Manager and Ethical Sourcing Manager. Additionally, AI systems require more than code and human creativity. They require supervision, which encompasses all the roles related to the monitoring, licensing and repair of AI, and they require humans to maintain their infrastructures. This labour is often invisible and includes the staff who clean data centres, the maintenance and repair workers who fix broken servers, and the “data janitors” who clean data and prepare it for analysis.


This is a broader question that stems from the areas above, and it refers to the role of work in society. What will the definition of labour be in such an automated future, and how will we constitute our sense of identity, self-worth and skill acquisition? Will we really have more free time? When machines do more and more, many people wonder: what will we do? What work will be left for people? Will more of us join the so-called “entreprecariat”? How will we make a living when machines are cheaper, faster and smarter than we are? For many people, the future of work looks full of temporary jobs, minimum-wage labour and a governing technocracy gated inside their circular living machines.

The founder simulation, Francis Cheng


As more and more power is concentrated in the hands of the owners of technological capital and AI systems, we may see growing public dissatisfaction, which will probably lead to reactionary movements. We are already seeing a rise in platform co-operativism⁷ and more organisations entering the “Internet of Ownership”. Perhaps we will see workers forming collectives that span a company’s entire supply chain. We also already see several pilot programmes exploring the idea of universal basic income.

I’m left wondering, why has unionisation on the cloud been impractical or impossible up to this point? How will the distributed working class mobilise? Can we create decentralised unions in order to achieve better contract and dues management, voting, collective bargaining, audits and a more open-source architecture? Designing for collectives to act will require deep theoretical understanding of the social dynamics of these workers groups⁸.

Additionally, as virtual labour marketplaces accelerate AI development by generating training data for AI systems, how can we develop worker-owned cooperative models for training artificial intelligence⁹? Fred Ehrsam also explores the concept of “blockchain-based machine learning marketplaces”, which I find quite interesting. He says that “decentralized machine learning marketplaces can dismantle the data monopolies of the current tech giants. They standardize and commoditize the main source of value creation on the internet for the last 20 years: proprietary data networks and the strong network effects surrounding them. As a result, value creation gets moved up the stack from data to algorithms.”

Lastly, given the increase in collaborative human/AI environments in which synthetic agents play social roles, can we design Rebel Agents that facilitate attitudes of objection, protest, and rebellion across groups of workers¹⁰?


Work has been central to mankind for millennia. In the future, work will continue to be core to our identities, our nature, our dreams and our realities. But it won’t be the work we know or do now.

Technology solves problems, and creates them, and artificial intelligence is just another kind of technology. Intelligent machines will address many problems in society, but in doing so they will also create many new ones that we need to address, and these relate to the ever-increasing power asymmetries between the owners of the technology and the workers.

For the next couple of months, my personal research will focus on how we can use open-source AI technologies to reverse this power imbalance in the workplace and ensure a more equitable distribution of resources, power and information. If you are also working on making a fairer post-work society, please reach out. We’ve got lots of work to do, and the work ahead will go on forever. Wash, rinse, repeat.

The New Precariat vs the API: 1:0


  2. Bainbridge, L., 1983. Ironies of automation. Automatica, 19(6), pp.775–779; Parasuraman, R. and Riley, V., 1997. Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), pp.230–253.
  4. Frey, C.B. and Osborne, M.A., 2017. The future of employment: how susceptible are jobs to computerisation?. Technological Forecasting and Social Change, 114, pp.254–280.
  5. Manyika, J., Chui, M., Miremadi, M., Bughin, J., George, K., Willmott, P. and Dewhurst, M., 2017. A Future that Works: Automation, Employment, and Productivity. McKinsey Global Institute.
  6. Autor, D., 2014. Polanyi’s paradox and the shape of employment growth (Vol. 20485). Cambridge, MA: National Bureau of Economic Research.
  7. Pazaitis, A., Kostakis, V. and Bauwens, M., 2017. Digital economy and the rise of open cooperativism: the case of the Enspiral Network. Transfer: European Review of Labour and Research, 23(2), pp.177–192.
  8. Salehi, N., Irani, L.C., Bernstein, M.S., Alkhatib, A., Ogbe, E. and Milland, K., 2015, April. We are dynamo: Overcoming stalling and friction in collective action for crowd workers. In Proceedings of the 33rd annual ACM conference on human factors in computing systems (pp. 1621–1630). ACM.
  9. Sriraman, A., Bragg, J. and Kulkarni, A., 2017, February. Worker-owned cooperative models for training artificial intelligence. In Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 311–314). ACM.
  10. Coman, A., Johnson, B., Briggs, G. and Aha, D.W., 2017. Social attitudes of AI rebellion: a framework. In Proceedings of AAAI-17 Workshop on AI, Ethics, and Society.