Nicholas Agar
Elon Musk’s repeated mispredictions of the arrival of fully driverless cars show that we can get carried away by the hype in digital tech. But it would be reckless for the Jobs and Skills Summit not to be alert to the long-term consequences of AI and machine learning on jobs. (Emanuel M Schwermer / DigitalVision)
The Jobs and Skills Summit being convened this week in Canberra sets out to address the employment challenges that Australia currently faces. But even as we respond to today’s skills shortage, we must also keep an eye on the long-term consequences of Artificial Intelligence (AI) and machine learning for jobs.
Fantasies in which AIs advance to the point of doing all of our jobs are seductive in some tech circles. Elon Musk has long promised fully driverless cars that will be safer than cars driven by humans. When these vehicles do arrive, they may delete the human Uber driver. But Musk’s repeatedly failed forecasts of fully driverless cars “next year” suggest that we will have human Uber drivers for a while yet.
Digital technologies are dislodging workers from some areas of the economy. But many jobs remain and new jobs are being created. Rather than abolishing human workers, digital technologies fundamentally change the nature of work in the industries they enter. New jobs will exist in the shadow of AI. According to the Australian Treasury, “Increased digitalisation … will change the nature of work and increase demand for workers with high levels of digital and data literacy.” Some of these new jobs will be thrilling. Imagine getting to work on the social media tech that could replace TikTok.
But as digitalisation enters an industry, many new jobs will fall into the category of digital adjunct worker. If you want to know what it is like to be a digital adjunct worker, think of the many people who find themselves fielding the complaints of telco customers during an economic downturn. They face angry calls from people who’ve endured long wait times only to be told that it’s not clear why they have failed to receive a contracted service.
Here I want to consider the prospects of digital adjunct workers in healthcare, an area of particular focus as we exit the pandemic. As medicine becomes partially automated, careers in it may drift far from what most people hope for from a career in healthcare. Aspiring healthcare workers may find that their work experience is rather more like that of the digital adjunct workers at an Amazon fulfilment centre than that of Dr Quinn, Medicine Woman or Marcus Welby, MD. They’ll be doing nothing more or less than what the machine directs. They may be paid accordingly.
In their 2017 book Machine, Platform, Crowd: Harnessing Our Digital Future, the MIT economists Andrew McAfee and Erik Brynjolfsson extol the potential contributions of machine learning to medicine: “If the world’s best diagnostician in most specialties — radiology, pathology, oncology, and so on — is not already digital, it soon will be.” This needn’t be bad news for doctors. “Most patients … don’t want to get their diagnosis from a machine.” So we’ll still need human doctors.
But suppose we pose the question in crude economic terms. Much of the reason doctors are paid so well is that they have acquired knowledge from many years of medical school and subsequent clinical experience. Now think of the future as described by McAfee and Brynjolfsson. Machine learners have entered medicine and are now offering diagnoses and treatments for cancer based on the vast totality of published research and data on the disease. You might say that it surely can’t hurt to have an experienced human doctor look over the machine’s diagnoses and recommendations. Digital diagnosticians won’t be perfect. They are, after all, technologies made by fallible humans. When they do get it wrong, it will make news in much the way that fatalities from driverless cars make news today.
But the right way to ask the question is comparative. How often will a future digital diagnostician err compared with a human? Advances in autopilots have seen pilots relegated to the role of aviation digital adjuncts. Nicholas Carr’s 2014 book The Glass Cage makes clear that we are now seeing crashes that happen precisely because pilots mistakenly intervene to correct what seems to them to be an autopilot error but isn’t.
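A toy calculation makes the comparative point vivid. Every rate in the sketch below is invented purely for illustration; none is an estimate of real diagnostic or autopilot performance:

```python
# Toy model of human oversight of an automated diagnostician.
# Every rate here is a made-up assumption for illustration only.

MACHINE_ERROR = 0.02    # assumed: the machine misdiagnoses 2% of cases
FALSE_OVERRIDE = 0.03   # assumed: humans wrongly "correct" 3% of sound calls
CATCH_RATE = 0.40       # assumed: humans catch 40% of genuine machine errors

# Error rate when the machine works alone.
machine_alone = MACHINE_ERROR

# Error rate with a human empowered to countermand the machine:
# sound machine calls spoiled by mistaken overrides, plus genuine
# machine errors that the human fails to catch.
with_override = ((1 - MACHINE_ERROR) * FALSE_OVERRIDE
                 + MACHINE_ERROR * (1 - CATCH_RATE))

print(f"Machine alone:      {machine_alone:.2%} of cases misdiagnosed")
print(f"Machine + override: {with_override:.2%} of cases misdiagnosed")
# With these made-up numbers, oversight makes things worse:
# 2.94% + 1.20% = 4.14%, against 2.00% for the machine alone.
```

On these assumptions, the human right of override does exactly what Carr describes in aviation: it adds errors faster than it removes them. Whether real medicine would look like this depends entirely on numbers we do not yet have.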
Suppose a medical tech of the future recommends that you try chemotherapy A rather than chemotherapy B. Should you instead go with your doctor, who enjoyed a particularly charismatic presentation by a pharmaceutical rep selling B? Will you want a doctor who arrogates a right to countermand the directive of a machine learner that draws on the totality of clinical trials on your condition? Might it be better to have a medical digital adjunct who wouldn’t dare offer an opinion about how your disease should be treated? To pose the question in economic terms: do you need to pay that digital adjunct more than the many other digital gig workers? This is how a hospital administrator who has to pay for increasingly expensive medical tech might ask the question.
When might this happen? Musk’s repeated mispredictions of the arrival of fully driverless cars show that we can get carried away by the hype in digital tech. Anyone who forecasts a flawless digital oncologist next year is bluffing. But it would be reckless for the Jobs and Skills Summit not to be alert to its possibility and to prepare for it.
If machine learners do enter your industry, you might prepare by serenely accepting your status as a poorly remunerated digital adjunct worker. But there is another response, in which workers are paid for being human. The expansion of the digital economy could see a matching social economy whose workers meet the needs of obligatorily gregarious humans.
Is this possible? Some of us are already well paid for being human, even when our jobs could be tolerably automated. One of the thrills of 2022 was seeing the preternaturally young Tom Cruise in Top Gun: Maverick. Advances in CGI now produce characters that are difficult to distinguish from humans. But in the movies we insist on the real Tom, not an animation guaranteed to look as youthful as the Cruise of the 1986 Top Gun. We pay him accordingly.
If we create a social economy, we could extend this privilege to the many workers who increasingly work with machines. We should extend this human privilege both to workers who use digital tech to diagnose cancer and to those who use it to make coffee.
Nicholas Agar is Professor of Philosophy at the University of Waikato in Aotearoa New Zealand, and the author of How to be Human in the Digital Economy. His book “Dialogues on Human Enhancement” is forthcoming with Routledge.