Artificial Intelligence Changing Work, Big Time

And this excellent article from City Journal examines how much has already changed and how much will change.

An excerpt.

Warning: don’t read too much about the future of jobs in an era of Artificial Intelligence if you are—psychologically speaking—in a dark place. If you’re a lover of the arts and humanities, for example, you should probably go full hermit in the basement of a university library with plenty of provisions (but no WiFi). If you greet all technological advances with gee-whiz enthusiasm, you’d best avoid long conversations with people who make a living driving trucks or reading X-rays. If you’re an antiglobalization protectionist, get ready to look with longing on a time when the biggest threats to jobs were NAFTA and an ascendant China. And even if you believe in the long-term benefits of what economist Joseph Schumpeter called creative destruction—as I do—prepare to have your convictions tested.

People have feared artificial intelligence since Mary Shelley introduced the world to Dr. Frankenstein’s hideous creature. The Luddites, who battled against the automated loom in the early nineteenth century, are now regarded as so wrongheaded that they have an economic error named after them. The Luddite fallacy refers to the fact that in the long run, disruptive technologies create more jobs—not to mention reduce drudgery, save lives, expand leisure, and enrich us all. Optimists argue that AI, too, will bring material and social progress. Things will cost less; people will live longer. They’ll have more time to enjoy their hobbies and interests. The work-life balance problem? Solved—once robots do the laundry, drive the kids to soccer, and take over the less interesting but time-consuming tasks at the office. Albert Wenger, a venture capitalist at Union Square Ventures, is not alone in seeing AI as “a development that will free us to do lots of incredible things that are more aligned with what it means to be human.”

So let’s stipulate: no one knows for sure what’s about to happen to the labor market. Most observers agree, however, on at least two things. First, the pace of AI discoveries and implementation is accelerating. Robots are now doing things that seemed like science fiction just a short time ago. Was anyone talking about a retail-sector meltdown, driven in good measure by AI-facilitated e-commerce, last year? Second, fasten your seatbelts. Whether you call it “the second machine age”—as MIT professors Erik Brynjolfsson and Andrew McAfee do, in a 2014 book by that name—or the fourth industrial revolution, this will be big. Most Silicon Valley honchos, scientists, and economists think that this time is different. Exactly how many jobs will be lost, which kinds of jobs and when, and what to do to prepare for these losses may be matters of dispute. No longer questioned is that a massive disruption in the way we earn a living is coming and that it will transform communities, education—and perhaps even our notion of an America defined by industriousness and upward mobility.

This is not to say that AI optimists don’t have plenty of evidence on their side. AI, defined as “fully autonomous machines that don’t need a human operator and can be reprogrammed to perform several manual tasks,” is already helping save workers’ lives and limbs. Much of this is happening not because machines are replacing humans but because they are helping them do their jobs more efficiently and safely. Military drones are an obvious example. Drones don’t eliminate the need for soldiers—humans still have to operate and service the machines—but they do lessen the need for soldiers and military-intelligence officers on treacherous battlefields or in jets at risk of antiaircraft attacks. Similarly, firefighters use drones to get a live-video look at a forest fire or to search for victims before sending men into danger. In March, the New York City Fire Department used a drone to help place firefighters on a damaged roof during a dangerous fire in the Bronx.

For decades, robots have been assisting physicians in the operating room. A robotic system called Da Vinci has “arms” equipped with cameras and precision tools to perform everything from knee replacements to hair transplants to tumor removal. Da Vinci can operate in hard-to-reach crevices of the body with tiny tools in ways that far exceed the physical capacities of human doctors. By the latest count, 3,803 Da Vinci units are in use worldwide—2,501 in the United States. Studies have found that Da Vinci can mean smaller incisions, less blood loss, and shorter recovery periods than conventional surgery. And because surgeons use magnified, high-definition, 3-D computer-screen images of a kidney or knee, for example, to guide the robot, they don’t need to be in the same room or, for that matter, the same continent as their patient. “Telesurgery” lets a doctor in New York operate on a patient in Ghana and still be home for dinner. The potential benefits for the billions living in remote or medically underserved areas are incalculable.

More recently, robots have also been “collaborating” with doctors as they make diagnosis and treatment decisions. IBM’s cutely named robot Watson became a celebrity when it defeated Ken Jennings, famous for winning 74 consecutive Jeopardy! games. Now, Watson is in training to become an Olympian medical expert. In fact, without robotic technology, we probably wouldn’t be anticipating personalized medicine. Robots like Watson are tireless info-vores; they don’t suffer overload or need naps or caffeine breaks; they can digest more medical journals, reports, patient records, websites, and diagnostic materials in an hour than a doctor could in a lifetime. A Watson designed to analyze genomics consumes something like the 10,000 scientific articles and 100 new clinical trials that become available every month. Tell Watson the genomic makeup of a tumor, and it will sift through all the research in order to customize treatment.

The optimists can also rightfully claim to have history in their corner. In the 1930s, John Maynard Keynes fretted about “technological unemployment” but assumed that it would be temporary. In that respect, at least, Keynesianism has been vindicated. Machinery has obliterated some jobs while boosting productivity and consumer wealth, which, in turn, has created new, often higher-paying, jobs. No one could have predicted that automated looms’ cheaper clothes would change the calculus of consumer demand, leading to more jobs for weavers, as ultimately happened. Henry Ford’s Model T devastated blacksmiths, saddle and harness artisans, stable boys, and carriage makers, among others. But the automobile was a creative destructor, swelling the ranks of steel, glass, rubber, textile, and oil and gas workers and, for better or worse, giving birth to the motel and fast-food industries.
