Man to machine
The only constant, so the old cliché goes, is change.

But the technological change we're going through now appears to be greater and more transformational than anything that's come before. Concepts such as artificial intelligence (AI), robotics, machine learning, cryptocurrency and fintech (along with a host of other terms with the 'tech' suffix) are all things we're going to have to get our heads around if we are to cope – and, hopefully, thrive – in this new world.

It’s good to remind ourselves, though, that humankind has gone through transformational technological change before. We can even say it defines us as a species. For all of the wars, oppression, natural disasters and plagues we have gone through, we’ve arguably been quite successful at managing the changes, given that there are now seven billion of us on the planet.

 

Yet this wave of change looks a bit different from those that came before. In previous eras, new technology served as a tool for humans to do things better and more productively than before, often for the betterment of everyone’s lives. Not all of these tools have been benevolent – from spears to rifles to nuclear weapons, they can cause great destruction – and some have disrupted lives by rendering certain economic activities redundant. Weavers, blacksmiths and coachmen are some of the careers that were made almost irrelevant by technological change. However, new careers and jobs evolved out of the change – usually to society’s benefit.

 

However, the new wave of technological change threatens to be disruptive on a larger scale than ever before. Futurists talk about the concept of technological singularity, also known as "the singularity", a situation where the arrival of artificial superintelligence will trigger a rapid new phase of technological change in which the superintelligence’s capabilities will far surpass anything humans can invent or create.

[Photo: Stephen Hawking]

Public figures like the late Stephen Hawking (pictured) and Elon Musk have expressed concern that such artificial superintelligence would have little use for humans, and that our species may even be at risk of extinction. 

Others are more sanguine, suggesting that such a superintelligence would have little motivation to dominate or get rid of humanity, and may even prove to be a benevolent force.

 

But even if the singularity does not come to pass, many fear mass unemployment as human jobs are taken over by computers, robots and AI. One cross-sector study, conducted by Oxford University in 2014, found that 47% of jobs were at risk of automation by 2034.

 

Again, the counterargument is that such computer-driven work can help improve our lives through better and cheaper products and services, while a new generation of jobs will be created for humans who can work with the technology.

Big Data, Big Brother

Another threat identified by the doomsayers stems from the rise of so-called Big Data. In this world, technology manipulates us into behaving in ways that might not be in our own best interests. Many argue that this world has already arrived.

 

Historian Yuval Noah Harari, author of the bestsellers Sapiens and Homo Deus, says humanity is entering a new era of "dataism". He notes that this is being driven by advances in two fields:

 

  • Neuroscience: Biologists and psychologists are learning more and more about how the brain works – in particular, how our decisions are driven by deep-seated emotions and processes within our brains. This is deepening knowledge about how humans can be persuaded to act in certain ways.
  • Big Data: As a result of advances in data collection, processing and analysis (and the lower costs of storage and processing), there is a wealth of information and patterns about ourselves and our behaviour.

 

Together these are a powerful combination, Harari argues. Systems will be able to monitor and understand us better than we understand ourselves, and authority will shift from humans to algorithms. "Big Data could then empower Big Brother," he says.

A tale of two paradoxes

Those who are positive about the future of AI like to quote two paradoxes. The first is Moravec’s paradox, named after the robotics scientist Hans Moravec. In short, the paradox states that those things that humans find difficult to do, robots and AI find easy. But those things that humans find easy or simple to do are very hard for robots and AI to replicate.

 

Simple examples of the former are playing chess or scanning insurance claims for certain kinds of information, such as the age of the claimant, the circumstances of the claim or any other data the insurer will find useful. High-powered calculations are another. Computers can be programmed to do all of these quickly and efficiently, and humans cannot hope to compete with machines on this front. Machines, after all, don’t get tired or bored. And with monitoring technology, improved algorithms and increased computing power, the scope for machines to do more of the jobs formerly done by humans increases.

 

But machines find it difficult to do things that even a small child does with ease. These are the sensorimotor skills that we regard as almost innate to being human. Simply carrying a box over slightly rugged terrain is hard work for a robot or android to get right. A robot in Singapore was recently trained to assemble an IKEA chair; it finished the job, but not without difficulty.

Those things that humans find easy or simple to do are very hard for robots and AI to replicate.

This is because these skills are the result of millions of years of evolution, and the activities involve an intricate interaction between the brain, nervous system and muscles. They are activities that seem very simple to us but are, in fact, extremely complex. 

 

As Moravec's colleague, Marvin Minsky, remarked: "We're more aware of simple processes that don't work than of complex ones that work flawlessly."

 

This leads to a second paradox, called Polanyi's paradox. It is named after the philosopher Michael Polanyi, who in the 1960s wrote about a "tacit" dimension to human development. Much of what we have learned as humans, and indeed much of our advancement and development (maybe even most of it), has come from things passed on to us through culture, tradition and other forms of tacit influence. His thinking can be summarised with the slogan: “We know more than we can tell.”

 

This is where the paradox comes in. Because so much of our knowledge is tacit, we cannot pass it on to computers, bots, robots and other forms of AI. So, we should always have an advantage over the machines. However smart they are, humans will find some way to improve on them. 

 

Nadia Comaneci, who at the 1976 Olympic Games became the first gymnast to score a perfect 10, perhaps made the point well when commenting on a new AI application being piloted by Fujitsu to judge events like gymnastics, diving and figure skating. While the system may be able to judge an athlete extremely accurately thanks to piles of data from previous events, it can't account for creativity.

"Gymnasts are known for pushing the skills, looking for new angles, turns, points – so what happens when someone comes along with a totally different routine that has not been seen or registered by the computer?"

Nadia Comaneci in an article in The Guardian

"Gymnasts are known for pushing the skills, looking for new angles, turns, points – so what happens when someone comes along with a totally different routine that has not been seen or registered by the computer?"

Nadia Comaneci in an article in The Guardian

Machines that learn

These paradoxes provide some relief for those of us worried about humanity’s future, but perhaps we shouldn’t be too complacent. A particular branch of AI, called machine learning, could be the way for machines to bridge the gap.

 

Machine learning is defined as the ability of systems to learn from data and adapt without having to be explicitly programmed. In conjunction with Big Data, machine learning is already being applied in our daily lives. The way adverts are served on websites or social media pages, and the way recommendations are made on online shops, are examples of machine learning at work. The 'bots' learn what your interests or needs are based on your online behaviour, and then show adverts or content accordingly. Similarly, spam filters draw on machine learning to block unwanted emails.
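For the technically curious, the sketch below shows the basic idea in a few lines of Python. It is purely illustrative – the handful of made-up emails and the choice of the scikit-learn library are assumptions for the example, not a description of how any real spam filter or recommendation engine is built.

```python
# Illustrative sketch only: a toy spam filter that learns from labelled
# examples rather than hand-written rules. The tiny dataset is made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labelled training examples: 1 = spam, 0 = legitimate
emails = [
    "win a free prize now",
    "claim your reward, click here",
    "meeting agenda for Monday",
    "please review the attached report",
]
labels = [1, 1, 0, 0]

# Turn each email into word counts, then fit a simple Naive Bayes classifier
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB()
model.fit(features, labels)

# The model can now score unseen messages without any explicit rules
new_email = ["free prize, click now"]
print(model.predict(vectorizer.transform(new_email)))  # expected: [1], i.e. flagged as spam
```

The same pattern – learn from examples of past behaviour, then predict the next case – sits behind recommendation engines and targeted advertising, just with far bigger datasets.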

 

Many of the advances in machine learning involve concepts like neural networks and natural language processing. With these, machines can recognise images and identify visual characteristics, as well as understand human language and learn to respond in ways humans can understand. Think of voice assistants like Siri and Alexa. There are even applications that can write news articles or market reports.

IBM Watson

Perhaps the best-known example of machine learning is IBM Watson. Apart from beating humans in the game show Jeopardy! in 2011, IBM Watson is being put to use to solve some of the world's big problems, such as the early detection and treatment of disease. One experiment showed that a computer algorithm picked up lung cancers in 90% of instances, compared with human doctors' 50%.

[Chart: the cost of data storage (source: Barclays; Mearian, Lucas, Computerworld, 2017)]

As the algorithms, AI and machine learning get better, they also get better at getting better, thanks to improving processing power and data storage capability. Moreover, once something is learned, it can be immediately shared with all other machines, which further speeds up the process. Humans, by contrast, have to learn new things individually.

Lessons from history

Better techniques for making things, and completely new technologies like cars, have killed off or reduced demand for many occupations over the years. Farming, for example, employed 40% of the US workforce at the turn of the 20th century but now accounts for only 2% of the labour force.

 

However, the US now enjoys an unemployment rate that is at a multiyear low, so other jobs have clearly taken their place. Indeed, many of the new jobs that have arisen have been indirectly created out of the forces that saw agricultural employment fall. Better transport systems, crop planting techniques, refrigeration, etc all helped agriculture to consolidate, but also contributed to creating new employment elsewhere.

 

So, just as the advent of the motor car reduced demand for coachmen, grooms and whipmakers, demand arose and increased for a new range of jobs, from car factory workers to mechanics to salespeople and panel beaters. Elsewhere, clerical, service, professional and technical jobs grew. 

 

Furthermore, improvements in technology have made our lives better in ways that have allowed us to become more productive and capable of adapting to change. Medical breakthroughs, such as the polio vaccine, have allowed more people to live normal lives and not be a burden on the healthcare system. The diagnostic capabilities of AI noted above save lives, keep people productive and allow them to live longer, more useful lives. Technologies in the classroom and online courses mean people can upskill themselves all the time and find new ways to stay relevant in a changing world.

One experiment showed that a computer algorithm picked up lung cancers in 90% of instances, compared with human doctors' 50%.

Economists refer to the 'lump of labour' fallacy, which assumes there is a set number of jobs out there, and that when one group of jobs falls away, there is nothing to replace it. New jobs are being created all the time, though it may not always be easy for workers to find new work.

Employees as consumers

Henry Ford II (examining new automated processes on a factory tour): "Walter, how are you going to get those robots to pay your union dues?"

 

Walter Reuther (1950s union leader): "Henry, how are you going to get them to buy your cars?"

 

The above story (which may be apocryphal) demonstrates one of the problems with human jobs being replaced by machinery: workers are also consumers, who buy the goods and services that they or other workers make. At its peak, the US motor industry was one of the country's leading employers (as were the motor industries of Japan, Germany, France and others in theirs). Importantly, employees were also customers of the industry.

 

In the end, foreign competition hurt the US motor manufacturing industry, to the point that large parts of Detroit are almost like a ghost town today because of the loss of jobs. Rust belts are not an uncommon sight in many advanced industrial countries, with once-booming towns in permanent decline after their mill or mine was shut down.

 

The point, though, is that jobs were created in other parts of the economy – notably in services, finance and tech – although, of course, the industry workers themselves may not have been able to benefit. People in these jobs presumably continue to buy cars. 


Such structural unemployment is bad news for those directly affected, but it is not necessarily bad for the economy: the UK managed its traumatic transition from a manufacturing economy in the 1980s to one built around financial services reasonably well.

 

But some question whether future transitions will be even more traumatic, threatening workers at all levels, including highly skilled ones. Such a transition may already be beginning if we look at the US tech industry as an example.

 

The US tech giants of today – Facebook, Apple, Amazon, Google and others – are as important in the global economy as the motor car manufacturers of yesteryear, but one big difference is that they employ far fewer people. In his book, Rise of the Robots, Martin Ford notes that at its peak of employment in 1979, General Motors employed 840 000 people. In 2012, at similar profit levels (adjusted for inflation), Google employed about 38 000 people.

 

Algorithms and other forms of AI already do a lot of the work that humans do.

 

Are new jobs being created elsewhere, thanks to the tech boom? We will have to see.

[Chart: automation potential by sector (source: McKinsey Global Institute)]
Moravec's paradox suggests, again, that some sectors of employment will be less affected than others, at least in the medium term.

Jobs that require a high level of sensorimotor skills – gardening, cleaning or certain types of medical care (the work of paramedics and nurses, for example) – are likely to fare better than clerical jobs.


 

 

Work that requires empathy (nursing again, or social work) or persuasion (such as sales roles) is likely to stay in demand. However, one can see even these professions having to integrate with technology to provide the best service. A nurse would need to be able to work with the diagnostic tools; a salesperson with data to better understand the customer.

 

 

[Chart: driving-related jobs (source: Bureau of Labor Statistics, Occupational Employment Statistics, 2015)]

 

Within transport, long-distance trucking may be hard-hit by the growth in autonomous vehicle technology, but there may still be room for those who have to deliver packages to difficult-to-get-to locations.

 

Think of the delivery of a new lounge suite up a flight of stairs, for example. Drivers who remain on the job would, in turn, benefit from autonomous vehicle technology.

The future of finance and investment

The financial services industry faces some of the biggest challenges in adapting to the new era of AI. Apart from clerical work such as processing insurance claims and loan applications, there is also the potential for changing the client experience completely. Natural language processing will allow clients to interact with a financial institution in a cheap and effective way.

 

IBM reckons that clients make an average of 65 service calls a year, at a cost of US$1 trillion. So-called 'chatbots' can reduce an institution's related costs by 30%, one study has found. 


Big Data is already being used by financial institutions to refine their offerings. Smart devices now deliver information about physical activity that is used by health insurers to 'nudge' customers towards healthier behaviour.

Similarly, tracking our financial records can nudge us towards better savings habits, such as making it easier for us to allocate a windfall to servicing debt or investing for the future. All of this is predicated on the notion of the technology being used for positive nudges, of course.

 

Machine learning also has a role to play in detecting fraud and money laundering, by picking up statistically unusual account behaviour, for instance. Facial and voice recognition software can also help in the fight against criminals.
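As a rough illustration of what 'statistically unusual account behaviour' can mean in practice, the sketch below applies a standard anomaly-detection technique (an isolation forest from the scikit-learn library) to a handful of made-up transactions. The features, figures and settings are assumptions chosen purely for the example.

```python
# Illustrative sketch only: flag transactions that look statistically unusual.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is a transaction: [amount, hour of day, transactions in past 24h]
transactions = np.array([
    [45.0, 14, 3],
    [60.0, 10, 2],
    [52.0, 16, 4],
    [48.0, 11, 3],
    [9500.0, 3, 15],   # a very different-looking transaction
])

# Fit an isolation forest, assuming roughly 20% of the sample may be anomalous
model = IsolationForest(contamination=0.2, random_state=0)
model.fit(transactions)

# predict() returns -1 for transactions judged anomalous and 1 for normal ones
print(model.predict(transactions))  # expected: [ 1  1  1  1 -1]
```

A real anti-fraud system would, of course, use far more features and far more history, but the principle of scoring each transaction against the pattern of past behaviour is the same.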

 

Robo advice, in which technology helps the client to build an investment portfolio using low-cost products such as index-tracking exchange-traded funds (ETFs), is already helping to drive down the cost of investment. To use the industry jargon, "beta is becoming commoditised" (translation: clients are less likely to pay high fees for simply matching the performance of the overall market). Alpha (outperformance) will, however, still be rewarded.
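At its simplest, the logic behind robo advice can be thought of as a set of rules mapping a client's risk profile to a mix of low-cost funds. The sketch below is a deliberately naive illustration of that idea – the risk scale, fund names and weightings are hypothetical, and it is not how any particular robo adviser actually works.

```python
# Illustrative sketch only: map a 1-10 risk score to a hypothetical ETF mix.
def suggest_allocation(risk_score: int) -> dict:
    """Return a toy equity/bond split for a risk score from 1 (cautious) to 10 (adventurous)."""
    equity_weight = min(max(risk_score, 1), 10) / 10  # clamp to 1-10, scale to 0.1-1.0
    return {
        "GLOBAL_EQUITY_ETF": round(equity_weight, 2),    # hypothetical fund name
        "GLOBAL_BOND_ETF": round(1 - equity_weight, 2),  # hypothetical fund name
    }

print(suggest_allocation(3))  # e.g. {'GLOBAL_EQUITY_ETF': 0.3, 'GLOBAL_BOND_ETF': 0.7}
print(suggest_allocation(8))  # e.g. {'GLOBAL_EQUITY_ETF': 0.8, 'GLOBAL_BOND_ETF': 0.2}
```

Real robo advisers layer questionnaires, regulation and rebalancing on top of this, but the commoditisation of 'beta' follows from exactly this kind of cheap, rules-driven allocation.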

 

AI and Big Data also have a role to play in investment analysis and portfolio management. Much of economic, market and company research involves the processing of data, which AI should be able to do in the not-too-distant future. Machines can plough through economic data and financial statements far better than any human, and detect patterns and trends that may trigger important buy and sell decisions. They can also act to offset behavioural biases in the portfolio manager or investor. On the other hand, there should still be scope for those skills that machines can’t replicate, such as intuition.

 

AI can also absorb data not reported by listed companies or government statistical offices. Think of sensors that pick up footfall at shopping malls, or traffic on major roads, which can serve as early warning signs of a turn in the economic cycle. Think also of algorithms that scan social media for keywords that might signal changes in economic sentiment.
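As a toy illustration of the keyword-scanning idea, the sketch below counts positive and negative economic keywords across a couple of made-up posts and nets them off into a crude sentiment score. The word lists and posts are assumptions for the example; real systems rely on far richer language models and data feeds.

```python
# Illustrative sketch only: a crude keyword-based sentiment score for social posts.
POSITIVE = {"hiring", "expansion", "growth", "record sales"}
NEGATIVE = {"layoffs", "closure", "recession", "falling demand"}

def sentiment_score(posts):
    """Return positive keyword hits minus negative keyword hits across all posts."""
    score = 0
    for post in posts:
        text = post.lower()
        score += sum(word in text for word in POSITIVE)
        score -= sum(word in text for word in NEGATIVE)
    return score

posts = [
    "Local factory announces expansion and hiring drive",
    "Retailer warns of falling demand ahead of the holidays",
]
print(sentiment_score(posts))  # 2 positive hits minus 1 negative hit = 1
```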

 

All of these changes should help lower the costs of finance for both institutions and customers. They also open the industry up to potential new entrants, such as in alternative financing (crowdfunding or blockchain applications, for example) or low-cost share and foreign exchange dealing. At the same time, they can be empowering for all involved: wealth managers will have more powerful tools at their disposal, which should allow them to offer clients products and services better suited to their needs – and to build deeper relationships with them.

 

According to the Oxford University research cited earlier, 30% of jobs in finance are at risk from automation, compared with 47% of jobs overall. The reason for this, the researchers argue, is that the industry should be able to develop new products and focus on high value-add services. Given the relatively high levels of education in the industry, it is perhaps also better able to retrain staff in new functions.

Conclusion: the future may be bright, after all

Man is the lowest-cost, 150-pound, non-linear, all-purpose computer system which can be mass-produced by unskilled labour – NASA report

 

Many times in history, doomsayers have predicted the permanent loss of livelihood for workers because of technology. Maybe it is the nature of technology that makes people fear the worst: they are unable to picture the sorts of jobs that the new technology will create. They are also perhaps unable to get their heads around just how much technological breakthroughs improve their lives.

Loss of employment is not simply about a loss of income, but also about a loss of purpose.

Earlier detection of diseases, vaccine discoveries and other medical advances undoubtedly save lives and improve quality of life. Seemingly minor inventions like spectacles have helped to enhance and extend the working lives of many skilled persons over the years (the author would not be writing this article without them), with positive knock-on effects for others.

 

So, perhaps our fear of the robots and AI stems from our inability to imagine how our lives will change and what human skills will be required to interact with the computers. It may well be that many more careers will be created than lost.

 

But what if AI advances to such a degree that large-scale human redundancy becomes unavoidable? This is a scenario plotted out by Ford in Rise of the Robots.

 

This scenario would certainly be dystopian if redundant humans became marginalised. The resulting social unrest could undermine the benefits of the improved technology.

 

As a solution, US author and entrepreneur Peter Barnes proposes a universal basic income paid to everyone affected by job losses. He recognises that loss of employment is not simply about a loss of income, but also about a loss of purpose. So, he proposes further incentives for individuals who improve themselves or who engage in activities that make the world a better place. There would be incentives to finish school, go to university and pursue postgraduate study. Volunteering for charity or community work would also be rewarded. People could engage in things that they find meaningful, while there would be fewer disaffected people about, making society happier and more cohesive in the process. And, of course, there would be the technology to help them do all of these things.

About the author


Patrick Lawlor

Editor

Patrick writes and edits content for Investec Wealth & Investment, and Corporate and Institutional Banking, including the Daily View, the Monthly View and One Magazine – an online publication for Investec's Wealth clients. Patrick was a financial journalist for many years for publications such as Financial Mail, Finweek and Business Report. He holds a BA and a PDM (Bus. Admin.), both from Wits University.
