
How AI is set to cross the threshold from human aid to human replicant




The team at Cambridge Consultants will be working hard to meet the challenges and opportunities 2022 presents.

Fafaza agritech uses AI to identify plants

In a review of last year’s achievements, the technology and innovation company’s chief commercial officer, Richard Traherne, offered an “end-of-year pick of the trends and breakthroughs that are sure to have a significant impact on the future of global business”.

The most urgent of the four projects highlighted in the selection concerns the rapid transition to net zero: this has come into sharper focus because Capgemini – which acquired Cambridge Consultants in 2020 – has a commitment to net zero by 2030 embedded in its mission statement.

Of the four key themes outlined by Richard Traherne, two relate to artificial intelligence (AI), one to climate change and one to aviation.

What’s of note about the versions of AI presented by Cambridge Consultants researchers is how divergent they are, signalling a technology still at an early-ish stage of evolution which could go any number of ways.

This make-or-break year for AI sees the relationship between humans and machines on the cusp of a momentous step change, with significant technological progress ensuring that AI-enabled devices can make deeper inroads into sharing human experience – if that’s what we want.

The late Prof Stephen Hawking’s warnings about AI sent shock waves around the world. Picture: Joe Giddens/PA

The speed of advances in AI is matched by the responsibility of those involved to explain how the technology will be used. Everyone in Cambridge will remember that, just four years ago, physicist and humanitarian Stephen Hawking said that AI “could develop a will of its own”, which could become the “worst event in the history of our civilisation”.

In a recent white paper titled ‘Navigating the path from automation to autonomy’, Cambridge Consultants’ authors say that “phenomenal advances in artificial intelligence, robotics, wireless connectivity and edge computing are opening up transformative commercial opportunities” – but how many people understand or are aware of the radical alterations our lives will undergo as the development of AI accelerates into unknown territory?

The white paper discusses immediate commercial opportunities that are too significant to be turned away. For instance, the ‘AI at the edge’ approach is illustrated by Fafaza, a breakthrough crop-spraying concept developed at Cambridge Consultants. It performs both plant recognition and precise, individualised treatment in the field, in real time. The system comprises off-the-shelf components and an AI computing platform costing less than $100. That sounds useful (if you support crop-spraying in the first place). Also considered are fruit-picking robots, currently under development in Cambridge. Again, useful, as is all of the technology being developed at Cambridge Consultants.
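To make the ‘AI at the edge’ idea concrete, here is a minimal sketch of how a lightweight on-device classifier might gate a sprayer nozzle. The tiny network, class labels, image size and decision threshold are illustrative assumptions, not details of Fafaza’s actual system.

# A minimal sketch of edge-based plant recognition gating a sprayer.
# The tiny network below stands in for a properly trained classifier; the
# labels, frame size and decision rule are illustrative assumptions only.
import torch
import torch.nn as nn

LABELS = ["crop", "weed", "soil"]  # illustrative classes

class TinyPlantClassifier(nn.Module):
    """A stand-in for a model small enough to run on a sub-$100 edge board."""
    def __init__(self, num_classes: int = len(LABELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyPlantClassifier().eval()  # untrained here; a real system would load weights

def should_spray(frame: torch.Tensor, threshold: float = 0.8) -> bool:
    """Spray only when the model is confident the frame shows a weed."""
    with torch.no_grad():
        probs = torch.softmax(model(frame.unsqueeze(0)), dim=1)[0]
    return LABELS[int(probs.argmax())] == "weed" and float(probs.max()) >= threshold

frame = torch.rand(3, 224, 224)  # one camera frame, normalised to [0, 1]
print(should_spray(frame))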

The three key sectors for AI progress assessed in the white paper are smart infrastructure, agritech solutions, and the logistics/retail sector.

Tim Ensor, head of AI at Cambridge Consultants. Picture: Keith Heppell

Tim Ensor, director of AI at Cambridge Consultants, speaking on ‘Rethinking AI investments, redefining success’ at the AI Summit London 2021, was bullish.

“We’re seeing AI general-purpose tools emerging which allow us to be much more efficient about the way we put our AI into practice, so some quite interesting underpinnings [have been] emerging in the last year,” he said, adding that “Arm putting neural accelerator technology into edge devices will allow us to run AI much more efficiently in many different contexts.”

One of the contexts is autonomous vehicles (AVs). Migrating to the era of autonomous driving requires vast amounts of data – so much that we effectively move from today’s analogue world with data bolted on to an AI-enabled, AI-run world with humanity bolted on.

On AVs, Tim notes: “The problem is exacerbated by one of our key themes – the barrier to innovation caused by the need to capture massive amounts of real-world data to train the system. In response to the challenge, a team here at Cambridge Consultants developed a concept called EnfuseNet, which fuses data from extremely low-cost sensors and cameras to generate high-resolution depth data, the reference point that autonomous systems need.”
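The fusion idea can be illustrated with a short, hedged sketch: a small network takes an RGB frame plus a sparse depth channel from a low-cost sensor and predicts a dense depth value for every pixel. The layer sizes and toy usage below are assumptions for illustration and are not EnfuseNet itself.

# A hedged, minimal sketch of camera/sensor fusion for dense depth estimation,
# in the spirit of what the white paper describes. Architecture and sizes are
# illustrative assumptions, not the EnfuseNet design.
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    """Fuse an RGB frame with a sparse, low-cost depth channel into dense depth."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),   # 3 RGB + 1 sparse-depth channel
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),   # one dense depth value per pixel
        )

    def forward(self, rgb: torch.Tensor, sparse_depth: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, sparse_depth], dim=1)          # fuse inputs channel-wise
        return self.net(x)

# Toy usage: one 480x640 frame plus a mostly empty depth map from a cheap sensor.
rgb = torch.rand(1, 3, 480, 640)
sparse_depth = torch.zeros(1, 1, 480, 640)
sparse_depth[:, :, ::40, ::40] = torch.rand(1, 1, 12, 16)  # a few real readings
dense_depth = TinyFusionNet()(rgb, sparse_depth)
print(dense_depth.shape)  # torch.Size([1, 1, 480, 640])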

Data centres will be the engine room of this brave new world: Tim references “increasing power from things like the Cambridge-1 supercomputer which NVIDIA has put in place to help innovators with some of the most compute-intensive tasks”.

Cambridge Consultants’ EnfuseNet before, left, and after

There are a lot of variables AI expertise needs to address before the technology becomes fully embedded into our lifestyles. How things might go wrong was artfully described in the recent Reith Lectures. Corporations are encouraging AI to develop, if not will, then certainly the ability to make autonomous decisions. Combine this with the fact that pretty much every technological advance in humanity’s history has been put to military use before any other, and you have any number of potential calamities – that’s not my view, but the view of Stuart Russell, professor of computer science and founder of the Centre for Human-Compatible AI at the University of California. In the final lecture of the 2021 Reith Lectures, Prof Russell explored questions around human control over increasingly capable AI systems.

Prof Russell gave a simple example: instruct an AI-powered robot to halt climate change and it might simply go out and kill every human alive – that, after all, is the logical solution to our climate change problem. Just one such programming oversight could terminate humanity’s tenure on planet Earth.

Sally Epstein, head of strategic technology at Cambridge Consultants, says that if AI were a human it would be in its naughty teenage years, when rules are still being negotiated. In a November opinion piece titled ‘Human-Machine Understanding: how tech can help us to be more human’, she said that “AI has only reached adolescence – everything will change when machines learn to learn”.

Sally added: “There hasn’t been enough of the right kind of progress for me… We’ve made great strides making machines with logical intelligence – but what about the social, emotional or even ethical intelligence?”

In other words, when we hand over the data log of our daily experience – including the social/emotional/ethical choices we make and why – to AI, it will be better able to serve us. An AI Ethics course at the University of Cambridge will help us understand the mechanics of the concordat.

Dr Sally Epstein, head of strategic technology at Cambridge Consultants

To get there, Sally proposes, we have to stop treating AI as “a data problem to be met with bigger machines”.

“We know that empathy is not equal to the amount of data processed,” she suggests, adding: “To truly move forward, I believe machines need to truly understand humans, and that can only mean one thing – HMU.”

HMU, ‘human machine understanding’, is required for AI to better serve us, and what we need to achieve is “real time understanding of human state… starting perhaps by pushing the boundaries of neural interfaces, non-invasive measurement, wearables and so on”.

“I expect products and services to understand my state and make decisions to aid me in my life,” Sally concludes. “Intelligent systems will undoubtedly continue to improve in their ability to calm, comfort and soothe us, to earn our trust and rapport. And it will happen when neuroscientists and psychologists successfully join forces with engineers to teach computers to truly understand humans. Viva la revolution.”
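As a deliberately toy illustration of what ‘real-time understanding of human state’ could mean at its simplest, the sketch below maps a short window of wearable heart-rate samples to a coarse state label. The signal choice, thresholds and labels are invented for illustration; genuine human-machine understanding would involve far richer sensing and modelling.

# A toy illustration of inferring a coarse "human state" from wearable data.
# The resting rate, thresholds and labels are invented for illustration only.
from statistics import mean

def estimate_state(heart_rate_bpm: list[float], resting_bpm: float = 60.0) -> str:
    """Map a short window of heart-rate samples to a coarse state label."""
    if not heart_rate_bpm:
        return "unknown"
    elevation = mean(heart_rate_bpm) - resting_bpm
    if elevation > 30:
        return "stressed"   # a device might respond by trying to calm the user
    if elevation > 10:
        return "active"
    return "calm"

print(estimate_state([62, 64, 61]))   # calm
print(estimate_state([95, 101, 98]))  # stressed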

Much has rightly been made of what will happen to humans after we have handed over the cognitive keys of our humanity to machines. Is it possible that, in the haste to get AI over the line in 2022, we may have to blur the line between technology assisting humans and technology replacing humans? And what is our place in the great scheme of things if – or indeed when – that happens?


