The number of interpretations of what has become known as Artificial Intelligence (AI) is matched only by the exaggerated claims of what it will provide or the havoc it could wreak, depending on your point of view. I wouldn’t dare suggest there is universal agreement on how to categorise it, but there is at least a broadly accepted working definition: AI is a term for a program or a machine that performs a task which, if a human carried it out, we would say required intelligence to accomplish. [1] That is a definition broad enough to generate a healthy argument, but there is something in it for everyone: learning, reasoning, problem-solving, data manipulation and perhaps even creativity. Its applications today are limited only by our imagination.
There is at least one distinction that is probably important to make. Systems that perform precise tasks (such as recognition), even without being explicitly programmed to do so, represent a narrower form of AI than genuine sentience, the representation of AI most commonly seen on the big screen. Sentience is something else entirely and represents the sort of ‘thinking’ associated with humans. That capability doesn’t exist yet, but its prospect has fuelled the AI ethics debate, which is already wrestling with the current, impressive ability of systems to perform ‘judgmental’ tasks based on the clever application of layers of logic and mass data. There is, of course, an element of ‘machine’ learning here, and the parameters and applied logic that ultimately drive answers need to come from somewhere; hence the ethical dimension. How such systems are used is also a growing area of debate: one could hardly have missed the controversy over CCTV facial recognition trials in the UK, or the news that facial recognition is becoming a prerequisite for internet access in China. Given this revolution and the depth with which it will affect every facet of our lives, what are the norms and frameworks that govern what is and is not acceptable? Elon Musk and the late Stephen Hawking both issued warnings about unharnessed AI, but there is a considerable difference between the careful, ethical control of what we have now and in immediate prospect, and guarding against the possible human-like thinking machines of the future. There is enough to think about already: the shift in the labour market, the protection of privacy and security, and the fact that it is one thing to have the computer say “no”, but quite another to understand why it said it.
The potential benefits of AI sit all around us: driving forward cutting-edge medical research, increasing organisational productivity and addressing the climate crisis. Massive global investments are being made by US household names and by Chinese firms such as Alibaba, Baidu and Lenovo. The UK is, however, well positioned to compete with these heavyweights and to lead the conversation around the ethical use of AI by building on our strengths. Across the UK’s public, private and academic sectors, great work is underway to build an AI innovation ecosystem, with large firms looking to work with start-ups, and organisations such as Digital Catapult offering support to SMEs in the space. This, in tandem with proactive public sector initiatives such as the formation of the cross-departmental Office for AI, and our world-leading academics pooling their expertise in The Alan Turing Institute, demonstrates the power of collaboration in harnessing AI for the maximum economic and societal benefit to the UK.
While it is difficult to predict the future, what WIG’s three and a half decades of experience allow us to say with some certainty is that working collaboratively across the sectors will be key to tackling big challenges such as harnessing the power of AI.
Explore our Technology and Digital events
[1] A definition attributed to Marvin Minsky and John McCarthy in the 1950s.
Originally published: 06 December 2019