Managed.IT issue 66

Time to retire the Turing Test?

OPINION: Dr John Bates, CEO of Intelligent Content Automation leader SER Group and former Cambridge University computer science lecturer, explains why it's time for an alternative to Alan Turing's benchmark for machine intelligence.

On the 70th anniversary of his untimely passing on June 7th, many of us will have quietly raised a glass to the Father of Computer Science, Alan Turing, a figure whose contributions have profoundly shaped the field of information technology. One of those contributions is the famous Turing Test, the benchmark for when a machine could be said to be 'thinking'. Published in 1950, his idea centres on a machine imitating human responses well enough to deceive an interlocutor into presuming they must be speaking with another person rather than a set of programming instructions. It was devised to answer the then highly novel question 'Can machines think?'. Turing predicted this would be achievable within 50 years, but it didn't take that long to develop machines capable of mimicking human responses. Within just 14 years, a program called Eliza became so good at conversing that interlocutors began confiding their secrets to her.
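Eliza's trick was not understanding but pattern-matching: simple rules that reflect the user's own words back as questions. A minimal sketch of that idea, with hypothetical rules for illustration rather than Weizenbaum's original script, looks like this:

```python
import re

# Hypothetical ELIZA-style rules: each pattern captures part of the user's
# utterance and echoes it back inside a templated question.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*)\bmother\b(.*)", "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default deflection keeps the conversation moving

print(respond("I need a holiday"))  # Why do you need a holiday?
```

A handful of such rules, looped over a conversation, was enough to convince many interlocutors they were being listened to.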
While we could argue that the Turing Test was effectively passed in that era, even the most impressive AI systems of the 1970s and 1980s were notoriously brittle, excelling at one specific task but unable to adapt to different use cases. Now a significant shift is underway with large language models (LLMs) such as ChatGPT. These models possess billions of parameters and are trained on a vast corpus of information covering substantial portions of the public Internet.

Even so, anyone who spends time with these systems quickly sees that, while remarkably competent, they do not possess true intelligence: they are exceptionally good autocomplete algorithms. It is their ability to generate coherent answers to our questions and prompts that creates the illusion of genuine sentience, and, a few fringe AI researchers who believe LLMs are achieving sentience aside, the interactions remain limited. Think of a chatbot that can't fully understand you but stubbornly refuses to put you through to someone who can.

That suggests that while computers have, in some ways, reached the conversational abilities Turing described as necessary to pass his experiment, we still don't view these systems as our equals. AI researchers have never been entirely satisfied with the Test, and in a world where rapidly improving AI is ubiquitous, should we seek a better litmus test for machine intelligence?

As it stands, we're a long way off that. ChatGPT isn't capable of generating novel ideas or information autonomously; it relies on the data present in its training dataset. I'm happy to acknowledge that this represents a distinct form of intelligence, and perhaps this is partly how humans learn too, but I believe humans go well beyond it in being able to invent knowledge, not just apply it. We'd all benefit if we stopped wishing our machine assistants were as intelligent as us and instead focused on maximising their usefulness and efficiency, regardless of how they achieve it.
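The "exceptionally good autocomplete" jibe has a precise meaning: at bottom, these models predict a plausible next token from what came before. A toy bigram predictor makes the objective concrete; LLMs do something vastly richer with billions of parameters, but the goal of this sketch is the same in spirit (the corpus here is invented for illustration):

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny corpus, which word most often follows it.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    # "Autocomplete": return the most frequent successor seen in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # cat
```

Nothing here understands cats or mats; it only reproduces statistical regularities in its training data, which is exactly the limitation the column describes.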
Consider Bridget Jones (happily returning to our screens in another instalment of her life story). In the most relatable way, Bridget records all her innermost thoughts on dating, and her calorie intake, on traditional paper. I believe we are very close to having brilliant digital diaries that could act as a super-smart friend to Bridget, telling her how much wine she has actually consumed that month. Would Bridget consider this smart diary human, as Turing's Test would have it? I believe she would view it, as we should regard all the new AI assistants entering our lives, as the latest convenience and a helpful tool.

While acknowledging the Turing Test's importance in the history of AI, a more compelling test of machine intelligence than winning the imitation game would be, as some have suggested, to give ChatGPT $100,000 and task it with turning it into $1,000,000. How do you suppose it would cope with that?
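For what it's worth, Bridget's wine-tallying diary needs nothing like sentience. A hypothetical sketch of the useful-tool version, with invented entries, is a few lines of bookkeeping:

```python
from collections import defaultdict
from datetime import date

# Hypothetical diary log: (day, units of wine) pairs, invented for illustration.
entries = [
    (date(2024, 6, 1), 2),
    (date(2024, 6, 14), 3),
    (date(2024, 6, 28), 1),
]

def units_by_month(log):
    # Total the units per (year, month) - the tally a paper diary never does.
    totals = defaultdict(int)
    for day, units in log:
        totals[(day.year, day.month)] += units
    return dict(totals)

print(units_by_month(entries))  # {(2024, 6): 6}
```

Useful, convenient, and plainly not a person: the point of the column in miniature.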
