Take away the hype and all the excitement around artificial intelligence and you quickly begin to realise that, as a term, AI is horribly misused and misunderstood in general–and it’s largely its own fault.
Lacking a clear and precise (or at least widely accepted) definition, AI is riddled with the sort of vagueness that leads to the blanketing of whole swathes of the technology landscape and attracts the buzz of marketing flies like the corpse of a bloated technological whale. Type ‘AI’ into a search engine and you are likely to find a dizzying array of differing definitions and proposed sub-types.
Like so many concepts, Artificial Intelligence was first dreamt up by science fiction authors peering myopically (or perhaps more grimly–with surprising clarity) into our dystopian futures. From those minds sprang the concept of intelligence ascribed not to a human but to a machine. More specifically, a machine demonstrating a thinking process similar to, or capable of being mistaken for, that of a human.
Why exactly any machine should aspire to the dubious pinnacle that is human intelligence remains a mystery to me. Case in point: with decades of accumulated experience and wisdom with which to guide informed decision-making, you would think I would know better than to get drunk and message my ex, but here we are. Lofty aspirations indeed. If there’s one thing humans can do with absolute dependability it is make bad decisions, and often the same bad decisions, apparently.
Personal shortcomings aside, the concept of AI as anything where technology is able to pass for human, or to perform tasks normally ascribed to a human, is both incredibly subjective and loose enough to invite gross misuse. In fairness, you don’t see many plants or other animals doing basic math, but by that definition my calculator qualifies as artificial intelligence.
In more recent times, the term AI seems to have been narrowed, at least in common parlance, to cover three broad areas: processes which involve algorithms, machine learning and deep learning. Deep learning follows the basic premise of processing large quantities of data through a complex, layered logic structure (an artificial neural network) to draw conclusions. Machine learning, on the other hand, involves feeding large quantities of data through an algorithm that is automatically iterated to improve its accuracy and capability.
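To make that second flavour slightly more concrete, here is a deliberately tiny sketch of what ‘an algorithm that is automatically iterated to improve its accuracy’ can look like in practice. The data, numbers and names here are entirely my own invention; real systems are vastly more elaborate, but the loop is the same in spirit.

```python
# Purely illustrative: a toy "machine learning" loop. The model's parameters
# are adjusted automatically, iteration by iteration, to better fit the data.

# Toy data: y is roughly 2*x + 1, which the model does not know in advance.
data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8), (4.0, 9.1)]

weight, bias = 0.0, 0.0      # the model's parameters, initially uninformed
learning_rate = 0.01

for step in range(5000):
    # Gradient of the mean squared error with respect to each parameter.
    grad_w = sum(2 * ((weight * x + bias) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((weight * x + bias) - y) for x, y in data) / len(data)

    # Nudge the parameters in the direction that reduces the error.
    weight -= learning_rate * grad_w
    bias -= learning_rate * grad_b

print(f"learned: y ~= {weight:.2f}x + {bias:.2f}")  # close to y = 2x + 1
```

No human sat down and worked out the final values of those two numbers; the loop found them by repeatedly measuring its own error and correcting itself, which is all the ‘learning’ amounts to here.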
Interestingly, one thing that both machine learning and deep learning have in common is that both are designed to perform very specific tasks, often with surprising levels of skill and accuracy (e.g. playing Go). ‘Processes which involve algorithms’, on the other hand, encompasses such a vast range of possibilities that it suffers from almost the same problem as the term AI itself.
I have two fundamental issues with these types of AI in relation to the broader definition. The first is that the current implementations we are seeing are designed to perform specific tasks well. If the concept of AI, from its very roots, was the idea of a machine being mistaken for human, then surely it should be assessed on its ability to adapt and perform across a wide array of functions. The human eye is an absolute marvel, capable of performing a very specific task with amazing ability. That said, an eyeball on its own isn’t exactly going to be mistaken for a human just because it does one thing well.
The second issue is the notion that any task or process completed through the application of an algorithm should be ascribed the title of artificial intelligence. If the algorithm was designed solely by a human, applied by a human and provided with data by a human, why are we in such a rush to palm the credit for the process off to an intelligence other than our own? Is it that a machine having come up with the idea is somehow more enticing, or is it that we, as a whole, are so painfully aware of how fallible we are that blaming the whole thing on the machine running the code just seems like a sensible deflection for when we invariably end up being wrong?
As imprecise as some of the terms and definitions surrounding artificial intelligence may be, that is not to say that the progress made in this area has not been impressive—quite the opposite in fact. Who knows, given enough time maybe machines really will progress so far as to be indistinguishable from humans. At least I’ll feel safe in the knowledge that, at that point, any of them planning on world domination will no doubt be too lazy to get off the couch for long enough to make it happen.