Can computers think?

Introduction

I was going through a code repository on GitHub and was amused beyond my wits to see this. It so happened that a bot found a vulnerability in a dependency and subsequently sent a pull request to fix it. After GitHub’s Continuous Integration tool verified the pull request, another bot merged it and celebrated the merge with a GIF. Purely from a conversational point of view, the exchange of comments on this thread reads as if two random humans were talking to each other. Two traditional, normal, biological humans who ‘think’. But since we know they were bots and not actual humans, the question that naturally arises next is: can computers think? This has been a major conundrum for philosophers and scientists alike for decades. In our fascination with the implications of either a positive or a negative answer to this question, we have, for the most part, overlooked certain plot holes that appear innate or trivial but are, on the contrary, too complex to even have a consistent, accepted definition.

Background

Merriam-Webster’s dictionary defines a computer as ‘a programmable (usually electronic) device that can store, retrieve, and process data’, while the definition of thinking goes along the lines of ‘the action of using one’s mind to produce thoughts’. Ever since the earliest mainframe computers were conceived, the motivation for designing a programmable machine has always been to mimic the working of the human brain. Just like the human brain, at a high level, a computer inputs, processes, stores and outputs information. The von Neumann architecture, which most computers still follow today, is loosely based on the same design principles along which the human brain continues to physically evolve.
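
To make that input–process–store–output loop concrete, here is a minimal, purely illustrative sketch of a von Neumann-style machine in Python. It is my own toy example, not taken from any real processor: a single memory holds both the program and the data, and a loop fetches, decodes and executes one instruction at a time.

```python
# A toy sketch of the input -> process -> store -> output cycle described above.
# Memory holds both instructions and data; each instruction is (opcode, operand).
memory = [
    ("LOAD", 6),    # 0: load the value at address 6 into the accumulator
    ("ADD", 7),     # 1: add the value at address 7
    ("STORE", 8),   # 2: store the result at address 8
    ("PRINT", 8),   # 3: output the value at address 8
    ("HALT", 0),    # 4: stop
    None,           # 5: unused
    40,             # 6: data
    2,              # 7: data
    0,              # 8: result goes here
]

accumulator = 0
program_counter = 0

while True:
    opcode, operand = memory[program_counter]   # fetch
    program_counter += 1
    if opcode == "LOAD":                        # decode + execute
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "PRINT":
        print(memory[operand])                  # prints 42
    elif opcode == "HALT":
        break
```

The point of the sketch is only the shape of the loop: data comes in, gets processed against stored state, and a result goes back out, much like the high-level description of the brain above.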

The Turing Test and the Chinese Room Experiment

British mathematician Alan Turing (1912-1954) laid the foundation of theoretical computer science and artificial intelligence back in 1950 with his paper titled ‘Computing Machinery and Intelligence’. His paper, which is a really good read by the way, introduced a kind of imitation game as a test to objectively determine whether a computer can actually ‘think’ or not. Turing proposed a setting wherein a human evaluator would converse in natural language with another human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two conversation partners is a machine, but would not know which one. If the evaluator is not able to reliably tell the computer apart from the human, the computer is deemed to possess ‘intelligence’. This seems to be a really practical approach until you realize that it assumes human behaviour is the only real form of intelligent behaviour out there.

Amongst other criticisms of this approach, one proposition argues that the Turing Test could be passed with relative ease by a ‘brute force’ machine, one that operates using what is essentially a large look-up table. The most famous of these criticisms is Searle’s Chinese Room thought experiment, which suggests that a computer merely simulates the ability to think without actually thinking (or understanding), clearly establishing the difference between ‘strong’ and ‘weak’ artificial intelligence. I feel the best way to get rid of this representationalist baggage is to abandon the observer-centric approach to understanding intelligence and adopt a brain-centric approach.

Current state of AI

Semantic satiation is a psychological phenomenon in which repetition causes a word or phrase to temporarily lose meaning for the listener. Repeat any word enough times, and it eventually loses all meaning, disintegrating into an array of phonetic nothingness. For many of us, the phrase ‘Artificial Intelligence’ fell apart in this way, especially over the past decade or so. The label ‘AI’ is attached to everything in technology right now, said to be powering everything from your TV to your toothbrush, but never have the words themselves meant less. And while we can argue that the term itself is misused, the technology is undoubtedly doing more than ever, for better or for worse. It’s being deployed in health care and warfare; it’s helping people make music and books; it’s scrutinizing your resume, judging your creditworthiness, and tweaking the photos you take on your phone. In short, it’s making decisions that affect your life whether you like it or not.

However, it is important to note that Artificial General Intelligence (AGI) doesn’t exist yet. AGI refers to general-purpose systems that can perform all human-level tasks with equal proficiency. What we have now are multiple, isolated instances of the so-called ‘narrow’ AI popping up everywhere. Back in the 1980s, what we had was symbolic AI: the working approach was to build programs consisting of manually written rules about what behaviour should be exhibited when interacting with the external world. It soon became clear that such systems were insufficient, since they rested on the underlying assumption that every circumstance that might arise can be predetermined a priori. While this approach suffices for well-defined problems, such as playing chess, it becomes intractable very quickly for more practical applications. This gave rise to the paradigms of Machine Learning and Deep Learning. This new, non-symbolic approach has largely superseded symbolic AI. It works around the rigid nature of the earlier systems by creating algorithms that learn from experience and exhibit a degree of innate randomness in their behaviour.
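
To make the contrast concrete, here is a minimal, illustrative sketch in Python. It is my own toy example with made-up data, not taken from any real system: the symbolic filter’s behaviour is written down by hand as explicit rules, while the learned filter induces its behaviour from a handful of labelled examples.

```python
from collections import Counter

# Symbolic AI: the behaviour is written down by hand as explicit rules.
def symbolic_spam_filter(message: str) -> bool:
    hand_written_rules = ("free money", "click here", "winner")   # expert knowledge
    return any(phrase in message.lower() for phrase in hand_written_rules)

# Machine learning: the behaviour is induced from labelled examples instead.
def learn_spam_words(examples):
    """Collect words that appear more often in spam than in non-spam messages."""
    spam_counts, ham_counts = Counter(), Counter()
    for text, is_spam in examples:
        (spam_counts if is_spam else ham_counts).update(text.lower().split())
    return {w for w in spam_counts if spam_counts[w] > ham_counts.get(w, 0)}

def learned_spam_filter(message: str, spam_words: set) -> bool:
    words = message.lower().split()
    return sum(w in spam_words for w in words) > len(words) / 2

training_data = [
    ("free money now", True), ("lunch at noon", False),
    ("you are a winner", True), ("meeting notes attached", False),
]
spam_words = learn_spam_words(training_data)

print(symbolic_spam_filter("Click here for free money"))     # True, by explicit rule
print(learned_spam_filter("free money winner", spam_words))  # True, learned from data
```

The symbolic version does exactly what its author anticipated and nothing more; the learned version adapts to whatever examples it is given, which is both its strength and the source of its unpredictability.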

So where are we at in AI right now?
- Superhuman chess playing
- Digital assistants from Google and Amazon
- Near-human-level text translation
- Near-human-level speech recognition
- Near-human-level image classification
- Near-human-level autonomous driving

Reality Check

But even with all these technological advances, there’s a critical blind spot. There is still a fundamental difference between how a computer perceives the objects in its surroundings and the way we perceive them. Take images, for example. Even powerful computers, like those that guide self-driving cars or send humans to space, can be tricked into mistaking random scribbles for trains, fences, or school buses. It is easy to purposely craft images that even the most sophisticated neural networks will not correctly see. A computer has to be provided with tens of thousands of pictures of something to make it ‘learn’ (or remember) what that thing looks like, and even that doesn’t guarantee reliable recognition. At the end of the day, an image is still just a collection of pixels for the computer. For us, an image has a context, a feeling, a memory. And while you can argue that even the computer can generate some metadata, it is still devoid of emotions. A computer will help you beautify and enhance a photograph, but it will never have a favourite picture of its own. It’ll never save a blurry copy of a picture of a loved one without being explicitly taught to do so. That being said, we should not forget that computers are competing against millions of years of evolutionary instinct. Maybe we are just asking too much of them too soon?
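
To illustrate why ‘just a collection of pixels’ matters, here is a rough sketch, entirely my own and not a real vision system: a hypothetical linear classifier changes its answer when every pixel is nudged by a tiny, nearly imperceptible amount in the direction the model is most sensitive to, in the spirit of fast-gradient-sign attacks.

```python
import numpy as np

# The "model" is a toy linear classifier: score = w . x, label depends on the sign.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)            # weights of the toy classifier
x = rng.normal(size=1000) * 0.01     # a "clean image" as a flat vector of pixels

def predict(image):
    return "cat" if image @ w > 0 else "not a cat"

# Adversarial perturbation: move each pixel a tiny step (epsilon) in the direction
# that pushes the score toward the opposite label.
epsilon = 0.05
target_sign = -1 if predict(x) == "cat" else 1
adversarial = x + epsilon * target_sign * np.sign(w)

print("clean image:     ", predict(x))
print("perturbed image: ", predict(adversarial))
print("max pixel change:", np.max(np.abs(adversarial - x)))   # exactly epsilon
```

To a human eye the ‘image’ is essentially unchanged, yet the classifier flips its decision, because all it ever saw was an array of numbers with no context, feeling or memory attached.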

Conclusion

Computers can think. They definitely can. Even something as trivial as comparing two numbers requires thinking. It is a complex process: first the two numbers are converted into binary, and then a digital comparator (or magnitude comparator) that sits deep inside the CPU takes them as input and determines whether one number is greater than, less than, or equal to the other. Even though we have explicitly programmed them that way, the fact that they give you the right results every time implies that they think.
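
For a sense of what that comparator does, here is a rough software sketch, my own illustration rather than actual CPU circuitry: compare the two numbers bit by bit from the most significant bit down, which is the same kind of logic a hardware magnitude comparator implements with gates.

```python
def magnitude_compare(a: int, b: int, width: int = 8) -> str:
    """Return '>', '<' or '=' for two unsigned integers of the given bit width."""
    for i in reversed(range(width)):        # most significant bit first
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        if bit_a != bit_b:                  # the first differing bit decides the result
            return ">" if bit_a > bit_b else "<"
    return "="                              # all bits equal

print(5, magnitude_compare(5, 3), 3)   # 00000101 vs 00000011 -> '>'
print(2, magnitude_compare(2, 7), 7)   # 00000010 vs 00000111 -> '<'
print(4, magnitude_compare(4, 4), 4)   # '='
```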

Can computers think like humans? Definitely not. Honestly, we are just asking for too much out of them right now. They are still very young. The worst part: we don’t even know whether the way we think is fundamentally algorithmic. There’s a difference between algorithmic behaviour and behaviour that can merely be modelled algorithmically. I’d like to propose a ‘Karan’s test’, if I may, for determining whether a computer can think like humans or not. The day a computer strives, on its own, to stop you from turning it off, I’d say it has started thinking like us.