In 1980, John Searle proposed his Chinese Room thought experiment. It has since been widely discussed, applied, and objected to in counter-essays.
Searle describes a room in which a person who speaks no Chinese sits at a desk. On the desk lies a large book containing scribbles and instructions.
A person outside the room slides in a piece of paper bearing some scribbles; by following the instructions in the book, the person inside feeds back out a piece of paper with the corresponding scribbles.
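The room's procedure is, in effect, a table lookup: symbols in, symbols out, no comprehension required. A minimal sketch in Python (the particular rules and symbols here are invented for illustration, not Searle's actual example):

```python
# The "book" as a lookup table of scribbles. The person in the room
# matches incoming symbols against it without understanding any of them.
RULE_BOOK = {
    "你好": "你好吗",   # hypothetical rule: greeting in, greeting reply out
    "再见": "再见",     # hypothetical rule: farewell in, farewell out
}

def room(incoming: str) -> str:
    """Follow the book's instructions; return a placeholder for unknown input."""
    return RULE_BOOK.get(incoming, "？")

print(room("你好"))  # the room produces a fluent-looking reply, understanding nothing
```

The point of the sketch is that nothing in `room` represents meaning; it is pure symbol manipulation.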
Searle claims, rightly so, that the person in the room did not understand the scribbles and only produced new ones by following the instructions in the book. According to Searle, it follows that the person in the room has no conscious understanding of the exchange.
Objectors to this argument claim that the whole system, the room, the person inside, and the book of instructions, has consciousness, and that the person inside merely performs the computation.
Like most thought experiments, there is no right or wrong answer, so we had better leave this for another time. I was, however, reminded of Searle when digging deeper into IA. Intelligence augmentation is the branch of AI concerned with human-machine collaboration, whose outcome is, for the most part, single-purpose algorithms.
Being single-purpose (and not a general machine) spares us the philosophical discussions about consciousness, the singularity, and all the rest of it. Instead it opens new forms of product methodology and offers the ability to articulate a (new) balance between humans and machines. In the thought experiment, once we free the Chinese room from the negativity of its omitted consciousness, we are left with a pretty smart human being.
Can we think of the person in the room as being augmented by the book of instructions? Now, can we fit that book into a device? We already do. How about all the books in the world?
How can we build interfaces that are hierarchical, flexible, and self-programming? That is how our brains work.