Charles Rennie Mackintosh, The Hill House Chair, 1902. / Archi Expo

A recent TED talk by Noriko Arai caught my attention. I had read before about the Todai Robot project: a challenge to build a robot that can pass the entrance exam of the University of Tokyo (Todai).

To enter Todai, the University of Tokyo, you have to pass two different types of exams. The first is a national standardized test in multiple-choice style: you have to take seven subjects and achieve a high score (above 84).

When attempting to compute intelligence, one part of our brain reaches instinctively for the existing models of measuring intelligence: tests, schools, "how do I compare to a group?" Meanwhile, another part of the brain is frustrated with standardized tests and the role they play in early education.

We ought to be reminded of the idea of averagarianism, as coined by Todd Rose in The End of Average. Rose draws a line from Quetelet's attempts to mathematically measure social phenomena to Taylor's reduction of humans to units on optimized assembly lines. He makes it relatable through modern-day monolithic thinking: the idea that there is one way to live (school, university, career), one way to work (job, promotions, retirement), and one way to study (high school, undergrad, grad school, PhD, and so on).

The provocations Rose raises are very real and need no proving. But if you ever find yourself in need of case studies: The Powerful Now (SYP, IDEO initiative), The Sharing Economy (Arun Sundararajan, MIT Press), and a world of MOOCs (Udacity and edX, to name a few).

So the question that baffles me is this: if we are doing away with standardized, single-dimensional testing as a means to understand intelligence, what is the point of getting robots to enter a university? To compete on ground we know is flawed?

Let's say, for a second, that I did believe in the idea that we can compute intelligence. What would an 80% acceptance by a flawed system actually mean for the design of the neural activity and brain-like logic clustering I am trying to recreate?

What is extremely ironic about the Todai challenge is that while its designers allude to general AI (potentially turning into super AI), what they are in effect doing is creating a narrow AI.

Citing Arai:

modern AIs do not read, do not understand. They only disguise as if they do.
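
To make her point concrete, consider how far pure surface statistics can get you on a multiple-choice question. Below is a minimal sketch (my own illustration in Python, with made-up data, and not the actual Todai Robot pipeline): it picks the option that shares the most words with the question, without parsing meaning at all.

```python
# A deliberately shallow multiple-choice "solver": it never parses meaning,
# it just counts word overlap between the question and each option.
# Hypothetical example data; illustrative only.

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words; all structure and meaning is discarded."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def pick_answer(question: str, options: list[str]) -> str:
    """Return the option sharing the most words with the question."""
    q_words = tokenize(question)
    return max(options, key=lambda opt: len(q_words & tokenize(opt)))

question = "Which city is the capital of Japan?"
options = [
    "Tokyo is the capital city of Japan",
    "Kyoto was an ancient capital",
    "Osaka is a port city",
]

print(pick_answer(question, options))
# -> "Tokyo is the capital city of Japan" (right answer, zero understanding)
```

The trick often works on exam-style questions precisely because exams reward surface pattern matching, which is exactly the disguise Arai describes.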

The differentiation between narrow, general, and super AI is incredibly valuable for anyone interested in working in AI and trying to separate signal from noise.

I am going to use the definitions written by Frank, Roehrig, and Pring in What to Do When Machines Do Everything:

Narrow AI, which is also referred to as “applied AI” or “weak AI,” is our default definition for this book. It is important to note that all AI today—and for at least the next decade—is narrow (also termed “artificial narrow intelligence” or ANI). Such AI is purpose-built and business-focused on a specific task (e.g., driving a car, reviewing an X-ray, or tracking financial trades for fraud) within the “narrow” context of a product, service, or business process. It’s what the FANG vendors utilize today in delivering their digital experiences. Thus, while it appears that the new machines can do everything, they actually focus on doing just one particular thing very well. As such, these ANI systems would be hopeless in any pursuits beyond those for which they were specifically designed (just ask your Waze GPS if that onion bagel with cream cheese fits your current diet). ANI is simply a tool, albeit a very powerful one, that provides the basis for all we will explore in the coming pages.

General AI, also referred to as “strong AI” or AGI, is what is fueling the fears of the Singularity crowd, and has been highlighted in the previously referenced films Her and Ex Machina. Strong AI is the pursuit of a machine that has the same general intelligence as a human. For example, just as you in a span of a few moments can discuss politics, tell a joke, and then hit a golf ball 150 yards, the AGI computer will have the general intelligence to perform these activities as well. Ben Goertzel, Chairman of the Artificial General Intelligence Society, points to the Coffee Test as a good definition for AGI: that is, go into an average American house and figure out how to make coffee, including identifying the coffee machine, figuring out what the buttons do, finding the coffee in the cabinet, etc. This set of tasks that is seemingly easy for almost any adult to perform is currently insanely difficult for a computer. Creating AGI is a dramatically harder task than creating ANI; by most estimates we are still more than two decades away from developing such AI capabilities, if ever.

Super AI, in essence, is the technical genie being let out of the bottle. In such a scenario, would humans even know how to stop such a machine? It would run circles around our collective intellect (and, as we know, whenever 10 reasonably smart people are put in a room, the collective IQ is not 1,200 but actually somewhere around 95 once one accounts for the different opinions and objectives that people always bring). How could we then turn the machine off when it’s always 10 (or 1,000) steps ahead of us?

Malcolm Frank, Paul Roehrig & Ben Pring. “What to Do When Machines Do Everything.”
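
As a companion to the book's definitions, here is what "narrow" means in practice: a model trained for exactly one task keeps producing answers even when asked something entirely outside its world, because it has no notion of "outside its world". A hedged sketch (toy training data of my own invention, using scikit-learn):

```python
# Narrow AI in miniature: a spam filter that knows only "spam or ham".
# Toy training data, illustrative only; requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["win money now", "cheap pills online", "free prize claim today",
         "meeting moved to 3pm", "see you at lunch", "quarterly report attached"]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Within its narrow task, it behaves sensibly:
print(model.predict(["claim your free money"]))  # -> ['spam']

# Outside its task, it still answers -- it has no way to say "I don't know":
print(model.predict(["does an onion bagel with cream cheese fit my diet?"]))
# -> ['spam'] or ['ham'], confidently, and meaninglessly either way
```

This is the book's Waze-and-bagel point in code: an ANI system is a powerful tool inside the context it was built for, and hopeless the instant you step outside it.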


Surely an AI passing college exams speaks more to the fallacy of grade-based education than to the capability of an algorithm?