AI in its current manifestations is parasitic on human intelligence. It quite indiscriminately gorges on whatever has been produced by human creators and extracts the patterns to be found there—including some of our most pernicious habits. These machines do not (yet) have the goals or strategies or capacities for self-criticism and innovation to permit them to transcend their databases by reflectively thinking about their own thinking and their own goals.

They are, as Wiener says, helpless, not in the sense of being shackled agents or disabled agents but in the sense of not being agents at all—not having the capacity to be “moved by reasons” (as Kant put it) presented to them. It is important that we keep it that way, which will take some doing.

One can imagine a sort of inverted Turing test in which the judge is on trial; until he or she can spot the weaknesses, the overstepped boundaries, the gaps in a system, no license to operate will be issued. The mental training required to achieve certification as a judge will be demanding. The urge to attribute humanlike powers of thought to an object, our normal tactic whenever we encounter what seems to be an intelligent agent, is almost overpoweringly strong.

I absolutely love this sentiment. I once tried to engage a very prolific AI scientist working in San Francisco on the question of fallacy and bias in determining machine attributes (it didn't go very far).

We need intelligent tools. Tools do not have rights and should not have feelings that could be hurt or be able to respond with resentment to "abuses" rained on them by inept users.

https://www.wired.com/story/will-ai-achieve-consciousness-wrong-question/

From "What Can We Do?" by Daniel C. Dennett. Adapted from Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by John Brockman.