Design for AI: Part 2
The field of AI is complex and multifaceted. Its challenges are likely to bring together disciplines that traditionally had little overlap: education, psychology, math, technology, and philosophy. We don’t know what designing for AI will look like, but we do know that it will be different in just about every way from what we’re used to.
Systems, not Views
AI-driven products will involve new workflows, different sets of moving parts, and new types of relational logic. One way to untangle this problem is to find a common, ubiquitous element in today’s digital design and consider how likely it is to change.
MVC is a great place to start. The pattern, invented in the late 1970s, is so ubiquitous by now that it’s easy to underestimate how it shapes the way we think. We’re accustomed to designing and monetizing views. The future is likely to involve more holistic design work, with a larger focus on systems. Once we accept this premise, we can start exploring some of the exciting new aspects that designing for AI will entail.
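As a reminder of just how view-centric that habit is, here is a minimal MVC sketch in Python. The to-do example and class names are illustrative, not taken from any framework:

```python
# A minimal Model-View-Controller sketch. The to-do example and
# all class names are illustrative, not tied to any framework.

class TodoModel:
    """Holds application state; knows nothing about presentation."""
    def __init__(self):
        self.items = []

    def add(self, text):
        self.items.append(text)

class TodoView:
    """Renders state: the layer we are used to designing and monetizing."""
    def render(self, items):
        return "\n".join(f"- {item}" for item in items)

class TodoController:
    """Mediates between user input and the model."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_input(self, text):
        self.model.add(text)
        return self.view.render(self.model.items)

controller = TodoController(TodoModel(), TodoView())
print(controller.handle_input("Sketch the system, not just the screen"))
```

Notice that almost all of the design attention in products built this way lands on the `View`; the shift this article argues for moves that attention toward the system as a whole.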
The core of AI is machine learning. The process of design (and development) will require decisions about datasets **and** about the self-teaching mechanism itself. An algorithm, however smart it may be, will only be as smart as we allow it to be. Quality data is one part, but presetting the system with the right taxonomy and functions is crucial.
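A toy sketch of that boundary: a nearest-neighbour classifier (pure Python, with invented data) can only ever answer within the taxonomy we preset in its training labels. However clever the distance function, it will never output a category we did not design in:

```python
import math

# Training data: (features, label). The label set IS the taxonomy;
# the data and categories here are invented for illustration.
TRAINING = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((6.0, 6.2), "dog"),
    ((5.8, 6.5), "dog"),
]

def classify(point):
    """1-nearest-neighbour: only as 'smart' as the data and labels allow."""
    nearest = min(TRAINING, key=lambda example: math.dist(point, example[0]))
    return nearest[1]

print(classify((1.1, 1.0)))  # a point near the "cat" cluster
print(classify((6.1, 6.0)))  # a point near the "dog" cluster
```

Whatever point we feed it, the answer is always “cat” or “dog”: the designer’s taxonomy, not the algorithm, sets the ceiling.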
Let’s imagine a digital product and, for the sake of simplicity, focus on its data inputs, rendered outputs, and the system’s algorithm.
What questions should we prepare ourselves to think about?
What data sources are required to establish the logic that drives our product? Can we establish a priority for those data sources (primary, secondary, and more)?
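One way to answer that question is to make the priorities explicit in the design artifact itself. A hypothetical sketch, where the source names and tiers are invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical data-source registry; the names and priority tiers
# are invented for illustration, not drawn from any real product.
@dataclass
class DataSource:
    name: str
    priority: int  # 1 = primary, 2 = secondary, and so on

SOURCES = [
    DataSource("user_interactions", priority=1),
    DataSource("third_party_demographics", priority=2),
    DataSource("public_weather_feed", priority=3),
]

def by_priority(sources):
    """Return sources ordered primary-first, e.g. for fallback logic."""
    return sorted(sources, key=lambda source: source.priority)

for source in by_priority(SOURCES):
    print(source.priority, source.name)
```

Even a listing this small forces the design conversation the paragraph above asks for: which source is primary, and what the product does when it is unavailable.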
Is that logic fixed, or agile? What are the boundaries of its elasticity? We’re used to agility in views and features (essentially Models); can we translate that into agile thinking, both our own and that of the algorithm itself?
Output and Metrics
What metrics will we be listening to? We know that accuracy and error rate are good ways of measuring the performance of AI.
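Both metrics are simple to compute. A quick reminder of the arithmetic, using invented toy predictions:

```python
# Accuracy and error rate on toy data; predictions and labels
# are invented for illustration.
predictions = ["cat", "dog", "dog", "cat", "dog"]
labels      = ["cat", "dog", "cat", "cat", "dog"]

correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)   # fraction the system got right
error_rate = 1 - accuracy          # fraction it got wrong

print(f"accuracy={accuracy:.2f} error_rate={error_rate:.2f}")
# 4 of the 5 predictions match, so accuracy is 0.80 and error rate 0.20
```

The hard question, as the next paragraphs argue, is what to do when there is no clean `labels` column to compare against.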
It’s easy to grade the performance of tasks that we can do ourselves (for example, categorizing objects in a view, or even the more classic Turing test), but how do we grade generative procedures and machine-initiated ideas?
How do you measure the success of an idea, especially when a human hasn’t set the problem? (See Inventive AI.)
Despite being very “new” and cutting-edge, many of the guiding principles for determining a machine’s potential consciousness or capacity to learn have been front of mind for philosophers and mathematicians since before computers were digital. Some of these texts are valuable in establishing a language and a cognitive toolset for designers and thinkers approaching the topic.
- John Searle (1990), “Is the Brain’s Mind a Computer Program?”
- Alan Turing (1950), “Computing Machinery and Intelligence”
- Paul M. Churchland and Patricia Smith Churchland (1990), “Could a Machine Think?”
Note: I discovered all of the above while taking a great MIT course on edX: Philosophy: Minds and Machines.
Another interesting shift will be the lack of control over views. That is to say, in order to render truly generative products, a designer will need to surrender control to the machine, with a certain mutual understanding. That “contract” between the logic and the program will need to encode a highly acute sense of the interactions we’re looking to foster. We would need to understand the core of the emotional mechanism driving a product. The better we can articulate those interactions, the better we can scale them; the more ambiguous we leave them, the more they could spiral out of control.
Another important point, brought to light by Patrick Mankins, is empathy for machines. Newer, smarter technologies will require more trust, in its most basic form. Self-driving cars and health-care applications will mean machines can empower us, if we let them.