Until recently, machine learning lived mostly in medical labs (Spanr, genomics analysis) and search algorithm enhancement. Google’s recent acquisition of DeepMind, IBM’s push of Watson and other high-profile developments signal a looming shift into consumer products.
As machine learning makes its way into consumer hands, design thinking will need to lead the way in shaping products for the consumer market.
Modern product design has established its role in linking business and strategic thinking with design; through metrics and iterative processes, businesses can now rely on strong product staff to drive revenue.
With new technical tools being developed and adopted by the most talented developers, we’re likely to start seeing consumer-facing products being introduced to the market. If the last few years have taught us anything about digital products, it’s that technology thinking by itself can’t grow a consumer-centric business.
We’re at a tipping point similar to the days when technologists would build (full or partial versions of) products and only later bring in design and product thinking.
The potential in this space is immense. And there are unique new ways of ideating and designing products.
Right now we’re used to a certain “gravity” linking databases and content to views and visual design. Classic models such as MVC rest on a largely unchallenged notion: deploy a powerful codebase that can generate a set of views, neatly organized and waiting to be accessed in the way the user expects to find them.
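That “gravity” can be sketched in a few lines. This is a minimal, illustrative MVC shape; the class and function names are assumptions for the sake of the example, not any real framework’s API:

```python
# Minimal sketch of the classic MVC "gravity": a model holds content that
# was organized up front, a controller fetches it, and a view renders the
# same pre-determined page for every user. All names here are illustrative.

class ArticleModel:
    """A stand-in database of articles, neatly organized in advance."""
    def __init__(self):
        self._articles = {1: {"title": "Hello", "body": "World"}}

    def get(self, article_id):
        return self._articles[article_id]

def article_view(article):
    """Render one fixed, predictable view of the content."""
    return f"<h1>{article['title']}</h1><p>{article['body']}</p>"

def article_controller(model, article_id):
    """Wire a request to the model and hand the result to the view."""
    return article_view(model.get(article_id))

page = article_controller(ArticleModel(), 1)
print(page)  # the same page for every user, exactly where they expect it
```

The point of the sketch is what it lacks: nothing about the reader enters the pipeline; the views exist before the user arrives.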
Sets of views
Personalization of content is growing, and users expect to receive nothing but relevant content. Publishers are fighting for our morning-coffee news update (Daily Brief, NYT Now, Cheatsheet), primarily through promises of reduced noise and high-quality content. This shift is currently confined to email (and at times apps), but it is likely to restructure online presentation, and maybe even content models themselves.
Content browsing patterns
We now know that users navigate content in very different ways than before. “The death of the homepage” and similar texts have taught us that content is fragmented, and is becoming fragmented in ever newer ways. New content concepts such as cards (ref: Wild Card) can reduce stories to functional units of image, copy and call to action. Users now have new expectations when searching for content, and publishers, in turn, now know how to monetize these new models.
Features such as Facebook Instant Articles are a result of that shift, and will probably not be the last stop in this evolution of content (as discussions continue). Referencing The Times Innovation Report, it is easy to link these kinds of moves to a longer-term strategy shift by publishers: building loyalty over short-term advertising gains.
The repackaging of content, as it happens at The Times (and Apple News), signals a shift and introduces more complex, contextual and elaborate ways of accessing the same content. In turn, users are now aware of these new tools and expect more from publishers.
What if we could build a media site that has no static version? No content tree, tags, categories et al. The database would render content solely based on the user and their (public or private) identity.
Currently available APIs from services such as Message Resonance by IBM Watson can determine how successful a certain tone of message would be with its recipient. Using cookies and simple Google Alerts, we can yield strong insights into individuals’ interests (based on publicly available data).
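To make the idea concrete, here is a toy sketch of the underlying mechanic: scoring a piece of copy against an interest profile harvested from public signals. The profile format and the overlap-based score are my own illustrative assumptions, not the Message Resonance API itself:

```python
# Toy sketch of content-to-reader matching. The weighted keyword profile
# and the overlap score are illustrative assumptions, not IBM Watson's API.

def interest_profile(signals):
    """Build a weighted keyword profile from (keyword, weight) pairs,
    e.g. harvested from cookies or Google Alerts-style public data."""
    profile = {}
    for keyword, weight in signals:
        profile[keyword] = profile.get(keyword, 0.0) + weight
    return profile

def resonance(profile, text):
    """Score how strongly a piece of copy resonates with a reader:
    sum the weights of profile keywords that appear in the text."""
    words = set(text.lower().split())
    return sum(w for kw, w in profile.items() if kw in words)

reader = interest_profile([("cycling", 2.0), ("startups", 1.0)])
print(resonance(reader, "Cycling startups raise new funding"))  # 3.0
print(resonance(reader, "Local election results announced"))    # 0.0
```

A generative site would run a score like this across its whole pool of stories at request time, instead of serving a pre-built front page.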
Customized views could, for example, live at unique URLs, or better yet locally. Those could eventually even render to print. With advancements in on-demand printing, rising sales of physical products and instant deliveries, it’s not hard to imagine a personalized newspaper rendered to print at midnight and delivered to your doorstep at 6AM.
Moving beyond media — can generative thinking be applied to products themselves?
The team behind “What Makes Paris Look Like Paris?” used machine learning to analyze a picture of a street and determine where it was taken (Paris, London et al). It did so by algorithmically compiling a list of representative visual elements.
Each image fed into the algorithm is broken down into patches; around 20,000 for each image. Those patches are then compared to similar snippets in the system and fitted into predetermined data sets. As the system tries to make a decision, it computes a percentage of certainty in an assumption. Candidates that are likely to be false are regarded as noise and discarded. The remaining matches are considered likely, though not definitively, right.
To increase certainty the system runs through iterations, checking for more distinctive characteristics and more aggressively sifting out the noise. This iterative process of telling the system what is right or wrong is the very core of machine learning.
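The sifting loop described above can be sketched in miniature. The one-dimensional “patch descriptors”, the certainty formula and the threshold schedule are all toy assumptions for illustration, not the paper’s actual features:

```python
# Illustrative sketch of iterative sifting: candidate patches are scored
# against a reference set, low-certainty matches are discarded as noise,
# and the bar is raised on each pass. Descriptors here are toy 1-D numbers.

def certainty(patch, reference_set):
    """Certainty that a patch belongs to the reference set: 1.0 at a
    perfect match, falling toward 0 as the nearest reference gets farther."""
    nearest = min(abs(patch - ref) for ref in reference_set)
    return 1.0 / (1.0 + nearest)

def sift(patches, reference_set, threshold=0.5, rounds=3, step=0.15):
    """Run several passes, tightening the threshold each time so only
    the most distinctive candidates survive."""
    survivors = list(patches)
    for _ in range(rounds):
        survivors = [p for p in survivors
                     if certainty(p, reference_set) >= threshold]
        threshold += step  # be more aggressive about noise each round
    return survivors

paris_like = [1.0, 1.1, 5.0]        # toy descriptors of "Parisian" patches
candidates = [1.05, 1.3, 2.5, 9.0]  # patches from a new street image
print(sift(candidates, paris_like))
```

Only the candidates that stay confidently close to the reference set make it through every round; everything else is treated as noise.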
What makes machine learning more relevant now than ever before is the affordability of very powerful computers, which are capable of sifting through vast amounts of data. When these tools are paired with strong thinking and scalable products they become very powerful.
In this example, the core concept was to scan Google Street View to systematically, and programmatically, define what makes a street in Paris look like a street in Paris.
We then tell the computer to look for what might be a data set we want to analyze (i.e. a street sign, a detail in a balcony) and run those across the library it has built. It makes assumptions (thousands of them), runs through iterations of those assumptions, and progressively tightens the filter.
Through the process of eliminating false assumptions we’re training the algorithm to make better predictions, which later supports faster decisions and, potentially, predictions of its own. The labeled examples we correct it with are referred to as training data. These concepts are different from what we’re used to, and require a bit of new thinking.
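A minimal example of what “training data” buys you is a toy perceptron: each pass over the labeled examples corrects the model where it was wrong, so its predictions improve iteratively. This is a generic textbook sketch, not the method from the Paris paper:

```python
# Minimal sketch of learning from labeled examples ("training data"):
# each pass nudges the weights toward the correct label on examples the
# model got wrong. A toy perceptron, for illustration only.

def predict(weights, features):
    """Classify a feature vector: 1 if the weighted sum is positive."""
    return 1 if sum(w * f for w, f in zip(weights, features)) > 0 else 0

def train(training_data, passes=10, rate=0.1):
    """Repeatedly correct the model on examples it misclassifies."""
    weights = [0.0] * len(training_data[0][0])
    for _ in range(passes):
        for features, label in training_data:
            error = label - predict(weights, features)
            weights = [w + rate * error * f
                       for w, f in zip(weights, features)]
    return weights

# Toy training data: (features, label). Label 1 when the first feature
# dominates; the labels we supply define what counts as right and wrong.
data = [([2.0, 0.5], 1), ([0.5, 2.0], 0), ([3.0, 1.0], 1), ([1.0, 3.0], 0)]
model = train(data)
print(predict(model, [2.5, 0.4]))  # → 1
```

The defining of right and wrong happens entirely in the labels we feed in; the algorithm only learns to reproduce that judgment faster than we could.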
The future of both media and product is going to have a lot more to do with setting frameworks for data to flow in, rather than designing a tree of pages.