Tools are designed to be universal, of course. A hammer can't turn into a screwdriver. With algorithms, we now try to use context where possible. With data, sensors, and other externalities, we might try to change the behavior of our system (the catch-all promise).
The issue is that the context we're trying to use is rule-based: if you did this, then you would want that (note that the triggering actions are always external).
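A minimal sketch of what such rule-based context looks like in practice (the rules and action names here are hypothetical, purely for illustration): the tool hard-codes a mapping from observed external actions to assumed desires, and anything outside that mapping is invisible to it.

```python
# Sketch of rule-based context: each rule maps an observed external
# action to an assumed intent. The mapping is fixed, so any user whose
# actual intent differs from the rule's assumption is mis-served.

def infer_intent(action: str) -> str:
    # Hypothetical if-this-then-that rules baked into the tool.
    rules = {
        "opened_calendar": "wants_to_schedule_meeting",
        "received_flight_email": "wants_trip_on_calendar",
    }
    # Outside the rule table, the system has no context at all.
    return rules.get(action, "unknown_intent")

print(infer_intent("opened_calendar"))   # the rule fires
print(infer_intent("checked_a_date"))    # the rule misses entirely
```

The brittleness is structural: the rules encode the designer's guess about intent, not the individual's, which is why such context drifts off target.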
Like entropy, context is much easier to mess up than to get right. And with tools claiming false universality (say, scheduling all of your meetings), context is almost always going to miss its target eventually.
Rather than trying to catch external circumstances against a perceived task, we might want to think about the individual (and not the tool).
What might that individual care about on an ongoing (internal) basis? And what might we do to offer ongoing (not anecdotal) utility?
What do you care about as the constituent of my system? In the 25 tasks you do a day, is there a systematic augmentation layer I can offer that alleviates 10% of your pain points, or adds a point of view you're not currently considering (from 100% to 110%)?