Agency, Collectiveness and Emergence

Decentralization has ridden a wave of hype, particularly among those hoping to revolutionize marketplaces with blockchain technology and societies with more dispersed governments. “Some of this stems from political ideology having to do with a preference for bottom-up governing styles and systems with natural checks on the emergence of inequality,” Jessica Flack, an evolutionary biologist and complexity scientist at the Santa Fe Institute, wrote in an email. “And some of it stems from engineering biases … that are based on the assumption these types of structures are more robust, less exploitable.”

But “most of this discussion,” she added, “is naive.” The line between centralization and decentralization is often blurry, and deep questions about the flow and aggregation of information in these networks persist. Even the most basic and intuitive assumptions about them need more scrutiny, because emerging evidence suggests that making networks bigger and making their parts more sophisticated doesn’t always translate to better overall performance.

[…]

The article then describes what happens in a system of collective decision making:

The researchers observed that when the agents could remember only one or two outcomes, fewer strategies were possible, so more agents responded in the same way.

But because the agents’ actions were then too correlated, the collective movement in the model took it along a zigzagging route that involved many more steps than necessary to reach the target. Conversely, when the agents remembered seven or more past outcomes, they became too uncorrelated: They tended to stick with the same strategy for more rounds, treating a short string of recent negative outcomes as an exception rather than a trend. The model became less agile and more “stubborn,” according to the physicist Neil Johnson.
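A quick back-of-the-envelope count (my gloss, not from the article) shows why memory length has so much leverage. If each remembered outcome is binary, a memory of length m distinguishes 2^m possible histories, and a strategy is a rule assigning one of two actions to each history, so there are 2^(2^m) possible strategies:

    # Hypothetical illustration (mine, not the paper's code): counting strategies.
    def num_strategies(m):
        # 2**m distinct binary histories, each independently mapped to one of 2 actions
        return 2 ** (2 ** m)

    for m in (1, 2, 5, 7):
        print(f"memory={m}: {num_strategies(m):,} possible strategies")

With one or two remembered outcomes the pool is tiny (4 or 16 rules), so agents inevitably crowd onto the same ones; by seven it is astronomically large (about 3.4 × 10^38), so agents’ choices barely overlap.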

The trajectories were most efficient when the length of the agents’ memory was somewhere in the middle: for about five past events. This number grew slightly as the number of agents increased, but no matter how many agents the model used, there was always a sweet spot — an upper limit on how good their memory could get before the system started to perform poorly.
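To make the setup concrete, here is a minimal toy simulation in the spirit of the model described above. It is my sketch, not the authors’ code, and every name and parameter in it (run, n_agents, n_strats, the ±1 moves, the scoring rule) is an assumption. Each agent holds a few random lookup-table strategies over the last m shared outcomes, plays whichever strategy has scored best so far, and the crowd’s net vote moves the system one step toward or away from a target:

    import random

    def run(memory, n_agents=101, n_strats=2, target=50, max_steps=20000, seed=0):
        """Step a crowd of voting agents toward `target`; return the steps taken."""
        rng = random.Random(seed)
        # Each strategy is a lazily filled lookup table: history -> move in {-1, +1}.
        strategies = [[{} for _ in range(n_strats)] for _ in range(n_agents)]
        scores = [[0] * n_strats for _ in range(n_agents)]
        history = tuple(rng.randrange(2) for _ in range(memory))
        pos = 0
        for step in range(1, max_steps + 1):
            desired = 1 if target > pos else -1  # the move that shortens the distance
            net = 0
            for a in range(n_agents):
                for s in range(n_strats):
                    if history not in strategies[a][s]:
                        strategies[a][s][history] = rng.choice((-1, 1))
                best = scores[a].index(max(scores[a]))  # play the best-scoring strategy
                net += strategies[a][best][history]
            move = 1 if net > 0 else -1 if net < 0 else rng.choice((-1, 1))
            for a in range(n_agents):
                for s in range(n_strats):
                    # Virtual scoring: every strategy is judged, played or not.
                    scores[a][s] += 1 if strategies[a][s][history] == desired else -1
            pos += move
            # The shared outcome each agent remembers: did the crowd move the right way?
            history = history[1:] + ((1 if move == desired else 0),)
            if pos == target:
                return step
        return max_steps

    for m in (1, 3, 5, 7, 9):
        trials = [run(memory=m, seed=s) for s in range(10)]
        print(f"memory={m}: mean steps to target = {sum(trials) / len(trials):.0f}")

Whether this toy reproduces the zigzag-versus-stubborn tradeoff quantitatively is not guaranteed, but it exposes the knob the paper varies: sweep the memory length and watch how the step count to the target changes.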

“It’s counterintuitive,” said Pedro Manrique, a postdoctoral associate at the University of Miami and a co-author of the Science Advances paper. “You would think that improving the sophistication level of the parts, in this case the memory, would improve and improve and improve the performance of the organism as a whole.”

https://www.quantamagazine.org/smarter-parts-make-collective-systems-too-stubborn-20190226/

Great piece; I do wish there were more details on cognitive diversity.
