I’m often asked what my job title is, and my standard reply is that I don’t have one. If pushed, however, I use the term ‘serendipity architect.’ Mentioning this phrase is usually a good indicator of the mindset of the person I’m talking to – if they’re open and want to know what the title means, I’m more likely to enjoy working with them.
Some people are dismissive of the term. In my experience those people tend to be highly operational, and not the type who cope well with the inherent ambiguity of long-term thinking and its connections to strategy and innovation.
The power of serendipity is acknowledged occasionally in the mainstream business media, such as this recent example from McKinsey online:
Serendipity involves stumbling over something unusual, and then having the foresight or perspective to capitalize on it. What makes that such an attractive story? It’s the juxtaposition of seemingly independent things. In a serendipitous flash, one recent winner, an engineering firm, realized that the gear it designed for scallop trawlers could also be used to recover hard-to-get-at material in nuclear-waste pools. Surprising connections such as these set off a chain of events that culminate in a commercial opportunity. So to build this story line, think about the quirky combination of ideas that got you started and remember that serendipity is not the same as chance—you were wise enough, when something surprising happened, to see its potential.
By the way, the entire article is a good read…
The Vice Chairman of Korn Ferry and a McKinsey partner have published a short book examining the benefits of long-term thinking. There’s an interview with the authors on the Wharton site that gives some context, and one extract stands out:
Just beware of the trends going on in the world. Larry Fink, the CEO of BlackRock, which manages $6 trillion in assets, says that it would be key for CEOs to realize some of the changes going on in society. For example, [consider] this shift towards automation and artificial intelligence. A McKinsey study we cite in the book says that [those technologies] could displace 30% of American workers.
CEOs who want to survive in the long run, and want their companies to survive in the long run, have to be aware of what’s going on in society, and try to steer their companies to address some of these issues. If they do that, they’ll get the support of their investors, customers and employees.
As most of my updates now go to my clients rather than here on my blog, this post may seem out of place compared to previous writings. However, I’ve become increasingly concerned about the failure of governments to understand the implications of the:
- interplay of complex systems that form the framework of modern society (including the complex system that is the climate)
- effects of automation
- alarming rise in inequality
- cybersecurity threats
There are significantly more risks to consider in the years ahead, and these have severe implications for stability. Bain & Company has completed some good work on this recently, and a summary has just appeared on the HBR site. I don’t usually include large quotes here, but this piece of work is a concise summary that is hard to beat (the highlights are mine):
The benefits of automation, by contrast, will flow to about 20% of workers—primarily highly compensated, highly skilled workers—as well as to the owners of capital. The growing scarcity of highly-skilled workers may push their incomes even higher relative to less-skilled workers. As a result, automation has the potential to significantly increase income inequality.
The speed of change matters. A large transformation that unfolds at a slower pace allows economies the time to adjust and grow to reabsorb unemployed workers back into the labor force. However, our analysis shows that the automation of the U.S. service sector could eliminate jobs two to three times more rapidly than in previous periods of labor transformation in modern history.
Of course, the clear pattern of history is that creating more value with fewer resources has led to rising material wealth and prosperity for centuries. We see no reason to believe that this time will be different—eventually. But the time horizon for our analysis stretches only into the early 2030s. If the automation investment boom turns to bust in that time frame, as we expect, many societies will develop severe imbalances.
The coming decade will test leadership teams profoundly. There is no set formula for managing through significant economic upheaval, but companies can take many practical steps to assess how a vastly changed landscape might affect their business. Resilient organizations that can absorb shocks and change course quickly will have the best chance of thriving in the turbulent 2020s and beyond.
The full report from Bain is also well worth reading, and is available here.
A university study in Italy has simulated the effect of luck on wealth creation. The study showed that the richest people were also likely to have been the luckiest. While the study focused on individuals, it also looked at the wider implications, and concluded that casting a wide net for insights will provide better returns than placing a few specific bets.
If this research can be reproduced, it would give further support to the idea that expanding an organisation’s field of view will create long-term returns.
Full details here
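Purely as an illustration of the kind of model the study describes (this is a toy sketch, not the researchers’ actual code, and all parameters here are my own assumptions): agents start with equal capital, lucky events pay off only when talent allows them to be exploited, and unlucky events always hurt. After enough rounds, wealth concentrates sharply, and the richest agents turn out to be the ones who caught the most lucky breaks.

```python
import random
import statistics

def simulate(n_agents=1000, steps=80, p_event=0.5, seed=42):
    """Toy agent-based model in the spirit of talent-vs-luck simulations.

    Each agent has a fixed 'talent' in (0, 1) and equal starting capital.
    At each step an event may occur; a lucky event doubles capital only if
    a talent check passes, an unlucky event always halves it.
    """
    rng = random.Random(seed)
    talents = [min(max(rng.gauss(0.6, 0.1), 0.0), 1.0) for _ in range(n_agents)]
    capital = [10.0] * n_agents
    luck_count = [0] * n_agents  # lucky events the agent actually capitalised on

    for _ in range(steps):
        for i in range(n_agents):
            if rng.random() < p_event:              # an event happens
                if rng.random() < 0.5:              # it's a lucky one
                    if rng.random() < talents[i]:   # talent lets the agent exploit it
                        capital[i] *= 2
                        luck_count[i] += 1
                else:                               # unlucky event
                    capital[i] /= 2
    return talents, capital, luck_count

talents, capital, luck = simulate()

# Compare the luck of the richest decile against the population average,
# and measure how concentrated the final wealth distribution is.
richest = sorted(range(len(capital)), key=capital.__getitem__, reverse=True)[:100]
avg_luck_rich = statistics.mean(luck[i] for i in richest)
avg_luck_all = statistics.mean(luck)
top10_share = sum(sorted(capital, reverse=True)[:100]) / sum(capital)
```

Even in this crude sketch, the top decile ends up holding the bulk of total wealth, and its members have well-above-average counts of exploited lucky events – which is the study’s core point about luck, not the talent distribution, driving extreme outcomes.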
There’s a long history of ‘corporate antibodies’ blocking innovation. The challenge stems from the tension between KPIs and innovation. Most organisations have a well-tuned engine room that produces profit. It’s specifically tuned to eliminate variation and maximise efficiency. These two goals don’t fit well with innovation, which can be messy, iterative and inefficient. In this blog post, Steve Blank offers a cunning plan to work around the antibodies in a manner that both enables innovation and builds capability. The essence of the idea is that organisations need one set of processes for the engine room, and another set for innovation.
In his post Steve even offers templates for how a leadership team should manage implementing this process, which is something that is increasingly rare to find online (where it’s easy to be an innovation expert in theory, but much harder to prove real-world credentials).
It’s a highly recommended read if you’re in a large organisation, and banging your head against the wall trying to move the dial on innovation.
After watching the sci-fi movie ‘Passengers,’ I became intrigued by the business models that support the plots of movies like this. In turn, this triggered a Guest Blog post with Scientific American, which you can read here.
Ten years ago a small group of us started down the path of a large-scale transformation at the Canterbury DHB. Now this work has been hailed in The Guardian as something that the NHS should follow. Read the full details here.
The Financial Times has published an article on the death of retail in the USA. In addition to being an interesting read about the impact of technology on jobs, it also contains a great quote about the risk of not having a view over the horizon, and the boiling frog effect:
Wayne Wicker, chief investment officer of ICMA-RC, a pension fund for US public sector workers, says: “These things creep up on you, and suddenly you realise there’s trouble. That’s when people panic and run for the exit.”
I’m betting that senior teams in the companies mentioned in the article have been sitting in their comfortable paradigms for too long, and that their own biases have been filtering out signposts that might have helped them anticipate what’s coming.
This HBR article from a couple of years ago has some good techniques for helping make better bets about how the future might evolve for specific outcomes. They would be useful when you’re at the pointy end of a scenario exercise, rather than at the start. The entire piece is a worthwhile read, and my three main takeaways can be summarised as:
- When estimating data points that may occur in the future, make three estimates – one high, one low, and then, by extension, one that falls in the middle. The middle estimate is much more likely to be accurate.
- In a similar fashion, make two estimates about future data points, then take the average. Note that it’s important to take a break between making the two estimates in order to avoid bias.
- Conduct a premortem, i.e. imagine a future failure and then explain its cause.
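The second technique rests on a simple statistical fact: averaging two independent estimates cancels some of the noise in each. A quick Monte Carlo sketch makes the effect visible (this is my own illustration, not the HBR article’s code; it assumes the two estimates are independent Gaussian draws, whereas in practice a person’s repeated estimates are correlated, so the real-world gain is smaller):

```python
import random
import statistics

def estimation_error(n_trials=10000, true_value=100.0, noise_sd=20.0, seed=7):
    """Compare the error of a single noisy estimate against the error of
    the average of two independent estimates of the same quantity."""
    rng = random.Random(seed)
    single_err, paired_err = [], []
    for _ in range(n_trials):
        e1 = rng.gauss(true_value, noise_sd)  # first estimate
        e2 = rng.gauss(true_value, noise_sd)  # second estimate, after a 'break'
        single_err.append(abs(e1 - true_value))
        paired_err.append(abs((e1 + e2) / 2 - true_value))
    return statistics.mean(single_err), statistics.mean(paired_err)

single, paired = estimation_error()
```

Under these assumptions the averaged pair’s mean absolute error comes out roughly 30% lower than a single estimate’s, which is why the article stresses taking a genuine break between the two estimates – the less correlated they are, the closer you get to this idealised gain.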
The smart, insightful and deep-thinking David Weinberger has published a must-read article on Wired about the implications of AI on the human concept of knowledge. Rather than paraphrase his excellent writing, I’m going to extract some of the key sections:
We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.
But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.
If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?
Even if the universe is governed by rules simple enough for us to understand them, the simplest of events in that universe is not understandable except through gross acts of simplification.
As this sinks in, we are beginning to undergo a paradigm shift in our pervasive, everyday idea not only of knowledge, but of how the world works. Where once we saw simple laws operating on relatively predictable data, we are now becoming acutely aware of the overwhelming complexity of even the simplest of situations. Where once the regularity of the movement of the heavenly bodies was our paradigm, and life’s constant unpredictable events were anomalies — mere “accidents,” a fine Aristotelian concept that differentiates them from a thing’s “essential” properties — now the contingency of all that happens is becoming our paradigmatic example.
This is bringing us to locate knowledge outside of our heads. We can only know what we know because we are deeply in league with alien tools of our own devising. Our mental stuff is not enough.
The world didn’t happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represent it than how the human mind perceives it. Now that machines are acting independently, we are losing the illusion that the world just happens to be simple enough for us wee creatures to comprehend.