Why leaders should focus on long-term growth (new book)

The Vice Chairman of Korn Ferry and a McKinsey partner have published a short book examining the benefits of long-term thinking.  There’s an interview with the authors on the Wharton site that gives some context, and one extract stands out:

Just beware of the trends going on in the world. Larry Fink, the CEO of BlackRock, which manages $6 trillion in assets, says that it would be key for CEOs to realize some of the changes going on in society. For example, [consider] this shift towards automation and artificial intelligence. A McKinsey study we cite in the book says that [those technologies] could displace 30% of American workers.

CEOs who want to survive in the long run, and want their companies to survive in the long run, have to be aware of what’s going on in society, and try to steer their companies to address some of these issues. If they do that, they’ll get the support of their investors, customers and employees.

Turbulence ahead (Bain and Co in the HBR)

As most of my updates now go to my clients rather than appearing here on my blog, this post may seem out of place compared with earlier posts.  However, I’ve become increasingly concerned about the failure of governments to understand the implications of the:

  1. interplay of complex systems that form the framework of modern society (including the complex system that is the climate)
  2. effects of automation
  3. alarming rise in inequality
  4. threats from cyberattacks

There are significantly more risks to consider in the years ahead, and they have severe implications for stability.  Bain & Company has done some good work on this recently, and a summary has just appeared on the HBR site.  I don’t usually include large quotes here, but this piece is hard to beat for concision (the highlights are mine):

The benefits of automation, by contrast, will flow to about 20% of workers—primarily highly compensated, highly skilled workers—as well as to the owners of capital. The growing scarcity of highly-skilled workers may push their incomes even higher relative to less-skilled workers. As a result, automation has the potential to significantly increase income inequality.

The speed of change matters. A large transformation that unfolds at a slower pace allows economies the time to adjust and grow to reabsorb unemployed workers back into the labor force. However, our analysis shows that the automation of the U.S. service sector could eliminate jobs two to three times more rapidly than in previous periods of labor transformation in modern history.

Of course, the clear pattern of history is that creating more value with fewer resources has led to rising material wealth and prosperity for centuries. We see no reason to believe that this time will be different—eventually. But the time horizon for our analysis stretches only into the early 2030s. If the automation investment boom turns to bust in that time frame, as we expect, many societies will develop severe imbalances.

The coming decade will test leadership teams profoundly. There is no set formula for managing through significant economic upheaval, but companies can take many practical steps to assess how a vastly changed landscape might affect their business. Resilient organizations that can absorb shocks and change course quickly will have the best chance of thriving in the turbulent 2020s and beyond.

The full report from Bain is also well worth reading, and is available here.

Luck = success?

A university study in Italy has simulated the effect of luck on wealth creation.  The study showed that richer people were also more likely to have been lucky.  While the study focused on individuals, it also looked at the wider implications, and concluded that casting a wider net for insights will provide better returns than placing a few specific bets.

If this research can be reproduced, it would give further support to the idea that expanding an organisation’s field of view will create long-term returns.

Full details here
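
For readers who want to see the mechanism rather than just the conclusion, here is a toy Python sketch written in the spirit of the simulation described above. It is my own illustration, not the researchers’ model, and the population size, event probability and talent distribution are all made-up parameters: agents differ modestly in talent, everyone starts with the same capital, and wealth then compounds through random lucky and unlucky events.

```python
import random

random.seed(42)

N_AGENTS = 1000     # hypothetical population size
N_STEPS = 80        # number of simulated time steps
EVENT_PROB = 0.1    # chance of encountering an event at each step (made up)

# Talent is roughly normally distributed and clipped to [0, 1]; everyone starts
# with the same capital, so any final inequality comes from the simulation itself.
agents = [
    {"talent": min(max(random.gauss(0.6, 0.1), 0.0), 1.0),
     "capital": 10.0,
     "lucky_events": 0}
    for _ in range(N_AGENTS)
]

for _ in range(N_STEPS):
    for agent in agents:
        if random.random() < EVENT_PROB:
            if random.random() < 0.5:
                # Lucky event: capital doubles, but only if the agent's talent
                # lets them exploit the opportunity.
                if random.random() < agent["talent"]:
                    agent["capital"] *= 2
                    agent["lucky_events"] += 1
            else:
                # Unlucky event: capital halves regardless of talent.
                agent["capital"] /= 2

def avg(group, key):
    return sum(a[key] for a in group) / len(group)

agents.sort(key=lambda a: a["capital"], reverse=True)
top, rest = agents[:20], agents[20:]
print(f"Top 20 by wealth:  talent {avg(top, 'talent'):.2f}, lucky events {avg(top, 'lucky_events'):.1f}")
print(f"Everyone else:     talent {avg(rest, 'talent'):.2f}, lucky events {avg(rest, 'lucky_events'):.1f}")
```

On a typical run the wealthiest agents have only slightly above-average talent but far more lucky events, which is consistent with the pattern the study reports.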

Battling ‘corporate antibodies’: a practical innovation guide

There’s a long history of ‘corporate antibodies’ blocking innovation.  The challenge stems from the tension between KPIs and innovation.  Most organisations have a well-tuned engine room that produces profit.  It’s specifically tuned to eliminate variation and maximise efficiency.  These goals don’t fit well with innovation, which can be messy, iterative and inefficient.  In this blog post, Steve Blank offers a cunning plan to work around the antibodies in a manner that both enables innovation and builds capability.  The essence of the idea is that organisations need one set of processes for the engine room, and another set for innovation.

In his post Steve even offers templates for how a leadership team should manage implementing this process, something that is increasingly rare to find online (where it’s easy to be an innovation expert in theory, but much harder to demonstrate real-world credentials).

It’s a highly recommended read if you’re in a large organisation and banging your head against the wall trying to move the dial on innovation.

Things creep up on you…

The Financial Times has published an article on the death of retail in the USA.  In addition to being an interesting read about the impact of technology on jobs, it also contains a great quote about the risk of not having a view over the horizon, and the boiling frog effect:

Wayne Wicker, chief investment officer of ICMA-RC, a pension fund for US public sector workers, says: “These things creep up on you, and suddenly you realise there’s trouble. That’s when people panic and run for the exit.”

I’m betting that senior teams in the companies mentioned in the article have been sitting in their comfortable paradigms for too long, and that their own biases have been filtering out signposts that might have helped them anticipate what’s coming.

Tools for thinking about the future

This HBR article from a couple of years ago has some good techniques for making better bets about how the future might evolve for specific outcomes.  They would be useful when you’re at the pointy end of a scenario exercise, rather than at the start.  The entire piece is a worthwhile read, and my three main takeaways can be summarised as follows:

  1. When estimating a data point that may occur in the future, make three estimates – one high, one low, and one that falls in the middle.  The middle estimate is much more likely to be accurate.
  2. In a similar fashion, make two estimates of the same future data point, then take the average (there’s a quick simulation of why this works after this list).  It’s important to take a break between the two estimates to reduce the bias of anchoring on the first.
  3. Conduct a premortem, i.e. imagine that a future failure has occurred and then work backwards to explain its cause.
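
On the second technique, here is a minimal Python sketch showing why averaging two independent estimates tends to beat either estimate alone. It is my own illustration rather than anything from the HBR article, and it assumes (as a deliberate simplification) that each estimate is just the true value plus independent random noise.

```python
import random

random.seed(1)

TRUE_VALUE = 100.0   # the quantity being forecast (illustrative)
NOISE_SD = 15.0      # assumed spread of a single estimate around the truth
TRIALS = 10_000

single_error = 0.0
average_error = 0.0
for _ in range(TRIALS):
    # Two independent estimates of the same quantity, each "truth plus noise".
    first = random.gauss(TRUE_VALUE, NOISE_SD)
    second = random.gauss(TRUE_VALUE, NOISE_SD)
    single_error += abs(first - TRUE_VALUE)
    average_error += abs((first + second) / 2 - TRUE_VALUE)

print(f"Mean error of a single estimate: {single_error / TRIALS:.2f}")
print(f"Mean error of the averaged pair: {average_error / TRIALS:.2f}")
```

Under this independence assumption the averaged pair comes out roughly 30% more accurate than a single guess, which is the statistical intuition behind taking a break before the second estimate: the more independent the two estimates are, the bigger the gain.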

Must-read article on knowledge and AI

The smart, insightful and deep-thinking David Weinberger has published a must-read article in Wired about the implications of AI for the human concept of knowledge.  Rather than paraphrase his excellent writing, I’m going to extract some of the key sections:

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.

If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?

Even if the universe is governed by rules simple enough for us to understand them, the simplest of events in that universe is not understandable except through gross acts of simplification.

As this sinks in, we are beginning to undergo a paradigm shift in our pervasive, everyday idea not only of knowledge, but of how the world works. Where once we saw simple laws operating on relatively predictable data, we are now becoming acutely aware of the overwhelming complexity of even the simplest of situations. Where once the regularity of the movement of the heavenly bodies was our paradigm, and life’s constant unpredictable events were anomalies — mere “accidents,” a fine Aristotelian concept that differentiates them from a thing’s “essential” properties — now the contingency of all that happens is becoming our paradigmatic example.

This is bringing us to locate knowledge outside of our heads. We can only know what we know because we are deeply in league with alien tools of our own devising. Our mental stuff is not enough.

The world didn’t happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represent it than how the human mind perceives it. Now that machines are acting independently, we are losing the illusion that the world just happens to be simple enough for us wee creatures to comprehend.

NBR Column – Why you need to understand Facebook

Here’s the full text of my latest NBR column:

You might have seen the movie, you might already pay the company for advertising or you might simply be a user. No matter how you interact with Facebook, it’s arguably the one piece of software that everyone online today should understand in detail.

The company was started by Mark Zuckerberg in 2004 as a small business in a university dorm room in the US. The premise was simple – it was a way for people to share updates about their social lives online so their friends could see what they were doing.

From this humble beginning the business has now grown to the point where it is regularly used by 1.8 billion people, including almost 80% of all American adults.

The company now offers a range of compelling ways of keeping in touch with people, including the ability to upload live video, send instant messages, and call friends anywhere in the world at no cost. This last point is particularly relevant, as it raises the question of how Facebook can offer these services to billions of people without charging a subscription.

Facebook can offer these services free because it also shows advertisements – a lot of advertisements.  Last year the company made $US10.2 billion, primarily from advertising revenue.

Advertisers are attracted to Facebook because the average user spends almost an hour a day on the site, and the more time people spend on the site, the more advertisements Facebook can sneak in front of them. The company is showing more advertisements to users than it used to.

Checking for updates
To ensure people keep looking at Facebook, the company spends a lot of money working out how to keep users constantly checking the site for updates. The updates they’re viewing are not simply about their friends but also include advertisements and information from commercial organisations, including news outlets. Facebook offers people the opportunity to give feedback on this information by clicking an icon titled ‘like.’ It’s important to note that there is no ‘dislike’ icon.

The updates are viewed in a user’s ‘news feed.’  Bear in mind that the news feed may contain what used to be known as news but is more likely to contain a mix of content, some of which might be from reputable media outlets. Almost any organisation can pay for updates that then appear in users’ news feeds. These updates may or may not look like advertisements.

Once users start to ‘like’ information in their news feed, detailed personal data starts to be created. Research has found that after a Facebook user clicks ‘like’ on 70 updates, the company knows more about that person than their friends do. Once they get past 170 likes, Facebook knows a user better than their parents do.

Knowing users at this level allows Facebook to tailor the information it delivers to each user so they spend more time on the site.  The company runs massive social experiments involving hundreds of thousands of users to understand how to manipulate information to boost time on the site and, in turn, boost advertising revenue.

One result of this strategy is that Facebook users only see information that reflects what they already like, because showing them information that conflicts with their world view would risk them spending less time on the site.

Shaping public opinion
Another result is that Facebook is now such a compelling way to spend ‘free’ time that over 60% of millennials get their political news from their Facebook news feed. At first glance this might not seem important, but it’s critical to understanding the role of technology in shaping public opinion in today’s world.

To illustrate this, consider the curious example of the UK technology entrepreneur and commentator Tom Steinberg. He was against the UK leaving the EU, and his Facebook feed reflected that preference. As a result, the day after the referendum he could not find a single person on the site celebrating the Brexit victory.

Bear in mind that Steinberg is very internet-literate, and should have been able to find at least one person in his Facebook network among the more than 17 million people who voted to leave the EU.  However, as he supported the other side of the vote, Facebook filtered his feed so it only reflected his own world view.

The implications of this start to get complex, so to recap:

  1. Facebook needs people to spend time using its software, so it can sell more advertising and generate larger profits.
  2. To achieve this, it uses psychological research to encourage people to return to the site many times a day.
  3. It also manipulates the information you see so it reflects your world views, which in turn makes you more likely to – you guessed it – spend more time on Facebook.
  4. The more time you spend on Facebook, the more likely you are to ‘like’ information updates, which then gives the company feedback that allows it to legitimately say that it knows billions of users better than their parents know them.

Political business model
At this point you may think this isn’t really a significant issue because, after all, it’s only Facebook.  However, the company’s influence now extends well beyond the virtual world and is having a real impact on the physical world.

Facebook recognises the influence it can now exert, and this translates into new business models.  One of these models is focused on politics, as the company points out on its own website, where it gives the example of how Facebook was a crucial tool in the election of a US senator.

On its site, there is a quote from one of the leaders of this campaign, which states: “Facebook really helped us cut through the clutter and reach the right voters with the message that matters most to them. In a close race, this was crucially important.”

The key phrase here is “the message that matters most to them.” Now recall the point that over 60% of millennials get their political view of the world via Facebook. When you combine these two points, Facebook makes it possible to target voters with the ‘right message’ in a way that simply hasn’t been possible before.

Granted, there’s a rich history of politicians manipulating the media, but the reach of Facebook makes the power of the software unprecedented.  To put this in a local perspective, research in 2015 revealed that more than two million New Zealanders use the software every day.

Suppressing the news
Consider a scenario where Facebook itself wants to influence an election – perhaps opposing a candidate who favours regulation that limits the influence of the company.  It would be remarkably easy for the company to suppress news and support for that candidate, without people even knowing it was doing so.

So what does this mean for the average Facebook user?

Next time you check your Facebook feed, consider what information you’re giving to Facebook, and how it might be used.  People freely give the company deeply personal information, and the power of that data gives the company both enormous profit and enormous influence. Most of the media headlines about Facebook focus on the former.

For most active Facebook users, the closest real-world analogy to the software would be a casino where it’s free to play and your payout isn’t cash but information that makes you feel good about yourself.  For Facebook, the result is the same as a casino – a licence to print money.