Additional Conference Presentation Notes

Late last week I spoke at a conference in New Zealand which had an unusual audience.  It was made up of deep thinkers who deal regularly with ambiguity at the sharp end of policy.  The Q&A session was fascinating, and a lot of attendees asked for more information.  With this in mind, here are a few bullet points that provide more context on some of the topics:

Practical Tips for Online Privacy

  • never connect to public wifi, even in hotels – such networks are magnets for hackers, and stealing your data over them is child’s play.
  • when going online away from work or home, either use your mobile phone as a hotspot or purchase a virtual private network (VPN) service.  A VPN increases security and makes it harder for others to intercept your data online. I use this service.
  • cover the front-facing camera on your laptop – it’s relatively easy for hackers to access the camera even when it looks like it’s not turned on.
  • when you’re browsing online, it’s very easy for advertisers to track you and show ads targeted at you across different websites.  It’s a significant privacy intrusion that you can combat with this tool.

VUCA

Reading/Viewing

  • a short video on the Cynefin framework for complexity
  • an interview with Cathy O’Neil – author of the book Weapons of Math Destruction – that explains more about software biases
  • a sobering view of the future painted in the book Homo Deus.  Here’s a review of the book in The Guardian


NBR Column – the state of AI

This is my NBR column from Feb 2017:

In June last year a fascinating aerial battle took place – not in the actual sky but in a virtual one, which was appropriate considering it was a battle of man against machine.

The man in question wasn’t an ordinary pilot but a retired US Air Force pilot, Gene Lee, who had combat experience in Iraq and graduated from the US Fighter Weapons School. The machine he was battling was a simulated aircraft controlled by an artificial intelligence (AI).

What was surprising about the outcome was that the AI emerged as the victor. What was more surprising was that the computer running the software wasn’t a multimillion-dollar supercomputer but one that used about $35 worth of computing power.

Welcome to the fast-moving world of AI.

It’s an area that has attracted significant media focus, and justifiably so. Experts in the field see the deployment of AI as the dawn of a new age. Andrew Ng, chief scientist at Baidu Research, is one of the gurus in the field.

“AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”

Most of the current applications of AI focus on recognising patterns. Software is “trained” with vast amounts of information, usually with help from people who have manually tagged the data. In this way, an AI may start with images that have been labelled as cars, then, through trial and error guided by programmers, eventually recognise images of cars without any intervention.
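
To make that training loop concrete, here’s a minimal sketch in Python, using scikit-learn’s bundled handwritten-digit images as a stand-in for labelled car photos (the dataset and model choices are illustrative assumptions, not how any particular production AI is built):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labelled data: small images, each tagged with the digit it shows.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# "Training": the model adjusts itself until its guesses match the labels.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Once trained, it recognises images it was never shown during training.
print(f"accuracy on unseen images: {model.score(X_test, y_test):.0%}")
```

Swap the digits for labelled car photos and a much larger model, and you have the essence of the pattern-recognition systems described above.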

Extraordinary breakthroughs
This simple explanation of AI belies the extraordinary breakthroughs achieved with this approach, as illustrated by an experiment conducted by an English company called DeepMind.

In 2015, DeepMind revealed that its AI had learned how to play 1980s-era computer games without any instruction. Once it had learned the games, it could outperform expert human players on many of them by astonishing margins.

This feat stands in stark contrast to the battle waged two decades earlier, when an IBM computer beat Russian grandmaster Garry Kasparov at chess in 1997. To beat him, the computer relied on a virtual encyclopaedia of pre-programmed information about known moves. At no point did the machine learn how to play chess.

Winning simple computer games clearly wasn’t enough to prove DeepMind’s abilities, so a more challenging option was found in the game of Go. It’s an incredibly complex Asian board game with more possible board positions than there are atoms in the visible universe.

To learn Go, the AI played itself more than a million times. To put this in perspective, if a person played 10 games a day every day for 60 years, they would manage only around 220,000 games.
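
The arithmetic behind that comparison, as a quick check:

```python
games_per_day = 10
years = 60
human_total = games_per_day * 365 * years
print(f"{human_total:,} games in a lifetime of daily play")  # 219,000 games
# AlphaGo's self-play: more than 1,000,000 games – over four
# human lifetimes of non-stop daily Go.
```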

Despite the bold predictions of expert Go players, when the match ended in 2016 it was the DeepMind AI that had beaten one of the world’s best players.

The ability to “learn” carries over readily into the real world. While gaming applications may excite hard-core geeks, DeepMind’s power was unleashed on a more useful challenge last year – increasing energy efficiency in data centres.

By looking at information about power consumption – such as temperature, server demand and cooling pump speeds – the AI reduced the electricity used for cooling at a Google data centre by an astonishing 40%. This may seem esoteric, but around the world data centres already use as much electricity as the entire UK.

Potential implications
Once you start to consider the power of AI, the feeling of astonishment evaporates and is replaced with an unsettling feeling about the potential implications. For example, at the end of last year a Japanese insurance company announced plans to replace a third of one of its departments with an IBM AI.  Only 34 people were made redundant in this instance, but the trend is likely to accelerate.

At this stage, it’s useful to put this development in context and consider what jobs might be replaced by AI. Andrew Ng has a useful rule of thumb – “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”

What’s important about this quote is the term “near future.” Once you extend the timeline further out, researchers theorise that the implications of AI for the workforce are significant.  One study published in 2015 estimated that across the OECD an average of 57% of jobs was at risk from automation.

This number has been heavily disputed since it was published, but the exact percentage doesn’t really matter. What is important to keep in mind is that AI will change the nature of jobs forever, and it’s highly likely that work in the future will feature people working alongside machines. This will result in a more efficient workforce, which in turn is likely to lead to job losses.

However, it’s not just the workforce that could change. The potential for this technology dwarfs anything humans have ever invented, and, just like the splitting of the atom, the jury is out on how things will develop.

One of the world’s experts on existential threats to humanity – Nick Bostrom at Oxford University – surveyed the top 100 AI researchers.  He asked them about the potential threat that AI poses to humanity, and the responses were startling. More than half of them believed there is a substantial chance that the development of an artificial intelligence that matches the human mind won’t end well for one of the groups involved.  You don’t need to work alongside an AI to figure out which group.

The thesis is simple – Darwinian theory applied to the biological world leads to the dominance of one species over another.  If humans create a machine intelligence, probably the first thing it would do is re-programme itself to become smarter.  In the blink of an evolutionary eye, people could become subservient to machines with intelligence levels that are impossible to comprehend.

The exact timeframe for this scenario is hotly debated, but the same experts polled by Bostrom thought that there was a high chance of machines having human-level intelligence this century – perhaps as early as 2050.

To paraphrase a well-worn cliché, we will live in interesting times.

Copyright NBR. Cannot be reproduced without permission.
Read more: https://www.nbr.co.nz/opinion/keeping-eye-artificial-intelligence

NBR Column – driverless cars

This is my NBR column from December 2016:

Since the invention of the first “horseless buggy” in 1891, there haven’t been many significant changes to the basic design of the car. There have been incremental improvements to the platform – such as better engines, increased safety and more comfort – but the core has remained unchanged. A driver from 1920 would be able to adapt to a modern car, and the reverse would also apply.

While a driver from the 1920s would be able to drive a car, a mechanic from the same era would no longer recognise the key components. Today’s new cars are equipped with collision avoidance sensors, traction control, ABS, air bags, reversing cameras, engine computers and media players. This technology means that new vehicles contain more software than a modern passenger aircraft and a laptop is more useful than a wrench when tinkering under the hood.

While this may be startling to some people, it pales into insignificance compared to what’s about to happen to the car when driverless vehicles become mainstream.

Since their first significant debut in 2004, driverless cars have evolved quickly. They have now been demonstrated in a range of situations, with manufacturers posting videos online showing just how well their machines work (usually in near-perfect conditions).

These advances have been enabled by developments in sensors, cameras and computing power. On their own, each of these required technologies was prohibitively expensive only a decade ago. Fast forward to now, however, and the cost has fallen to the point where it’s feasible to bundle them into a car.

For example, one of the key components is a device called a LIDAR, which creates a millimetre-accurate map of the world around the car. Early versions of LIDAR systems fitted to cars cost $75,000. Just last week one manufacturer announced a version with similar capabilities that would cost about $50.

Implications for ownership
While a lot of attention is on the technology in the car, the most astute analysts are focused on the second- and third-tier implications of driverless vehicles. This is the most interesting part of the discussion because cars are ubiquitous in most urban environments, and a change in their form and function has massive implications.

The most significant implication will concern the very notion of car ownership.

A car is one of the most expensive assets in a household, but at the same time it’s also one of the least used. Most of a car’s life is spent stationary, yet the cost of ownership is justified by what it creates.

In modern society a car creates access to opportunity, and in cities without an efficient mass transit system, owning one is the main way people get that access.

However, the notion of car ownership is being questioned in some cities, where people have calculated that using a car-sharing service can be cheaper than owning a car. Driverless cars are the next evolution of this on-demand mobility without ownership.

The most likely scenario to emerge in cities is that private car ownership will dwindle, and the demand for mobility will be met by fleets of vehicles available on demand and tailored to your requirements.

For example, a two-seater car could take you to a meeting, while a people carrier may stop past your house in the morning to collect your kids and take them to school.

Eliminating road congestion
Once you have a network of fleets running in a city, with every car sending data about its state, it becomes possible to optimise road use in ways that simply aren’t feasible today. When you know exactly how many cars are on the road at any one time and where they are going, you can start to organise their routes in a way that eliminates congestion.
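
As a toy illustration of why fleet-wide data matters (all route names and capacities below are hypothetical, and real traffic optimisation is far more sophisticated), a coordinator that knows every active trip can spread cars across alternative routes so no road is overloaded:

```python
from collections import defaultdict

def assign_routes(trips, capacity):
    """Greedily assign each trip to the candidate route with the most spare capacity.

    trips: list of (trip_id, [candidate_route]) pairs
    capacity: dict mapping route -> max simultaneous cars
    Returns trip_id -> route, spreading load to avoid congestion.
    """
    load = defaultdict(int)
    assignment = {}
    for trip_id, candidates in trips:
        # Prefer the route with the most headroom right now.
        best = max(candidates, key=lambda r: capacity[r] - load[r])
        assignment[trip_id] = best
        load[best] += 1
    return assignment

# Hypothetical example: three roads into the city centre.
capacity = {"motorway": 2, "arterial": 2, "backstreets": 1}
trips = [
    ("car-1", ["motorway", "arterial"]),
    ("car-2", ["motorway", "arterial"]),
    ("car-3", ["motorway", "arterial", "backstreets"]),
    ("car-4", ["arterial", "backstreets"]),
]
print(assign_routes(trips, capacity))
```

Because the coordinator sees every trip, it can balance the load before congestion forms – something individual drivers choosing routes independently cannot do.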

Another implication of driverless cars is the remodelling of city streets to remove carparks – cars without drivers never need to be parked for hours on the kerbside.

The biggest benefit of driverless cars is likely to be the near elimination of road accidents. A car that’s operated by a computer will never get distracted by phone calls or fall asleep at the wheel. Some researchers have predicted that driverless cars have the potential to reduce road deaths by up to 90%.

Regulating for driverless cars is one of the biggest hurdles to their adoption, and for this reason uptake on private roads (which are free of regulation) has already begun.

To illustrate, some Australian mines have operated driverless trucks since 2008, and since their introduction productivity has increased and accidents have decreased. In New Zealand one of the first significant pilots of driverless vehicles will take place in 2017, when Christchurch Airport introduces a driverless shuttle bus on its private roads.

In the next few years the workforce will start to be affected by this technology, with truck drivers likely to be hit first. Already a delivery truck owned by an Uber subsidiary has driven almost two hundred kilometres across the US on interstate highways in self-driving mode. This has profound implications for the three million truck drivers employed in the US and the industries that support them.

The next decade will be a transition period in which driverless vehicles start to become commonplace in some situations. They’re unlikely to be widespread in cities, as many experts believe there are very hard problems still to be solved. For this reason it won’t be until after 2025 that we’re likely to see a dramatic change in the transportation fleet.

What makes this timeframe interesting is that, unlike many technology-driven changes that have reshaped business slowly, this one is clear to see.  Organisations that have the foresight to leverage insights about the changes created by driverless cars will do extremely well. Those that don’t will end up like the horseless buggy.

Copyright NBR. Cannot be reproduced without permission.
Read more: https://www.nbr.co.nz/opinion/fast-forward%C2%A0normalisation-driverless-cars-not-so-far


Human predictions about AI winning games are wrong

When Kasparov challenged the IBM chess-playing computer called Deep Blue, he was absolutely certain that he would win.  An article in USA Today on 2 May 1997 quoted him as saying, “I’m going to beat it absolutely.  We will beat machines for some time to come.”

He was beaten conclusively.

In early 2016 another landmark was reached in game-playing computing, when AlphaGo (DeepMind) challenged Lee Se-dol to a match of Go.  The Asian game is orders of magnitude more complex than chess, and before play began Lee observed that “AlphaGo’s level doesn’t match mine.”

Other expert players backed Lee Se-dol, saying that he would win 5-0.  In the end he only won a single game.

Now the same team that developed AlphaGo is setting its sights on a computer game called StarCraft II. This is a whole new domain for artificial intelligence because, as The Guardian points out:

StarCraft II is a game full of hidden information. Each player begins on opposite sides of a map, where they are tasked with building a base, training soldiers, and taking out their opponent. But they can only see the area directly around units, since the rest of the map is hidden in a “fog of war”.

“Players must send units to scout unseen areas in order to gain information about their opponent, and then remember that information over a long period of time,” DeepMind says in a blogpost. “This makes for an even more complex challenge as the environment becomes partially observable – an interesting contrast to perfect information games such as Chess or Go. And this is a real-time strategy game – both players are playing simultaneously, so every decision needs to be computed quickly and efficiently.”

Once again, humans believe that the computer cannot beat the best human players.  In the Guardian article, the executive producer for StarCraft is quoted as saying “I stand by our pros. They’re amazing to watch.”

Sound familiar?

If AI can win at a game like StarCraft, it will be both exciting and troubling.

It will mean that an AI will have to reference ‘memory,’ take measured risks and develop strategy in a manner that beats a human. These three things – pattern recognition (from memory), risk taking and strategy – are skills that command a premium wage in economies that value ‘knowledge workers.’

In 2015 a research team at Oxford University published a study predicting that 35% of current jobs are at “high risk of computerisation over the following 20 years.”  The StarCraft challenge might cause them to revise this prediction upwards.

Making Sense of Current VUCA Levels: Carlota Perez

Among colleagues around the world at the moment, there’s a definite recognition that VUCA is increasing.  One of the more interesting theories about why this is happening comes from the work of academic Carlota Perez, who has studied long-wave change theories for three decades.  In a nutshell, she believes that we’re currently transitioning from what she calls the “installation period” (where technology is developed) to the “deployment period” (where economic booms occur).  Perez believes that the levels of VUCA we are seeing now reflect that transition.

So how do you know when you’re in the gap between the two?  Here’s one metric that she uses to support her view:

During Installation, there is always strong asset inflation (both in equity and in real estate) while incomes and consumption products do not keep pace. This creates a growing imbalance in which the asset-rich get richer and the asset-poor get poorer. When salaries can buy houses again, we will be closer to the golden age.

In many countries around the world there is a profound disconnect between average income and the ability to buy a house. For example, in Canada the average home price in July 2016 was $480,743, while the average Canadian employee earns just over $49,000 a year.

In parts of the UK such as Trafford (and it’s important to note that this isn’t London) house prices are now 8.9 times average wages, and in Stockport 7 times. In Manchester the multiple rose to 5.1 in 2015.

In New Zealand the average house price is now six times the annual household income.
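
For comparison with the UK multiples above, here’s the same price-to-income calculation applied to the Canadian figures quoted earlier (a simple division, using the numbers as cited):

```python
avg_house_price = 480_743    # average Canadian home price, July 2016 (as quoted)
avg_annual_income = 49_000   # average Canadian employee income (as quoted)

multiple = avg_house_price / avg_annual_income
print(f"house prices are {multiple:.1f}x average annual income")
# -> house prices are 9.8x average annual income
```

By Perez’s metric, Canada sits even further from the “salaries can buy houses again” threshold than the UK examples.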

One of the other key changes Perez points to as an indicator is the birth of new economic instruments:

…there need to be innumerable investments and business innovations to complete the fabric of the new economy. Here’s one small example: Millions of self-employed entrepreneurs work from home with uneven sources of income. Where are the financial instruments to smooth out their money flow so they can work and live without anxiety?

This sounds remarkably like the innovations surrounding the deployment of blockchain. One of the best quotes I’ve heard about this technology is:

If the Internet is a disruptive platform designed to facilitate the dissemination of information, then Blockchain technology is a disruptive platform designed to facilitate the exchange of value.

Perez cites two other indicators that can be used to spot the transition. The first is more financial regulation at a global level.  However, the complexity at play here is that in a world heading away from globalisation, it’s very difficult to bring nations together to agree on these types of initiatives.  It may take another severe financial crisis to induce a global agreement.

The final indicator is increasingly stable industry structures, and I’d argue that this is currently harder to discern.  However, one signal may be the consolidation of internet traffic by Google, Apple, Microsoft, Facebook and Amazon.  Most of the world’s internet traffic flows through one of these organisations, and they also act as enablers – for example, the creation of a store front with Amazon, with promotion via Facebook or Google.

Whichever way you look at the current macro global situation, it’s clear we’re not in what Perez calls the “Golden Age.”  Perez herself notes that the Golden Age might not even eventuate, and that patterns from the past might not foretell the future:

Historical regularities are not a blueprint; they only indicate likelihood. We are at the crossroads right now.

McKinsey on foresight in Boards

Although it dates from 2014, this McKinsey article is full of gems for organisations seeking to connect foresight, strategy and innovation at the highest level. It’s a solid five-minute read, but I’ve culled the absolute highlights below:

Governance suffers most when boards spend too much time looking in the rear-view mirror and not enough scanning the road ahead. Directors still spend the bulk of their time on quarterly reports, audit reviews, budgets, and compliance—70 percent is not atypical—instead of on matters crucial to the future prosperity and direction of the business.

The alternative is to develop a dynamic board agenda that explicitly highlights these forward-looking activities and ensures that they get sufficient time over a 12-month period.

“Boards need to look further out than anyone else in the company,” commented the chairman of a leading energy company. “There are times when CEOs are the last ones to see changes coming.”

Complexity and technology

This is an insightful piece from the NY Times about the rise of American tech giants, but it also touches on an issue which increases the VUCA score of the world:

“What’s happening right now is the nation-state is losing its grip,” said Jane K. Winn, also a professor at the University of Washington School of Law, who studies international business transactions. “One of the hallmarks of modernity is that you have a nation-state that claims they are the exclusive source of a universal legal system that addresses all legal issues. But now people in one jurisdiction are subject to rules that come from outside the government — and often it’s companies that run these huge networks that are pushing their own rules.”

Ms. Winn pointed to Amazon as an example. The e-commerce giant sells both its own goods and those from other merchants through its marketplace. In this way, it imposes a universal set of rules on many merchants in countries in which it operates. The larger Amazon gets, the more its rules — rather than any particular nation’s — can come to be regarded as the most important regulations governing commerce.

Source: Why the World Is Drawing Battle Lines Against American Tech Giants

The implications for macro-scale innovation cycles

This is a fascinating long read about where the world is heading with the next wave of innovation.  It’s also insightful when considering where innovation will focus in the coming years:

Intrapreneurship and skunkworks are replaced by internal innovation processes which, while ineffective at producing radical innovations, allow controllable and measurable sustaining innovation. Money that would have been spent financing external innovation is redirected back to corporate development and, perhaps, even corporate controlled research labs.

These sorts of controllable and measurable innovation processes are already taking hold, both inside and outside the corporate world. It’s no coincidence that the buzzwords in innovation the last few years have been ‘lean’ and ‘customer development.’ While these both claim to be new discoveries, they are actually old practices that fell out of favor during the installation period because they aren’t suited to radical, fast-moving innovation; they only work when innovation is slower and more predictable: Steve Jobs could not have used customer development to create the Apple computer; Henry Ford’s quip that if he asked his customers what they wanted they would have said “a faster horse” are both acknowledgements of this.

The hallmark of a new technological revolution is that the innovation trajectory is unknown: lean doesn’t work on early adopters because they will use anything novel (i.e. the Altair as an MVP was pretty well useless in predicting what mainstream customers would want in a personal computer); customer development doesn’t work when you’re developing a general purpose technology. In general, you can’t iterate your way to radical innovations, almost by definition.

Source: The Deployment Age | Reaction Wheel

Economist on the relevance of the blockchain

If you are not familiar with the blockchain, the Economist has an excellent primer on it which goes beyond the simple first-mover, Bitcoin.

The article includes a graphic that gives a good explanation of how the chain is built, and how it’s kept unique.
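
For readers who want the mechanics rather than the picture, here’s a minimal, hypothetical sketch of the core idea in Python – each block commits to the hash of the block before it, so tampering with history breaks every later link (real blockchains add consensus, signatures and proof-of-work on top of this):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including its link to the previous block."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block that commits to the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    """Check every block still points at the true hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(verify(chain))                     # True
chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
print(verify(chain))                     # False – the chain no longer checks out
```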

Towards the end of the article is a section that nails why it’s important beyond currency:

One of the areas where such ideas could have radical effects is in the “internet of things”—a network of billions of previously mute everyday objects such as fridges, doorstops and lawn sprinklers. A recent report from IBM entitled “Device Democracy” argues that it would be impossible to keep track of and manage these billions of devices centrally, and unwise to try; such attempts would make them vulnerable to hacking attacks and government surveillance. Distributed registers seem a good alternative.

The sort of programmability Ethereum offers does not just allow people’s property to be tracked and registered. It allows it to be used in new sorts of ways. Thus a car-key embedded in the Ethereum blockchain could be sold or rented out in all manner of rule-based ways, enabling new peer-to-peer schemes for renting or sharing cars. Further out, some talk of using the technology to make by-then-self-driving cars self-owning, to boot. Such vehicles could stash away some of the digital money they make from renting out their keys to pay for fuel, repairs and parking spaces, all according to preprogrammed rules.
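
As a toy sketch of that rule-based idea (plain Python standing in for a smart contract – the class, rules and prices below are hypothetical, not Ethereum code):

```python
import time

class RentableKey:
    """Toy sketch of a rule-based digital car key (not a real smart contract)."""

    def __init__(self, rate_per_hour):
        self.rate_per_hour = rate_per_hour
        self.rented_until = 0.0
        self.renter = None

    def rent(self, renter, payment, hours):
        # Rule 1: the key can't be double-booked.
        if time.time() < self.rented_until:
            raise RuntimeError("key is already rented")
        # Rule 2: access is granted only if payment covers the period.
        if payment < self.rate_per_hour * hours:
            raise RuntimeError("insufficient payment")
        self.renter = renter
        self.rented_until = time.time() + hours * 3600

    def unlock(self, who):
        # Rule 3: only the current renter may unlock, and only while rented.
        return who == self.renter and time.time() < self.rented_until

key = RentableKey(rate_per_hour=10)
key.rent("alice", payment=20, hours=2)
print(key.unlock("alice"))  # True
print(key.unlock("bob"))    # False
```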


Source: The great chain of being sure about things