NBR Column – Why you need to understand Facebook

Here’s the full text of my latest NBR column:

You might have seen the movie, you might already pay the company for advertising or you might simply be a user. No matter how you interact with Facebook, it’s arguably the one piece of software that everyone online today should understand in detail.

The company was started by Mark Zuckerberg in 2004 as a small business in a university dorm room in the US. The premise was simple – it was a method for people to update their social life on the internet so their friends could see what they were doing.

From this humble beginning the business has now grown to the point where it is regularly used by 1.8 billion people, including almost 80% of all American adults.

The company now offers a range of compelling ways to keep in touch, including live video, instant messaging and free voice calls to friends around the world. This last point is particularly relevant because it raises the question of how the company can offer these services to billions of people without charging a subscription.

Facebook can offer these services free because it also shows advertisements – a lot of advertisements. Last year the company made a profit of $US10.2 billion, built primarily on advertising revenue.

Advertisers are attracted to Facebook because the average user spends almost an hour a day on the site, and the more time people spend there, the more advertisements Facebook can place in front of them. The company now shows users more advertisements than it used to.

Checking for updates
To ensure people keep looking at Facebook, the company spends a lot of money working out how to make users check the site constantly for updates. The updates they view are not simply about their friends but also advertisements and information from commercial organisations, including news outlets. Facebook lets people give feedback on this information by clicking an icon labelled ‘like.’ It’s important to note that there is no icon to ‘unlike’ something.

The updates are viewed in a user’s ‘news feed.’  Bear in mind that the news feed may contain what used to be known as news but is more likely to contain a mix of content, some of which might be from reputable media outlets. Almost any organisation can pay for updates that then appear in users’ news feeds. These updates may or may not look like advertisements.

Once users start to ‘like’ information in their news feed, detailed personal data starts to be created. Research has found that after a Facebook user clicks ‘like’ on 70 updates, the company knows more about that person than their friends do. Past 170 likes, Facebook knows a user better than their parents do.

Knowing users at this level allows Facebook to tailor the information it delivers to each user so they spend more time on the site.  The company runs massive social experiments involving hundreds of thousands of users to understand how to manipulate information to boost time on the site and, in turn, boost advertising revenue.

One result of this strategy is that Facebook users tend to see only information that reflects what they already like, because information that conflicts with their world view risks reducing the time they spend on the site.

Shaping public opinion
Another result is that Facebook is now such a compelling way to spend ‘free’ time that over 60% of millennials get their political news from their Facebook news feed. At first glance this might not seem important but it’s critical to understand the role of technology in shaping public opinion in today’s world.

To illustrate this, consider the curious example of UK technology entrepreneur and commentator Tom Steinberg. He was against the UK leaving the EU, and his Facebook feed reflected that preference. As a result, the day after the referendum he could not find a single person on the site celebrating the Brexit victory.

Bear in mind that Steinberg is highly internet-literate and should have been able to find at least one person in his Facebook network among the 17 million people who voted to leave the EU. However, as he supported the other side of the vote, Facebook had filtered his feed so it reflected only his own world view.

The implications of this start to get complex, so to recap:

  1. Facebook needs people to spend time using its software, so it can sell more advertising and generate larger profits.
  2. To achieve this, it uses psychological research to encourage people to return to the site many times a day.
  3. It also manipulates the information you see so it reflects your world views, which in turn makes you more likely to – you guessed it – spend more time on Facebook.
  4. The more time you spend on Facebook, the more likely you are to ‘like’ information updates, which then gives the company feedback that allows it to legitimately say that it knows billions of users better than their parents know them.

Political business model
At this point you may think this isn’t really a significant issue because, after all, it’s only Facebook. However, the company’s influence now extends well beyond the virtual world and is having a real impact on the physical one.

Facebook recognises the influence it can now exert, and this translates into new business models. One of these is focused on politics, as the company points out on its own website, where it cites the example of how Facebook was a crucial tool in the election of a US senator.

On its site, there is a quote from one of the leaders of this campaign which states: “Facebook really helped us cut through the clutter and reach the right voters with the message that matters most to them. In a close race, this was crucially important.”

The key phrase here is “the message that matters most to them.” Now recall the point that over 60% of millennials get their political view of the world via Facebook. Combine these two points and Facebook makes it possible to target voters with the ‘right message’ in a way that has simply never been possible before.

Granted, there’s a rich history of politicians manipulating the media, but Facebook’s reach makes the power of the software unprecedented. To put this in a local perspective, research in 2015 revealed that more than two million New Zealanders use the software every day.

Suppressing the news
Consider a scenario where Facebook itself wants to influence an election – perhaps opposing a candidate who favours regulation that would limit the company’s influence. It would be remarkably easy for the company to suppress news about, and support for, that candidate without people even knowing it was doing so.

So what does this mean for the average Facebook user?

Next time you check your Facebook feed, consider what information you’re giving to Facebook, and how it might be used.  People freely give the company deeply personal information, and the power of that data gives the company both enormous profit and enormous influence. Most of the media headlines about Facebook focus on the former.

For most active Facebook users, the closest real-world analogy to the software is a casino where it’s free to play and the payout isn’t cash but information that makes you feel good about yourself. For Facebook, the result is the same as for a casino – a licence to print money.

NBR column – the state of AI

This is my NBR column from Feb 2017:

In June last year a fascinating aerial battle took place. It didn’t take place in the actual sky but rather in the virtual one, which was appropriate considering it was a battle of man against machine.

The man in question wasn’t an ordinary pilot but a retired US Airforce pilot, Gene Lee, with combat experience in Iraq and a graduate of the US Fighter Weapons School. The machine he was battling was a simulated aircraft controlled by an artificial intelligence (AI).

What was surprising about the outcome was that the AI emerged as the victor. More surprising still, the computer running the software wasn’t a multimillion-dollar supercomputer but one using about $35 worth of computing power.

Welcome to the fast-moving world of AI.

It’s an area that has attracted significant media focus, and justifiably so. Experts in the field see the deployment of AI as the dawn of a new age. Andrew Ng, chief scientist at Baidu Research, is one of the gurus in the field.

“AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”

Most of the current applications of AI focus on recognising patterns. Software is “trained” with vast amounts of information, usually with help from people who have manually tagged the data. In this way, an AI may start with images that have been labelled as cars, then, through trial and error guided by programmers, eventually recognise images of cars without any intervention.
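The training loop described above can be sketched in a few lines. The following nearest-centroid classifier is purely illustrative – the “features” are invented numbers standing in for image data, and real systems use neural networks trained on millions of labelled images – but it shows the same pattern: learn from hand-labelled examples, then recognise unseen ones without intervention.

```python
# Toy illustration of "training" on labelled data: a nearest-centroid
# classifier. The feature vectors are made-up numbers, not real pixels.

def train(examples):
    """examples: list of (feature_vector, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to the given features."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hand-labelled "training data": invented two-number descriptions of images.
labelled = [
    ([0.9, 0.1], "car"), ([0.8, 0.2], "car"),
    ([0.1, 0.9], "cat"), ([0.2, 0.8], "cat"),
]
centroids = train(labelled)
print(predict(centroids, [0.85, 0.15]))  # -> car
print(predict(centroids, [0.15, 0.95]))  # -> cat
```

The point of the sketch is the workflow, not the algorithm: humans supply labels up front, the software distils them into an internal model, and from then on it classifies new inputs on its own.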

Extraordinary breakthroughs
This simple explanation of AI belies the extraordinary breakthroughs achieved with this approach and is illustrated by an experiment conducted by an English company called DeepMind.

In 2015, DeepMind revealed that its AI had learned how to play 1980s-era computer games without any instruction. Once it had learned the games, it could outperform any human player by astonishing margins.

This feat is a stark contrast to the battle of the mid-1990s, when an IBM computer beat Russian grandmaster Garry Kasparov at chess. To beat him, the computer relied on a virtual encyclopaedia of pre-programmed information about known moves. At no point did the machine learn how to play chess.

Winning simple computer games clearly wasn’t enough to prove DeepMind’s abilities, so a more challenging option was found in Go – an incredibly complex Asian board game with more possible board positions than there are atoms in the visible universe.

To learn Go, the AI played itself more than a million times. To put this in perspective, if a person played 10 games a day, every day, for 50 years, they would manage only around 180,000 games.

Despite the bold predictions of expert Go players, when the matches concluded in 2016, it was the DeepMind AI that had beaten one of the world’s best players.

The ability to “learn” can be easily leveraged into the real world. While gaming applications may excite hard-core geeks, DeepMind’s power was unleashed on a more useful challenge last year – increasing energy efficiency in data centres.

By looking at the information about power consumption – such as temperature, server demand and cooling pump speeds – the AI reduced electricity requirements for a Google data centre by an astonishing 40%. This may seem esoteric but around the world data centres already use as much electricity as the entire UK.

Potential implications
Once you start to consider the power of AI, the feeling of astonishment evaporates, replaced by an unsettling sense of the potential implications. For example, at the end of last year a Japanese insurance company announced plans to replace a third of one of its departments with an IBM AI. In this case only 34 people were made redundant, but the trend is likely to accelerate.

At this stage, it’s useful to put this development in context and consider what jobs might be replaced by AI. Andrew Ng has a useful rule of thumb – “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”

What’s important about this quote is the term “near future.” Once you extend the timeline further out, researchers theorise that the implications of AI for the workforce are significant. One study published in 2015 estimated that, across the OECD, an average of 57% of jobs were at risk from automation.

This number has been heavily disputed since it was published, but the exact percentage doesn’t really matter. What is important to keep in mind is that AI will change the nature of jobs forever, and it’s highly likely that work in the future will feature people working alongside machines. This will result in a more efficient workforce, which in turn is likely to lead to job losses.

However, it’s not just the workforce that could change. The potential for this technology dwarfs anything humans have ever invented, and, just like the splitting of the atom, the jury is out on how things will develop.

One of the world’s experts on existential threats to humanity – Nick Bostrom at Oxford University – surveyed the top 100 AI researchers. He asked them about the potential threat AI poses to humanity, and the responses were startling. More than half believed there is a substantial chance that the development of an artificial intelligence matching the human mind won’t end well for one of the groups involved. You don’t need to work alongside an AI to figure out which group.

The thesis is simple – Darwinian theory applied to the biological world leads to the dominance of one species over another. If humans create a machine intelligence, probably the first thing it would do is reprogramme itself to become smarter. In the blink of an evolutionary eye, people could become subservient to machines with intelligence levels impossible to comprehend.

The exact timeframe for this scenario is hotly debated, but the same experts polled by Bostrom thought that there was a high chance of machines having human-level intelligence this century – perhaps as early as 2050.

To paraphrase a well-worn cliché, we will live in interesting times.

Copyright NBR. Cannot be reproduced without permission.
Read more: https://www.nbr.co.nz/opinion/keeping-eye-artificial-intelligence
Follow us: @TheNBR on Twitter | NBROnline on Facebook

NBR Column – driverless cars

This is my NBR column from December 2016:

Since the invention of the first “horseless buggy” in the late 19th century, there haven’t been many significant changes to the basic design of the car. There have been incremental improvements to the platform – better engines, increased safety, more comfort – but the core has remained unchanged. A driver from 1920 would be able to adapt to a modern car, and the reverse would also apply.

While a driver from the 1920s would be able to drive a car, a mechanic from the same era would no longer recognise the key components. Today’s new cars are equipped with collision avoidance sensors, traction control, ABS, air bags, reversing cameras, engine computers and media players. This technology means that new vehicles contain more software than a modern passenger aircraft and a laptop is more useful than a wrench when tinkering under the hood.

While this may be startling to some people, it pales into insignificance compared to what’s about to happen to the car when driverless vehicles become mainstream.

Since their first significant debut in 2004, driverless cars have evolved quickly. They have now been demonstrated in a range of situations, with manufacturers posting videos online showing just how well their machines work (usually in near-perfect conditions).

These advances have been enabled by developments in sensors, cameras and computing power. On their own, each of these required technologies was prohibitively expensive only a decade ago. Fast forward to now, however, and the cost has fallen to the point where it’s feasible to bundle them into a car.

For example, one of the key components is a device called a lidar, which creates a millimetre-accurate map of the world around the car. Early versions fitted to cars cost $75,000. Just last week one manufacturer announced a version with similar capabilities that would cost about $50.

Implications for ownership
While a lot of attention is on the technology in the car, the most astute analysts are focused on the second- and third-order implications of driverless vehicles. This is the most interesting part of the discussion because cars are ubiquitous in most urban environments, and a change in their form and function has massive implications.

The most significant implication will concern the very notion of car ownership.

A car is one of the most expensive assets in a household but at the same time one of the least used. Most of a car’s life is spent stationary, though the cost of ownership is justified by what it creates.

In modern society a car creates access to opportunity, and for cities without an efficient mass transit system, car ownership is the way people access opportunity.

However, the notion of car ownership is being questioned in some cities and people have calculated that using a car-sharing service is cheaper than owning a car in some situations. Driverless cars are the next evolution of on-demand mobility without requiring ownership.

The most likely scenario to emerge in cities is that private car ownership will dwindle, and the demand for mobility will be met by fleets of vehicles available on demand and tailored to your requirements.

For example, a two-seater car could take you to a meeting, while a people carrier may stop by your house in the morning to collect your kids and take them to school.

Eliminating road congestion
Once you have a network of fleets running in a city, and every car is sending data about its state, it then becomes possible to optimise roads in a way that’s simply not possible now. When you know exactly how many cars are on the road at any one time and where they are going, you can start to organise their routes in such a way that eliminates congestion.

Another implication of driverless cars is the remodelling of city streets to remove carparks – cars without drivers never need to be parked for hours on the kerbside.

The biggest benefit of driverless cars is likely to be the near elimination of road accidents. A car that’s operated by a computer will never get distracted by phone calls or fall asleep at the wheel. Some researchers have predicted that driverless cars have the potential to reduce road deaths by up to 90%.

Regulating for driverless cars is one of the biggest hurdles to their adoption, and for this reason uptake on private roads (which are free of regulation) has already begun.

To illustrate, some Australian mines have operated driverless trucks since 2008, and since their introduction productivity has increased and accidents have decreased. In New Zealand one of the first significant pilots of driverless vehicles will take place in 2017 when Christchurch airport will introduce a driverless shuttle bus on its private roads.

In the next few years the workforce will start to be affected by this technology, with truck drivers likely to be hit first. A delivery truck owned by an Uber subsidiary has already driven almost two hundred kilometres across the US on interstate highways in self-driving mode. This has profound implications for the three million truck drivers employed in the US and the industries that support them.

The next decade will be a transition period where driverless vehicles start to become commonplace in some situations. They’re unlikely to be widespread in cities as many experts believe that there are very hard problems that still need to be solved. For this reason it won’t be until after 2025 that we’re likely to see a dramatic change in the transportation fleet.

What makes this timeframe interesting is that, unlike many technology-driven changes that have slowly reshaped business, this one is clear to see. Organisations with the foresight to leverage insights about the changes driverless cars will create will do extremely well. Those without it will end up like the horseless buggy.

Copyright NBR. Cannot be reproduced without permission.
Read more: https://www.nbr.co.nz/opinion/fast-forward%C2%A0normalisation-driverless-cars-not-so-far
Follow us: @TheNBR on Twitter | NBROnline on Facebook


NBR – Monthly column

I’ve started writing a monthly column for a business weekly in New Zealand called The National Business Review. The first column went online recently; it looked at the Singularity Summit in NZ and set the tone for future columns.

The column can be read on the NBR site here or below:


Not many conferences in New Zealand attract more than 1400 people. Even fewer – perhaps none – of this size include a diverse range of professional directors, politicians, chief executives, teachers, university students, entrepreneurs and school pupils.

One that did was the three-day Singularity Summit in Christchurch. On stage were experts from Silicon Valley and New Zealand discussing how rapidly changing technologies would affect the world.

What was startling for many attendees is that many of these disruptive innovations aren’t vague predictions but are already in use – or about to be.

Science fiction author William Gibson once famously remarked that the future is already here – it’s just unevenly distributed. The truth of this was highlighted at the summit as speakers gave example after example of how entire industries are going to be upended as technology advances.

Given the audience size, this is clearly a hot topic and something that a lot of people are grappling with.

On the last day of the summit I talked to David Roberts, the opening and closing speaker, to get his insight on the level of interest.

“I think there really is something happening right now,” Roberts says. “My sense is that we’re at an inflection point.”

The international speakers were well placed to observe inflection points as many of them are members of the Singularity University – a think tank based in the heart of Silicon Valley. The name has its origin in a concept that speculates artificial intelligence will surpass human intelligence in the next few decades, leading to a technology singularity where computers outperform people.

While the concept of the singularity is controversial, it’s clear the world our children will inherit will have a dramatically different working environment to the one we know today.

Software running on extremely fast computers can already perform better than humans in a range of intricate tasks, including driving cars, flying planes and playing complicated games.

Technology has enabled some startling developments.

University of Auckland researcher Mark Sagar began his presentation with a relatively dry discussion about creating computing “building blocks” for designing virtual avatars.

His work aims to create super-realistic computer-generated faces that respond to external stimuli just like a real person.

For example, when a person stays within view of the laptop’s camera, the software can “see” a human face. This triggers the software model to release virtual oxytocin, a neurochemical related to trust and bonding.

The end result is that the virtual face – which is controlled by the virtual brain – starts to smile.

“It’s like a Lego system for building brains,” he casually mentioned just before he showed the audience exactly what he meant.

At this point it’s fair to say Dr Sagar is a man who knows how to capture your attention. When he demonstrated the end result on screen there was an audible gasp as the audience watched him interact with an extraordinarily lifelike baby – or at least its face.

Using only his laptop, Dr Sagar’s virtual baby smiled when it was talked to and became anxious when he moved out of camera view. Although it couldn’t “see” the audience, if it could it would have seen 1400 jaws drop open.

Plenty of other jaw-dropping moments occurred during the event, and by the end of the three days it was clear that few organisations would be immune to the increasing pace of technological change.

While making predictions about the future is notoriously difficult, from a strategic standpoint it’s increasingly important to develop the capability to have an over-the-horizon view.

In a series of monthly columns I will take a closer look at some of the risks and opportunities presented by rapidly changing technology in areas such as driverless cars, artificial intelligence, employment, politics and the role of New Zealand organisations.

Copyright NBR. Cannot be reproduced without permission.
Read more: https://www.nbr.co.nz/opinion/fast-forward%C2%A0-roger-dennis-hold
Follow us: @TheNBR on Twitter | NBROnline on Facebook

Human predictions about AI winning games are wrong

When Kasparov challenged the IBM chess-playing computer called Deep Blue, he was absolutely certain he would win. An article in USA Today on 2 May 1997 quoted him as saying: “I’m going to beat it absolutely. We will beat machines for some time to come.”

He was beaten conclusively.

In early 2016 another landmark in game-playing computing was reached when AlphaGo (DeepMind) challenged Lee Se-dol to a match of Go. The Asian game is orders of magnitude more complex than chess, and before the match Lee observed that “AlphaGo’s level doesn’t match mine.”

Other expert players backed Lee Se-dol, saying he would win 5-0. In the end he won only a single game.

Now the same team that developed AlphaGo is setting its sights on a computer game called StarCraft II. This is a whole new domain for artificial intelligence because, as The Guardian points out:

StarCraft II is a game full of hidden information. Each player begins on opposite sides of a map, where they are tasked with building a base, training soldiers, and taking out their opponent. But they can only see the area directly around units, since the rest of the map is hidden in a “fog of war”.

“Players must send units to scout unseen areas in order to gain information about their opponent, and then remember that information over a long period of time,” DeepMind says in a blogpost. “This makes for an even more complex challenge as the environment becomes partially observable – an interesting contrast to perfect information games such as Chess or Go. And this is a real-time strategy game – both players are playing simultaneously, so every decision needs to be computed quickly and efficiently.”

Once again, humans believe the machine cannot beat them. In the Guardian article, the executive producer for StarCraft is quoted as saying: “I stand by our pros. They’re amazing to watch.”

Sound familiar?

If AI can win at a game like StarCraft, it’s both exciting and troubling at the same time.

It will mean an AI has had to reference ‘memory,’ take measured risks and develop strategy in a manner that beats a human. These three things – pattern recognition (from memory), risk-taking and strategy – are skills that command a premium wage in economies that value ‘knowledge workers.’

In 2015 a research team at Oxford University published a study predicting that 35% of current jobs are at “high risk of computerisation over the following 20 years.” The StarCraft challenge might cause them to revise this prediction upwards.

Economist on the relevance of the blockchain

If you are not familiar with the blockchain, The Economist has an excellent primer on it that goes beyond Bitcoin, its best-known first mover.

The graphic below is a good explanation about how the chain is built, and how it’s kept unique.
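The linking idea can also be shown in code. This is a simplified sketch for illustration only – it uses SHA-256 hashes like Bitcoin but omits the real block format, timestamps, Merkle trees and proof-of-work:

```python
# Minimal sketch of how blocks chain together: each block records the hash
# of the previous block, so altering any earlier block invalidates every
# hash that follows it. Illustration only, not Bitcoin's actual structure.
import hashlib

GENESIS = "0" * 64  # conventional all-zero hash for the first block's parent

def block_hash(prev_hash, data):
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], GENESIS
    for data in entries:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["data"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(verify(chain))                    # True
chain[0]["data"] = "alice pays bob 50"  # tamper with history...
print(verify(chain))                    # False: later hashes no longer match
```

This is what makes the chain tamper-evident: rewriting history means recomputing every subsequent hash, which in a real network the other participants would reject.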

Towards the end of the article is a section that nails why it’s important beyond currency:

One of the areas where such ideas could have radical effects is in the “internet of things”—a network of billions of previously mute everyday objects such as fridges, doorstops and lawn sprinklers. A recent report from IBM entitled “Device Democracy” argues that it would be impossible to keep track of and manage these billions of devices centrally, and unwise to try; such attempts would make them vulnerable to hacking attacks and government surveillance. Distributed registers seem a good alternative.

The sort of programmability Ethereum offers does not just allow people’s property to be tracked and registered. It allows it to be used in new sorts of ways. Thus a car-key embedded in the Ethereum blockchain could be sold or rented out in all manner of rule-based ways, enabling new peer-to-peer schemes for renting or sharing cars. Further out, some talk of using the technology to make by-then-self-driving cars self-owning, to boot. Such vehicles could stash away some of the digital money they make from renting out their keys to pay for fuel, repairs and parking spaces, all according to preprogrammed rules.


Source: The great chain of being sure about things

The implications of quantum computing

At the last Foresight Week event in Singapore two years ago, Peter Schwartz and I had a long discussion about the implications of quantum computing. We concluded that a ‘computing arms race’ was developing between governments and consumers.

At the most abstract level, the foundations of computing have remained unchanged since the development of the transistor. The rise of the PC meant it was inevitable that consumers would possess extremely fast computers, which among other things enabled strong security and privacy through encryption. No matter how fast government computers became, consumers would have enough horsepower available to secure their privacy.

Now this is changing. The development of the quantum computer means the next evolution of computing will put the average person into a state of inherent insecurity, because quantum computers are expected to be able to break most of the public-key encryption currently in use. An article in the Washington Post highlights this:

Quantum mechanics is now being used to construct a new generation of computers that can solve the most complex scientific problems—and unlock every digital vault in the world. These will perform in seconds computations that would have taken conventional computers millions of years.
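A rough back-of-envelope calculation shows the scale behind claims like this. The guess rate below is an assumed round number chosen purely for illustration; the point is that Grover’s quantum search algorithm effectively halves a key’s bit strength, turning a 128-bit brute-force problem into roughly a 64-bit one:

```python
# Illustrative arithmetic only: assumed guess rate, no real hardware implied.
# Brute-forcing an n-bit key takes ~2**n tries classically; Grover's
# algorithm reduces the search to ~2**(n/2) steps.

SECONDS_PER_YEAR = 3.15e7
OPS_PER_SECOND = 1e12  # assumption: a trillion guesses per second

def years_to_search(effective_bits):
    """Years to exhaust a keyspace of 2**effective_bits at the assumed rate."""
    return 2 ** effective_bits / OPS_PER_SECOND / SECONDS_PER_YEAR

# Classical brute force of a 128-bit key: effectively forever.
print(f"classical 128-bit key: {years_to_search(128):.1e} years")
# Grover cuts the search to ~2**64 steps - the strength of a 64-bit key.
print(f"with Grover's algorithm: {years_to_search(64):.2f} years")
```

At the assumed rate the classical search takes on the order of 10^19 years, while the Grover-assisted search drops to under a year – which is why the transition to quantum-resistant encryption is treated with such urgency.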

This also means that governments and corporations will once more be the leaders in computing, harking back to the days of the mainframe, when state-of-the-art computing power was unaffordable for the average person. However, unlike the democratisation of computing power that has taken place since the development of the desktop, it’s likely to be a much shorter span before quantum computing is available in the home – or in your pocket.

In the meantime, however, the deployment of this new type of computing by security agencies is likely to add to global volatility.

Article: The third industrial revolution

A quick link to an article in The Economist on a topic that we’ve explored many times for different clients, starting back in 2007 for the Shell Technology Futures programme.

The factory of the future will focus on mass customisation—and may look more like those weavers’ cottages than Ford’s assembly line.

via Manufacturing: The third industrial revolution | The Economist.

Radio Interview on Cashless Societies (ABC Australia Future Tense)

Over the weekend ABC Australia aired a programme about the rise (or otherwise) of the cashless society. It contained an interview with me about my experience of these technologies, and specifically my time at Egg (the UK branchless bank) in the early 2000s. Here’s the blurb (and a link at the bottom):

We hear a lot about the cashless society and the death of the local bank branch—as commerce becomes increasingly digital. But how close are we to a completely cashless environment? Is it still possible to live a whole year without those little pieces of paper or polymer we carry in our pockets? We look at the rate of change when it comes to money’s digital future and whether all of us are heading for a cashless future at the same speed.

via Money, banks and our changing times – Future Tense – ABC Radio National Australian Broadcasting Corporation.

The Economist on 3D printing and making

A short snippet from an article in The Economist linking the new hobby of ‘tinkering’ with possible disruption. This is something we saw coming a few years back and identified as part of the Shell Technology Futures programme in 2007. It’s fascinating to see it unfolding:

“The tools of factory production, from electronics assembly to 3D printing, are now available to individuals, in batches as small as a single unit,” writes Chris Anderson, the editor of Wired magazine.

It is easy to laugh at the idea that hobbyists with 3D printers will change the world. But the original industrial revolution grew out of piecework done at home, and look what became of the clunky computers of the 1970s. The maker movement is worth watching.

via Monitor: More than just digital quilting | The Economist.