Statement: Intention to fine Marriott International, Inc more than £99 million under GDPR for data breach
Statement in response to Marriott International, Inc’s filing with the US Securities and Exchange Commission that the Information Commissioner’s Office (ICO) intends to fine it for breaches of data protection law.
Following an extensive investigation the ICO has issued a notice of its intention to fine Marriott International £99,200,396 for infringements of the General Data Protection Regulation (GDPR).
The proposed fine relates to a cyber incident which was notified to the ICO by Marriott in November 2018. The incident exposed a variety of personal data contained in approximately 339 million guest records globally, of which around 30 million related to residents of 31 countries in the European Economic Area (EEA). Seven million related to UK residents.
It is believed the vulnerability began when the systems of the Starwood hotels group were compromised in 2014. Marriott subsequently acquired Starwood in 2016, but the exposure of customer information was not discovered until 2018. The ICO’s investigation found that Marriott failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems.
Information Commissioner Elizabeth Denham said:
“The GDPR makes it clear that organisations must be accountable for the personal data they hold. This can include carrying out proper due diligence when making a corporate acquisition, and putting in place proper accountability measures to assess not only what personal data has been acquired, but also how it is protected.
“Personal data has a real value so organisations have a legal duty to ensure its security, just like they would do with any other asset. If that doesn’t happen, we will not hesitate to take strong action when necessary to protect the rights of the public.”
Marriott has co-operated with the ICO investigation and has made improvements to its security arrangements since these events came to light. The company will now have an opportunity to make representations to the ICO as to the proposed findings and sanction.
The ICO has been investigating this case as lead supervisory authority on behalf of other EU Member State data protection authorities. It has also liaised with other regulators. Under the GDPR ‘one stop shop’ provisions the data protection authorities in the EU whose residents have been affected will also have the chance to comment on the ICO’s findings.
The ICO will consider carefully the representations made by the company and the other concerned data protection authorities before it takes its final decision.
When Facebook unveiled its new digital currency libra, it explicitly said the initiative was intended to address the problems faced by the world’s unbanked: the 1.7 billion people without a bank account. As well as facing inconvenience, these people generally pay over the odds for financial services like bank transfers or overdrafts.
This is a pretty big potential market for Facebook so it’s not surprising that it would target the opportunity. But could libra really transform access to financial services for those who are currently excluded? There are reasons to raise serious doubts.
Across the world, the main reasons people give for not holding a bank account are that they don’t have enough money, don’t see the need for an account, find it too expensive, or that another family member already has one. Not having the right documentation is also a barrier, as is distrust in the financial system.
But the specific barriers to financial inclusion vary significantly by region and are usually a combination of social and economic factors. For instance, while cost is a big barrier in Latin America, lack of documentation is the big issue in Zimbabwe and the Philippines.
This makes it difficult for any one intervention to be a solution for this huge group of people. Worryingly, the Facebook “white paper” that outlines libra does not really engage with these problems or say how it plans to overcome them.
Trust and financial literacy
People’s trust in institutions can be very important in influencing the extent to which they use their services, as I have found from my own work into microfinance, which I have presented at conferences but is yet to be published in an academic journal.
I have found that people are more likely to choose something familiar over something novel. Since libra will be a new currency relying on digital wallets and built on blockchain online ledger technology, it is not short of novelties. Inspiring trust is therefore likely to be a major challenge.
And simply signing someone up to an account – be it a bank account or a digital wallet – is only part of the financial inclusion challenge.
In India, 190m people still do not have bank accounts, but the percentage of the population who do have accounts has steadily increased to 80%. In 2017, however, nearly half of all bank accounts in the country had seen no activity over the whole of the previous year. One of the reasons is financial literacy, which remains low both in India and many other developing countries. Many people in India have said they are simply unaware of the different benefits of a bank account, such as overdraft facilities or credit schemes.
As many as 62% of the world’s unbanked have received only a primary-level education or less, and in poorer countries the proportion is almost certainly going to be higher. Expecting such people to make complex currency conversions into a new virtual currency is asking a lot.
First, there is a need for financial literacy measures and initiatives aimed at motivating unbanked people to use the services available. Without this additional support, there is a strong risk that Facebook will boast large numbers of sign-ups but very low rates of transactions from the people who are most in need.
Only a few days since Facebook’s announcement, libra has faced strong pushback from regulators and policymakers around the world. There is much concern about this proposed shift of power from central banks to a private corporation.
But aside from questions about the ethics of data privacy or the creation of a supranational currency, libra faces an important practical question. On the one hand, it is not clear how a model such as libra, where there will presumably be little or no physical presence in many countries, would interact with and adhere to local regulations.
On the other hand, if it does conform to the local standards of each country, it is unclear how it will overcome challenges like signing people up and strict documentation requirements. Will it really be able to serve the unbanked better than local providers who are used to the challenges in that specific market already?
Entrepreneurs and businesses can either start with a problem and think of the best way to solve it; or they can start with a solution and find the biggest and best problem it might solve. I’m not convinced that libra is a good move in either direction. Facebook either has a huge amount of work to do to adapt its solution to fit the problem better, or it needs to redefine the problem that it is trying to fix.
American retail giant Walmart is becoming the latest challenger to clamber into the ring and take on the reigning TV/movie streaming heavyweights with original material.
At a press conference in New York, Walmart announced a slate of new commissions for its streaming contender, Vudu. Added to the 100,000-plus TV shows and movies already available on the service, viewers can expect the likes of Friends in Strange Places, a travel/comedy series overseen by Queen Latifah; interview documentary strand Turning Point with Randy Jackson; and a series-length reboot of 1983 Michael Keaton comedy Mr Mom.
The new offering is aimed primarily at Middle America, which Walmart feels has been underserved by streaming incumbents like Netflix and Amazon Prime Video. Vudu’s shows will be a vehicle for new interactive advertising going live over the summer, which will allow consumers to buy what they see without leaving their sofa. Citing Walmart’s monster customer database, a senior Vudu manager recently described the company as the “sleeping giant of the digital entertainment space”.
If so, it’s about to wake up to a very crowded marketplace. It’s only weeks since Apple announced streaming service Apple TV+, which is to combine licensed shows with original programming when it launches worldwide this autumn.
Disney, meanwhile, is following suit with Disney+ in November – initially in the US, then rolling out to other countries next year.
Other existing streamers include Hulu and HBO Now, while Discovery and NBCUniversal are both launching rivals next year as well. Between them, these companies are spending many billions of dollars on content. It doesn’t take a seer to predict that a good few will likely fail.
Sizing them up
Among these newer announcements, Apple and Disney look the stronger contenders. Apple has the ready-made platform of a billion devices to promote and deliver its service, while Disney has the richest content portfolio across multiple categories – from video games to live sports to superheroes.
Vudu may have the heft of Walmart behind it, but its content investment is likely to be a fraction of the other two’s: Apple has said it will spend US$2 billion (£1.5 billion) a year at first, while Disney is spending only $500m on originals, including the likes of three Avengers spin-offs, but the group’s total annual content spend is nearly 50 times bigger. Walmart has not said what Vudu is spending. On the other hand, Vudu’s offering will be mostly free, while Disney+ and Apple TV+ will both charge monthly subscriptions.
At any rate, all three are likely to struggle – and the same goes for the other new arrivals. We are heading for a serious case of “subscription fatigue”. When consumers watch free-to-air television, broadcasters take care of the messy process of making deals with content owners, aggregating content and serving it up. As pay-TV operators like Sky or the cable networks started to emerge, consumers sometimes had to choose a package to get a particular channel or programme.
But with streaming in future, this experience is going to become more and more frustrating – Where can I find Westworld? Where is Blue Planet these days? – not to mention expensive for anyone tempted by multiple offerings. By building competing services, all these media giants are playing their own Game of Thrones.
The way forward is clear, but controversial. Apple, Disney, AT&T, NBCUniversal and the other large players should collaborate to create a dominant content platform. Partnering among subscription services would take some of the burden off consumers and make the combined offering more appealing than existing options. Imagine subscribing to a single service to receive access to everything from classic TV and movies to the latest shows. The market can probably handle two or three mega platforms, but not more.
Ironically, Disney already has a ready-made option in its arsenal. Hulu was set up as a joint venture between Disney, NBCUniversal, Fox and WarnerMedia (now owned by AT&T). Yet Hulu’s claim to be a cross-industry platform is getting weaker, not stronger: Fox’s 30% share defaulted to Disney when it was taken over, and AT&T has announced it wants to sell its 10% holding. Hulu has recently diversified through a partnership with music streamer Spotify, but Disney’s new dominance of the service will probably make it a less attractive option for other media companies to buy into than previously.
If media companies combined their streaming services, it would certainly raise antitrust concerns. But unless they evolve into an industry platform soon, the door will open for other players to take the lead – I’m thinking digital giants like Google or Facebook, internet service providers or telecommunications companies.
Many of these players already have a subscription relationship with consumers, so it would be relatively easy for them to bundle video streaming into existing services. Amazon’s shift into the media world is a textbook example of how this could play out.
It is reminiscent of the early 2000s, when the record majors built walled gardens around their content only to watch in horror as Apple’s iTunes stole the market from under them with a convenient, cheap and comprehensive option. Spotify then stole it again a few years later. Media companies should also beware the prospect of consumers being driven in larger numbers to illegal or quasi-legal video consolidation services.
There are recent precedents that they could follow of competitive partnering in other industries: BMW and Daimler recently announced they would join forces to build common platforms for ride sharing and electric vehicle charging, among other things, having realised they are stronger together than apart.
The media giants would be well advised to start exploring similar possibilities.
Consumers are already baulking at both the cost of multiple subscription services and the inconvenience of having to keep track of which shows are on which services. The ultimate winner will be the first option that can provide scale and convenience at a reasonable cost. If today’s streaming companies aren’t careful, they will end up on the outside looking in.
Many great churches and cathedrals have suffered catastrophic fires over their long histories and medieval chronicles are full of stories of devastation and ruin as a result – but they also tell of how the buildings were reconstructed and made better than ever.
The devastating fire that destroyed the roofs and spire of Notre Dame in Paris demonstrated the vulnerabilities of medieval cathedrals and great churches, but also revealed the skills of their master masons. The lead-covered wooden roof structure burned so fast because the fire was able to take hold under the lead and increase in intensity before it was visible from the outside, and it then spread easily to all the other sections of the roof.
Notre Dame was saved from total destruction because the medieval builders gave it a stone vault over all the main spaces, and also on the tops of the aisles which meant that the burning timbers and molten lead couldn’t break through easily.
But French churches and cathedrals are more at risk than ones in Britain because they don’t usually have a stone tower in the centre to act as a firebreak – this is what saved York Minster in 1984 when the transept roof caught fire but the tower stopped it spreading further.
Turning to Britain, medieval chronicles provide fascinating reading for historians as we can find eyewitness accounts of the unfolding disasters when fires occurred in the past. At Croyland Abbey in Lincolnshire, the monk who found the fire in the 12th century rushed to the cloister to wake the sleeping monks in their dormitory, but was burned by the red-hot lead falling from the roof and had to be taken to the infirmary for treatment.
Swift action by the other monks saved the building, and the next abbot restored it to its former glory, although the loss of precious manuscripts and documents, “caused them much sorrow”.
The canons of the great priory church of Gisborough in north-east England were very unlucky: the masons had just completed a very splendid, and expensive, rebuilding project when they had to start all over again. On May 16, 1289, so the chronicles tell us, a plumber – in medieval times, someone who worked with lead – and his two assistants went up onto the roof to make a few final repairs to the leads. Unfortunately, the plumber left a fire pan on the roof beams when he went down for his lunch, leaving his assistants to put out the fire. This they failed to do, and the whole roof went up in flames, followed by the building and all its contents.
Traces of the fire can still be found at the west end of the church, which is virtually all that they were able to save, and a new building arose from the ashes over the next hundred years. Plumbers had to be very careful: they were the only ones who needed to have fires burning close to where they were working, and at Ely Cathedral you can still see where a plumber used the hollow between two arches high up on the back of the west front as a makeshift chimney for his fire. Fortunately, nothing dreadful happened there.
At Lincoln cathedral, we can see where the fire in the west front in the 12th century damaged the staircases because these acted as chimneys and spread the fire quickly up into the rest of the building. The building’s limestone turned pink in the extreme heat and it’s clear that the masons had to take down the more damaged parts of the west front to repair the stonework that had been closer to the fire and had cracked. One fascinating detail remains: the masons had to check how deeply the fire damage had penetrated the stone and the marks they cut into the stone are still there.
Canterbury Cathedral was struggling to cope with all the pilgrims drawn to the shrine of the murdered Thomas Becket and a fire of 1174 gave the monks the chance to build a fine new building to house his shrine.
The eyewitness account has details of the heroic monks rushing into the building to save all its treasures, and it’s even been suggested that this fire wasn’t an accident and was started by the monks themselves as it brought so many benefits in its wake. The master mason gave them a superb new building in the Gothic style and with all the funds pouring in, the monks were able to move back into their church within five years of the fire, although completing the building work took a little longer.
For Sir Christopher Wren, the Great Fire of London in 1666 gave him the opportunity he’d been waiting for: to give London the cathedral it needed for the modern age. The medieval cathedral had been falling into disrepair for years and various attempts to patch it up had left it weakened and muddled in appearance. Wandering among the ruins after the fire, Wren was handed a piece of stone from a tomb monument with the word “Resurgam” – I will rise again – carved on it, and this encouraged him to press on with his plans for a whole new building. It took 50 years, but it gave us the St Paul’s Cathedral that we know today.
Coventry also rose from the ashes of despair after the firebombing of November 1940 in World War II. The cathedral had been built as one of the city’s great medieval churches and became the city’s cathedral in 1918. It was a fine late-medieval building with a huge timber roof, and this was no match for the fire bombs that rained down on it during Coventry’s blitz.
Burning timbers fell straight down into the building and caused a huge bonfire that cracked the slender stone work supports and brought them crashing down. By morning, the building was a devastated shell. Basil Spence, the architect of the new Coventry Cathedral in the 1950s, sensitively integrated the ruins into the design of his new building where they stand as a memorial to the events of the 1940s.
The 20th century saw a few serious fires. York Minster’s huge 1984 fire was believed to have been caused by either lightning or an electrical fault. York has been very unlucky over the years: it has had a succession of fires, and without stone vaults over the building, the minster has been very vulnerable. After the last restoration, York had the inspired idea of asking school students to design some of the carvings on the new transept vault.
The threat of fire in historic buildings is a constant one, and the people who look after the buildings, on a day-to-day basis, or in response to disaster, are unsung heroes who deserve gratitude and support. Notre Dame, Paris will be restored and made glorious once again – fires have always been a risk, and restorations have always been a part of church history.
A massive redevelopment of the old Royal Albert Dock in East London is transforming the derelict waterfront into a gleaming business district. The project, which started in June 2017, will create 325,000 square metres of prime office space – a “city within a city”, as it has been dubbed – for Asian finance and tech firms. Then, in 2018, authorities in Kampala, Uganda celebrated as goods from the Indian Ocean were unloaded from a Lake Victoria ferry onto a rail service into the city. This transport hub was the final part of the Central Corridor project, aimed at connecting landlocked Uganda to Dar es Salaam and the Indian Ocean.
Both of these huge projects are part of the US$1 trillion global infrastructure investment that is China’s Belt and Road Initiative (BRI). China’s ambition to reshape the world economy has sparked massive infrastructure projects spanning all the way from Western Europe to East Africa, and beyond. The nation is engaging in what we, in our research, call “Silk Road urbanism” – reimagining the historic transcontinental trade route as a global project, to bring the cities of South Asia, East Africa, Europe and South America into the orbit of the Chinese economy.
By forging infrastructure within and between key cities, China is changing the everyday lives of millions across the world. The initiative has kicked off a new development race between the US and China, to connect the planet by financing large-scale infrastructure projects.
Silk Road urbanism
Amid this geopolitical competition, Silk Road urbanism will exert significant influence over how cities develop into the 21st century. As the transcontinental trade established by the ancient Silk Road once led to the rise of cities such as Herat (in modern-day Afghanistan) and Samarkand (Uzbekistan), so the BRI will bring new investment, technology, infrastructure and trade relations to certain cities around the globe.
The BRI is still in its early stages – and much remains to be understood about the impact it will have on the urban landscape. What is known, however, is that the project will transform the world system of cities on a scale not witnessed since the end of the Cold War.
Silk Road urbanism is highly selective in its deployment across urban space. It prioritises the far over the near and is orientated toward global trade and the connections and circulations of finance, materials, goods and knowledge. Because of this, the BRI should not only be considered in terms of its investment in infrastructure.
It will also have significance for city dwellers – and urban authorities must recognise the challenges of the BRI and navigate the need to secure investment for infrastructure while ensuring that citizens maintain their right to the city, and their power to shape their own future.
Developments in both London and Kampala highlight these challenges. In London, Chinese developer Advanced Business Park is rebuilding Royal Albert Dock – now named the Asian Business Port – on a site it acquired for £1 billion in 2013 in a much-criticised deal approved by former London mayor Boris Johnson. The development is projected to be worth £6 billion to the city’s economy by completion.
But the development stands in sharp contrast with the surrounding East London communities, which still suffer poverty and deprivation. The challenge will be for authorities and developers to establish trusting relations through open dialogue with locals, in a context where large urban redevelopments such as the 2012 Olympic Park have historically brought few benefits.
The creation of a third financial district, alongside Canary Wharf and the City of London, may benefit the economy. But it remains to be seen if this project will provide opportunities for, and investment in, the surrounding neighbourhoods.
The Ugandan capital Kampala is part of the Central Corridor project to improve transport and infrastructure links across five countries: Burundi, the Democratic Republic of the Congo (DRC), Rwanda, Tanzania and Uganda. The project is financed through the government of Tanzania via a US$7.6 billion loan from the Chinese bank Exim.
The growth of the new transport and cargo hub at Port Bell, on the outskirts of Kampala, with standardised technologies and facilities for international trade, is the crucial underlying component for Uganda’s Vision2040.
This national plan alone encompasses a further ten new cities, four international airports, national high speed rail and a multi-lane road network. But as these urban transformations unfold, residents already living precariously in Kampala have faced further uncertainty over their livelihoods, shelter and place in the city.
During fieldwork for our ongoing research into Silk Road urbanism in 2017, we witnessed the demolition of hundreds of informal homes and businesses in the popular Namuwongo district, as a zone 30 metres either side of a rehabilitated railway track, required for the Central Corridor, was cleared.
As Silk Road urbanism proceeds to reshape global infrastructure and city spaces, existing populations will experience displacement in ways that are likely to reinforce existing inequalities. It is vital people are given democratic involvement in shaping the outcomes.
Superficially, the plan resolves a number of the challenges posed by Brexit, including the thorny issue of the Irish border and the UK’s future trading relationship with the EU. But the plan – also known as Norway+ because it has similarities with the EU’s relationship with Norway – involves the UK compromising on a number of its current red lines, while at the same time requiring a fundamental revision of one of the EU’s existing free trade agreements.
One way to understand how a Common Market 2.0 might work – and how it would differ to other options on the table – is to look at one type of good that might move between countries. Say, cheese.
First, it’s important to establish the difference between a free trade agreement and a customs union. As a rule, tariffs are applied on the basis of where goods originate. The EU’s free trade agreement with Canada, for example, means that you can import Canadian Avonlea cheese into the EU free from tariffs. However, the EU’s lack of a free trade agreement with the US means that American Monterey Jack cheese is charged at €221.20/100kg. If you first export Monterey Jack to Canada, and then from Canada to the EU, it will still be chargeable, as the goods originated in the US. Under a free trade agreement, checks on where goods originated – known as “rules of origin” checks – are still required.
A customs union is a more advanced form of trading relationship, where you agree not only to remove any tariffs on each other’s goods, but also to apply the same tariffs on goods originating from third countries. This means, for the purposes of the EU customs union, that Monterey Jack will be treated the same whether it is imported to Belgium or Bulgaria. Within a customs union it’s unnecessary to check from where goods originate when they cross a border as they will already have received the appropriate customs treatment.
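The cheese example above can be sketched as a toy calculation. This is a simplification for illustration only: the €221.20/100kg rate is the one quoted above, but the function names and the two-rule logic are invented here, and real customs treatment involves far more detail.

```python
# Illustrative model of the cheese example: the same shipment under a
# free trade agreement versus a customs union.

TARIFF_PER_100KG = 221.20  # EU tariff (euros) on this cheese from a third country

def fta_tariff(origin: str, weight_kg: float) -> float:
    """Free trade agreement: tariff depends on where the goods ORIGINATED,
    so a rules-of-origin check is always needed at the border."""
    if origin == "Canada":  # EU-Canada FTA: tariff-free
        return 0.0
    return TARIFF_PER_100KG * weight_kg / 100  # e.g. US Monterey Jack

def customs_union_tariff(already_cleared: bool, weight_kg: float) -> float:
    """Customs union: goods that entered the union anywhere have already
    received the common tariff, so no origin check at internal borders."""
    if already_cleared:
        return 0.0
    return TARIFF_PER_100KG * weight_kg / 100

# US cheese routed via Canada is still charged under the FTA ...
print(fta_tariff("US", 500))             # about 1,106 euros on 500kg
# ... but once inside a customs union, it moves internally tariff-free.
print(customs_union_tariff(True, 500))   # 0.0
```

The point of the sketch is the second function’s first branch: inside a customs union the origin question never has to be asked at an internal border, which is exactly what removes the need for checks on the island of Ireland.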
Back to the 1990s
The Common Market 2.0 idea is an attempt to reverse engineer the previous 25 years of EU integration, reverting the UK’s participation in the EU to the position before the Maastricht Treaty was agreed in 1992.
Under the plan, the UK would rejoin the European Free Trade Association (EFTA), of which it was a founding member prior to joining the European Economic Community. The UK would also accede to the European Economic Area (EEA) agreement with the EU. This is a two-pillared agreement between the EU and three of the four EFTA members: Iceland, Liechtenstein and Norway, but not Switzerland. This is often known as the “Norway model”.
Under Common Market 2.0, the UK would be the only state outside the EU to participate in both the EU customs union and the single market.
Such an approach would result in the UK adopting EU-EEA measures relating to the internal market, including the free movement of goods and services, and competition law. But the UK would no longer be subject to the direct jurisdiction of the Court of Justice of the EU, which would be replaced by the jurisdiction of the EFTA Court.
Under this approach, regulatory alignment is all but guaranteed, as standards would ultimately be agreed by the EU and EEA (of which the UK would be a member) – meaning that all cheese capable of being sold in the EU, be that French brie or Dutch edam, ought to be capable of being sold in the UK and vice versa.
What moves the Common Market 2.0 proposal beyond simply replicating the Norway model, however, is that it also involves the UK entering a customs union directly with the EU, thereby removing the need for rules of origin checks on the Irish border between Northern Ireland and the Republic of Ireland. Checks on cheese moving between Norway and Sweden are rare – but they do happen. By entering into a customs union with the EU such checks along the Northern Irish border would never be necessary.
The major stumbling block with Common Market 2.0, however, is that under the EFTA agreement it’s not currently possible for member states to enter into a customs union with other states – whether the EU or otherwise. So Norway cannot enter into a customs union directly with the EU, or the US, for example. If the UK were to seek this, it would require special treatment not only by the EU, but by EFTA as well – the political difficulties of which have been largely overlooked.
Free movement question
The Common Market 2.0 arrangement would also, controversially for many, involve the UK continuing with the free movement of persons. The key piece of legislation providing free movement rights for EU and EEA citizens – directive 2004/38 – was incorporated into EEA law in 2007.
One saving grace for the UK might be the joint declaration attached to that 2007 EEA decision that it cannot be the basis for the creation of political rights, and that the directive does not impinge upon immigration policy. This reflects the fact that the primary focus of EEA law is on economically active migrants, rather than EU citizens.
The Common Market 2.0 approach is therefore unlikely to be viable. Not only would it enrage the right wing of the Conservative Party, it would require agreement from the EU, the EEA and Switzerland. Given the difficulties the UK has had agreeing a deal with one trading bloc, trying to win over three – the EU, the EEA, and EFTA – as well as a domestic audience, looks near-impossible.
I can still recall my surprise when a book by evolutionary biologist Peter Lawrence entitled “The making of a fly” came to be priced on Amazon at $23,698,655.93 (plus $3.99 shipping). While my colleagues around the world must have become rather depressed that an academic book could achieve such a feat, the steep price was actually the result of algorithms feeding off each other and spiralling out of control. It turns out, it wasn’t just sales staff being creative: algorithms were calling the shots.
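The mechanism behind that price is worth sketching. Reports at the time suggested one seller’s algorithm priced the book at roughly 1.27 times its rival’s listing (it planned to buy from the rival to fulfil any order), while the rival priced at about 0.998 times the first seller’s price to stay cheapest. A minimal simulation, using those approximate multipliers but with everything else invented for illustration, shows how quickly the feedback loop compounds:

```python
# Two repricing algorithms reacting only to each other, with no sanity cap.
# Multipliers are the approximate values reported for the Amazon incident;
# the simulation itself is illustrative, not the sellers' actual code.

def spiral(start_price: float, rounds: int) -> float:
    a, b = start_price, start_price
    for _ in range(rounds):
        b = a * 1.27059  # seller B prices above A (it would buy from A to fulfil)
        a = b * 0.9983   # seller A undercuts B to stay the cheapest listing
    return max(a, b)

# Each round multiplies both prices by about 1.27, so growth is exponential:
# after roughly 57 rounds of daily repricing, a $30 book is listed in the
# tens of millions of dollars.
print(f"${spiral(30.0, 57):,.2f}")
```

Neither rule is absurd on its own; the runaway price only emerges from their interaction, which is why no human noticed until the listing became ridiculous.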
This eye-catching example was spotted and corrected. But what if such algorithmic interference happens all the time, including in ways we don’t even notice? If our reality is becoming increasingly constructed by algorithms, where does this leave us humans?
Inspired by such examples, my colleague Prof Allen Lee and I recently set out to explore the deeper effects of algorithmic technology in a paper in the Journal of the Association for Information Systems. Our exploration led us to the conclusion that, over time, the roles of information technology and humans have been reversed. In the past, we humans used technology as a tool. Now, technology has advanced to the point where it is using and even controlling us.
We humans are not merely cut off from the decisions that machines are making for us but deeply affected by them in unpredictable ways. Instead of being central to the system of decisions that affects us, we are cast out into its environment. We have progressively restricted our own decision-making capacity and allowed algorithms to take over. We have become artificial humans, or human artefacts, that are created, shaped and used by the technology.
Examples abound. In law, legal analysts are gradually being replaced by artificial intelligence, meaning the successful defence or prosecution of a case can rely partly on algorithms. Software has even been allowed to predict future criminals, ultimately controlling human freedom by shaping how parole is denied or granted to prisoners. In this way, the minds of judges are being shaped by decision-making mechanisms they cannot understand because of how complex the process is and how much data it involves.
In the job market, excessive reliance on technology has led some of the world’s biggest companies to filter CVs through software, meaning human recruiters will never even glance at some potential candidates’ details. Not only does this put people’s livelihoods at the mercy of machines, it can also build in hiring biases that the company had no desire to implement, as happened with Amazon.
In news, what’s known as automated sentiment analysis gauges positive and negative opinions about companies from a range of web sources. These scores in turn feed trading algorithms that make financial decisions automatically, without a human ever having to read the news.
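As a rough illustration of that pipeline – not any real trading system – a minimal lexicon-based version might look like this, where the word lists and thresholds are invented for the example:

```python
# Minimal lexicon-based sentiment-to-signal pipeline.
# Word lists and thresholds are illustrative assumptions.
POSITIVE = {"beat", "growth", "record", "upgrade", "profit"}
NEGATIVE = {"miss", "lawsuit", "recall", "downgrade", "loss"}

def sentiment(headline: str) -> int:
    """Count positive words minus negative words in a headline."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def signal(headline: str) -> str:
    """Turn a sentiment score into a trading decision."""
    score = sentiment(headline)
    if score > 0:
        return "buy"
    if score < 0:
        return "sell"
    return "hold"

print(signal("Quarterly profit beat forecasts amid record growth"))  # buy
print(signal("Regulator opens lawsuit after product recall"))        # sell
```

Real systems use trained models rather than hand-picked word lists, but the structural point stands: money can move before any human has read the headline.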
In fact, algorithms operating without human intervention now play a significant role in financial markets. For example, 85% of all trading in the foreign exchange markets is conducted by algorithms alone. The growing algorithmic arms race to develop ever more complex systems to compete in these markets means huge sums of money are being allocated according to the decisions of machines.
On a small scale, the people and companies that create these algorithms are able to affect what they do and how they do it. But because much of artificial intelligence involves programming software to figure out how to complete a task by itself, we often don’t know exactly what is behind the decision-making. As with all technology, this can lead to unintended consequences that may go far beyond anything the designers ever envisaged.
Consider the 2010 “flash crash”, when US stock markets plunged and then largely rebounded within minutes. The algorithms that amplified the initial problems didn’t make a mistake. There wasn’t a bug in the programming. The behaviour emerged from the interaction of millions of algorithmic decisions playing off each other in unpredictable ways, following their own logic in a way that created a downward spiral for the market.
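That kind of self-reinforcing spiral can be sketched with a toy market of momentum-following bots. Every parameter below is an illustrative assumption; the point is that no individual rule contains a bug, yet together they turn a small dip into a crash:

```python
import random

# Toy market: each bot sells (once) when the price has fallen further
# since the last tick than its randomly assigned tolerance. All
# parameters are illustrative assumptions, not a model of real markets.
random.seed(1)
N_BOTS = 1000
tolerances = [random.uniform(0.001, 0.05) for _ in range(N_BOTS)]
sold = [False] * N_BOTS

last = 100.0
price = last * 0.995   # a small external 0.5% dip starts things off
for _ in range(30):
    drop = (last - price) / last   # fall since the previous tick
    new_sellers = [i for i in range(N_BOTS)
                   if not sold[i] and tolerances[i] < drop]
    for i in new_sellers:
        sold[i] = True
    last = price
    price *= 1 - 0.0004 * len(new_sellers)   # each sale presses the price down

print(f"price after the cascade: {price:.2f} (started at 100.00)")
```

The first dip triggers only the most nervous bots, but their selling deepens the fall, breaching more tolerances in turn – a cascade in which every bot behaves exactly as designed.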
The conditions that made this possible occurred because, over the years, the people running the trading system had come to see human decisions as an obstacle to market efficiency. Back in 1987 when the US stock market fell by 22.61%, some Wall Street brokers simply stopped picking up their phones to avoid receiving their customers’ orders to sell stocks. This started a process that, as author Michael Lewis put it in his book Flash Boys, “has ended with computers entirely replacing the people”.
The financial world has invested millions in superfast cables and microwave communications to shave just milliseconds off the rate at which algorithms can transmit their instructions. When speed is so important, a human being who requires a massive 215 milliseconds to click a button is almost completely redundant. Our only remaining purpose is to reconfigure the algorithms each time the system of technological decisions fails.
As new boundaries are carved between humans and technology, we need to think carefully about where our extreme reliance on software is taking us. As human decisions are substituted by algorithmic ones, and we become tools whose lives are shaped by machines and their unintended consequences, we are setting ourselves up for technological domination. We need to decide, while we still can, what this means for us both as individuals and as a society.