MAG – Magazine N.3

MAG is a FREE publication produced by AEG Corporation Limited UK – International Advice.

MAG can be downloaded for free as a PDF.

Download here (7 MB)

TV streaming titans are locked into a real-life Game of Thrones – here’s a way around this fight to the death

Let battle commence. Vitalii Petrushenko

Michael Wade, IMD Business School

American retail giant Walmart is becoming the latest challenger to clamber into the ring and take on the reigning TV/movie streaming heavyweights with original material.

At a press conference in New York, Walmart announced a slate of new commissions for its streaming contender, Vudu. Added to the 100,000-plus TV shows and movies already available on the service, viewers can expect the likes of Friends in Strange Places, a travel/comedy series overseen by Queen Latifah; interview documentary strand Turning Point with Randy Jackson; and a series-length reboot of 1983 Michael Keaton comedy Mr Mom.

The new offering is aimed primarily at Middle America, which Walmart feels has been undersold by streaming incumbents like Netflix and Amazon Prime Video. Vudu’s shows will be a vehicle for new interactive advertising going live over the summer which will allow consumers to buy what they see without leaving their sofa. Pointing to Walmart’s monster customer database, a senior Vudu manager recently described the company as the “sleeping giant of the digital entertainment space”.

If so, it’s about to wake up to a very crowded marketplace. It’s only weeks since Apple announced streaming service Apple TV+, which is to combine licensed shows with original programming when it launches worldwide this autumn.

Disney, meanwhile, is following suit with Disney+ in November – initially in the US, then rolling out to other countries next year.

Other existing streamers include Hulu and HBO Now, while Discovery and NBCUniversal are both launching rivals next year as well. Between them, these companies are spending many billions of dollars on content. It doesn’t take a seer to predict that a good few will likely fail.

Sizing them up

Among these newer announcements, Apple and Disney look the stronger contenders. Apple has the ready-made platform of a billion devices to promote and deliver its service, while Disney has the richest content portfolio across multiple categories – from video games to live sports to superheroes.

Vudu may have the heft of Walmart behind it, but the content investment is likely to be a fraction of the other two: Apple has said it will spend US$2 billion (£1.5 billion) a year at first, while Disney is spending only $500m on originals, including the likes of three Avengers spin-offs, but the group’s total annual content spend is nearly 50 times bigger. Walmart has not said what Vudu is spending. On the other hand, Vudu’s offering will be mostly free while Disney+ and Apple TV+ will both charge monthly subscriptions.

At any rate, all three are likely to struggle – and the same goes for the other new arrivals. We are heading for a serious case of “subscription fatigue”. When consumers watch free-to-air television, broadcasters take care of the messy process of making deals with content owners, aggregating the content and serving it up. As pay-TV operators like Sky or the cable networks started to emerge, consumers sometimes had to choose a package to get a particular channel or programme.

They have been warned. diy13

But in the streaming future, this experience is going to become more and more frustrating – Where can I find Westworld? Where is Blue Planet these days? – not to mention expensive for anyone tempted by multiple offerings. By building competing services, all these media giants are playing their own Game of Thrones.

The fix

The way forward is clear, but controversial. Apple, Disney, AT&T, NBCUniversal and the other large players should collaborate to create a dominant content platform. Partnering among subscription services would take some of the burden off consumers and make the combined offering more appealing than existing options. Imagine subscribing to a single service to receive access to everything from classic TV and movies to the latest shows. The market can probably handle two or three mega platforms, but not more.

Ironically, Disney already has a ready-made option in its arsenal. Hulu was set up as a joint venture between Disney, NBCUniversal, Fox and WarnerMedia (now owned by AT&T). Yet Hulu’s claim to be a cross-industry platform is getting weaker, not stronger: Fox’s 30% share defaulted to Disney when it was taken over, and AT&T has announced it wants to sell its 10% holding. Hulu has recently diversified through a partnership with music streamer Spotify, but Disney’s new dominance of the service will probably make it a less attractive option for other media companies to buy into than previously.

If media companies collaborated on their streaming services, it would certainly raise antitrust concerns. But unless they evolve into an industry platform soon, the door will open for other players to take the lead – I’m thinking digital giants like Google or Facebook, internet service providers or telecommunications companies.

Many of these players already have a subscription relationship with consumers, so it would be relatively easy for them to bundle video streaming into existing services. Amazon’s shift into the media world is a textbook example of how this could play out.

One service to rule them all. Metamorworks

It is reminiscent of the early 2000s, when the record majors built walled gardens around their content only to watch in horror as Apple’s iTunes stole the market from under them with a convenient, cheap and comprehensive option. Spotify then stole it again a few years later. Media companies should also beware the prospect of consumers being driven in larger numbers to illegal or quasi-legal video consolidation services.

There are recent precedents that they could follow of competitive partnering in other industries: BMW and Daimler recently announced they would join forces to build common platforms for ride sharing and electric vehicle charging, among other things, having realised they are stronger together than apart.

The media giants would be well advised to start exploring similar possibilities.

Consumers are already baulking at both the cost of multiple subscription services and the inconvenience of having to keep track of which shows are on which services. The ultimate winner will be the first option that can provide scale and convenience at a reasonable cost. If today’s streaming companies aren’t careful, they will end up on the outside looking in.

Michael Wade, Professor of Innovation and Strategy, Cisco Chair in Digital Business Transformation, IMD Business School

This article is republished from The Conversation under a Creative Commons license.

Notre Dame: a history of medieval cathedrals and fire


Jenny Alexander, University of Warwick

Many great churches and cathedrals have suffered catastrophic fires over their long histories, and medieval chronicles are full of stories of devastation and ruin as a result – but they also tell of how the buildings were reconstructed and made better than ever.

The devastating fire that destroyed the roofs and spire of Notre Dame in Paris demonstrated the vulnerabilities of medieval cathedrals and great churches, but also revealed the skills of their master masons. The lead-covered wooden roof structure burned so fast because the fire was able to take hold under the lead and increase in intensity before it was visible from the outside, and it then spread easily to all the other sections of the roof.

Notre Dame was saved from total destruction because the medieval builders gave it a stone vault over all the main spaces, and also on the tops of the aisles which meant that the burning timbers and molten lead couldn’t break through easily.

But French churches and cathedrals are more at risk than ones in Britain because they don’t usually have a stone tower in the centre to act as a firebreak – this is what saved York Minster in 1984 when the transept roof caught fire but the tower stopped it spreading further.

Turning to Britain, medieval chronicles provide fascinating reading for historians, full of eyewitness accounts of the unfolding disasters when fires broke out in the past. At Croyland Abbey in Lincolnshire, the monk who discovered a fire in the 12th century rushed to the cloister to wake the sleeping monks in their dormitory, but was burned by the red-hot lead falling from the roof and had to be taken to the infirmary for treatment.

Swift action by the other monks saved the building, and the next abbot restored it to its former glory, although the loss of precious manuscripts and documents, “caused them much sorrow”.

Master masons were highly skilled. Sergio Foto/Shutterstock

The canons of the great priory church of Gisborough in north-east England were very unlucky: the masons had just completed a very splendid, and expensive, rebuilding project when they had to start all over again. On May 16, 1289, so the chronicles tell us, a plumber – in medieval times, someone who worked with lead – and his two assistants went up onto the roof to make a few final repairs to the leads. Unfortunately, the plumber left a fire pan on the roof beams when he went down for his lunch, leaving his assistants to put out the fire. This they failed to do, and the whole roof went up in flames, followed by the building and all its contents.

Traces of the fire can still be found at the west end of the church, which is virtually all that they were able to save, and a new building arose from the ashes over the next hundred years. Plumbers had to be very careful: they were the only ones who needed to have fires burning close to where they were working, and at Ely Cathedral you can still see where a plumber used the hollow between two arches high up on the back of the west front as a makeshift chimney for his fire. Fortunately, nothing dreadful happened there.

Lincoln Cathedral. Lebendigger/Shutterstock

At Lincoln cathedral, we can see where the fire in the west front in the 12th century damaged the staircases because these acted as chimneys and spread the fire quickly up into the rest of the building. The building’s limestone turned pink in the extreme heat and it’s clear that the masons had to take down the more damaged parts of the west front to repair the stonework that had been closer to the fire and had cracked. One fascinating detail remains: the masons had to check how deeply the fire damage had penetrated the stone and the marks they cut into the stone are still there.

Detail of one of the Becket Miracle Windows in Canterbury Cathedral, 1180-1220, marking the shrine to St Thomas Becket. Platslee/Shutterstock

Canterbury Cathedral was struggling to cope with all the pilgrims drawn to the shrine of the murdered Thomas Becket and a fire of 1174 gave the monks the chance to build a fine new building to house his shrine.

The eyewitness account has details of the heroic monks rushing into the building to save all its treasures, and it’s even been suggested that this fire wasn’t an accident and was started by the monks themselves as it brought so many benefits in its wake. The master mason gave them a superb new building in the Gothic style and with all the funds pouring in, the monks were able to move back into their church within five years of the fire, although completing the building work took a little longer.

St Paul’s Cathedral, originally a medieval church, rebuilt after the Great Fire of London. George M Hiles/Shutterstock

For Sir Christopher Wren, the Great Fire of London in 1666 gave him the opportunity he’d been waiting for: to give London the cathedral it needed for the modern age. The medieval cathedral had been falling into disrepair for years and various attempts to patch it up had left it weakened and muddled in appearance. Wandering among the ruins after the fire, Wren was handed a piece of stone from a tomb monument with the word “Resurgam” – I will rise again – carved on it, and this encouraged him to press on with his plans for a whole new building. It took 50 years, but it gave us the St Paul’s Cathedral that we know today.

Coventry also rose from the ashes of despair after the firebombing of November 1940 in World War II. The cathedral had been built as one of the city’s great medieval churches and became the city’s cathedral in 1918. It was a fine late-medieval building with a huge timber roof, and this was no match for the fire bombs that rained down on it during Coventry’s blitz.

Burning timbers fell straight down into the building and caused a huge bonfire that cracked the slender stonework supports and brought them crashing down. By morning, the building was a devastated shell. Basil Spence, the architect of the new Coventry Cathedral in the 1950s, sensitively integrated the ruins into the design of his new building, where they stand as a memorial to the events of the 1940s.

Illustration of York Minster. Morphart Creation/Shutterstock

The 20th century saw a few serious fires. York Minster’s huge 1984 fire was believed to have been caused by either lightning or an electrical fault. York has been very unlucky over the years: it has had a succession of fires, and without stone vaults over the building, the minster has been very vulnerable. After the last restoration, York had the inspired idea of asking school students to design some of the carvings on the new transept vault.

The threat of fire in historic buildings is a constant one, and the people who look after the buildings, on a day-to-day basis, or in response to disaster, are unsung heroes who deserve gratitude and support. Notre Dame, Paris will be restored and made glorious once again – fires have always been a risk, and restorations have always been a part of church history.

Jenny Alexander, Associate Professor, University of Warwick

This article is republished from The Conversation under a Creative Commons license.

China’s ‘Silk Road urbanism’ is changing cities from London to Kampala – can locals keep control?

View of Kampala. Shutterstock

Jonathan Silver, University of Sheffield and Alan Wiig, University of Massachusetts Boston

A massive redevelopment of the old Royal Albert Dock in East London is transforming the derelict waterfront into a gleaming business district. The project, which started in June 2017, will create 325,000 square metres of prime office space – a “city within a city”, as it has been dubbed – for Asian finance and tech firms. Then, in 2018, authorities in Kampala, Uganda celebrated as a ferry on Lake Victoria unloaded goods from the Indian Ocean onto a rail service into the city. This transport hub was the final part of the Central Corridor project, aimed at connecting landlocked Uganda to Dar es Salaam and the Indian Ocean.

Both of these huge projects are part of the US$1 trillion global infrastructure investment that is China’s Belt and Road Initiative (BRI). China’s ambition to reshape the world economy has sparked massive infrastructure projects spanning all the way from Western Europe to East Africa, and beyond. The nation is engaging in what we, in our research, call “Silk Road urbanism” – reimagining the historic transcontinental trade route as a global project, to bring the cities of South Asia, East Africa, Europe and South America into the orbit of the Chinese economy.

By forging infrastructure within and between key cities, China is changing the everyday lives of millions across the world. The initiative has kicked off a new development race between the US and China, to connect the planet by financing large-scale infrastructure projects.

Silk Road urbanism

Amid this geopolitical competition, Silk Road urbanism will exert significant influence over how cities develop into the 21st century. As the transcontinental trade established by the ancient Silk Road once led to the rise of cities such as Herat (in modern-day Afghanistan) and Samarkand (Uzbekistan), so the BRI will bring new investment, technology, infrastructure and trade relations to certain cities around the globe.

The BRI is still in its early stages – and much remains to be understood about the impact it will have on the urban landscape. What is known, however, is that the project will transform the world system of cities on a scale not witnessed since the end of the Cold War.

Silk Road urbanism is highly selective in its deployment across urban space. It prioritises the far over the near and is orientated toward global trade and the connections and circulations of finance, materials, goods and knowledge. Because of this, the BRI should not only be considered in terms of its investment in infrastructure.

It will also have significance for city dwellers – and urban authorities must recognise the challenges of the BRI and navigate the need to secure investment for infrastructure while ensuring that citizens maintain their right to the city, and their power to shape their own future.

London calling

Developments in both London and Kampala highlight these challenges. In London, Chinese developer Advanced Business Park is rebuilding Royal Albert Dock – now named the Asian Business Port – on a site it acquired for £1 billion in 2013 in a much-criticised deal by former London mayor Boris Johnson. The development is projected to be worth £6 billion to the city’s economy by completion.

Formerly Royal Albert Dock, now Asian Business Port. Google Earth

But the development stands in sharp contrast with the surrounding East London communities, which still suffer poverty and deprivation. The challenge will be for authorities and developers to establish trusting relations through open dialogue with locals, in a context where large urban redevelopments such as the 2012 Olympic Park have historically brought few benefits.

The creation of a third financial district, alongside Canary Wharf and the City of London, may benefit the economy. But it remains to be seen if this project will provide opportunities for, and investment in, the surrounding neighbourhoods.

Kampala’s corridor

The Ugandan capital Kampala is part of the Central Corridor project to improve transport and infrastructure links across five countries: Burundi, the Democratic Republic of the Congo (DRC), Rwanda, Tanzania and Uganda. The project is financed through the government of Tanzania via a US$7.6 billion loan from China’s Exim Bank.

Under construction: the Chinese-funded Entebbe-Kampala Expressway. Dylan Patterson/Flickr, CC BY-SA

The growth of the new transport and cargo hub at Port Bell, on the outskirts of Kampala, with standardised technologies and facilities for international trade, is the crucial underlying component for Uganda’s Vision2040.

This national plan alone encompasses a further ten new cities, four international airports, national high speed rail and a multi-lane road network. But as these urban transformations unfold, residents already living precariously in Kampala have faced further uncertainty over their livelihoods, shelter and place in the city.

During fieldwork for our ongoing research into Silk Road urbanism in 2017, we witnessed the demolition of hundreds of informal homes and businesses in the popular Namuwongo district, as a zone 30 metres either side of a rehabilitated railway track, required for the Central Corridor, was cleared.

As Silk Road urbanism proceeds to reshape global infrastructure and city spaces, existing populations will experience displacement in ways that are likely to reinforce existing inequalities. It is vital that people are given democratic involvement in shaping the outcomes.

Jonathan Silver, Senior Research Fellow, University of Sheffield and Alan Wiig, Assistant Professor of Urban Planning and Community Development, University of Massachusetts Boston

This article is republished from The Conversation under a Creative Commons license.

Brexit with brie? Common Market 2.0 proposal explained – through the import and export of cheese

Stuart MacLennan, Coventry University

Amid the ongoing Brexit standoff, one proposal that has been gaining traction – and which MPs will now vote on in a series of indicative votes in parliament – is the cross-party plan for a “Common Market 2.0”.

Superficially, the plan resolves a number of the challenges posed by Brexit, including the thorny issue of the Irish border and the UK’s future trading relationship with the EU. But the plan – also known as Norway+ because it has similarities with the EU’s relationship with Norway – involves the UK compromising on a number of its current red lines, while at the same time requiring a fundamental revision of one of the EU’s existing free trade agreements.

One way to understand how a Common Market 2.0 might work – and how it would differ from other options on the table – is to look at one type of good that might move between countries. Say, cheese.

First, it’s important to establish the difference between a free trade agreement and a customs union. As a rule, tariffs are applied on the basis of where goods originate. The EU’s free trade agreement with Canada, for example, means that you can import Canadian Avonlea cheese into the EU free from tariffs. However, the EU’s lack of a free trade agreement with the US means that American Monterey Jack cheese is charged at €221.20/100kg. If you first export Monterey Jack to Canada and then from Canada to the EU, it will still be chargeable, as the goods originated in the US. Under a free trade agreement, checks on where goods originated – known as “rules of origin” checks – are still required.

A customs union is a more advanced form of trading relationship, where you agree not only to remove any tariffs on each other’s goods, but also to apply the same tariffs on goods originating from third countries. This means, for the purposes of the EU customs union, that Monterey Jack will be treated the same whether it is imported to Belgium or Bulgaria. Within a customs union it’s unnecessary to check from where goods originate when they cross a border as they will already have received the appropriate customs treatment.
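The same logic can be sketched in a few lines of code. The snippet below is purely illustrative, not real customs law: the only real figure is the €221.20/100kg rate quoted above, and the dictionary, function names and fallback rate are invented for the example. It shows why a free trade agreement still needs rules-of-origin checks while a customs union does not.

```python
# Illustrative sketch only: under a free trade agreement the duty follows where
# the goods ORIGINATE, so an origin check is still needed at the border; inside
# a customs union every member applies the same external tariff, so goods moving
# between members need no further check.

EU_CHEESE_TARIFF = {     # euros per 100kg; the US rate is the figure quoted above,
    "US": 221.20,        # everything else here is a placeholder for illustration
    "Canada": 0.0,       # tariff-free under the EU-Canada free trade agreement
}
FALLBACK_RATE = 221.20   # placeholder rate for origins with no agreement

def duty_under_fta(origin: str, quantity_100kg: float) -> float:
    """Free trade agreement: duty depends on the country of origin,
    even if the goods were routed through a partner country."""
    return EU_CHEESE_TARIFF.get(origin, FALLBACK_RATE) * quantity_100kg

def duty_inside_customs_union(quantity_100kg: float) -> float:
    """Customs union: goods crossing an internal border have already received
    the common external tariff, so there is no duty and no origin check."""
    return 0.0

# US-origin Monterey Jack routed via Canada still pays the US rate under an FTA:
print(duty_under_fta("US", 10))      # 2212.0 euros for 1,000kg
# Canadian-origin Avonlea enters tariff-free:
print(duty_under_fta("Canada", 10))  # 0.0
```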

Back to the 1990s

The Common Market 2.0 idea is an attempt to reverse engineer the previous 25 years of EU integration, reverting the UK’s participation in the EU to the position before the Maastricht Treaty was agreed in 1992.

Under the plan, the UK would rejoin the European Free Trade Association (EFTA), of which it was a founding member prior to joining the European Economic Community. The UK would also accede to the European Economic Area (EEA) agreement with the EU. This is a two-pillared agreement between the EU and three of the four EFTA members: Iceland, Liechtenstein and Norway, but not Switzerland. This is often known as the “Norway model”.

Under Common Market 2.0, the UK would be the only state outside of the EU to participate in both the EU customs union and the single market. The lack of a red line bisecting Ireland is the reason why a customs union is so attractive. Dr Stuart MacLennan

Such an approach would result in the UK adopting EU-EEA measures relating to the internal market, including the free movement of goods and services, and competition law. But the UK would no longer be subject to the direct jurisdiction of the Court of Justice of the EU, which would be replaced by the jurisdiction of the EFTA Court.

Under this approach, regulatory alignment is all but guaranteed, as standards would ultimately be agreed by the EU and EEA (of which the UK would be a member) – meaning that all cheese capable of being sold in the EU, be that French brie or Dutch edam, ought to be capable of being sold in the UK and vice versa.

What moves the Common Market 2.0 proposal beyond simply replicating the Norway model, however, is that it also involves the UK entering a customs union directly with the EU, thereby removing the need for rules of origin checks on the Irish border between Northern Ireland and the Republic of Ireland. Checks on cheese moving between Norway and Sweden are rare – but they do happen. By entering into a customs union with the EU such checks along the Northern Irish border would never be necessary.




The major stumbling block with Common Market 2.0, however, is that under the EFTA agreement it’s not currently possible for member states to enter into a customs union with other states – whether the EU or otherwise. So Norway cannot enter into a customs union directly with the EU, or the US, for example. If the UK were to seek this, it would require special treatment not only by the EU, but by EFTA as well – the political difficulties of which have been largely overlooked.

Free movement question

The Common Market 2.0 arrangement would also, controversially for many, involve the UK continuing with the free movement of persons. The key piece of legislation providing free movement rights for EU and EEA citizens – directive 2004/38 – was incorporated into EEA law in 2007.

One saving grace for the UK might be the joint declaration attached to that 2007 EEA decision that it cannot be the basis for the creation of political rights, and that the directive does not impinge upon immigration policy. This reflects the fact that the primary focus of EEA law is on economically active migrants, rather than EU citizens.



The Common Market 2.0 approach is therefore unlikely to be viable. Not only would it enrage the right wing of the Conservative Party, it would require agreement from the EU, the EEA and Switzerland. Given the difficulties the UK has had agreeing a deal with one trading bloc, trying to win over three – the EU, the EEA and EFTA – as well as a domestic audience looks near-impossible.

Stuart MacLennan, Senior Lecturer in Law, Coventry University

This article is republished from The Conversation under a Creative Commons license.

Algorithms have already taken over human decision making

Dionysios Demetis, University of Hull

I can still recall my surprise when a book by evolutionary biologist Peter Lawrence entitled “The making of a fly” came to be priced on Amazon at $23,698,655.93 (plus $3.99 shipping). While my colleagues around the world must have become rather depressed that an academic book could achieve such a feat, the steep price was actually the result of algorithms feeding off each other and spiralling out of control. It turns out, it wasn’t just sales staff being creative: algorithms were calling the shots.
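The mechanism is easy to reproduce. In the sketch below, two hypothetical repricing bots each set their price as a fixed multiple of the other’s most recent price; the multipliers are invented for illustration (the real sellers reportedly used similar undercut-and-markup rules), and the result is exponential growth.

```python
# Illustration only: two hypothetical repricing bots that key off each other's
# listed price. The multipliers and starting prices are invented for this sketch.

def reprice_a(competitor_price: float) -> float:
    # Bot A undercuts slightly to hold the "lowest price" slot
    return 0.9983 * competitor_price

def reprice_b(competitor_price: float) -> float:
    # Bot B prices above the competitor, perhaps trading on a better seller rating
    return 1.2706 * competitor_price

price_a, price_b = 35.00, 40.00   # plausible starting prices for an academic book
for day in range(60):             # one repricing round per "day"
    price_a = reprice_a(price_b)
    price_b = reprice_b(price_a)

print(f"After 60 rounds: A = ${price_a:,.2f}, B = ${price_b:,.2f}")
# Each round multiplies both prices by roughly 1.27, so they grow exponentially
# and end up in the tens of millions of dollars; no human ever set that price.
```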

This eye-catching example was spotted and corrected. But what if such algorithmic interference happens all the time, including in ways we don’t even notice? If our reality is becoming increasingly constructed by algorithms, where does this leave us humans?

Inspired by such examples, my colleague Prof Allen Lee and I recently set out to explore the deeper effects of algorithmic technology in a paper in the Journal of the Association for Information Systems. Our exploration led us to the conclusion that, over time, the roles of information technology and humans have been reversed. In the past, we humans used technology as a tool. Now, technology has advanced to the point where it is using and even controlling us.

We humans are not merely cut off from the decisions that machines are making for us but deeply affected by them in unpredictable ways. Instead of being central to the system of decisions that affects us, we are cast out into its environment. We have progressively restricted our own decision-making capacity and allowed algorithms to take over. We have become artificial humans, or human artefacts, that are created, shaped and used by the technology.

Examples abound. In law, legal analysts are gradually being replaced by artificial intelligence, meaning the successful defence or prosecution of a case can rely partly on algorithms. Software has even been allowed to predict future criminals, ultimately controlling human freedom by shaping how parole is denied or granted to prisoners. In this way, the minds of judges are being shaped by decision-making mechanisms they cannot understand because of how complex the process is and how much data it involves.

In the job market, excessive reliance on technology has led some of the world’s biggest companies to filter CVs through software, meaning human recruiters will never even glance at some potential candidates’ details. Not only does this put people’s livelihoods at the mercy of machines, it can also build in hiring biases that the company had no desire to implement, as happened with Amazon.

In the news media, what’s known as automated sentiment analysis gauges positive and negative opinions about companies from different web sources. These scores are in turn used by trading algorithms to make automated financial decisions, without humans having to actually read the news.

Unintended consequences

In fact, algorithms operating without human intervention now play a significant role in financial markets. For example, 85% of all trading in the foreign exchange markets is conducted by algorithms alone. The growing algorithmic arms race to develop ever more complex systems to compete in these markets means huge sums of money are being allocated according to the decisions of machines.

On a small scale, the people and companies that create these algorithms are able to affect what they do and how they do it. But because much of artificial intelligence involves programming software to figure out how to complete a task by itself, we often don’t know exactly what is behind the decision-making. As with all technology, this can lead to unintended consequences that may go far beyond anything the designers ever envisaged.

Take the 2010 “Flash Crash” of the Dow Jones Industrial Average Index. The action of algorithms helped create the index’s single biggest decline in its history, wiping nearly 9% off its value in minutes (although it regained most of this by the end of the day). A five-month investigation could only suggest what sparked the downturn (and various other theories have been proposed).

But the algorithms that amplified the initial problems didn’t make a mistake. There wasn’t a bug in the programming. The behaviour emerged from the interaction of millions of algorithmic decisions playing off each other in unpredictable ways, following their own logic in a way that created a downward spiral for the market.
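A toy simulation makes the point. It is in no sense a model of the actual 2010 event, and every number in it is invented, but it shows how a spiral can emerge from simple rules: each agent merely trims its position when the price has just fallen, yet collectively those sales feed the very fall they are reacting to.

```python
# Toy illustration of an emergent downward spiral. Each agent follows an
# individually reasonable momentum rule ("sell a little when the market is
# falling"), but the combined selling pressure amplifies the fall the agents
# are reacting to. All numbers are invented.

import random

N_AGENTS = 1_000
PRICE_IMPACT = 0.000004      # fractional price move per unit of net selling
price = 100.0
history = [price]

for step in range(200):
    falling = len(history) > 1 and history[-1] < history[-2]
    net_sold = 0.0
    for _ in range(N_AGENTS):
        if falling:
            net_sold += random.uniform(0.5, 1.5)    # momentum selling
        else:
            net_sold += random.uniform(-0.5, 0.5)   # ordinary two-way trading
    price *= 1 - PRICE_IMPACT * net_sold
    history.append(price)

print(f"Start: 100.00, end after 200 steps: {price:.2f}")
# Once one small dip appears, every agent's rule says "sell", and the market
# ratchets downwards even though no single algorithm has made a mistake.
```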

The conditions that made this possible occurred because, over the years, the people running the trading system had come to see human decisions as an obstacle to market efficiency. Back in 1987 when the US stock market fell by 22.61%, some Wall Street brokers simply stopped picking up their phones to avoid receiving their customers’ orders to sell stocks. This started a process that, as author Michael Lewis put it in his book Flash Boys, “has ended with computers entirely replacing the people”.

The financial world has invested millions in superfast cables and microwave communications to shave just milliseconds off the rate at which algorithms can transmit their instructions. When speed is so important, a human being that requires a massive 215 milliseconds to click a button is almost completely redundant. Our only remaining purpose is to reconfigure the algorithms each time the system of technological decisions fails.

As new boundaries are carved between humans and technology, we need to think carefully about where our extreme reliance on software is taking us. As human decisions are substituted by algorithmic ones, and we become tools whose lives are shaped by machines and their unintended consequences, we are setting ourselves up for technological domination. We need to decide, while we still can, what this means for us both as individuals and as a society.

Dionysios Demetis, Lecturer in Management Systems, University of Hull

This article is republished from The Conversation under a Creative Commons license.

MAG – Magazine N.2

MAG is a FREE publication produced by AEG Corporation Limited UK – International Advice.

MAG can be downloaded for free as a PDF.

Download here (2MB)

Facebook’s plan to protect the European elections comes up short

William Dance, Lancaster University

Intentionally false news stories were shared more than 35m times during the 2016 US presidential election, with Facebook playing a significant role in their spread. Shortly after, the Cambridge Analytica scandal revealed that 50m Facebook profiles had been harvested without authorisation and used to target political ads and fake news for the election and later during the UK’s 2016 Brexit referendum.

Though the social network admitted it had been slow to react to the issue, it developed tools for the 2018 US midterm elections that enabled Facebook users to see who was behind the political ads they were shown. Facebook defines ads as any form of financially sponsored content. This can be traditional product adverts or fake news articles that are targeted at certain demographics for maximum impact.

Now the focus is shifting to the 2019 European parliament elections, which will take place from May 23, and the company has introduced a public record of all political ads and sweeping new transparency rules designed to stop them being placed anonymously. This move follows Facebook’s expansion of its fact-checking operations, for example by teaming up with British fact-checking charity FullFact.

Facebook told us that it has taken an industry-leading position on political ad transparency in the UK, with new tools that go beyond what the law currently requires and that it has invested significantly to prevent the spread of disinformation and bolster high-quality journalism and news literacy. The transparency tools show exactly which page is running ads, and all the ads that they are running. It then houses those ads in its “ad library” for seven years. It claims it doesn’t want misleading content on its site and is cracking down on it using a combination of technology and human review.

While these measures will go some way towards addressing the problem, several flaws have already emerged. And it remains difficult to see how Facebook can tackle fake news in particular with its existing measures.

In 2018, journalists at Business Insider successfully placed fake ads they listed as paid for by the now-defunct company Cambridge Analytica. It is this kind of fraud that Facebook is aiming to stamp out with its news transparency rules, which require political advertisers to prove their identity. However, it’s worth noting that none of Business Insider’s “test adverts” appear to be listed in Facebook’s new ad library, raising questions about its effectiveness as a full public record.

The problem is that listing which person or organisation paid the bill for an ad isn’t the same as revealing the ultimate source of its funding. For example, it was recently reported that Britain’s biggest political spender on Facebook was Britain’s Future, a group that has spent almost £350,000 on ads. The group can be traced back to a single individual: 30-year-old freelance writer Tim Dawson. But exactly who funds the group is unclear.

While the group does allow donations, it is not a registered company, nor does it appear in the database of the UK’s Electoral Commission or the Information Commissioner. This highlights a key flaw in the UK’s political advertising regime that isn’t addressed by Facebook’s measures, and shows that transparency at the ad-buying level isn’t enough to reveal potential improper influence.

The new measures also rely on advertisers classifying their ads as political, or using overtly political language. This means advertisers could still send coded messages that Facebook’s algorithms may not detect.

Facebook recently had more success when it identified and removed its first UK-based fake news network, which comprised 137 groups spreading “divisive comments on both sides of the political debate in the UK”. But the discovery came as part of an investigation into hate speech towards the home secretary, Sajid Javid. This suggests that Facebook’s dedicated methods for tackling fake news aren’t working as effectively as they could.

Facebook has had plenty of time to get to grips with the modern issue of fake news being used for political purposes. As early as 2008, Russia began disseminating online misinformation to influence proceedings in Ukraine, which became a testing ground for the Kremlin’s tactics of cyberwarfare and online disinformation. Isolated fake news stories then began to surface in the US in the early 2010s, targeting politicians and divisive topics such as gun control. These then evolved into sophisticated fake news networks operating at a global level.

But the way Facebook works means it has played a key role in helping fake news become so powerful and effective. The burden of proof for a news story has been lowered to one aspect: popularity. With enough likes, shares and comments – no matter whether they come from real users, click farms or bots – a story gains legitimacy no matter the source.

Safeguarding democracy

As a result, some countries have already decided that Facebook’s self-regulation isn’t enough. In 2018, in a bid to “safeguard democracy”, the French president, Emmanuel Macron, introduced a controversial law banning online fake news during elections that gives judges the power to remove and obtain information about who published the content.

Meanwhile, Germany has introduced fines of up to €50m on social networks that host illegal content, including fake news and hate speech. Incidentally, while Germans make up only 2% of Facebook users, they now comprise more than 15% of Facebook’s global moderator workforce. In a similar move in late December 2018, Irish lawmakers introduced a bill to criminalise political adverts on Facebook and Twitter that contain intentionally false information.

The real-life impact these policies have is unclear. Fake news still appears on Facebook in these countries, while the laws give politicians the ability to restrict freedom of speech and the press, something that has sparked a mass of criticism in both Germany and France.

Ultimately, there remains a considerable mismatch between Facebook’s promises to make protecting elections a top priority, and its ability to actually do the job. If unresolved, it will leave the European parliament and many other democratic bodies vulnerable to vast and damaging attempts to influence them.

William Dance, Associate Lecturer in Linguistics, Lancaster University

This article is republished from The Conversation under a Creative Commons license.

The view from Google: privacy, GDPR and Ireland as a one stop shop

In his address, Mr Enright, Google’s Chief Privacy Officer, shares his perspectives on Google’s experience of GDPR, almost one year on. He discusses lessons learned along the way, how Google approaches privacy and data protection, and the importance of Ireland as a One Stop Shop.

About the Speaker:

Mr Enright was appointed as Google’s Chief Privacy Officer last year. He joined Google in March 2011, with nearly 20 years of experience in creating and implementing programs for privacy, data stewardship and information risk management. Prior to joining Google, Mr Enright served as the most senior privacy executive at two Fortune 500 online and offline retail enterprises.

The IIEA is Ireland’s leading European & International Affairs think tank. We are an independent, not-for-profit organisation with charitable status.

Our role is to identify key European and international policy trends, which will inform the work of Ireland’s decision makers and business leaders, and enrich the public debate on Ireland’s role in the EU and on the global stage.

Social media doesn’t need new regulations to make the internet safer – GDPR can do the job

Eerke Boiten, De Montfort University

From concerns about data sharing to the hosting of harmful content, every week seems to bring more clamour for new laws to regulate the technology giants and make the internet “safer”. But what if our existing data protection laws, at least in Europe, could achieve most of the job?

Germany has already started introducing new legislation, enacting a law in 2018 that forces social media firms to remove hateful content. In the UK, the government has proposed a code of practice for social media companies to tackle “abusive content”. And health secretary Matt Hancock has now demanded laws regulating the removal of such content. Meanwhile, deputy opposition leader Tom Watson has suggested a legal duty of care for technology companies, in line with recent proposals by Carnegie UK Trust.

What’s notable about many of these proposals is how much they reference and recall the EU’s new General Data Protection Regulation (GDPR). Hancock, who led the UK’s introduction of this legislation (though he has also been accused of a limited understanding of it), referred to the control it gives people over the use of their data. Watson recalled the level of fines imposed by GDPR, hinting that similar penalties might apply for those who breach his proposed duty of care.

The Carnegie proposals, developed by former civil servant William Perrin and academic Lorna Woods, were inspired by GDPR’s approach of working out what protective measures are needed on a case-by-case basis. When a process involving data is likely to pose a high risk to people’s rights and freedoms, whoever’s in charge of the process must carry out what’s known as a data protection impact assessment (DPIA). This involves assessing the risks and working out what can be done to mitigate them.

The important thing to note here is that, while earlier data protection laws largely focused on people’s privacy, GDPR is concerned with their broader rights and freedoms. This includes things related to “social protection, public health and humanitarian purposes”. It also applies to anyone whose rights are threatened, not just the people whose data is being processed.

Existing rights and freedoms

Many of the problems we worry social media is causing can be seen as infringements of rights and freedoms. And that means social media firms could arguably be forced to address these issues by completing data protection impact assessments under the existing GDPR legislation. This includes taking measures to mitigate the risks, such as making the data more secure.

For example, there is evidence that social media may increase the risk of suicide among vulnerable people, and that means social media may pose a risk to those people’s right to life, the first right protected by the European Convention on Human Rights (ECHR). If social networks use personal data to show people content that could increase this risk to their lives then, under GDPR, the network should reconsider its impact assessment and take appropriate steps to mitigate the risk.

GDPR provides an existing remedy. Shinonome Production/Shutterstock

The Cambridge Analytica scandal, where Facebook was found to have failed to protect data that was later used to target users in political campaigns, can also be viewed in terms of risk to rights. For example, Protocol 1, Article 3 of the ECHR protects the right to “free elections”.

As part of its investigation into the scandal, the UK’s Information Commissioner’s Office has asked political parties to carry out impact assessments, based on the concern that profiling people by their political views could violate their rights. But given Facebook’s role in processing the data involved, the company could arguably be asked to do the same to see what risks to free elections its practices pose.

Think about what you might break

From Facebook’s ongoing history of surprise and apology, you might think that the adverse effects of any new feature in social media are entirely unpredictable. But given that the firm’s motto was once “move fast and break things”, it doesn’t seem too much of a stretch to ask Facebook and the other tech giants to try to anticipate the problems their attempts to break things might cause.

Asking “what could possibly go wrong?” should prompt serious answers instead of being a flippant expression of optimism. It should involve looking not just at how technology is intended to work, but also how it could be abused, how it could go too far, and what might happen if it falls victim to a security breach. This is exactly what the social media companies have been doing too little of.

I would argue that the existing provisions of GDPR, if properly enforced, should be enough to compel tech firms to take action to address much of what’s wrong with the current situation. Using the existing, carefully planned and highly praised legislation is better and more efficient than trying to design, enact and enforce new laws that are likely to have their own problems or create the potential for abuse.

Applying impact assessments in this way would share the risk-based approach of the proposed duty of care for technology firms. In practice, it may not be too different, but it would avoid some of the potential problems of new legislation, which are many and complex. Using the law in this way would send a clear message: social media companies should own the internet safety risks they help create, and manage them in coordination with regulators.

Eerke Boiten, Professor of Cybersecurity, School of Computer Science and Informatics, De Montfort University

This article is republished from The Conversation under a Creative Commons license.