Notre Dame: a history of medieval cathedrals and fire

[Image: Zabotnova Inna/Shutterstock]

Jenny Alexander, University of Warwick

Many great churches and cathedrals have suffered catastrophic fires over their long histories, and medieval chronicles are full of stories of the devastation and ruin that resulted – but they also tell of how the buildings were reconstructed and made better than ever.

The devastating fire that destroyed the roofs and spire of Notre Dame in Paris demonstrated the vulnerabilities of medieval cathedrals and great churches, but also revealed the skills of their master masons. The lead-covered wooden roof structure burned so fast because the fire was able to take hold under the lead and increase in intensity before it was visible from the outside, and it then spread easily to all the other sections of the roof.

Notre Dame was saved from total destruction because the medieval builders gave it a stone vault over all the main spaces, and also over the tops of the aisles, which meant that the burning timbers and molten lead couldn’t break through easily.

But French churches and cathedrals are more at risk than ones in Britain because they don’t usually have a stone tower in the centre to act as a firebreak – this is what saved York Minster in 1984, when the transept roof caught fire but the tower stopped the flames spreading further.

Turning to Britain, medieval chronicles provide fascinating reading for historians, as they contain eyewitness accounts of the unfolding disasters when fires occurred in the past. At Croyland Abbey in Lincolnshire, the monk who discovered the fire in the 12th century rushed to the cloister to wake the sleeping monks in their dormitory, but was burned by the red-hot lead falling from the roof and had to be taken to the infirmary for treatment.

Swift action by the other monks saved the building, and the next abbot restored it to its former glory, although the loss of precious manuscripts and documents “caused them much sorrow”.

[Image: Master masons were highly skilled. Sergio Foto/Shutterstock]

The canons of the great priory church of Gisborough in north-east England were very unlucky: the masons had just completed a very splendid, and expensive, rebuilding project when they had to start all over again. On May 16, 1289, so the chronicles tell us, a plumber – in medieval times, someone who worked with lead – and his two assistants went up onto the roof to make a few final repairs to the leads. Unfortunately, the plumber left a fire pan on the roof beams when he went down for his lunch, leaving his assistants to put out the fire. This they failed to do, and the whole roof went up in flames, followed by the building and all its contents.

Traces of the fire can still be found at the west end of the church, which is virtually all that they were able to save, and a new building arose from the ashes over the next hundred years. Plumbers had to be very careful: they were the only ones who needed to have fires burning close to where they were working, and at Ely Cathedral you can still see where a plumber used the hollow between two arches high up on the back of the west front as a makeshift chimney for his fire. Fortunately, nothing dreadful happened there.

[Image: Lincoln Cathedral. Lebendigger/Shutterstock]

At Lincoln Cathedral, we can see where the 12th-century fire in the west front damaged the staircases, because these acted as chimneys and spread the fire quickly up into the rest of the building. The building’s limestone turned pink in the extreme heat, and it’s clear that the masons had to take down the more damaged parts of the west front to repair the stonework that had been closer to the fire and had cracked. One fascinating detail remains: the masons had to check how deeply the fire damage had penetrated the stone, and the marks they cut into it to find out are still there.

[Image: Detail of one of the Becket Miracle Windows in Canterbury Cathedral, 1180-1220, marking the shrine to St Thomas Becket. Platslee/Shutterstock]

Canterbury Cathedral was struggling to cope with all the pilgrims drawn to the shrine of the murdered Thomas Becket, and a fire in 1174 gave the monks the chance to build a fine new building to house his shrine.

The eyewitness account has details of the heroic monks rushing into the building to save all its treasures, and it’s even been suggested that this fire wasn’t an accident and was started by the monks themselves as it brought so many benefits in its wake. The master mason gave them a superb new building in the Gothic style and with all the funds pouring in, the monks were able to move back into their church within five years of the fire, although completing the building work took a little longer.

[Image: St Paul’s Cathedral, originally a medieval church, rebuilt after the Great Fire of London. George M Hiles/Shutterstock]

The Great Fire of London in 1666 gave Sir Christopher Wren the opportunity he’d been waiting for: to give London the cathedral it needed for the modern age. The medieval cathedral had been falling into disrepair for years and various attempts to patch it up had left it weakened and muddled in appearance. Wandering among the ruins after the fire, Wren was handed a piece of stone from a tomb monument with the word “Resurgam” – I will rise again – carved on it, and this encouraged him to press on with his plans for a whole new building. It took 50 years, but it gave us the St Paul’s Cathedral that we know today.

Coventry also rose from the ashes of despair after the firebombing of November 1940 in World War II. The cathedral had been built as one of the city’s great medieval churches and became the city’s cathedral in 1918. It was a fine late-medieval building with a huge timber roof, and this was no match for the fire bombs that rained down on it during Coventry’s blitz.

Burning timbers fell straight down into the building and caused a huge bonfire that cracked the slender stonework supports and brought them crashing down. By morning, the building was a devastated shell. Basil Spence, the architect of the new Coventry Cathedral in the 1950s, sensitively integrated the ruins into the design of his new building, where they stand as a memorial to the events of the 1940s.

[Image: Illustration of York Minster. Morphart Creation/Shutterstock]

The 20th century saw a few serious fires of its own. York Minster’s huge 1984 fire was believed to have been caused by either lightning or an electrical fault. York has been very unlucky over the years: it has had a succession of fires, and without stone vaults over the building, the minster has been very vulnerable. After the last restoration, York had the inspired idea of asking school students to design some of the carvings on the new transept vault.

The threat of fire in historic buildings is a constant one, and the people who look after the buildings, whether on a day-to-day basis or in response to disaster, are unsung heroes who deserve gratitude and support. Notre Dame in Paris will be restored and made glorious once again – fires have always been a risk, and restorations have always been a part of church history.

Jenny Alexander, Associate Professor, University of Warwick

This article is republished from The Conversation under a Creative Commons license.

China’s ‘Silk Road urbanism’ is changing cities from London to Kampala – can locals keep control?

[Image: View of Kampala. Shutterstock]

Jonathan Silver, University of Sheffield and Alan Wiig, University of Massachusetts Boston

A massive redevelopment of the old Royal Albert Dock in East London is transforming the derelict waterfront into a gleaming business district. The project, which started in June 2017, will create 325,000 square metres of prime office space – a “city within a city”, as it has been dubbed – for Asian finance and tech firms. Then, in 2018, authorities in Kampala, Uganda, celebrated as goods from the Indian Ocean were unloaded from a Lake Victoria ferry onto a rail service into the city. This transport hub was the final part of the Central Corridor project, aimed at connecting landlocked Uganda to Dar es Salaam and the Indian Ocean.

Both of these huge projects are part of the US$1 trillion global infrastructure investment that is China’s Belt and Road Initiative (BRI). China’s ambition to reshape the world economy has sparked massive infrastructure projects spanning all the way from Western Europe to East Africa, and beyond. The nation is engaging in what we, in our research, call “Silk Road urbanism” – reimagining the historic transcontinental trade route as a global project, to bring the cities of South Asia, East Africa, Europe and South America into the orbit of the Chinese economy.

By forging infrastructure within and between key cities, China is changing the everyday lives of millions across the world. The initiative has kicked off a new development race between the US and China, to connect the planet by financing large-scale infrastructure projects.

Silk Road urbanism

Amid this geopolitical competition, Silk Road urbanism will exert significant influence over how cities develop in the 21st century. Just as the transcontinental trade established by the ancient Silk Road once led to the rise of cities such as Herat (in modern-day Afghanistan) and Samarkand (Uzbekistan), so the BRI will bring new investment, technology, infrastructure and trade relations to certain cities around the globe.

The BRI is still in its early stages – and much remains to be understood about the impact it will have on the urban landscape. What is known, however, is that the project will transform the world system of cities on a scale not witnessed since the end of the Cold War.

Silk Road urbanism is highly selective in its deployment across urban space. It prioritises the far over the near and is orientated toward global trade and the connections and circulations of finance, materials, goods and knowledge. Because of this, the BRI should not only be considered in terms of its investment in infrastructure.

It will also have significance for city dwellers – and urban authorities must recognise the challenges of the BRI and navigate the need to secure investment for infrastructure while ensuring that citizens maintain their right to the city, and their power to shape their own future.

London calling

Developments in both London and Kampala highlight these challenges. In London, Chinese developer Advanced Business Park is rebuilding Royal Albert Dock – now named the Asian Business Port – on a site it acquired for £1 billion in 2013 in a much-criticised deal by former London mayor Boris Johnson. The development is projected to be worth £6 billion to the city’s economy by completion.

[Image: Formerly Royal Albert Dock, now Asian Business Port. Google Earth]

But the development stands in sharp contrast with the surrounding East London communities, which still suffer poverty and deprivation. The challenge will be for authorities and developers to establish trusting relations through open dialogue with locals, in a context where large urban redevelopments such as the 2012 Olympic Park have historically brought few benefits.

The creation of a third financial district, alongside Canary Wharf and the City of London, may benefit the economy. But it remains to be seen if this project will provide opportunities for, and investment in, the surrounding neighbourhoods.

Kampala’s corridor

The Ugandan capital Kampala is part of the Central Corridor project to improve transport and infrastructure links across five countries: Burundi, the Democratic Republic of the Congo (DRC), Rwanda, Tanzania and Uganda. The project is financed through the government of Tanzania via a US$7.6 billion loan from the Chinese Exim Bank.

[Image: Under construction: the Chinese-funded Entebbe-Kampala Expressway. Dylan Patterson/Flickr, CC BY-SA]

The growth of the new transport and cargo hub at Port Bell, on the outskirts of Kampala, with standardised technologies and facilities for international trade, is the crucial underlying component of Uganda’s Vision 2040.

This national plan alone encompasses a further ten new cities, four international airports, national high-speed rail and a multi-lane road network. But as these urban transformations unfold, residents already living precariously in Kampala have faced further uncertainty over their livelihoods, shelter and place in the city.

During fieldwork for our ongoing research into Silk Road urbanism in 2017, we witnessed the demolition of hundreds of informal homes and businesses in the popular Namuwongo district, as a zone 30 metres either side of a rehabilitated railway track, required for the Central Corridor, was cleared.

As Silk Road urbanism proceeds to reshape global infrastructure and city spaces, existing populations will experience displacement in ways that are likely to reinforce existing inequalities. It is vital people are given democratic involvement in shaping the outcomes.

Jonathan Silver, Senior Research Fellow, University of Sheffield and Alan Wiig, Assistant Professor of Urban Planning and Community Development, University of Massachusetts Boston

This article is republished from The Conversation under a Creative Commons license.

Brexit with brie? Common Market 2.0 proposal explained – through the import and export of cheese

Stuart MacLennan, Coventry University

Amid the ongoing Brexit standoff, one proposal that has been gaining traction, and which MPs will now vote on in a series of indicative votes in parliament, is the cross-party plan for a “Common Market 2.0”.

Superficially, the plan resolves a number of the challenges posed by Brexit, including the thorny issue of the Irish border and the UK’s future trading relationship with the EU. But the plan – also known as Norway+ because it has similarities with the EU’s relationship with Norway – involves the UK compromising on a number of its current red lines, while at the same time requiring a fundamental revision of one of the EU’s existing free trade agreements.

One way to understand how a Common Market 2.0 might work – and how it would differ from other options on the table – is to look at one type of good that might move between countries. Say, cheese.

First, it’s important to establish the difference between a free trade agreement and a customs union. As a rule, tariffs are applied on the basis of where goods originate. The EU’s free trade agreement with Canada, for example, means that you can import Canadian Avonlea cheese into the EU free from tariffs. However, the EU’s lack of a free trade agreement with the US means that American Monterey Jack cheese is charged at €221.20/100kg. If you first export Monterey Jack to Canada, and then from Canada to the EU, it will still be chargeable, as the goods originated in the US. Under a free trade agreement, checks on where goods originated – known as “rules of origin” checks – are still required.

A customs union is a more advanced form of trading relationship, where you agree not only to remove any tariffs on each other’s goods, but also to apply the same tariffs on goods originating from third countries. This means, for the purposes of the EU customs union, that Monterey Jack will be treated the same whether it is imported to Belgium or Bulgaria. Within a customs union it’s unnecessary to check from where goods originate when they cross a border as they will already have received the appropriate customs treatment.
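The distinction is easy to picture as decision logic. What follows is a minimal Python sketch of the two regimes: the Monterey Jack rate is taken from the example above, but the country groupings, function names and the duty-paid flag are illustrative assumptions, not a model of the actual EU customs code.

```python
# Illustrative sketch: tariff treatment under a free trade agreement
# versus a customs union. Apart from the cheese tariff cited above
# (EUR 221.20 per 100kg), all names and groupings are assumptions.

EU_CUSTOMS_UNION = {"Belgium", "Bulgaria", "France", "Ireland"}  # sample members
EU_FTA_PARTNERS = {"Canada"}  # e.g. the EU-Canada free trade agreement
CHEESE_TARIFF_PER_100KG = 221.20  # EUR, third-country rate for cheese


def tariff_under_fta(origin: str, weight_kg: float) -> float:
    """Free trade agreement: a rules-of-origin check sets the rate.

    The tariff follows where the goods ORIGINATE, not where they were
    last shipped from, so US cheese routed via Canada still pays it.
    """
    if origin in EU_FTA_PARTNERS:
        return 0.0
    return CHEESE_TARIFF_PER_100KG * weight_kg / 100


def tariff_inside_customs_union(member: str, weight_kg: float,
                                duty_already_paid: bool) -> float:
    """Customs union: a common external tariff is charged once, on entry.

    After that, goods circulate between members with no origin checks,
    so Monterey Jack is treated the same in Belgium or Bulgaria.
    """
    assert member in EU_CUSTOMS_UNION
    if duty_already_paid:
        return 0.0
    return CHEESE_TARIFF_PER_100KG * weight_kg / 100


# US cheese shipped via Canada still owes the tariff under the FTA:
print(tariff_under_fta(origin="US", weight_kg=100))  # 221.2
# Once duty is paid at any entry point, goods move freely inside the union:
print(tariff_inside_customs_union("Bulgaria", 100, duty_already_paid=True))  # 0.0
```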

Back to the 1990s

The Common Market 2.0 idea is an attempt to reverse engineer the previous 25 years of EU integration, reverting the UK’s participation in the EU to the position before the Maastricht Treaty was agreed in 1992.

Under the plan, the UK would rejoin the European Free Trade Association (EFTA), of which it was a founding member prior to joining the European Economic Community. The UK would also accede to the European Economic Area (EEA) agreement with the EU. This is a two-pillared agreement between the EU and three of the four EFTA members: Iceland, Liechtenstein and Norway, but not Switzerland. This is often known as the “Norway model”.

[Image: Under Common Market 2.0, the UK would be the only state outside of the EU to participate in both the EU customs union and the single market. The lack of a red line bisecting Ireland is the reason why a customs union is so attractive. Dr Stuart MacLennan]

Such an approach would result in the UK adopting EU-EEA measures relating to the internal market, including the free movement of goods and services, and competition law. But the UK would no longer be subject to the direct jurisdiction of the Court of Justice of the EU, which would be replaced by the jurisdiction of the EFTA Court.

Under this approach, regulatory alignment is all but guaranteed, as standards would ultimately be agreed by the EU and EEA (of which the UK would be a member) – meaning that all cheese capable of being sold in the EU, be that French brie or Dutch edam, ought to be capable of being sold in the UK and vice versa.

What moves the Common Market 2.0 proposal beyond simply replicating the Norway model, however, is that it also involves the UK entering a customs union directly with the EU, thereby removing the need for rules of origin checks on the Irish border between Northern Ireland and the Republic of Ireland. Checks on cheese moving between Norway and Sweden are rare – but they do happen. By entering into a customs union with the EU such checks along the Northern Irish border would never be necessary.




Read more: Brexit: why was the Irish border ‘backstop’ so crucial to Brexit deal defeat?

The major stumbling block with Common Market 2.0, however, is that under the EFTA agreement it’s not currently possible for member states to enter into a customs union with other states – whether the EU or otherwise. So Norway cannot enter into a customs union directly with the EU, or the US, for example. If the UK were to seek this, it would require special treatment not only by the EU, but by EFTA as well – the political difficulties of which have been largely overlooked.

Free movement question

The Common Market 2.0 arrangement would also, controversially for many, involve the UK continuing with the free movement of persons. The key piece of legislation providing free movement rights for EU and EEA citizens – directive 2004/38 – was incorporated into EEA law in 2007.

One saving grace for the UK might be the joint declaration attached to that 2007 EEA decision that it cannot be the basis for the creation of political rights, and that the directive does not impinge upon immigration policy. This reflects the fact that the primary focus of EEA law is on economically active migrants, rather than EU citizens.



Read more: Brexit: a Norwegian view on the Norway-plus model and why it wouldn’t be easy for the UK

The Common Market 2.0 approach is therefore unlikely to be viable. Not only would it enrage the right wing of the Conservative Party, it would require agreement from the EU, the EEA and Switzerland. Given the difficulties the UK has had agreeing a deal with one trading bloc, trying to win over three – the EU, the EEA and EFTA – as well as a domestic audience looks near-impossible.

Stuart MacLennan, Senior Lecturer in Law, Coventry University

This article is republished from The Conversation under a Creative Commons license.

Algorithms have already taken over human decision making

Dionysios Demetis, University of Hull

I can still recall my surprise when a book by evolutionary biologist Peter Lawrence entitled “The making of a fly” came to be priced on Amazon at $23,698,655.93 (plus $3.99 shipping). While my colleagues around the world must have become rather depressed that an academic book could achieve such a feat, the steep price was actually the result of algorithms feeding off each other and spiralling out of control. It turns out, it wasn’t just sales staff being creative: algorithms were calling the shots.
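The mechanics of that spiral are easy to reconstruct. Below is a hypothetical Python simulation: the two multipliers match those reported in contemporary analyses of the incident (one seller slightly undercutting its rival, the other pricing itself a step above), while the starting prices and the one-reprice-per-cycle loop are my own illustrative assumptions.

```python
# Hypothetical reconstruction of the Amazon book-pricing spiral.
# The multipliers (0.9983 and 1.270589) were reported at the time;
# starting prices and the repricing cadence are assumptions.

seller_a = 30.00  # undercutter: always prices just below its rival
seller_b = 35.00  # premium seller: always prices well above its rival

cycles = 0
while seller_b < 23_698_655.93:
    # Each bot reprices once per cycle, reacting only to the other.
    seller_a = round(0.9983 * seller_b, 2)
    seller_b = round(1.270589 * seller_a, 2)
    cycles += 1

# Because 0.9983 * 1.270589 > 1, every cycle lifts both prices by ~27%.
print(f"After {cycles} repricing cycles, the book costs ${seller_b:,.2f}")
```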

This eye-catching example was spotted and corrected. But what if such algorithmic interference happens all the time, including in ways we don’t even notice? If our reality is becoming increasingly constructed by algorithms, where does this leave us humans?

Inspired by such examples, my colleague Prof Allen Lee and I recently set out to explore the deeper effects of algorithmic technology in a paper in the Journal of the Association for Information Systems. Our exploration led us to the conclusion that, over time, the roles of information technology and humans have been reversed. In the past, we humans used technology as a tool. Now, technology has advanced to the point where it is using and even controlling us.

We humans are not merely cut off from the decisions that machines are making for us but deeply affected by them in unpredictable ways. Instead of being central to the system of decisions that affects us, we are cast out into its environment. We have progressively restricted our own decision-making capacity and allowed algorithms to take over. We have become artificial humans, or human artefacts, that are created, shaped and used by the technology.

Examples abound. In law, legal analysts are gradually being replaced by artificial intelligence, meaning the successful defence or prosecution of a case can rely partly on algorithms. Software has even been allowed to predict future criminals, ultimately controlling human freedom by shaping how parole is denied or granted to prisoners. In this way, the minds of judges are being shaped by decision-making mechanisms they cannot understand because of how complex the process is and how much data it involves.

In the job market, excessive reliance on technology has led some of the world’s biggest companies to filter CVs through software, meaning human recruiters will never even glance at some potential candidates’ details. Not only does this put people’s livelihoods at the mercy of machines, it can also build in hiring biases that the company had no desire to implement, as happened with Amazon.

In news, what’s known as automated sentiment analysis analyses positive and negative opinions about companies based on different web sources. In turn, these are being used by trading algorithms that make automated financial decisions, without humans having to actually read the news.
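As a rough picture of that pipeline, here is a deliberately simplified Python sketch: a toy lexicon-based scorer feeding a trade decision. Real systems are vastly more sophisticated, and every word list, headline and threshold here is an invented assumption.

```python
# Toy sketch of a news-sentiment-to-trading pipeline. The lexicon,
# headlines and decision threshold are invented for illustration;
# production systems are far more sophisticated.

POSITIVE = {"beats", "record", "growth", "upgrade", "profit"}
NEGATIVE = {"misses", "lawsuit", "recall", "downgrade", "loss"}


def sentiment_score(headline: str) -> int:
    """Count positive minus negative lexicon hits in a headline."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)


def trade_decision(headlines: list) -> str:
    """Aggregate sentiment across sources and emit a trading signal --
    no human ever reads the news inside this loop."""
    total = sum(sentiment_score(h) for h in headlines)
    if total > 1:
        return "BUY"
    if total < -1:
        return "SELL"
    return "HOLD"


news = [
    "Acme beats expectations with record profit",
    "Analysts issue upgrade after growth figures",
    "Acme faces lawsuit over product recall",
]
print(trade_decision(news))  # BUY: net sentiment across sources is positive
```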

Unintended consequences

In fact, algorithms operating without human intervention now play a significant role in financial markets. For example, 85% of all trading in the foreign exchange markets is conducted by algorithms alone. The growing algorithmic arms race to develop ever more complex systems to compete in these markets means huge sums of money are being allocated according to the decisions of machines.

On a small scale, the people and companies that create these algorithms are able to affect what they do and how they do it. But because much of artificial intelligence involves programming software to figure out how to complete a task by itself, we often don’t know exactly what is behind the decision-making. As with all technology, this can lead to unintended consequences that may go far beyond anything the designers ever envisaged.

Take the 2010 “Flash Crash” of the Dow Jones Industrial Average Index. The action of algorithms helped create the index’s single biggest decline in its history, wiping nearly 9% off its value in minutes (although it regained most of this by the end of the day). A five-month investigation could only suggest what sparked the downturn (and various other theories have been proposed).

But the algorithms that amplified the initial problems didn’t make a mistake. There wasn’t a bug in the programming. The behaviour emerged from the interaction of millions of algorithmic decisions playing off each other in unpredictable ways, following their own logic in a way that created a downward spiral for the market.
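That emergent quality is easy to demonstrate with a toy simulation. The Python sketch below is purely illustrative – every number in it is invented, and it is not a model of the actual 2010 event – but it shows how hundreds of individually sensible “sell if the price falls below my threshold” rules can arm each other once a small random dip triggers the first few.

```python
# Toy illustration of emergent behaviour in interacting algorithms.
# Each bot follows a harmless-looking rule; together they cascade.
# All numbers are invented for illustration.
import random

random.seed(1)
price = 100.0
# 500 bots, each selling once if the price falls below its trigger.
bots = [{"trigger": price * random.uniform(0.95, 0.998), "sold": False}
        for _ in range(500)]

for tick in range(50):
    sells = 0
    for bot in bots:
        if not bot["sold"] and price < bot["trigger"]:
            bot["sold"] = True
            sells += 1
    # Each wave of selling pushes the price down, arming more triggers.
    price *= 1 - 0.001 * sells
    price *= random.uniform(0.998, 1.001)  # small background noise
    if tick % 10 == 0:
        print(f"tick {tick:2d}: price {price:7.2f}, sells this tick {sells:3d}")

print(f"final price {price:.2f}, bots that sold: {sum(b['sold'] for b in bots)}/500")
```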

The conditions that made this possible occurred because, over the years, the people running the trading system had come to see human decisions as an obstacle to market efficiency. Back in 1987 when the US stock market fell by 22.61%, some Wall Street brokers simply stopped picking up their phones to avoid receiving their customers’ orders to sell stocks. This started a process that, as author Michael Lewis put it in his book Flash Boys, “has ended with computers entirely replacing the people”.

The financial world has invested millions in superfast cables and microwave communications to shave just milliseconds off the rate at which algorithms can transmit their instructions. When speed is so important, a human being that requires a massive 215 milliseconds to click a button is almost completely redundant. Our only remaining purpose is to reconfigure the algorithms each time the system of technological decisions fails.

As new boundaries are carved between humans and technology, we need to think carefully about where our extreme reliance on software is taking us. As human decisions are substituted by algorithmic ones, and we become tools whose lives are shaped by machines and their unintended consequences, we are setting ourselves up for technological domination. We need to decide, while we still can, what this means for us both as individuals and as a society.

Dionysios Demetis, Lecturer in Management Systems, University of Hull

This article is republished from The Conversation under a Creative Commons license.

Mag – Magazine N. 2

MAG is a free publication produced by AEG Corporation Limited UK – International Advice.

MAG can be downloaded for free in PDF.

Download here (2MB)

Facebook’s plan to protect the European elections comes up short

William Dance, Lancaster University

Intentionally false news stories were shared more than 35m times during the 2016 US presidential election, with Facebook playing a significant role in their spread. Shortly after, the Cambridge Analytica scandal revealed that 50m Facebook profiles had been harvested without authorisation and used to target political ads and fake news for the election and later during the UK’s 2016 Brexit referendum.

Though the social network admitted it had been slow to react to the issue, it developed tools for the 2018 US midterm elections that enabled Facebook users to see who was behind the political ads they were shown. Facebook defines ads as any form of financially sponsored content. This can be traditional product adverts or fake news articles that are targeted at certain demographics for maximum impact.

Now the focus is shifting to the 2019 European parliament elections, which will take place from May 23, and the company has introduced a public record of all political ads and sweeping new transparency rules designed to stop them being placed anonymously. This move follows Facebook’s expansion of its fact-checking operations, for example by teaming up with British fact-checking charity FullFact.

Facebook told us that it has taken an industry-leading position on political ad transparency in the UK, with new tools that go beyond what the law currently requires and that it has invested significantly to prevent the spread of disinformation and bolster high-quality journalism and news literacy. The transparency tools show exactly which page is running ads, and all the ads that they are running. It then houses those ads in its “ad library” for seven years. It claims it doesn’t want misleading content on its site and is cracking down on it using a combination of technology and human review.

While these measures will go some way towards addressing the problem, several flaws have already emerged. And it remains difficult to see how Facebook can tackle fake news in particular with its existing measures.

In 2018, journalists at Business Insider successfully placed fake ads they listed as paid for by the now-defunct company Cambridge Analytica. It is this kind of fraud that Facebook is aiming to stamp out with its news transparency rules, which require political advertisers to prove their identity. However, it’s worth noting that none of Business Insider’s “test adverts” appear to be listed in Facebook’s new ad library, raising questions about its effectiveness as a full public record.

The problem is that listing which person or organisation paid the bill for an ad isn’t the same as revealing the ultimate source of its funding. For example, it was recently reported that Britain’s biggest political spender on Facebook was Britain’s Future, a group that has spent almost £350,000 on ads. The group can be traced back to a single individual: 30-year-old freelance writer Tim Dawson. But exactly who funds the group is unclear.

While the group does allow donations, it is not a registered company, nor does it appear in the database of the UK’s Electoral Commission or the Information Commissioner. This highlights a key flaw in the UK’s political advertising regime that isn’t addressed by Facebook’s measures, and shows that transparency at the ad-buying level isn’t enough to reveal potential improper influence.

The new measures also rely on advertisers classifying their ads as political, or using overtly political language. This means advertisers could still send coded messages that Facebook’s algorithms may not detect.

Facebook recently had more success when it identified and removed its first UK-based fake news network, which comprised 137 groups spreading “divisive comments on both sides of the political debate in the UK”. But the discovery came as part of an investigation into hate speech towards the home secretary, Sajid Javid. This suggests that Facebook’s dedicated methods for tackling fake news aren’t working as effectively as they could.

Facebook has had plenty of time to get to grips with the modern issue of fake news being used for political purposes. As early as 2008, Russia began disseminating online misinformation to influence proceedings in Ukraine, which became a testing ground for the Kremlin’s tactics of cyberwarfare and online disinformation. Isolated fake news stories then began to surface in the US in the early 2010s, targeting politicians and divisive topics such as gun control. These then evolved into sophisticated fake news networks operating at a global level.

But the way Facebook works means it has played a key role in helping fake news become so powerful and effective. The burden of proof for a news story has been lowered to one aspect: popularity. With enough likes, shares and comments – no matter whether they come from real users, click farms or bots – a story gains legitimacy no matter the source.

Safeguarding democracy

As a result, some countries have already decided that Facebook’s self-regulation isn’t enough. In 2018, in a bid to “safeguard democracy”, the French president, Emmanuel Macron, introduced a controversial law banning online fake news during elections, which gives judges the power to order content removed and to obtain information about who published it.

Meanwhile, Germany has introduced fines of up to €50m on social networks that host illegal content, including fake news and hate speech. Incidentally, while Germans make up only 2% of Facebook users, they now comprise more than 15% of Facebook’s global moderator workforce. In a similar move in late December 2018, Irish lawmakers introduced a bill to criminalise political adverts on Facebook and Twitter that contain intentionally false information.

The real-life impact these policies have is unclear. Fake news still appears on Facebook in these countries, while the laws give politicians the ability to restrict freedom of speech and the press, something that has sparked a mass of criticism in both Germany and France.

Ultimately, there remains a considerable mismatch between Facebook’s promises to make protecting elections a top priority, and its ability to actually do the job. If unresolved, it will leave the European parliament and many other democratic bodies vulnerable to vast and damaging attempts to influence them.

William Dance, Associate Lecturer in Linguistics, Lancaster University

This article is republished from The Conversation under a Creative Commons license.

The view from Google: privacy, GDPR and Ireland as a one stop shop

In his address, Mr Enright, Google’s Chief Privacy Officer, shares his perspectives on Google’s experiences of GDPR, almost one year on. He discusses lessons learned along the way, as well as sharing perspectives on how Google approaches privacy and data protection, and the importance of Ireland as a One Stop Shop.

About the Speaker:

Mr Enright was appointed as Google’s Chief Privacy Officer last year. He joined Google in March 2011, with nearly 20 years of experience in creating and implementing programs for privacy, data stewardship and information risk management. Prior to joining Google, Mr Enright served as the most senior privacy executive at two Fortune 500 online and offline retail enterprises.

The IIEA is Ireland’s leading European & International Affairs think tank. We are an independent, not-for-profit organisation with charitable status.

Our role is to identify key European and international policy trends, which will inform the work of Ireland’s decision makers and business leaders, and enrich the public debate on Ireland’s role in the EU and on the global stage.

Social media doesn’t need new regulations to make the internet safer – GDPR can do the job

Eerke Boiten, De Montfort University

From concerns about data sharing to the hosting of harmful content, every week seems to bring more clamour for new laws to regulate the technology giants and make the internet “safer”. But what if our existing data protection laws, at least in Europe, could achieve most of the job?

Germany has already started introducing new legislation, enacting a law in 2018 that forces social media firms to remove hateful content. In the UK, the government has proposed a code of practice for social media companies to tackle “abusive content”. And health secretary Matt Hancock has now demanded laws regulating the removal of such content. Meanwhile, deputy opposition leader Tom Watson has suggested a legal duty of care for technology companies, in line with recent proposals by Carnegie UK Trust.

What’s notable about many of these proposals is how much they reference and recall the EU’s new General Data Protection Regulation (GDPR). Hancock, who led the UK’s introduction of this legislation (though he has also been accused of a limited understanding of it), referred to the control it gives people over the use of their data. Watson recalled the level of fines imposed by GDPR, hinting that similar penalties might apply for those who breach his proposed duty of care.

The Carnegie proposals, developed by former civil servant William Perrin and academic Lorna Woods, were inspired by GDPR’s approach of working out what protective measures are needed on a case-by-case basis. When a process involving data is likely to pose a high risk to people’s rights and freedoms, whoever is in charge of the process must carry out what’s known as a data protection impact assessment (DPIA). This involves assessing the risks and working out what can be done to mitigate them.
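For readers who think in code, a DPIA can be pictured as structured risk triage: enumerate the threats to rights and freedoms, score them, and attach mitigations to anything above a threshold. The Python sketch below is a loose illustration under my own assumptions – the fields, example risks, scores and threshold are invented, not the ICO’s template or anything mandated by GDPR.

```python
# Hypothetical sketch of a data protection impact assessment (DPIA)
# as structured data. Field names, example risks and the scoring
# scheme are illustrative assumptions, not an official template.
from dataclasses import dataclass, field


@dataclass
class Risk:
    description: str     # which right or freedom is threatened
    likelihood: int      # 1 (remote) to 5 (almost certain)
    severity: int        # 1 (minimal) to 5 (severe)
    mitigation: str = "none identified"

    @property
    def score(self) -> int:
        return self.likelihood * self.severity


@dataclass
class DPIA:
    processing_activity: str
    risks: list = field(default_factory=list)

    def requiring_mitigation(self, threshold: int = 9) -> list:
        return [r for r in self.risks if r.score >= threshold]


assessment = DPIA(
    processing_activity="personalised content ranking",
    risks=[
        Risk("exposure of vulnerable users to harmful content", 3, 5,
             "exclude sensitive topics from engagement optimisation"),
        Risk("political profiling affecting free elections", 2, 4,
             "restrict targeting categories for political ads"),
    ],
)

for risk in assessment.requiring_mitigation():
    print(f"HIGH RISK ({risk.score}): {risk.description} -> {risk.mitigation}")
```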

The important thing to note here is that, while earlier data protection laws largely focused on people’s privacy, GDPR is concerned with their broader rights and freedoms. This includes things related to “social protection, public health and humanitarian purposes”. It also applies to anyone whose rights are threatened, not just the people whose data is being processed.

Existing rights and freedoms

Many of the problems we are worried about social media causing can be seen as infringements of rights and freedoms. And that means social media firms could arguably be forced to address these issues by completing data protection impact assessments under the existing GDPR legislation. This includes taking measures to mitigate the risks, such as making the data more secure.

For example, there is evidence that social media may increase the risk of suicide among vulnerable people, which means social media may pose a risk to those people’s right to life, the first right protected by the European Convention on Human Rights (ECHR). If social networks use personal data to show people content that could increase this risk to their lives then, under GDPR, the network should reconsider its impact assessment and take appropriate steps to mitigate the risk.

[Image: GDPR provides an existing remedy. Shinonome Production/Shutterstock]

The Cambridge Analytica scandal, where Facebook was found to have failed to protect data that was later used to target users in political campaigns, can also be viewed in terms of risk to rights. For example, Protocol 1, Article 3 of the ECHR protects the right to “free elections”.

As part of its investigation into the scandal, the UK’s Information Commissioner’s Office has asked political parties to carry out impact assessments, based on the concern that profiling people by their political views could violate their rights. But given Facebook’s role in processing the data involved, the company could arguably be asked to do the same to see what risks to free elections its practices pose.

Think about what you might break

From Facebook’s ongoing history of surprise and apology, you might think that the adverse effects of any new feature in social media are entirely unpredictable. But given that the firm’s motto was once “move fast and break things”, it doesn’t seem too much of a stretch to ask Facebook and the other tech giants to try to anticipate the problems their attempts to break things might cause.

Asking “what could possibly go wrong?” should prompt serious answers instead of being a flippant expression of optimism. It should involve looking not just at how technology is intended to work, but also how it could be abused, how it could go too far, and what might happen if it falls victim to a security breach. This is exactly what the social media companies have been doing too little of.

I would argue that the existing provisions of GDPR, if properly enforced, should be enough to compel tech firms to take action to address much of what’s wrong with the current situation. Using the existing, carefully planned and highly praised legislation is better and more efficient than trying to design, enact and enforce new laws that are likely to have their own problems or create the potential for abuse.

Applying impact assessments in this way would share the risk-based approach of the proposals to give technology firms a duty of care. In practice, the two might not be very different – but this route avoids some of the potential problems of new legislation, which are many and complex. Using the law in this way would send a clear message: social media companies should own the internet safety risks they help create, and manage them in coordination with regulators.

Eerke Boiten, Professor of Cybersecurity, School of Computer Science and Informatics, De Montfort University

This article is republished from The Conversation under a Creative Commons license.

Venezuela: region’s infectious crisis is a disaster of hemispheric proportions

Martin Llewellyn, University of Glasgow

Over the last two decades, Venezuela has entered a deep socioeconomic and political crisis. Once recognised as a regional leader for public health and disease control, Venezuela’s healthcare and health research infrastructure has fallen into a state of collapse, creating a severe humanitarian crisis and a major outbreak of infectious disease.

This week, we published the first comprehensive assessment of the vector-borne disease outbreak that is assailing the country. Vector-borne diseases are those spread by insects – mosquitos, sand flies, kissing bugs and others. The “we” is a global consortium of authors, many of whom are Venezuelan doctors and academics working in the country under exceptionally difficult conditions. Others include Colombian, Brazilian and Ecuadorian academics who are witnessing the crisis unfold: Venezuelan refugees on the streets of their cities, diseases (malaria, Chagas disease, measles, diphtheria) spreading through porous land borders, and regional disease outbreaks of unprecedented proportions.

I first travelled to Venezuela in the early 2000s to study Chagas disease, which is caused by a single-celled parasite spread by the kissing bug, a blood-sucking insect that infests the walls of adobe houses. Chagas disease is a silent killer: once a person is infected, the parasite can lie dormant for decades in its human host before causing fatal heart disease in middle age.

[Image: Kissing bug: spreader of Chagas disease. schlyx/Shutterstock]

You can’t travel to Venezuela, including to the communities where I worked in the Llanos (plains) of the west, without being entranced by the beauty of the landscape and the friendliness of its people. From the laboratory in the Institute of Tropical Medicine in Caracas, where I was taken under the wing of Professor Hernan Carrasco and his team, dancing salsa between the benches on a Friday night, to the villages where we slept under the stars in hammocks while the inhabitants sang joropo music, it is a thoroughly welcoming place.

Venezuela is also a place of extreme inequality. You only have to look up from the glitzy streets of downtown Caracas to the mud and brick ranchos clustered on the hillsides above to appreciate that. It is this inequality that drove the socialist revolution, and while times were good – and oil prices high – much of Venezuela’s wealth found its way into the hands of those who needed it most. Declining oil prices, corruption and mismanagement have changed all that. Alongside economic collapse has come a collapse in basic healthcare, an exodus of medical professionals, and a massive upsurge in disease.

Fragmented information

At the core of the infectious disease crisis in Venezuela is the lack of reliable data. Either through denial, a lack of resource, or both, the Venezuelan state is reneging on its responsibility to report on the extent of current outbreaks. The purpose of our recent review was to draw together fragmented information from Venezuelan civil societies, researchers, international organisations and neighbouring countries to get the best estimate of what is actually going on. Over 400,000 cases of malaria in 2017, 15% of the rural population infected with Chagas disease, surging dengue, Chikungunya and Zika infections. The picture is grim.

Health is highly politicised in Venezuela and working as a researcher is not without risk. My collaborators have been threatened with jail and with having their medical licences suspended simply for reporting outbreaks in the scientific literature. The Institute of Tropical Medicine where I worked has been raided by colectivos (community organisations that support the Venezuelan government), microscopes smashed, medical records destroyed, hard drives ripped out of computers.

The centre of the current malaria epidemic in southeastern Bolivar state is also the centre of state-sponsored illegal gold mining in Venezuela. The tonnes of gold recently shipped by the Maduro regime to Russia and Turkey are soaked in the sweat and blood of poor Venezuelans, sleeping with their families beside mosquito-infested mining pits. Drawing attention to this malaria epidemic is drawing attention to the ecological and humanitarian disaster in this region, where mercury is polluting pristine rivers and thousands are dying for want of antimalarial drugs that the government will not or, more likely, cannot supply.

[Image: Illegal gold mining in Bolivar state. Author provided]

Venezuelans are resilient and resourceful people. The Venezuelan researchers still living and working in the country are a testament to that, as is the support they receive from the diaspora of Venezuelans forced to live abroad. In recognising the regional aspect to the crisis, the spillover of disease in the region and the millions of refugees, we hope our review will galvanise international organisations to act. I’m optimistic that we are reaching a turning point in a crisis ten years in the making. I fervently hope the spirit of Venezuelans will break through. I hope that scientists will dance salsa again – and soon.

Martin Llewellyn, Senior Lecturer, Institute of Biodiversity Animal Health & Comparative Medicine, University of Glasgow

This article is republished from The Conversation under a Creative Commons license.

Climate change: obsession with plastic pollution distracts attention from bigger environmental challenges

[Image: When temperatures rise and ice melts, more water flows to the seas and ocean water warms and expands in volume. Shutterstock]

Rick Stafford, Bournemouth University and Peter JS Jones, UCL

By now, most of us have heard that the use of plastics is a big issue for the environment. Partly fuelled by the success of the BBC’s Blue Planet II series, people are more aware than ever before of the dangers to wildlife caused by plastic pollution – as well as the impact it can have on human health – with industries promising money to tackle the issue.

Single-use plastics are now high on the agenda, with many people trying to do their bit to reduce usage. But what if all of this just provides a convenient distraction from some of the more serious environmental issues? In our new article in the journal Marine Policy, we argue that plastic pollution – or, more accurately, the response of governments and industry to plastic pollution – provides a “convenient truth” that distracts from addressing the real environmental threats, such as climate change.

Yes, we know plastic can entangle birds, fish and marine mammals, which can starve after filling their stomachs with it – and yet there are no conclusive studies on the population-level effects of plastic pollution, and studies on its toxicity, especially to humans, are often overplayed. Research shows, for example, that plastic is not as great a threat to oceans as climate change or over-fishing.




Read more: Plastics in oceans are mounting, but evidence on harm is surprisingly weak

More easily fixed?

Taking a stand against plastic – by carrying reusable coffee cups, or eating in restaurant chains where only paper straws are provided – is the classic neoliberal response. Consumers drive markets, and consumer choices will therefore create change in the industry.

Alternative products can often have different, but equally severe, environmental problems. And the benefits of these small-scale, consumer-driven changes are often minor. Take, for example, energy-efficient light bulbs – in practice, using these has been shown to have very little effect on a person’s overall carbon footprint.

But by making these small changes, plastic still appears to be an issue we can address. The Ocean Cleanup project – which aims to sieve plastic out of the sea – is a classic example. Despite many scientists’ misgivings about the project, and its recent failed attempts to collect plastic, it is still attractive to many as it allows us to tackle the issue without having to make any major lifestyle changes.

[Image: Scientists first became aware of a potentially warming world as far back as the 1970s. Pexels]

The real issue

That’s not to say plastic pollution isn’t a problem – rather that there are much bigger problems facing the world we live in, specifically climate change.

In October last year the Intergovernmental Panel on Climate Change (IPCC) produced a report detailing the drastic action needed to limit global warming to 1.5˚C. Much of the news coverage focused on what individuals could do to reduce their carbon footprint – although some articles did also indicate the need for collective action.

Despite the importance of this message, environmental news has been dominated by the issues of plastic pollution. So it’s not surprising that so many people think ocean plastics are the most serious environmental threat to the planet. But this is not the case. In 2009 the concept of planetary boundaries was introduced to indicate safe operating limits for the Earth from a number of environmental threats.

[Image: Planetary boundaries. The green circle indicates a safe operating space. Three boundaries have been greatly exceeded. Felix Mueller/Wikimedia Commons, CC BY-SA]

Three boundaries were shown to have been exceeded: biodiversity loss, nitrogen flows and climate change. Climate change and biodiversity loss are also considered core planetary boundaries, meaning that if they are exceeded for a prolonged time, they can shift the planet into new, less hospitable, stable states.

These “clear and present dangers” of climate change and biodiversity loss could undermine the capacity of our planet to support over seven billion people – with the loss of homes, food sources and livelihoods. It could lead to major disruptions of our ways of life – by making many areas uninhabitable due to increased temperatures and rising sea levels. These changes could start to happen within the current century.

Lifestyle overhaul

This is not to deny that some significant steps have been taken to help the planet environmentally by reducing plastic waste. But it is important not to forget the large-scale systemic changes needed internationally to tackle all environmental concerns. This includes longer-term and more effective solutions to the plastic problem – but also extends to more radical large-scale initiatives to reduce consumption, decarbonise economies and move beyond materialism as the basis for our well-being.

The focus needs to be on making the way we live more sustainable by questioning our overly consumerist lifestyles that are at the root of major challenges such as climate change, rather than a narrower focus on sustainable consumer choices – such as buying our takeaway coffee in a reusable cup. We must reform the way we live rather than tweak the choices we make.

There is a narrow window of opportunity to address the critical challenge of, in particular, climate change. And failure to do so could lead to massive systemic impacts on the Earth’s capacity to support life – particularly the human race. Now is not the time to be distracted by the convenient truth of plastic pollution, as the relatively minor threats this poses are eclipsed by the global systemic threats of climate change.

Rick Stafford, Professor of Marine Biology and Conservation, Bournemouth University and Peter JS Jones, Reader in Environmental Governance, UCL

This article is republished from The Conversation under a Creative Commons license.