MAG is a free publication produced by AEG Corporation Limited UK – International Advice. It can be downloaded free of charge as a PDF.
The 2019 UK election campaign has been particularly dispiriting for anyone who cares about the truth. Even established parties have proven they are not above using tricks to manipulate the news. Meanwhile, politicians are quick to shout “fake news” about anything they disagree with, even accurate stories.
The Conservative Party kicked things off by doctoring a Keir Starmer interview to make him appear to refuse to answer questions. Then a prankster gained thousands of views with a photoshopped Daily Mirror page claiming Jo Swinson shot squirrels for fun.
A tweet by a now-suspended account launched the fake squirrel story, attracting fewer than a thousand shares. But a screenshot was shared on Facebook, where it went viral. Someone else posted the story on the semi-professional publishing platform Medium, where it was widely shared before being taken down.
Some of this may seem trivial or nonsensical, but even the silliest stories skew the discussion away from rational debate. Jo Swinson was forced to deny shooting squirrels in a television interview, even as the shares racked up across Facebook.
At the other end of the technological spectrum, an astonishingly realistic video by Future Advocacy used an impressionist voiceover artist and real, doctored videos to show Boris Johnson and Jeremy Corbyn endorsing each other as prime minister.
Such fakes are not illegal, although Future Advocacy believes they should be, and some American legislators have moved to ban them in the run up to an election.
Meanwhile, the Conservatives exploited the public’s desire to sort fakes from facts by rebranding their press office Twitter account as “UK Factcheck”, mimicking the established independent fact-checker FullFact.
So, with so much officially sanctioned and well-crafted misleading content out there, how can you tell whether an online story is actually true?
One simple place to start is to ask who the original poster is. Does this person have a history of unusual claims, or does this seem to be a newly created profile? Is the website hosting the content slightly unusual, perhaps ending in something other than the standard .co.uk or .com?
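The domain check above can even be automated in a crude way. The sketch below is purely illustrative: the `FAMILIAR_TLDS` list, the example URLs and the helper name are all invented for this example, and a real checker would use a maintained suffix list and weigh many more signals than the domain alone.

```python
from urllib.parse import urlparse

# Endings treated as "familiar" for this sketch only; a real tool would
# consult a maintained list such as the Public Suffix List.
FAMILIAR_TLDS = {"co.uk", "com", "org", "net", "gov.uk", "ac.uk"}

def has_unusual_tld(url: str) -> bool:
    """Return True if the URL's host does not end in a familiar suffix."""
    host = urlparse(url).hostname or ""
    return not any(host == tld or host.endswith("." + tld)
                   for tld in FAMILIAR_TLDS)

print(has_unusual_tld("https://www.bbc.co.uk/news"))              # False
print(has_unusual_tld("https://daily-mirror.breaking.xyz/story"))  # True
```

An unusual domain is not proof of anything on its own, of course; it is simply one more prompt to look closer before sharing.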
Next, look beyond the outrageous headline and read the whole story. The headline can never give the full picture and may just be clickbait. Check all the content. Are there misspellings or poor grammar? Click through on the links in the story – do they back it up?
If pictures are involved, they can be searched for using reverse image search to find the original picture. Does it appear on any reputable site?
Don’t be distracted by official-looking forms or trademarks. Research shows blind people are better at spotting scams because they are not distracted by logos.
All these things are relatively easy to check. But most readers only make these checks if they already suspect the story isn’t true. And herein lies the real problem, not with technological wizardry but confirmation bias – not on your computer but inside your head.
First, study after study shows most people are far more likely to select stories to read that are consistent with their pre-existing beliefs. Reading these stories then entrenches their beliefs further. If a story feeds into an existing set of beliefs, it is far more likely to be accepted without questioning.
To go back to our first example, if you already believe Labour politicians never give a straight answer, you are more likely to click on a doctored video of Keir Starmer looking stumped, headlined “Labour has no plan for Brexit”.
You are more likely to believe it, without considering the source. It is then used as evidence of your original belief, strengthening your view that Labour politicians are untrustworthy.
This matters because it leads to more extreme and entrenched beliefs. Hillary Clinton is not just a politician whom you wouldn’t care to vote for – she is a criminal who should be locked up (or so many Donald Trump supporters believe).
What can be done about this? Interestingly, research suggests making news slightly harder to understand may make readers less extreme. This seems to be because readers have to pay closer attention to a “disfluent” text. In engaging their brains, they make better judgements about the content – but the effect only works if the readers are not trying to multitask.
But as websites compete for eyes, few businesses will risk making their content even slightly too hard for their readers.
In the end, the best advice may be to stick to reputable news providers, such as the BBC or the Times. For all their faults, they at least have trained, named, accountable professionals with a commitment to honest journalism.
This article is republished from The Conversation under a Creative Commons license.
Statement: Intention to fine Marriott International, Inc more than £99 million under GDPR for data breach
Statement in response to Marriott International, Inc’s filing with the US Securities and Exchange Commission that the Information Commissioner’s Office (ICO) intends to fine it for breaches of data protection law.
Following an extensive investigation the ICO has issued a notice of its intention to fine Marriott International £99,200,396 for infringements of the General Data Protection Regulation (GDPR).
The proposed fine relates to a cyber incident which was notified to the ICO by Marriott in November 2018. A variety of personal data contained in approximately 339 million guest records globally was exposed by the incident; around 30 million of those records related to residents of 31 countries in the European Economic Area (EEA), and seven million related to UK residents.
It is believed the vulnerability began when the systems of the Starwood hotels group were compromised in 2014. Marriott subsequently acquired Starwood in 2016, but the exposure of customer information was not discovered until 2018. The ICO’s investigation found that Marriott failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems.
Information Commissioner Elizabeth Denham said:
“The GDPR makes it clear that organisations must be accountable for the personal data they hold. This can include carrying out proper due diligence when making a corporate acquisition, and putting in place proper accountability measures to assess not only what personal data has been acquired, but also how it is protected.
“Personal data has a real value so organisations have a legal duty to ensure its security, just like they would do with any other asset. If that doesn’t happen, we will not hesitate to take strong action when necessary to protect the rights of the public.”
Marriott has co-operated with the ICO investigation and has made improvements to its security arrangements since these events came to light. The company will now have an opportunity to make representations to the ICO as to the proposed findings and sanction.
The ICO has been investigating this case as lead supervisory authority on behalf of other EU Member State data protection authorities. It has also liaised with other regulators. Under the GDPR ‘one stop shop’ provisions the data protection authorities in the EU whose residents have been affected will also have the chance to comment on the ICO’s findings.
The ICO will consider carefully the representations made by the company and the other concerned data protection authorities before it takes its final decision.
Source: Information Commissioner’s Office (ICO)
When Facebook unveiled its new digital currency libra, it explicitly said the initiative was intended to address the problems faced by the world’s unbanked: the 1.7 billion people without a bank account. As well as facing inconvenience, these people generally pay over the odds for financial services like bank transfers or overdrafts.
This is a pretty big potential market for Facebook so it’s not surprising that it would target the opportunity. But could libra really transform access to financial services for those who are currently excluded? There are reasons to raise serious doubts.
Across the world, the main reasons people give for not holding a bank account are that they don’t have enough money, don’t see the need for an account, find it too expensive, or another family member already has one. Not having the right documentation is also a barrier, as is distrust in the financial system.
But the specific barriers to financial inclusion vary significantly by region and are usually a combination of social and economic factors. For instance, while cost is a big barrier in Latin America, lack of documentation is the big issue in Zimbabwe and the Philippines.
This makes it difficult for any single intervention to serve such a huge and varied group of people. Worryingly, the Facebook “white paper” that outlines libra does not really engage with these barriers or say how it plans to overcome them.
People’s trust in institutions can strongly influence the extent to which they use their services, as I have found in my own research into microfinance, which I have presented at conferences but have yet to publish in an academic journal.
I have found that people are more likely to choose something familiar over something novel. Since libra will be a new currency relying on digital wallets and built on blockchain online ledger technology, it is not short of novelties. Inspiring trust is therefore likely to be a major challenge.
And simply signing someone up to an account – be it a bank account or a digital wallet – is only part of the financial inclusion challenge.
In India, 190m people still do not have bank accounts, but the percentage of the population who do have accounts has steadily increased to 80%. In 2017, however, nearly half of all bank accounts in the country had seen no activity over the whole of the previous year. One of the reasons is financial literacy, which remains low both in India and many other developing countries. Many people in India have said they are simply unaware of the different benefits of a bank account, such as overdraft facilities or credit schemes.
As many as 62% of the world’s unbanked have received only a primary-level education or less, and in poorer countries the proportion is almost certainly going to be higher. Expecting such people to make complex currency conversions into a new virtual currency is asking a lot.
Before that can happen, there is a need for financial literacy measures and for initiatives that motivate people to use the services available. Without this additional support, there is a strong risk that Facebook will boast large numbers of sign-ups but see very low transaction rates from the people most in need.
Within days of Facebook’s announcement, libra faced strong pushback from regulators and policymakers around the world. There is much concern about this proposed shift of power from central banks to a private corporation.
But aside from questions about the ethics of data privacy or the creation of a supranational currency, libra faces an important practical question. On the one hand, it is not clear how a model such as libra, where there will presumably be little or no physical presence in many countries, would interact with and adhere to local regulations.
On the other hand, if it does conform to the local standards of each country, it is unclear how it will overcome challenges like signing people up and strict documentation requirements. Will it really be able to serve the unbanked better than local providers who are used to the challenges in that specific market already?
Entrepreneurs and businesses can either start with a problem and think of the best way to solve it; or they can start with a solution and find the biggest and best problem it might solve. I’m not convinced that libra is a good move in either direction. Facebook either has a huge amount of work to do to adapt its solution to fit the problem better, or it needs to redefine the problem that it is trying to fix.
Intentionally false news stories were shared more than 35m times during the 2016 US presidential election, with Facebook playing a significant role in their spread. Shortly after, the Cambridge Analytica scandal revealed that 50m Facebook profiles had been harvested without authorisation and used to target political ads and fake news for the election and later during the UK’s 2016 Brexit referendum.
Though the social network admitted it had been slow to react to the issue, it developed tools for the 2018 US midterm elections that enabled Facebook users to see who was behind the political ads they were shown. Facebook defines ads as any form of financially sponsored content. This can be traditional product adverts or fake news articles that are targeted at certain demographics for maximum impact.
Now the focus is shifting to the 2019 European parliament elections, which will take place from May 23, and the company has introduced a public record of all political ads and sweeping new transparency rules designed to stop them being placed anonymously. This move follows Facebook’s expansion of its fact-checking operations, for example by teaming up with British fact-checking charity FullFact.
Facebook told us that it has taken an industry-leading position on political ad transparency in the UK, with new tools that go beyond what the law currently requires, and that it has invested significantly to prevent the spread of disinformation and bolster high-quality journalism and news literacy. The transparency tools show exactly which page is running ads, and all the ads that it is running. It then houses those ads in its “ad library” for seven years. It claims it doesn’t want misleading content on its site and is cracking down on it using a combination of technology and human review.
While these measures will go some way towards addressing the problem, several flaws have already emerged. And it remains difficult to see how Facebook can tackle fake news in particular with its existing measures.
In 2018, journalists at Business Insider successfully placed fake ads they listed as paid for by the now-defunct company Cambridge Analytica. It is this kind of fraud that Facebook is aiming to stamp out with its new transparency rules, which require political advertisers to prove their identity. However, it’s worth noting that none of Business Insider’s “test adverts” appear to be listed in Facebook’s new ad library, raising questions about its effectiveness as a full public record.
The problem is that listing which person or organisation paid the bill for an ad isn’t the same as revealing the ultimate source of its funding. For example, it was recently reported that Britain’s biggest political spender on Facebook was Britain’s Future, a group that has spent almost £350,000 on ads. The group can be traced back to a single individual: 30-year-old freelance writer Tim Dawson. But exactly who funds the group is unclear.
While the group does allow donations, it is not a registered company, nor does it appear in the database of the UK’s Electoral Commission or the Information Commissioner. This highlights a key flaw in the UK’s political advertising regime that isn’t addressed by Facebook’s measures, and shows that transparency at the ad-buying level isn’t enough to reveal potential improper influence.
The new measures also rely on advertisers classifying their ads as political, or using overtly political language. This means advertisers could still send coded messages that Facebook’s algorithms may not detect.
Facebook recently had more success when it identified and removed its first UK-based fake news network, which comprised 137 groups spreading “divisive comments on both sides of the political debate in the UK”. But the discovery came as part of an investigation into hate speech towards the home secretary, Sajid Javid. This suggests that Facebook’s dedicated methods for tackling fake news aren’t working as effectively as they could.
Facebook has had plenty of time to get to grips with the modern issue of fake news being used for political purposes. As early as 2008, Russia began disseminating online misinformation to influence proceedings in Ukraine, which became a testing ground for the Kremlin’s tactics of cyberwarfare and online disinformation. Isolated fake news stories then began to surface in the US in the early 2010s, targeting politicians and divisive topics such as gun control. These then evolved into sophisticated fake news networks operating at a global level.
But the way Facebook works means it has played a key role in helping fake news become so powerful and effective. The burden of proof for a news story has been lowered to one aspect: popularity. With enough likes, shares and comments – no matter whether they come from real users, click farms or bots – a story gains legitimacy no matter the source.
As a result, some countries have already decided that Facebook’s self-regulation isn’t enough. In 2018, in a bid to “safeguard democracy”, the French president, Emmanuel Macron, introduced a controversial law banning online fake news during elections, giving judges the power to remove such content and to obtain information about who published it.
Meanwhile, Germany has introduced fines of up to €50m on social networks that host illegal content, including fake news and hate speech. Incidentally, while Germans make up only 2% of Facebook users, they now comprise more than 15% of Facebook’s global moderator workforce. In a similar move in late December 2018, Irish lawmakers introduced a bill to criminalise political adverts on Facebook and Twitter that contain intentionally false information.
The real-life impact these policies have is unclear. Fake news still appears on Facebook in these countries, while the laws give politicians the ability to restrict freedom of speech and the press, something that has sparked a mass of criticism in both Germany and France.
Ultimately, there remains a considerable mismatch between Facebook’s promises to make protecting elections a top priority, and its ability to actually do the job. If unresolved, it will leave the European parliament and many other democratic bodies vulnerable to vast and damaging attempts to influence them.
This article is republished from The Conversation under a Creative Commons license.