
Digital, Culture, Media and Sport Committee

Oral evidence: Fake News, HC 363

Tuesday 19 December 2017

Ordered by the House of Commons to be published on 19 December 2017.


Members present: Damian Collins (Chair); Julie Elliott; Paul Farrelly; Simon Hart; Julian Knight; Ian C. Lucas; Christian Matheson; Brendan O'Hara; Jo Stevens; Giles Watling.

Questions 1-85

Witnesses

I: Samantha Bradshaw, Oxford Internet Institute, and Professor Kalina Bontcheva, Professor of Text Analysis, the University of Sheffield.

II: David Alandete, Editor, El País, Francisco de Borja Lasheras, Director, Madrid Office, European Council on Foreign Relations, and Mira Milosevich-Juaristi, Senior Fellow for Russia and Eurasia at Elcano Royal Institute and Associate Professor, History of International Relations, Instituto de Empresa, Madrid.

 

 

Examination of witnesses

Witnesses: Samantha Bradshaw and Professor Kalina Bontcheva.

 

Q1                Chair: Good morning, Professor Bontcheva and Samantha Bradshaw. Welcome to this oral evidence session of the Digital, Culture, Media and Sport Select Committee. This is the first oral evidence session of the Committee’s inquiry into fake news. Unfortunately, we will have to conclude this panel at 11.15 this morning so that we can move on to the second panel.

We are primarily interested in discussing the role of technology in addressing problems created by technology—in this case, the dissemination of disinformation and fake news as one area of problematic content that we believe digital media companies need to respond to, alongside other areas of content where they are required by law to do so.

Could you start us off by giving your assessment of what the capabilities are in terms of using technology to try to identify and act against the spread of disinformation?

Professor Bontcheva: Based on our experience from the PHEME project, we looked specifically at rumours associated with different types of events—some were events like shootings, and others were hoax stories like “Prince is going to have a concert in Toronto”—and at how those stories were disseminated. We looked at how reliably we can identify such rumours: one of the hardest tasks is grouping together all the different social media posts, such as tweets or Reddit posts, around the same rumour. On Reddit it is a bit easier because you have threads, but on Twitter it is not so easy, because sometimes you have multiple originating tweets that refer to the same rumour.

That is the real challenge: to piece together all these stories, because the ability to identify whether something is correct or not depends a lot on evidence and also on the discussions the public are carrying out on social media platforms. By seeing one or two tweets, sometimes even journalists cannot be reliably certain whether something is true or false, but if we see discussion around that and more evidence accumulating as time goes on, the judgment becomes more reliable.

So as we see more evidence as time goes by, it is easier to predict the veracity of that rumour, but the main question is: can you reliably identify all these different tweets that are talking about the same rumour? That is the main challenge there. Determining the veracity afterwards can be done reliably with about 85% to 90% accuracy, and you can identify quite reliably, with about 80% accuracy, whether a certain post is supporting, denying, questioning or commenting on a particular rumour. These are some figures to give you an idea.

Q2                Chair: So you can deconstruct the way a false rumour is spread. Can you learn lessons from the pattern of that activity to help you to spot and predict the use of those same patterns in the future?

Professor Bontcheva: Yes, exactly. This is the approach we took. We used some machine-learning algorithms, and we built a lot of features for those algorithms, looking at the source from which the information originated, how reliable or not it was in the past and what it had posted before. Some of the patterns—not just how a rumour spreads over time, but how much people are questioning, commenting on, denying and supporting it—are very different for true rumours versus false ones. We also looked at various features of the actual content itself, how fast it is spreading through the networks, who is retweeting it and so on. These are all parts of the information being fed to the algorithms.

Q3                Chair: When you analyse those characteristics of what you believe may be the spreading of fake news, do you believe there is a role in using technology to be able to identify those patterns very quickly—almost in real time—so that it can be identified as a likely source of fake news?

Professor Bontcheva: Yes, this is what we try to do. Basically, we have a training data set of prior known rumours and their distribution through Twitter. We used that to train the models and then applied them to new, unseen rumours. What we were seeing was about a 10% drop in accuracy when you move from a topic that has been discussed before in a rumour to a new one that the model has not seen, but if you give it as few as 10 examples from the current rumour—already labelled, for example by journalists—that accuracy goes up to the expected level of around 90%.

So you can really improve those algorithms if you have a little bit of new data, but you can certainly quite quickly start to pick up on the likelihood of something being true or not. Having said that, some rumours can stay unsubstantiated for months or years, so, again, it depends on whether the evidence is there.

Q4                Chair: Is this an approach that the tech companies should be adopting themselves, to design their own platforms to make it easier to spot likely sources of fake news being spread?

Professor Bontcheva: Everything we do is published open source and is available to them, should they wish to take it up. I am sure they are working on these challenges themselves. What I would like to see as a scientist is a bit more co-operation between the platforms and us, and a bit more openness with respect to the availability of data we can get. They have a huge advantage over us, because they have a lot more data about these accounts than we do. They have access to the full Twitter networks—for example, the social graphs—and we do not, because access is rate-limited. When we have 1.8 million unique Twitter users in the Brexit data set we are studying, we do not have the capacity to download their networks, so we cannot feed this to our models, but they can.

Q5                Chair: Then again, at least you have some access, whereas with Facebook you have none.

Professor Bontcheva: Yes, exactly. That is a big benefit. We have primarily worked on Reddit and Twitter for that reason.

Q6                Chair: That does not mean to say that there is less of an issue on Facebook, just that it is much harder—or impossible—to look at the data.

Professor Bontcheva: Yes, exactly. That is why I am saying there needs to be more co-operation with the social platforms.

Q7                Ian C. Lucas: I am interested in the source of the tweets, Facebook posts or whatever. How easy is it to trace the source?

Samantha Bradshaw: When you say “trace the source”—

Ian C. Lucas: To find out who the source is of the first tweet.

Samantha Bradshaw: There are a couple of ways we do that at the Computational Propaganda Project—the ComProp project—at Oxford University. One thing that we cannot do is look at the actual account and trace back their IP address to figure out exactly where they are located in the world; we do not have the capacity to do that kind of digital forensics.

Q8                Ian C. Lucas: Is that information available?

Samantha Bradshaw: That would be something that social media companies would have access to, but it would then be user data.

Q9                Ian C. Lucas: So Twitter and Facebook, for example, would have that information.

Samantha Bradshaw: Exactly. They would know exactly where their users are connecting from, what devices they are connecting from, how long they are connected—all kinds of metadata on that account’s activity. We do not have access to that as researchers.

Q10            Ian C. Lucas: So that information is not publicly available.

Samantha Bradshaw: No.

Q11            Ian C. Lucas: For both Twitter and Facebook, do you have to give an address as an individual?

Samantha Bradshaw: Facebook has a real-name policy, where you have to give a real name. It does not necessarily have to be your full real name. I do not use my full real name on Facebook, but I do use parts of it—you can get away with things like that. With Twitter, there is no real-name policy—

Q12            Ian C. Lucas: So I could invent another person and operate on Twitter without any problem.

Samantha Bradshaw: Exactly. Now, I always like to think of the flip side of anonymity here as well, and I would like to remind people of the benefits of anonymity. It is easy to come down on Twitter and say, “Because you do not have to use a real name, that is why we see so many fake accounts and bot accounts appearing on Twitter,” but there is a real benefit to having anonymity online. Twitter was the platform that really helped people and protestors in the Arab Spring co-ordinate and start those movements. If they had not had that anonymity, they might not have been as successful. So there is a pro and a con to that.

Ian C. Lucas: That was very helpful. We are at the beginning, so I wanted to start from stage one. That is an important place to start.

Q13            Giles Watling: I have one additional question following on from Mr Lucas’s questions. Having established that real names are difficult to pin down for all the various reasons you outlined, I presume all these tweets and Facebook entries can be traced to specific digital addresses, so you can locate geographically where they come from and through that begin to identify individuals. Is that the case?

Samantha Bradshaw: Yes. Some of the data we do have access to, if users turn it on, is their geo-location data. On Twitter, I can set my location and I can turn on my geo-location as well, so when I tweet from here it will say: tweeted from my OnePlus at this location at this time, and it will time stamp it. That allows us as researchers to collect that data, but users have to consent to that first.

Q14            Giles Watling: Could you begin to identify individual people through this?

Professor Bontcheva: Yes, actually you could. The thing that has to be said here is, first of all, that only about 7% to 10% of tweets are geo-tagged—sometimes even fewer, depending on the data set. People in the UK are generally more privacy-aware than in the US, so it is not very easy to do.

But what we and other researchers have been working on is the location field in the Twitter profile. It is in text, so a lot of users will put rubbish there, but a large proportion of them will put information down, at least to something like a city level, so you would know if they are London-based or whatever. This is self-declared and of course not very reliable, but if you start looking also at locations that they are discussing in their own tweets and so on, sometimes you can build quite accurate pictures. When it comes specifically to the ones who have the co-ordinates turned on and available on their tweets, you can predict quite accurately their home location.

Q15            Giles Watling: And from that, you can begin to extrapolate why people are tweeting and perhaps what their motivation is.

Professor Bontcheva: That is a bit harder to connect sometimes, but yes, for some people you could.

Q16            Christian Matheson: Professor Bontcheva, you talked about asking or hoping for more co-operation in some of the studies from some of the big tech companies. How have you found them so far? From their point of view, why should they co-operate and assist with providing this kind of information?

Professor Bontcheva: Well, for example, less than a month ago we were working with BuzzFeed on a story about Russian bots, so we looked at how many tweets in our referendum data set came from the Twitter accounts identified as potential Russian-linked accounts. After that, we looked at the retweet network and identified another 45 accounts that were very similar to the ones that had been deactivated by Twitter but were not on that list. BuzzFeed reported those to Twitter, and they were suspended soon after.

The reason why I am saying it would be useful to have more co-operation is that, at the point when we know that the accounts have been suspended, it is for us a lost opportunity—now we are not able to study these questions at all. What were the interactions between the accounts? How were they connected? How did they operate? What happened? As soon as an account is suspended, we stop having access to anything they have done, their social networks and so on. So actually Twitter could have missed more accounts, but we have no way of even trying to find out now.

Q17            Christian Matheson: From the point of view of the provider, Twitter, what would be the advantage to them of assisting you? Why should they?

Professor Bontcheva: Just quality of information—

Samantha Bradshaw: Can I jump in there? There is a big financial incentive for them to clean up their platforms, because their business model is based on advertising. Let’s say they have a bunch of fake user accounts on their platform. Advertisers do not want to show ads to fake people, because fake people are not going to buy products. If platforms are a lot more responsible in cleaning up those profiles and getting rid of the accounts that do not exist, advertisers will have more confidence in doing business with them and targeting advertisements at users.

Q18            Ian C. Lucas: When you say “fake people”, do you mean real people who are using different descriptions of themselves, or computers?

Samantha Bradshaw: It could be both. We study bots a lot in the ComProp project, and those accounts are purely automated. There is very little human interaction with the accounts. The creator of the bot will just run the script and no longer interact with the account. Sometimes there are real people behind the accounts who engage with them a lot more. Sometimes they blend automation into the accounts as well. We call those cyborgs: it is a mix of automation and human curation.

Those accounts are a lot harder to detect for researchers, because they feel a lot more genuine. Instead of just automating a bunch of tweets, so that something retweets different accounts 100 times a day, they might actually post comments and talk with other users—real people—on the accounts.

Professor Bontcheva: Actually, one of these managed to successfully fool a lot of media and mainstream personalities. Look for Jenna Abrams; it is one of the famous agent accounts on the list, and her name appears in a lot of news stories. It was a fake account, but it had a really believable personality and a long history as a long-standing account. It was interacting with celebrities and politicians, so there was a lot of coverage, even in mainstream media, of things said by that account; in the end, it turned out to be fake. So sometimes it is not easy even for people to find out.

Q19            Chair: Just to be clear on a couple of points—a few other colleagues want to come in with quick questions of their own: you talked about the accounts identified on Twitter as being sources of fake news or fake accounts. Do you mean the accounts identified as part of the US Senate’s investigation?

Professor Bontcheva: Yes.

Q20            Chair: Are you aware of Twitter having offered up any accounts from its own analysis, other than responding to information given to it by the US authorities?

Professor Bontcheva: Sorry, could you—

Chair: Has Twitter identified any accounts itself, or is it simply acting on information it has been given by the US authorities?

Professor Bontcheva: I don’t know. This is one thing we discussed internally: we don’t have enough information to know exactly how the accounts were identified and why they were flagged. It is potentially linked to Russia. We just took the list because that was what was available publicly.

Q21            Chair: Sure. Based on the work you have done, what proportion of accounts that are regularly active on Twitter are likely to be fake accounts?

Professor Bontcheva: This is an interesting one, and it is something that we only recently started looking at the data for. It is hard to say which ones are fake; it is easier to say which ones exhibit a high degree of automation. In Oxford, they define this as tweeting more than 50 times a day, but you can set another threshold if you like, because there are organisational accounts which obviously would tweet more.

There are some accounts which look quite suspicious when you look through the list of the most active ones around the referendum. Some of them are known bots and so on.

Q22            Chair: To take the specific example of active tweeting during the referendum period, what proportion of that do you think was being done by, if you like, programmed accounts, be they human or robot controlled, versus organic discussion?

Professor Bontcheva: I don’t have these numbers at hand at the moment, but I am planning on submitting some written evidence to the Committee and I will include that in there—no problem.

Q23            Chair: Finally from me on this, a lot of this is about contacting an audience and trying to get a story trending in the case of Twitter. How many accounts would you need to be engaging with on a story to get it to trend? Is it quite a low bar—lower than people might realise?

Professor Bontcheva: I think it really depends on the topic. Sometimes, if you engage in a topic that creates a lot of public attention and resonates with the public’s concern on that particular point, things can go viral very quickly, but, at the same time, some other topics and tweets will not take off at all. It is hard to predict.

There is work on how likely it is that a certain tweet will go viral—we don’t do this at Sheffield and we have not looked into it very much.

Q24            Giles Watling: I want to pick up on something Ms Bradshaw said about fake news not being profitable because people out there are not buying. I read somewhere in the report—I cannot dive through the papers right now—that, in fact, the reverse is true. Once you get an explosion—indeed, it was demonstrated in Catalonia where a spike of some 5 million tweets happened in a short space of time—I got the impression from the report that it was fairly profitable for the companies to have that happen, because it goes beyond the robots to real people and real life.

Samantha Bradshaw: What I meant there, about the financial incentive for platforms to remove fake accounts, is that advertisers are now aware that all these fake accounts are out there operating on social media. Advertisers buy advertisements targeted at particular segments of people, and those fake accounts may have certain qualities—for example, I might create a fake account for a white male in his early 20s who maybe still lives at home with his parents. There are all kinds of fine-tuned indicators that advertisers can target people with, so they might not want to buy as much advertising targeted at those segments.

Fake news is very profitable for the people creating the stories because they get the advertising revenue from the clicks and from the people accessing the stories.

Q25            Giles Watling: I see. They are two separate things.

Samantha Bradshaw: Yes.

Giles Watling: That has clarified it. Thank you.

Q26            Paul Farrelly: I was particularly keen to have in our terms of reference advertising and the way advertising is sold in the digital age, because of the profit imperative that may drive some of the sites or people behind fake news.

Many years ago, in the pre-digital age, I was involved with the sale of a company called Media Audits. Advertisers hated that company because it existed to work for their clients—to measure the effectiveness of what they did in terms of advertising. I don’t know where that scrutiny and audit now stands or has been developed in the digital age, but I would imagine that if bots are programmed to click and click and click, and people are paying per click and it is not actually generating any sales, that would be an imperative. Do you know how effectively—we can follow this up—advertising is audited these days?

Samantha Bradshaw: This is one of my biggest concerns as someone who studies these issues. I don’t think we have enough transparency around the advertising models and the way digital advertising works. Facebook has now begun to take steps to say, “This organisation is the organisation that is targeting you.” If we look back at the US election, for example, that was not the case. You would get examples of what we call dark posts, where you would not know the source of the posts.

Because it is tailored advertising, my Facebook feed is going to look very different from yours, and there is very little transparency or auditing that happens around the advertisements. There would be no place for me to go and see all the different advertisements that were created by whatever political campaign. With TV and radio advertising, the situation is a little different. I believe that some non-profit organisations report on those things and keep active archives, but we don’t have an equivalent of that in the digital space.

Q27            Paul Farrelly: This is something that we will pursue with advertisers, I think.

My second question is this. I’m not on Twitter, because life’s too short, and I have really just got fed up with Facebook now. My advice to people if they get upset with what people do on Twitter—with impersonating accounts or the nasty little political trolling sites that upset people in my area—is, “Don’t look at it, then. Stay off it.”

But there is evidence of an effect whereby it upsets people. I am not sure yet what evidence there is, in terms of definitive studies, as to whether it actually changes voting behaviour. The posters saying “Turkey (population 76 million) is joining the EU. Vote Leave” certainly changed the tone of the conversation, and from that I can surmise that maybe some people’s votes were changed by that fake poster advertising in the referendum campaign, but I can’t prove it. Are there any definitive—or even not definitive—studies out there not just saying, “All this is going on,” but actually reaching into how it may have affected voting behaviour in terms of where people cast their ballots?

Samantha Bradshaw: This is something that we had debated for a very long time in the academic community—whether the thing that I read on social media is going to make me change my ideas and vote a particular way. We can speak to a few studies around this process. It is a very complicated process—the way people develop their political identities and formulate their beliefs. A lot of different factors go into that; it is not just social media. Social media is one aspect of it, but a lot of people still get news from traditional sources, like television, radio and newspapers.

We do know from studies in psychology that the more you read a story or a headline and the more you see that headline, the more likely you are to recall it from memory, and the more likely you are to start to believe that it is actually real. You can remember it, and therefore it must be true. That is one of the reasons why bots could be very powerful, because they are amplifying certain stories and flooding people’s news feeds. The more I see stories saying Hillary Clinton was involved in a paedophile ring, the more likely I am to remember that and maybe eventually start believing it.

We can measure the after-effect. There is quite a long tail on conspiracy theories. Segments of the population will continue to believe that they are true—I think that 25% of the American population still believe that Hillary Clinton was involved in a paedophile ring, even though that conspiracy theory has long been disproven. We can look at studies around the issue to provide a little insight, but there is no real conclusive answer.

Professor Bontcheva: I can send you a link to an experiment that one of the newspapers—it could have been The Guardian—did where they exposed a certain number of liberal voters in the US to a Facebook feed that was aligned with the beliefs of someone supporting the opposing party, and then the other way around, to try to overcome their filter bubble, because all their friends are likely to have similar content to them. They interviewed them and asked whether that changed their voting behaviour and their thoughts. It was not a very large experiment; I think it was eight or 10 people. Only one person’s voting behaviour changed, in the sense that they said they were going to support one candidate and then decided to not vote because it was just too complex for them to decide what they should do based on the evidence from the two sides. For the other people, it was mostly confirmation bias. That is very hard to go against. It is a real problem.

Paul Farrelly: Could you send us a link? As it is an attempt at a study, that would be useful.

Professor Bontcheva: Yes.

Q28            Julian Knight: I have one main question, but I want to check something with you first. Obviously, the work you do could be considered embarrassing to some of the social media companies. Kalina, you said that you co-operated with BuzzFeed in an article. Previous witnesses have said that they have received notices of potential legal action from some of the social media companies when it comes to their co-operation or their articles. I want to check whether you or anyone within academia who you know of effectively has had any legalistic approach from social media companies such as Facebook.

Professor Bontcheva: No. We do not study Facebook at all, and we make sure that we abide by the terms and conditions of Twitter and any other social media platform that we study. We have not had any problems.

Julian Knight: Have you heard of anyone within academia having some sort of legalistic approach from a social media company?

Samantha Bradshaw: We have not. I know there was a lot of controversy around the emotional contagion paper that was published by a group of researchers in the US who worked alongside Facebook, so those researchers might have some experience with that.

Q29            Julian Knight: Do you think that there are any attempts by social media companies to either divert or close down debate within academia over this subject?

Samantha Bradshaw: Because of that controversy, it received a lot of negative attention within the media. The study, for anyone in the room who is not aware of it, essentially found that you could manipulate people’s emotions by showing them different kinds of stories on their Facebook feeds. If you showed them more negative stories, they would feel more negatively. If you showed them positive stories, they would feel more positive. Since the publication of that paper, Facebook has seemed to pull back from co-operating with researchers.

Q30            Julian Knight: That is very interesting. My main question is to do with Russian involvement. We have heard quite a lot of evidence in that regard. Looking at the evidence you have seen first-hand, do you think there is any sort of step change in terms of Russian involvement? Is there any means by which they are becoming even better at their job, so to speak, and more targeted? Are the challenges we face morphing before our eyes?

Professor Bontcheva: We took the accounts that Twitter identified before Congress as being associated with Russia, and we also took the other 45 that we found with BuzzFeed. We looked at the tweets around the referendum, specifically one month before, and we did not find an awful lot of activity from those accounts. There were 3,200 tweets in our data sets coming from those accounts, and 800 of those—about 26%—came from the new 45 accounts that we identified. However, one important aspect that has to be mentioned is that those 45 new accounts were tweeting in German, so even though they are there, the impact of those 800 tweets on the British voter is, I would say, not likely to have been significant.

We also looked at other aspects, such as the accounts associated with Russia Today and Sputnik, and at the retweets that they received and so on. Again, they are not as significant when put in context against the sheer number of tweets in that period—there are about 13 million tweets in our data set from that period.

Q31            Julian Knight: Okay, I get the picture. Basically, you are saying that the evidence you collected around the referendum does not show a great deal of impact when it comes to Russian involvement. Do you think that if you did that today around a key point—we will hear from Catalonia later, in the second panel—that would be different? Is your feeling that people are getting even smarter and that this is a growing situation? That is to Samantha as well.

Samantha Bradshaw: I think that yes, we are seeing capabilities in this area developing. It is not just Russia; it is a global phenomenon. One of the studies published last summer identified 28 different countries that were developing these kinds of techniques for different purposes. It is not just the Russian Government who are using social media to manipulate public opinion.

I also think that a lot of planning goes into digital disinformation campaigns; it is not something where you can just turn the bots on and make them work, because for them to be very successful takes a lot of time and planning, and a lot of social engineering. For example, if we look at what happened with the US election, there were all the email leaks, and things like that take a lot of time. It is not something that you can do overnight; you have to identify your targets and socially engineer the right people to get access to the right accounts. Then you combine that with the disinformation campaign and the bots that are going to flood social media with the leaked documents and things like that. For events that just sort of happen, such as the Catalonian referendum, and even Brexit to an extent—that was not a planned event like the US election, which happens every four years and can be planned for—we don’t see as much sophistication in the kinds of disinformation campaigns that are happening.

Q32            Brendan O'Hara: Following on from what Mr Farrelly was talking about, and the research and influence of this disinformation, do people believe that they are immune from it? Have you done any research on what people think? Do they believe that they are not affected by it and that they could spot it?

Professor Bontcheva: It is not something that we have looked at, because we primarily look at computational methods for detecting and counterbalancing this type of disinformation. From personal experience, and from talking to friends on social media and so on, I think there are different levels of literacy regarding what you can do and how you go about verifying some information. I also think that many people are not capable of doing that properly, and that doing it properly takes a lot of time. For some sources it is easier: if you know a priori that you should not trust information from Russia Today or Breitbart or Infowars, that makes it easier, because if you see a link from there you know, “Okay, I should not really believe this, or I should fact-check it anyway.” For some other sources of information, it is sometimes hard to know.

Q33            Brendan O'Hara: But if you have those bots firing out the same piece of information seemingly from 1,000 different sources, and it drops on to your timeline, is there evidence that people believe it more than not? Do people think they are immune and that they have the ability to spot fake news if it is coming at them from all sides?

Professor Bontcheva: I don’t think anybody is immune. We have even seen examples of fake stories being printed in the mainstream media, under time pressure, in breaking news situations. I don’t think anybody is immune.

Q34            Brendan O'Hara: Is there any research, or are you guys doing any research, on trust in news sources generally, whether fake or real news? Is public trust in the source of news suffering as a result?

Professor Bontcheva: There are studies on the topic of trust in mainstream media and how that has changed in the social media age. Some of that research has also shown that the level of trust European citizens have in social media platforms, and in the information on those platforms, is lower than the level of trust Americans have in them. That is reflected in our data. In the referendum data set we have, we looked at the domain names in the links that were most retweeted and tweeted, and which domains the information came from. There is a big difference—I think Samantha Bradshaw found this as well—between the types of websites from which the information is being referenced.

Here in the UK, a lot of the tweets were from sources like the BBC, The Guardian, The Sun and The Mirror, which are pretty established media outlets. There were also some references to Breitbart, especially in leave-supporting tweets. But by and large, the percentage and role of those websites is lower here than was found in the US election.

Samantha Bradshaw: I can add some numbers, or some percentages, to that. We have a Brexit memo coming out imminently, for which we collected Twitter data for the two weeks leading up to the Brexit vote and evaluated the kind of information people were sharing on social media. Our “junk news” category includes highly polarising content, conspiratorial kinds of content and content that is clearly false, but those kinds of stories made up only about 4.7% of the overall URLs being shared, in a sample of, I think, 5.1 million tweets. That is quite low. We also looked at YouTube videos in that sample, because we see a lot of junk news showing up in videos. In the YouTube sample it was about 11%.

Q35            Jo Stevens: Following on from that, and thinking about digital media literacy and education, do you think that improving digital literacy in schools, or whatever, would help people to understand and filter fake news? Do you have any suggestions on how digital literacy might operate?

Samantha Bradshaw: I definitely think digital literacy should be part of the solution, but it is a very long-term solution, because it is not something we can just teach and automatically see the benefits of. Teaching young people how to read stories and find news—not just using social media but also seeing the importance of newspapers and traditional television and things like that, using all sources of media out there to develop their political identities—should be part of the educational curriculum.

Professor Bontcheva: Following up on your question, on 13 and 14 November there was a multi-stakeholder conference on fake news in Brussels organised by the Commission, recordings of which are available for anybody to watch on the internet. A large portion of the discussion was on exactly that issue of digital literacy. There is support for that not just from organisations offering those kinds of services—going into schools and so on—but also from the social platforms themselves. They said that they are supportive of those initiatives.

Q36            Jo Stevens: One of the questions that we have asked lots of witnesses as part of this inquiry is whether the major internet platforms should be reclassified as publishers. Does either of you have a view on that?

Samantha Bradshaw: Yes, I do. As critical as I am of social media platforms, I do not think that they should be considered publishers, because what they do is very different from what traditional media organisations do. They do not create the content themselves. They are the channel that makes the information available, but they do not write the initial stories in the way that traditional publishers create their own content. That in itself is fundamentally different.

If you started putting the restrictions of being publishers on to social media platforms, it might have a very chilling effect on free speech, because they might be forced to over-regulate what people are sharing on social media and expressing as free speech. I think there are some dangers there that outweigh the benefits. There are different ways to regulate social media that would be healthier for democracy. I do not think that reclassifying them as publishers would be a good move.

Q37            Jo Stevens: What do you think the alternative is then, if you have to balance free speech with taking account of the spread of fake news?

Samantha Bradshaw: We definitely need more transparency around advertising. I think that a lot of the problem comes from their business models being based on user data, which they then sell back to advertisers. There is zero transparency about that. We now see politicians and foreign actors buying that data, yet users have no control over the kinds of information that Facebook collects about them and then sells on to third parties. I think that remedying the issue in that direction is one step. We also need more accountability around advertising. Creating an archive so that we can see where sponsored content comes from is important, and combining that with digital literacy so that people are a lot more critical and aware of what they are reading online.

Q38            Simon Hart: Our line of questioning earlier on was whether there was verifiable evidence that any of this had an impact on voters. I think that is one question, but my question is whether there is any evidence to suggest that it has an impact on Government. Quite a lot of us might think that if there was some very targeted activity on social media, forget voters; Government policy will change, perhaps even on the basis of 500 or 1,000 well targeted social media campaigns. I wonder whether that is an area of concern, or whether there is any evidence that Governments across the world—not just here—are vulnerable to what they perceive as a build-up of public opinion and momentum.

Samantha Bradshaw: That is actually an area that we are moving into studying in the new year—in particular, looking at science. Good policy-making and good work from Governments and our elected officials comes from well researched science. On social media, there are quite a few campaigns that seek to undermine numerous scientific studies. For example, smoking causes cancer. We have known that smoking causes cancer for many years now, but then you see stories online that say that if you are a woman it might be okay, or that if you do it only at certain times of day or have only one a day, it is not too bad. All those campaigns aim to undermine the credibility of our scientific research and our scientific communities.

It is not just smoking, of course—the anti-vax campaigns, climate change: there are all kinds of issues. This is one thing that we are moving into studying in the future. I don’t have a good answer for you right now.

Q39            Simon Hart: I think it is a good answer in many respects. You also mentioned the sophistication of campaigns or the need for them—in order for them to be effective, they need to be sophisticated. Do you accept, as would be the case here, that where you have—I don’t know whether these figures are right—say around 70 parliamentary seats with a majority of less than 2,000 or 3,000, the sophistication does not have to extend particularly far and wide across the UK population if you can target it to a relatively small number of narrow majority seats that are possibly going to have a disproportionate effect on the outcome of an election? Is that an area we should be concerned about?

Samantha Bradshaw: Definitely. For me, fake news is not necessarily the problem; it is the fact that you can target these stories at people who would be more susceptible to internalising their messages. That is where the advertising part comes in, and that is my biggest concern. When we looked at the US election, we also broke it down geographically to look at the kinds of information being shared state by state. We found that junk news was concentrated in the swing states in the US election. They had a much higher proportion of junk news stories being shared, compared with states that were uncontested. Whether or not that actually had an influence is still debatable, but we can say that in the states where there was going to be a major battle between the Democrats and Republicans there was a lot more junk news.

Q40            Simon Hart: A last very quick question to follow Jo Stevens about publishers. I take your point that you think there is a danger. Do you also accept that these platforms already operate as publishers in some respect because they are making editorial decisions occasionally and taking down stuff which in somebody’s view is offensive or potentially dangerous? So they have already crossed the line between platform and publisher. The question is, where do you redraw it?

Samantha Bradshaw: That is a very fair point because social media companies play a large moderating role already in the kinds of content that people see. They will take down hate speech and all kinds of other offensive content.

Q41            Simon Hart: A picture of a woman breastfeeding—is that an editorial decision taken by a social media platform? I think that makes it a publisher.

Samantha Bradshaw: My fear is that Facebook and all social media companies already have so much power in determining what we do or don’t see, and there is very little transparency around what those guidelines are and how they are making those decisions. That is where I think the regulation comes in—opening up their black box and determining what should or should not be taken down. I just don’t want to give them more power than they already have.

Q42            Chair: This has been a really interesting exchange. Do you mean that most users have little understanding of what they see in their news feed and why?

Samantha Bradshaw: Exactly.

Q43            Paul Farrelly: This issue of the black box: you don’t want to give them more power to take down more stuff than they already have done. This goes back to arguments that we have had about getting them to act on certain things. It starts with child pornography, where they do now act, because otherwise they would face sanctions. It goes to an ongoing argument we have over music rip-off sites and copyright. Clearly, when you get to freedom of speech, the world is much wider. Of course, if you were to make them comply with the laws of libel if they were deemed a publisher, or of harassment and incitement if they were deemed to be promoting those things by virtue of allowing those messages on their platform, then people could set up in other jurisdictions. It would be a question of how many people would look at those sites based west of the Urals, compared with Google, Facebook and all the other social media.

There is still a question: if harmful effects are being identified, to what extent should those platforms take responsibility, given that they are the most respected and reviewed platforms in a particular jurisdiction?

Samantha Bradshaw: For things like child pornography, things that incite terrorism and violence, and that kind of content, it is very clear that there is a negative effect on society from that kind of content being shared. When it comes to junk news or fake news, it is quite hard to disentangle certain things. For example, the line between political humour and junk can often be quite thin, so how do you go about determining whether or not this satirical story about politics should be taken down? I think that it gets quite complicated, and it can be quite messy and therefore have a more chilling effect, compared with other kinds of issues that have a very clear line.

Q44            Paul Farrelly: It is striking a balance, isn’t it? If your comparator is, “If we did this, the Arab Spring may not have happened,” you are already sort of tilting the playing field and loading the dice, because not everyone would agree, so many years later, that the Arab Spring—or the orange revolution—worked out very well.

Samantha Bradshaw: I think that Government has an important role to play in regulating these companies, but I think that the regulation has to start with more transparency, because we do not even understand the basics—how they are moderating content, and what their algorithms are showing to people in their newsfeed. Before we can come up with clear, concrete expectations to make our social media ecosystems a lot healthier for democracy, we need to understand the problem. That is the first step that needs to happen before we jump on the “we just need to regulate it” boat.

Q45            Ian C. Lucas: Can I go back to the point about the US presidential election and the activity on social media? You said that there was an increase—are you able to identify the sources of that? Who was it? Was anyone encouraging that, or was there any link to particular candidates?

Samantha Bradshaw: With the junk news stories in the US election, a lot of them were peddled by groups such as the alt-right—extremist groups sharing the very highly polarising content, conspiracy theory content and things like that. We included WikiLeaks as highly polarising content. A huge majority of the URLs being shared in the US came from WikiLeaks. From known Russian sources, I think it was about 6% or 7%, so a little bit higher.

Q46            Ian C. Lucas: So 6% or 7% of the activity was from Russian sources.

Samantha Bradshaw: Of the highly polarising content was from Russian sources, yes.

Q47            Ian C. Lucas: That strikes me as pretty high.

Samantha Bradshaw: Yes. There were quite a few RT and Sputnik stories being shared in the American election.

Q48            Ian C. Lucas: How do you identify those sources?

Samantha Bradshaw: We looked at the tweets. We took any tweet that had a URL and we just clicked through to that URL. Mainly, they would be identified by the base URL, such as sputnik.com or rt.com, or The New York Times—obviously, that would go into a different category. That is how we identified them.

Q49            Ian C. Lucas: So that 6% to 7% was easily identifiable?

Samantha Bradshaw: Yes.

Q50            Ian C. Lucas: Beyond that, there could be more.

Samantha Bradshaw: Yes, there could be more, because it could be a Russia-sponsored blog on WordPress, for example. Like I said, we don’t have the digital forensics to go back and see the IP addresses that created these WordPress accounts, look at the user information, see where they were registered—things like that.

Q51            Chair: Do you think you could have applied similar techniques to looking at activity on Facebook—sharing links to Facebook pages or groups? From the study that has been done in America as part of the Senate’s investigation, that seems to be the way in which fake news is organised on Facebook.

Samantha Bradshaw: Yes, we are trying to move away from Twitter now and really get into Facebook. We know that most of the bots and automated accounts on Facebook are going to be groups or pages, because when you register a group or a page, that is when you don’t need a real name, so it is a little bit easier to automate those kinds of accounts or to create a fake kind of movement, as opposed to using a real-name account.

There are a few ways that are being developed to study Facebook, but it is still not open enough, and I am not familiar enough with those methodologies to really speak to them.

Chair: I think we will conclude there. Thank you very much; it was very interesting evidence. Thank you to Professor Bontcheva as well, who had to leave.

 

Examination of witnesses

Witnesses: David Alandete, Francisco de Borja Lasheras and Mira Milosevich-Juaristi.

Q52            Chair: We are waiting for one or two colleagues to return, but we will make a start, if you are happy with that. Thank you very much for joining us this morning. This is the first day of oral evidence for the Committee’s inquiry into fake news. As part of that, we have been very interested, in preparing for the oral evidence sessions, in looking at the role of political propaganda distributed and disseminated through social media platforms, aggressively targeting individual users, and the way that has featured in elections.

We heard in a previous session from people who have been studying the US presidential election in particular. There is obviously a lot of interest in this country around the role of disinformation in the Brexit referendum campaign and other election campaigns in Europe. We were very interested, in the written evidence we received, in looking at the role of disinformation and targeting of social media users during the referendum in Catalonia. With the elections to the Catalan Assembly falling on Thursday this week, we thought it was a particularly interesting time to discuss this issue. We are very grateful to you for coming and giving evidence to us on this occasion, as the issue is so topical.

Perhaps you could give us, as an introduction, an overview of the extent to which disinformation was used during the referendum campaign and subsequently, and in particular of the role of Russian-backed agencies that may have been involved in the dissemination of that information. I know you have written and spoken extensively about this, but perhaps you could give, for the benefit of the Committee, a sort of summary introduction to what you think the key points are on this.

David Alandete: I am a journalist and a managing editor for El País, which, as you know, is the world’s leading daily in Spanish—the main newspaper in Spain. We started covering this the day before the referendum. You have to remember that the referendum had been banned by the central Government, so we did not know whether it was going to happen or not; whether the referendum would take place was uncertain, and if it did take place, it would be a victory for the independence parties. Those were the facts at the time.

We started seeing news. We have some tools to measure popular content online; they are available to anyone who pays for them. They analyse social networks in the public domain. We started seeing fake news popping up very intensely in the days prior to the 1 October referendum.

What is fake news? Fake news is false information, like saying that on the day of the referendum, 1,000 people were injured, as someone claimed; or that that was the worst violence that Europe had seen since world war two, which has been said and published; or that Spain is a country that does not respect the basic freedoms of voting and everything. We started seeing those with the same pattern—always the same pattern. Russian outlets funded by the Kremlin that operate in 100 countries and in 33 languages, namely RT and Sputnik, would put out pieces of news that would be massively retweeted, reposted on Facebook or shared on other platforms, and they would immediately be No. 1. We had seen that with other conflicts, mainly Syria, the United States, and nationalistic movements within the former Soviet republics—always within the Russian sphere of influence—but we suddenly started seeing it in relation to Catalonia.

We went to the source. I will give you an example that involves the United Kingdom. A piece of information by RT had the headline, “Why isn’t NATO bombing Madrid?” That was shared on social networks. It was a piece of news about how Kosovo taught lessons to the European Union, which may sound like a familiar point, because Putin himself made it very recently regarding Catalonia. The source who said that was, they claim, a former UK diplomat. That gave RT the grounds to use that headline about NATO bombing Madrid, which went viral. The UK diplomat is a former UK envoy to Uzbekistan.

Ian C. Lucas: We know who he is.

David Alandete: You guys know who he is? All right. He is widely used by RT for that. Whatever he mentions becomes a piece of news with a headline. I am a journalist and I know that “Why isn’t NATO bombing Madrid?” is not a headline, but for many people who are not very news literate, it is worrisome. Just to summarise: that is what we saw. We have been seeing it very intensely, and it is only growing. I believe it is the same pattern that has been followed in other areas in Europe.

Mira Milosevich-Juaristi: I am a senior research fellow at the Elcano Royal Institute and a university professor. I read David’s article and then I tried to investigate by myself. For me, the key question is: why would Russia want to interfere in a referendum in Catalonia, and who is responsible for that? From my point of view, that is really the key question. I have also analysed the content of the Twitter messages of Julian Assange and Edward Snowden. I have analysed Sputnik’s articles, RT television and Russian domestic TV, which is for internal consumption, and I have seen some very interesting messages there.

I have to highlight that two Ministers of the Spanish Government—the Minister of Defence, María Dolores de Cospedal, and the Minister of Foreign Affairs—said, one day after the Catalan referendum, that Russia was the territorial origin of this hyperactivity of artificial intelligence, but both of them underlined that there is no evidence of any link between the Russian Government and this kind of activity. Since the Catalan referendum, many Spanish politicians have repeated this mantra about the interference of Russia.

I asked why Russia would interfere in the Catalan referendum and who was responsible for it. There are three hypotheses. The first is that Catalan pro-independence activists have Assange’s support. That could be right. The second is the interpretation that President Vladimir Putin gave: he said that when patriotic hackers see one country and one political elite talking badly about Russia, they like to take revenge, so they start to tweet and spread the fake news. The third hypothesis is that the Kremlin has activated its information war to destabilise Spain, as part of its overall narrative about an almost collapsing European Union. That is the first and most important message spread by the TV networks and newspapers that receive economic support from the Kremlin.

In my personal opinion, I think that it is impossible. The complexity of the combination of different instruments used during the referendum in Catalonia—the social networks, the tweets of Julian Assange and Edward Snowden, the content factories behind media such as Sputnik and Russia Today—reveals that there is a strategy behind this. We have to remember a few things. First, the Russian military doctrine of 2014 introduced the term “information war”. Secondly, Valery Gerasimov, who is a general and the chief of the general staff of the Russian army, has written many articles about hybrid war, information war and the difference between conventional war and hybrid war. Many of those articles discuss the concept of strategic deterrence, which is a combination of conventional military and non-military instruments.

My hypothesis is that it is impossible to carry out such a complex operation, with different instruments, without the support of a Government agency. Of course, I do not have material to prove that—it is a hypothesis. I also think that we have to pay attention to the previous instances of information war in Brexit, the United States and so on. I do not think that any pro-Russian actor would do anything without Russian authorisation, because Russia, as we know, is a centralised and authoritarian state. I don’t think that any actor could just act freely.

Francisco de Borja Lasheras: I just want to say, first of all, that I am very honoured to be here today. I think it is fantastic that our democracies have joined forces. I suppose that is a portrait of Mr Wellington, so it is good that we are back on track, although on this occasion the enemy is a little bit

Chair: It is not a depiction of the peninsular war, at least.

Francisco de Borja Lasheras: We call it the war of independence.

It is very important that this is going on. Perhaps I will introduce a few nuances to complement some of the things my colleagues have said. First, as you know, this is an ongoing investigation in Spain that is partly taking place in the Parliament, behind closed doors, so I am trying to be cautious with some of my statements. When I quote, or give a fact, I will quote a source so that you can follow up on it.

In the case of Catalonia, we saw a mixture of things that were right—that there were instances of police violence—and of fake news, biased reporting and a misleading account. With all of those patterns, we cannot attribute all of that to Russia; that would just not be correct. It is important to distinguish between proper fake news—there were cases of fake news—and biased reporting. In the case of the Russian-affiliated outlets, you see a little bit of both: you see instances of balanced reporting with instances of biased reporting and fake news. I will try to give some examples.

It provided a fantastic opening, because here was the west at its hypocritical best, if you know what I mean. You had democratic rule of law versus the right to decide, protest versus the constitutional order, and territorial integrity versus secession. So it did provide an opening for a democratic crisis that is ongoing and very complex, and it is important to be nuanced.

In a certain way, the narrative that fascism is back in Spain—even though, so far, knock on wood, the far right does not have parliamentary representation in Spain—wasn’t propagated only by Russian-affiliated actors. There was a little bit of that in other media. In a way, those outlets capitalised on that.

We have to distinguish between RT and Sputnik. Up to the referendum, RT provided relatively balanced coverage, bar some editorials which a Spanish person could find obnoxious. They were trying to give the different sides. From around the day of the referendum and the images of violence onwards, you can perhaps identify them moving away from that balanced coverage. But it is important to distinguish RT from Sputnik. Sputnik was less balanced and more prone to sensationalistic, tabloid-style headlines along the lines David mentioned.

I am not sure I can give evidence on whether this was a systematic attempt, but you saw this kind of news quoting a North Korean representative saying that the solution would be a communist Spain and so on, and then a mixture of biased reporting, misleading reporting and fake news.

On more data and Assange, I think there is some evidence—the report by George Washington University used by El País provides more evidence on this—that some of the actors that were retweeting Julian Assange content can be traced back to those who usually propagate the Kremlin narrative, but not only to them.

Q53            Chair: Could I ask about Julian Assange? I saw a study that said that, having never tweeted about Catalonia before—he only started when the referendum came—the Assange account and the WikiLeaks account tweeted nearly 1,500 times about the Catalan referendum.

Mira Milosevich-Juaristi: In Spanish! For Catalans!

David Alandete: I have some information on that actually. We published a photo of Oriol Soler. Oriol Soler is not an elected representative in Catalonia, but he is a member of the Sanedrín, which is a group of experts, politicians and civil society leaders that meets in the shadows in order to advance the goal of independence. This may sound like fiction, but it is not; it really is a group of very influential people in Catalonia. Oriol Soler is one of them. He is a man of Esquerra Republicana de Catalunya, which is one of the main pro-independence parties. We got a photo of him here in London entering the Ecuadorian embassy for a meeting with Assange. There is obviously a link. I don’t know if he had never tweeted on Catalonia before—he was not tweeting regularly. According to the tool that we have to measure social impact, during September and October—taking the two months together—the top three retweeted, shared and replied-to messages on Twitter were Assange’s, on Catalonia.

Assange has not only tweeted fake news; he has also been very aggressive with people he disagrees with. He has mentioned me several times. He has been very bullish and he has propagated false information, like saying that I am paid by the US Government or whatever, trying to discredit anything that doesn’t abide by his vision that “Catalonia has to be independent; Catalonia is already independent; they claimed independence”, against the facts that we see. He and WikiLeaks have been a cornerstone of this process. The meeting with one of the leaders, and some contacts that he may have had before that we don’t know of, are very eloquent, I think.

Francisco de Borja Lasheras: I have a quick follow-up, to complement that. I think someone should examine Assange’s googling of Catalonia, as he tweeted a map of it at some point prior to the crisis. He is not a person well known for Spanish affairs. That is an entirely different story. It is true that his content was retweeted and shared by some of the hacktivists or individual actors who are usually affiliated with Kremlin propaganda. According to Ben Nimmo of the DFRLab on Medium, they were not the majority, but they were there. We have talked about RT and Sputnik, but it is important to look at other outlets that El País has been looking at, including NewsFront and Russia NewsNow, which are usually affiliated with Kremlin propaganda.

David Alandete: And Voice of Europe.

Francisco de Borja Lasheras: And Voice of Europe. To conclude, now there is a question of ultimate command and control, and I refer to a study by Mark Galeotti, a European Council on Foreign Relations fellow, on Russia’s “hydras”, regarding the different relationships between the intelligence agencies. It shows that some of these intelligence agencies are competing with one another, so that it is very difficult to establish things. But it was certainly not only Russia-affiliated actors propagating disinformation. There were other actors and there were, of course, other individual activists, not only in Catalonia but across the world.

David Alandete: If I may, how do you spread fake news? First, you need a quote, like the one from this so-called UK diplomat—I don’t know if he’s a diplomat or not—or a statement. Then you need a URL: a link to a site that may or may not be a news site, like RT. Then you need to amplify that with thousands or millions of retweets.

RT and Sputnik are at the centre of this. Assange and Snowden are a very handy source for them; anything that Assange says is a quote and a headline. They both share the same editor-in-chief, Margarita Simonyan, who is a journalist in Russia of Armenian descent. If you Google her, you’ll see that there are hundreds of photos of her with Putin. She is the editor-in-chief of Sputnik and RT.

To finish this argument, at the end if you analyse the Twitter accounts—the ladies before us were talking about this—that retweet this material, probably between 70% and 80% are bots. They are automated. They are not only fake accounts; they are robots.

Q54            Chair: Finally from me, and then quite a few colleagues will want to come in: you mentioned Julian Assange and fake news. Someone showed me a tweet saying that he tweeted that the Spanish police had closed down WhatsApp in the Catalonian state.

David Alandete: Yes. Assange has been tweeting a lot of information and the moment it’s proven wrong he deletes the tweets. He published some photos of police brutality on the day of the referendum. We and other outlets in Spain went through them photo by photo. There were photos from 2011, 2010 and 2012. The worst photos that you have seen—I am not defending what the police did that day—were photos of children being beaten. Those photos were not from October.

Q55            Paul Farrelly: I want to try to put the balance back into this. Clearly, with Assange, you have the Venezuelan connection, which is identified by the paper by Javier Lesaca. I have read that paper; I have not read yours, I’m afraid, Dr Milosevich-Juaristi.

Chair: For the benefit of everyone else, it is worth pointing out that there are also agencies in Venezuela that tweet out and that may have been up to mischief as well.

David Alandete: We analysed 5 million accounts.

Paul Farrelly: There’s mischief-making from Venezuela in this paper that we have been circulated, as well as mischief-making particularly by Sputnik. The Assange connection, of course, is that he is holed up in the Venezuelan embassy.

Chair: Ecuadorian.

Paul Farrelly: Is it? I thought it was Venezuela. I was completely wrong. We need to research the Ecuador-Venezuela connection!

David Alandete: The main finding of that report was that many of the 5 million accounts that tweeted in Spanish and English about the Catalan issue during the referendum are also used by the Chávez Government—the Maduro Government, now—in order to spread pro-Chávez propaganda. This is programmed by the analysis of 5 million automatised and algorithmic—

Paul Farrelly: Yes, that is quite interesting.

David Alandete: The Government of Ecuador has been very critical of what Assange has been doing. Actually, I want to say for the record that the chief of Government of Ecuador, who has been visiting Europe, has demanded that Mr Assange stop writing fake news.

Q56            Paul Farrelly: Okay, Assange is not my main point. It is just that this paper analysed 5 million social media shares and spotted mischief-making by Venezuelan groups, and by Sputnik in particular. My problem in trying to get a bit of balance is that a distinction has already been made about the capability of state actors like Russia to plan for known events, so we should be on our guard as to how other actors might play out in referendums, such as, importantly, in Japan about its self-defence forces, if that happens in future, and others. But clearly, in this instance, what you have seen might be called a crime of opportunity: they have got the capability, and they have seen an opportunity.

As a journalist, I worked for Reuters, which is the neutral of the neutral—you would hope, unless you are a Russian. The news from Catalonia would never have been, “Catalonia is holding an illegal referendum, and by the way a few people are getting kicked by the police”. That would not be the news. The actions of the Spanish Government gave them the opportunity to spread fake news and exaggerate. All this fake news may not have affected those people who had already voted in the referendum, but it certainly may have affected the image of Spain outside Spain, and possibly the image of Spain to some people in Spain, and how sophisticated the Spanish Government’s reaction to such circumstances might be in future. The Spanish Government gave them the core of news; would you not agree?

David Alandete: We have covered how the Spanish Government has dealt with this. We published editorials in El País being highly critical of it for not engaging in dialogue when there was the opportunity. Every Government and every country has its problems. The UK has had the Irish problem in the past; the images have not always been good for the Government, and there has been violence in the streets here and there. I am not going to say that the Spanish Government did everything right, because they didn’t, obviously; there is a big problem. Now, the fact is that Russia sees an opportunity to create discord and to bring more trouble within a weekend, from their perspective. The European Union falling to pieces—I think that’s the end game. But of course, the Spanish Government have a big problem on their hands.

Mira Milosevich-Juaristi: I think one of the biggest objectives of the Kremlin was to discredit Spanish democracy and foment division among the citizens of Spain and between Spain and other member states of the European Union and NATO. The final objective is just to show how western democracy is a completely wrong political system, and the final message is, I think, the internal one: just to show how European states and Spain could not be the moral reference for Russians and could not give any more lessons about democracy to the Russians. One of the objectives is also to distract the attention of Russian citizens from the internal problems of Chechnya, Dagestan or the North Caucasus. President Putin, the Minister of Foreign Affairs and the Russian Ambassador in Spain have all highlighted that Russia firmly supports the territorial integrity of Spain. This is true. I think Russia does not have any special interest in the independence of Catalonia, but I think they have a special interest in discrediting our democratic system.

Francisco de Borja Lasheras: Briefly—we could have a separate Committee on Catalonia—I want to emphasise that it did provide an opening at a time when we really had different visions of democracy clashing in our country. In this regard, this disinformation operation certainly includes Russian-affiliated actors—we need to see the command and control, although that is very difficult to prove—but it seems to differ a little from other operations we have seen, for instance, in so-called frontline states, such as the Baltics and Ukraine, where the emphasis is to divide minorities, to pit Russian minorities in Ukraine against the Kiev Government, and so on. That is different from central and eastern European disinformation operations where the issue is migrants—migrants are afraid—and so on. In Spain, they resorted to identity politics.

There are strategic elements, but there are also a lot of opportunistic actions, if they see an opening. Spain is generally a friendly country, but it is a NATO country that is sending troops to the Baltics and so on. They will say that they support your territorial integrity—and they probably do in official terms—but underneath the table, you have an underworld of actors that see an opening and they want to poke in the eye. The aim is Spain, but the overall target is the west.

David Alandete: I have proof of that.

Q57            Paul Farrelly: I understand that. I just wanted to make one final point. It is really about the evidence we have received and the way it is constructed. My second observation about papers like Señor Lesaca’s is that, unlike a scientific paper, there is no control, no comparator. Sputnik and the Russians were identified; they were the fourth largest of the 5 million that were sending their messages all around. But they were behind El Diario, the BBC and El País. So I would have liked to have seen, particularly, a comparator paper analysing the BBC content and the way the BBC content was shared around the world—

David Alandete: Well, the BBC has a Spanish site and they have been reporting on this.

Paul Farrelly: So as to be able to determine whether that content and how it was shared also may have reflected on the image of Spain internationally, in a way it has been said was the intention—

David Alandete: The problem there is—

Q58            Paul Farrelly: Just give me a moment, because there is an issue here. There is the stuff we must be aware of from the Russians—the agenda—but the question is how much influence has it got? Where has it got influence? How much should it be blamed for the bad reflection it casts, for instance, on the image of Spain, compared with the actions of the Spanish Government that fed it in the first place? Then you can move on to different situations, such as the American election. Let us just get it all in perspective.

David Alandete: Sorry, but I think the problem is not about the image of Spain. The Spanish Government or the institutions there will go with it. This is about false information and claims that are not true.

Q59            Paul Farrelly: You have misunderstood the point I was making.

David Alandete: Could you make it again?

Paul Farrelly: What emphasis should we place on that, compared with the actions that have been tweeted and shared around the world?

David Alandete: I don’t understand.

Q60            Paul Farrelly: It doesn’t matter. Let’s move on.

David Alandete: I just wanted to say that it is false information competing with real reporting with a conscious attack on the media. These outlets—RT and Sputnik and their journalists—make videos about people like me, journalists. They attack us directly. There is a Finnish journalist who reported on Russian troll farms. Her life was destroyed by these people publishing personal information. By the way, I have had Julian Assange and people from RT attacking me personally and making videos criticising El País. They prey on weakening the established media. I do not know how they work, but we have internal processes of transparency and accountability. We have an ombudswoman. It is not only about the image of Spain; it is about media reporting and having the disguise of a media outlet in order to spread false information. If we take into account what I was saying before, this is not the worst violence since WWII.

Q61            Paul Farrelly: I understand the capabilities out there; I am just trying to make a distinction. It is out there, but to what extent do the actions of the main protagonists actually fuel that? What effect do the actions of Sputnik and these people have compared with—I was talking about a comparator—the content shared as news around the world by the BBC? That is my point. Should we get a balance between the real-world effects—even though they are close and personal to you—of what these people are doing compared with the actions that give rise to what they are exaggerating? I will stop there. I was in Catalonia in the summer and in Languedoc on the other side, and I am no supporter of Catalonian independence.

Chair: I think Paul’s point is clear. The witnesses were trying to make the point that we are potentially comparing real news, whether you agree with it or not, and fake news that is being created to spread disinformation deliberately. Did you have a point on that?

Francisco de Borja Lasheras: Yes, the question of impact is something we are trying to learn about. It is not only limited to Spain. What I will say tentatively is that in encounters where you see such a multifaceted crisis with different and nasty things, these operations help to further polarise a discussion that is already polarised. External actors were not needed to polarise the discussion, but they contribute by further polarising it.

You mentioned the BBC, but there is an ongoing discussion—I am not a journalist, but I work with journalists—about the fulfilment of journalistic standards and the use of quotes without attribution. There was talk of 900 people injured in Catalonia, but that never happened.

I take your point. I think this is something that we are still learning, but the disinformation campaigns further polarise and further strengthen the different echo chambers. In Catalonia, there are different echo chambers with competing narratives on who is democratic. You see that they are perhaps replicated elsewhere in Europe in politically active segments of society. I am talking about not only public opinion, but people with views who, in a way, see those views confirmed or strengthened and are not provided with a more pluralistic view where there is no black and white. Those sectors tend to be the target of this disinformation operation—not just the average, public opinion folk.

In my country there are fringe parties in the European Parliament, and also mainstream parties, that borrow their opinions of course from the BBC, but also from RT. Do you know what I mean? I think it does further polarise. The actions of the different actors were open to criticism from different sources, and it makes things even more complicated because it fans the embers of conflicts and then internationally you have the ongoing geo-political conflict, but domestically we have very serious issues in our societies.

Q62            Ian C. Lucas: Is your evidence that the Russian Government is seeking to interfere with the outcome of the referendum in Catalonia?

Mira Milosevich-Juaristi: No. I think that the Russian Government did not seek any concrete result of the referendum. I do not have evidence, and I think that—

Ian C. Lucas: Forgive me, but this is the important point for me.

Mira Milosevich-Juaristi: It’s a very important point.

Ian C. Lucas: Because that is what a lot of people are saying.

Mira Milosevich-Juaristi: A lot of people said it, but also the politicians—Putin, Lavrov and many others—said that Russia supports the territorial integrity of Spain, and that is true. But we also have a paradoxical situation. I remember what George Kennan said in the ’60s: that we had to differentiate between the foreign relations and the foreign policy of the Soviet Union at that time. I think that we can apply that in this situation. Russia’s foreign relations are one thing—Russia is a member of many international institutions, even after the annexation of Crimea—and Russian foreign policy is a different thing. Russia would like to recover the Russian state as a great power and would like to—

Q63            Ian C. Lucas: I understand all that. You have answered the question, but I think your colleague wants to come in.

Francisco de Borja Lasheras: Just like I said at the beginning, this is an ongoing investigation so we are trying to provide you with evidence of the things we know. Secondly, in any event to prove—like I said, there is evidence of Russian-affiliated actors meddling in Catalonia. There is evidence of that.

Q64            Ian C. Lucas: There is evidence of—

Francisco de Borja Lasheras: Of Russian-affiliated—

Ian C. Lucas: Yes, but there are all sorts of people—there are people who attack me on social media who have connections in different places, but it is a serious allegation to say that the Russian Government is seeking to interfere with the referendum in Catalonia. We are interested in the evidence. That is what I want to know and I have had an answer. Do you think that the Russian Government is seeking to interfere with the outcome of the referendum in Catalonia?

Francisco de Borja Lasheras: Like I said, we have no specific evidence.

Q65            Ian C. Lucas: Is that a no?

Francisco de Borja Lasheras: That is a “we do not know”.

Q66            Ian C. Lucas: You do not have evidence. Okay. Do you have evidence that the Russian Government is seeking to interfere with the referendum in Catalonia?

David Alandete: The only evidence that I have as a journalist is that Russian state-affiliated TV organisations have been openly spreading propaganda that benefits those who want independence in Catalonia. That is the only thing, as far as I can—

Q67            Ian C. Lucas: I think that is a more interesting answer, because then I am interested in the relationship between the Russian Government and the Russian-affiliated—you were talking about Sputnik.

David Alandete: Yes, fully owned by them.

Q68            Ian C. Lucas: And you think that is sanctioned by the Russian Government.

David Alandete: Well, they are funded by the Kremlin and their editor-in-chief is Margarita Simonyan. She is a Russian journalist who is close to Putin. You can do research on her and who appointed her. I would seriously look into RT and Sputnik, what information they produce and what they cover here in the United Kingdom about all sorts of issues, because I think it is worth seeing. The State Department in the United States has just requested that they register as foreign agents. Twitter has banned them from buying advertisements, because it thinks that is propaganda and not advertising for commercial purposes.

Francisco de Borja Lasheras: I recommend reading Mark Galeotti’s “Putin’s Hydra”. It gives you a fantastic map of who is who, and perhaps you will find some answers there. Like I said, we do not know, but it is important to know that these institutions are not independent media. They are part of a broader strategy. Whether they are the Government or not, I do not know. I hope we will find out.

Ian C. Lucas: Well, that’s why we are having the inquiry. Thank you for your evidence to assist us in that regard.

Paul Farrelly: What was the book?

Francisco de Borja Lasheras: “Putin’s Hydra: Inside Russia’s Intelligence Services”.

David Alandete: We can send you that.

Paul Farrelly: I have read a book called “Putin’s People”.

Q69            Brendan O'Hara: We have heard many accusations about Russian interference in the Catalan referendum. How much of an impact did that have on the result?

David Alandete: First, you would have to accept the result. As you know, the referendum was illegal—not by actions of the Government but by actions of the judiciary. There were no guarantees. There was not a real census. There was not a real—

Q70            Brendan O'Hara: I understand the line of the Spanish state on this matter, but you seem to be saying that in the lead-up to the referendum, Russian interference was so huge and so oppressive that you could not move for it. Has any analysis been done of the effect on the result of that Russian interference?

Francisco de Borja Lasheras: Can I say one thing? We have not said “huge”; we have said “relatively important” as part of other actors that were taking part in these information operations. We would like to know the result of the referendum, but for that you have to ask Puigdemont, who is in Belgium. We have no confirmed evidence of turnout—we can only suppose—nor who voted and so on. The jury is still out on whether that had an impact. This is a different question. The goal of these actors was never to actually influence the vote per se. Their goal was to polarise, and that is going on. Let me go back to Ukraine.

Q71            Brendan O'Hara: Can I just clarify that? What Russian interference there was was not there to affect the result. Is that what you said?

Francisco de Borja Lasheras: No, I am just saying that their main goal in this particular case was to put back on the surface the contradictions of western democracy. I do not have the evidence to claim that they have a goal of affecting the referendum, but it was affecting the overall crisis of the Spanish state that is going on. That is a different game.

Q72            Brendan O'Hara: Let’s take Sputnik out of the equation and look at Russia Today, which I am not a defender of and never have been. You said to this Committee that Russia Today had been broadly fair until the day of the referendum. It put both sides of the argument, but on the day of the referendum, when the violence broke out, Russia Today became partisan, for want of a better word. Does that constitute fake news or news manipulation?

Francisco de Borja Lasheras: No, we said at the beginning that we must distinguish what is fake news. Tanks in the streets of Barcelona is fake news. There were never tanks in the streets of Barcelona.

Q73            Brendan O'Hara: I don’t watch Russia Today. Did it report tanks on the streets of Barcelona?

Francisco de Borja Lasheras: Then you have to look into the news show. Maybe David can provide more information. They were asking some experts, but the headlines, like I said—it is a mixture of biased reporting sometimes and misleading information oftentimes, and sometimes fake news.

Q74            Brendan O'Hara: I am trying to get to the bottom of why Russia Today, which you said was broadly fair until the day of the referendum when the police moved in and the violence broke out, suddenly in your opinion went to that extreme position. Going from being broadly fair to being accused of being extreme does not add up to me.

David Alandete: If I may, on Russia Today, I am not going to judge their work, but the first piece of news is from 2015. I would not say that their coverage has been fully fair. In 2015, there was an attempt at a referendum, too, and they published that South Ossetia would be the first country to recognise Catalonia. As you know, South Ossetia, like other irredentist republics, is just a satellite of the Kremlin. RT has been publishing this type of fake news for a long time.

Q75            Brendan O'Hara: As I say, I am not an apologist for or even a viewer of Russia Today, but I cannot get round how it went from being broadly fair to, the day the violence broke out, suddenly being lumped in with Sputnik. Let me ask you: which news sources correctly reported the police or state violence on the day of the Catalan referendum? Who should we look to as having been fair and correct?

Mira Milosevich-Juaristi: It is difficult to say but, for example, the Washington Post published an article about hundreds of people who were injured by police and then, 10 days after, they published a report that actually only four people had been in hospital. They just corrected a mistake, but in all newspapers—not all of it mistaken information—the attitude of the Spanish police on 1 October was the main news for the first three days. After that, foreign newspapers changed and the information about Catalonia started to change.

It is very important that the Spanish Government have won the diplomatic war, in the sense that immediately the European Parliament, the European Commission and many embassies openly supported the territorial integrity of Spain and condemned the illegal referendum. I think the Spanish Government completely lost public opinion in foreign countries in the first weeks—the week before the referendum and after. How they handled it was a bad strategy on the part of the Spanish Government.

On the question of the impact on the final result, I have one remark. It is very difficult to say, but the director of StratCom—NATO’s Centre of Excellence in Riga—said that the Russian interference here was much lower than the Russian interference in, for example, the Brexit referendum and the United States presidential election. The interference is not such a big deal, but what is new is why Russia, which is usually a very friendly country towards Spain, has done it.

Francisco de Borja Lasheras: As I said, a relatively neutral line of reporting, but I would rather read other international or national media to form my opinion of what was going on in Spain. There is a little bit of an opportunistic attitude in the choice of the editorial line. You can find traces of seemingly western ways of reporting whereby you have different views, and then very subtle ways of putting in misleading bias and a secessionist editorial line. The objective is to create confusion. They don’t try to convince you that Russia is fantastic; they try to convince you that the west is rotten in different ways and there is no truth. That is precisely the goal, and it is a completely different game from influencing the outcome of the referendum. This is just a tool for a broader strategy.

As regards newspapers, in addition to El País you have a fantastic and pluralistic media. In Spain you have La Vanguardia, El Diario, etc., and you could form a really good opinion of what was going on in Catalonia—including, by the way, the violence against police, which few people talk about but which did take place.

Mira Milosevich-Juaristi: What the article showed was that Sputnik is the fourth newspaper after El País, La Vanguardia and El Mundo. Sputnik comes out as the outlet—the agency—that reported the most about Catalonia. Such interest in a Spanish issue is not normal for the Russian public.

David Alandete: If I may, sir, I would like to send for your review a piece of news that we also published about these outlets. To step away from the Catalan issue, they have also been supporting the far right in Spain. As you may know, we do not have a far-right party with parliamentary representation. It is a delicate issue, because we just got out of the dictatorship in ’78 and there has not been space for the far right. But we have published stories on the links between Russian-funded actors and far-right organisations, like HazteOír, which is a group that is advancing a far-right agenda. So I will send it for your review.

Q76            Brendan O'Hara: May I ask just one final question? Do you think that Russia or Venezuela were the only state actors involved in news manipulation and in spreading misinformation and fake news during the Catalan referendum, or were there other actors, closer to home perhaps?

Francisco de Borja Lasheras: You said state actors?

Brendan O'Hara: Were there any states other than Russia or Venezuela that were involved in manipulation?

Francisco de Borja Lasheras: No, we have no evidence.

Brendan O'Hara: There is no evidence at all. Thank you.

Q77            Giles Watling: I get the impression that you feel that there is outside influence in internal Spanish politics, and you are giving us the narrative that this is mainly coming from Russia. I would like to bring this to a more parochial level from our point of view: have you found any evidence or do you find any similarities between what you see in Catalonia and what we had with our referendum for Brexit in the UK—the spreading of misinformation?

David Alandete: Regarding our reporting and the experts who have written in El País about this, there is a pattern that has been followed in Brexit, in the US election, in Germany, in Romania, in Hungary. We have information backing this from two centres that I strongly advise you to get in contact with. One of them is the NATO Strategic Communications Centre of Excellence, based in Riga. The director is Jānis Sārts. He knows a lot about these issues. The other is the European Commission’s strategic centre for communications, under Ms Mogherini. They have also been analysing this and I have information from them that, yes, there is a pattern.

Q78            Giles Watling: In that pattern that you have had information on, has it been the same sort of actors? Is it from Venezuela, Russia, Ecuador? Is it the same sort of influence? What I am trying to say is, is it an overarching narrative that is coming from these actors outside Spain, outside the EU, to perhaps destabilise the EU? Is that perhaps your feeling, your take on it, rather than individual piecemeal reactive misinformation?

Francisco de Borja Lasheras: Like I said, I think that the gentleman from NATO also confirmed in Parliament the other day that in their view the disinformation operation in Catalonia—correct me if I am wrong—was of “low to medium intensity”. I want to emphasise that, because there are different levels of intensity: when you look at Ukraine, when you look at the elections in the US, etc. they show different levels of intensity. Probably Catalonia was not top of Russia’s priorities, but there was some evidence of some meddling, and we have tried to give evidence of that.

On whether there is a pattern, I can only speak with a certain level of expertise on Russian disinformation operations, and there is indeed a pattern that I have been trying to explain to your colleagues. In a way, it is about confusing public opinion, or contributing to that confusion, and polarising discussions that are already very divisive and polarising.

It also mobilises different sets of actors. In Spain, RT and Sputnik have been very active in, for instance, promoting disinformation on Ukraine. Their target has been the Spanish far left, which has had a relative impact, if you look at voting patterns in the European Parliament, but also the far right, regarding the branding of Putin as the epitome of social conservatism. The audience changes, and they tailor-make the campaign to the audience they want to influence. There is a lot of confusion, but there is a pattern, and the pattern is to weaken the west. Spain is just a part of a broader—

Q79            Giles Watling: There is one central intelligence behind it, driving it, in your view?

Francisco de Borja Lasheras: Yes. Spain is not a top-priority target to influence in that regard, if you compare it with Germany, for instance. They really care about Germany, because Merkel is seen as the bulwark to keep the sanctions in place. Does that answer the question?

Giles Watling: Yes. I get that.

Mira Milosevich-Juaristi: And in Germany there was the famous news about the girls who, supposedly, were raped by immigrants. The Minister of Foreign Affairs in Russia talked about it. Finally, it was shown to be pure fake news.

The first object is to weaken the west. Margarita Simonovna Simonyan used to say, “We just would like to offer the alternative point of view—we don’t trust that the truth exists, and our job is to offer the alternative point of view.” This kind of alternative point of view is, of course, always aimed at the weakest points of the liberal democracies, and in this case that was Spain.

Q80            Giles Watling: What I am getting from you is that any opportunity to drive a wedge between the unity of the EU is taken, and that they will be flexible enough to adapt how they apply that pressure.

Francisco de Borja Lasheras: I would say so. In the Czech Republic, it is about the migrants; in Germany, it is about the euro and migrants. Although there is the nuance, like I said, that populist parties also do fake news and also propel their own alternative narratives, it is just that Russian-related disinformation operations are doing so at times with a strong anti-establishment sentiment, which is very important in Spain. For anti-establishment people, there is no truth—“I don’t trust the mainstream media; I don’t trust El País, so I would rather read what these other outlets are telling me.” The Russians know that and they are really effective at mobilising those people.

Q81            Giles Watling: And it is effective, in your view?

Francisco de Borja Lasheras: Yes.

Q82            Julian Knight: I will just move away from Russia for a moment; I am sure you will be really upset at that idea. On the role of social media platforms or publishers in this respect, what do you think they should be doing? Do you have any solutions, or any experience you may have, in terms of any interaction you may have had on this particular issue?

Mira Milosevich-Juaristi: Francisco is an expert.

David Alandete: He is. I represent the media, so I—

Julian Knight: I would like to hear all your views on this particular point.

Francisco de Borja Lasheras: I am really an expert on social media trolling. I don’t know whether there is such a system in the UK, but in my country, under article 20.1 of the Spanish constitution, I am entitled to receive truthful information. That has been construed by some journalist associations in our country as meaning that we cannot have fake news in our constitutional system. We need to find ways to redress that.

On social media—I think this is growing with Facebook and Twitter—the strengthening or promotion of hate speech should not be allowed under our constitution. There are systems to look into that. There is also a difficult issue—I am going beyond my expertise—because some of these bots are also used by companies to sell, so not every bot is related to a foreign actor. The terms of service of Twitter are very important, meaning that social media should not be a platform for things that are against our constitutional rights and liberties. I think we are all playing catch-up with this. This is a very difficult problem and we are only seeing the tip of the iceberg. Our predecessors are used to digital literacy, but this is a role for Parliament, this is a role for companies, this is a role for journalists, and, on the security side, intelligence is a role for security agencies, together with your Parliament.

David Alandete: If I may: this has nothing to do with fake news. We at El País have undergone massive digital transformations, so we no longer consider ourselves to be a print newspaper, just digital. I tend to be put off when I see digital as an objective of something, like this Committee. Nowadays everything is digital, and it is not only digital: everything is mobile and social. Eli Pariser wrote a very interesting book on filter bubbles, and it is a very interesting reflection. The first thing that you do when you wake up in the morning is check your phone, go to social media and find all sorts of information, including information about politics, relationships, sports and family: everything is digital and everything is social. If you are going to be fed information and this information creates a bubble, then of course fake news comes in there, and it poisons the debate. Going back to Mr O’Hara’s question, I do not think they have any interest in Catalonia being independent—quite the opposite. But there are stories about paedophilia rings in the United States led by Hillary Clinton, Obama being born in Kenya and all sorts of things. That is serious stuff, and it is very important that we actually face the fact that the internet is no longer a destination; it is what we live through for almost everything beyond the basic necessities.

Mira Milosevich-Juaristi: Last year, the word of the year was “post-truth”; this year it is “fake news”. That is very significant. I think that we have to draw a distinction between information and knowledge. With this democratisation of information by social media, it is much easier to confuse these two concepts. We have to legislate on it, but I don’t know how.

Q83            Julian Knight: Does anyone have any idea as to how you legislate for it? What about the algorithms? We have had a lot of debate on whether these media companies should be platforms, as they argue, or whether they should be publishers. There are real issues in both respects. Most people say to us that “platform” is not enough—that is effectively not taking the responsibility that they actually have—whereas many people say that “publisher” is quite dangerous. Do you see that algorithm, and the way in which we regulate it, as being quite important? You are nodding, David.

David Alandete: Yes, that is the million-dollar question. Looking back at my experience, we have seen how we used to control all of our distribution. We would print newspapers, bring them to newsstands, and we would be accountable. You could sue the newspaper and we would be responsible. Now I have to say that a massive, absolute and growing majority of our information is distributed by algorithms. What you see from El País does not depend on what I choose; it depends on the decisions of an algorithm that is tailored to you. They do not just distribute established media like the BBC, The Guardian, El País and The New York Times; they distribute all types of outlets that factor in there, and those could be fake news or not. Everything comes to you mediated by something that we don’t know, because it is protected by intellectual property laws. You don’t know it, even though you are the Government. It is very opaque in a way. By the way, these are the platforms that are advocating for net neutrality—that is kind of ironic.

For us, they are not publishers, because they do not create content, but they distribute our news, so they are distributors. As distributors, they should be accountable for what they distribute. If they distribute fake news or something that is against the law, I think they should be accountable. I think even they would agree with this proposition. It is necessary for them to be accountable in order for democracy to stand strong, because the media, as you will agree, are vital to a functioning democratic system.

Mira Milosevich-Juaristi: I think that for western countries it is impossible to legislate for it completely, not least because western countries cannot restrict the use of the internet and social media as Russia, China or North Korea can. In that sense, there is a gap between us and those kinds of countries, and that gap facilitates the distribution of fake news.

Francisco de Borja Lasheras: I was going to perhaps recommend to the Committee this report—

Mira Milosevich-Juaristi: It is pure propaganda all the time! That is a joke.

Francisco de Borja Lasheras: I am sharing brilliant expertise that goes beyond me. It is a very good report by the Centre for European Policy Analysis, by Edward Lucas and Peter Pomerantsev, which tackles some of the questions that you are looking at. It is really good on recommendations, so I will send it to the Committee.

I want to say that we haven’t fathomed how important it is that, in our deliberative democracies, we start from a shared set of facts. I can agree or disagree about tax, or whether it should be more progressive, but we agree on facts. The point where you eliminate the facts, in a landscape of different echo chambers—that really bodes ill for the future of democracy as we know it. I think we have to get used to a certain level of noise; disinformation is here to stay and is propelled not only by state actors but by populist actors. It is very important that we find ways to keep our system going based on civil liberties, but also to find ways not to let hate speech be propelled or minorities be abused. That is part of the danger, because if we eliminate the facts, I see no future for democracy.

Q84            Chair: In a sort of summary of what we have been discussing over the last hour or so, would you agree that the application of fake news in the context of the Catalan referendum and its aftermath was not organic? It is not something that developed in a piecemeal way. It is a consequence of a deliberate strategy that was deployed using news agencies and fake accounts, and its objective was to spread disinformation around that political process. Something like that, on that scale, could only be done according to a deliberate strategy, rather than something that happened by accident.

Mira Milosevich-Juaristi: I definitely agree that there was a deliberate strategy behind it. I have already mentioned the motives and why Russia would want to do that. Definitely, I think it was a deliberate strategy, guided by the Government or by institutions close to the Government. These were not individual actions without any co-ordination; it is not the work of one person.

David Alandete: I think you need a level of planning in order to activate bot accounts. With the so-called troll farms in St Petersburg and other parts of Russia, you need some type of planning and control. The computers don’t become self-aware. You need someone to say, “This is today’s message and we are going to make it”. I think that piece of evidence points towards an affirmative answer to your question.

Francisco de Borja Lasheras: I do not have enough facts to come to a firm conclusion that there was a strategy in this particular case. There are elements confirming that there was an orchestrated disinformation operation at different levels, as I have been trying to say. But it dovetails with a broader strategy on disinformation, of which Catalonia is just one part. There were also organic elements. We have criticised the post-truth narratives of the secessionists, but that is a separate discussion.

Q85            Chair: Thank you. Finally, can I ask if there is any evidence of renewed activity or increased fake news activity in the build-up to the elections in Catalonia on Thursday this week?

David Alandete: Yes, we saw that. As always happens with the news, it is very strong at the beginning, when everybody is reporting on it, but it is still ongoing, every day.

Mira Milosevich-Juaristi: Yes, every day—and what is important is that the Spanish Government took measures only to protect the elections from cyber-attacks, not from disinformation and fake news.

Francisco de Borja Lasheras: I do not think that we can rule out that this will go on after the vote. Suppose the non-secessionists have a majority. I presume we will then have a campaign talking about a coup d’état in Catalonia. I am not ruling it out, but we will find out.

Chair: Thank you all very much indeed.