
Joint Committee on Human Rights

Oral evidence: Democracy, privacy, free speech and freedom of association, HC 1890

Wednesday 15 May 2019

Ordered by the House of Commons to be published on 15 May 2019


Members present: Ms Harriet Harman (Chair); Fiona Bruce; Ms Karen Buck; Joanna Cherry; Baroness Hamwee; Baroness Lawrence of Clarendon; Lord Trimble; Lord Woolf.

Questions 47–58


Witnesses

I: Jodie Ginsberg, Chief Executive Officer, Index on Censorship; Professor Jacob Rowbottom, Associate Professor of Law, Oxford University; Richard Wingfield, Head of Legal, Global Partners Digital.

Examination of Witnesses

Ms Jodie Ginsberg, Professor Jacob Rowbottom and Mr Richard Wingfield.

Q47            Chair: Thank you for joining us in this evidence session. As you know, we are the Joint Committee on Human Rights, which means that half our members are Members of the House of Commons and half are Members of the House of Lords.

As our name suggests, we look at human rights issues. The human rights that we are asking for your evidence on today are in connection with our inquiry into human rights, democracy, free speech, freedom of expression and the right to protest, and how to balance the right of the public to protest with the right of MPs to say what they believe in and to go about their work without being restricted.

We are very grateful to you for coming, with your expertise and your backgrounds, to give evidence to us. We have with us Jodie Ginsberg, chief executive of Index on Censorship; Richard Wingfield, the legal officer of Global Partners Digital; and Jacob Rowbottom, associate professor of law at Oxford University.

I will start by asking you for a general answer to set the scene. I think we all in the House of Commons and House of Lords set great store by freedom of speech and believe that authority should be able to be challenged and that there must be a very high bar before free speech is restricted.

When and how do you draw the line when freedom of speech tips over into criminality, persistent abuse or harassment that is in itself undermining democracy? How up to date is law and practice? In answering the latter question, we should bear two things in mind. One is our general concern that we should not tolerate racism, misogyny or homophobia, but the other is a context in which social media is now an important part of freedom of speech and freedom of expression.

We will get on to questions about social media further on in the session, but for now the question for you is: do you think that the line is being drawn in the right place? Do people know where the line is drawn and where they should not overstep the mark, or are they overstepping it with impunity? How do you feel about that?

Professor Jacob Rowbottom: Thank you for inviting me along today. You ask where the line is drawn, but that is always a very difficult question to answer, because we are talking about a whole range of different types of behaviour in different contexts. It is difficult to generalise, and it is a question of balance that will often be fact-sensitive depending on the particular circumstances.

That said, there are some general principles that can guide us in the application of free speech, some of which have come from the European Court of Human Rights. The first is that one of the key principles, which is well known, is that a high level of protection is normally to be afforded to contributions to matters of public debate and political speech. The court has often distinguished between contributions to a general debate and things such as hate speech, gratuitous insults and targeted abuse, which would fall on the other side of the spectrum. That is a factor to consider in the overall balancing exercise.

Another important factor when you think about social media is that free speech requires that people have the opportunity to say things that are mistaken. People have the right to make mistakes, correct them and realise when they have got things wrong—maybe to speak spontaneously but then realise that they have been out of line and step back—and that should not have life-changing consequences. We can distinguish the spontaneous offhand remark from, say, a persistent, calculated campaign of abuse where it is known that that will cause distress and it is done deliberately.

A third factor within free-speech jurisprudence is that any restrictions have to be proportionate. You talk about where the line is between free speech and criminality, but there are certain stopping points between the two. I think criminal law should be the last resort; there might be alternative measures short of criminalisation that could deal with some of the problems that you are concerned about. That might mean certain forms of regulation that did not impose criminal sanctions, which could be one way to strike a balance.

Chair: Could you just say something about that regulation? What sort of things are you envisaging? I am not sure that anyone would automatically be able to work out what that would be.

Professor Jacob Rowbottom: As you know, there is a big debate at the moment about whether social media companies should be subject to regulation.

Chair: All right. If it concerns social media, we will park this issue and come back to it.

Professor Jacob Rowbottom: A different example is, say, media regulation, where there might be rules on inaccuracy that do not fall foul of the law. It does not say that inaccuracy is illegal, but it requires media companies to correct certain inaccurate statements. You might say that that was a more proportionate measure than a blunt criminal sanction. That is the sort of thing that I am getting at.

Chair: Before we move on to your colleagues, do you have any examples of where there was a conflict, a balance had to be struck and it was struck in the right place, or indeed of where there was a conflict and you think the balance was struck in the wrong place? As you say, everything is fact-specific, but could you just give us some examples?

Professor Jacob Rowbottom: There have been a number of cases in the past dealing with, for example, public figures who may have misled the public and it has been said, “There is a public interest in revealing private information to set the record straight”. That might be an example of where a balance has been struck between free speech and privacy.

They have got that right in certain cases, but I have seen other cases, too. The Law Commission recently looked at social media offences and abuse on social media, and it has a number of examples where it worries that the system may not be particularly protective of free speech when people have made offhand remarks online and have been subject to prosecution.

Chair: But you do not have a “for instance” when you said, “Yes, they had to strike a balance and they got it right in that case”, or, “They didn’t get it right in that case”?

Professor Jacob Rowbottom: I will not offer one just now off the top of my head. I will give that some thought.

Chair: You can come back to it if you like.

Jodie Ginsberg: I will not repeat what Jacob has already said. It is important to recognise in this Committee, which is a human rights committee, that freedom of expression and speech is a fundamental human right that comes with protections, and rightly so. Not only that; it has been recognised by this current Government as a core British value and something to be championed, which is something that I am particularly pleased about.

Index believes that individuals should be free to express themselves unless they are inciting violence, and that includes the right—this has been tested in the European courts—to say things that might shock, offend and disturb. There is some confusion, particularly among the public but also occasionally among law enforcement, about the protection for speech that shocks, offends or disturbs.

I want to express strongly to the Committee that the place for a decision about where speech and expression tip into criminally abusive and harassing behaviour and speech is through the courts, not through the court of public opinion. We think, as we indicated in our written submission, that speech that is considered unacceptable in law is already drawn fairly widely—in some cases, we would even say too widely. We would certainly be concerned about any attempts to broaden the net further for what should be considered illegal speech.

Chair: Obviously from the point of view of Index on Censorship, you talk about the importance of free speech. Do you see that as free speech for the public, or do you also see it as free speech for MPs being able to speak without unacceptable consequences?

Jodie Ginsberg: Both MPs and the public should be able to speak openly and freely. It is an important part of our political discourse that we should be able to share not just our personal opinions but our political beliefs, and to disagree with one another without resorting to threats of violence, which are criminally prohibited.

We think that the potential desire, as a result of the perception of an increase in unpleasant, shocking, disturbing or offensive discourse, to introduce further legal restrictions to limit that would have a knock-on negative impact on freedom of speech and expression—the freedom to express your religious beliefs, for example.

Chair: You say that there is a perception of a problem. Do you think it is all in our minds? Are we just perceiving it?

Jodie Ginsberg: I certainly do not think it is all in our minds, but we have to remember that, for example, the users of Twitter are actually a very small group of people. The vast majority of people do not have thousands of followers on Twitter and are not subjected to conversations and discourse on social media all the time. That is not how many people use social media; many use it simply to share pictures of their cats and their children, not to engage in any kind of politically charged discourse.

The fact that we in this room all operate often in quite a highly charged environment means that there is a tendency for us to think that that is how the rest of society is operating as a whole. That is not to say that there are not genuine threats to MPs that ought to be addressed—I will come on to that later when I talk about the current legislative system—but it is worth being mindful of the fact that the way we in this room might use social media is not necessarily the way in which the wider public use or perceive it.

Richard Wingfield: I echo what both my fellow witnesses have said. I wish that I had thousands of followers on Twitter, but I am still unfortunately on about 500; I do not have quite the influence that I would like.

Both the international and European legal frameworks are fairly clear on when speech can be restricted. The rights to free speech, peaceful assembly, protest and association are of course not absolute rights. They set out a series of safeguards that I am sure you are all familiar with: that you need a clear legal basis and certainty within the law; you need to pursue a legitimate aim; and you need to have measures that are necessary and proportionate and accompanied by safeguards, which for the most part means going through court procedures.

As Jacob said, making determinations about where the balance lies is difficult and fact-specific. That is why we have criminal laws that some might argue are potentially a little wider than they need to be, because that enables a bit of discretion to be afforded to the police and the CPS to determine when it is in the public interest to prosecute and when it might impinge upon the right to free expression. I think you need a degree of flexibility within that.

I know that you are also looking at electoral law as part of this inquiry, but my understanding from the report by the Committee on Standards in Public Life was that the criminal law was satisfactory; I think it said that there was no evidence that the criminal law was insufficient. The Law Commission report on online offences proposed greater clarity over offences relating to issues such as grossly offensive communications or menacing comment, but otherwise considered that the law applied equally online and offline. I think Max Hill also made that point in his evidence.

So I do not think we would see any need for further legislation in the criminal sphere further restricting particular forms of speech.

Q48            Fiona Bruce: I want to pick up on one or two words or phrases there. I would like to think that we are all reasonably clear when threats of violence are spoken, but Jodie used the phrase “shock, offend or disturb”; Jacob said that in certain circumstances issues would have to be viewed to decide whether they were legal or not; and I think Richard talked about discretion.

When I was a law student many decades ago—I have to say, Chair, that they say you are getting old when policemen look young, but when professors start to look young, even associate professors, you really are—we had the idea of the thin-skull rule. That is, you have to take your luck if the recipient sustains more injury than another one might.

I want to tease out your thoughts about the fact that if that kind of approach is applied to the spoken word, it is effectively quite a subjective task, which can make it pretty difficult for people to know whether or not they are breaking the law, and indeed might have a chilling effect. How real do you think that is? Can I have your comments on that thought?

Jodie Ginsberg: I think that is right. As the Law Commission recognised, part of the challenge, and perhaps part of the reason why we are not seeing as many prosecutions as you might expect given the level of threats of violence that we are seeing reported by MPs, is confusion around some of the offences, particularly the communications offences in the Malicious Communications Act and the Communications Act, which include terms such as “grossly offensive” that are highly subjective as well as highly dependent on the social mores of the time. We could see that clarified. The CPS has issued guidance on those offences as they relate to social media. That has been helpful in part, but we could certainly see more.

There has also undoubtedly been confusion between what is not acceptable under the terms of service of a social media company and the law. They are not the same thing, but I think that people often consider them to be. If Twitter’s terms of service say that you may not say a certain thing, people presume that that also means that it is illegal. We are finding people increasingly reporting to the police things that are said on social media, and then the police look into that. In some cases they even say to people, “We know that this is not a crime or a criminal offence”, but we would certainly agree that a police officer calling you up has a chilling effect.

Professor Jacob Rowbottom: I would start by quoting Lord Justice Sedley, who said in a case called Redmond-Bate: “Freedom only to speak inoffensively is not worth having”.

I do not think that offence can be the standard by which we judge expression; as you said, it is subjective, and then the standard of what is permissible becomes reduced to the most easily offended person. For that reason, typically in law there has to be an objective standard where we look at what the impact would be on the reasonable person. The law attempts to deal with this problem by raising the threshold to “grossly offensive”, which suggests that it has to be above a certain level of seriousness before it can fall within certain criminal laws. That is still problematic because of the vagueness of what constitutes gross offence and whether that is even a basis for restricting expression. There is a debate about that.

A further matter that is relevant to the inquiry today is that there is a general assumption within the free speech law that political figures and public figures are less easily offended than the average person.

Fiona Bruce: That is what I was coming to. Should we have to tolerate more than anyone else in this respect?

Professor Jacob Rowbottom: The general view is that it is part of accountability, especially of an elected politician, that they engage with the public and their actions are open to public scrutiny, which means receiving criticism that would not normally occur with, say, a private individual. The fact that criticism is about a public figure raises that free-speech concern and puts it into the general discussion of a matter of general interest. It means that you give people a certain latitude to criticise a politician.

There was a harassment case a few years ago brought by a politician’s press officer against a national newspaper that contained some highly critical articles. That claim failed partly because a public figure would not be harassed by such criticism; they should take it in their stride in a way that would not be expected of a private figure. However, the court said that the decision was not a blank cheque to say whatever you want and engage in campaigns of harassment. If someone repeatedly taunted a public figure in a way that was unrelated to any public events, that could cross the line. This is just one of those things that means “proceed with caution”. There is a higher threshold when dealing with a public figure, but there is still a line.

Richard Wingfield: It is also important, albeit difficult in practice, to make a distinction between political speech and speech directed towards politicians. Politicians are of course human beings like the rest of us, with sensitivities and so on. We should not necessarily expect politicians to be stronger characters; I do not think we want to go in the direction of saying that we expect them to receive a level of abuse that we would not expect the ordinary citizen to deal with.

However, speech that is about policy, a politician’s decisions or political opinions, voting record or manifesto promises requires greater protection, although I accept that in practice those two forms of speech are often very heavily conflated.

On the point about subjectivity, which I know we will look at later, we try to avoid subjective terms, but one that is becoming increasingly common when we talk about online speech is “harmful” speech and “online harms”. Again, the best way is to avoid subjective offence or harm and focus on more objective, measurable impacts. That would be helpful.

Q49            Fiona Bruce: Thank you. I have a separate question. Should the organisations that protect free speech—I am thinking about the police but also others—also be as vigilant in ensuring that MPs are able to engage with the constituents who have elected them?

To give you a little context, MPs are increasingly not holding public surgeries or travelling on public transport as much as they would have done. Some MPs are even considering standing down or not standing again for election. Do you consider it to be a sufficiently serious problem for the health of our democracy that organisations that protect free speech should also be vigilant about ensuring that MPs can conduct their business and interact with their constituents as freely as has been the case until recently? This is not a small phenomenon; we are hearing this from quite a number of our colleagues.

Richard Wingfield: The short answer is of course yes, absolutely. I am fortunate to come from an organisation that looks at human rights in the round rather than one particular right, and we are very concerned at the idea that actions are inhibiting politicians from being able to do their jobs or inhibiting those who want to become politicians from being able to run for office without fear. There is a spectrum of openness.

I lived in the Netherlands, where you would occasionally see the Prime Minister cycling down the street or what have you, and then you can go to very different democracies where public surgeries just do not exist at all. So I do not think there is a precise point where you can say, “This is the exact amount of openness that we want to have”. But it is a huge problem if politicians are now reluctant to hold surgeries, individuals are now reluctant to run for office or MPs are reluctant to engage in discussions or open forums. It means identifying the conduct or speech that is creating the problems that we need to address, asking whether it is prohibited, and, if so, ensuring that that is enforced.

I go back to the point that I am not sure that this issue requires new legislation, but we should think again about how it is enforced, the resources and who is involved in that enforcement.

Jodie Ginsberg: As a representative of the only organisation here that focuses only on freedom of expression, I think that if we are doing our job correctly—in other words, if we are promoting freedom of expression as a fundamental freedom and human right that benefits everyone, regardless of their political views, religious beliefs, gender or sexuality—we will create an environment that enables politicians to engage freely with their constituents but also enables constituents to engage freely with politicians. Part of that is also about talking openly about the kinds of discourse that we as a society might want to encourage without resorting to legislative curbs.

Professor Jacob Rowbottom: I would add that if there is action that is unreasonably deterring anyone from standing for office or communicating with their constituents, you would say that any measures that restrict expression are not to protect a competing interest; they are to protect another fundamental right. When you deal with that, it is not a matter of saying that there is a priority in favour of speech; you are dealing with two rights of equal weight. That can be a factor in the overall accommodation. The ideal is to accommodate both the right to free speech and to criticise but also the right to meet citizens and stand for election, in so far as possible, but it means that you are dealing with two rights of equal weight.

Fiona Bruce: You have touched on a fascinating subject that we cannot go into today: the hierarchy of rights and the importance of looking at reasonable accommodation in more detail than we have thus far as Parliament. Very briefly, as a lawyer do you think there is something in that?

Professor Jacob Rowbottom: You are looking not so much at a hierarchy of rights because you would be looking at speech rights versus speech rights and balancing the two equals. You see that in public order law, where you have protester and counter-protester and the police trying to keep the two separate and allowing them to coexist. How that translates into the online world is less certain, but that is the kind of thing that you are looking for.

Q50            Lord Woolf: The final arbiter in this situation would be the European court, which would be able to influence some of the topics that we are raising. It should be possible to exercise your right to freedom of speech so that you can talk about anything without impinging on the ability of politicians to perform their role. Would you agree?

Professor Jacob Rowbottom: You would hope so. I guess it comes down to the question of the threshold where you think the speech of another person starts to impinge on the politician’s activities. If we are talking about threats of violence, obviously that falls outside the free-speech principle in the first place.

Lord Woolf: What I was suggesting, to see if we could get your response to this, is that if politicians are saying, as they are in some numbers, “We can’t do our job properly in the way that we think it should be done”, and if that is objectively right, that suggests that things are happening that freedom of speech would not protect.

Professor Jacob Rowbottom: This is not a very helpful response, but it depends what is happening that is stopping them from doing their job. The next question would be, “What do we want to do about it?” That is where the response might start to impinge. You might then have a law, but if it was cast in very broad terms it might start to impinge on other forms of legitimate free speech. That is the problem.

Lord Woolf: What I am really saying is: do rights not come with responsibilities? Do you accept that?

Professor Jacob Rowbottom: Yes.

Lord Woolf: Should the responsibility not be that if you want to exercise your right to free speech, you should not do so in a manner that can obviously terrify Members of Parliament?

Richard Wingfield: I am terrified that I will get my law wrong now, but the European Convention on Human Rights does not allow the abuse of those rights in order to undermine others within it. If there is a kind of speech or conduct that is making it impossible for a politician to do their job, it is important to identify what it is.

Lord Woolf: And you have to identify what effect it is having on politicians.

Richard Wingfield: Indeed.

Lord Woolf: If it means that they cannot hold their surgeries, which, in our system at any rate, is a very important part of a politician’s responsibilities, that would suggest, would it not, that people are abusing the right of free speech in such a way that they seek to hide behind that right to justify what they are doing. Surely an MP is a reasonably firm person and we would normally expect that to be so.

Against that background, can you help us at all about the attitude that the European court would take if a case came before it with clear evidence that Members of Parliament were being seriously and significantly affected in their ability to do their duty? One of the things that we have heard from Members of Parliament is that, if they had known how they would have to suffer as they are suffering at the moment, they would never have taken the job on.

Jodie Ginsberg: I echo Jacob here in saying that the question that we are trying to answer here is: what kinds of speech are preventing MPs and politicians doing their job? Are they the kinds of speech that are already prohibited but not being prosecuted and followed up, or are they new kinds of speech acts that we believe are not covered by the law and ought to be outlawed?

The concern is that the law already has a number of protections that prevent people using speech in such a way that an individual feels threatened or unable to go into the workplace, for example. We already have those protections in place, so the question for me is: what new protections are envisaged or requested that are not already on the statute book?

When we talk about people not being able to do their job, we need to be very specific and objective about the kinds of behaviours being experienced and about what we would like to do about them. My concern—this is understandable, because we are in the place where laws are made—is that the automatic reaction is always to say, “This problem has arisen. That must automatically mean that there is a problem with the law, so we must need new laws to deal with it”. That is a very dangerous route to go down, as we may well end up limiting vast swathes of entirely legitimate and desirable political speech.

Lord Woolf: What I am trying to get to, Jodie, is that there is nothing wrong with the laws; what is wrong is the misconception that the banner of free speech can be used to justify conduct that really cannot be justified. It is not sufficient to say, “Why doesn’t the law protect them?”, because the law, in so far as it could protect them, is not able to be enforced.

You cannot have a policeman standing at a surgery and you cannot have policemen looking after MPs going home every night. That is just not realistic. If you are reasonable and accept that these are some of the concerns, you have to make sure that when you defend free speech you defend it as properly exercised. Under the banner of free speech, people are doing things that, in fairness, other people find they cannot deal with. That is especially the case with women, who are being harassed.

Jodie Ginsberg: I would question the term “properly exercised” and what for one individual might be freedom of expression properly exercised—in other words, their right to express concerns about the way their children are taught LGBT issues in schools, for example. They might consider it to be harmful to their children, whereas it might not be considered harmful by another group of people. That is why I think we have such a high bar for the kinds of speech that should be legally curbed.

That is different from saying that as a society we might want to speak more publicly about the kind of language that we use with each other that results in large swathes of different communities feeling unable to speak. That is a different solution from the one I am concerned we might be moving towards because of the concerns about unpleasant speech: that is, the introduction of new legal restrictions.

Q51            Lord Woolf: Can any one of you venture an opinion as to where the European court is moving with regard to this? Looking at it as a spectator from this part of the world, I am conscious that other parts of Europe are experiencing the same kinds of problems. Can you detect any likelihood of any indication being given by the European Court of Human Rights in either direction?

Professor Jacob Rowbottom: First, you are quite right to say that free speech can be misused. It has a powerful rhetorical appeal and often it is used to try to justify conduct that would not normally fall within the free-speech principle. Article 10 of the European Convention on Human Rights has exceptions built into it. It has long permitted some measures to be taken to, say, restrict hate speech, harassment campaigns and so on.

I shall certainly not try to predict the European Court of Human Rights, because I find that quite difficult to do and I have a very bad track record there. However, when action could destroy the essence of someone else’s rights, not only can that be a justification for further state action but it can impose a positive obligation on the state as well in some circumstances. We have seen that in privacy law, for example, where there can be positive obligations on the state to protect people’s privacy.

Therefore, it can be justified to take certain measures. Certainly where you have two fundamental rights at stake and it is a question of balance, the European Court of Human Rights will often give a margin for signatory states to come up with their own measures to deal with that, so it can fall within the margin of appreciation.

Therefore, there is scope and it is quite permissive in some respects, although I guess that it does not mean that we can justify anything done in the name of protecting those rights.

Lord Woolf: Are you saying—and I think you are—that to the European court these are two very important rights, probably of equal weight? If one right is not capable of being exercised in a significant way due to the other right, the court would probably feel that the state concerned should be prepared to take sufficient action not to allow it to impinge.

Professor Jacob Rowbottom: If the action in question was undermining the right to stand for election, then, yes.

Chair: That leads us neatly on to Joanna’s question.

Q52            Joanna Cherry: I want to ask Jodie some questions about Index on Censorship’s position in a moment, but, first, I want to go back to something that we have heard a bit of evidence about previously on this Committee, an Amnesty International report into Twitter called Toxic Twitter.

It set out some qualitative and quantitative research that it carried out over 16 months into women’s experiences of social media platforms, including the scale, nature and impact of violence and abuse directed towards women on Twitter. It focused in particular on politicians in the UK and the States. In Scotland, it looked in particular at the three female leaders of the Scottish political parties at that time. The First Minister of Scotland, Nicola Sturgeon, in her interview with Amnesty International, said the following, and I will read it out to you to see whether you agree with it.

She was talking about the impact of the abuse of female politicians on young women. She said, “What makes me angry when I read that kind of abuse about me is I worry it is putting the next generation of young women off politics. So I feel a responsibility to challenge it not so much on my own behalf, but on behalf of young women out there who are looking at what people say about me and thinking they don't want to ever be in that position”.

Would any of you disagree with that statement—that as female politicians, and indeed just as politicians, we should be looking not so much at protecting ourselves as at the impact of abuse and how it might discourage young women from entering politics?

Jodie Ginsberg: I would not disagree with that at all.

Joanna Cherry: Nicola Sturgeon said that she felt a responsibility to challenge that abuse. I think you would agree with me that there are methods other than the law to challenge it. That is why I was particularly interested in Index on Censorship’s written evidence, in which you said that there were already extensive laws to protect MPs and indeed other people from harassment. Can you elaborate on that?

Jodie Ginsberg: Yes. We have indicated in our written evidence, as you say, that there are a number of laws that deal with speech: the Public Order Act, the Protection from Harassment Act, the Racial and Religious Hatred Act, the Communications Act and the Malicious Communications Act.

There is a raft of laws. We have heard repeatedly that MPs and others are receiving threats daily which under many of these laws would be considered illegal. That those threats are not being put forward for investigation, and are certainly not coming to prosecution, is a cause for concern. However, that does not necessarily indicate a problem with the law. We would agree with the Law Commission that there is a lack of clarity, particularly with regard to the communications offences, which is leading to some confusion among law enforcement but also among individuals. Perhaps some clarity and better guidelines on that would be helpful.

Potentially one of the questions—this is not a question for us—is a resource question: are police forces sufficiently resourced to pursue those most egregious offences? I am particularly pleased that the First Minister talked about the need to challenge, but it is also incumbent on us to recognise that that is incredibly difficult. Politicians are finding it difficult, so imagine how difficult it is for individuals who do not have the same platform.

One of the ways in which we at Index on Censorship would talk about how we can counter some of this unpleasant, shocking, offensive, disturbing speech is to call it out and challenge it. That is part of our responsibility. Fiona Bruce talked about our responsibility as a freedom of expression organisation. One of our responsibilities is to call out that kind of speech using the great freedom that is freedom of expression, which gives us the ability to call out and criticise when we see people using freedom of expression in what we consider to be negative ways.

Joanna Cherry: In your written evidence you said, “Index on Censorship believes that the root of the problem is in a dramatic social change which has occurred in a relatively short and politically tumultuous period, rather than any deficiency in the criminal law”. Can you elaborate on that?

Jodie Ginsberg: It is worth remembering that Facebook has been around for only 15 years. That is an incredibly short period. As a society, we are having to adjust very rapidly to a new social norm in which we are able to broadcast our views about everything, all the time, to a wide variety of people, many of whom do not know us. Initially, when the first flush of excitement about our ability to do that was upon us, people went out and quite happily broadcast all sorts of things that with hindsight they might have regretted.

What I am alluding to here is that I think that in part some of the adjustments that we are discussing about how we interact with one another will inevitably evolve as we learn how to use and adapt to social media and understand the impact that it is having. There are without doubt malicious users of social media, but many people, certainly in their initial enthusiasm in using social media, did not really consider some of the impacts that their language and behaviour might have. We are gradually learning to adapt to that.

If you talk to younger people, certainly those who have gone through anti-bullying education at school that teaches them how to think about how they use social media and interact with one another on it, you learn that a younger generation who are essentially digitally native have quite a different view of how they can use social media, particularly for public dissemination of thoughts and ideas.

Joanna Cherry: You have said in your written evidence, “One of the key challenges in drawing up regulatory systems for social media is that current models seek to identify and prevent ‘harms’ that exceed current legal definitions”.

You go on to give examples of how, for example, “feminist activists expressing concern about changes to the Gender Recognition Act have been accused of ‘hate speech’”. I had personal experience of that in recent days and it is pretty unpleasant. How do you think we preserve the balance between making sure that people can be discouraged from using social media to issue violent threats or commit very unpleasant and offensive abuse but still allow people to express opinions in a civilised debate about law reform?

Jodie Ginsberg: This comes back to the question that Jacob raised earlier with regard to the objective and the subjective. Direct threats of violence are easily identified. It becomes more challenging when we start to talk about “harm” or “hateful”, because what one individual might consider to be harmful is important and essential speech for another person.

To use a different example from the trans example, if you as a particularly religious person have strong views about gay marriage and want to express them as they are your deeply held religious beliefs, which are protected as religious expression, that may be considered harmful and even hateful by a gay person. We deal with that by being extremely circumspect about what we consider to be hateful and harmful conduct when it comes to terms of service and even more circumspect about having an external body enforce that via a social media regulator, for example.

I know that the Committee is looking at that, and I hope we will talk a little more about it in future, because the unintended consequences of saying for example that social media companies must have a duty of care and a responsibility to prevent harm being done to their users can very easily turn into a mechanism by which vast swathes of what I think many of us would agree is legitimate political speech or expression of religious belief or deeply held personal beliefs is considered hateful for one group and is removed. That is increasingly happening on social media.

Professor Jacob Rowbottom: I would like to follow up on what was said earlier about whether there should be more policing and the scope of the current laws. One of the challenges with social media and some of the comments that are directed at politicians is a problem of scale. If an MP is receiving 600 comments or thousands of comments, it is quite hard to enforce the criminal law against that number of people.

There are also risks of perhaps starting to impose the criminal law on people’s private conversations or spontaneous remarks. When you are dealing with that scale, it is not just a question of resources but of consistent decision-making. There are the limits of the law, and your point earlier about counter-speech as a way to deal with it is a very important alternative. I am also less hostile to some of the regulation on social media, because it offers a level of efficiency that can deal with those issues of scale.

Joanna Cherry: I wonder about that. When we were taking evidence from Twitter and Facebook a couple of weeks ago, I asked the representative of Twitter about its policy, which seems to involve having no problem with tweets that threaten physical violence to women but suspending women who make factual statements such as that most killers are male and very few killers are female. That statement was taken down and the person who made it was suspended from Twitter. Equally, people who sent me and other feminists a picture of a cartoon character with a real hand pointing a gun at us, saying “Shut the fuck up TERF”, were considered acceptable. Do you think something has gone wrong with Twitter’s moderation policy there?

Richard Wingfield: There are absolutely huge issues to do with the way social media platforms moderate content. Whether speech is regulated or moderated by state actors in the offline environment or by private companies in the online environment, it is important that we have clear, well-understood rules that are enforced consistently, not arbitrarily, and that there are safeguarding mechanisms for challenging those decisions.

At the moment we have an absolute lack of transparency from many, not all, social media companies about their terms of service, how they are enforced, how they make sure that decisions are made consistently, how they use AI, how moderators are trained and how appeals can be brought against those decisions. This all operates in a black box. The most you might get is “your comment has been removed” and perhaps you can report it to an email address.

From a freedom of expression perspective, that is hugely troubling, and more needs to be done by companies to develop clear terms of service that are understandable, enforced consistently, transparently and non-discriminatorily—or non-arbitrarily—and for there to be mechanisms to appeal decisions on content.

That is something that we care very much about when it comes to social media companies, and we think it is in line with their own responsibilities with regard to freedom of speech and other human rights.

Professor Jacob Rowbottom: Regulation of social media companies is not just about clamping down on content; it is about accountability for some of the standards that are in place. That can mean making sure that they have codes of practice to deal with certain types of problems, such as the abuse of public figures, but ensuring that there is also protection for free speech and rights of appeal.

Jodie Ginsberg: I think there is a problem with Twitter’s hateful conduct policy, and I have had discussions with Katy Minshall about this particular issue.

More broadly, there is a challenge, as the others have said, with the content moderation policies of the social media platforms in general. They lack transparency and accountability, and often there is difficulty in challenging any decisions. Often we feel that if only we could come up with an easy definition we would be able to get rid of all the bad content and only have the good content, but in practice that is incredibly difficult, particularly so if you want to do that through tools such as AI.

Facebook gets something like 1 million complaints a day, and it would be impossible to deal with that entirely using human beings. If we wanted that kind of content moderation we would have to resort to some kind of AI, and AI is notoriously bad at understanding context. For example, we have seen a number of antiracism groups in the US having their Facebook accounts either suspended or removed altogether for racist speech. Actually they are trying to highlight and call out the racist speech of others, but they find that they are the ones being targeted and having their own voices silenced. We need to be extremely cautious when we resort to those kinds of methods.

Another example is the difficulty that AI and content moderators sometimes have in identifying extremist content. Is that content being used as propaganda for a particular regime or to incite hate against a group, or is it essential evidence of criminal action by a Government that is being compiled by activists and which might be needed in order to hold those perpetrators to account?

We have to be very cautious about what we wish for because, in the drive to eliminate bad speech, the unintended consequence is almost certainly that you end up also eliminating what we might consider to be speech that encourages a more tolerant and equitable society.

Q53            Chair: Did any of you see the TV footage of Anna Soubry coming back to the House of Commons from a media interview on College Green when a group of men surrounded her, shouting into her face that she was a traitor and blocking her path as she was trying to get into the building? What did you think when you saw that? Did you think, “Fantastic. The right to demonstrate is alive and well. Freedom of speech is afforded to ordinary people to challenge their representatives”? Or did you think, “Why is the police officer not stepping in and dealing with this as a public order issue?”

What did you actually think? Obviously you are very involved in all these balances being struck, but what was your instinctive reaction when you saw it?

Jodie Ginsberg: I have seen the footage. It is very difficult to judge without actually being there, but what I saw looked to me very much like intimidation. It looked like a group of people trying to prevent someone from going about their lawful everyday business. That is not freedom of expression. We do not have the right to prevent other citizens from going about their everyday business, in the same way that we do not have the right to call people up on the phone all the time to shout at them, because we consider that to be harassment.

To Lord Woolf’s point, we are, worryingly, increasingly seeing free speech being used as a catch-all term to excuse bad behaviour—“I have the right to do this, because I have free speech”—without understanding that while we may have the right to say things that shock, offend and disturb, and I absolutely champion that right, that does not stretch for example to being able to graffiti swastikas on people’s front doors or prevent them from walking down the street. Those things are prohibited in law, and we should be more confident in explaining those restrictions, because they do not fall under Article 10 protected free expression.

Richard Wingfield: I echo what has been said. I also feel deeply uncomfortable when I see newspaper headlines such as “Enemies of the people” and some of the terms that are bandied around such as “treachery” and “traitor”. I wonder whether there has been a coarsening in public discourse that is not just an issue in the UK but is occurring in many other countries as well. It might be that the immediacy of the online environment has helped this, but there certainly seems to be a global societal shift in behaviour that goes beyond the internet and goes to something a bit deeper.

Professor Jacob Rowbottom: I saw the footage as well and echo what has been said. The police have fairly broad powers to intervene in those sorts of situations and I was surprised that there was not at least some sort of minimal intervention. I am not sure why that was the case.

Chair: They were probably thinking that they were protecting these men’s right to protest. Surely that is the only explanation, is it not? If it had been in an ordinary, non-political situation, they would have said something like, “Move along now. Can’t you see the lady’s walking along the street?” Surely that is the context in which they were not interfering—that they were trying to protect the right to protest.

Jodie Ginsberg: Potentially. Sometimes we have to accept that people, including the police, make the wrong calls on these decisions. We certainly believe that wrong calls have been made in the other direction, where we think speech has been wrongly infringed. As I said, the “grossly offensive” terminology in malicious communications is one of those areas. As I came here today, a number of diametrically opposed protests seemed to be happening quite boisterously but without any infringement of people’s ability to walk along the street or any particular aggression. It is really important that we continue to demonstrate that as a country and a society we strongly defend people’s right to be able to protest, including boisterously, without limiting the ability of people to walk down the street.

Chair: Perhaps they did not want to protest against you. That might be the issue. We know some of the people that they were waiting to protest against.

Q54            Ms Karen Buck: You have responded to the point about Anna Soubry and the scope for that being intimidation. To go back to the point about social media: if people are to be on social media, and I think it is now accepted that most people in public life and politics will be, I am not sure that I really get how you respond to the concept of intimidation on social media. If a politician makes a remark or takes a position on social media and then receives a thousand responses, which the DPP has described as virtual mobbing, is that not intimidation? When we were talking about that earlier, I am not sure I got the sense from you that anything could be done about it.

Jodie Ginsberg: I think there is a difference when it comes to intimidation in the legal sense. In the case of Anna Soubry that we are talking about, quite apart from what those individuals might have said to her, three people preventing you from walking down the road by whatever they are doing is a very clear act. It is undoubtedly true that people may feel intimidated, although “intimidated” is a loaded term. It is undoubtedly true that people will react differently to the receipt of many messages telling them that they are considered to be idiotic and they are disagreed with. However, if we are going to create new legislation, we need to be very clear that those messages have criminal content—in other words, that they include direct threats. If one of those messages contains a direct threat to that person’s family or includes posting their address and encourages—

Chair: “We know where your next advice surgery is”.

Jodie Ginsberg: Which directly—

Chair: I must say that being called an idiot would be—

Jodie Ginsberg: —leads that person to believe that violence is likely to be imminent, then yes, and that is already covered in the law. What I cannot see is the necessity for new legislation that would encompass the 1,000 people saying negative things about you. That is a consequence of social media that I do not think should be limited in law.

Ms Karen Buck: I am not necessarily asking you to say that there should be a legislative response. We have agreed that there are massive difficulties with what the social media companies can do, which then effectively leaves us with nothing. This goes back to Joanna’s point: the issue here is that the level of social media abuse, which probably for the most part falls short of actual threats of violence, is disproportionately targeted at women, black and minority ethnic representatives and gay representatives. That impact is itself stifling public life and the freedom of speech of people who are elected in a representative capacity. I am not sure where that leaves us. You have told us convincingly why we should not do anything, but that leaves us with nothing being done.

Professor Jacob Rowbottom: The Law Commission has looked at some of the issues with virtual mobbing and talked about how in the law at the moment there is a lack of clarity, with some possible gaps within the law. There are occasions where groups can be liable—for example, if someone encourages or assists another in a particular offence. However, the problem that we are talking about is where you might have lots of messages, none of which crosses the threshold for criminal liability, but the aggregate effect of all those messages is incredibly aggressive. How do you go about dealing with that?

There are cases where you might have harassment caused by a group that can be criminal, but I think that has to involve some degree of co-ordination so it will not apply when lots of people are acting independently. How should that be dealt with? I would not want to see an extension of the criminal law, because I think that would lower the threshold for criminal liability. People could be found liable where they were just acting and happened to be with other people acting in a similar way at the same time.

That could be an area, as the Committee on Standards in Public Life pointed out, where the social media companies should look for ways to stop MPs being inundated by certain messages, with ways of filtering or blocking out those things, or taking down messages that might be particularly bullying even though they do not meet the threshold for criminal liability. That is discussed in the White Paper and it might be a way forward.

Richard Wingfield: That approach is preferable to looking at legislation and regulation. There are some limited tools and functionality within social media platforms. On Twitter, for example, you can make tweets private so that they cannot be retweeted, and Facebook gives you some control over the privacy level of your posts.

I would like to see users have much greater control over their experience online—to be able to report problems more easily and have them dealt with quickly—but also functionality such as having a small maximum number of comments that can be added to a tweet, so you mitigate the risk of inundation.

I think this is a technical issue that the platforms could and should look at and try to address. That would be preferable to any criminal offence, for example.

Jodie Ginsberg: I agree, but I also go back to the point that I made to Joanna Cherry: that we have to be cautious, because often we think it is possible to solve these problems by saying, “If we could just define the kinds of things that we consider to be abusive then we would eliminate the problem”, but, as we have seen with the questions around Twitter’s hateful conduct policy when it comes to misgendering and trans issues, that has actually ended up with many feminists on Twitter being silenced and intimidated.

We also have to be aware that we tend to think of the phenomenon that allows social media pile-ons in an extremely negative way—that it is automatically a bad thing. However, the same mechanisms that allow social media pile-ons also allow people to convene in groups in support of one another. Think about things like the #MeToo hashtag. You may be aware of the case of a young Saudi woman who fled to Thailand; she used Twitter to highlight her plight, and very quickly the same mechanism that we see happening in negative social media pile-ons was used to incredibly positive effect to highlight her case and get UN authorities and others to visit and intervene, and she now has asylum in Canada.

I do not disagree that there may be technical solutions, but I would be concerned if we said that there should be a power to limit how many people could comment and who could share. Remember that social media has been hugely beneficial, especially to groups who do not have a voice in mainstream media. One of the unintended consequences that you might find from looking to some of these technical solutions is that those people who have been able to find much bigger communities of support, and to get their voices heard and in front of people because they cannot use mainstream media, may no longer have a platform to do so.

Joanna Cherry: To go back to the people shouting in Anna Soubry’s face: for clarification, I certainly would not suggest that those people should have been arrested or anything as heavy-handed as that. However, as a legal adviser to this Committee has pointed out, the police also have a duty to prevent harm, and it would have been perfectly in order for the police just to have said, “Back off. Stop shouting in this woman’s face”, without arresting anyone. Can we perhaps agree that that might be the appropriate response to a group of men shouting in the face of a lone female politician? The appropriate police response is really to say, “Calm down, everyone”.

Jodie Ginsberg: In many cases, the police do that. Although we might agree that that is the appropriate response, we also have to be careful that it does not have the chilling effect of making people feel that they are unable to protest. We have been involved in a number of cases where police have indeed said, “Step back from the protest. You are intimidating people and harm may occur”, and then, for example, artistic venues have shut down productions because they are frightened of the anger that might be generated at the protest. We just have to be cautious when the police advise that a protest is looking as if it may turn into something harmful. That should not mean that the protest is halted altogether.

Professor Jacob Rowbottom: I agree. That is what I meant about minimal intervention: someone having a word. It also shows some of the challenges in this area, where you are dealing with large organisations such as the police—this is a fact-sensitive issue about the balance between free speech and other considerations—and how to get them to act in a consistent way. They will probably be acting according to whatever guidelines they have been given.

Q55            Baroness Hamwee: I should start by declaring that when I was a very young solicitor, about a million years ago, I acted for Index on Censorship. In fact, I think I formed you. Your constitution has stood the test of time.

Jodie Ginsberg: We are still here, so that is a good start.

Baroness Hamwee: The Government have recently proposed that people who intimidate candidates or campaigners in the run-up to an election should themselves be banned from standing for public office for a period. We wanted to get your views on that.

Chair: You will be aware that this issue has been freshened up by the fact that a UKIP candidate in the European election has made offensive and threatening comments to Jess Phillips. That is the context.

Richard Wingfield: Indeed. My understanding is that this would be an electoral offence that attached to existing criminal offences relating to intimidation. There is a slight risk that a lot of those criminal offences capture a broad range of different kinds of expression, the very lowest being a Section 5 public order offence right up to much more direct threats.

If that kind of offence were to be pursued, there would need to be a couple of safeguards in place. The first is that there would need to be some kind of requirement either that there was an intention to intimidate a candidate in some way, or that you committed the offence because you knew the person was a candidate.

Chair: No, this is about the candidate doing it. This is about someone not being able to be a candidate.

Richard Wingfield: Yes, which is the disqualification part of the offence. The second part is that we would want to see this being made discretionary. The European Court of Human Rights has tended to look at blanket disqualifications in the context of voting rights—I think we are looking at the same approach here—as problematic. If you were giving the electoral court or the appropriate court discretion to consider that as an option in the most serious cases, that would be worth considering. But if it applied de facto in all cases of all conduct, that would be problematic.

Baroness Hamwee: Do you think there should be a distinction between intimidation of a candidate and intimidation of somebody who is already in public office?

Richard Wingfield: It is difficult conceptually to make that distinction. I understand that it might be easier to know whether somebody was an elected official, more so than a candidate, given that they tend to have a more prominent profile, particularly locally, in the constituency or what have you, but conceptually I find it difficult to see why that distinction would matter.

Professor Jacob Rowbottom: Also, if it is going to be an electoral offence, I guess that is why they are focusing on candidates and campaigners rather than existing office holders. I will reiterate something that has already been said: if it is going to be built on existing criminal offences, I wonder what additional deterrent it would be. You might think that the existing criminal law already provides a deterrent. How many people who are going on Twitter posting abusive remarks are really contemplating running for office?

The Government’s proposals on this also say that it is consistent with free speech because it applies only where there is an existing criminal offence. To support that, you have to be pretty confident that the existing law is compatible with freedom of speech, and we have heard about things, such as Section 127 of the Communications Act, which raise certain questions.

However, I am not that sure how it would be enforced. It is not that clear in the proposal. Will it be an electoral sanction imposed after a finding that it was a criminal offence, or would it be open to someone to bring an election petition after a campaign if they make such an accusation about the candidate who has been elected? I do not know.

Baroness Hamwee: There are big practical problems in actually applying this, are there not? If it is in the run-up to an election—I think the Government have made some distinction between the long campaign and the short campaign, but even the long campaign is not that long—if you are going to prove a criminal offence, time will have gone by. Sorry, I should not put words in your mouth.

Jodie Ginsberg: I am deeply concerned about this. Decisions about who should be barred from standing for public office should be taken only in the most extreme circumstances. I am unclear about who is making the decision in this instance about whether they should be barred. We have a court system for a reason; it is fundamental to our democracy. If somebody is guilty of or suspected of a criminal offence, they ought to be going through the procedures where that is determined.

I am not clear how this would deal with the UKIP incident, for example. If that person has not been charged with, or even investigated for, a criminal offence, who is deciding whether he has engaged in something that is a criminal offence? Are we deciding that that is happening in the court of public opinion, or is there some kind of judicial review of it?

Without some kind of transparent, independent judicial process that is linked to the criminal law, I cannot see how we would not end up with a candidate potentially being barred simply because someone does not like their speech, rather than that they are provably considered to be potentially guilty of a criminal offence. It is very worrying.

Q56            Baroness Hamwee: Can I pursue with Richard one of the points made in his written evidence?

You say that where a report is made that alleges threats of violence or intimidation against those running for election, those reports should be considered more urgently, and you suggest that as a precautionary measure they might be taken down immediately pending a final decision.

The question is again about the distinction between threats against candidates—particular protection for them, if you like—and threats against those who have actually been elected.

Richard Wingfield: I suppose my point was about looking at the way companies enforce their terms of service. That will often include things that go beyond the criminal law, as has been pointed out, such as intimidation or lesser kinds of threats.

My hope is that when it comes to the removal of content or moderation we apply as many of the safeguards from the offline world—the real world, if you will, in some ways—to the online world. If you are looking at things like that, with speech you can take interim measures pending a final determination, because it is very difficult to make a decision quickly, but there is a risk of harm if you do not temporarily put a pause on things.

That is an alternative: instead of immediately deleting it and forgetting about it, saying that we will make a temporary decision before we look into it. There are challenges in doing that online, because things move so quickly that even taking a few days to make a decision might mean that the issue is no longer of relevance or that the election might already have happened.

I do not deny that there are some real challenges with this, but it might be a more proportionate way of trying to deal with some of the speech that platforms see, are not quite sure what to do with, think there is a risk and so perhaps temporarily suspend until they can really look into it.

Q57            Baroness Lawrence of Clarendon: We have been talking quite a bit about private social media companies. What would you say about their responsibility for the safety of users and about illegal or harmful content that is hosted on their site? How do you try to balance that? I listened to some of the stuff that you were saying, and one of the questions you answered just now was about Jess Phillips, what was posted on the site about her and whether the person responsible should be stepped down as a candidate. How do you balance those harms?

Jodie Ginsberg: Let us start from the beginning. Private social media companies have the ability to set their own terms of service and can decide what they consider to be welcome or unwelcome on those platforms. As we have heard, when we talk about their responsibilities to their users, some of the primary responsibilities have to be those terms of service being implemented transparently, a means of appeal, and users understanding the terms of service and what they are signing up to. That is key. When we talk about responsibility, those are the kinds of responsibility that we might want to think about.

It is interesting that increasingly we are talking about “illegal” and “harmful”. I come back to the point that there is a huge challenge in conflating the two. The terms “illegal”, “harmful” and “abusive” are all conflated to the point where people begin to think that things that are harmful are illegal, or potentially should be illegal. If we think that they should be illegal, the right place for that decision to be made is indeed in the legislature.

That process should not be devolved to the companies to make those decisions effectively by proxy, which means that we need to be clear, because one of the things that are being proposed under the online harms White Paper is that the responsibility for dealing with harmful content should fall to the companies. What is meant by “harmful content”? As we have heard repeatedly, there is a broad range of speech and expression that some people consider harmful that is essential speech for others.

I am concerned by the way the online harms paper is constructed. Some of the suggestions relating to the speed with which companies are expected to deal with harmful content mean that the knee-jerk response of companies faced with enormous fines and 24 hours to remove content will be to do it indiscriminately, which will end up hoovering up vast swathes of legitimate content. That is not to absolve them of responsibility for putting in mechanisms that, for example, allow people to report abusive or bullying behaviour, but the focus needs to be on whether companies have processes in place rather than, if you like, setting up a quasi-judicial system to decide the types of content that are prohibited.

Richard Wingfield: I echo that absolutely. One of the points I made in our evidence is that we absolutely think there is a need for platforms to be more reactive when problems occur, for users to be able to report them and for them to be looked at expeditiously, and for those terms of service to be enforced consistently.

Our concern with the model which the Government seem to be proposing is that it encourages a preventive approach—as some would say, prior restraint—to prevent the harms from taking place before they have even happened. The analogy is health and safety, where you try to find what might be risky and stop it from becoming dangerous. That would be their role.

Our concern is that it essentially means either checking everything that is said before it goes online to see whether it is legal, illegal or in breach of terms of service, or the constant and proactive monitoring of all content on the platform. As we read it, those are the only ways in which the duty of care could be complied with to prevent the harm.

From a freedom of expression perspective, if we looked at what that would mean in the offline world we would be pretty horrified. It would basically mean us needing permission every time we wanted to say something or having it checked before we were allowed to say it, or having recording devices installed in every room, corridor, pub and place of employment and constantly being listened to in order to ensure that no one was ever saying anything illegal or harmful. That is my concern.

As Jodie pointed out earlier, the only way that could feasibly be done is with artificial intelligence, which, first, is unfortunately very inaccurate because it cannot understand context, satire or irony, and, secondly, would mean that instead of having transparent and accountable decision-making about what content was or was not acceptable, it would go into an invisible black box so that we did not understand what decision was being made or how to hold it accountable.

So huge concerns are thrown up by that kind of model, despite the Government’s very good intentions, which we understand. We would strongly urge the Committee to look at mitigating those risks by taking a more reactive approach and ensuring that users are more in control and platforms more responsible for enforcing their terms of service, rather than any duty to prevent harm before it arises.

Professor Jacob Rowbottom: We hear concern being expressed about the power of some of these private companies. My take on that is slightly different. They already have that power and they exercise it. They organise information and make decisions about what information people are likely to see.

A key should be to say: “Should they not have some accountability for decisions that are going to be made?” I take the point that if there is going to be regulation, there are reasons to proceed with caution and to think very carefully about the steps that are taken, but there is scope for regulation or platform responsibility to do things that law cannot.

It is often said that law is a very blunt tool and can have grave consequences for the people involved. Regulation and platform responsibility can be more flexible and might provide a way to deal with matters such as bullying or invasions of privacy where you might think that even if someone is free to say something, that does not mean it should be stored and made permanently accessible to anyone. In some cases, taking something down might be the most proportionate response. So there is some scope for that, even though there is a need to proceed with caution.

Baroness Lawrence of Clarendon: Do the threats to kill and so on fall between “harm” and the illegal side, where it is the company’s responsibility to look at that? If individuals are receiving those sorts of threats, whose responsibility is it to decide what they are going to take down and what are they going to leave up? If I were receiving those sorts of threats, I would begin to think, “Okay, who is thinking about the harm to me?”

Richard Wingfield: Ultimately, whether it is a harm that takes place online or offline, if we as a society think that it should not be allowed and therefore should be prohibited by the law—ordinarily by the criminal law, but sometimes not—it is the responsibility of the police to enforce the criminal law. They are a state actor that is accountable and transparent, and safeguards are in place.

I would be concerned about the idea that just because there are difficulties with that and concerns about resource and scale, we essentially privatise that law enforcement role and give it to the companies to deal with instead. If something illegal happens online—someone makes a death threat, for example—we should deal with that in the same way that we deal with a death threat made offline: to have the police investigate it and to have the person brought to justice through the courts.

We as a society say that that crosses a threshold and is so unacceptable that we think criminal sanctions are the way to go. If you simply leave it to the platform to delete it, the person who made the statement is sort of forgotten about even though they committed the crime, and there is no transparency or accountability with regard to what has happened. As far as possible, I would like to keep that illegal content, which covers the vast majority of what we are concerned about, within the existing process of law enforcement—the police, the CPS and the courts.

Jodie Ginsberg: I would go back to the previous point that although we talk about these platforms as the public square, for the time being—at least until someone wants to nationalise them, which would raise a whole different set of concerns—they are private companies able to set their terms of service, and we as users sign up to those terms and agreements. The important thing is that we understand how they are being implemented and that that is done fairly and transparently.

We are talking about responsibility. If the platform has said that it will not tolerate these kinds of behaviours, you would expect it to enforce that equitably across the board. In practice, however, it seems to be much more whack-a-mole than that; there are certain kinds of speech that they quickly remove, but other kinds seem to remain up for a long time despite several complaints.

Those are the kinds of behaviours that we want to be looking at and tackling, rather than drilling down to say, “These are the specific kinds of content that we think you should be outlawing on platforms.”

Richard Wingfield: One interesting mechanism that some are considering, rather than taking a health and safety duty of care approach, is to look at it from the perspective of consumer protection. When we buy goods, services and products and enter agreements, for example, there are conditions ensuring a minimum level of protection that the goods and services will be acceptable, and there are consumer protection bodies that can enforce that if they are not.

Their focus is very much on making sure that the contracts and agreements that we as consumers get into are clear: that consumers understand them, that they are enforced consistently and consumers are not left out of the picture.

There is a role for looking at that kind of model to deal with some of the harms. As Jodie was saying, a strong focus is needed on the transparency of companies’ terms of service and their enforcement rather than any question of liability.

Q58            Lord Trimble: Turning to online anonymity, such anonymity might protect people who would otherwise be at risk but can also provide a cloak for people who are breaking the law. How does one resolve that?

Professor Jacob Rowbottom: The default rule at the moment is that there is no right to anonymity, but also no obligation to reveal your identity. There are circumstances in which you can require someone to reveal their identity. That, I think, has been discussed in relation to something like the imprint requirements in electoral communications. That might be one example of where you require someone to reveal who they are.

How far there is true anonymity online is questionable, in so far as if you have a court order you can take steps to find out who is behind a message. In many cases that will be effective and it is what most lawyers would use. That might be a way of dealing with things where a legal wrong has occurred.

Jodie Ginsberg: There are plenty of good reasons why people would choose to be anonymous online. Often we think about them in relation to repressive regimes, where people might want to be anonymous because otherwise they would face jail or worse. But actually there are many other reasons.

You might want to ask a question about a medical condition that you do not want anyone to know about. You might want to ask questions about changing your religion or your religious beliefs where, if somebody knew who you were, you would incur the wrath or worse of your family members or peers. You might use anonymity to question dubious business practices and not want your employer to know.

It is important to remember that anonymity is not just about activists under incredibly repressive regimes. There are many reasons why people might want to be anonymous. One suggestion has been that people could remain publicly anonymous while the social media companies knew who they were. But given what we know about social media companies and how they handle our data, I certainly would not want to hand over the details behind my anonymity and risk exposure.

Anonymity is an important protection for many reasons, but we have heard examples of where people’s identities can be forcibly revealed when they are suspected of criminal action; we have those measures in place.

Another interesting thing is that we often assume that bad behaviour is fuelled by the ability to remain anonymous. In fact, studies have shown that unfortunately people are quite prepared to be vile using their own names. A study by the University of Zurich in 2016 looked at something like half a million social media accounts that were engaged in insulting and abusive messages, and the majority of those accounts were under real names; they were not anonymous. So the assumption that anonymity drives atrocious behaviour is false. Sadly, people are quite prepared to be atrocious when identifiable.

Chair: So it is not anonymity; it is impunity that is allowing what you describe as atrocious behaviour.

Jodie Ginsberg: Yes, in part.

Chair: So they are prepared to be identified.

Jodie Ginsberg: For many people there are no consequences. We have talked about different levels of consequence. Some people are engaging in actively criminal behaviour and there is no consequence. They are threatening people with violence, which is criminal, yet are not being pursued.

In other cases, they feel the safety of the group; we have talked about the online mob that enables them to feel that. We have not yet always found sufficient mechanisms to challenge those people, but it does happen. We are increasingly seeing the effect of negative pile-ons being used in a positive way, where people have been criticised for their poor or vile behaviour.

The problem is not anonymity. We will not wave a magic wand and solve the problem of vile behaviour by making people reveal their names and identities.

Richard Wingfield: I echo that. I would point out that while there may not be a right to anonymity per se, anonymity is a critical part of the right to freedom of expression. The then UN special rapporteur on freedom of opinion and expression, David Kaye, published a report in 2015 looking at anonymity and encryption.

Often it is not just the ability to express yourself but the ability to do so through anonymity that makes freedom of expression meaningful. Without it, you may not be able to say some of the things that you want to say. You may fear repercussions or may just need that cloak or shield to allow you to find information. For example, you might be a young gay man who wants to search for information about sexuality. If there is no means to do that through anonymous or encrypted web searching, there is a risk that people will find out.

I echo Jodie’s point that this is not necessarily about activists in repressive regimes. All of us want some kind of privacy when we are talking to other people and looking for things. Restrictions on that need to be very carefully considered.

I share the point, Chair, that the issue is really about impunity and the ability to get away with things rather than anonymity itself.

Chair: Thank you very much indeed for your evidence to us this afternoon. That is the end of the public session.