Facebook messages at the centre of a Christchurch teen’s death, Napier’s mayor abused – these are just a couple of the many thousands of ‘toxic’ online messages and images we now have to deal with, reports Heather Wright
In April, a Christchurch teenager was ordered by a judge to delete her Facebook account after sending “harmful” messages to a 13-year-old girl who later died in a suspected suicide. Also in April, Napier’s mayor, Bill Dalton, quit Facebook following abusive comments. These stories have propelled online abuse back into the news.
But the effects of online abuse and harassment are being felt much more widely, according to a recent Netsafe survey that showed one in three New Zealanders had received some form of unwanted communication over a 12-month period. Around one-third of these people – roughly 10 percent of the population – reported being unable to take part in their usual activities, including working, eating, sleeping or participating online, as a result.
For young New Zealanders the harm figure rises to one in five receiving unwanted, harmful communications.
The Netsafe results echo an Amnesty International report that found one-third of 500 Kiwi women surveyed had experienced online abuse and harassment. Seventy-five percent of these said it had affected their sleep, 49 percent feared for their physical safety and 32 percent feared for the physical safety of their families.
And we’re not talking about people disagreeing with someone else’s opinion. Amnesty International New Zealand campaigns manager Meg de Ronde says the survey set a high threshold, defining harassment “in terms of threats of violence, threats of sexual assault and so on.”
“It’s definitely a big issue,” says Netsafe’s CEO, Martin Cocker. “If 10 percent of the population are being affected, that’s a sizeable problem.”
Set up in 1998, Netsafe is funded by the government to assess and investigate non-criminal complaints about online bullying, harassment and abuse under the 2015 Harmful Digital Communications (HDC) Act. Criminal complaints are handled by the police.
The act defines a harmful digital communication as one aimed at an individual which causes “serious emotional distress” and which has seriously breached, or could seriously breach, one or more of 10 communication principles outlined in the act. These principles include disclosing sensitive personal facts; being threatening, intimidating or menacing; making false allegations; and inciting someone to commit suicide.
Each week, Netsafe receives around 50 reports of harmful digital communications. Cocker believes this is just the tip of the iceberg.
“We know one in 10 New Zealanders is harmed by digital communication each year, which means there are hundreds of thousands of people being affected by online abuse and harassment,” he says.
“There can be a perception that online communication doesn’t harm adults, or that it is an issue for a small minority. However, our research highlights the very real impact that online abuse can have on a person’s quality of life. For far too many New Zealanders, what is being said to or about them online is having a very real negative impact on their daily lives,” Cocker says.
“The culture of exchanges online is so aggressive and toxic at times that people don’t realise that if they are harming someone it is an offence in New Zealand,” he adds.
That’s a view shared by Jon Duffy, Trade Me’s head of Trust and Safety, who notes Trade Me – which, with its 3.5 million subscribers, tends to be a good reflection of New Zealand in general – sees its share of keyboard warriors emboldened by the anonymity of the internet.
“The very existence of the HDC Act supports this – that discourse on the internet is fundamentally different from discourse in real life, hence you need a specific piece of legislation that only applies to communication on the internet,” Duffy says.
“Isn’t it weird that you need kind of ‘rules of combat’ for how you conduct yourself on the internet? Those rules are self-evident in face-to-face dealings – you don’t need a piece of legislation that says you can’t be intimidating or rude in real life, but you do for the internet. I find that really, really interesting.”
As a content host, Trade Me has obligations under the HDC Act and monitors the site, removing or editing content if necessary and regularly banning users if they present a financial, emotional or physical risk to other members. The company, whose disputes team handles about 300 complaints a week, also works with the police if members are threatened.
Questions and answers, feedback and the message boards are key areas monitored, with Duffy noting that the message board is “its own magical world”. While it provides a wealth of information for site users, seemingly innocuous threads can descend into name calling, defamation and potentially harmful digital communications.
“We have to very actively monitor what goes on the message boards because we don’t want people defamed. We want them to have a pleasant experience and also we don’t want to end up a party to a defamation action because we have seen a comment and not taken action.”
But Duffy also notes that the HDC Act process could allow well-resourced or otherwise aggressive complainants to shut down free speech on particular topics.
“There is a real risk there, and I think the courts, Netsafe and venues like Trade Me need to be quite vigilant about that.”
Amnesty International, too, has questions about the HDC Act. In the wake of its survey, it consulted with other parties, including domestic and sexual violence workers.
“The feedback was very much that they wanted a review of the legislation to see whether all of the people that need protection are being protected and whether it was working as it was intended,” de Ronde says.
A request to the Minister of Justice, Andrew Little, for a review was successful, she says, with Little saying it will take place later this year.
Of particular interest are areas such as the cumulative effects of abuse. “The legislation has been really good for one-off, serious cases where the person is known to the survivor, but it hasn’t been so effective that we can see where, for instance, it might not be an individual tweet or Facebook message but a number of people attacking someone over multiple days and multiple posts,” de Ronde says.
Whether people in public-facing positions have the same protection as private individuals, and whether the threshold for harm is appropriate, are also areas Amnesty International would like reviewed.
In April, the Auckland High Court overturned a district court judgement that a man who posted explicit images of his ex-wife on Facebook should not be convicted under the HDC Act because there was not enough evidence that harm had been caused to the woman.
“The [district court] judge had taken the interpretation of harm in quite a narrow view,” de Ronde says.
While initial debate around the HDC Act focused on technology and civil liberties, with the bullying of children a core reason for the legislation, de Ronde says that in reality many of those taking cases to Netsafe or the courts are people experiencing abuse within their partnerships.
“What hadn’t been considered is that this would be an important piece of legislation for people who were in domestic partnerships. People whose partners had shared revenge porn has been one of the primary charges brought using this legislation.”
Revenge porn is a key focus for police in their criminal prosecutions under the HDC Act. Official figures show police prosecuted 176 cases in the first 18 months under the act, the majority for revenge porn, with sentences of up to 11 months handed out.
While Cocker says the law treats revenge porn as the worst of HDC content, he says all of the things Netsafe deals with on a day-to-day basis are “by and large unnecessary communication”.
Netsafe works with industry partners to get the offending content removed, as well as advising people about the ban, block and delete tools available, and about other steps they can take. Talking to the person who produced the offensive content and advising them of the law to “enable them to take remedial steps voluntarily” is also an option.
It doesn’t always work though. Around four percent of cases – often disputes in which harmful digital communication is only one part – fail to get resolved, and the offended party considers legal options. Few follow through to court, however.
But, Cocker says, New Zealand has done more than most other countries to prevent online abuse.
“Could more resources be spent on online safety and would that have a flow-on effect to more positive outcomes? Absolutely.
“But if you were going to give the government a review, you would score them higher than governments in other countries at this stage.”
While we have Netsafe, Australia has a politically appointed eSafety Commissioner, with formal powers to reprimand. Ireland is also considering creating such a position.
“In general, the New Zealand government gives Netsafe quite a bit of flexibility, and for us that has been a very successful model,” Cocker says. “But I can also see the benefits of the commissioner model.”
Multinationals such as Facebook, Google and Twitter have bases in Ireland, “so it may make sense to have a stronger, more aggressive regulator in that country because they have jurisdiction over the companies they would aggressively regulate.”
Cocker says all of the big name players work “very constructively” with Netsafe. “They do their best to comply with us, and it’s important to recognise that we have no jurisdiction over them.
“Organisations like Facebook, Google and Twitter are relatively young companies who still have some learning to do in terms of how they create a trusted, safe environment for people to operate in.
“But, by and large, that is their objective, as much as it is ours. They want their users to trust the environment, be safe and return on a daily basis because they enjoy their experience.”