
Can Big Tech make livestreams safe?

Abby Rayner was 13 when she first watched livestreams on Instagram that demonstrated self-harm techniques and encouraged viewers to participate.

Over the next few years, she would become deeply involved in so-called self-harm communities, groups of users who livestream videos of self-harm and suicide content and, in some instances, broadcast suicide attempts.

“When you are unwell, you do not want to avoid watching it,” she says. “People glamorise [self-harm] and go live. It shows you how to self-harm [so] you learn how to do [it],” she adds.

Now 18, Rayner is in recovery, having undergone treatment in mental health wards after self-harming and suicide attempts. When she logs on to both Instagram and TikTok, she says the algorithms still show her graphic and sometimes instructive self-harm posts a couple of times a day.

“I do not wish to see it, I do not seek it out, and I still get it,” she says. “There have been livestreams where people have tried to kill themselves, and I have tried to help, but you can’t . . . that is their most vulnerable moment, and they don’t have much dignity.”

Meta, the owner of Instagram, says it does not allow content that promotes suicide or self-harm on its platforms and uses technology to make sure the algorithm does not recommend it.

“These are extremely complex issues, and no one at Meta takes them lightly,” it added. “We use AI to find and prioritise this content for review and contact emergency services if someone is at an immediate risk of harm.”

TikTok, which is owned by China’s ByteDance, says it does not allow content that depicts or promotes suicide or self-harm, and if somebody is found to be at risk, content reviewers can alert local law enforcement.

What Rayner witnessed is the darker side of livestream video, a medium that has become an increasingly popular way of communicating online. But even within the minefield of social media moderation, it poses particular challenges that platforms are racing to meet as they face the prospect of tough new rules across Europe.

The real-time nature of livestream “quickly balloons the sheer number of hours of content beyond the scope of what even a large company can do”, says Kevin Guo, chief executive of AI content moderation company Hive. “Even Facebook can’t possibly moderate that much.” His company is one of many racing to develop technology that can keep pace.

Social media platforms host live broadcasts where millions of users can tune in to watch people gaming, cooking, exercising or conducting beauty tutorials. It is increasingly popular as a form of entertainment, similar to live television.

Research group Insider Intelligence estimates that by the end of this year, more than 164mn people in the US will watch livestreams, predominantly on Instagram.

Other major platforms include TikTok, YouTube and Amazon-owned Twitch, which have dominated the sector, while apps like Discord are becoming increasingly popular with younger users.

More than half of UK teenagers aged 14 to 16 have watched livestreams on social media, according to new research from Internet Matters, a not-for-profit organisation that offers child safety advice to parents. Almost a quarter have livestreamed themselves.

Frances Haugen, the former Facebook product manager who has testified before lawmakers in the UK and the US about Meta’s policy choices, describes it as “a very seductive feature”.

“People go to social media because they want to connect with other people, and livestreaming is the perfect manifestation of that promise,” she says.

But its growth has raised familiar dilemmas about how to clamp down on undesirable content while not interfering with the vast majority of harmless content, or infringing users’ right to privacy.

As well as self-harm and child sexual exploitation, livestreaming featured in the racially motivated killing of 10 black people in Buffalo, New York, last year and in the mosque shootings that killed 51 people in Christchurch, New Zealand, in 2019.

These issues are coming to a head in the UK in particular, as the government plans new legislation this year to force internet companies to police illegal content, as well as material that is legal but deemed harmful to children.

The online safety bill will encourage social media networks to use age-verification technologies and threatens them with hefty fines if they fail to protect children on their platforms.

Last week it returned to parliament with the added threat of jail sentences for social media bosses who are found to have failed in their duty to protect under-18s from harmful content.

The EU’s Digital Services Act, a more wide-ranging piece of legislation, is also likely to have a significant impact on the sector.

Age verification and encryption

Both aim to significantly toughen age verification, which still consists largely of platforms asking users to enter their date of birth to ascertain whether they are under 13.

But data from the charity Internet Matters shows that more than a third of 6- to 10-year-olds have watched livestreams, while UK media regulator Ofcom found that more than half of 8- to 12-year-olds in the UK currently have a TikTok profile, suggesting such gateways are easily circumvented.

[Charts: apps/sites used by UK children, by age group (%), showing that most younger children are on YouTube and TikTok; and the proportion of 8- to 12-year-olds who said they used a false date of birth when setting up their profile, by app/site (%), showing that many give a fake birth year to appear older.]

At the end of November, TikTok raised its minimum age requirement for livestreaming from 16 to 18, but in less than 30 minutes the Financial Times was able to view several livestreams involving girls who appeared to be under 18, including one wearing a school uniform.

The company reviewed screenshots of the streams and said there was insufficient evidence to show that the account holders were under-age.

Age estimation technology, which works by scanning faces or measuring hands, can provide an additional layer of verification, but some social media companies say it is not yet reliable enough.

Another obvious flashpoint is the trade-off between safety and privacy, particularly the use of end-to-end encryption. Available on platforms such as WhatsApp and Zoom, encryption means only users communicating with each other can read and access their messages. It is one of the key attractions of the platforms that offer it.

But the UK’s proposed legislation could force internet companies to scan private messages and other communications for illegal content, undermining end-to-end encryption.

Its removal is supported by law enforcement and intelligence agencies in both the UK and the US, and in March a Home Office-backed coalition of charities sent a letter to shareholders and investors of Meta urging them to rethink rolling out end-to-end encryption across its platforms.

“I agree with people having privacy and having that balance of privacy, but it shouldn’t be at the cost of a child. There must be some technological solution,” says Victoria Green, chief executive of the Marie Collins Foundation, a charity involved in the campaign.

Meta, which also owns WhatsApp and Facebook, and privacy advocates have warned that removing encryption could limit freedom of expression and compromise security. Child safety campaigners, however, insist it is necessary to moderate the most serious of illegal materials.

Meta points to a statement in November 2021 from Antigone Davis, its global head of safety, saying: “We believe people shouldn’t have to choose between privacy and safety, which is why we are building strong safety measures into our plans and engaging with privacy and safety experts, civil society and governments to make sure we get this right.”

The company’s global rollout of encryption across all its platforms including Instagram is due to be completed this year.

Content overload

Even if age verification can be improved and concerns around privacy addressed, there are significant practical and technological difficulties involved in policing livestreaming.

Livestreams create new content that constantly changes, meaning the moderation process must be able to analyse rapidly developing video and audio content at scale, with potentially millions of people watching and responding in real time.

Policing such material still relies heavily on human intervention — either by other users viewing it, moderators employed by platforms or law enforcement agencies.

TikTok uses a combination of technology and human moderation for livestreams and says it has more than 40,000 people tasked with keeping the platform safe.

Meta says it has been advised by the Samaritans charity that if an individual says they are going to attempt suicide on a livestream, the camera should be left rolling for as long as possible: the longer they are talking to the camera, the more opportunity there is for those watching to intervene.

When someone attempts suicide or self-harm, the company removes the stream as soon as it is alerted to it.

The US Department of Homeland Security, which received more than 6,000 reports of online child sexual exploitation last year, also investigates such abuse on livestreams mainly through undercover agents who are tipped off when a broadcast is about to happen.

During the pandemic, the department saw a rise in livestreaming crimes as lockdowns meant more children were online than usual, giving suspects greater access to them.

“One of the reasons I think [livestream grooming] has grown is because it offers the chance to have a degree of control or abuse of a child that is almost at the point where you have hands-on,” says Daniel Kenny, chief of Homeland Security’s child exploitation investigations unit.

“Livestreaming encapsulates a lot of that without to some degree the danger involved, if you’re physically present with a child and the difficulty involved in getting physical access to a child.”

Enter the machines

But such human-dependent intervention is not sustainable. Relying on other users is unpredictable, while the moderators employed by platforms are routinely exposed to graphic violence and abuse, which can cause mental health problems such as post-traumatic stress disorder.

More fundamentally, it cannot possibly keep pace with the growth of material. “This is where there’s a mismatch of the amount of content being produced and the amount of humans, so you need a technology layer coming in,” says Guo.

Crispin Robinson, technical director for cryptanalysis at British intelligence agency GCHQ, says he is seeing “promising advances in the technologies available to help detect child sexual abuse material online while respecting users’ privacy”.

“These developments will enable social media sites to deliver a safer environment for children on their platforms, and it is important that, where relevant and appropriate, they are implemented and deployed as quickly as possible.”

In 2021, the UK government put £555,000 into a Safety Tech Challenge Fund, which awards money to technology projects that explore new ways to stop the spread of child abuse material in encrypted online communications.

One suggested technology is plug-ins, developed by the likes of Cyacomb and the University of Edinburgh, which companies can install into existing platforms to bypass the encryption and scan for specific types of illegal content.

So far, few of the larger platforms have adopted external technology, preferring to develop their own solutions.

Yubo, a platform aimed primarily at teenagers, says it hosts about 500,000 hours of livestreams each day. It has developed a proprietary technology that moderates frames, or snapshots, of the video and clips of audio in real time and alerts a human moderator who can enter the livestream room if necessary.

But the available technology is not perfect, and often multiple forms of moderation need to be applied at once, which can consume vast amounts of computing power and carry significant costs.

This has led to a flood of technology start-ups entering the moderation space, training artificial intelligence models to detect harmful material during livestreams.

“The naive solution is ‘OK, let’s just sample the frame every second’, [but] the issue with sampling every second is it can be really expensive and also you can miss things, [such as] if there was a blip where something really awful happened where you missed it,” says Matar Haller, vice-president of data at ActiveFence, a start-up that moderates user-generated content from social networks to gaming platforms.
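
To make that trade-off concrete, here is a minimal, illustrative sketch in Python of the kind of sampling loop Haller describes: grab a frame at a fixed interval, score it, and escalate to a human reviewer above a threshold. It is not any vendor’s actual system; classify_frame, the threshold and the callback functions are hypothetical placeholders.

import time

SAMPLE_INTERVAL_S = 1.0   # "sample the frame every second": cheaper if longer, but brief events can be missed
REVIEW_THRESHOLD = 0.8    # risk score above which a human moderator is alerted

def classify_frame(frame_bytes):
    """Placeholder for a harmful-content classifier returning a risk score in [0, 1]."""
    return 0.0

def moderate_stream(get_latest_frame, alert_moderator, stream_is_live):
    """Poll a livestream at a fixed cadence and flag risky frames for human review."""
    while stream_is_live():
        frame = get_latest_frame()
        score = classify_frame(frame)
        if score >= REVIEW_THRESHOLD:
            alert_moderator(frame, score)   # a person makes the final call
        time.sleep(SAMPLE_INTERVAL_S)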

In some moderation areas, including child sexual abuse material and terrorism, there are databases of existing videos and images on which companies can train artificial intelligence to spot when that material is posted elsewhere.

For novel, live content, the technology has to assess whether the material is similar to known examples and could be harmful: for instance, using nude detection alongside age estimation, or understanding the context of why a knife is appearing on screen in a cooking tutorial versus a violent setting.
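
One simplified way to picture those two approaches is a two-stage check: first compare a frame against hashes of known abuse or terrorist material, then fall back to a classifier for novel content. The sketch below, which assumes the Pillow imaging library, uses a toy average hash; production systems rely on far more robust perceptual hashing and trained models, and check_frame, the distance threshold and the classifier score are assumptions made purely for illustration.

from PIL import Image

def average_hash(image, size=8):
    """Toy perceptual hash: shrink to an 8x8 greyscale grid and threshold on the mean brightness."""
    pixels = list(image.convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    return bin(a ^ b).count("1")

def check_frame(frame, known_hashes, classify):
    """Stage 1: match against a database of known-material hashes. Stage 2: classify novel content."""
    h = average_hash(frame)
    if any(hamming_distance(h, k) <= 5 for k in known_hashes):
        return "known_match"   # previously identified material
    return "needs_review" if classify(frame) >= 0.8 else "ok"   # novel content: model score plus human review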

“The whole premise of this is, ‘How do you build models that can interpret and infer patterns like humans?’,” says Guo at Hive.

Its technology is used by several social media platforms, including BeReal, Yubo and Reddit, to moderate livestreams and other formats. Guo estimates that the company’s AI can offer “full coverage” of livestreams in real time for less than $1 an hour, but multiplied by the daily volume of livestreaming on many platforms, that is still a significant cost.
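
As a rough back-of-the-envelope illustration of that cost, taking Guo’s figure at its $1-an-hour upper bound and assuming, for the sake of the sketch, a platform carrying Yubo-scale volumes of about 500,000 livestream hours a day:

cost_per_hour_usd = 1.0          # Guo's upper estimate for "full coverage" of one stream-hour
stream_hours_per_day = 500_000   # roughly the daily volume Yubo says it hosts

daily_cost = cost_per_hour_usd * stream_hours_per_day
print(f"~${daily_cost:,.0f} a day, ~${daily_cost * 365 / 1e6:.0f}mn a year")
# prints: ~$500,000 a day, ~$182mn a year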

“There’s been really horrible instances of livestreamed shooting events that have occurred that frankly should have lasted only two seconds. For our customers, we would flag almost immediately, they will never propagate,” he adds.

Technological advances also offer help to smaller sites that cannot afford to have 15,000 human moderators, as social media giant Meta does.

“At the end of the day, the platform wants to be efficient,” says Haller. “They want to know that they’re not overworking their moderators.”

Social media platforms say they are committed to improving safety and protecting vulnerable users across all formats, including livestreaming.

TikTok says it continues “to invest in tools and policy updates to reinforce our commitment to protecting our users, creators and brands”. The company also offers keyword filters and live community moderation, which lets users assign another person to help manage their stream.

Improvements across the industry cannot come soon enough for Laura, who was groomed on a live gaming app seven years ago when livestream technology was in its infancy and TikTok had yet to be launched. She was nine at the time. Her name has been changed to protect her anonymity.

“She became incredibly angry and withdrawn from me, she felt utter shame,” her mother told the Financial Times. “She was very angry with me because I hadn’t protected her from it happening . . . I thought it was unthinkable for a 9-year-old,” she added.

Her abusers were never caught, and her mother is firmly of the view that livestreaming platforms should have far better reporting tools and stricter requirements on online age verification.

Haugen says social media platforms “are making choices to give more reach [for users] to go live while having the least ability to police the worst things on there, like shootings and suicides”.

“You can do it safely; it just costs money.”

Anyone in the UK affected by the issues raised in this article can contact the Samaritans for free on 116 123


Republished from the Financial Times: https://www.ft.com/content/5280535a-4dd5-482d-ad0d-730e47354d4a
