'Instagram may NEVER be safe for 14-year-olds': Whistleblower Frances Haugen quotes Facebook's OWN research - and says social media giant won't sacrifice a 'slither of profit' as she reveals massive dossier of damning internal evidence

Facebook whistleblower Frances Haugen today issued a stark warning to parents that Instagram 'may never be safe for 14-year-olds' as she said the tech giant's own research found children are turning into addicts and bullying was 'following them into their bedrooms'.

The former employee said Facebook knew Instagram was dangerous for young people but did not want to act because 'young users are the future of the platform and the earlier they get them the more likely they'll get them hooked'. She said the platform was unwilling to sacrifice 'even a slither of profit' for safety improvements. 

Her appearance coincided with her release of a fresh trove of documents which sensationally revealed CEO Mark Zuckerberg 'personally intervened' to allow US right wingers to 'say what they wanted' on the platform. 

The memos - which have been dubbed 'the Facebook Papers' and comprise internal research she secretly copied while working at the firm's 'integrity unit' - also revealed how bosses ignored internal complaints from staff for years to put profits first, 'lied' to investors and sought to shield Mr Zuckerberg from public scrutiny. 

Today, Ms Haugen told MPs that Facebook meant childhood bullying was no longer confined to the classroom. 

'Facebook's own research says now the bullying follows children home, it goes into their bedrooms. The last thing they see at night is someone being cruel to them. The first thing they see in the morning is a hateful statement and that is just so much worse.'

She claimed that the firm's own research found that Instagram is more dangerous than other social media such as TikTok and Snapchat, because the platform is focused on 'social comparison about bodies, about people's lifestyles, and that's what ends up being worse for kids'.

Ms Haugen also cast doubt on whether Instagram could ever be made safe for children. At present, you must be at least 13 years old to use the service, though it is easy for users to lie about their age.

Facebook had been developing Instagram Kids, a version of the app aimed specifically at children, but the idea was put on hold earlier this year amid a raft of concerns.

'I am deeply worried that it may not be possible to make Instagram safe for 14-year-olds and I sincerely doubt it is possible to make it safe for a 10-year-old,' Ms Haugen said.

During today's hearing, Ms Haugen said Facebook's algorithm 'prioritises' extreme content and although the firm is 'very good at dancing with data' it is 'unquestionably' making online hate worse and pushing users towards extremism.  

The whistleblower was addressing a parliamentary committee scrutinising the government's Online Safety Bill, which would place a duty of care on social media companies to protect users – with the threat of substantial fines of up to 10% of their global revenue if they fail to do so.

Opening the session, she said: 'I am extremely, extremely worried about the state of our societies. I am extremely concerned about engagement-based ranking, which prioritises extreme content.' 

Ms Haugen said Facebook was reluctant to sacrifice even a 'slither of profit' to make the platform safer, and said the UK could be particularly vulnerable because its automated safety systems may be more effective with US English than British English. 

'I am deeply concerned about their underinvestment in non-English languages and how they mislead the public in how they are supporting them,' she said.

'UK English is sufficiently different that I would be unsurprised if the safety systems that they developed primarily for American English were actually underenforcing in the UK. Facebook should have to disclose those dialectical differences.'

She said that one of the effects of Facebook's algorithm was to give hateful advertising greater traction, meaning it was 'cheaper' for companies and pressure groups to produce angry messages rather than positive ones. 

Ms Haugen described this process as 'subsidising hate'. 

Responding to this afternoon's session, Home Secretary Priti Patel said 'tech companies have a moral duty to keep their users safe' following a meeting with Ms Haugen. 

Ms Patel said it was a 'constructive meeting' on online safety.  

Firing off a barrage of devastating allegations that will further trash Facebook's already tattered reputation, Ms Haugen claimed: 
  • Facebook's algorithm prioritises hate speech by showing people content based on how much engagement it has received;
  • 'Anger and hate' is the 'best way to grow' on the platform, and 'bad actors' are playing the algorithm by making their content more hateful;
  • The world is 'at the opening stages of a horrific novel' due to extremism spreading via social media unless regulators act;
  • Facebook is reluctant to sacrifice 'even slithers of profit' to prioritise online safety and 'unquestionably' makes online hate worse;
  • Children's relationship with platforms like Facebook is an 'addicts' narrative', with youngsters saying social media sites make them unhappy but they are unable to stop using them;
  • Facebook could tackle this problem but 'they don't because they know that young users are the future of the platform and the earlier they get them the sooner they get them hooked'; 
  • The platform had demonstrated 'negligence' and 'ignorance', but she resisted the term 'malevolence' as this 'implies intent';
  • 'Underinvestment' in foreign languages means Facebook is less able to monitor content not in US English.  

Facebook whistleblower Frances Haugen is appearing before a parliamentary committee scrutinising the government's draft legislation to crack down on harmful online content

The data scientist's appearance coincided with her release of a fresh trove of documents which sensationally revealed CEO Mark Zuckerberg 'personally intervened' to allow US right wingers to 'say what they wanted' on the platform


The Facebook Papers: Whistleblower's sensational new data dump to coincide with today's hearing  

Zuckerberg 'personally intervened' to allow US right wingers to say 'whatever they wanted' 

Facebook's boss interfered to protect political figures who violated the company's content moderation rules, new leaked documents suggested today. In one internal note, dated December 2020, an employee claimed Facebook's public policy team blocked decisions to take down posts 'when they see that they could harm powerful political actors'. In one case in 2019, Facebook moderators took down a video that falsely claimed abortions are 'never medically necessary'. After Republican politicians including Texas Sen. Ted Cruz complained about the move, Mark Zuckerberg was personally involved in Facebook's decision to put the video back up, according to the Financial Times.

 

Apple threatened to pull Facebook from app store over fears it was being used to traffic Filipina maids

Two years ago, Apple threatened to pull Facebook and Instagram from its app store over concerns about the platform being used as a tool to trade and sell maids in the Mideast, according to documents obtained by the Associated Press. After publicly promising to crack down, Facebook acknowledged in internal documents that it was 'under-enforcing on confirmed abusive activity' that saw Filipina maids complaining on the social media site of being abused.

 

 

Company knew young people were going off Facebook but kept it from investors 

Facebook researchers compiled a report in March for chief product officer Chris Cox, detailing concerning data that showed the site was losing popularity with teenagers and young people. One graphic showed 'time spent' on Facebook by US teenagers was down 16 per cent year on year, according to Bloomberg. It also revealed that young adults were spending five per cent less time on the social network. Teenagers were delaying signing up to the site and the number of new teen signups was also declining. The average age Facebook expected people to join was now as old as 24, if ever. While the popularity decline has been studied extensively within Facebook, executives have stayed mostly quiet about the concerns in public. 

 

Staff have reported concerns about tackling hate speech for years - while platform failed to anticipate Capitol riot 

Facebook had been warned by staff for years that it was not doing enough to police hate speech, Ms Haugen has claimed based on documents she has leaked. 

One of the problems is its AI tools do not have the capability to appropriately pick out hateful commentary, and there aren't enough staff with the language skills to do it manually. 

Documents also suggested staff failed to anticipate the disastrous January 6 Capitol riot despite monitoring a range of individual, right-wing accounts. On an internal messaging board that day, staff said: 'We've been fueling this fire for a long time and we shouldn't be surprised it's now out of control'.  

Speaking to MPs today, Ms Haugen likened failures at Facebook to an oil spill.

'I came forward now because now is the most critical time to act,' she told the select committee. 'When we see something like an oil spill, that oil spill doesn't make it harder for a society to regulate oil companies.

'But right now the failures of Facebook are making it harder for us to regulate Facebook.'

The whistleblower said she had 'no doubt' that events like the storming of the US Capitol would happen in the future due to Facebook's ranking system prioritising inflammatory content. 

She said the problem could get worse due to the social media giant prioritising the creation of large Facebook groups so people spend more time on the network.  

'Facebook has been trying to make people spend more time on Facebook, and the only way they can do that is by multiplying the content that already exists on the platform with things like groups and reshares,' she said. 

'One group might produce hundreds of pieces of content a day, but only three get delivered. Only the ones most likely to spread will go out.' 

Ms Haugen said Facebook groups were increasingly acting as 'echo chambers' that are pushing people towards more extreme beliefs.  

'You see a normalisation of hate and dehumanising others, and that's what leads to violent incidents,' she said. 

She added that the platform was 'hurting the most vulnerable among us' and leading people down 'rabbit holes'.

'Facebook has studied who has been most exposed to misinformation and it is ... people who are socially isolated,' she told the select committee.

'I am deeply concerned that they have made a product that can lead people away from their real communities and isolate them in these rabbit holes and these filter bubbles.

'What you find is that when people are sent targeted misinformation to a community it can make it hard to reintegrate into wider society because now you don't have shared facts.' 

The whistleblower argued regulation could benefit Facebook in the long run by making it a 'more pleasant' place to be.   

She said that Twitter and Google were 'far more transparent' than Facebook, as she called for Mr Zuckerberg to hire 10,000 extra engineers to work on safety instead of 10,000 engineers to build its new 'metaverse' initiative. 

Ms Haugen said that 'anger and hate' is the 'best way to grow' on Facebook, and said bad actors were playing the algorithm by making their content more hateful. 

'The current system is biased towards bad actors and those who push Facebook to the extremes.' 

The whistleblower urged ministers to take into account the harm Facebook does to society as a whole rather than just individuals when considering new regulation. 

'Situations like [ethnic violence in] Ethiopia are just the opening chapters of a novel that is going to be horrific to read. 

'Facebook is closing the door on us being able to act. We have a slight window of time to regain people's control over AI - we have to take advantage of this moment.'

Ms Haugen urged MPs to regulate paid-for advertisements on Facebook, because hateful ones were drawing in more users. 

'We are literally subsidising hate on these platforms,' she said. 'It is substantially cheaper to run an angry hateful divisive ad than it is to run a compassionate, empathetic ad.'

The whistleblower said Facebook was reluctant to sacrifice 'even slithers of profit' to prioritise online safety. 

Ms Haugen said systems for reporting employee concerns at Facebook were a 'huge weak spot' at the company.

Ministers fear social media regulation plans could be leaked to Facebook by civil servants who 'want to get job at tech giant' 

By David Wilcock, Whitehall Correspondent for MailOnline 

Ministers fear that plans for greater regulation of social media sites could be leaked by civil servants to former mandarins now working for Facebook.

The alarm was raised after an online harms issue known only to a few people at the Department for Digital, Culture, Media and Sport was raised by a senior executive at Facebook in a recent meeting.

Mark Zuckerberg's social media colossus is facing increasing pressure over misinformation and harmful material, including abuse, shared by its users, with ministers drawing up plans for tighter rules.

Jobs taken by former senior civil servants in the private sector are meant to be scrutinised by the Advisory Committee on Business Appointments (Acoba). But its powers are weak and more junior appointments are not vetted.

A source lashed out at department mandarins, telling the Times: 'The problem is that DCMS officials think it's their job to work there for four years then get a job at Facebook.

'They don't get scrutinised by Acoba except at the most senior level.'

Civil servants in DCMS are among the better-paid mandarins. Median wages there were just below £50,000 last year, compared to a cross-Whitehall median of below £30,000, according to the Institute for Government.

Average pay for the Civil Service is around £30,000, but at Facebook's UK arm in 2019 it was more than £117,000.

Several DCMS officials have gone on to work for Facebook in recent years, some after spells at other organisations in between. There is no suggestion they have solicited information from former Civil Service colleagues.

Nicola Aitken, who formerly 'led UK Government efforts to counter disinformation', is now working there as a misinformation policy manager, having spent a year in between at Full Fact, an independent organisation that highlights misinformation online.

And Farzana Dudhwala has been Facebook's privacy policy manager since January, having spent a year in 2018-19 at DCMS's Government Office for Artificial Intelligence and then two years at the Centre for Data Ethics and Innovation.  

'When I worked on counter espionage, I saw things where I was concerned about national security and I had no idea how to escalate those because I didn't have faith in my chain of command at that point,' Ms Haugen said.

'We were told to accept under-resourcing.

'I flagged repeatedly when I worked on civic integrity that I felt that critical teams were understaffed.

'Right now there's no incentives internally, that if you make noise saying we need more help, like, people will not get rallied around for help, because everyone is underwater.' 

Ms Haugen told the parliamentary select committee that the social media giant was 'unquestionably' making online hate worse.

'We didn't invent hate, we didn't invent ethnic violence. And that is not the question.

'The question is what is Facebook doing to amplify or expand hate ... or ethnic violence?'  

Ms Haugen said she 'sincerely doubted' that it was possible for Instagram to be made safe for children and that the platform promoted an 'addict's narrative'.

'Children don't have as good self regulation as adults do, that's why they're not allowed to buy cigarettes,' she said.

'When kids describe their usage of Instagram, Facebook's own research describes it as 'an addict's narrative'.

'The kids say 'this makes me unhappy, I don't have the ability to control my usage of it, and I feel if I left it would make me ostracised'.'

She continued: 'I am deeply worried that it may not be possible to make Instagram safe for a 14-year-old and I sincerely doubt that it is possible to make it safe for a 10-year-old.'

Ms Haugen said Facebook could estimate people's ages with 'a great deal of precision' but did not act to stop under-age users.

'Facebook could make a huge dent on this if they wanted to and they don't because they know that young users are the future of the platform,' she told a parliamentary select committee.

'The earlier they get them, the more likely they'll get them hooked.'

Facebook has previously outlined plans to set up a so-called Instagram Kids. It argues that under-13s already use Instagram despite age barriers, and that the new app would be safer for them.

Ms Haugen first aired her bombshell revelations in front of the US Senate earlier this month, where she argued a federal regulator is needed to oversee digital giants like Facebook.  

The draft Online Safety Bill proposes something similar by creating a regulator that would monitor Big Tech's progress with removing harmful or illegal content from their platforms, such as terrorist material or child sex abuse images.

Ministers also want social media companies to clamp down on online abuse by anonymous trolls. 

Damian Collins, Chair of the Joint Committee on the Draft Online Safety Bill, called Ms Haugen's appearance 'quite a big moment'. 

'This is a moment, sort of like Cambridge Analytica, but possibly bigger in that I think it provides a real window into the soul of these companies,' he said.  

Mr Collins was referring to the 2018 debacle involving data-mining firm Cambridge Analytica, which gathered details on as many as 87 million Facebook users without their permission.

Ms Haugen first discussed her huge tranche of leaked internal Facebook documents in front of the US Senate earlier this month.

The committee has already heard from another Facebook whistleblower, Sophie Zhang, who raised the alarm after finding evidence of online political manipulation in countries such as Honduras and Azerbaijan before she was fired. 

It comes as concerns were raised that details of the new legislation could be leaked to Facebook by civil servants who 'want to work for government for four years before getting a job at tech giants'. 

Facebook whistleblower docs reveal it 'has known for YEARS' that it fails to stop hate speech and is unpopular among youth but 'lies to investors': Apple threatened to remove app over human trafficking and staff failed to see Jan 6 riot coming 

By Jack Newman for MailOnline

A trove of documents from Facebook whistleblower Frances Haugen has revealed in detail how the tech firm has ignored internal complaints from staff for years in order to put profits first, 'lie' to investors and shield CEO Mark Zuckerberg from public scrutiny.

The documents were reported on in depth this morning as part of an agreement by a consortium of media organizations, as Haugen testified before the British Parliament about her concerns. 

They comprise internal research that she chose to make public. They are now being referred to by the US media as the 'Facebook Papers'.  

They claim, among other things, that:

  • Facebook staff have reported for years that they are concerned about the company's failure to police hate speech; 
  • That Facebook executives knew it was becoming less popular among young people but shielded the numbers from investors;
  • That staff failed to anticipate the disastrous January 6 Capitol riot despite monitoring a range of individual, right-wing accounts;
  • On an internal messaging board that day, staff said: 'We've been fueling this fire for a long time and we shouldn't be surprised it's now out of control';
  • Apple threatened to remove the app from the App Store over how it failed to police the trafficking of Filipina maids; 
  • Mark Zuckerberg's public comments about the company are often at odds with internal messaging. 

Some of the most damning comments were posted on January 6, the day of the Capitol riot, when staff told Zuckerberg and other executives on an internal messaging board that they blamed themselves for the violence. 

'One of the darkest days in the history of democracy and self-governance. History will not judge us kindly,' said one worker while another said: 'We've been fueling this fire for a long time and we shouldn't be surprised it's now out of control'. 

Facebook whistleblower Frances Haugen testifying before British lawmakers on Monday about her concerns over the tech giant's power in the tech and telecoms space. She said, among other things, that Facebook misleads the world by claiming it helps non-English-speaking countries with its technology, when it in fact fuels extremism

One of her complaints is how the company had been warned by staff for years that it was not doing enough to police hate speech.  

Apple threatened to pull Facebook and Instagram from app store over fears it was being used to traffic Filipina maids 

Two years ago, Apple threatened to pull Facebook and Instagram from its app store over concerns about the platform being used as a tool to trade and sell maids in the Mideast.

After publicly promising to crack down, Facebook acknowledged in internal documents obtained by The Associated Press that it was 'under-enforcing on confirmed abusive activity' that saw Filipina maids complaining on the social media site of being abused. 

Apple relented and Facebook and Instagram remained in the app store.

But Facebook's crackdown seems to have had a limited effect. 

Even today, a quick search for 'khadima,' or 'maids' in Arabic, will bring up accounts featuring posed photographs of Africans and South Asians with ages and prices listed next to their images. 

That's even as the Philippines government has a team of workers that do nothing but scour Facebook posts each day to try and protect desperate job seekers from criminal gangs and unscrupulous recruiters using the site.

While the Mideast remains a crucial source of work for women in Asia and Africa hoping to provide for their families back home, Facebook acknowledged some countries across the region have 'especially egregious' human rights issues when it comes to laborers' protection.

'In our investigation, domestic workers frequently complained to their recruitment agencies of being locked in their homes, starved, forced to extend their contracts indefinitely, unpaid, and repeatedly sold to other employers without their consent,' one Facebook document read. 'In response, agencies commonly told them to be more agreeable.'

The report added: 'We also found recruitment agencies dismissing more serious crimes, such as physical or sexual assault, rather than helping domestic workers.'

In a statement to the AP, Facebook said it took the problem seriously, despite the continued spread of ads exploiting foreign workers in the Mideast.

'We prohibit human exploitation in no uncertain terms,' Facebook said. 'We've been combating human trafficking on our platform for many years and our goal remains to prevent anyone who seeks to exploit others from having a home on our platform.'


One of the problems is its AI tools do not have the capability to appropriately pick out hateful commentary, and there aren't enough staff with the language skills to do it manually.   

The failures to block hate speech in volatile regions such as Myanmar, the Middle East, Ethiopia and Vietnam could contribute to real-world violence.  

In a review posted to Facebook's internal message board last year regarding ways the company identifies abuses, one employee reported 'significant gaps' in certain at-risk countries. 

Facebook spokesperson Mavis Jones said in a statement that the company has native speakers worldwide reviewing content in more than 70 languages, as well as experts in humanitarian and human rights issues. 

She said these teams are working to stop abuse on Facebook's platform in places where there is a heightened risk of conflict and violence.

'We know these challenges are real and we are proud of the work we've done to date,' Jones said.

Still, the cache of internal Facebook documents offers detailed snapshots of how employees in recent years have sounded alarms about problems with the company's tools - both human and technological - aimed at rooting out or blocking speech that violated its own standards. 

The material expands upon Reuters' previous reporting on Myanmar and other countries where the world's largest social network has failed repeatedly to protect users from problems on its own platform and has struggled to monitor content across languages.

Among the weaknesses cited were a lack of screening algorithms for languages used in some of the countries Facebook has deemed most 'at-risk' for potential real-world harm and violence stemming from abuses on its site.

The company designates countries 'at-risk' based on variables including unrest, ethnic violence, the number of users and existing laws, two former staffers told Reuters. 

The system aims to steer resources to places where abuses on its site could have the most severe impact, the people said.

Facebook reviews and prioritizes these countries every six months in line with United Nations guidelines aimed at helping companies prevent and remedy human rights abuses in their business operations, spokesperson Jones said.

In 2018, United Nations experts investigating a brutal campaign of killings and expulsions against Myanmar's Rohingya Muslim minority said Facebook was widely used to spread hate speech toward them. 

That prompted the company to increase its staffing in vulnerable countries, a former employee told Reuters. 

Facebook has said it should have done more to prevent the platform being used to incite offline violence in the country.

Ashraf Zeitoon, Facebook's former head of policy for the Middle East and North Africa, who left in 2017, said the company's approach to global growth has been 'colonial,' focused on monetization without safety measures.

More than 90 per cent of Facebook's monthly active users are outside the United States or Canada.

Facebook has long touted the importance of its artificial-intelligence (AI) systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. 

Machine-learning systems can detect such content with varying levels of accuracy. 


On January 6, staff wrote on an internal messaging board: 'We've been fueling this fire for a long time and we shouldn't be surprised it's now out of control' 

But languages spoken outside the United States, Canada and Europe have been a stumbling block for Facebook's automated content moderation, the documents provided to the government by Haugen show. 

The company lacks AI systems to detect abusive posts in a number of languages used on its platform. 

In 2020, for example, the company did not have screening algorithms known as 'classifiers' to find misinformation in Burmese, the language of Myanmar, or hate speech in the Ethiopian languages of Oromo or Amharic, a document showed.

These gaps can allow abusive posts to proliferate in the countries where Facebook itself has determined the risk of real-world harm is high.

Zuckerberg 'personally decided company would agree to demands by Vietnamese government to increase censorship of 'anti-state' posts' 

Mark Zuckerberg personally agreed to requests from Vietnam's ruling Communist Party to censor anti-government dissidents, insiders say.

Facebook was threatened with being kicked out of the country, where it earns $1billion in revenue annually, if it did not agree.

Zuckerberg, seen as a champion of free speech in the West for steadfastly refusing to remove dangerous content, agreed to Hanoi's demands.

Ahead of the Communist party congress in January, the Vietnamese government was given effective control of the social media platform as activists were silenced online, sources claim.

'Anti-state' posts were removed as Facebook allowed for the crackdown on dissidents of the regime. 

Facebook told the Washington Post the decision was justified 'to ensure our services remain available for millions of people who rely on them every day'. 

Meanwhile in Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

The Rohingya's persecution, which the U.S. has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. 

But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation's many dialects they covered.

Despite Facebook's public promises and many internal reports on the problems, the rights group Global Witness said the company's recommendation algorithm continued to amplify army propaganda and other content that breaches the company's Myanmar policies following a military coup in February. 

Reuters this month found posts in Amharic, one of Ethiopia's most common languages, referring to different ethnic groups as the enemy and issuing them death threats. 

A nearly year-long conflict in the country between the Ethiopian government and rebel forces in the Tigray region has killed thousands of people and displaced more than 2 million.

Facebook spokesperson Jones said the company now has proactive detection technology to detect hate speech in Oromo and Amharic and has hired more people with 'language, country and topic expertise,' including people who have worked in Myanmar and Ethiopia.

In an undated document, which a person familiar with the disclosures said was from 2021, Facebook employees also shared examples of 'fear-mongering, anti-Muslim narratives' spread on the site in India, including calls to oust the large minority Muslim population there. 

'Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned,' the document said. 

Internal posts and comments by employees this year also noted the lack of classifiers in the Urdu and Pashto languages to screen problematic content posted by users in Pakistan, Iran and Afghanistan.

Jones said Facebook added hate speech classifiers for Hindi in 2018 and Bengali in 2020, and classifiers for violence and incitement in Hindi and Bengali this year. She said Facebook also now has hate speech classifiers in Urdu but not Pashto.

Facebook's human review of posts, which is crucial for nuanced problems like hate speech, also has gaps across key languages, the documents show. 

An undated document laid out how its content moderation operation struggled with Arabic-language dialects of multiple 'at-risk' countries, leaving it constantly 'playing catch up.' 

The document acknowledged that, even within its Arabic-speaking reviewers, 'Yemeni, Libyan, Saudi Arabian (really all Gulf nations) are either missing or have very low representation.'

Facebook's Jones acknowledged that Arabic language content moderation 'presents an enormous set of challenges.' She said Facebook has made investments in staff over the last two years but recognizes 'we still have more work to do.'

Three former Facebook employees who worked for the company's Asia Pacific and Middle East and North Africa offices in the past five years told Reuters they believed content moderation in their regions had not been a priority for Facebook management. 

These people said leadership did not understand the issues and did not devote enough staff and resources.

Facebook's Jones said the California company cracks down on abuse by users outside the United States with the same intensity applied domestically.

The company said it uses AI proactively to identify hate speech in more than 50 languages. 

Facebook said it bases its decisions on where to deploy AI on the size of the market and an assessment of the country's risks. It declined to say in how many countries it did not have functioning hate speech classifiers.

Company knew young people were going off Facebook but kept it from investors

Facebook researchers compiled a report in March for chief product officer Chris Cox, detailing concerning data that showed the site was losing popularity with teenagers and young people.

One graphic showed 'time spent' on Facebook by US teenagers was down 16 per cent year on year, according to Bloomberg.

It also revealed that young adults were spending five per cent less time on the social network.

Teenagers were delaying signing up to the site and the number of new teen signups was also declining.

The average age Facebook expected people to join was now as old as 24, if ever.

While the popularity decline has been studied extensively within Facebook, executives have stayed mostly quiet about the concerns in public.

The falling rate among young people has stayed mostly invisible as the audience continues to expand, often with duplicate profiles, leading to misrepresentations of audience size, it is claimed.      

The discrepancy forms part of Haugen's argument that Facebook 'has misrepresented core metrics to investors and advertisers' by showing overall growth but excluding factors such as the decline in key demographics.  

Facebook also says it has 15,000 content moderators reviewing material from its global users. 'Adding more language expertise has been a key focus for us,' Jones said.

In the past two years, it has hired people who can review content in Amharic, Oromo, Tigrinya, Somali, and Burmese, the company said, and this year added moderators in 12 new languages, including Haitian Creole.

Facebook declined to say whether it requires a minimum number of content moderators for any language offered on the platform.

Facebook's users are a powerful resource to identify content that violates the company's standards. 

The company has built a system for them to do so, but has acknowledged that the process can be time consuming and expensive for users in countries without reliable internet access. 

The reporting tool also has had bugs, design flaws and accessibility issues for some languages, according to the documents and digital rights activists who spoke with Reuters.

Next Billion Network, a group of tech civic society groups working mostly across Asia, the Middle East and Africa, said in recent years it had repeatedly flagged problems with the reporting system to Facebook management. 

Those included a technical defect that kept Facebook's content review system from being able to see objectionable text accompanying videos and photos in some posts reported by users. 

That issue prevented serious violations, such as death threats in the text of these posts, from being properly assessed, the group and a former Facebook employee told Reuters. They said the issue was fixed in 2020.

Facebook said it continues to work to improve its reporting systems and takes feedback seriously.

Language coverage remains a problem. A Facebook presentation from January, included in the documents, concluded 'there is a huge gap in the Hate Speech reporting process in local languages' for users in Afghanistan. 

The recent pullout of U.S. troops there after two decades has ignited an internal power struggle in the country. So-called 'community standards' - the rules that govern what users can post - are also not available in Afghanistan's main languages of Pashto and Dari, the author of the presentation said.

A Reuters review this month found that community standards weren't available in about half the more than 110 languages that Facebook supports with features such as menus and prompts.

Facebook said it aims to have these rules available in 59 languages by the end of the year, and in another 20 languages by the end of 2022.
