

Meta faces sanctions for Facebook’s involvement in Genocide


Facebook’s role in igniting ethnic bloodshed in Myanmar has led to a renewed demand for Meta to make restitution to the Rohingya people.

According to a new report by Amnesty International, Facebook’s involvement in the atrocities committed against the Rohingya in 2017 was not merely that of “a passive and neutral platform” that failed to adequately respond to a serious crisis, as the company has attempted to claim, but rather that Facebook’s core business model — behavior analytics — was at the root of the genocide.

Amnesty concludes that “Meta’s content-shaping algorithms actively amplified and promoted content on the Facebook platform which incited violence, hatred, and discrimination against the Rohingya,” placing the blame on its tracking-based business model, also known as “invasive profiling and targeted advertising,” which it claims relies on “inflammatory, divisive, and harmful content.” This dynamic, Amnesty argues, makes Meta accountable for actively inciting violence against the Rohingya.

In 2018, UN human rights investigators issued a warning that Facebook was fueling hate speech and acts of violence against the Muslim minority living in Myanmar. The internet giant later acknowledged that it had “moved too slowly to stop the spread of misinformation and hate” on its platform.

Because its engagement-maximizing algorithms and ad systems reward division and outrage, the platform was allegedly optimized for hate speech, a charge the company has not yet accepted.


According to Amnesty, the Facebook Papers, documents leaked by Facebook whistleblower Frances Haugen last year, provide “a shocking new understanding of the true nature and extent of Meta’s contribution to harms suffered by the Rohingya.”

Amnesty says its report, which is based on interviews with Rohingya refugees, former Meta staff, civil society groups, and other subject matter experts, also draws on this new information.

The executive summary of the 74-page report states:

“This evidence shows that the core content-shaping algorithms that power the Facebook platform including its news feed, ranking, and recommendation features all actively amplify and distribute content which incites violence and discrimination, and deliver this content directly to the people most likely to act upon such incitement.”

Thus, it continues, “content moderation alone is fundamentally insufficient as a remedy to algorithmically-amplified damages.” The company appears aware of these limits: one internal Meta document from July 2019 states that “we only take action against about 2% of the hate speech on the platform.” Another internal document shows that at least some Meta staff recognized the limitations of content moderation.

“We are never going to erase everything detrimental from a communications medium utilized by so many, but we can at least do the best we can to cease magnifying harmful content by giving it artificial circulation,” states one internal memo from December 2019.


This investigation also demonstrates that Meta was long aware of the dangers posed by its algorithms but did not take the necessary precautions.

Internal research dating back as far as 2012 has regularly suggested that Meta’s content-shaping algorithms may cause significant harm in the real world.

Prior to the 2017 atrocities in Northern Rakhine State, internal Meta research conducted in 2016 explicitly recognized that “[o]ur recommendation systems grow the problem” of extremism.

These internal analyses could and should have prompted Meta to act promptly to reduce the human rights risks posed by its algorithms, but the company repeatedly failed to do so.

In its executive summary, Amnesty cites an internal memo from August 2019 in which a former Meta employee states: “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and the family of apps are affecting societies around the world.” The Facebook Papers also show Meta has continued to ignore the risks generated by its content-shaping algorithms in “the relentless pursuit of profit.”

Meta staff also claimed to have convincing evidence that the platform’s core product dynamics, such as virality, recommendations, and engagement optimization, play a big role in why certain kinds of speech are successful on the platform.

Amnesty International’s investigation demonstrates how Meta’s careless business practices and content-shaping algorithms encouraged and facilitated prejudice and brutality against the Rohingya.


By magnifying harmful anti-Rohingya content, including calls for hatred towards the Rohingya, Meta’s algorithms directly led to harm. By supporting, facilitating, and encouraging the activities of the Myanmar military, they also indirectly contributed to the actual atrocities against the Rohingya, including violations of the rights to life, to be free from torture, and to sufficient shelter.

Because of a campaign of violence, rape, and murder carried out by Myanmar’s military junta, at least tens of thousands of Rohingya refugees have been forced to flee the country since August 2017. Meta has rejected requests to make amends.

Meta is also facing class-action lawsuits brought by Rohingya refugees in the US and the UK, seeking billions in damages for its part in provoking the genocide.

The findings of what Amnesty’s study refers to as “Meta’s egregious disrespect for human rights” are not only pertinent to Rohingya survivors: according to the report, the company’s platforms run the risk of enabling “severe human rights abuses again.”


Meta already poses a serious and immediate threat to human rights in Ethiopia, India, and other places marked by ethnic conflict and bloodshed. To ensure that Meta’s history with the Rohingya is not repeated elsewhere, extensive and urgent reforms are required, the report claims.



UK Government Sets Online Bill To Criminalize Self-Harm




In an effort to stop what it calls “tragic and preventable deaths caused by people seeing self-harm content online,” the UK government has announced it will further broaden the scope of online safety legislation by making encouraging self-harm a crime.

According to the most recent modification to the divisive but popular Online Safety Bill, in-scope platforms would be compelled to remove anything that purposefully encourages someone to physically harm themselves, or face legal repercussions.

The government intends to tackle “abhorrent trolls urging the young and vulnerable to self-harm,” according to the secretary of state for digital. People who post such content online may also be prosecuted under the new offence of encouraging self-injury.

The maximum penalties will be announced in due course, according to the government.

In the UK, it is already unlawful to promote or aid suicide, whether in person or online. By creating a new offense, self-harm content will now be subject to the same laws that already ban suicide promotion.

Following a snag last summer associated with political turmoil in the ruling Conservative Party, the Online Safety Bill’s progress through parliament is currently on hold. However, the newly reorganized UK government has declared that it will reintroduce the measure to parliament next month after making changes to the law.

Recent revisions to the Online Safety Bill, just made public by the Ministry of Justice, will address the abuse of intimate imagery. However, further revisions are planned regarding “legal but harmful” content, so the final form of the Act is still up in the air.

The government responded to concerns about the bill’s impact on online freedom of expression a few months ago. The (new) secretary of state, Michelle Donelan, announced in September that she would be “editing” the bill to lessen concern about its impact on “legal but harmful” speech for adults.

The most recent changes, making it illegal to send online communications encouraging self-harm, came after that announcement.


Donelan was quoted by the BBC as saying that Molly Russell, a 14-year-old who took her own life five years ago after viewing thousands of online posts about self-harm and suicide on platforms like Instagram and Pinterest, was a factor in the most recent changes.

Social media was found to have contributed to Russell’s death, according to the results of an inquest into her death in September, while the coroner’s “prevention of future deaths” report from last month recommended that a number of steps be taken to control and monitor young people’s access to social media content.

The addition of the crime of promoting self-harm, according to the Department for Digital, Culture, Media, and Sport, will outlaw “one of the most worrying and prevalent internet harms that now falls below the threshold of criminal behavior.”

Donelan stated in a statement:

“I am determined that the abhorrent trolls encouraging the young and vulnerable to self-harm are brought to justice.

“So I am strengthening our online safety laws to make sure these vile acts are stamped out and the perpetrators face jail time.

“Social media firms can no longer remain silent bystanders either and they’ll face fines for allowing this abusive and destructive behaviour to continue on their platforms under our laws.”

Hate crimes, revenge porn (including disseminating deepfake porn without consent), harassment, and cyberstalking are among the other priority criminal offences already named in the bill.

Regardless of what the measure states on paper, there are still a lot of unknowns regarding how platforms will react to having legal obligations imposed on them to police all forms of speech, and whether it will actually increase web user safety as claimed.

Critics worry that the regime will have a chilling effect by turning platforms into de facto speech police and encouraging them to overblock content in order to reduce their legal risk of paying a hefty fine.

The regime’s penalties scale up to 10% of global annual turnover, and non-cooperative senior executives even run the risk of going to jail.

On Monday, December 5, the bill is scheduled to return to parliament.










Twitter Amnesty Is What Elon Musk is Going For Next




Tesla CEO and new Twitter owner Elon Musk promised a new dimension for the micro-blogging social media platform prior to taking over, and his recent actions have just about lived up to that promise. Now the billionaire is set for an “amnesty” that will surely drive some political divides nuts if certain individuals are granted the Twitter amnesty he wants.

Elon Musk announced on Thursday that, starting next week, Twitter will grant suspended accounts “a general amnesty.” The day before, the platform’s CEO had published a poll asking users if they thought affected accounts should be restored.

The announcement comes just after Musk lifted the platform’s ban on former US president Donald Trump following a similar poll. Trump, who was banned after the January 6, 2021 attack on the US Capitol, has declared he has no intention of returning to the platform.

Musk’s poll asked whether users whose accounts had been suspended should be allowed to rejoin the network “assuming they have not broken the law or engaged in egregious spam.”


The survey received responses from about 3.2 million individuals, who voted 72.4% in favor of amnesty.

“The people have spoken. Amnesty begins next week. Vox Populi, Vox Dei,” Musk said, using a Latin phrase that means “The voice of the people is the voice of god.”

Historically, Twitter has suspended accounts that advocate violence, celebrate hate and harassment, or persistently disseminate false information that may be harmful.

Some well-known people who were banned from the website include MyPillow CEO Mike Lindell, who made a number of claims that Trump actually won the 2020 presidential election, former Trump advisor and former executive chairman of Breitbart Steve Bannon, who said Anthony Fauci and FBI Director Christopher Wray should be beheaded, and Proud Boys founder Gavin McInnes, who broke the website’s rule against violent extremist groups.

Considering that more voices with possibly harmful views will be returning to the site, it’s unclear from Musk’s brief post how Twitter will handle content moderation going forward.

These worries have only grown as a result of Musk’s huge firings and the outflow of workers who would rather leave than remain “hardcore.”

These days, Elon Musk seems to grow more unpopular even as he stays in the spotlight.



Twitter Working On New Feature For Long Texts




Writing a thread on Twitter can be daunting, especially when you have to divide the text into 280-character sections for it to make sense.

The good news is that the platform is reported to be working on a way to convert lengthy texts into threads automatically.

When a tweet exceeds the 280-character limit, Twitter’s composer will automatically divide it into a thread, according to a tweet from app researcher Jane Manchun Wong.

As she stated in a reply to a user, Twitter wants to make creating threads less difficult.

Currently, in order to add a tweet to a thread and post the subsequent 280 characters, users must click the Add button. This can be particularly unpleasant when you are trying out an idea or pasting information from another document.

Several users have recently brought up the difficulty of posting and reading threads with more than a few tweets; one thread in question was 82 tweets long and focused on the collapsed cryptocurrency exchange FTX. In response, Musk stated that the team is working to make thread writing simpler.

As Financial Times product manager Matt Taylor noted, it would be useful to have markers designating the start and end of each tweet in the thread, although the exact implementation details remain unknown. Such markers would make it simpler for users to edit the text in a way that doesn’t disrupt the reading flow.

Musk has previously addressed the problem of posting lengthy tweets. He stated earlier that the social network is developing the capability to attach long-form content to tweets. Whether that will be a standalone feature or part of the new thread composer is unclear.

Currently, some users rely on third-party programs like Typefully, ThreadStart, and Chirr App, which offer capabilities like scheduling along with tools to automatically divide your post into threads without interfering with sentence flow.
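Tools like these typically work by greedily packing whole words into tweet-sized chunks so no word is cut in half. A minimal Python sketch of that idea (the function name and greedy strategy are illustrative assumptions, not the actual code of any of these apps):

```python
def split_into_thread(text: str, limit: int = 280) -> list[str]:
    """Greedily split text into chunks of at most `limit` characters,
    breaking only at word boundaries (assumes no single word exceeds the limit)."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate  # word still fits in the current tweet
        else:
            chunks.append(current)  # current tweet is full; start a new one
            current = word
    if current:
        chunks.append(current)
    return chunks
```

Real thread-splitting tools add refinements on top of this, such as preferring sentence boundaries over word boundaries and appending counters like “(1/3)” to each tweet.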

Thanks to its acquisition of Threader the previous year, the company today provides Twitter Blue customers with a simple way to read threads. However, Musk hasn’t actually stated whether he is altering the reading experience for the typical user.

There is already a long-form writing program on Twitter called Notes, but it is exclusively available to a small number of writers, and under Musk’s leadership, its future is unclear.

Even though Twitter programmers are already working on it, it is unclear when the new composer tool for threads will launch. Since taking over the business, Musk has let go of more than half the employees.

Numerous executives have left, and the new leader even gave the remaining employees an ultimatum yesterday: either be “hardcore” or quit. In this situation, there is no assurance that products will ship on time.

The new Twitter Blue plan with a verification mark was hurriedly launched by the firm, only for the scheme to be discontinued a few days later. Musk stated earlier this week that the launch date had been moved to later in the month.

Wong also recently found code suggesting that Twitter is working on end-to-end encryption for direct messages.

