Meta updated its penalty system to be “more fair and effective”, saying it would continue to remove illegal content but would focus on helping people understand why their posts were blocked, which, according to Facebook, “will help prevent recurrence, rather than limiting your ability to post so quickly.”
So instead of punishing people for minor violations with little explanation and quickly limiting their ability to speak, the new approach reserves swift action for persistent policy violators.
“Since nothing has changed with regard to removing the underlying content, we do not expect these improvements to the penalty system to have a significant impact on prevalence. With this update, we can continue to keep people safe on our apps while still allowing them to express themselves.”
The company says this approach is effective in preventing violations. However, it will enforce an account ban for repeat offenders, usually from the seventh violation onwards, after providing a warning and explanation.
For more serious violations, however, such as content involving terrorism, child exploitation, human trafficking, the promotion of suicide, sexual exploitation, the sale of non-medical drugs, or the promotion of dangerous people and organizations, Meta will continue to apply immediate consequences.
Meta acknowledges that some people end up in “Facebook Jail” without understanding what they did wrong or whether they were affected by content enforcement errors.
According to the data provided, roughly 80% of users with a low number of strikes do not violate the policies again within the following 60 days. This suggests that most people respond well to warnings and explanations because they do not want to break the rules.
But at the same time, some people are determined to post infringing content regardless of the policy.
According to the analysis, this “shows that imposing stricter penalties starting at the seventh strike is an effective way to give well-intentioned people the guidance they need.”
For example, someone might jokingly write “I’m going to kidnap you” without realizing that such a statement may violate the policy against violence and incitement, when in context the message was clearly not a threat.
Older systems used longer penalties, such as a 30-day block on a person’s ability to post content. “These long blocks frustrate well-intentioned people who make a mistake,” the company said.
The new system shortens those restriction periods and allows repeat offenders to be identified more quickly.
The Oversight Board said, “Meta’s plan to issue more comprehensive penalty notices should ensure that users are better positioned to understand the reasons for strikes and the consequences of future feature restrictions.”
Meta will use artificial intelligence
Meta also confirmed its artificial intelligence plans, assuring that the technology would be a useful tool for developers interested in research and innovation projects.
“The company remains committed to this open research model, and we will make our new model available to the AI research community,” Mark Zuckerberg said in a post on Facebook and Instagram.
The tool is called Large Language Model Meta AI (LLaMA) and works as a GPT-style language model, similar to the one created by OpenAI that powers ChatGPT, generating text and responses through machine learning trained on large databases.