How League of Legends' Self-Learning Reform System Works
When it comes to controlling toxicity, the easy route is to bring down the hammer on every player who has ever dared to diagnose others with cancer. Riot Games, however, remains committed to its crusade to reform players for the better, letting go only of those who are beyond help.
A valiant effort, but one that has required persistence from the developer, along with years of research and patience.
Previously, League of Legends had a Tribunal that was largely in the hands of the players themselves. The workings were simple: players who were reported often landed in the Tribunal, where the community would review each case and vote on whether the accused should be pardoned or punished.
The system had its flaws, and even though Riot Games claimed the results were impressive, it decided to disable the Tribunal until it could be improved further.
Today, a new Reform System has been introduced in the game. It’s currently live in North America only and is based on report cards.
When a player is reported for toxicity, the game generates a detailed report card that includes the player’s chat log, highlights what went wrong, and states the punishment that may result.
That’s mostly similar to the Tribunal, but under the new system the report card is sent to both the reporter and the accused.
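To make the flow concrete, here is a minimal sketch of a report card being delivered to both parties. All names and fields here are hypothetical illustrations based on the description above, not Riot's actual data model:

```python
# Hypothetical sketch of the report-card flow; field names are
# illustrative, not Riot's actual schema.
from dataclasses import dataclass, field

@dataclass
class ReportCard:
    offender: str
    chat_log: list          # full chat log from the reported game
    flagged_lines: list     # the lines identified as toxic
    punishment: str         # e.g. "2-week ban"

def deliver(card: ReportCard, reporter: str) -> dict:
    """Both the accused and the reporter receive a copy of the same card."""
    return {card.offender: card, reporter: card}

copies = deliver(
    ReportCard(
        offender="PlayerA",
        chat_log=["gg", "uninstall, idiot"],
        flagged_lines=["uninstall, idiot"],
        punishment="2-week ban",
    ),
    reporter="PlayerB",
)
```

The key point the sketch captures is that one card serves two audiences: the accused sees exactly what was flagged, and the reporter sees that action was taken.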
Toxic players will learn exactly what they did wrong. This should also put a stop to the many who turn up on forums and Reddit after being banned, claiming to have done no mischief. Their pitiful stories are often voted to the top, forcing the developers to waste precious resources proving they really were the devil.
For the one who reported the player, their copy of the report card keeps them in the loop. Players previously complained about not knowing whether their reporting efforts were bearing any fruit. With the new system, they are kept updated on the status of the players they have reported and will know when justice has been served.
The “instant feedback” portion of this new Reform System learns from what the community actively rejects. It then examines each case and automatically determines whether the reported behavior deserves rejection or punishment based on “community-driven standards of behavior.”
Once a decision is made, the report is forwarded to the player behavior team, which sorts through the “first few thousand cases” manually and finalizes the verdict. That’s a tall order in itself, and the company is willing to dedicate considerable manpower to controlling toxicity in its game.
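The two stages described above can be sketched roughly as follows. The matching logic, the review quota, and every name here are assumptions for illustration; Riot has not published how its classifier actually works:

```python
# Illustrative two-stage pipeline: an automated verdict based on
# community-rejected phrases, with early cases routed to human review.
# The quota and phrase-matching approach are assumptions, not Riot's method.

MANUAL_REVIEW_QUOTA = 6000   # stand-in for the "first few thousand cases"

def automated_verdict(chat_log, rejected_phrases):
    """Stage 1: flag chat lines matching behavior the community rejects."""
    flagged = [
        line for line in chat_log
        if any(phrase in line.lower() for phrase in rejected_phrases)
    ]
    return ("punish" if flagged else "pardon"), flagged

def route_case(case_number, verdict):
    """Stage 2: early cases go to the player behavior team for final sign-off."""
    return "manual_review" if case_number <= MANUAL_REVIEW_QUOTA else verdict

verdict, flagged = automated_verdict(
    ["gg wp", "kill yourself"], rejected_phrases=["kill yourself"]
)
```

Here `route_case(1, verdict)` would land in manual review, while a case well past the quota would keep the automated verdict, mirroring how the team hand-checks only the earliest decisions.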
Currently, Riot Games is targeting homophobia, racism, sexism, death threats, and other forms of excessive abuse. “These harmful communications will be punished with two-week or permanent bans within fifteen minutes of game’s end,” promises Riot Games.
With time, Riot Games hopes the new, improved Tribunal will work almost like a sentient being: a continuous, self-learning system that leaves no stone unturned when deciding the fate of mischievous players.
Accuracy is the key here, and the system is so far looking promising. RiotLyte, the man in charge of curbing toxicity in League of Legends, explained on Reddit how important the error rate is in this situation:
“Any system that we launch needs to have a 0.1% error rate or lower or we’ll turn the system off. In first beta test of the new reform system, we were having Player Support teams review thousands of cases and we found that generally there was 1 error in every 6000 cases.
In the case that a player did receive a false penalty, the e-mail has instructions and links to contact Player Support.”
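The quoted numbers are easy to sanity-check: one error in 6,000 cases is roughly a 0.017% error rate, comfortably below the 0.1% shutoff threshold. A quick check:

```python
# Verifying the quoted figures: 1 error per 6000 cases vs. the 0.1% ceiling.
observed_error_rate = 1 / 6000    # about 0.0167%
max_allowed = 0.001               # 0.1%, the threshold at which the system is turned off

system_stays_on = observed_error_rate <= max_allowed
```

By this measure the beta was running at roughly one-sixth of the maximum tolerated error rate.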
The system is in its initial testing period and, based on its success, should roll out to other regions shortly. Riot Games is also planning to hand out rewards to successful reporters, a far better version of the honor tags it announced before.