Posted by Gareth Icke - memes and headline comments by David Icke. Posted on 11 August 2022

The solution to online abuse? AI plus human intelligence – WEF

  • Bad actors perpetrating online harms are getting more dangerous and sophisticated, challenging current trust and safety processes.
  • Existing methodologies, including automated detection and manual moderation, are limited in their ability to adapt to complex threats at scale.
  • A new framework incorporating the strengths of humans and machines is required.

With 63% of the world’s population online, the internet is a mirror of society: it speaks all languages, contains every opinion and hosts a wide range of (sometimes unsavoury) individuals.

As the internet has evolved, so has the dark world of online harms. Trust and safety teams (the teams within online platforms responsible for removing abusive content and enforcing platform policies) are challenged by an ever-growing list of abuses, such as child abuse, extremism, disinformation, hate speech and fraud, as well as increasingly advanced actors misusing platforms in novel ways.

The solution, however, is not as simple as hiring another roomful of content moderators or building yet another block list. Without a deep familiarity with different types of abuse, an understanding of hate-group verbiage, fluency in the languages used by terrorist groups and a nuanced comprehension of disinformation campaigns, trust and safety teams can only scratch the surface.

A more sophisticated approach is required: one that combines innovative technology, off-platform intelligence collection and the expertise of subject-matter specialists who understand how threat actors operate. Only by bringing these together, the argument goes, can scaled detection of online abuse approach near-perfect precision.
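The hybrid framework described above, where machines handle clear-cut cases at scale and humans judge the ambiguous ones, might be sketched roughly as a triage step. Everything here (the function name, the score thresholds, the labels) is an illustrative assumption, not the WEF's or any platform's actual system:

```python
# Hypothetical sketch of a hybrid human/machine moderation triage.
# An automated classifier scores content; high-confidence cases are
# actioned automatically, uncertain cases are escalated to human
# subject-matter experts. Thresholds and names are assumptions.

def triage(abuse_score: float,
           auto_remove: float = 0.95,
           escalate: float = 0.60) -> str:
    """Route content based on a model's abuse score in [0, 1]."""
    if abuse_score >= auto_remove:
        return "remove"        # machine acts alone: clear violation
    if abuse_score >= escalate:
        return "human_review"  # ambiguous: expert judgement needed
    return "allow"             # low risk: leave content up


# Example routing of three pieces of content:
for score in (0.98, 0.72, 0.10):
    print(score, "->", triage(score))
```

The design point the article is making lives in the middle branch: automation alone mislabels the hard cases, and human review alone cannot keep pace, so the thresholds define a division of labour between the two.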
