AI is our best weapon against terrorist propaganda

13:50, 19 June 2017 | Source: MSN


[Image: AI is our best weapon against terrorist propaganda © Provided by The Next Web]

In the past four months alone, there have been three separate terrorist attacks across the UK (with possibly a fourth reported just today) – and that’s after efforts that the Defense Secretary credited with thwarting 12 other incidents there in the previous year.

That spells a massive challenge for companies investing in curbing the spread of terrorist propaganda on the web. And although it’d most certainly be impossible to stamp out the threat across the globe, it’s clear that we can do a lot more to tackle it right now.

Last week, we looked at some steps Facebook is taking to wipe out content promoting or sympathizing with terrorists’ causes. These involve the use of AI, reports from users, and a team of 150 experts who identify and take down hate-filled posts before they spread across the social network.

Terrorist fears for staff at Irish Facebook HQ over jihadis leak

One Irish-Iraqi expert at the Dublin HQ was so terrified of being attacked by jihadis that he fled the country. The Mirror has learned that Gardai are investigating, and a spokesman for the Office of the Data Protection Commissioner confirmed there has been a breach. He said: “We are aware of the breach but have nothing else to say at this time.” Detectives are looking into a systems failure which revealed the profile information of counter-terrorism staff to potential supporters of Isis.

In a blog post, Monika Bickert, Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, said: “We know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook. Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook.”

Watch: AI is Facebook's new anti-terror watchdog (Newsy)


Google tightens measures to remove extremist content on YouTube

Alphabet Inc’s Google will implement more measures to identify and remove terrorist or violent extremist content on its video-sharing platform YouTube, the company said in a blog post on Sunday. Google said it would take a tougher position on videos containing supremacist or inflammatory religious content by issuing a warning and not monetizing or recommending them for user endorsements, even if they do not clearly violate its policies.


Now, Google has detailed the measures it’s implementing in this regard as well. Similar to Facebook, it’s targeting hateful content with machine learning-based systems that can sniff it out, and also working with human reviewers and NGOs in an attempt to introduce a nuanced approach to censoring extremist media.
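The "machine flags, humans decide" pipeline both companies describe can be sketched in a few lines. This is an illustrative toy, not either company's actual system: the keyword weights stand in for a trained classifier's confidence score, and the names (`FLAG_TERMS`, `triage`) are invented for this example.

```python
# Toy sketch of a machine-flags-then-human-review moderation pipeline.
# A real system would use a trained classifier; here a keyword-weight
# score stands in for the model's confidence output.

FLAG_TERMS = {"attack": 0.4, "join us": 0.5, "martyr": 0.6}  # hypothetical weights

def score(post: str) -> float:
    """Crude stand-in for a classifier's confidence that a post is extremist."""
    text = post.lower()
    return min(1.0, sum(w for term, w in FLAG_TERMS.items() if term in text))

def triage(posts, review_threshold=0.5):
    """Split posts into an auto-pass list and a human-review queue."""
    queue, passed = [], []
    for p in posts:
        (queue if score(p) >= review_threshold else passed).append(p)
    return queue, passed

queue, passed = triage(["Lovely weather today", "join us, become a martyr"])
```

The key design point the article hints at is the threshold: the model never deletes on its own, it only decides what the 150-person review team sees first.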

The trouble is, battling terrorism isn’t these companies’ core business; they’re concerned with growing their user bases and increasing revenue. The measures they’re implementing will help sanitize their platforms so they can be marketed as safe places to consume content, socialize, and shop.

Meanwhile, the people who spread propaganda online dedicate their waking hours to finding ways to get their message out to the world. They can, and will, continue to innovate to stay ahead of the curve.

Ultimately, what’s needed is a way to reduce the effectiveness of this propaganda. There are a host of reasons why people are susceptible to radicalization, and those may be far beyond the scope of the likes of Facebook to tackle.

How Can We Trust ISIS's Claims of Responsibility After Terror Attacks?

The militants will take responsibility for attacks they plan or inspire, and may keep mum about incidents that don’t serve their political goals. This article originally appeared on The Conversation.


AI is already being used to identify content that human response teams review and take down. But I believe that its greater purpose could be to identify people who are exposed to terrorist propaganda and are at risk of being radicalized. To that end, there’s hope in the form of measures that Google is working on. In the case of its video platform YouTube, the company explained in a blog post:

Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe.

This promising approach harnesses the power of targeted online advertising to reach potential ISIS recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.
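The core mechanic of the Redirect Method is simple to sketch: queries matching known recruitment-related search terms trigger an ad pointing at counter-narrative content instead of ordinary results. The keywords and playlist name below are invented placeholders, not Jigsaw's actual targeting lists.

```python
# Hedged sketch of the Redirect Method's core idea: at-risk search
# queries are matched against recruitment-related terms, and matches
# receive an ad linking to counter-narrative video playlists.
# All keywords and playlist names here are hypothetical.

RECRUITMENT_KEYWORDS = {"how to join isis", "hijrah travel"}  # invented examples
COUNTER_PLAYLIST = "testimonies-of-defectors"                 # invented name

def ad_for_query(query: str):
    """Return a counter-narrative ad target for at-risk queries, else None."""
    q = query.lower().strip()
    if any(keyword in q for keyword in RECRUITMENT_KEYWORDS):
        return COUNTER_PLAYLIST
    return None
```

The interesting part, per the article, isn't the matching but the content served: the unusually high click-through rates suggest the ads reach people genuinely weighing the recruiters' message.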

In March, Facebook began testing algorithms that could detect warning signs of users in the US suffering from depression and possibly contemplating self-harm or suicide. To do this, they look at whether people are frequently posting messages describing personal pain and sorrow, or whether several responses from their friends read along the lines of, “Are you okay?” The company then contacts at-risk users to suggest channels they can seek out for help with their condition.
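The two signals the article describes – distress language in a user's own posts and concerned replies from friends – combine naturally into a simple risk score. This is an illustrative sketch, not Facebook's actual system; every phrase list and threshold below is invented.

```python
# Illustrative sketch (not Facebook's real detector) combining the two
# signals the article names: distress language in a user's posts, and
# concerned replies from friends. Phrase lists are hypothetical.

DISTRESS_PHRASES = ("i can't go on", "no one cares", "so much pain")
CONCERN_PHRASES = ("are you okay", "are you ok", "here for you")

def at_risk(posts, replies, min_signals=2):
    """Count distress posts and concerned replies; flag when combined signals reach the threshold."""
    distress = sum(any(phrase in post.lower() for phrase in DISTRESS_PHRASES)
                   for post in posts)
    concern = sum(any(phrase in reply.lower() for phrase in CONCERN_PHRASES)
                  for reply in replies)
    return distress + concern >= min_signals

flagged = at_risk(
    posts=["So much pain lately, no one cares"],
    replies=["Are you okay? Message me"],
)
```

Requiring both kinds of signal before flagging is one way to keep false positives down – the same trade-off any radicalization-risk detector built along these lines would face.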

I imagine that similar tools could be developed to identify people who might be vulnerable to becoming radicalized – perhaps by analyzing the content of the posts they share and consume, as well as the networks of people and groups they engage with.

The ideas spread by terrorists are only as powerful as they are widely accepted. It looks like we’ll constantly find ourselves trying to outpace measures to spread propaganda, but what might be of more help is a way to reach out to people who are processing these ideas, accepting them as truth and altering the course their lives are taking. With enough data, it’s possible that AI could be of help – but in the end, we’ll need humans to talk to humans in order to fix what’s broken in our society.

Naturally, the question of privacy will crop up at this point – and it’s one that we’ll have to ponder before giving up our rights – but it’s certainly worth exploring our options if we’re indeed serious about quelling the spread of terrorism across the globe.

