Across thousands of years of human history, hatred has always been present, and people have been persecuted for a wide variety of reasons up to the present day. In recent times, however, hate has been digitized and weaponized with the rise of social media. And that’s unacceptable to us.
One of the biggest roadblocks law enforcement agencies and private institutions face when attempting to detect and then combat hateful material on social networks, especially Twitter, is dealing with the massive amount of data: countless false positives, sarcastic posts, emerging trends that go viral within hours, and many other variables. It’s a monumental task that no analyst, army of analysts, or social media intelligence tool of the past can accomplish.
Machine Learning Supercomputers
For the past two years, the team at Soteria Intelligence has focused on developing technologies to confront online hate, school threats, terrorist propaganda, and other challenges in a very unorthodox way. Through our research and development, it became clear that the only way to solve this complex problem was to use deep learning and machine learning.
Drawing on 10 years of research on social media behavior, and 5 years of research on social media threats in particular, along with input from a wide range of subject-matter experts, we’ve focused on creating machine learning systems with the ability to assess social media activity faster and more accurately than is humanly possible.
And aside from looking at the past, we also search for what’s on the horizon. As new hate messages or groups emerge on social networks, that activity becomes training material that helps our system understand evolving threat landscapes.
Part of this process involves Natural Language Processing (NLP): we’ve taken all of our research, including many years of guiding large organizations through complex social media initiatives, and trained our technologies to understand linguistic patterns. For example, recognizing the difference between “I have a bomb in Los Angeles” and “I had a bomb dinner in Los Angeles,” or taking a sentence like “I’m putting wings on pigs today” and determining that it’s a direct threat to police based on historical data.
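To make the bomb-versus-“bomb dinner” distinction concrete, here is a deliberately tiny, illustrative sketch of context-aware keyword matching. The term lists and the two-word context window are hypothetical choices for demonstration only; they are not Soteria’s actual model, which the text describes as far more sophisticated.

```python
# Toy sketch: why keyword matching alone fails, and how nearby words
# change meaning. Term lists below are illustrative assumptions.

THREAT_TERMS = {"bomb", "shoot", "kill"}
# Words that, when adjacent to a threat term, suggest slang or benign usage.
BENIGN_CONTEXT = {"dinner", "party", "song", "workout", "game"}

def looks_like_threat(text: str) -> bool:
    """Flag text containing a threat term with no benign word nearby."""
    tokens = text.lower().replace(".", "").split()
    for i, tok in enumerate(tokens):
        if tok in THREAT_TERMS:
            window = tokens[max(0, i - 2): i + 3]  # two words either side
            if not BENIGN_CONTEXT.intersection(window):
                return True
    return False

print(looks_like_threat("I have a bomb in Los Angeles"))        # True
print(looks_like_threat("I had a bomb dinner in Los Angeles"))  # False
```

A production system would replace the hand-written lists with statistical models trained on labeled historical data, but the core idea is the same: the words around a term matter as much as the term itself.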
Taking things one step further, we found that NLP by itself is not a golden ticket; ultimately it’s a combination of linguistic patterns, other key data points (image recognition, etc.), and proprietary algorithms that work together to paint a more complete picture.
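One common way multiple signals can be combined is a weighted score. The sketch below is a minimal illustration of that idea; the signal names and weights are assumptions made for the example and do not represent the proprietary algorithms mentioned above.

```python
# Hypothetical sketch of fusing several per-signal risk scores
# (each in [0, 1]) into one combined score. Weights are illustrative.

def combined_risk(nlp_score: float, image_score: float,
                  history_score: float) -> float:
    """Weighted average of text, image, and historical-behavior signals."""
    weights = {"nlp": 0.5, "image": 0.2, "history": 0.3}
    score = (weights["nlp"] * nlp_score
             + weights["image"] * image_score
             + weights["history"] * history_score)
    return round(score, 3)

# A post whose text is ambiguous, but whose attached image and account
# history are alarming, scores higher than the text alone would suggest.
print(combined_risk(nlp_score=0.4, image_score=0.9, history_score=0.8))  # 0.62
```

The value of this structure is that no single detector has to be perfect: a weak text signal can still be caught when corroborated by other data points.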
Hate Leads to Violence
When you take a step back and think about it, many of the most heinous attacks of the past few years were rooted in hate, ranging from terrorist attacks around the world to mass shootings at malls, schools, and other locations. Hate fueled the fire, and when it burned out of control, lives were lost.
Hateful messages on social media are clear pre-incident indicators of potential harm or violence down the road. By analyzing past incidents where warning signs appeared on social media before attacks occurred, we can empirically assess and gauge levels of hate over time and uncover powerful insights.
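Gauging levels of hate over time can be as simple as counting flagged posts per period and watching the trend. The following sketch aggregates classifier output by ISO week; the sample data is fabricated for demonstration, and a real pipeline would consume classifier output at much larger scale.

```python
# Illustrative sketch: aggregating flagged posts by week to gauge how
# hateful activity trends over time. Sample data is fabricated.
from collections import Counter
from datetime import date

# (post date, flagged-as-hateful) pairs, e.g. output of a classifier
flagged_posts = [
    (date(2017, 3, 6), True), (date(2017, 3, 7), False),
    (date(2017, 3, 8), True), (date(2017, 3, 14), True),
    (date(2017, 3, 15), True), (date(2017, 3, 16), True),
]

weekly = Counter()
for d, is_hateful in flagged_posts:
    if is_hateful:
        year, week, _ = d.isocalendar()
        weekly[(year, week)] += 1

for (year, week), count in sorted(weekly.items()):
    print(f"{year} week {week}: {count} flagged posts")
```

A rising weekly count around a particular topic or location is exactly the kind of empirical, longitudinal signal the paragraph above describes.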
Objective vs Subjective Decision Making
Hate itself is born from bias, which all humans exhibit in one form or another. We realized that even if some super-analyst walked in the door who could outperform our software, bias would inherently be introduced. The more hands-on analysts are in the decision-making process, and the more they rely on their own subjective opinions, the more skewed the results become.
We are taking a drastically different, objective approach, where data drives the results rather than a focus on race, gender, religious beliefs, sexual orientation, or other factors that could be labeled profiling. The way to combat hate is not to introduce even more bias. Instead, regardless of where someone is from, the color of their skin, or what god they choose to worship, it’s their behavior that matters most.
For example, we can all agree that someone saying on Twitter “I’m going to kill John Doe” is a clear threat, just as someone standing outside a Jewish temple with a gun is a threat. But what if someone made hateful comments on Twitter, and when other individuals made similar comments in the past, violence ensued 80% of the time? That is what’s important.
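The 80% figure above is a base rate: of the past posts similar to the new one, what fraction preceded real-world violence? Here is a minimal sketch of that calculation, using fabricated historical records purely for illustration.

```python
# Hypothetical sketch: estimating how often posts similar to a new one
# preceded real-world violence. Records below are fabricated.

# Each record: (matched a similar hateful pattern, violence followed)
historical = [
    (True, True), (True, True), (True, False),
    (True, True), (True, True), (False, False),
]

similar = [followed for matched, followed in historical if matched]
base_rate = sum(similar) / len(similar)
print(f"Violence followed similar posts {base_rate:.0%} of the time")
# Prints: Violence followed similar posts 80% of the time
```

Basing risk on observed outcomes for similar past behavior, rather than on who the poster is, is what makes the approach behavioral rather than demographic.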
It is those breadcrumbs of hate that we look to extract before catastrophes occur, and from there we find ways to bridge the divide and work to make the world a better place.