Across thousands of years of human history, hatred has always been present, and people have been persecuted for a wide variety of reasons up to this day. In recent times, however, hate has been digitized and weaponized with the rise of social media. That's unacceptable to us.
One of the biggest roadblocks organizations (public companies, etc.) face when attempting to detect and combat hateful material on social networks is dealing with the massive amount of data involved: countless false positives, sarcastic posts, emerging trends that go viral within hours, and many other variables. It's a monumental task that no analyst, army of analysts, or social media intelligence tool of the past can accomplish.
Machine Learning Supercomputers
For the past two years, the team at Soteria Intelligence has focused on developing technologies to confront online hate and other challenges in a very unorthodox way. Through our research and development, it became clear that the only way to solve the complex problem at hand was to use deep learning and machine learning.
Drawing on 10 years of research on social media behavior, including 5 years of research on social media threats in particular, along with input from a wide range of subject-matter experts, we've focused on creating machine learning systems with the ability to assess social media activity faster and more accurately than humanly possible.
Beyond looking at the past, we also search for what's on the horizon. As new hate messages or groups emerge on social networks, that activity becomes training material that allows our system to understand evolving threat landscapes.
Part of this process involves Natural Language Processing (NLP): we've taken all of our research, including many years of guiding large organizations through complex social media initiatives, and trained our technologies to understand linguistic patterns.
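Our production models are proprietary, but the core idea behind training on linguistic patterns can be sketched with a textbook technique. The example below is a minimal Naive Bayes text classifier in plain Python (the labels, training phrases, and class names are illustrative placeholders, not our actual data or method):

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesClassifier:
    """Bag-of-words Naive Bayes with add-one smoothing."""

    def __init__(self, labels=("hate", "benign")):
        self.word_counts = {lbl: Counter() for lbl in labels}
        self.token_totals = {lbl: 0 for lbl in labels}
        self.doc_counts = {lbl: 0 for lbl in labels}

    def train(self, text, label):
        tokens = tokenize(text)
        self.word_counts[label].update(tokens)
        self.token_totals[label] += len(tokens)
        self.doc_counts[label] += 1

    def _log_score(self, text, label):
        vocab = len(set().union(*self.word_counts.values()))
        # log prior
        score = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        # log likelihood of each token, smoothed
        for tok in tokenize(text):
            score += math.log(
                (self.word_counts[label][tok] + 1)
                / (self.token_totals[label] + vocab)
            )
        return score

    def classify(self, text):
        return max(self.word_counts, key=lambda lbl: self._log_score(text, lbl))
```

Real systems of this kind replace the toy word counts with models trained on large labeled corpora, but the principle is the same: the statistical footprint of language, not any single keyword, drives the classification.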
Taking things one step further, we found that NLP alone is not the golden ticket; ultimately, it's a combination of linguistic patterns, other key data points (image recognition, etc.), and proprietary algorithms that work together to paint a more complete picture.
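The general shape of that combination is signal fusion: several independent detectors each produce a score, and a weighted composite determines the overall picture. The signal names and weights below are hypothetical placeholders (our actual fusion algorithm is proprietary); the sketch only shows the general idea:

```python
# Hypothetical signals and weights for illustration only.
SIGNAL_WEIGHTS = {
    "text_score": 0.5,     # NLP linguistic-pattern model
    "image_score": 0.3,    # image-recognition model
    "network_score": 0.2,  # account/network behavior signals
}

def composite_risk(signals):
    """Weighted average of per-signal scores, each expected in [0, 1].

    Missing signals default to 0.0 so a post with no image, for
    example, is not penalized for the absent detector.
    """
    total = sum(
        weight * signals.get(name, 0.0)
        for name, weight in SIGNAL_WEIGHTS.items()
    )
    return round(total, 3)
```

A post flagged weakly by text analysis but strongly by image recognition can still surface in such a scheme, which is why no single detector is the golden ticket on its own.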
Hate Leads to Violence
When you take a step back and think about it, many heinous attacks of the past few years were rooted in hate, from mass shootings at malls to workplace violence, all of which are corporate security nightmares. Hate fueled the fire, and when it burned out of control, lives were lost.
Hateful messages on social media are clear pre-incident indicators of potential harm or violence down the road. By analyzing past incidents where warning signs appeared on social media before attacks occurred, we can empirically assess and gauge levels of hate over time to uncover powerful insights.
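One simple way to gauge hate levels over time is to watch for sustained escalation above a baseline rather than reacting to single spikes. The function below is an illustrative sketch (the window size and threshold are arbitrary placeholder values, not calibrated parameters): it flags days where a rolling average of daily hate-activity scores exceeds a multiple of the period's overall baseline.

```python
def escalation_alerts(daily_scores, window=3, threshold=1.5):
    """Return indices of days where the trailing `window`-day average
    of hate-related activity exceeds `threshold` times the baseline
    (the mean over the whole period)."""
    baseline = sum(daily_scores) / len(daily_scores)
    alerts = []
    for i in range(window - 1, len(daily_scores)):
        rolling_avg = sum(daily_scores[i - window + 1 : i + 1]) / window
        if rolling_avg > threshold * baseline:
            alerts.append(i)
    return alerts
```

For example, a week of scores `[2, 3, 2, 2, 8, 9, 10]` only triggers an alert on the final day, once elevated activity has persisted long enough to move the rolling average, which is the kind of sustained pre-incident escalation worth surfacing.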
Objective vs Subjective Decision Making
Hate itself is born from bias, which all humans exhibit in one form or another. We realized that even if some super analyst walked in the door who could outperform our software, bias would inherently be introduced. The more hands-on analysts are in the decision-making process, and the more they rely on their own subjective opinions, the more skewed the results become.
We are taking a drastically different, objective approach where data drives the results, not a focus on race, gender, religious beliefs, sexual orientation, or other factors that could be labeled profiling. The way to combat hate is not to introduce even more bias. Instead, regardless of where someone is from, the color of their skin, what god they choose to worship, etc., it's their behavior that matters most.
Ultimately, it’s the breadcrumbs of hate that can be assessed over time to help corporations form a better understanding of their environment and how to keep their employees safe and operations running smoothly.