There aren’t many vehicles that let you travel to far-off destinations within seconds, but Instagram is definitely one of them. When I seek information on different worlds, I find myself searching hashtags and locations to discover what lies ahead. But the question is: what are the driving factors behind the content/results we’re actually seeing?
Going back to the early days of our journey at Soteria Intelligence, where we use artificial intelligence, particularly deep learning, to form ecosystems around clients, we realized that breaking down the algorithms social networks use to produce results on the back-end is just as valuable as the metrics on the surface (e.g. the number of likes and comments).
Essentially, if we can understand what’s going on behind the scenes, we can take a predictive approach to the future. And if processes running in the background determine what the masses see before subsequent actions take place, we need to be one step ahead of them.
The result: Instagram integrates heavily trained image recognition models to classify and curate Top Posts, surfacing content it’s confident in (images of burgers, pizza, etc.) and defaulting to images of humans when it’s not sure. But why?
For 10+ years, going back to the Myspace days, there have been efforts to thwart pornography and other inappropriate content on social networks, and I’ve had good talks with pioneers in this space. Instagram is drawing from these robust data sets that have stood the test of time to present safe results to users, displaying content it has high confidence in (e.g. images of pizza) and falling back to images of humans when the objects in the scene are unclear.
In a nutshell, when you search for common, recognizable (from an image perspective) hashtags or locations, you’ll be presented with relevant results in the Top Posts section, but for more obscure searches the Top Posts will be geared towards displaying humans. Today, that’s the safest bet.
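To make the logic concrete, here’s a minimal sketch of what that confidence-based fallback could look like, assuming a classifier that emits a predicted label and a confidence score per post. The threshold, labels, and function names here are illustrative assumptions on my part, not Instagram’s actual implementation.

```python
# Hypothetical sketch of a confidence-threshold fallback for curating Top Posts.
# The threshold, labels, and structure are assumptions, not Instagram's code.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for "high confidence"

@dataclass
class Post:
    label: str         # top predicted object, e.g. "pizza" or "person"
    confidence: float  # model's probability for that label

def select_top_posts(candidates: list[Post]) -> list[Post]:
    """Prefer posts the classifier is sure about; otherwise fall back to
    posts that clearly contain people, the 'safest bet' described above."""
    confident = [p for p in candidates if p.confidence >= CONFIDENCE_THRESHOLD]
    if confident:
        return confident
    # Low confidence across the board (an obscure hashtag, say):
    # default to images of humans rather than guessing at objects.
    return [p for p in candidates if p.label == "person"]

# Example: an obscure search where object recognition is unsure.
candidates = [
    Post("pizza", 0.42),
    Post("person", 0.65),
    Post("unknown", 0.30),
]
print(select_top_posts(candidates))  # falls back to the 'person' post
```

The key design choice is the fallback branch: rather than guessing when object recognition is uncertain, the system retreats to a category it can detect reliably, which matches the behavior we observed in Top Posts.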