Sometimes everything in life feels fine and dandy, and then something comes out of nowhere that totally changes the game. Cell phones, the internet: technologies that promise a new future, which many people grasp onto without thinking through, or being informed about, the potential downsides. This post is about one of those moments.
Going back to the early days of human-in-the-loop (HITL) systems, Netflix is a prime example of a company that started small and evolved into one worth billions of dollars (a market cap of $210B when this post was published). A study from Chase many years ago showed how collecting human feedback through the Netflix UI (understanding what viewers choose to watch and why) and then curating similar content generated billions of dollars in value for the company.
Similarly, YouTube has been applying that approach for many years, and in 2021 Google (which acquired YouTube back in 2006) rolled out its MUM algorithm for search, which builds on the same core principles to improve the quality of search results and, ultimately, the user experience.
Meta has jumped on the same bandwagon, and it's impressive how Instagram's algorithms not only behave intelligently but also adapt in the moment based on a variety of signals, frankly blowing many competitors out of the water. Try a simple test: search for food content on Instagram for 30 minutes, then force quit the app and reopen it. What do you see in the search/explore feed? Food content. It's not magic, just AI.
The main point is this: you have to be aware that what you do online, and the feedback you provide, especially when using ChatGPT, is not necessarily confidential. What you share can be used as training data to improve the models (the whole goal of an HITL system), which can create serious problems around confidentiality and NDAs.
Now consider ChatGPT in that light. When you feed it information and refine its output as the human in the loop, producing results that get incrementally better, what you're actually doing is training the model. In the process you may be divulging confidential information in violation of an NDA, and you're also improving a model that your competitors can benefit from.
Whether you're a funded startup or a Fortune 100 company, it's critical to put practices in place now for the use of ChatGPT and similar platforms: clear roadblocks that mitigate the risk of your company's secrets accidentally becoming part of a model your competitors will also have access to.
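As one illustration of what such a roadblock might look like in practice, here is a minimal sketch of a pre-submission filter that redacts obviously sensitive strings before a prompt ever leaves your network. The patterns, function names, and term list are hypothetical placeholders (not any provider's API), and a real policy would pair tooling like this with employee training and access controls.

```python
import re

# Hypothetical patterns for sensitive content; a real deployment
# would maintain these centrally and cover far more cases.
SENSITIVE_PATTERNS = [
    # Email addresses
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),
    # Credential-looking strings such as "api_key=..." or "token: ..."
    re.compile(r"\b(?:api[_-]?key|secret|token)\s*[:=]\s*\S+", re.IGNORECASE),
]

def redact(prompt: str, company_terms: list[str]) -> str:
    """Replace sensitive matches with [REDACTED] before the prompt is sent out."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    # Also scrub internal project names or other terms covered by NDAs.
    for term in company_terms:
        prompt = re.sub(re.escape(term), "[REDACTED]", prompt, flags=re.IGNORECASE)
    return prompt
```

A filter like this sits between employees and the external service, so anything that slips into a prompt by habit (a customer email, a key pasted from a config file, a codename) is scrubbed before it can become someone else's training data.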