When it comes to live-fire high-wire acts in the tech industry, there can be few endeavors more daunting than executing a security update to a software platform hosting more than 2.6 billion users.
But that’s exactly what Facebook does every time it rolls out an update. Sure, it mitigates the potential for disaster by rolling changes out in batches and conducting extensive internal testing. But at the end of the day, you never know exactly how a given change will affect the experience that keeps Facebook on people’s screens.
Lucky for Facebook, the company’s AI team recently came up with a pretty clever way to make sure those software updates and tweaks don’t break the platform or cost it users: it built a fake, Facebook-scale social network full of bots to test things out on.
Per a company blog post, the system is essentially a copy of Facebook filled with bots. The bots are driven by AI models that mimic human social media behavior: they can add friends, like posts, and generally do anything a person could do on a given social media platform.
These bots aren’t like the ones you’re used to seeing on Twitter (shout out to @infinite_scream), which exist simply to respond when a text trigger occurs. They’re meant to simulate the full experience of using a social media site.
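To make the idea concrete, here’s a minimal Python sketch of the pattern described above: a population of bot “users” taking human-like actions (friending, liking) against a throwaway copy of a social network. Everything here — the `SocialNetwork` and `Bot` classes, their methods, and the random-action policy — is a hypothetical illustration, not Facebook’s actual simulation code, which reportedly drives its bots with trained models rather than coin flips.

```python
import random

class SocialNetwork:
    """A toy in-memory stand-in for the platform copy under test.
    (Hypothetical illustration, not Facebook's API.)"""

    def __init__(self):
        self.friendships = set()   # unordered pairs of user ids
        self.likes = []            # (user_id, post_id) events

    def add_friend(self, a, b):
        self.friendships.add(frozenset((a, b)))

    def like_post(self, user, post):
        self.likes.append((user, post))


class Bot:
    """A simulated user. A real system would drive this with a learned
    behavior model; a random policy keeps the sketch self-contained."""

    def __init__(self, user_id):
        self.user_id = user_id

    def act(self, network, population):
        # Pick one plausible user action at random each simulation tick.
        if random.random() < 0.5:
            other = random.choice([u for u in population if u != self.user_id])
            network.add_friend(self.user_id, other)
        else:
            network.like_post(self.user_id, post=random.randrange(100))


if __name__ == "__main__":
    net = SocialNetwork()
    users = list(range(20))
    bots = [Bot(u) for u in users]
    for _ in range(50):            # run 50 simulation ticks
        for bot in bots:
            bot.act(net, users)
    print(f"{len(net.friendships)} friendships, {len(net.likes)} likes")
```

The practical payoff of a setup like this is comparison: run the same bot population against two builds of the platform and diff the resulting behavior metrics, which is roughly the regression signal a simulated network can provide before any human sees the change.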
Quick take: This seems like a very intelligent way to determine whether a security function or new user feature is operating properly without risking a broken experience for human users of the production code. I expect simulations like this will become the status quo for social media networks.
Realistically, though, the simulation itself solves the biggest problem Facebook has: human users. What I wouldn’t give for an invitation to the bot-only version.