Researchers Will “Fork” Facebook and Populate It With Bots
To pinpoint vulnerabilities and exploits, Facebook is launching a software simulation of its own network populated exclusively by bots. The project is dubbed Web-Enabled Simulation (WES), and its white paper was published on the Facebook research site.
How Does It Work?
Unlike traditional simulations, which often employ models of the real product, the WES system will operate on the actual real-world Facebook platform. This Facebook “fork” will be no different from the original, except that its users will be exclusively bots simulating the behaviour of human users. Researchers state that this behaviour can be rule-based and programmable, or learnt independently. They compare the simulation to a game and their bots to existing bots in online multiplayer games; the project is thus expected to draw heavily on recent advances in machine learning for software games.
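The distinction between rule-based and learned bot behaviour can be illustrated with a toy sketch. Everything here is hypothetical: WES’s actual API is not public, so `FakePlatform` and the bot’s rules are stand-ins for illustration only.

```python
import random

class FakePlatform:
    """Minimal in-memory stand-in for the (non-public) WES platform API."""
    def __init__(self):
        self.posts = []

    def post(self, text):
        self.posts.append(text)
        return text

class RuleBasedBot:
    """Toy bot whose behaviour is fixed by hand-written rules,
    as opposed to behaviour learned with machine learning."""
    def __init__(self, platform, spam_rate=0.1):
        self.platform = platform
        self.spam_rate = spam_rate  # probability of misbehaving on each step

    def act(self):
        # Rule: mostly behave normally, occasionally post a "scam" message.
        if random.random() < self.spam_rate:
            return self.platform.post("Click this suspicious link!")
        return self.platform.post("Nice weather today.")

platform = FakePlatform()
bot = RuleBasedBot(platform, spam_rate=1.0)  # always misbehaves, for the demo
bot.act()
print(platform.posts)  # ['Click this suspicious link!']
```

A learned bot would replace the hand-written `if` rule with a policy trained to maximise some objective, which is where the paper’s connection to AI-assisted game play comes in.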
“A WES simulation can be seen as a game, in which we have a set of players that operate to fulfil a certain objective on a software platform. This observation connects research on WES systems with research on AI-Assisted Game Play.”
Researchers state that the main purpose of WES bots will be to misbehave. Applying advanced machine learning techniques, bots will “try to break the community standards in a safe isolated environment in order to test and harden the infrastructure that prevents real bad actors from contravening community standards.” Some advanced mischievous bots will even be able to cooperate toward a common goal.
Other bots will play the role of scam victims and exhibit behaviours akin to those of real-life Facebook scam targets. Actual Facebook users, however, may never learn about the existence of this “underworld Facebook” populated by rogue bots and their hapless bot victims. Researchers insist that the “human” version of Facebook should be isolated from the “bot” Facebook, but at the same time they suggest that certain “read-only bots” will be able to peer through the looking-glass into the world of human users for the sake of the experiment.
This simulation is meant to help Facebook better protect its users. For example, once a new update is rolled out to the WES simulation, there might be an uptick in rogue bots compromising the victim bots’ data or otherwise successfully attacking them. In that case, Facebook can track down and fix the bug without real users having to suffer.
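The monitoring idea behind this can be sketched as a simple before/after comparison of attack success rates. This is a toy illustration, not Facebook’s actual tooling; the event format and the 5% threshold are assumptions made for the example.

```python
def attack_success_rate(events):
    """Fraction of simulated attack attempts that succeeded."""
    attempts = [e for e in events if e["type"] == "attack"]
    if not attempts:
        return 0.0
    return sum(e["succeeded"] for e in attempts) / len(attempts)

def regression_detected(before, after, threshold=0.05):
    """Flag an update if the attack success rate rose by more than `threshold`."""
    return attack_success_rate(after) - attack_success_rate(before) > threshold

# Simulated attack logs before and after a hypothetical platform update.
before = [{"type": "attack", "succeeded": False}] * 95 + \
         [{"type": "attack", "succeeded": True}] * 5    # 5% success rate
after  = [{"type": "attack", "succeeded": False}] * 80 + \
         [{"type": "attack", "succeeded": True}] * 20   # 20% success rate

print(regression_detected(before, after))  # True -> the update made attacks easier
```

In a WES-style setup, a flagged regression would point engineers at the update to investigate before it ever reaches human users.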
What Are the Real-Life Implications?
Battling hackers and scammers is certainly a plausible and sufficient reason for having a shadow copy of Facebook populated exclusively by bots. But it is hard to deny that such an experiment provides Facebook with many other opportunities.
For one, a realistic Facebook simulation is something any internet marketer would give a limb for. With some tweaking, it could become a handy instrument to gauge the effectiveness of ad campaigns, test new features, and otherwise predict the behaviour of Facebook users in bulk.
But there may be other implications, closely connected to cyber warfare. Independent and government-backed groups already experiment with creating extensive programmable bot networks within various social networks, simulating real-user behaviour and interacting with users for nefarious purposes, from pushing scams to meddling in elections. A good example is the “Russian bots” scandal that first erupted in 2016. Facebook came under a lot of heat over that incident, but almost four years later the company is still struggling to solve the issue.
Without a doubt, the WES project will give Facebook a better understanding of how bot networks operate and help it identify bot behaviours and take action against rogue bot accounts early. But another implication is that Facebook itself will end up with an extensive network of bots that are nearly indistinguishable from human users.
Researchers insist that during the experiment only the read-only bots should be allowed to access the human-populated Facebook, to avoid any interaction between humans and bots. But things may change in the future.