I used to write custom anti-cheat software for popular Minecraft networks, and I often discuss the different aspects of anti-cheat systems with other anti-cheat developers. The last time I had this discussion, they suggested I write up my thoughts in a post for others to read. This article covers my thoughts and opinions on various forms of anti-cheat and what I view as important.
The general benchmark for what constitutes a good anti-cheat is how accurate it is. How many cheaters does it catch? How many innocent players does it falsely detect? Perfect accuracy isn't achievable; therefore, the software must always make a compromise. Depending on factors such as the game and the company developing it, the ideal solution can vary. Minimizing the chances of false positives is the most commonly accepted approach since it increases player trust in the system.
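This trade-off can be framed in standard classification terms: precision (of the players you ban, how many actually cheated) versus recall (of all cheaters, how many you caught). A minimal sketch, with entirely invented numbers, of how the two philosophies compare:

```python
def anticheat_metrics(true_positives, false_positives, false_negatives):
    """Summarize an anti-cheat system's accuracy as (precision, recall)."""
    # Precision: of those punished, the fraction who actually cheated.
    precision = true_positives / (true_positives + false_positives)
    # Recall: of all cheaters, the fraction who were caught.
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# A strict system: catches most cheaters, but bans some innocents.
strict = anticheat_metrics(true_positives=900, false_positives=50, false_negatives=100)

# A conservative system: almost never bans an innocent, but misses more cheaters.
conservative = anticheat_metrics(true_positives=600, false_positives=1, false_negatives=400)
```

Favouring precision over recall is the "minimize false positives" philosophy described above: the conservative system lets more cheaters slip through, but each ban it does issue is almost certainly justified.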
Some games have used systems where any irregularities are flagged and manually verified before applying a ban. This approach, whilst generally accurate, comes with a massive labour cost. Other games may impose a ban for every irregularity and provide appeal systems to correct false positives. However, this approach can cause players to lose trust in the anti-cheat and lead to clashes between the community and developers. In games with severe cheating problems, it is sometimes worth sacrificing a few false positives to catch more cheaters.
When dealing with false positives on an individual basis, there should be a system to revert them. I have a false-positive VAC ban on my Steam account from early 2015 that cannot be appealed due to the lack of such a system.
There are numerous anti-cheat techniques, some of which can be employed simultaneously for improved detection.
Signature detection is a common form of anti-cheat, most famously known from Valve's "Valve Anti-Cheat" (VAC). This technique involves scanning system memory for known 'memory signatures'. In VAC, scanning is triggered by the detection of dynamic-library injection, among other undisclosed mechanisms. Signature detection, a technique borrowed from antivirus software, is a reactive system that requires no specific knowledge about the game, which allows it to be applied to many different games easily.
Signature detection can be accurate in some circumstances, though when used alone it is easily defeated; basic implementations can be bypassed merely by recompiling the cheat in question. The biggest downside of this technique is that cheats must already be known to the developers, which causes a delay between a cheat's release and its detection. The technique must also run on the client and can therefore potentially be spoofed. The need to run on the client and to manually find cheats' signatures often makes this impractical for smaller developers.
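To make the idea concrete, here is a minimal sketch of a byte-signature scan. The signature itself is invented for illustration; real signatures are extracted from a cheat's compiled code, and wildcard positions cover bytes (such as addresses) that vary between installs:

```python
WILDCARD = None  # position where the byte may legitimately vary

# Hypothetical signature for a known cheat -- not a real one.
KNOWN_CHEAT_SIG = [0x55, 0x8B, 0xEC, WILDCARD, WILDCARD, 0x90, 0xC3]

def scan_for_signature(memory: bytes, signature) -> int:
    """Return the offset of the first signature match, or -1 if absent."""
    for offset in range(len(memory) - len(signature) + 1):
        if all(expected is WILDCARD or memory[offset + i] == expected
               for i, expected in enumerate(signature)):
            return offset
    return -1
```

This also illustrates why naive implementations are fragile: recompiling the cheat shuffles the bytes, and the fixed signature no longer matches anything.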
Another form of anti-cheat, commonly used in games with a centralized server, is the validation of player input. This technique involves determining if what a player has sent to the server can happen in legitimate circumstances. These calculations can be very resource-intensive for the server and are therefore generally only used for simple-to-detect cheats. Typical cases include movement and damage validation.
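A movement check is the simplest example of this kind of validation. The sketch below is hypothetical: the speed limit and tolerance are assumed values, not taken from any particular game, and a real server would also account for effects like knockback and speed buffs:

```python
import math

MAX_SPEED = 5.6   # assumed maximum legitimate horizontal speed (units/second)
TOLERANCE = 1.10  # 10% slack for floating-point error and minor jitter

def is_move_plausible(old_pos, new_pos, dt_seconds):
    """Reject position updates that imply an impossible speed."""
    dx = new_pos[0] - old_pos[0]
    dz = new_pos[1] - old_pos[1]
    distance = math.hypot(dx, dz)
    return distance <= MAX_SPEED * TOLERANCE * dt_seconds
```

The check is cheap per packet, but doing it for every player on every tick is where the server cost adds up, which is why validation is usually reserved for the simple, high-impact cases.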
Server validation is my personal favourite and the form with which I am most familiar. This technique cannot entirely prevent cheating; it only makes it difficult for cheats to perform non-player-like actions, limiting their usefulness. This form of player validation is known as "pattern detection" and uses a heuristic approach to detect cheaters based on their behaviour on the server.
Limiting false positives with this method can be incredibly difficult. For example, it can be challenging to differentiate an extremely skilled player from a cheater using an aimbot. I have written a more thorough post on aimbot detection here.
Another limitation of this technique is the need to make allowances for lag. Because lag can cause players to behave erratically on a server, this form of detection cannot be too stringent. These allowances let cheats provide a small advantage while still operating within the permitted bounds.
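A common way to balance strictness against lag is a decaying "violation level": each suspicious event raises it, clean play lowers it, and only sustained violation triggers action. A minimal sketch, with invented thresholds and weights:

```python
class ViolationTracker:
    """Accumulate suspicion over time; act only on sustained violations.

    A single spike (often just lag) decays away harmlessly, while
    repeated violations push the level past the threshold.
    """

    def __init__(self, threshold=10.0, decay=0.5):
        self.level = 0.0
        self.threshold = threshold
        self.decay = decay  # subtracted on every tick of clean play

    def record(self, violation_weight=0.0):
        """Record one tick; return True if action should be taken."""
        if violation_weight > 0:
            self.level += violation_weight
        else:
            self.level = max(0.0, self.level - self.decay)
        return self.level >= self.threshold
```

The decay rate encodes exactly the trade-off described above: a fast decay forgives laggy players quickly but gives cheats that stay just under the threshold a small persistent advantage.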
A further technique, which goes by many names, involves having someone 'spectate' or monitor the player whilst they play. Whilst this can be accurate assuming the spectator understands the game, it is incredibly labour-intensive. Games such as 'Counter-Strike: Global Offensive' use a crowd-sourcing system where recorded matches are watched by numerous committee members who judge a reported player; a committee prevents one inexperienced spectator from incorrectly banning a player. When false positives do occur, it is generally due to exceptionally high skill levels from the player. Many administrators of private game servers use this technique manually to determine whether an accused player is cheating.
The punishments for cheating are an oft-debated topic, with opinions varying from too harsh to too lenient. On the harsh side, some services permanently ban a computer for a single cheating infraction. On the forgiving side, services may warn the player or revert actions related to cheating. None of these approaches is wrong; the punishment should fit the game's specific circumstances and the severity of the incident.
Another aspect you must consider is how to deal with false positives, which is especially important for more severe punishments. I touched upon this briefly in the section on accuracy, but there is much more to it. Ideally, you want a community that doesn't believe false positives happen, because otherwise illegitimate claims of false positives can tarnish a company's reputation. If someone claims to have been banned falsely and does not get unbanned, anyone who believes them may lose faith in the company, or stop purchasing its games for fear of being banned.
Concerning severity, there is pre-existing psychological research to draw on. When the punishment is mild, such as a three-warning system, people who are on the fence about cheating are more likely to cheat; when penalties are severe, the undecided are less likely to. Perhaps unsurprisingly, any ban longer than three years has been found unlikely to further reduce the likelihood of cheating: someone willing to cheat and receive a three-year ban will cheat and receive a lifetime ban. This finding is likely due to the prevalence of accounts created purely for cheating.
One contested aspect of cheating punishments is social punishment. The most well-known implementation of this is Valve's VAC: when banned, players have a red message on their profile page informing other players that this person has cheated in the past. This warning leads to social stigma throughout gaming communities, extending beyond the game and account in which the ban occurred. I've written an article about this in more depth from a personal perspective. Studies have shown that players with bans are likely to have a smaller friends list after the ban, and more banned friends.
One potential alternative to anti-cheat in some situations is to design the gameplay to limit the impact of cheats. One example is Facepunch's game Rust, which has a gameplay mechanic that allows players to protect their items and doors with a combination lock. Brute-forcing cheats quickly emerged that attempted codes until they found the correct one. These were patched by having the combination lock 'zap' the player upon an incorrect code, with the severity increasing on each wrong attempt. Not only does this add to the gameplay, it also prevents players from using this form of cheat. Of course, this method can't solve every form of cheating, but it's something to consider.
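The escalating-zap idea can be sketched in a few lines. The damage values below are invented for illustration and are not Rust's actual numbers; the point is that exponentially growing cost kills a brute-forcer long before a four-digit code space (10,000 combinations) can be exhausted:

```python
def zap_damage(wrong_attempts, base=5.0, growth=1.5, cap=100.0):
    """Damage dealt on the nth consecutive wrong code (1-indexed).

    Hypothetical values: damage grows geometrically and is capped.
    """
    return min(cap, base * growth ** (wrong_attempts - 1))

def attempts_survivable(health=100.0):
    """How many wrong guesses a full-health player survives."""
    n = 0
    while health > 0:
        n += 1
        health -= zap_damage(n)
    return n - 1  # the nth zap was fatal
```

With these assumed numbers a full-health player dies after a handful of wrong guesses, so the cheat becomes useless without the game needing to detect it at all.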
From a business perspective, creating a state-of-the-art anti-cheat system is rarely worth it. Minimizing cheating is an important goal to have, but not if it ends up consuming a large portion of available resources. If preventing cheats stops you from adding features or fixing other issues in a game, it may no longer be worth it. If a game is having serious cheating problems, it may be worth rethinking the type of anti-cheat in use, rather than spending large amounts of time implementing small improvements to the current system. Adding features and increasing the game's value for customers impacts a company's bottom line, while preventing cheating may not.
Only in particular situations does cheating have a statistically significant impact on the success of a game. If a game has only a minor cheating problem, it's probably not worth spending a lot of time and money on the solution.
Anti-cheat is not a 'one size fits all' system; it must be chosen appropriately for each game and for the resources the development team has available. Appropriate punishments depend on the support measures in place and the rate of false positives. Anti-cheat requires more thought than most people give it, and it is a vital part of a game. An inadequate or improperly managed solution is more of a burden on the players than on the cheaters it is meant to stop, whereas a well-established anti-cheat system can make the game more enjoyable by reinforcing players' confidence in the game's fairness.