Teetering on the Edge of Balance

January 22, 2008 at 12:17 am (Game Balance, Game Design, Philosophy)

In a previous post, The Tangled Concept of Balance, I argued that balance should be thought of as the game’s stability, or ability to maintain the shape of its gameplay under the stresses of players who are trying to win.  Actually achieving balance is hard.  Balancing a game is often a long and laborious process, requiring designers to carefully explore the large possibility space of different game strategies and search for problematic special cases.

There’s no “magic bullet” that is going to make this problem go away, but there are good design practices that can make it easier.  In this post, I’m going to discuss my favorite trick:  building in stability.

Lots of times, people think of balancing as simply tweaking the numerical parameters of the game, after everything else is done…how much a fireball costs relative to a lightning bolt, how much damage you can do with a battle axe compared to a dagger, how much faster than a soldier a thief can run, and those sorts of things.  This is an important step in balancing, but it’s hard to make more than fine adjustments at this stage.

Think of these numerical parameters as the ballast in your game:  you can use more or less in different places to compensate for unexpected forces.  But your most powerful tools are the shape and structure of the game, which determine how much stress it will be under and how the forces will be distributed.  If the hull is full of holes (or is so thin that holes are invariably created), trying to solve the problem by shifting the ballast is a difficult and desperate strategy.  If you design the game’s overall structure to be sturdy, you won’t have to.

What does this mean in concrete terms?  Two (related) things:

  1. Strategies that are similar to the best strategies are also pretty good (doing something nearly perfect produces nearly perfect results).
  2. A small change to the parameters of the game (like the cost of a fireball) produces only a small change in the optimum strategies.

The first is good because it decreases the chances that the designer will be surprised by an effective strategy she didn’t anticipate.  If there are a dozen things that are individually really bad moves but that turn out to be an awesome strategy when done in exactly the right combination, you may never know it exists until some player randomly stumbles across it and uses it to clobber your game’s balance.  It would be bad even to discover that a strategy you knew about, when refined to a slightly greater precision than you used in testing, suddenly becomes much better than you imagined.  Unless the designer knows (at least in general terms) what the good strategies are, and how good they are, there’s not much chance of the game being balanced.

If the strategies that are “close” to the best strategies are also good strategies, then it becomes easier for the designer to find all of the general areas where good strategies reside, so she won’t be as likely to be surprised by one she missed.  Additionally, even if she doesn’t pin down the perfect execution of a strategy, she’ll have a general idea of how good the perfect execution is going to be, and so can still take it into account.

The second trait is important when you get to the stage of shifting the ballast around, because it ensures that the game will react in some sane and predictable way to those shifts.  If a small change in your game’s parameters sometimes radically alters the optimal strategies and sometimes produces no noticeable change, it’s going to be very difficult to fine-tune the balance of the game.  It’s much easier if you can “home in” on a balanced set of parameters gradually.

This trait also means that you don’t have to find the perfect set of parameters for your game–because something close to perfect is going to be pretty good, since a small change only produces small changes.  You only need to get close to the right parameters, and your game will be pretty well balanced (and of course, you can continue to make gradual changes to get better and better).  This can save innumerable headaches later on.
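
As a toy illustration of this "homing in" process, here's a minimal sketch.  The measure_win_rate oracle is hypothetical (in practice it would be playtest or simulation data), and the whole approach depends on the property above: small changes in the parameter produce small, predictable changes in the results.

```python
# A toy sketch of gradually homing in on a balanced parameter.  The
# measure_win_rate oracle is hypothetical; it stands in for playtest data.
def tune(measure_win_rate, lo=1.0, hi=20.0, target=0.5, steps=20):
    """Bisect on one parameter until the strategy wins about `target` of games."""
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if measure_win_rate(mid) > target:
            lo = mid   # the strategy still wins too often: raise the cost
        else:
            hi = mid   # overshot: bring the cost back down
    return (lo + hi) / 2.0

# Stand-in oracle: the win rate falls smoothly as the fireball's cost rises.
print(tune(lambda cost: 1.0 / (1.0 + (cost / 7.0) ** 3)))  # converges near 7.0
```

If a small change in cost could swing the win rate wildly (or not at all), no such gradual search would work.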

Control as a Utility Function

In order to analyze the mathematical properties of game balance, we need to describe the player’s goals.  Therefore, we introduce the abstract concept of control, defined to be the amount of power the player has to change the outcome of the game (and, in particular, to ensure that he wins).  When analyzing a game’s balance, we assume that players are trying to maximize their control (in order to maximize their chances of winning).  In the language of economics, this is the player’s utility function.

As in economics, we don’t usually know precisely what this function is–in fact, we probably don’t want to, because if the player could ever figure out exactly what this function is, he could theoretically find the exact optimal strategy for the game, which would probably take most of the fun out of it.  So we actually want this function to be too complicated for anyone to figure out exactly.

However, we can often make some general observations about this function without knowing its exact form.

Marginal Return

In most games, players can make trade-offs along some continuum.  This might be explicit in the mechanics, such as a trade-off between having an avatar with more strength or more intelligence, but it can also be more abstract, such as a trade-off between aggressive and defensive behavior.  If we think of these assets as variables in the control function, we can look at the marginal utility of each, or how much control the player gains by getting a little more of one asset (e.g. how much more influence the player has over the game’s outcome if his avatar is a little stronger).

Though almost all games actually work in discrete units, it often simplifies the analysis if we pretend the player’s choices are continuously divisible, and then (for those familiar with calculus) we can look at the marginal utility as being the derivative of the control function with respect to that variable.  If our game math is well-behaved, we can expect the discrete solution to be similar to the continuous approximation.
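
To make that concrete, here's a minimal sketch with an entirely hypothetical control function; the finite difference stands in for the partial derivative ∂C/∂a:

```python
import math

def control(a: float, b: float) -> float:
    """Hypothetical control function: the player's influence over the outcome."""
    return math.sqrt(a) + math.sqrt(b)

def marginal(a: float, b: float, eps: float = 1e-6) -> float:
    """Finite-difference estimate of the marginal utility of asset a."""
    return (control(a + eps, b) - control(a, b)) / eps

print(marginal(1.0, 4.0))  # ~0.500, i.e. 1 / (2 * sqrt(1))
print(marginal(9.0, 4.0))  # ~0.167: the margin falls as a grows
```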

Of course, the marginal utility of an asset is rarely constant–in fact, we usually don’t want it to be.  It may depend on the current level of that asset, and also the current levels of other assets.  But, again, we probably don’t want to be able to solve for it exactly, as that would make playing well too easy for the players.

Perfect Equality

One might imagine that we want everything to be completely even–no favoritism of one asset over another, no fancy circumstantial rules…essentially, for the marginal return of each asset to be a constant.

[Figure 1: Constant Margins]

The catch here is that all strategic incentive to choose one strategy over another has been eliminated.  Sometimes that’s a good thing–you may want that for choices that are intended to be purely stylistic, rather than strategic.  But this has become a cosmetic choice, and we need to have some other choices if the game’s going to have any hope of being strategically interesting.
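
Sketched numerically (with a hypothetical 10-point budget split between two assets):

```python
# Constant margins: dC/da = dC/db = 0.5 everywhere, so every split of the
# hypothetical 10-point budget yields exactly the same control.
def control(a: float, b: float) -> float:
    return 0.5 * a + 0.5 * b

for a in range(0, 11):
    print(a, 10 - a, control(a, 10 - a))  # always 5.0: a purely cosmetic choice
```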

∂²C/∂a² and Diminishing Returns

So, stylistic choices aside, we presumably have some real strategic considerations for the player, and that means, at some point, that one asset’s marginal return must be higher than another’s.  At this point, the player may ask why he shouldn’t simply put all of his resources into the asset with the highest margin.  It’s a fair question–but, of course, we hope our game is not so easily solvable.  So the asset that has the highest margin now must not always have the highest margin, and that means that the margins have to change. 

There are a few ways this can happen.  The first interesting property of an asset’s marginal value is how that margin changes as the asset changes–for example, does the marginal value of another point of strength go up or down as you become stronger?  (For the folks who’ve taken calculus, this would be the second derivative.)

The easiest thing to do is to apply diminishing returns–to cause the marginal value of an asset to decrease the more of that asset the player already has.  This means the second derivative is negative.

[Figure 2: Diminishing Returns]

In the example depicted in figure 2 (above), an optimal strategy is obtained by dividing resources between assets a and b, because the marginal value of either decreases as its level increases.
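
Here's the same situation as a numeric sketch, assuming the toy function C(a, b) = sqrt(a) + sqrt(b) and a fixed 10-point budget:

```python
import math

def control(a: float) -> float:
    b = 10.0 - a                      # whatever isn't spent on a goes to b
    return math.sqrt(a) + math.sqrt(b)

for a in [3.0, 4.0, 5.0, 6.0, 7.0]:
    print(a, round(control(a), 4))
# 3.0 4.3778
# 4.0 4.4495
# 5.0 4.4721  <- the optimum: an even split
# 6.0 4.4495  <- neighboring strategies are within ~0.5% of optimal
# 7.0 4.3778
```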

Notice in the third graph that the control function is almost flat around the position of optimum strategy–strategies near that point are nearly optimum.  If we wanted to shift that optimum strategy a little to one side, or make it a little higher or lower, that should be relatively easy, because the function doesn’t change dramatically in its vicinity–making fine adjustments to the game’s balance will be relatively easy.  Contrast this to the opposing case of increasing returns:

[Figure 3: Increasing Returns]

In this example (figure 3), a player who wishes to win is forced to adopt a very extreme strategy in order to do well, because the more invested the player becomes in a particular asset, the more valuable continued investment in that asset becomes.  The control function is very steep near the optimum strategies, because small changes around those points represent trading off between one asset with a high marginal value and one with a low marginal value.

The optimal strategies are probably not hard for the player to find (they just pursue their first thought to its logical extreme), but calculating the maximum effectiveness of these strategies may be difficult, because the curves are so steep–a small error in the calculations will result in a large error in the predicted effectiveness.  Additionally, a small change to the parameters of the game can result in a major change in the effectiveness of one of the optimum strategies (again, because the curves are so steep), but if we want to move the optimal strategies horizontally, it requires drastic changes to the game.  This game will be easy to play, but hard to balance.
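
The same toy setup with increasing returns (again, a sketch rather than any particular game's math):

```python
def control(a: float) -> float:
    b = 10.0 - a
    return a**2 + b**2   # increasing returns: each margin grows with its asset

for a in [0.0, 1.0, 5.0, 9.0, 10.0]:
    print(a, control(a))
# 0.0  100.0  <- all-in on b is optimal...
# 1.0  82.0   <- ...and one point of hedging costs 18% of your control
# 5.0  50.0   <- an even split is the *worst* strategy
# 9.0  82.0
# 10.0 100.0  <- all-in on a is equally optimal
```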

Note that the concavity of the marginal returns has not been considered in this analysis.  Those pictured in figure 2 are concave up and those in figure 3 are concave down (which is likely the most sensible way to construct them, given the constraints), but the same general conclusions hold even if this is not the case.

∂²C/∂a∂b and Synergies

The player can be induced to shift attention from one asset to another as the object of his attention drops in marginal value, but another way to curb the tendency towards excess is to increase the marginal value of another asset.  If the marginal value of asset a increases as I invest in asset b, then I am more likely to shift resources towards a (if the effect is mutual, I’m likely to divide resources between them on an ongoing basis).  You can think of this as a synergy between a and b, and in terms of calculus, this means that ∂²C/∂a∂b > 0.

[Figure 4: Synergy]

Notice the changes in the axis labels compared to previous graphs (we are now examining ∂C/∂a as a function of b, rather than a, and vice-versa).  The final graph is very similar to that in figure 2, and a similar analysis applies.  In fact, with only two variables, mutual synergy looks pretty much the same as diminishing returns, except with a constant offset in overall control.  However, it’s useful to examine synergies separately, because we often have more than two variables, and synergies (or anti-synergies) can cause specific combinations of assets to perform better or worse than others.
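
In the same toy framework, mutual synergy can be sketched with a positive cross term, so that ∂²C/∂a∂b = 1 > 0:

```python
def control(a: float) -> float:
    b = 10.0 - a
    return a + b + a * b   # investing in b raises the margin on a, and vice versa

for a in [0.0, 2.0, 5.0, 8.0, 10.0]:
    print(a, control(a))
# 0.0  10.0
# 2.0  26.0
# 5.0  35.0  <- the synergy term a*b peaks at an even split
# 8.0  26.0
# 10.0 10.0
```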

Anti-synergies look pretty much like you’d expect:

[Figure 5: Anti-Synergy]

Compare with figure 3.  Again, the player is forced to specialize to be effective.

Isolated anti-synergies aren’t necessarily a bad thing, if you really intend for two tactics not to be mixed (for example, if you don’t want any hybrid fighter/casters in your game, then you can make “fighter” attributes and “caster” attributes anti-synergistic).  However, the sharp curves near the optimal strategies mean that you can’t fine-tune the trade-off; it remains an all-or-nothing deal unless you make drastic changes to your game.
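
A sketch of the anti-synergistic case, with a negative cross term (think of a and b as hypothetical "fighter" and "caster" points):

```python
def control(a: float) -> float:
    b = 10.0 - a
    return a + b - 0.5 * a * b   # negative cross term: the assets undercut each other

for a in [0.0, 2.0, 5.0, 8.0, 10.0]:
    print(a, control(a))
# 0.0  10.0   <- pure caster is optimal
# 2.0  2.0    <- even a modest hybrid is punished hard
# 5.0  -2.5
# 8.0  2.0
# 10.0 10.0   <- pure fighter is equally optimal
```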

Breakpoints and Discontinuity

The actual mechanics used in many games don’t always make such nice, smooth curves as depicted in the examples above.  Many games have breakpoints, places where the marginal value of a variable makes a sudden jump up or down (if we’re pretending our game units are continuously divisible, these often manifest as discontinuities or points of non-differentiability in the function).

[Figure 6: Breakpoints]

The sharp corners in the control function (where the angle of the line changes suddenly) make it harder to balance strategies in that area, because the strategic implications of small changes in the game’s parameters become less predictable.

[Figure 7: More Breakpoints]

Discontinuities in the control function are even more noticeable.  Small changes in the game’s parameters may gradually affect the optimal strategy until you get up to the breakpoint, and then your changes may have no effect at all on the strategy for a long time, before the changes become drastic enough to jump to the other side of the discontinuity.

 Additionally, it’s possible to be very close to the optimal strategy, yet produce very poor results–this makes it more likely that you’ll overlook the strategy in testing, and punishes players disproportionately if they make a minor error in the execution of this strategy.
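
Here's a sketch of such a discontinuity, assuming a hypothetical rule like "8 or more points of strength lets you wield two-handed weapons":

```python
def control(a: float) -> float:
    b = 10.0 - a
    bonus = 5.0 if a >= 8.0 else 0.0   # the breakpoint: a sudden jump in value
    return a + b + bonus

for a in [7.0, 7.9, 8.0, 9.0]:
    print(a, control(a))
# 7.0 10.0
# 7.9 10.0   <- "very close" to the optimum, yet getting none of its value
# 8.0 15.0   <- a minor execution error costs a third of the player's control
# 9.0 15.0
```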

Synopsis

Though we often don’t know the control function (and don’t necessarily want to), examining the general behavior of specific interactions of your game mechanics can often reveal clues to the game’s overall stability.  Smooth, gradual changes–especially near good strategies–make the game easier to understand and to adjust later on in the development process.

Though many of the examples above may seem woefully simple–we probably wouldn’t think much of a game where the optimal strategy was actually to split resources exactly evenly between all possible pursuits, for example–the same general reasoning applies even when the game systems become more complex.  You can gradually hone the parameters of your game to achieve greater balance if your broader game mechanics are “well behaved,” but steep slopes and sharp corners lead to an unstable system that’s difficult to calibrate.

This doesn’t tell you how to balance your game, or even how your mechanics should work–but it does provide some rules-of-thumb to make your game easier to balance in the long run.  If you want to balance your game, don’t just focus on the ballast–focus on the hull.

2 Comments

  1. Tommi said,

    This is good stuff. Have you studied some combination of game theory and analysis or is this original work?

  2. Antistone said,

    The concepts of marginal utility, diminishing returns, breakpoints, etc. are hardly original, and have all been applied to games before, but I don’t believe I’ve seen them discussed elsewhere in the specific context of game *design*, and some of the specific observations I’ve made are original, as far as I know (though it wouldn’t surprise me if others have formulated similar ideas).

    I haven’t studied game theory, except that it was covered briefly in a couple of economics courses I took.
