Insofar as there is ever a theme
in blog posts, there has been a bit of a trundle around ways of
conceptualising, or at least considering, the shape of rules. Firstly, I noted perturbation
theory, the rather pretentious name I assigned to the idea that units, in
battle, are slowly reduced in capacity until they run away. Secondly, I considered
a crisis sort of rule, in which a unit can crumble the moment a threat is perceived.
As has been observed in some of
the comments, the outcome for a unit is more than a function of simply shooting
and being shot at, and the ratio between the two. While casualties may have
some impact, perhaps more important is command and control within the unit, and
this includes the ability of lesser commanders, or even ordinary soldiers, to
take control at a point of crisis.
The modern wargame rule set,
however, tends to focus on the unit as a whole. The argument is that a general would not know
that the Grenadiers have just had their colonel wounded and that the major is
taking over command. He might note that the unit is hesitating, slowing in its
advance or whatever, but the cause would be opaque to him. He might just mutter
‘Tell the Grenadiers to get on with it’ to an ADC and then turn to other
matters.
This sort of approach leads us to
consider much more widely the statistics of battles, and here we hit a snag.
There are many reasons why a unit might hesitate. A disruption to the chain of
command is just one of them. So far as the general is concerned, a unit hesitating
in its advance is an event, and that event might be replicated in many other
units across the army. The cause of the event, on this model, is irrelevant and
unknown by the observer.
However, if we focus more closely on the unit and what is happening to it in detail, we can say that the event of the colonel being wounded is a specific thing. On this more detailed view, the chance of the poor chap being hit becomes, at least to some extent, calculable. We could consider the colonel in his recognisable uniform, and
ponder the efficacy of skirmishers sniping. We could classify the density of
shooting incoming to the unit, and calculate the chance of any individual being
hit. And so on. I am not suggesting that we could, in fact, do this
calculation, but we could possibly come up with something plausible.
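By way of illustration only, a toy version of such a calculation might look like the sketch below. Every figure in it is invented, and the model (incoming hits shared across the unit, with the colonel drawing a triple share for his conspicuous uniform) is an assumption for the sake of the example, not a claim about actual musketry.

```python
# Toy calculation: chance the colonel is hit in an hour of fire.
# Every figure here is invented for illustration.
SHOTS_PER_HOUR = 2_000   # incoming musket and skirmisher shots
HIT_RATE = 0.02          # fraction of shots that strike anyone at all
MEN_IN_UNIT = 600        # targets sharing the incoming fire
CONSPICUOUSNESS = 3.0    # the colonel draws a triple share of aimed fire

expected_hits = SHOTS_PER_HOUR * HIT_RATE
# The colonel's share of any given hit, against MEN_IN_UNIT - 1 others.
p_colonel_per_hit = CONSPICUOUSNESS / (MEN_IN_UNIT - 1 + CONSPICUOUSNESS)
p_colonel_safe = (1 - p_colonel_per_hit) ** expected_hits

print(f"Expected hits on the unit per hour: {expected_hits:.0f}")
print(f"Chance the colonel is hit at least once: {1 - p_colonel_safe:.1%}")
```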
If we stick with the more global
view, however, we have to simply try to work out the probability of a unit hesitating,
whatever the root cause might be. Here, we have a problem, because we simply do
not have the data required. An event is an event. Its cause is a unique set of
circumstances, hidden to us as observers.
In theory (if not in practice) we can calculate the probability of an event happening. We could do this by observing
how often a unit in an army does hesitate, and work from there. I have, of
course, no idea what the outcome of that might be, but suppose we come up with
a number that states that one fifth of the units in an army will hesitate once
in a four-hour battle. Or, put another way, one twentieth of the units will
hesitate per hour.
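That conversion is only roughly right: if each unit makes an independent one-in-twenty check every hour, the chance of hesitating at least once in four hours is 1 − (19/20)^4 ≈ 0.185, near enough to one fifth. A minimal sketch of the check, with every figure assumed rather than drawn from any data:

```python
# Sketch: does a 1-in-20 chance per hour give roughly one fifth of
# units hesitating over a four-hour battle? All figures are assumed.
import random

random.seed(1)
UNITS = 10_000
HOURS = 4
P_PER_HOUR = 1 / 20

hesitated = sum(
    any(random.random() < P_PER_HOUR for _ in range(HOURS))
    for _ in range(UNITS)
)
print(f"Simulated fraction hesitating: {hesitated / UNITS:.3f}")
print(f"Analytic value: {1 - (19 / 20) ** HOURS:.3f}")  # about 0.185
```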
Now, there are two problems, at
least, jumping out here. The first is the difference between the set of events
and the ideal probabilities I have just stated. If the process is statistical,
then there will be fluctuations away from the ideal probabilities. The only way
to try to cure this is by increasing the number of tests. We know that the
ideal probability of tails in a coin toss is ½. Even if we make 5,000 such coin
tosses, we will not land up with exactly 2,500 tails. If we were seeking to define the ideal probability from the empirical results, we would end up with strange probabilities.
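A minimal sketch of the point: five runs of 5,000 fair-coin tosses, none of which is likely to give exactly 2,500 tails.

```python
# Sketch: empirical frequencies fluctuate around the ideal probability.
# Repeated runs of 5,000 fair-coin tosses rarely yield exactly 2,500 tails.
import random

random.seed(42)
TOSSES = 5_000

for run in range(1, 6):
    tails = sum(random.random() < 0.5 for _ in range(TOSSES))
    print(f"Run {run}: {tails} tails (empirical p = {tails / TOSSES:.4f})")
```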
Of course, we are not so naïve. We can work out the probabilities from first principles and proceed from there. But in a battle, or even in all the battles in history, there are not, I suspect, sufficient unit
histories to define a probability of hesitation. We might be able to say that
units seem to hesitate with a probability of one in twenty per hour, but that
is not necessarily helpful. Along the same lines, we might be able to say that
infantry squares were broken twice in the Napoleonic wars, but that is rather hamstrung by the fact that we do not know how many times squares were charged, and so can make no stab at the probability of a square collapsing. In short, we cannot approach the
ideal probability, because we do not have sufficient evidence.
Ideally, of course, we would work out the underlying frequency and use that, with suitable fluctuations, in wargame rules. If we could suggest that ‘a unit under fire will hesitate one time in twenty’ then we can roll a die and get on with it. The sort of calculation usually made, however, tends towards the unrealistic, in that firing a musket one hundred times at a battalion-sized sheet might give us some interesting data on ‘accuracy’, but it takes little account of average battlefield conditions, where what matters to most people is not getting hit yourself. While these sorts of experiment might provide a useful upper limit of (say) eighty per cent efficacy, an upper limit is all they provide.
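At the table, that one-in-twenty figure reduces to a die roll. A minimal sketch of how it might play out, assuming a d20 check per unit per hour; the unit names and numbers are illustrative, not from any published rule set.

```python
# Sketch: ‘a unit under fire will hesitate one time in twenty’,
# played as a d20 check per unit, per hour. Illustrative only.
import random

random.seed(7)

def hesitates() -> bool:
    """Roll a d20; a 1 means the unit hesitates this hour."""
    return random.randint(1, 20) == 1

units = [f"Battalion {i}" for i in range(1, 21)]
for hour in range(1, 5):
    waverers = sum(hesitates() for _ in units)
    print(f"Hour {hour}: {waverers} of {len(units)} units hesitate")
```

Even with twenty units over four hours, the run-to-run variation is considerable, which is exactly the fluctuation problem raised above.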
So statistical ideas are
necessary to wargame rules, but their application is by no means as simple as
we would like. We can monitor events, but only to some extent. We can try to classify events, although, of course, that classification depends on what we are doing and the level of detail we are interested in. But we do not seem to be able to access the ideal probabilities, unless we persuade the world to have a lot more battles, and that is almost certainly a bad idea.