"It's something a statistician might wear around the neck." Paul Dussere, Mathematician.
"On Sept. 5, [The Boston Globe] published a survey showing [Gore] and Bradley in a statistical tie in New Hampshire". (James Bennet, "The Reeducation of Al Gore." The New York Times Magazine, January 23, 2000.)
What exactly is a statistical tie? Is it a special sort of tie, and if so, how does it differ from a "normal" tie?
Strangely enough, in almost all cases, a statistical tie is not an exact tie at all! A statistical tie is a different type of tie altogether.
In short, a statistical tie is a polling result for which the difference between two (or more) candidates is of the nature we would expect sampling error alone to reasonably explain. Presuming the actual election result between the candidates to be exactly a tie, poll results will still, due to sampling error, differ somewhat. If that difference is in the range of differences that are reasonably a result of sampling error alone, then the poll result comparing the contending candidates is ruled a statistical tie. More informally, we might say that a statistical tie occurs when the poll results lead us to conclude the election is "too close to call."
To illustrate, let's make use of a standard deck of playing cards. Assume each card represents a voter; black cards are voters for Candidate B, red cards for Candidate R. There are 52 voters and the election will result in a (true) tie. Now, suppose our public opinion poll samples 10 voters (cards) randomly. (You can do this at home: just be sure to riffle shuffle at least 7 times to thoroughly randomize the deck.) The poll result might be an exact tie; more likely it is not. For instance, 6 reds (R votes) and 4 blacks (B votes) might be your result. Such a result is common when sampling from a standard deck, so it falls under the heading of statistical tie. Note that a nonstandard, unbalanced deck might well produce such a split. Still, because the result is consistent with what's reasonably expected from a balanced deck, the result is designated a statistical tie! On the other hand, a poll result of 10 Rs and 0 Bs is very unlikely (although not impossible) to occur when the true election result is really a tie. A split of 10-0 would not be a statistical tie.
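For readers who'd rather not shuffle, here is a minimal Python sketch of the same experiment. (The variable names and the fixed seed are illustrative choices, not part of the original exercise.)

```python
import random

# A true tie: 26 red cards (votes for R) and 26 black cards (votes for B).
deck = ["R"] * 26 + ["B"] * 26

random.seed(1)  # fixed seed so the run is repeatable
sample = random.sample(deck, 10)  # poll 10 "voters" without replacement

r_votes = sample.count("R")
b_votes = sample.count("B")
print(f"Poll result: {r_votes} for R, {b_votes} for B")
```

Change the seed (or remove it) and rerun a few times: splits like 6-4 and 7-3 turn up constantly, while 10-0 essentially never does, even though the deck itself is perfectly balanced.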
Who determines what does and doesn't constitute a statistical tie? For example, a split of 6-4 seems reasonable as a statistical tie, and 10-0 seems unreasonable, but where exactly is the cutoff? Is 7-3 a statistical tie? How about 8-2? And 9-1? These are determinations that are made by statisticians, who use mathematics to obtain the likelihood of various results.
Consider our little mini-election above. I've tabulated each of the possible poll results below.
Poll Result              Likelihood
Vote for R   Vote for B
10           0           0.03%
9            1           0.51%
8            2           3.21%
7            3           10.81%
6            4           21.76%
5            5           27.35%
4            6           21.76%
3            7           10.81%
2            8           3.21%
1            9           0.51%
0            10          0.03%
These quantities form what is called a hypergeometric distribution of probabilities. How they are obtained is unimportant. They represent the chance of the given result when randomly sampling 10 voters from a pool of 52 in which the two candidates are tied at 26 votes each.
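The table's likelihoods can be reproduced directly from the hypergeometric formula using only Python's standard library (the function name here is mine):

```python
from math import comb

def tie_poll_probability(k, n=10, red=26, total=52):
    """Chance of drawing exactly k red cards when sampling n cards
    without replacement from a deck with `red` reds out of `total`."""
    return comb(red, k) * comb(total - red, n - k) / comb(total, n)

# Print the full distribution, from 10-0 down to 0-10.
for k in range(10, -1, -1):
    print(f"{k:2d} R, {10 - k:2d} B: {tie_poll_probability(k):6.2%}")
```

The 11 probabilities sum to 100% (up to rounding), and the distribution is symmetric: a 10-0 split is exactly as likely as a 0-10 split.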
Procedures used in reporting poll results take the confidence level to be about 95% and consequently are in error 5% of the time. (There is no scientific reason for the use of 95%/5%; these figures are accepted for historical reasons. Most people agree that a 1 in 20 (5%) error rate is acceptably small, especially when any error is not particularly harmful; improper poll results have never injured a single voter in any fashion.)
You can see that the results from 7-3 through 3-7 account for the middle 92.5% of outcomes, as close to 95% as we can get here. So, those results are designated statistical ties.
A statistical tie occurs when a poll result is "close enough" to what would be expected in an election that really is tied. If you reread above, you'll see that the term "reasonable" appears often. In media polls especially, "reasonable" is quantified by the 95% figure called the "confidence."
Other equivalent expressions: statistical dead heat and too close to call.
When sample sizes are relatively large, there's a convenient way to make the determination with a hypothesis (significance) testing procedure.
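A common large-sample version of that procedure is a two-sided z-test of the hypothesis that the race is truly tied (p = 0.5), using the normal approximation in place of the exact hypergeometric calculation. Here is a hedged sketch; the function name and the example sample sizes are mine:

```python
from math import sqrt, erf

def is_statistical_tie(votes_for_r, sample_size, alpha=0.05):
    """Two-sided z-test of H0: p = 0.5 (a true tie), normal approximation.
    Returns True when the poll result is a statistical tie at level alpha."""
    p_hat = votes_for_r / sample_size
    se = sqrt(0.25 / sample_size)            # std. error under H0: p = 0.5
    z = (p_hat - 0.5) / se
    # Standard normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value > alpha                   # "tie" if we cannot reject H0

print(is_statistical_tie(520, 1000))  # a 52%-48% poll of 1,000 voters
print(is_statistical_tie(560, 1000))  # a 56%-44% poll of 1,000 voters
```

With 1,000 respondents, a 52%-48% split is still a statistical tie (the gap is within what sampling error alone could reasonably produce), while a 56%-44% split is not.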