We knew people struggled to understand large numbers. What we did not fully appreciate was the extent of that struggle, or how quickly it begins to shape moral and political discourse.
That became one of the richest findings to come out of our Objectivity AI interactions. In the earliest alpha-testing period, we had a small but unusually transparent cohort: 76 individuals who agreed to full data sharing for focus-group testing in exchange for more generous token allocations. Over the course of a year, their use of AI tools through the Objectivity AI platform gave us a rare longitudinal view into how people processed information, reacted to news, asked questions, corrected facts, and made sense of public events.
This was never designed as a conventional political poll. In fact, we were deliberately trying to avoid the weaknesses of polling as it is usually practiced. We did not want to ask people, “Are you left or right?” or “Do you support X or Y?” We were less interested in political self-description than in civic behavior. The system we developed was closer to what we called a civic instinct profile: a dynamic picture of how a person responds to information, ambiguity, authority, conflict, harm, uncertainty, and social pressure.
The exact architecture of that profile is proprietary, and for good reason. The value is not in reducing people to a single ideological score. The value is in observing the pattern of their interactions with the world. How do they respond to a headline? What do they notice first? Do they ask for context, challenge the source, search for moral responsibility, correct a statistic, express empathy, retreat into factional language, or look for missing information? Over time, those small reactions become more revealing than a direct answer to a political survey.
One of the most revealing patterns had to do with numbers.
At one level, this was not surprising. Cognitive science has long shown that humans possess an approximate sense of number, but that our intuitive grasp of quantity is limited. We are much better at dealing with small, concrete sets than with massive abstract magnitudes, and research on the “mental number line” suggests that numerical intuition is often compressed rather than linear. In other words, our minds do not naturally feel the difference between 10,000 and 100,000 the same way a spreadsheet does. The math scales; the emotion does not.
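One way to see what "compressed rather than linear" means in practice: on a logarithmic scale, which the mental-number-line literature suggests intuition roughly approximates, 10,000 and 100,000 sit only one unit apart. A small illustrative sketch (the base-10 log here is an expository choice, not a claim about the exact psychophysics):

```python
import math

a, b = 10_000, 100_000
print(b - a)                          # linear gap: 90000
print(math.log10(b) - math.log10(a))  # logarithmic gap: 1.0, a single "notch"
```

On that compressed scale, the step from 10,000 to 100,000 is the same size as the step from 1,000 to 10,000, which is roughly what the felt sameness of "very large numbers" looks like.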
The classic example is the difference between one million and one billion. Mathematically, one billion is one thousand times larger than one million. But emotionally, both often land in the same mental bucket: “a very large number.” Time helps make the difference more visible. One million seconds is about 11.6 days. One billion seconds is about 31.7 years. One trillion seconds is about 31,700 years. The zeros are not decorative. They are the difference between a long vacation, a human adulthood, and the span from prehistory to the present.
But even that example is small compared with the numbers people encounter in probability, cosmology, computation, and war.
Take a deck of 52 playing cards. The number of possible arrangements is 52 factorial, or 52!, which means 52 × 51 × 50 × 49 and so on down to 1. The result is:
80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000
In scientific notation, that is about 8.07 × 10^67 possible deck orders.
Scientific notation is useful because it lets us calculate with numbers too large to write comfortably. But it does not automatically make those numbers meaningful. If a person shuffled one deck once per second, with no repeats, it would take about 2.56 × 10^60 years to pass through every possible arrangement. The universe itself is about 13.8 billion years old, meaning that this shuffle-time is roughly 1.85 × 10^50 times the age of the universe. (NASA StarChild)
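All three figures can be reproduced with exact integer arithmetic; a quick check in Python (the 13.8-billion-year age is the NASA StarChild figure from the references, and the year length is taken as 365.25 days):

```python
import math

deck_orders = math.factorial(52)              # exact 68-digit integer
print(f"{deck_orders:.2e}")                   # 8.07e+67

SECONDS_PER_YEAR = 86_400 * 365.25
years_to_exhaust = deck_orders / SECONDS_PER_YEAR  # one shuffle per second
print(f"{years_to_exhaust:.2e}")              # 2.56e+60 years

AGE_OF_UNIVERSE_YEARS = 13.8e9                # NASA StarChild estimate
print(f"{years_to_exhaust / AGE_OF_UNIVERSE_YEARS:.2e}")  # 1.85e+50 universe ages
```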
That is where intuition breaks. At that scale, “large” stops being a description and becomes a failure mode. The number of atoms in the observable universe is commonly estimated on the order of 10^80, another figure that is mathematically compact but conceptually almost unusable.
So yes, we knew people could not really understand huge numbers. The deeper finding was not that people struggled with cosmic-scale quantities. The deeper finding was that the same compression happens with human-scale tragedy.
This became especially clear in how participants reacted to civilian casualty data.
A major part of Objectivity AI’s conflict-related work involved consolidating casualty reporting across multiple sources, especially in environments where reporting is fragmented, politicized, delayed, incomplete, or incentivized toward distortion. In such contexts, responsible analysis cannot simply trust one number from one side and declare the matter settled. It has to compare sources, examine methodologies, distinguish between confirmed and estimated figures, account for undercounting and overcounting, and where necessary provide a range rather than a false point estimate.
This is where the discourse often breaks.
Imagine one report says 100,000 casualties and another says 70,000. That difference matters. It is 30,000 people. Mathematically, 100,000 is about 43 percent higher than 70,000. Or take a wider hypothetical contrast: 100,000 versus 20,000. That is a fivefold difference. No serious analyst should treat those numbers as interchangeable.
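The arithmetic behind those comparisons is trivial to state precisely, which is part of the point: the numbers diverge cleanly even when the emotional response does not. A minimal sketch:

```python
def pct_higher(a: float, b: float) -> float:
    """How much larger a is than b, as a percentage of b."""
    return 100 * (a - b) / b

print(round(pct_higher(100_000, 70_000), 1))  # 42.9, i.e. "about 43 percent higher"
print(100_000 - 70_000)                       # 30000 people
print(100_000 / 20_000)                       # 5.0, a fivefold difference
```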
But the emotional response we observed did not scale proportionally with the count.
Once a casualty figure reached a certain moral threshold, many participants reacted primarily to the existence of mass civilian harm rather than to the precise difference between competing estimates. A headline saying 70,000 casualties and a headline saying 100,000 casualties did not reliably produce a 43 percent stronger emotional reaction. In many cases, it did not even produce a clearly distinguishable emotional reaction. The numbers were different. The moral category was the same.
That does not mean accuracy is unimportant. It means accuracy and emotional salience operate differently.
The scientific literature gives us language for this. Researchers have described phenomena such as psychic numbing, scope insensitivity, and the identifiable victim effect: the tendency for people to feel strongly for one concrete, identifiable person, while becoming emotionally less responsive as suffering is represented by larger and more abstract statistics. Paul Slovic’s work on psychic numbing argues that statistics of mass death often fail to convey the human meaning of the events they describe. Small, Loewenstein, and Slovic similarly showed that people often respond differently to identifiable victims than to statistical victims.
Our cohort gave us a behavioral version of that same problem. People were not ignoring numbers. They were reaching the edge of what numbers could do emotionally.
This matters because many public arguments about casualty data are not really arguments about numbers, even when they appear to be. They are arguments about moral recognition.
One side says: “The number is wrong. It is not 100,000; it is 70,000.”
The other side hears: “You are trying to make this acceptable.”
The first side may believe it is defending factual accuracy. And sometimes it is. Correcting inflated, unsupported, or misleading numbers is important. Bad numbers distort policy, accountability, journalism, history, and public trust. But the second side is often responding to something equally real: the fear that correction is being used as minimization.
This is one of the recurring bottlenecks we saw in discourse around geopolitical conflict. A person posts or reacts to a casualty figure because they are trying to express horror that civilians were harmed at all. Someone else responds by disputing the figure. Technically, the correction may be relevant. Emotionally, it can land as evasion.
The problem becomes sharper when the correction is not accompanied by moral clarity.
If someone says, “It was not 100,000, it was 70,000,” and stops there, many listeners do not hear a commitment to truth. They hear a reduction of suffering. They hear arithmetic being used to shrink the moral claim. In our cohort, attempts to “play down” casualty numbers often produced more emotional intensity than a simple acknowledgment that any credible figure represented a grave harm.
That reaction is not irrational in the simplistic sense. It is socially diagnostic. People are not only evaluating whether the correction is true. They are evaluating why it is being made, what it is being used to imply, and what has been left unsaid.
A more responsible correction sounds different.
It does not say: “No, it was only 70,000.”
It says: “The available reports differ. Some estimates place the figure closer to 70,000, while others report numbers nearer to 100,000. The evidence is contested, and the methodology matters. But even the lower figure represents mass civilian harm, and that should not be minimized.”
That is the difference between accuracy and minimization.
The distinction matters because casualty numbers are never just numbers. They are compressed human realities. They represent people killed, wounded, missing, displaced, traumatized, orphaned, widowed, or permanently changed. Even the word “casualty” needs care, because in many reporting contexts it does not mean only deaths; it can include killed, injured, missing, captured, or otherwise removed from normal life depending on the source and methodology. Precision begins with knowing what the category actually measures.
The irony is that people who care about numbers and people who care about moral recognition often need each other more than they realize.
Without accuracy, moral outrage can become manipulable. It can be exploited by governments, movements, media institutions, or online factions. A false number, even in service of a just cause, gives opponents an easy path to discredit the broader claim. If the figure collapses, people may wrongly assume the underlying harm collapses with it.
But without moral recognition, accuracy becomes sterile. It starts to sound like accountancy at the edge of a mass grave. When people correct casualty numbers without acknowledging the suffering behind them, they should not be surprised when others interpret the correction as indifference.
The responsible posture is not to choose between truth and empathy. It is to hold both at once.
In practice, that means using ranges when the evidence supports ranges. It means saying “confirmed,” “reported,” “estimated,” and “alleged” carefully. It means distinguishing civilian casualties from combatant casualties, deaths from injuries, direct deaths from indirect deaths, and official counts from independent estimates. It means naming uncertainty without weaponizing uncertainty. It means refusing to inflate numbers for emotional effect and refusing to deflate them for political comfort.
Most importantly, it means remembering that the moral threshold is often crossed long before the statistical debate is settled.
A difference between 70,000 and 100,000 is analytically important. It is historically important. It is legally and institutionally important. But for many observers, the primary emotional fact is that the number is already in the tens of thousands. The reaction is not calibrated to the last digit. It is anchored in the recognition that a mass-casualty event has occurred.
This is where large-number cognition becomes politically consequential. People do not merely misunderstand numbers in the abstract. They misunderstand one another through numbers.
One person believes they are saying: “Let’s be precise.”
Another hears: “Let’s care less.”
One person believes they are saying: “This source may be unreliable.”
Another hears: “These victims do not count.”
One person believes they are defending objectivity.
Another hears a refusal to condemn harm.
That is why the emotional context around casualty data is not a distraction from objectivity. It is part of the terrain objectivity has to navigate.
Objectivity is not achieved by pretending numbers are weightless. It is achieved by being honest about what numbers can and cannot do. Numbers can establish scale, compare claims, reveal patterns, expose exaggeration, and discipline emotion. But numbers cannot, by themselves, carry the full moral reality of suffering. At some point, the statistic has to be reconnected to the human beings it represents.
That was one of the most important lessons from the Objectivity AI cohort. The participants were not simply revealing ideological positions. They were revealing civic instincts: how they balanced correction and compassion, certainty and humility, skepticism and denial, moral judgment and evidentiary restraint.
A conventional poll might ask someone whether they support a policy, trust a source, or identify with a political label. But behavior tells a deeper story. Show someone a contested casualty report and watch what they do. Do they ask for sourcing? Do they acknowledge uncertainty? Do they show concern for victims? Do they immediately defend a side? Do they correct the number but ignore the harm? Do they express outrage but ignore methodology? Do they treat uncertainty as a reason for caution, or as permission to dismiss the event entirely?
Those responses are political, but not in the narrow partisan sense. They are political because they reveal how people believe society should reason under moral pressure.
The central lesson is simple: numbers matter, but they do not matter in only one way.
They matter mathematically. A fivefold difference is real. A 43 percent difference is real. A range is not the same as a point estimate. A casualty count is not the same as a death toll. A confirmed number is not the same as an allegation. These distinctions are essential.
But numbers also matter rhetorically. They can clarify or obscure. They can honor victims or flatten them. They can correct the record or provide cover for indifference. The same factual correction can function as truth-seeking in one context and moral evasion in another.
That is uncomfortable, but it is unavoidable.
The answer is not to stop correcting numbers. The answer is to correct numbers in a way that makes the moral premise unmistakable. Accuracy should not be performed as emotional detachment. It should be practiced as a form of respect.
Because when the subject is civilian harm, the question is rarely only, “Which number is correct?”
It is also, “What does this number require us to recognize?”
And sometimes the most important recognition is that even the lower number is already enough to demand seriousness. Enough to demand grief. Enough to demand accountability. Enough to make the argument over the exact figure necessary, but never sufficient.
Large numbers are where human intuition fails. Casualty numbers are where that failure becomes moral.
The task, then, is not merely to teach people to read exponents or compare percentages. It is to build a civic culture capable of holding scale and humanity together: a culture where we can say, with equal seriousness, that the numbers must be right and that the people inside the numbers must not disappear.
References
Feigenson, L., Dehaene, S., & Spelke, E. (2004). Core systems of number. Trends in Cognitive Sciences, 8(7), 307–314. https://doi.org/10.1016/j.tics.2004.05.002
Useful for: Human numerical cognition, approximate number sense, and why large quantities become cognitively compressed.
National Aeronautics and Space Administration. (2023, July 21). How old is the universe? NASA StarChild.
Useful for: The universe’s age, especially the 13.8-billion-year figure used for scale comparisons.
Nieder, A. (2003). The neural basis of the Weber–Fechner law: A logarithmic mental number line. Trends in Cognitive Sciences, 7(7), 305–306. https://doi.org/10.1016/S1364-6613(03)00055-X
Useful for: The idea that numerical perception is compressed/logarithmic rather than naturally linear.
Slovic, P. (2007). “If I look at the mass I will never act”: Psychic numbing and genocide. Judgment and Decision Making, 2(2), 79–95.
Useful for: Psychic numbing, mass suffering, genocide, and why people often fail to emotionally scale with large casualty numbers.
Small, D. A., Loewenstein, G., & Slovic, P. (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102(2), 143–153. https://doi.org/10.1016/j.obhdp.2006.01.005
Useful for: The identifiable victim effect and the difference between emotional response to individual victims versus statistical victims.
Wolfram Research. (2022). Factorial. Wolfram Language & System Documentation Center.
Useful for: The mathematical definition of factorials, supporting the deck-of-cards example. The exact 52! value can be calculated directly from this definition.
Immerman, N. (n.d.). Mass, size, and density of the universe. University of Massachusetts Amherst.
Useful for: A rough order-of-magnitude estimate of atoms/mass in the observable universe. I would treat this as explanatory background, not a primary scientific citation.