It starts with Bayes in seventeen-twenty-something.


A Presbyterian minister shambling home in his clerical clothes, thinking in straight lines.


We do not know much about Thomas. He believed his most important work was moral and mathematical together. In *Divine Benevolence*, he argued that the existence of happiness was itself evidence that God probably does exist.


In his mathematical notes, he took on hard practical problems, including the classical geometer's challenge of measuring the Earth. The claim that his estimate was within a tenth of a percent of modern values is colourful but unverified. What we do know is that Bayes was recognised by the Royal Society for his work in fluxions and methods for determining the Earth's size. That already tells you the kind of mind we are dealing with.


Thomas's greatest impact arrived after he did not. After Bayes died in 1761, his friend Richard Price sorted through his papers, found a crisp solution to an inverse-probability problem, edited it, and published it in 1763 as *An Essay towards Solving a Problem in the Doctrine of Chances*. This reversal — inferring from data to belief — underpins everything from medical testing to machine learning today.


It was the same decade in which Mozart's father was carting a prodigy across Europe, Abu Dhabi was founded, Watt improved the steam engine, and James Cook set off to observe the transit of Venus. Colonialism was just getting going. What a time to be alive.


Here is the centre of it:


**P(H | D) = P(D | H) × P(H) / P(D)**


Posterior equals likelihood times prior, normalised by the evidence. For those more on the vibe-maths end of the scale, it codifies: the more we already know, and the better new evidence fits, the better our next guess will be.
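In code, the theorem is a one-liner. Here is a minimal sketch applied to the medical-testing case, with made-up but plausible numbers: a disease with 1% prevalence, a test that catches 99% of true cases but also flags 5% of healthy people.

```python
def posterior(prior, likelihood, false_alarm):
    """P(H | D) = P(D | H) * P(H) / P(D), with the evidence P(D)
    expanded over both hypotheses (H and not-H)."""
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# prior = prevalence, likelihood = sensitivity, false_alarm = false-positive rate
p = posterior(prior=0.01, likelihood=0.99, false_alarm=0.05)
print(round(p, 3))  # 0.167
```

A positive result moves the probability from 1% to about 17%, nowhere near certainty. That counterintuitive gap between "the test is 99% accurate" and "you probably have it" is the whole point of keeping track of priors.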


Bayes lands in the Enlightenment like a clear bell. Laplace hears it, pushes it further, and dreams up his famous thought experiment about an intellect that knows all forces and positions, for whom nothing is uncertain, and for whom the future and past are both visible at once. Beautiful, and also an early warning about mistaking calculation for wisdom.


## Why care now?


Because the newest ideas in the world stand at the end of the road that starts with Bayes.


Large language models feel brain-like because they are very good at retrieving likely continuations from context. They do not literally run Bayes' theorem, but their behaviour is deeply Bayesian. They weight possibilities by how well they fit the story so far. They are still weak at directing attention to the right task. That remains our job.
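The reweighting idea can be shown in a toy sketch. This is nothing like how a transformer actually computes; it is just the Bayesian intuition, with invented numbers: base word frequencies act as priors, and fit to the context so far acts as the likelihood.

```python
# Candidate next words after "I sat fishing by the ..." (all numbers made up).
priors = {"bank": 0.5, "river": 0.3, "loan": 0.2}  # base frequencies
fit    = {"bank": 0.2, "river": 0.7, "loan": 0.1}  # fit to the fishing context

# Weight each candidate by prior * likelihood, then normalise.
unnormalised = {w: priors[w] * fit[w] for w in priors}
total = sum(unnormalised.values())
posterior = {w: unnormalised[w] / total for w in priors}
```

"river" wins despite its lower base frequency, because the context upweights it. That is the sense in which continuation-picking behaves like Bayesian updating.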


## Start with a reasoned argument


Bayes would have hated the instinct to declare. His method begins with a hypothesis and waits for data. Presenting a reasoned argument means testing belief against evidence. That makes for better marketing, clearer politics, and saner product meetings. Argument becomes a process of calibration rather than conquest.


Instead of threading the needle on a 38-minute voice prompt, ask the machine to pose a series of questions that help build a reasoned argument and improve the quality of results. As platforms open up collaborative environments where multiple people and agents can define problems together, this habit will matter more than any single command.


## Provide better context


Every prediction depends on its priors. The better you frame the problem, the more likely the system is to produce sense instead of noise. Prompts work the same way. Specific, well-weighted context yields relevant outcomes; vague questions breed confident rubbish. Whether you are prompting an AI or a person, you get back the quality of context you put in.
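The dependence on framing is easy to make concrete. In this sketch the evidence is held fixed and only the prior changes; the numbers are arbitrary, chosen to show the swing.

```python
def posterior(prior, likelihood, false_alarm):
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# Identical evidence (likelihood 0.8, false-alarm 0.2), three framings.
for prior in (0.01, 0.30, 0.70):
    print(prior, round(posterior(prior, 0.8, 0.2), 2))
```

The same piece of evidence lands at roughly 4%, 63%, or 90% depending on the prior it meets. Context is not decoration; it is the prior.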


This explains why short prompts mostly deliver mediocre results. Talented people who already have a sense of refinement to hand are going to do just fine, for now.


## Bring evidence


If Thomas hadn't kept papers, old mate Richard Price would have had naught to rummage through. If you have papers, that is an advantage. If you have data, use it. The future is already overrun with confident speculation. Evidence is the quiet power that keeps you honest. It is also what turns probabilistic learning into a creative act rather than a bluff. The more the model sees real-world anchors, the less likely it is to hallucinate. Humans work the same way.
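Evidence compounds. A small sketch of sequential updating, with arbitrary numbers: each modestly supportive observation (only 0.7 vs 0.4 in favour) nudges a belief, and five of them together move it from coin-flip to near-conviction.

```python
def update(prior, likelihood, false_alarm):
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

belief = 0.5  # start agnostic
for _ in range(5):  # five independent, mildly supportive observations
    belief = update(belief, likelihood=0.7, false_alarm=0.4)
print(round(belief, 3))  # 0.943
```

No single observation is decisive, but the updates multiply. That is the quiet power of bringing data rather than takes.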


If you have written a blog, Substack, or book, you are at an advantage. These become the priors that let a model write in your voice. Businesses sitting on years of press releases or video content will be able to generate synthetic material that matches brand feel and frameworks. Keeping a brand's head above the slop will become a key challenge.


## Hold multiple perspectives


Bayes' theorem is not about being right once; it is about getting less wrong each time. Holding multiple perspectives accelerates that process. Ask the question, then ask how your customer, your critic, and your competitor might answer it. Setting models with ideological frameworks or curated corpora of content makes answers smarter and teams less tribal. It is also the antidote to echo chambers, algorithmic or otherwise.
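One rough way to picture this: run the same evidence through several priors and compare, rather than committing to one. This is a toy sketch of perspective-blending, not proper Bayesian model averaging, and every number is invented.

```python
def posterior(prior, likelihood, false_alarm):
    evidence = likelihood * prior + false_alarm * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical priors: how likely each perspective thinks the claim is.
perspectives = {"customer": 0.2, "critic": 0.05, "competitor": 0.5}

# Same evidence, three readings of it.
answers = {name: posterior(p, 0.9, 0.1) for name, p in perspectives.items()}
blended = sum(answers.values()) / len(answers)
```

The spread between the three answers is itself information: if the customer, critic, and competitor views diverge wildly, the evidence is not doing the work you thought it was.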


I am reading Clive James's *Cultural Amnesia* at the moment, so of course I deep-researched the works of those he quotes, and now I start my days in conversation with shades of history. Camus has opinions on everything. Asking for your thinking to be reviewed from the perspective of a client or a niggling naysayer can help you address blind spots and builds better thinking with the machine.


## Beware the attention trap


Optimising for engagement is a near-perfect way to destroy trust. It has also been the dominant monetisation model in the Valley for decades now, and the large models coming out of that world share its DNA.


The stickiest content wins the scroll but often loses the argument. Attention without depth becomes addiction. Bayes would tell us that evidence must outweigh emotion, that a few strong priors are better than a thousand hot takes. Optimise for outcomes, not outrage.


The models are already testing how you respond to flattery. This feels like a bigger risk than the tech becoming Skynet, at least for now.


## The exponential run isn't over


Humans have a way of believing they are living at or near the high point of all history. It always feels like late-game Civilization, with only a few techs left on the tree.


AI appears to be an enormous economic driver, and we are only beginning to understand what it changes. At a user level, one of the more interesting developments to watch is the interface moving beyond the command line: the machine building UI around software containers for prompts, new constellations of our interactions in the world.


We will not know that Laplace's demon is in the room until it is too late, which is another way of saying that direction, purpose, and humility are still the scarce assets. Bayes gives us a way to learn from experience without pretending certainty. That is exactly what we need.


The most powerful thing about his theorem is that it scales from the universe to the inbox. It applies equally to physics and product design, to hiring, politics, and creative leadership. Each day we update our priors based on what the world tells us back. That is what real learning is.