
Thought Experiment


This is a story about science fiction...

[Image: Cavendish Laboratory]

Do you know what the trouble with physics is? Are you aware of how much the scientific community is dominated by dogma, doctrine and the status quo? The business world is full of powerful dinosaurs with too much to lose if they are ever dethroned. So why would science be any different?

Let's start with a madness that afflicts nearly every physicist.

Do you know the difference between a model and a theory? A theory is an equation, algorithm or logical thought experiment that explains real-world observations and makes testable predictions.

If I have a theory that explains the arc a ball will trace when I throw it, it should be able to predict where the ball lands. I plug in the wind speed, the weight of the ball, the size of the ball, the strength of gravity on the planet, and the initial force and angle imparted to the projectile, and the theory gives me the expected result. By testing different balls and different throws, the theory can be said to be confirmed insofar as its predictions agree with the measurements, within a margin of error.
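
To make that concrete, here is a minimal sketch in Python of what plugging in the numbers looks like, using the textbook Newtonian range formula and ignoring drag for now (the function name and the sample throw are mine, purely for illustration):

```python
import math

def predicted_range(speed, angle_deg, gravity=9.81):
    """Predict how far a projectile launched from ground level travels,
    using Newtonian mechanics with no air resistance:
    range = v^2 * sin(2 * theta) / g
    """
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / gravity

# A throw at 20 m/s and 45 degrees on Earth:
print(f"{predicted_range(20.0, 45.0):.1f} m")  # ~40.8 m
```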

If I take measurements from various experiments, gather them together in some great big database, and then attempt to infer a relationship between the measurements and the outcome, then I have a model. For example, it's fairly obvious that a heavy ball will not travel as far as a light ball when thrown with the same force. From the data there emerges a statistical correlation between weight and distance, but there is no underlying theory that predicts much beyond this simple association of variables.
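
The model version of the same problem looks something like this sketch: fit a curve to a table of measurements and read predictions straight off the correlation. The numbers below are invented for illustration, and there is no physics anywhere in the code:

```python
import numpy as np

# Hypothetical measurements: ball mass (kg) vs. distance thrown (m),
# all with roughly the same throwing force. Invented data.
masses    = np.array([0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
distances = np.array([63.0, 45.0, 31.5, 22.5, 16.0, 11.0])

# Fit a straight line in log-log space: pure correlation, no theory.
slope, intercept = np.polyfit(np.log(masses), np.log(distances), 1)

def model_distance(mass):
    """Estimate a distance from the fitted correlation alone."""
    return np.exp(intercept + slope * np.log(mass))

print(f"{model_distance(0.5):.1f} m")  # ~28 m, read off the curve
```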

This is the most important distinction between theory and model: theories make useful, testable predictions and give us elegant, mathematically expressible statements about the true nature of the universe, while models only give us clunky, retrofitted attempts to make sense of vast swathes of data.

To predict the flight of a thrown ball, we had to employ the laws of motion, the physics of drag (air resistance) and gravity. Newtonian mechanics tells us about the movement of objects in a frictionless universe. Fluid dynamics tells us about an object travelling through a medium like air or water. Newtonian gravitation (with General Relativity as the deeper theory behind it) tells us about objects acted upon by gravity. By combining these three theories, we can accurately predict where a thrown ball is going to land.
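
Here is one way that combination might look in code: a sketch that integrates the equations of motion with quadratic drag on a sphere (the specific ball, throw and step size are my own assumptions):

```python
import math

def simulate_throw(speed, angle_deg, mass, radius,
                   air_density=1.225, drag_coeff=0.47,
                   gravity=9.81, dt=1e-3):
    """Combine Newton's laws of motion, fluid dynamics (quadratic drag
    on a sphere) and gravity to predict where a thrown ball lands.
    Returns the horizontal distance travelled before hitting the ground.
    """
    area = math.pi * radius ** 2
    k = 0.5 * air_density * drag_coeff * area  # drag constant
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = speed * math.cos(theta), speed * math.sin(theta)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax = -(k / mass) * v * vx            # drag opposes velocity
        ay = -gravity - (k / mass) * v * vy  # gravity plus drag
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

# A 0.15 kg ball of radius 3.5 cm, thrown at 20 m/s and 45 degrees:
print(f"{simulate_throw(20.0, 45.0, 0.15, 0.035):.1f} m")
```

The landing point comes out a little short of the drag-free 40.8 m above, which is exactly what the combined theory leads you to expect.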

However, a model can do a reasonably decent job with the numbers, within a margin of error. Big heavy balls didn't travel very far, and small light balls flew further. No need for all that theory mumbo-jumbo. A model can easily land within the same order of magnitude as the theoretical predictions.

Have you heard of the Standard Model of particle physics?

The Standard Model tells us what goes on at a subatomic scale. Or rather, it attempts to, given vast quantities of experimental data.

Before I tell you any more about the Standard Model, let me let you in on a little secret. It doesn't scale. Yes, that's right. The Standard Model is very successful at the subatomic scale, but when we use it to model the entire universe, it doesn't work. In order to use the Standard Model to explain what is observable in the universe, cosmologists have had to invent three massive kludges for which there is no confirming experimental evidence: inflation, dark matter and dark energy.

The Standard Model concerns itself with the stuff we can see and how things interact with each other. Cosmology, however, has had to invent dark matter (about 27% of the universe, but you can't see it), dark energy (about 68% of the universe, but you can't directly detect it) and inflation (which attempts to explain how we got into this mess in the first place).

Religion is often derided for asking the faithful flock to accept things at face value without evidence, yet the dominant science of physics and cosmology now asks us to believe that over 95% of the universe is made up of stuff that has never been seen, never been measured, is not predicted by any fundamental theory, and was basically dreamt up because our favoured model of the subatomic world doesn't scale.

Now, please don't confuse any of this with quantum mechanics.

Quantum theory is empirically testable. The quantum behaviour of the universe is nearly impossible to visualise, while our day-to-day experience of the world is a deterministic one. When I put down the ball and look away, it is still there when I look back. When I throw the ball, I can know both its momentum and its location, and I can predict where it is going to land in the deterministic world of the macro scale. Things get mighty weird in the quantum world, but that doesn't mean they are not underpinned by good theoretical science.

The problem comes in when we model. The Standard Model has 19 free parameters (26 once neutrino masses and mixings are counted): tweakable numbers fed in from experiments. It is designed to be continually refined as new data become available. The oft-touted defence, that the Standard Model is superbly well tested and accurate, is null and void, because many of those experiments feed back into the model itself. The Standard Model automatically adjusts itself to match experimental observation.

The problem with the Standard Model is that it doesn't make any testable predictions. The tests that we have done have simply improved the model. The experimental measurements change the free constants that are plugged into the model. The model confirms itself, but it doesn't tell us anything we hadn't already experimentally measured.
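
The feedback loop looks something like this toy sketch: a generic model with free parameters that is simply refit every time new measurements arrive (the model shape and the data below are invented; only the pattern matters):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """A toy model with three free parameters, standing in for the
    Standard Model's 19: its shape is fixed, the numbers are not."""
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(0)
x = np.linspace(0, 4, 40)

# First round of "experiments": fit the free parameters to the data.
data_v1 = model(x, 2.5, 1.3, 0.5) + rng.normal(0, 0.05, x.size)
params_v1, _ = curve_fit(model, x, data_v1)

# New, slightly different measurements arrive: simply refit.
data_v2 = model(x, 2.7, 1.1, 0.6) + rng.normal(0, 0.05, x.size)
params_v2, _ = curve_fit(model, x, data_v2)

print(params_v1)
print(params_v2)
```

Of course the model matches the data both times: the data chose the parameters.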

The only true test of the Standard Model would be to scale it up to see if it matches observations from cosmology. It doesn't scale.

Models don't give us fundamental theories that underpin the universe, and allow us to make predictions about previously unobserved phenomena that we can then design experiments to investigate. Models cannot be proven or disproven, because they do not make predictions. Models are unassailable, because they can adapt themselves to include any new experimental data.

The whole era of "shut up and calculate" has been a hindrance to the progress of theoretical physics. When people talk about the staggering accuracy and the number of experiments that confirm the Standard Model, they ignore the fact that the model is only as accurate as the measurements that were used to build it. The model has told us nothing about the fundamental nature of reality.

Now, when the Large Hadron Collider (LHC) detected a bump in its diphoton data at 750 GeV, over 300 papers were published to explain the anomaly. The bump turned out to be nothing more than noise, a statistical glitch. Physicists are writing science fiction - fan fiction - to explain the workings of the universe, but none of it is backed by testable theory that makes useful predictions.

Show me the physics that predicts a discovery in advance, as opposed to smashing protons together and retrofitting the data to a model. The current design of experiments completely contradicts the scientific method.

Until we return to the era of coming up with hypotheses and fundamental theories that make testable predictions, and then building experiments to prove or disprove those theories, we will be stuck with our multi-billion-dollar particle smashers, which do nothing except improve the accuracy of models by a few percentage points, and unearth no useful physics at all.

 
