Even among physicists, when I say that I work in "cavity optomechanics" I tend to earn puzzled looks. The reason is partly that the field itself is very young (~10 years) and therefore little known, but perhaps also that the name itself is recent, since many of the underlying principles are much older. This blog post is meant as an "appetizer" with a little bit of history and motivation for the field. Hopefully, I will be able to supplement it with a more technical introduction soon.

## LIGO

For many, it all started with the "Laser Interferometer Gravitational-Wave Observatory", or LIGO. As the name suggests, LIGO was built to observe gravitational waves using interferometers. As it happens, earlier this year the collaboration succeeded and announced the first direct detection of gravitational waves.

The underlying idea is that a passing gravitational wave distorts spacetime, which causes the distance between two objects to vary. What makes gravitational waves difficult to observe is that the measurements have to be incredibly precise. In LIGO, the arms of the interferometer are four kilometres long, yet the expected variation is of the order of $10^{-18}$ m (that is one attometre). For comparison, a proton is a thousand times as large, with a diameter of $10^{-15}$ m. This leads to all sorts of headaches. For example, one might wonder how the "position of a mirror" can even be defined on a length scale that is way smaller than even its smallest constituents. Or how large the energy of the photons would need to be to resolve variations on an attometre scale (answer: about 1 TeV. That's LHC-scale energy, definitely not healthy for a mirror!).
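As a quick sanity check on that 1 TeV figure (my own back-of-envelope calculation, not a number from the LIGO papers): the energy of a photon whose wavelength matches the attometre scale follows from $E = hc/\lambda$:

```python
# Back-of-envelope: energy of a photon with a 1 attometre wavelength, E = h*c/lambda.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

wavelength = 1e-18                 # 1 attometre, in metres
energy_J = h * c / wavelength      # photon energy in joules
energy_TeV = energy_J / eV / 1e12  # convert to TeV

print(f"{energy_TeV:.2f} TeV")     # roughly 1 TeV, i.e. LHC-scale energy
```

Running this gives about 1.24 TeV, confirming the order of magnitude quoted above.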

On second thought, the issues mentioned above do not pose any actual challenges. The position of the mirror can be defined accurately because the measurement uses plane waves and effectively averages over a large enough cross-section of the mirror surface. Furthermore, the lasers used have a benign wavelength of 1064 nm (source: Wikipedia). The interferometer is set up so that the beams from the two orthogonal arms interfere destructively in the absence of gravitational waves. That way, even a tiny phase shift in one of the beams causes a measurable signal (light when there shouldn't be any).
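To get a feel for how small that phase shift is, here is a toy model of an idealised Michelson interferometer tuned to a dark fringe (a deliberate simplification on my part; the real LIGO detectors add Fabry-Perot arm cavities and power recycling on top of this). The power leaking out of the dark port is $P_\text{out} = P_\text{in}\sin^2(2\pi\,\Delta L/\lambda)$, where $\Delta L$ is the arm-length change:

```python
import math

# Idealised Michelson at a dark fringe: P_out = P_in * sin^2(2*pi*dL/lam),
# where dL is the differential arm-length change (light traverses each arm twice).
lam = 1064e-9  # laser wavelength, m
dL = 1e-18     # gravitational-wave-induced length change, m (1 attometre)

phase = 2 * math.pi * dL / lam          # half the differential round-trip phase
signal_fraction = math.sin(phase) ** 2  # fraction of input power at the dark port

print(f"phase: {phase:.2e} rad, output fraction: {signal_fraction:.2e}")
```

The phase shift comes out around $10^{-12}$ rad, which is why the actual detectors need resonant arm cavities and enormous circulating power to lift the signal above the noise.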

## The Standard Quantum Limit (SQL)

What *is* an issue, however, is that Heisenberg's uncertainty principle gives a lower bound on the precision of a continuous position measurement, which is called the *standard quantum limit*. To understand how it arises, imagine someone told you the trajectory $x(t)$ of a certain particle with mass $m$. This information would immediately give you access to the momentum $p(t)=m\dot x(t)$ of the particle as well. Heisenberg's principle tells us that we cannot know $x$ *and* $p$ at the same time, so there must be some fundamental precision limit on such measurements, which is the SQL.

This is frustrating. But here is a conundrum: Suppose we measure the momentum of the particle instead (and start the measurement at $t=0$). As soon as we start the measurement, the position of the particle will be randomised, again due to Heisenberg's principle. But given $p(t)$, there is no way to recover information about the trajectory $x(t)$! (We could integrate $p(t)$, but that only yields a relative displacement, and if $x(0)$ is unknown, that doesn't tell us anything.)

Why are $x$ and $p$ different? Can we exploit this asymmetry to design a more precise experiment? It turns out we can, but the answer (in a later post) is likely to be slightly more technical.
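The parenthetical point above, that integrating $p(t)$ recovers only a relative displacement, can be made concrete with a small numerical illustration (a toy example of my own, not a model of any real measurement):

```python
import math

# Toy illustration: two particles with identical momentum records p(t) but
# different (unknown) starting positions x(0). Integrating p(t)/m reproduces
# only the *relative* displacement, which is common to both.
m = 1.0
dt = 0.01
steps = 1000
p = [math.cos(0.01 * k) for k in range(steps)]  # the same p(t) for both particles

def trajectory(x0):
    """Reconstruct x(t) from p(t) by Euler integration, given a guess for x(0)."""
    x = [x0]
    for k in range(steps - 1):
        x.append(x[-1] + p[k] / m * dt)
    return x

xa = trajectory(0.0)  # particle that started at x(0) = 0
xb = trajectory(5.0)  # particle that started at x(0) = 5

# The displacements relative to the start agree (up to rounding)...
print(xa[-1] - xa[0], xb[-1] - xb[0])
# ...so p(t) alone cannot distinguish the two trajectories.
```

Both reconstructed trajectories yield the same relative displacement, so without $x(0)$ the momentum record tells us nothing about where the particle actually is.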