This might be a naive question, but I'd like to clear it up.
Do gravitational waves travel at the speed of light?
I know the theory says nothing can travel faster than light. I also know that photons can be seen as quanta or as waves. So my guess is that gravitational waves travel at most at the speed of light.
But do they? Or do they travel slower? Faster? Is there a Doppler effect for GWs?
I ask because I would think ripples in the space-time fabric itself might be a bit different than light waves or other more studied phenomena.
Thank you for this. You linked to the youtube videos but I watch the PBS app on my Roku all the time and it's horrible at recommending what I should watch.
"Watched 'City in the Sky' [a 3-part series about airports and planes]? You'll love Downtown Abbey or The Great British Baking Show!"
I'm always looking for some actually good/educational shows on there, and there are so many great hidden gems, but they're all... hidden.
Set up your youtube account by liking/subscribing to the content you're interested in (even if it's from PBS) and you'll get some okay recommendations.
The speed of light is sometimes referred to as the speed of causality, and it seems like it's more of a fundamental speed limit on the propagation of events or information through space.
IIRC, everything moves through spacetime at c. Things with mass like people, planets, etc, move through the time portion as well as the space portion. As you go faster through space, you travel less through time, though at non-relativistic speeds you don't notice (GPS satellites do have to account for this). Electromagnetic waves have no mass, they don't travel in time, so the entire portion of their travel takes place in space, so we say they travel at the "speed of light."
> Electromagnetic waves have no mass, they don't travel in time, so the entire portion of their travel takes place in space, so we say they travel at the "speed of light."
This part confuses me. If they don't travel in time, how do they have a speed? Light is a type of electromagnetic wave, right? And it takes many years to travel to us from a nearby star.
If we can measure or calculate the time it takes for light from some place to reach us, does that not imply traveling through time?
Another way to think about it is how time affects the object itself.
Photons are completely immutable: while they travel they don't change at all. If a photon were a "smergsboard", it would remain a "smergsboard" for the whole trip.
One of the most interesting ways I've seen of explaining this is to imagine 'spacetime' as a Cartesian space.
You have four axes: X, Y, Z, and time.
EVERYTHING has a speed of 'c', so you use trigonometry and rotations to figure out the components; light, which has a speed of 'c' along the three space axes, then obviously has a speed of '0' along the time axis.
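For concreteness, that heuristic can be sketched numerically. This is a toy illustration of the "everything moves at c" picture, not rigorous relativity; the function name and sample speeds are my own:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def time_rate(v):
    """Rate of 'travel through time' (dtau/dt) for spatial speed v.

    In the 'everything moves at c' picture, the spatial and temporal
    components satisfy (v/c)^2 + (dtau/dt)^2 = 1.
    """
    return math.sqrt(1.0 - (v / C) ** 2)

# At rest, all of the motion is through time:
print(time_rate(0.0))       # 1.0
# A GPS satellite at roughly 3.9 km/s: still extremely close to 1,
# but the tiny deficit matters for the clocks on board.
print(time_rate(3_874.0))
# At light speed, travel through time drops to zero:
print(time_rate(C))         # 0.0
```

The rotation intuition falls out of this: speeding up in space "rotates" some of the fixed budget of c out of the time component.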
----
Now, one interesting application of that knowledge is how they figured out the speed of neutrinos... As I just wrote, if something is travelling at the speed of light, it is 'frozen', never changing...
But 10 years or so ago people figured out that neutrinos change mid-flight. There are 3 (or more... people are unsure yet) 'flavors' of neutrinos, and during tests people noticed that even if you build a machine that generates only one specific flavor, what arrives at the other side is not necessarily that flavor, meaning they changed mid-flight...
But if they change, then they have some speed in 'time', which means their speed in space must be less than that of light.
Right now there are a couple of experiments where people are trying to use the changes in neutrinos to calculate their speed in 'time', and then by elimination figure out their speed in space. I find it quite interesting how people can use math to figure out physics when our instruments aren't precise enough.
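As a rough sketch of why the speed deficit is so hard to measure directly: for a relativistic particle, 1 - v/c is approximately (mc^2)^2 / (2E^2). The mass and energy numbers below are purely illustrative (a ~0.1 eV mass scale at a 1 GeV beam energy), not results from any experiment:

```python
def speed_deficit(mass_ev, energy_ev):
    """Fractional amount by which a relativistic particle's speed
    falls short of c: 1 - v/c ~= (m c^2)^2 / (2 E^2), valid when
    the energy E greatly exceeds the rest energy m c^2."""
    return (mass_ev / energy_ev) ** 2 / 2.0

# Illustrative numbers only: ~0.1 eV neutrino mass, 1 GeV energy.
print(speed_deficit(0.1, 1e9))  # ~5e-21 below c
```

A deficit of parts in 10^21 is far below what any time-of-flight instrument can resolve, which is why the indirect oscillation-based reasoning above is so appealing.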
Photons don't freeze in time in their own reference frame, and one doesn't get to privilege any particular reference frame, including those that are different from the photon's.
No, the "Spacetime Interval" along a null geodesic is 0, but a null geodesic "does not have a Proper Time associated with it". Undefined is not the same as 0. "For lightlike paths, there exists no concept of proper time and it is undefined as the spacetime interval is identically zero."
Tomayto, tomahto. You can't parametrize a null geodesic by proper time, sure, but I don't see anything particularly wrong with calling the arc 'length'/spacetime interval between events along any particle trajectory a proper time, even when it's 0.
(Δs)² = 0

It is wrong to then equate this to some sort of proper time Δτ, find that

Δτ = 0

for a photon, and thus conclude that photons always "experience" zero proper time. No. Remember that Δτ is defined as the time difference between two events as measured by an observer (i.e., an inertial frame) that actually travels between the events. Photons have no reference frames! So the definition of Δτ doesn't apply to null paths.
I disagree: while you cannot have a clock travelling alongside the photon, you can have a family of clocks travelling along timelike trajectories that have the photon trajectory as their limit.
But as I was alluding to with my 'tomayto, tomahto' remark, this is a question of semantics, and we're not really arguing about physics, but labels.
It's fine to be confused, because the idea that "photons don't experience time" is physically meaningless.
If you plug c into the Lorentz transformation you get an infinity, which doesn't tell you anything particularly useful.
There's no physical way to accelerate to light speed, so it's meaningless to make assertions about how the "experience" of travelling at light speed would be different to the (presumably simpler) experience of travelling at < c.
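A quick illustration of that divergence (plain Python; the helper name is mine, and this is just the textbook Lorentz factor):

```python
import math

def gamma(v_over_c):
    """Lorentz factor 1/sqrt(1 - (v/c)^2).

    Grows without bound as v -> c, and is undefined at v = c:
    the transformation simply stops applying there, rather than
    telling you anything about what a photon 'experiences'.
    """
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for f in (0.9, 0.99, 0.999, 0.999999):
    print(f, gamma(f))
# gamma(1.0) raises ZeroDivisionError rather than returning a value.
```

The point is that "plug in v = c" is not an operation the formalism supports, which is the formal counterpart of the claim that "photons experience no time" is physically meaningless.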
The problem is that relativity is a classical theory, and it says nothing about the underlying physical processes of photon creation/destruction and propagation.
Maybe one day a Theory of Quantum Gravity will fix that problem and provide a detailed low-level picture of what actually happens when things move through spacetime. But we're not going to get there for a while.
In the meantime, we'll carry on using concepts like "position" and "time" without really understanding the mechanisms that generate them.
And if that sounds obvious, it really isn't. It's astounding that the universe knows where everything is and where it's going. Not only does it somehow keep track of all those changing spacetime relationships within a self-consistent system, but it also generates the counterintuitive geometry described by relativity.
This sounds a lot like the common "God of the gaps" argument that Hitchens and others describe, in which a deity or deities are supposedly invoked to explain what we do not yet understand.
Yet it is fascinating that (1) any system of thought (including science itself) must rely on axioms; (2) by Gödel's incompleteness theorem, no system of thought can prove its own axioms; and (3) thus it would seem that faith is inescapably required to believe in anything at all.
When evaluating world views, perhaps the best metric is to evaluate which of them requires the least faith.
For my part, when considering the known universe's mere existence, atheism seems to require a lot more faith than theism.
Another way to look at it, is a requirement to accept uncertainty - the existence of unknowns - or indeed "unknowables".
To each their own - but I don't see "There are some things we cannot describe in our system of knowledge" as a particularly strong proof for the existence of God.
Perhaps some of us have been imbued with an unhealthy and naive lust for certainty, in part by an education system that put an emphasis on right and wrong answers rather than on the quest for better questions?
There's a lot of difference between "there are some things we cannot know" and "all systems of belief rest on axioms that must be taken on faith".
To me, the latter is an encouragement to rationally evaluate existing belief systems against each other, since it reduces all of them to a level playing field. From there, one can apply a simple twofold truth test to each system: correspondence to reality and internal consistency.
> atheism seems to require a lot more faith than theism.
There is a world of difference between thinking something caused our universe to exist and perhaps giving it the name "god", and believing in a specific god or specific claims about any god.
1. We give capital letters to the Big Bang; surely we can give a capital letter to the word describing the cause of the universe: God?
2. God is not part of this universe.
3. God, having been the cause of this universe, can be described as all powerful - at least in the same way Big Bang was powerful.
4. God is outside of time.
The above requires no faith at all; to me it's obvious that the God described in points 1-4 is true.
Now the following requires a degree of faith:
5. God is one, indivisible, at least in the same way a quantum (by definition) is indivisible.
6. God, rather than having zero consciousness, is completely conscious, having equal or more consciousness than all the consciousness in this universe.
If God is all powerful by definition of having caused the universe then I find it harder to believe the specific claim that God in fact, is not conscious, even less than a starfish. And when we are talking about God, it's more likely to be all or nothing, so my specific claim is that God is completely conscious, and being outside of time, all-knowing.
Very trivialized: in some sense, you could say that for light itself, there is no time. In the same sense as there is no space for things that do not move (in space).
Think about a wave on a lake. It may appear to be moving in time. The water particles certainly move up and down. But if nothing is in its "path", is the wave really moving? It's actually just there: the wave undulates and that creates the perception of motion, but really the thing you see moving is just a visual effect on the surface of a field the size of the entire lake. A field which is not moving at all.
Photons are similar. You see the peak of the wave moving around, but the wave itself is everywhere and eternal... until other forces get involved anyway.
I'm not sure about this analogy. You can argue that the apparent motion of the wave crests is an illusion being pieced together by our brains when we see the totality of the elliptical movements of water particles at the surface.
But at the moment when you drop a pebble into a pond, there are definitely parts of the surface which are moving and parts which are not, and the influence of the energy you introduced with the pebble can clearly be seen to spread outward over time.
Granted this doesn't map directly onto electromagnetic waves because the mechanisms involved in wave propagation are different.
This picture is incorrect: the electromagnetic wave has a mechanical momentum in the direction of its propagation, which means that something is moving in that direction.
According to the classical electrodynamics - it sure does. From the quantum mechanical point of view, it also does - in the sense that we can always measure it (i.e. it is an observable). The "property" in this case is not so much a particular outcome of such measurement as much as the expectation value; actually, I'm afraid that the use of the word "property" in this context can only lead to confusion as it effectively conflates several different things: the (quantum-mechanical) state, the observable, and the particular value observed.
Basically, to establish time, a measurement has to be taken, either by a human with our units for time, or by interaction with some force or object to establish that "this happened then".
We commonly think of time in the linear time line sense.
It's more accurate to think of it as a big mesh of points of interaction.
So if you reach the speed of light does that mean you'll reach the end of the universe, being that time stops for you and speeds up for everything else? Speaking of which, is a black hole just a window into the end of the universe?
It's more a sci-fi way of expressing things, but yes, sort of. Time dilation becomes infinite at c, so massless particles do not "experience" time. This is the reason that photons on a Feynman diagram are traditionally drawn horizontally.
But the black hole part is actually wrong. Time dilation approaches infinity at the event horizon, not the singularity. So to extend your metaphor the interior of a black hole forever exists "beyond time" from our perspective.
No. Everything has its own worldline through spacetime, and between two events p and q on a worldline through a given spacetime we can measure the interval dS between p and q. When we normalize the interval against a set of coordinates and a chosen metric signature (here (+,+,+,-)) we can have three types of interval: dS^2 = 0 is lightlike, dS^2 > 0 is spacelike, and dS^2 < 0 is timelike.
A concrete example using the Minkowski metric for a set of Cartesian coordinates: dS^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2. If we have a test object that always remains at the (x=0, y=0, z=0) origin of the coordinates, then as the "t" coordinate increases with the passage of time, -c^2 dt^2 is the only nonzero component of dS^2. From t=0 to t=10000 (where t is in, say, seconds) is a perfectly timelike interval. However we vary x, y, and z (measuring the coordinate distances in, say, light-seconds), as long as the changes are small compared to c dt, we will still have a timelike interval. Light itself, conversely, follows a lightlike interval. If we restrict a beam of light to move only on the x axis, then (in light-seconds and seconds, so c=1) we have x=1, t=1; x=2, t=2; x=3, t=3; and so forth; the -c^2 dt^2 term cancels the dx^2 term at each step, so dS^2 = 0.
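The classification above can be sketched in a few lines. Same signature and units as in the example (distances in light-seconds, times in seconds, c=1); the helper names are my own:

```python
def interval_squared(dx, dy, dz, dt, c=1.0):
    """Minkowski interval with signature (+,+,+,-):
    dS^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2."""
    return dx ** 2 + dy ** 2 + dz ** 2 - (c * dt) ** 2

def classify(ds2, eps=1e-12):
    """Label an interval by its sign under the (+,+,+,-) signature."""
    if abs(ds2) < eps:
        return "lightlike"
    return "spacelike" if ds2 > 0 else "timelike"

# Object at rest for 10000 s: only the time term contributes.
print(classify(interval_squared(0, 0, 0, 10_000)))  # timelike
# Light along the x axis: dx matches c*dt at every step.
print(classify(interval_squared(3, 0, 0, 3)))       # lightlike
# Spatial separation too large to bridge at light speed.
print(classify(interval_squared(5, 0, 0, 3)))       # spacelike
```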
But bear in mind here that the Minkowski metric is just one of many known exact solutions to the Einstein Field Equations, and there are many many many known approximate solutions. Moreover, we are free to use arbitrary coordinates. The Minkowski metric looks different in spherical polar coordinates, for example. We are also free to use arbitrary units. We can even use the metric signature (-,-,-,+) if we like. However, when we take all of these into account, we're left with the same distinction based on the interval: they're either lightlike, timelike, or spacelike.
A lightlike worldline is one in which intervals on the worldline are always lightlike; a timelike worldline is one in which intervals on the worldline are always timelike.
We have strong evidence and stronger theoretical reasoning to expect that massless objects will always have lightlike worldlines (and that light itself is massless) while massive objects will always have timelike worldlines.
So:
> Electromagnetic waves have no mass, they don't travel in time, so the entire portion of their travel takes place in space
No, they have lightlike worldlines. An interval between any two points on the wave's worldline will be lightlike. This generally means that changes in the spacelike coordinates will exactly match the change in the timelike coordinate multiplied by the constant factor c. However, under most reasonable choices of coordinates, the "t" coordinate will certainly vary from point to point along its worldline.
However, one has free choice to decide which axis is timelike or spacelike, and different choices may seem like the natural ones to different observers.
In order to cope with these sets of choices we write down the laws of physics in a generally covariant manner. This has been one of the greatest successes of relativity; any proposed theory that cannot be written down in generally covariant form is almost certainly unphysical in some way.
Lastly, the value of "c" is determined empirically, and will vary depending on one's choice of units. Relativists will often use a system of units in which c is set to unity (c=1), for example, in order to simplify the form of equations.
> (GPS satellites do have to account for this)
The theory side of GPS relies upon covariance matrices.
That's a rather impenetrable, buzzword-laden way of saying exactly the same thing as the grandparent post: everything moves through spacetime at c, which is a velocity expressed as a 4-vector of constant length. Increase one component and the others have to decrease to maintain the length.
Put all your velocity into the time component and you can't move in space. Conversely, if you put all of your velocity into the spatial components, you will freeze in time like a photon.
It's not buzzwords, it's jargon. Speaking as a physicist here, it also reeks of someone trying to show off GR101 skills.
It's like a CS guy responding to "hashtables are O(1) lookup" with a wall-of-pedantry about different implementations, complete with complexity analysis by evaluation of recursion equations and whatnot.
Agreed. The level of pedantry when trying to explain a topic doesn't sound like it comes from someone who's internalised the core physics concepts they want to explain. It reminds me of when I was trying to explain from-first-principles classic thermodynamics to a layperson as a second-year student. It didn't go well.
Photons don't freeze in time in their own reference frame, and one doesn't get to privilege any particular reference frame, including those that are different from the photon's.
Photons are gauge bosons and those are tricky because they involve making a choice of gauge. I discuss gauge bosons a bit at https://news.ycombinator.com/item?id=15107372 if you're interested, although you can turn to any number of textbooks or similar sources for formalisms and likely better explanations.
For the same patch of spacetime with "a photon" in it, different observers can calculate different photon numbers and different photon energies.[2] That is to say that these properties are not always conserved under a change of systems of coordinates (trivially, when we have two observers with different observables, we can fix a coordinate system's origin on either of them, but that doesn't make either "right"). Indeed, the properties of the photon that survive such changes are: it locally moves at c, it has no intrinsic mass, but it does have momentum (and thus contributes to the stress-energy-momentum tensor).
The intrinsic mass is the same as the rest mass (a quantity that remains the same in all frames of reference related by Lorentz transformations). The intrinsic masslessness of photons is required for the gauge invariance of the Feynman amplitudes of QED or the Standard Model. More detailed explanation would involve a trip through an explanation of the Ward identity[1] which gets even harder when curved spacetime is in play.
I'm sure you've already discovered that the topic of photons' frames of reference comes up a lot in much harder-science forums than HN, and hopefully you've found a decent treatment of that on e.g. physics.stackexchange.com or physicsforums.com. If you find a decent link, maybe someone (and probably I) would appreciate it if you attach it to this thread because it is likely to come up again someday. :-)
[2] redshifting is the clearest case of photon energy change, and can arise from uniform relativistic motion, relative acceleration, metric expansion, or real gravitation. Extremely relatively accelerated observers will disagree on particle counts generally, with the Unruh effect serving as a partial formalization.
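For the uniform-motion case, the shift is just the longitudinal relativistic Doppler formula, which applies to gravitational waves as well as light. A minimal sketch (the function name is mine):

```python
import math

def doppler_factor(beta):
    """Relativistic longitudinal Doppler factor: ratio of observed to
    emitted frequency for a source receding at speed beta = v/c.
    Negative beta means the source is approaching."""
    return math.sqrt((1 - beta) / (1 + beta))

print(doppler_factor(0.0))   # 1.0 : no shift
print(doppler_factor(0.5))   # ~0.577 : redshifted (receding)
print(doppler_factor(-0.5))  # ~1.732 : blueshifted (approaching)
```

The other mechanisms in the footnote (metric expansion, gravitational redshift) need different formulas, but the observable consequence is the same kind of frequency/energy change.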
We don't use inertial frames of reference in General Relativity because in the presence of real gravity, there are (strictly speaking) none anywhere. [1]
There are however static spacetimes that admit inertial frames of reference. Flat spacetime aka Minkowski spacetime is an example. That is the spacetime of Special Relativity, and in Special Relativity inertial frames of reference are extremely useful. However, the defining feature of flat spacetime is that there is no gravity anywhere in it.
We can talk about coordinate conditions [2] where those generalize a set of activities that pick out a specific choice of coordinates (and consequently an origin for the coordinates), a choice of units, and some other choices that one can freely make.
The photon's "own reference frame" could be specified as for example keeping it at the origin of a set of flat Cartesian coordinates (x=y=z=0=const) and letting it move against the t coordinate. This is unlikely to be useful, and can be made useless with various choices of units. However, one can say conclusively that the photon's spatial coordinate velocity is zero.
However, coordinate velocity isn't physical: it goes away by changing the system of coordinates. For example, in this system, your coordinate velocity is always exactly c. And so is the moon's. And so is the Andromeda galaxy's. But we can see how unphysical that is simply by fixing coordinates with you always at the origin, or the sun always at the origin, and noticing that the only thing moving with a coordinate velocity of c in those systems of coordinates is the photon.
Indeed, in General Relativity comparing velocities is extremely tricky for objects not occupying the same point in spacetime because it is very easy to be misled by what you're being told by the coordinates and the choices "hidden" within them. The usual advice is to avoid such comparisons (cf. Baez [3]).
Nowadays pretty much every relativist will tell you that Special Relativity emerges from General Relativity (the more fundamental theory) as a special case in the limit where gravity is weak, even though historically Special Relativity came first and informed the development of General Relativity. (They'll also probably advise you to calculate using Special Relativity forms of physical formulae where you can do so!)
However, where gravity is non-negligible or where one is tempted to use a broader set of coordinates than e.g. spherical or Cartesian coordinates on a local patch of sufficiently flat spacetime, Special Relativity is simply inappropriate. Intuitions from Special Relativity about how an object moving at exactly light speed (or even extreeeeeeemely close to it over sufficiently long intervals, like with an extragalactic ultra high energy cosmic ray[4]) are likely to be misleading; instead, one should use the toolset of General Relativity.
Unfortunately that toolset requires complicated mathematics. [5]
--
[1] We can define locally inertial frames of reference (LIFs), and the Lorentzian structure of spacetime (four dimensions, three of one sign and one of the opposite sign) guarantees that we can do this in many cases, especially in infinitesimally small regions around a point, or in a small region along a geodesic. (I can explain this further if you are very interested, but it's fairly technical and grinding it down to something suitable for HN may take some iteration. I don't even have a link to a decent explanation for what e.g. Fermi coordinates are, why they're useful, and how to use them :( so maybe I'd be breaking new ground ;) ). Some LIFs can be more extensive when gravity is sufficiently weak in that region: an Earth-based "laboratory frame" in Special Relativity is really just a LIF without admitting it; particle colliders typically don't really have to consider the influence of the gravity of bodies like the Earth, the Moon, the Sun, and so on, even if one has a view of the ocean (and its tides) out of one of the lab's windows.
Can you recommend a book/resource that explains this from first principles and introduces the math involved as well? The books I've read either exclude math altogether or if they don't, they assume that reader already knows and understands all the math that is required for this.
Sean Carroll's "Spacetime and Geometry" assumes you know or are ready to learn some differential calculus and how to read a formula with an integral, but it (maybe a bit steeply) teaches tensors (and some aspects of vectors and scalars) across the first couple of chapters. Carroll provides some (quasi-)samples under the "Lecture Notes" tab, but the book itself has benefited from editing. He also supplies links to alternatives that can be had for free-as-in-beer.
The classic text is "Gravitation" by Misner, Thorne, and Wheeler. It's very dense, but very thorough. The other classic is "General Relativity" by Wald. They don't really include the math background, though; for that you need texts on multivariable calculus.
Check out "Why Does E=MC^2". (I wish I understood it better. I think some of what raattgift was saying is related to the deeper issues, which the book does raise, and which I paraphrased at a very high level.)
You are terrible at explaining things and are correcting someone who actually explained it much better than you, even if he is technically incorrect. Your jargon-laden, overly verbose response is wildly out of place as a correction to a simple layman-level description of something. It's not appropriate to respond to a simple metaphor by slinging general relativity equations; you've probably instantly turned off anyone reading this from your position, and at the end of the day you aren't really saying anything different, you're just trying to sound smart. If you are saying something different, you've utterly failed to communicate it in any reasonable way.
If I get the GP correctly (IANAP and everything), he is pointing out that General Relativity works equally well with many different descriptions of space-time.
The one where everything always move at c, and only the direction of movement changes is one among many possible (and indistinguishable) descriptions, and not a very useful one.
If he understood how to communicate with people at all, he'd have simply said what you just did. He's also mistaken: it is an extremely useful one, because it makes a complex topic clear in a way that explains why c can't be exceeded. All models are wrong; that model is useful because it can be expressed very simply, and it doesn't matter that it's not the only valid way to see things.
> it doesn't matter that it's not the only valid way to see things.
It certainly does though -- the existence of many valid models and simple mappings between them implies a 'deeper model' at play, and putting one particular model above all others as the 'correct' is actually discouraging the reader from getting towards the deeper truth.
If you say 'the sun is stationary and the planets revolve around it' is the only valid description of the solar system, you would be wrong, and you're also making it harder for a person to understand relativity down the line.
> and putting one particular model above all others as the 'correct' is actually discouraging the reader from getting towards the deeper truth.
No one is putting a particular model above all others; one model is simply being used to explain the relationship between c and time, and it's not incorrect just because other models are also valid, as long as they all explain the same relationship between c and time.
> the existence of many valid models and simple mappings between them implies a 'deeper model' at play
Not necessarily. You can have completely different formulations of the same physics. Lagrangian, Hamiltonian, and Newtonian dynamics are different models of classical motion. Does that imply there's a "deeper model" of classical motion? I wouldn't say so.
But the way he communicates eliminates ambiguity and allows the conversation to stay on topic. As soon as you try to express things "very simply", the conversation quickly degrades into an almost meaningless argument about things the participants do not (and, worse yet, do not wish to make a serious effort to) understand - which is exactly what we see here.
The only equation in my comment is the Minkowski metric which is the metric for Special Relativity, and should be familiar to everyone who has done any SR at all.
Moreover, it's just a generalization of the Euclidean metric ds^2 = dx^2 + dy^2 + dz^2, extended with a time term of opposite sign: dS^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2.
That shouldn't really scare anyone who has done Euclidean geometry.
The interesting difference is this: in Euclidean space, straight paths are shorter than curved paths, but in flat spacetime, it is curved paths that are shorter.
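That difference is exactly what resolves the twin paradox. A sketch using the standard proper-time integral for piecewise-constant velocities (assuming flat spacetime and units with c=1; the helper name is mine):

```python
import math

def proper_time(segments, c=1.0):
    """Proper time along a piecewise-constant-velocity path:
    the sum of dt * sqrt(1 - (v/c)^2) over (dt, v) segments,
    where dt is coordinate time and v is spatial speed."""
    return sum(dt * math.sqrt(1 - (v / c) ** 2) for dt, v in segments)

# Stay-at-home twin: a 'straight' worldline, 10 years at rest.
home = proper_time([(10.0, 0.0)])
# Travelling twin: a 'bent' worldline, out 5 years at 0.8c, back 5 years.
traveller = proper_time([(5.0, 0.8), (5.0, 0.8)])
print(home)       # 10.0 years
print(traveller)  # ~6.0 years: the bent path through spacetime is shorter
```

The straight worldline accumulates the most proper time; any detour through space shortens it, which is the opposite of the Euclidean rule for path lengths.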
gnaritas is being pretty hard on you and shouldn't resort to ad-hominem, e.g. saying you are terrible at x. I think you obviously care a lot about this topic and actually have a lot to offer.
I will offer a perspective on this exchange...
I think people who have survived many, many math-based courses often have an immediate and aggressive response to diving into the minutiae of a quantitative topic before they have grokked the intuition behind it. This is a defence mechanism built up from hours and hours of wasted time in lectures where the topic has moved on before the students have really developed the basics and are ready for the detailed stuff.
Hours and hours of wasted life.
When a person with this defence mechanism sees a noobie about to fall into the same horrible cycle, it triggers some aggression. For example: downvoting your post. They are trying to protect the noobs.
So if you are interested in reaching as many people as possible, please don't give up. I think your teaching effectiveness could be improved by finding ways to engage people at their (lower) level of understanding and trying to help them incrementally improve their mental models.
Thanks. That is an interesting perspective, and I understand it.
I hope you don't mind if I pick up your comment as an invitation to go even more meta than you. :-)
> if you are interested in reaching as many people as possible
I'm not sure I am, even if it's "as many people as possible on HN who open this discussion". I'm guessing that most people who will read down a thread on a topic like this have some interest in it, and probably have a little math, or some search-fu, or perhaps even a little physics, but little exposure to General Relativity (one can earn a Ph.D. in physics without ever having to walk through a comma-goes-to-semicolon exercise let alone deal with exceptions to that procedure, but I'm not writing for e.g. solid state physics Ph.D.s here and hopefully they already know how to look beyond an HN thread or Nature News link if they want to know more about SN-BH or similar stellar collisions).
However, I don't want to alienate people on either side of that -- neither the experts nor the enthusiastic-but-allergic-to-mathematical-physics readers.
> please don't give up
Thank you again. If you have any concrete suggestions (now or in some future thread) about how to help engage the latter group, I'll gladly read them.
However,
> before they have grokked the intuition behind it
the problem is that intuitions like "the shortest path between two points is a straight line" are based on Euclidean geometry, which is probably much more often taught rather than discovered by a student sua sponte, although once taught experimental validation is easy. But in Minkowski spacetime, curved paths are shorter. I think that pretty much nobody would have any chance of discovering that feature of spacetime on her or his own, or intuiting it from planar geometry. However, it's easy enough to teach by explaining what a line element is, and what the line element of Minkowski spacetime is. Once that is absorbed and is familiar enough that reading and drawing spacetime diagrams isn't a chore, then one might expect intuitions like "one can resolve the twin paradox by observing that the travelling twin takes a more-curved path through spacetime than the non-travelling twin". But even there, people sometimes stumble on understanding that that statement is demonstrably true under any choice of coordinates, not just ones which hold the non-travelling twin at the spacelike origin (from which the traveller departs and to which the traveller returns) throughout. And even then, where does one's intuition take one when one or both twins experiences significant real gravity?
One option of course is to shrug off opportunities to try to write into words what one would normally describe using a formula. I'm sure that's not what you're suggesting (but others in this topic seem to).
Another is to give a reply that is neither complete nor detailed but which is at least more correct than what it answers. Maybe that helps a little, but I doubt it advances anyone's understanding rather than being memorized as a slogan or factoid.
Yet another might be a pointer to a standard textbook. Since they tend to be chunky and expensive (and I can't even guess about availability at a local public library rather than a major reference library open to the general public), I'm not sure that's so helpful either, unless the pointer is to a pirate scan. :-)
Penultimately, this is unpaid pseudonymous fun. I think ELIx (FSVO x) is a good challenge for the explainer too, especially for extremely abstract topics; otherwise why bother? From this perspective, what's a decent choice for "x" on HN? (We surely can agree that it will be different from "x" on e.g. physics.stackexchange.com; in fact I think that is close to your central point.)
Finally, in comparison to the previous paragraph other models exist, e.g. http://backreaction.blogspot.com/p/talk-to-physicist_27.html - I am reasonably sure Sabine Hossenfelder would be happy to negotiate on publishing a transcript or summary of a conversation on her blog or elsewhere (perhaps even as a comment on HN :) ) and I am even more sure the quality of her or her associates' answers will be better than mine.
I think that your perception of the audience as homogeneous (and like you) is wrong, and I have evidence for that in the form of upvotes and much politer comments than yours.
If you don't care about that, then I think we should agree that nobody profits from you or me replying to one another's HN comments in future.
I didn't like the original comment that "everything moves through spacetime at c". I was trying to write a reply, but then I saw your comment.
I think your comment is too long and too technical, but it's correct and explains why the original comment is wrong. So I upvoted it.
The problem is that because it's too long and too technical, the main ideas get lost and many people will prefer some wrong handwaving.
Some recommendations:
* Try to keep the answer shorter. Link to the Wikipedia article if possible. Perhaps ignore part of the original comment to make the reply shorter.
* Try to avoid too much technical jargon. This is a technical forum. You can assume everyone has Calculus 101 and Algebra 101, and a few more elementary courses, but nobody has advanced knowledge of all the topics.
It's difficult to explain in simple words why I don't like the original comment. The answer is strangely in one of your other comments.
> "Sure, you can always choose useless systems of coordinates."
It's too short! I'd try to add some details: The problem with "everything moves through spacetime at c" is that you must "sum" the squares of the distance "x" measured in the laboratory reference frame and the time measured in the reference frame of the moving spaceship "s" (and fix the units, sorry). So you get a nice property that is only valid if you combine quantities that are measured in different reference frames. But 99.99% of the time, life is much better if you don't mix measurements from different reference frames.
[My version still needs a lot of fixes: fix the units, perhaps replace "reference frame" with "astronaut", add at least a formula, ... I guess that with 1 or 2 more hours I could fix it, but this is a deeply nested thread that nobody will read and I have to go eat ... https://xkcd.com/386/ ]
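For what it's worth, here's the mixed-frame identity behind the slogan as a quick numerical check (units where c = 1; note that dt is lab-frame time while dtau is the moving clock's proper time, which is exactly the frame-mixing complained about above):

```python
import math

def four_velocity_norm(v, c=1.0):
    """Check that c^2 (dt/dtau)^2 - (dx/dtau)^2 = c^2 for any speed v < c.
    dt/dtau = gamma  (lab-frame time per unit of the traveller's proper time)
    dx/dtau = gamma * v  (lab-frame distance per unit proper time)"""
    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    dt_dtau = gamma
    dx_dtau = gamma * v
    return c**2 * dt_dtau**2 - dx_dtau**2

for v in (0.0, 0.5, 0.9, 0.999):
    print(v, four_velocity_norm(v))   # always 1.0, i.e. c^2
```

The "nice property" holds for every speed, but only because each term secretly uses a different frame's measurements.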
> this is a deeply nested thread that nobody will read
I read it.
> too technical
Too technical for the comment or too technical for HN period?
> Link to the Wikipedia article if possible
I do link to things that go into further detail in many of my comments.
However, I think the relevant (English-language) wikipedia articles for this thread are bound to be much longer and more technical. The simple English wikipedia versions, to the extent they even exist, are generally not worth linking to.
The details were already in the replied-to message that people seem to either like or dislike. :-)
> I guess that with 1 or 2 more hours I could fix it
FWIW, I only took a few minutes. Is it likely that anyone would spend an hour or two on any HN comment (especially a down-thread one) as opposed to a blog posting, article or comment elsewhere?
Indeed some of my original motivation is exactly https://xkcd.com/386/ however the point wasn't just to be right, but also to try to advance someone's (not necessarily the original writer's) understanding of a line element and that line elements are meaningful (and in particular typically nonzero) for light.
That's more or less how you work it out on paper, with the hope that when you add it all up it comes out to c (usually normalized to "1", or unity), but that isn't necessarily what is physically happening. There isn't some part of us compensating temporally for a lack of spatial velocity; it's just that when you add up the numbers or draw something like a spacetime diagram, it should come out a certain way.
Gravitational waves are believed to travel at c, the theory says they should travel at c, and we're slowly narrowing in on c in measurements, but our ability to measure gravitational waves is still poor enough that we aren't yet quite sure.
Which is one thing this observation would fix, assuming it's real.
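To give a feel for the arithmetic: if a gravitational-wave signal and an electromagnetic counterpart from the same event arrive within a time gap delta_t from a distance D, and one assumes (simplistically) simultaneous emission, the fractional speed difference is bounded by roughly c * delta_t / D. The numbers below are illustrative placeholders, not measurements:

```python
# Rough bound on |v_gw - c| / c from a coincident GW/EM detection.
# Assumes (simplistically) simultaneous emission at the source.
C = 2.998e8            # speed of light, m/s
MPC = 3.086e22         # metres per megaparsec

def fractional_speed_bound(distance_mpc, delta_t_s):
    """Arrival-time gap divided by total light-travel time."""
    travel_time = distance_mpc * MPC / C
    return delta_t_s / travel_time

# e.g. a hypothetical source at 40 Mpc with a 2-second EM/GW arrival gap:
print(fractional_speed_bound(40, 2.0))   # ~5e-16
```

Because the travel time is millions of years while the arrival gap is seconds, even one coincident detection pins the speeds together to extraordinary precision.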
General Relativity (GR) is a metric theory of gravitation, with one metric to which everything couples.
In GR gravitational waves (GW) have lightlike worldlines. Consequently, a source emitting both electromagnetic and gravitational radiation will have its GWs and EMWs (or more generally its optical image and the direction in which things indicating its gravitational influence point) line up. This has been well-tested observationally, for example by watching the deflection of light from distant objects (like quasars) around Jupiter (whose mass, orbit, and distance from us are all very well characterized).
However, one can write down a bimetric theory of gravitation with different couplings. It's possible to write down a bimetric theory in which gravitational waves move more slowly or more quickly than electromagnetic waves.
It was fairly popular some years ago to take this kind of approach to solving some cosmological problems relating to the homogeneity within the horizon [1]. These were often cast as "variable speed of light", for aesthetic reasons fixing the speed of the gravitational interaction. However, it is perfectly reasonable to call the same models "variable speed of gravitational radiation" fixing the speed of light, as one has many freedoms with respect to coordinate conditions in General Relativity.
The problem is that these "variable speed of gravitational radiation" theories do not match observations of the galaxy-filled parts of the universe that we can see, and also do not match what we see in the Cosmic Microwave Background. (Some bimetric models fail to match the results of laboratory-scale physics experiments too.) Viable bimetric theories thus have the second metric decay in the very very early universe, such that in the galaxy-filled epoch the speeds of light and gravitational radiation are identical, and physics becomes (outside of the very early universe) indistinguishable from its "standard" single-metric, generally covariant General Relativity formulation. Such decaying-bimetric theories are usually designed to do away with cosmic inflation, but it becomes difficult to distinguish between cosmic inflation and viable bimetric-decay models because the observables eventually have to become identical, and the time at which they can differ gets pushed back further as we develop observatories which can resolve objects at ever higher redshifts, or as we get better data on the anisotropies of the CMB.
> we're slowly narrowing in on C in measurements
We should determine c empirically, but we have already done so to exquisite precision.
However, we can also fix c to some exact value (e.g. the CODATA value, or 1) and be mindful of the side effects of doing so. This is, by far, the most common approach; you will be hard-pressed to find any formulation of a physical law which introduces uncertainty into the value of c, although it's certainly doable.
The fixed CODATA value is extremely good. The relative uncertainty in the speed of light is principally driven by the uncertainties in interferometry, which at the time of the 1983 redefinition of the metre was less than 0.1 part per billion (and is now less than a part per trillion, and so for all practical purposes is unimportant at scales of the observable universe).
Finally, one should note that in a general curved spacetime, while the constant factor "c" arises everywhere, it can only be taken as a speed when comparing two objects that co-occupy exactly the same infinitesimal point in spacetime. Comparing the speeds of distant objects is something that one should avoid in General Relativity. However, everywhere in every spacetime, in vacuum conditions one should find the same "c" as the upper limit of relative speeds of objects just as they enter, co-occupy, and exit the same point.
> We should determine c empirically, but we have already done so to exquisite precision.
As interesting as that is, what I meant is that we're narrowing down the speed of gravity -- that is, that it's almost certainly equal to c. Not that we're narrowing down c itself; it's way more convenient to define that as 1.
Since you're here, though... the horizon problem. I can kind of understand the logic that makes it a problem, but...
If you have the same initial conditions everywhere, and the same laws of physics, wouldn't you expect everything we can see of the universe to look similar even if there hasn't been any communication?
That seems like an obvious implication of having deterministic physics, and sure, "same initial conditions" is a big assumption -- but I never see this hypothesis offered.
The tl;dr is that that hypothesis is a (subjectively) boring and (less subjectively) non-Copernican answer.
I'll expand on this.
Let's just accept arguendo that the laws of physics are everywhere the same and in particular that gravitation matched General Relativity from well before photon decoupling to the end of recombination.
You can indeed then take the position that the initial conditions of a hot big bang were extremely finely tuned, with overdensities at photon decoupling (and thus reflected in our CMB) being exactly encoded in the initial values.
However, the trend is to explore mechanisms that allow for much higher (Boltzmann) entropy in the early universe with overdensities evolving from fluctuations.
ETA, I'll steal a line from slide 6 at [1] : "don't explain low entropy by positing even lower entropy".
This trend has been productive in the sense that it has produced several progressive research programmes amenable to empirical tests.
For example, cosmic inflation (eta: in part by allowing the particle horizon and the Hubble horizon to be very different) allows for a much wider set of initial conditions (some with much higher entropy than is implied by maximally finely tuned initial conditions) that could produce our CMB and ultimately galaxies. Cosmic inflation in the broadest sense has produced a number of successful predictions, so work is certain to continue.
Pragmatism, aesthetics, philosophy aside, I have trouble imagining in detail what observatories we would have constructed had late 1990s cosmologists simply pursued a programme of discovering the details of values surfaces at progressively earlier times, with the goal of simply discovering the initial conditions eventually, all without doing much theoretical speculating about what as yet unexplored early values surfaces might contain. What would have succeeded BOOMERANG? Instead, that sort of speculating raises all sorts of interesting questions about the behaviour of matter at extremely high densities and temperatures, how those behaviours might be encoded in a CMB that was not strictly fixed at the hot big bang, and what alternatives to the hot big bang (e.g. a big bounce cf. slides 7- @ [1]) could lead to our galaxy-filled sky.
So you practically hit the nail on the head in recognizing that assumptions about initial conditions is crucial to whether one sees the horizon problem as a problem in the first place. If you don't care about complaints about the apparent non-genericity of the initial conditions, there's no problem; likewise, if you are pretty sure that initial conditions can be highly generic yet lead to the universe we see, then there's also no problem. This is fertile ground for philosophers (and historians) of science. [2]
[2] A quick search leads to Casey McCoy (University of Edinburgh)'s http://jamesowenweatherall.com/wp-content/uploads/2014/10/Wh... section 3 (edit: notably from the bottom half of p 15 where he has some pleasantly difficult questions in parentheses) and the second half of p 18.
I'm with the first one, then. I consider the Kolmogorov complexity of the laws of physics to be important, and initial conditions to be part of said laws, but I don't think it adds much complexity to posit that the initial conditions are very, very simple (and thus low-entropy).
Taking that position also gives you the answer to why we won't see black holes fissioning into outspiralling pairs of neutron stars, or shards of glass spontaneously reassembling into a wineglass that leaps off the floor into someone's hand. The only cost is extremely exquisitely finely placed stress-energy-momentum in the early universe into the infinite past.
If you have no problem with an infinite history of ever lower entropy, then luckily observations to date do not contradict this sort of cosmology, and under time-reversal the "movie" showing the universe crunching into an ever more orderly state forever isn't very shocking other than we arrange our lives with the movie playing the other way, sweeping up broken shards of glass rather than catching rising stemware. Maybe conscious life somewhere else in our Hubble volume arranges their lives in that way, though, unbreaking and unmaking their artefacts and thinking our way of doing things is strange.
Some other questions for you: how far does this cosmology's future grow? Do we get infinite entropy in the infinite future? If we don't have an infinite future, why do we have an infinite ever-more-orderly past? If we have an infinite future, do you suppress fluctuations? Do you wholly suppress Poincaré recurrence? How would you distinguish us here as us here now from a recurrence? Are we a recurrence? etc.
These questions do not seem as amenable to testing with current technology as questions arising from other cosmologies.
Well, let's see. I don't have much time to spend on this, and I'm well aware that I'm an amateur anyway, but...
One point, at least. I see no reason why a theory being difficult to test should count even slightly against the probability of its being true. Sure, it's inconvenient -- and there's something to be said for focusing on ideas that can be tested -- but since when was nature ever constrained to convenience?
As to your other questions...
I'm not sure where the 'infinite past' comes from. There's a single slice of space-time that needs to be constrained; everything else can flow from there, in both directions. If it's a low-entropy constraint then causality would naturally flow away from that slice. The two sides should be mirrors of each other, though.
As to the far future... that's harder.
- My intuition is that, when you're trying to guess at which of many possible universes you actually live in, the amount of "runtime" the theoretical universe spends on computing you (that is, some form of life, at least) should matter. But that's problematic, since it seems to predict higher complexity of the physical laws in exchange for a universe that's denser with life, and it's not what we see.
- More realistically, perhaps, the big rip might be a thing. Recurrence could fail to happen because the far future doesn't allow for structures complex enough to have thoughts, not even by random chance.
I prefer to punt on the question, though. I don't know enough to have an informed opinion.
Let's flip the question and ask what happens if we let the value of c depend on the location in spacetime.
Ellis provides a "short checklist of issues that should be satisfactorily handled by" theories that have a value of c that depends on location in spacetime. https://arxiv.org/abs/astro-ph/0703751
The tl;dr version is that relativistic physical theories will need revising in the face of a location-dependent c; in particular, problems arise because lengths and durations necessarily break when c is not everywhere-and-everywhen identical.
More narrowly, observables like spectral lines are sensitive to ratios involving the quantity hc, which are taken to be constant. Since cosmological redshifting of spectral lines fits into a substantial web of observables related to distance (e.g. angular diameter, surface brightness), we can practically rule out variation in hc out to a very high redshift (z > 10).
You might instead be asking what happens if c is the same everywhere but we've settled on a very slightly wrong value of c. The answer is that this will almost be absorbed into our system of units, in particular in various ratios involving hc. If we change the value of c to the slightly improved value, we also probably slightly change things like the fine structure constant and the electron-to-proton mass ratio, which are dimensionless quantities with ratios involving hc. These dimensionless quantities are good checks on the many ways in which we might measure c empirically.
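As a concrete instance of such a dimensionless ratio involving hc, here's the fine-structure constant assembled from CODATA-style values (rounded values; a sketch, not a metrology claim):

```python
import math

# The fine-structure constant: alpha = e^2 / (4 pi eps0 hbar c).
# Any small change in the adopted value of c would be absorbed into
# dimensionless ratios like this one.
e    = 1.602176634e-19      # elementary charge, C (exact since the 2019 SI)
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J*s
c    = 299792458.0          # m/s (exact by definition of the metre)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)     # ~0.0072973..., ~137.036
```

Observables like spectral lines depend on alpha rather than on c alone, which is why alpha-type ratios make good empirical cross-checks.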
I believe the OP meant that they're narrowing in on being confident that gravitational waves travel at c, not narrowing in on a more precise value of c.
That said, if we did obtain a more precise value of c, the definition that would change is the metre. The second is defined in terms of energy states of a caesium atom.
In none of them can information travel faster than light, but that isn't a satisfactory answer, since one half of an entangled pair still has to "know" what happens to the other in order to give the right result from measurements, even though that doesn't let you send information.
In hidden-variable models, you can argue that the experiment outcome is defined "up-front". In the many-worlds model, both sides have both outcomes but the inconsistent ones "cancel out" as they meet, and pilot-wave interpretations are just many-worlds with one configuration picked out as "real".
But in most of the rest, yes, something travels faster than light. That's a common argument against e.g. collapse interpretations.
They entangle next to each other, and they move apart at, at most, the speed of light. You'll have already paid the price for transferring that bit, so to speak.
Information cannot move faster than the speed of light, period.
It is that question that does not have much meaning.
If you observe one of a pair of entangled particles, you will see one of its possible values. Entanglement only means anything when you compare its value with its pair's value, and that comparison is limited to the speed of light.
So, yes, in a sense quantum entanglement is free of all the causality issues brought by GR. But it does not really exist until the pair can communicate.
How would you do that? You can't force the entangled photon at the other end of the channel to measure in any particular way. Once you've measured yours, the other one will measure the same, yes - but you don't know what yours will be until you measure, and the probability is 1/2 either way.
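A toy simulation of that point (a classical shared-randomness stand-in for the same-basis quantum case; it deliberately does not reproduce Bell-inequality-violating statistics):

```python
import random

def entangled_pair(rng):
    """Toy model: a perfectly correlated pair. This is only a classical
    stand-in for the same-measurement-basis case, not real quantum
    statistics."""
    bit = rng.randint(0, 1)
    return bit, bit

rng = random.Random(0)
alice, bob = zip(*(entangled_pair(rng) for _ in range(10_000)))

# Locally, each side just sees 50/50 noise -- no usable signal.
print(sum(alice) / len(alice))                                # ~0.5
# Only comparing the two records reveals the correlation.
print(sum(a == b for a, b in zip(alice, bob)) / len(alice))   # 1.0
```

Nothing Alice does changes Bob's local statistics, which is the no-signaling point: the correlation only "exists" once the records are brought together.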
The reason this isn't paradoxical is because expansion doesn't have a speed, and the phrase "X is expanding faster than Y" doesn't have a proper meaning.
And recession velocities have units of distance per time and exceed c at the Hubble sphere.
Such coordinate velocities are largely meaningless, though: The more interesting quantity is the relative velocity as evaluated via parallel transport along the trajectory of the photon you use to observe the receding object, which goes to c at the cosmic event horizon.
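For scale, the naive coordinate recession velocity v = H0 * d reaches c at the Hubble sphere; a back-of-envelope sketch with an illustrative H0:

```python
# Where does the naive recession velocity v = H0 * d reach c?
# The H0 value is an illustrative round number (~70 km/s/Mpc).
C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s per Mpc

hubble_sphere_mpc = C_KM_S / H0
print(hubble_sphere_mpc)   # ~4283 Mpc; beyond this, v = H0 * d exceeds c
```

That nothing paradoxical happens past ~4.3 Gpc is exactly the sense in which these coordinate velocities are not physical speeds.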
The most popular theories say that yes, they do travel at the speed of light. Only a coincident detection of a gravitational wave and an electromagnetic counterpart would confirm this (such as from a binary neutron star coalescence; the binary black holes seen so far by LIGO didn't emit any visible EM as far as we know).
Would even a binary neutron star coalescence give us the information we'd need to answer this? Interstellar space is not actually empty. It contains a low density plasma, and so the speed of light in it is slightly slower than the speed of light in a vacuum.
I have no idea whether that plasma would also slow down gravitational waves, but even if it does, it wouldn't necessarily be by the same amount [1].
Maybe from the differences in arrival time of light at different frequencies we could figure out enough about the intervening plasma to work out when the light would have arrived if the space had been empty, and check whether that is when the gravitational waves arrived?
[1] Edit: I have done some Googling. Gravitational waves would not be slowed down by the interstellar plasma. The slowdown of light through a medium depends on the existence of positive and negative charges in the medium that can form dipoles in response to the passing electromagnetic wave. For gravity all the "charges" are positive, so there is no dipole formation.
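The frequency-dependent plasma delay is exactly what pulsar astronomers measure; a sketch of the standard cold-plasma dispersion formula (the dispersion measure and frequencies below are illustrative):

```python
# Cold-plasma dispersion delay between two radio frequencies, the
# standard pulsar-timing formula. DM and frequencies are illustrative.
K_DM = 4.149e3   # dispersion constant, approx., in s * MHz^2 * cm^3 / pc

def dispersion_delay(dm, f_lo_mhz, f_hi_mhz):
    """Extra arrival delay of the lower frequency relative to the higher,
    for a dispersion measure dm in pc/cm^3."""
    return K_DM * dm * (f_lo_mhz**-2 - f_hi_mhz**-2)

# e.g. DM = 100 pc/cm^3 between 400 MHz and 1400 MHz:
print(dispersion_delay(100, 400, 1400))   # ~2.4 s
```

Fitting this 1/f^2 sweep gives the plasma column along the line of sight, which is how one would correct the EM arrival time before comparing it with the gravitational-wave arrival.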
Yea, so the universe says nothing can move faster than a massless particle in a vacuum. The photon is the most famous (hence "the speed of light"), but any massless thing can move at it. So GWs do theoretically move at the speed of light, though checking whether they do requires a lot of analysis and is tricky to figure out. But yes, theoretically they do move at the speed of causality (or the speed at which things happen in the universe unless something slows them down).
I’m a physicist who’s now listened to a ton of LIGO’s founders talk about it.
I really wanted to like PBS SpaceTime. So many interesting subjects and I love physics videos on YouTube. But there's something about Matt O'Dowd that makes them unwatchable for me. Something about the pacing, I just can't stay focused.
And Udiprod doesn't have a habit of making physics vids, but this is my all-time favorite physics video on YouTube, which explains quantum waves better than anything else I've seen - https://youtu.be/p7bzE1E5PMY
All massless particles like photons and gravitons travel at the same speed, the speed of light. Yes, gravitational waves travel at the speed of light, and yes, of course they are affected by doppler shift.
I have the same question, based on the Alcubierre drive math (warp drive), which relies on the idea that space itself can move at any speed.
But I also know that the Alcubierre drive has some weird implications, such as that the interior of the ship is causally disconnected from the warp bubble; basically you can't control it from the inside.
Anyway, I'm curious whether gravitational waves are "space waves"; in that case they wouldn't have a speed limit, since space can move at any speed.
Yes, though one has to go through quite a bit of trouble to actually show that. The big problem is that the speed of light is defined with reference to a background geometry, and gravitational waves are a distortion of this background geometry. (And this has of course not been shown yet; however, this observation, if the rumors are true, would be evidence for it.)
Your intuition is correct, and the question of the speed of gravitational waves is pretty complicated. The usual treatment of gravitational waves is done by linearization of some non-linear equations. This is a very good approximation for the propagation of gravitational waves, since they disturb the space-time only very slightly (for example, the mirrors in the LIGO interferometry experiment get displaced by about a tenth of the diameter of a hydrogen nucleus during the passage of a gravitational wave).
In this linear approximation the gravitational waves are governed by the same wave equation as for electromagnetism (only the spin part is different since the spins of the gravitons and photons are different). Since in the linear approximation we recover the Lorentz symmetry, then the linearized wave equation has to be Lorentz invariant. Then, one can apply results from the representation theory of the Lorentz or Poincare groups; there are two major types of representations: massless and massive. They differ in striking ways when it comes to spin and when it comes to propagation, for example massless particles travel at the speed of light. If you want to have a massive graviton then you need to get the mass somehow from your theory. Einstein's theory predicts a massless graviton, which by the argument above has to travel at the speed of light. We still don't know experimentally if the graviton is massless (but last time I looked at the Particle Data Book there was an upper bound on the mass which was very small).
Now, coming back to why your question is complicated. Remember that in Einstein's theory space-time itself is dynamical. Now suppose you follow the propagation of a gravitational wave. Since this takes some time, we need to take into account the fact that the shape of the space-time itself has changed in the meantime. In such dynamical situations it becomes complicated to even define what the speed of propagation between two points is. One way this becomes important is in cosmological situations. For example, the expansion of the universe stretches distances, and this allows us, for example, to see further than the distance you obtain by multiplying the speed of light by the age of the universe. This being said, you can define the speed locally by studying propagation over very short distances and times, and this will be a constant. However, you need to remember that the global situation is more complicated.
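A numerical sketch of that cosmological point: integrating c/H(z) over redshift in a toy flat LambdaCDM model (round illustrative parameters, radiation neglected) gives a comoving distance far exceeding c times the age of the universe:

```python
import math

# Comoving distance in flat LambdaCDM: D = c * integral_0^z dz' / H(z').
# Parameter values are illustrative round numbers, not a fit.
C_KM_S = 299792.458   # km/s
H0 = 70.0             # km/s/Mpc
OM, OL = 0.3, 0.7     # matter and dark-energy fractions

def H(z):
    return H0 * math.sqrt(OM * (1 + z)**3 + OL)

def comoving_distance_mpc(z, steps=100_000):
    """Midpoint-rule integration of c / H(z') from 0 to z."""
    dz = z / steps
    return C_KM_S * sum(dz / H((i + 0.5) * dz) for i in range(steps))

# The distance to a very high-redshift source greatly exceeds the naive
# (speed of light) x (age of universe) estimate of ~4,200 Mpc.
print(comoving_distance_mpc(1100))   # ~14,000 Mpc, i.e. ~46 Gly
```

The stretching of space during the light's transit is exactly why the present-day distance to the source comes out roughly three times the naive estimate.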
My understanding is that black hole mergers are not expected to have any optical-wavelength emissions, whereas neutron star mergers should have emissions across the electromagnetic spectrum. Is that distinction part of the excitement here?
The matter and light radiated away from black hole (BH) mergers will all come from the accretion discs of each BH. The accretion discs of the BH masses for which LIGO (and Virgo) is most sensitive will generally be fairly sparse, so the emissions will generally be fairly dim.
For a pair of similar-mass neutron stars (NS) that merge into a BH, you can treat the region of spacetime close to the merger as being filled with extremely dense accretion discs. The density of matter leads to very bright emissions, and the available geodesics produce radiation that will escape to infinity rather than being quickly absorbed close to the source (including within the dense matter around the newly formed BH as it settles into an accretion disc). The picture is slightly different where the NS masses are highly unequal.
The Max Planck Institute for Gravitational Physics at Potsdam, Germany has done many numerical simulations of NS mergers. NASA has animated one [1] in which the NSes have substantially different masses:
Barbell-shaped systems of mass (a pair of heavy masses connected via an arbitrarily thin bar; for orbiting stars and BHs we take the limit as the bar goes to zero volume and zero mass) will radiate gravitational waves when they are spun about an axis perpendicular to the bar. These spinning NSes will be radiating gravitational waves of increasing amplitude, and these radiated GWs remove angular momentum from the rotating system, allowing the NSes to move closer to each other. By the start of the video, the GW radiation being emitted should eventually be detectable by instruments at enormous distances; that radiation has allowed the two NSes to reach a critical proximity.
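The barbell picture can be made quantitative with the quadrupole formula for a circular binary (Peters 1964). The masses and separation below are illustrative round numbers for a Hulse-Taylor-like system, treated as circular even though the real orbit is eccentric:

```python
# Quadrupole-formula power radiated in GWs by a circular binary:
#   P = (32/5) * G^4 * (m1*m2)^2 * (m1+m2) / (c^5 * a^5)
# Input values are illustrative round numbers, not measured parameters.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
MSUN = 1.989e30     # solar mass, kg

def gw_power(m1, m2, a):
    """Gravitational-wave luminosity (watts) of a circular binary with
    masses m1, m2 (kg) at separation a (m)."""
    return (32 / 5) * G**4 * (m1 * m2)**2 * (m1 + m2) / (C**5 * a**5)

p = gw_power(1.4 * MSUN, 1.4 * MSUN, 1.95e9)
print(p)   # ~6e23 W
```

That power is tiny next to the stars' mass-energy, yet the steady drain of orbital energy and angular momentum is what drives the slow inspiral toward merger.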
At this point the smaller NS disintegrates under tidal stress, and its additional mass-energy-momentum that then begins falling onto the larger NS causes the larger NS to collapse into a BH. A large proportion of the smaller NS remains in the region near the new BH and is swept up into an accretion structure along with a small proportion of the larger NS that did not get trapped behind the horizon. The accretion structure is briefly very bright, especially in gamma rays, as the matter from the smaller neutron star self-collides until it is entrained into a dense disc. Those early gammas will be visible at enormous distances (e.g. we can pick up extragalactic NS mergers with sky-scanning instruments searching for gamma ray bursts [2]).
By comparison, the smaller of a pair of mass-mismatched black holes cannot disintegrate (everything is stuck within each BH's horizon), and the larger of the pair is likely to have a sparser accretion disc, so the amount of disc-disc collision will be relatively low. The reshaping of the accretion material around the merged BHs may be driven principally by the dynamical spacetime around the BH, with only occasional collisions. While such collisions can be arbitrarily energetic (at the point where their geodesics intersect bits of matter may be moving ultrarelativistically with respect to each other), there are unlikely to be enough such emissions to be reliably detectable at large distances.
[2] There is a timing-coincidence argument about a short gamma ray burst detected in this way and a detection by LIGO & Virgo that is circulating around the rumour mill. Peter Coles blogged some detail https://telescoper.wordpress.com/2017/08/23/ligo-leaks-and-n... This may be an NS-BH merger, in which the picture is again somewhat different, and depends on the density of the BH's accretion disc both in terms of its matter content and in terms of the momentum (e.g. if the NS and BH are counter-rotating, collisions will be more frequent and more energetic).
Mostly, but also because we haven’t “heard” the waves caused by a neutron star interaction before. So experimentally we still have to confirm that they cause GWs.
Of course they produce gravitational waves! The main point of curiosity is likely that unlike black holes, which are "hairless" entities of pure curvature, neutron stars are complex objects with multiple, variable layers... and consequently (I presume) the gravitational waves occasioned by their merger would both carry more information about the objects' composition and likely be more complicated (as well as less powerful at any given distance).
While I would have trouble imagining a quantum of the space-time curvature (the graviton), it is not hard to see that changes in the curvature could propagate in the form of waves. So, while yes, an experimental discovery of these waves is an important event in the history of science, I am left curious as to whether it adds anything to our present understanding of Nature...
Let's start with a graviton as a gauge boson, mediating the gravitational interaction similarly to how the photon is a gauge boson mediating the electromagnetic interaction.
First, let's start with "gauge". We can use as an analogy an air pressure gauge, one that measures relative air pressures. Let's take it to a place with a standard pressure (say, in conditions which are effectively STP) and tune our pressure gauge so that it reads "0" at that air pressure. As we wander to and fro reading our calibrated gauge, we'll see pressures that are zero, positive or negative. If we climb a tall hill we'll see a negative reading. If the temperature drops we'll see a positive reading.
If we contrive things so that we can take a reading of a generalized pressure with our pressure gauge everywhere in the universe at all times, we can construct a (classical) gauge field. In deep space, the gauge will read strongly negative. At the bottom of the ocean, or deep in Jupiter's atmosphere, or at the core of the sun it will read strongly positive. In the early universe, it'll be strongly positive; in the far far distant future away from the black hole that will dominate our patch of de Sitter vacuum, it will read strongly negative. We'll get zero values in some places, like near the Earth's surface through a lot of Earth's history, or in the upper reaches of Jupiter's atmosphere through a lot of its history.
Our choice of "0" is not ideal, because "0" is only rarely the value at any point in our gauge field that permeates all of spacetime. Instead we should set "0" as the value in extragalactic space, because then "0"s will dominate the field (indeed it is possible that all readings will then be non-negative). In effect, when we set our "0" at STP we normalized the gauge field; when we decided instead to set our "0" in extragalactic space, we renormalized it. We could obtain an ideal renormalization if we could sample the whole of spacetime and find the lowest reading of our pressure gauge, but we can certainly get rid of practically all negative values by taking far fewer samples in regions where we think the lowest readings might be.
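As a toy sketch of the normalize/renormalize step above (all numbers and names here are invented purely for illustration): sample the field, then shift the zero to the lowest sampled reading so that nearly all values become non-negative.

```python
import random

# Invented "generalized pressure" readings sampled across spacetime.
readings = [random.gauss(0.0, 1.0) for _ in range(10_000)]
readings.append(-101.3)  # one deep-space-like region with very low pressure

# Normalization at STP: "0" is wherever we calibrated the gauge,
# so readings come out as a mix of positive and negative values.
normalized = list(readings)

# Renormalization: shift "0" to the lowest reading we can find,
# so every value in the sampled field becomes non-negative.
baseline = min(readings)
renormalized = [r - baseline for r in readings]
```

With the shifted zero, the lowest reading is exactly 0 and every other reading is positive, which is the sense in which "0"s (or near-zeros) come to dominate the field.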
Once we have settled on a decent normalization, we could look at the propagation of nonzeros and study their statistics. If they follow the Bose-Einstein statistics, we'd call them "bosons". If they follow the Fermi-Dirac statistics, we'd call them "fermions". If they follow some other statistics, we'd assign them yet another name. (Our choice of generalized "air pressure" probably follows some odd statistics.)
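At this level the two named statistics differ only by a sign in the mean occupation number of a state; a minimal sketch (the function name and parameters are my own):

```python
import math

def occupancy(energy, mu, kT, statistics):
    """Mean occupation number of a single-particle state.

    Bose-Einstein: 1 / (exp(x) - 1); Fermi-Dirac: 1 / (exp(x) + 1),
    where x = (energy - mu) / kT.
    """
    x = (energy - mu) / kT
    sign = -1.0 if statistics == "bose" else 1.0
    return 1.0 / (math.exp(x) + sign)

n_bose = occupancy(1.0, 0.0, 1.0, "bose")    # ~0.58: bosons bunch up
n_fermi = occupancy(1.0, 0.0, 1.0, "fermi")  # ~0.27: capped below 1
```

That flipped sign is the whole story here: the Bose-Einstein denominator can approach zero, so bosons can pile into a single state, while the Fermi-Dirac occupation never exceeds 1.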
Perturbative quantum gravity works something like this. [2]
We have a background spacetime with a metric; we have a gauge that measures the deviation from this metric. It'll be "0" at every point where the arrangement of stress-energy exactly matches the metric, and nonzero elsewhere. We are interested in modelling the gravitational interaction as the arrangement of nonzero values in our field. Patterns of nonzeros around gravitationally interacting matter evolve (under a suitable decomposition of spacetime into 3+1 space and time) and interact like classical waves (made up of many "molecules"). Upon some study we can determine that these waves in our gauge field form patterns with a rotational symmetry of two, which we expect on theoretical grounds too: the metric is a rank-2 tensor field, so particles representing the (change in the) metric field should be spin 2.
Conveniently, in a quantum gauge theory, a spin-2 particle mediates attraction between like charges and repulsion between opposite charges. (Compare with spin 1, where like charges repel and opposite charges attract.) [4] So we can identify the nonzero values in our metric gauge field with gravitons, and this is amenable to study with perturbation theory.
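In this picture the gauge field is just the deviation h = g - eta of the metric g from a fixed background eta; a toy bookkeeping sketch (purely illustrative, no physics library assumed):

```python
# Minkowski background metric eta, signature (-, +, +, +).
eta = [[(-1.0 if i == 0 else 1.0) if i == j else 0.0
        for j in range(4)] for i in range(4)]

def perturbation(g, background=eta):
    """h = g - eta: reads "0" wherever spacetime matches the background."""
    return [[g[i][j] - background[i][j] for j in range(4)]
            for i in range(4)]

# Flat spacetime: the gauge reads zero in every component.
h_flat = perturbation(eta)

# A slightly non-flat region: the gauge picks up the deviation.
g = [row[:] for row in eta]
g[1][1] += 2e-3
h = perturbation(g)
```

Because g and eta are symmetric rank-2 tensors, h is too, which is the origin of the spin-2 assignment mentioned above.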
Unfortunately General Relativity is a non-linear theory, and in our perturbatively quantized gravity, when you have a lot of high-energy gravitons they spawn more gravitons. We would want to apply Wilson's thinking on renormalization and reset our gauge to "0" in a cluster of these high-energy gravitons by finding some suitable ground value in the cluster. This is extremely successful up to a point [1], but as the energies of the gravitons increase we have to take more measurements to find a suitable ground value, and eventually we would have to take an infinite number of measurements to find one. This is what is meant when you read that "gravity is perturbatively non-renormalizable".
There are, as you suggest with "...space-time curvature...", other quantities related to the gravitational interaction that we can turn into a quantum field [3], but most suffer a highly similar fate: in some conditions we have to do an infinite amount of work to make our field values sensible and match observables.
Finally, the gauge field that we built on the metric is fully relativistic and generally covariant, so it works with any system of coordinates, choice of units, slicing of spacetime into 3+1, etc. that we want, up to diffeomorphisms (remembering that we chose a static background spacetime). So even though it gives us useless readings in some regions of a spacetime containing strong gravity, perturbative quantum gravity is a useful and standard tool. However, it is considered an effective theory rather than a candidate for a fundamental theory (barring some unforeseen advancement in renormalization theory), and by implication it undercuts General Relativity's claim to be a fundamental theory too.
--
[1] Relativists tend to define "strong gravity" at this point, since we get correct results from renormalization at any energy lower than it. Strong gravity only appears very close to gravitational singularities (and in the case of black holes, that means well inside the horizon). If we are using the path-integral formalism then we'd find that we have "strong gravity" in this sense in every Feynman diagram containing at least one loop of gravitons.
[4] One might ask, "is there oppositely-gravitationally-charged matter anywhere?" It's a fair question, and people have discussed it seriously. Sabine Hossenfelder has touched on this a few times on her blog, including http://backreaction.blogspot.com/2017/04/why-doesnt-anti-mat... Note that while the photon has no electromagnetic charge, the graviton itself (in perturbative quantum gravity) does carry gravitational charge (this reflects the non-linearity of General Relativity).
"We are working hard to assure that the candidates are valid gravitational-wave events, and it will require time to establish the level of confidence needed to bring any results to the scientific community and the greater public. We will let you know as soon we have information ready to share."
No doubt they will "crest" just before the announcement :-). And opinions will "undulate" over whether or not they are valid results.
Perhaps we just need more drama in science classes to be more inclusive: "Will this acid change the pH level of the solution? Or will its buffering protect it? Find out after the break..."
Potentially this neutron star merger, or ones detected in the future, will result in the formation of a black hole. In that event we may get more data to help refine models of black holes, and those refined models may give us insight into the mechanics of black hole radiation, but it would be a bit indirect and we'd have to get lucky. Also, Hawking radiation is "very slow" and may never be measured directly in astrophysical black holes.
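To put "very slow" in numbers, here is a back-of-the-envelope sketch using the standard textbook estimates for Hawking temperature and evaporation time (SI constants; the specific formulas are not from this thread):

```python
import math

# Physical constants (SI).
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K
M_sun = 1.989e30         # kg

def hawking_temperature(mass_kg):
    """T_H = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def evaporation_time_years(mass_kg):
    """t ~ 5120 pi G^2 M^3 / (hbar c^4), converted to years."""
    seconds = 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)
    return seconds / 3.156e7

T = hawking_temperature(M_sun)     # ~6e-8 K, far below the 2.7 K CMB
t = evaporation_time_years(M_sun)  # ~2e67 years
```

A solar-mass black hole radiates at tens of nanokelvin, colder than the cosmic microwave background, so today it absorbs more than it emits; and its evaporation timescale utterly dwarfs the age of the universe.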
Normally, the scientific community is pretty careful about not revealing results before they're fully baked (press notwithstanding). Seeing how that control has broken down for this incident, is it correct to infer that astronomers have pretty much lost their minds over the possibility of capturing a neutron star merger?
LIGO didn't announce because they want to be sure. But one of the most valuable aspects of LIGO is as a trigger for optical astronomy of transient, time-sensitive events. So, when they see something, they have to tell a wider astronomical community---the other observers who look in non-gravitational channels---though they might not be ready to make the information public. Each observatory comprises hundreds of people. Suddenly your message isn't contained to just the hundreds of people loyal to LIGO, but to thousands of people. Moreover, these observatories tend to be publicly owned and funded, and transparency and modern open-science practices often mean live updating of the status of these observatories. Finally, there are the scientists who have nothing to do with this physics, but fought hard for some observation time and had their scheduled observations interrupted for something high-priority. These people can infer (or are often told directly) why their allocation was preempted.
No. The effect of the waves is so small that it required a gargantuan project to detect the strongest events. The LIGO detector is possibly the most sensitive instrument of any kind humans have made.
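Some rough numbers behind that claim (the ~1e-21 strain is the order of magnitude of the strongest detected events; exact figures vary per event):

```python
h = 1e-21            # dimensionless strain of a strong GW event
arm_length = 4e3     # LIGO arm length, metres

delta_L = h * arm_length    # arm-length change to be sensed: 4e-18 m
proton_radius = 8.4e-16     # metres, approximate charge radius

ratio = delta_L / proton_radius  # arms move by ~1/200 of a proton radius
```

Sensing a length change that is a small fraction of a proton's radius, over a 4 km baseline, is why the project had to be gargantuan.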
Do gravitational waves travel at the speed of light?
I know the theory says nothing can travel faster than light. I also know that photons can be seen as quanta or as waves. So my guess is that gravitational waves travel at most, at the speed of light.
But do they? Or do they travel slower? Faster? Is there a Doppler effect for GWs?
I ask because I would think ripples in the space-time fabric itself might be a bit different than light waves or other more studied phenomena.
Can anyone point me in the right direction?