Charge conservation would unambiguously be violated, which is why this decay is not expected. The half-life you quote is an experimental lower-bound.
Charge conservation would indeed be violated, which is why this decay is not expected. Dave is mistaken: the half-life they’re referring to is an experimental lower-bound, not an actual expected value.
They are not expected to decay. The half-life they’re thinking of is a lower-bound based on current measurements, not an actual expected half-life.
Not all radio noise is from the CMB. There’s also thermal noise, though this would be minimized too if our hypothetical radio at the end of time is near absolute zero.
One clarification: electric charge, angular momentum, and color charge are conserved quantities, not symmetries. Time translation, however, is a continuous symmetry, and its associated conserved quantity is energy.
Similarly, information isn’t a symmetry, but it is a conserved quantity. So I assume you’re asking if there’s an associated symmetry for it from Noether’s theorem. This is an interesting question: while Noether’s theorem ensures that any continuous symmetry will have a corresponding conserved quantity, the reverse isn’t necessarily true as far as I know. In the case of information conservation, this normally follows naturally from the fact that the laws of physics are deterministic and reversible (Newton’s laws or the Schrödinger equation).
If you insist on trying to find such a symmetry, then you can do so by equating conservation of information with the conservation of probability current in quantum mechanics. This then becomes a math problem: is there a transformation of the quantum mechanical wavefunction (psi) that leaves its action invariant? It turns out there is: the transformation psi -> exp(i*theta)*psi. So it seems the symmetry of the wavefunction with respect to complex phase necessitates the conservation of probability current (i.e. information).
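To spell out the conserved quantity here (just the standard textbook result, nothing specific to this thread): the Schrödinger equation implies a continuity equation for the probability density,

```latex
\frac{\partial |\psi|^2}{\partial t} + \nabla \cdot \mathbf{j} = 0,
\qquad
\mathbf{j} = \frac{\hbar}{2mi}\left(\psi^* \nabla \psi - \psi \nabla \psi^*\right),
```

and both |psi|^2 and j are manifestly unchanged under psi -> exp(i*theta)*psi for constant theta, which is the phase symmetry described above.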
Edit: Looking into it a bit more, Noether’s theorem does work both ways. Also, the Wikipedia page outlines this invariance of the wavefunction with complex phase. In that article, they use it to show conservation of electric current density by multiplying the wavefunction by the particle’s charge, but it seems to me the first thing it shows is conservation of probability current density. If you’re interested in other conserved quantities and their associated symmetries, there’s a nice table on Wikipedia that summarizes them.
I suspect you may be misunderstanding time dilation. From the perspective of a particle, time always passes by at 1 second per second. If you yourself were to travel at relativistic speeds (relative to, say, Earth), your perspective of time wouldn’t change at all. However, observers on Earth would see your “clock” tick slower. That is, anything you do would progress more slowly from their perspective. In the very early Universe, a given particle would see most other particles moving at relativistic speeds, and so would see their “clocks” tick slower. These sorts of relativistic effects would influence interactions between particles during collisions, decay rates, etc., but they are all things we know how to take into account in our models of the early Universe.
The required temperature depends on the mass of the particles you’re considering. You could say photons are always relativistic, so even the photon gas that is the cosmic microwave background is relativistic at 2.7 K. But you’re presumably more interested in massive particles.
If you apply the kinetic theory of gases to hydrogen, you’ll find that the average kinetic energy will reach relativistic levels (taken to be when it becomes comparable to the rest mass energy) around 10^12 K. For the free electrons (since we’ll be dealing with plasmas at any sort of relativistic temperatures), this temperature is around 10^9 K due to the smaller mass of the electron (rough numbers in the sketch below). These temperatures are reached at the cores of newly-formed neutron stars (~10^12 K) [1] and the accretion disks of stellar-mass black holes (~10^9 K) [2], but not at the cores of typical stars.

Regarding time dilation, an individual particle’s clock would tick slower from the perspective of an observer in the center-of-mass frame of the relativistic gas, but I don’t think this would have any noticeable effect on the bulk properties of the gas (except for the decay of any unstable particles). Length contraction would probably affect collision cross-sections, though I haven’t done any calculations for this to say anything specific. One important effect is that the distribution of speeds would follow a Maxwell–Jüttner distribution instead of a Maxwell–Boltzmann distribution, and that collisions between particles could be energetic enough to create particle-antiparticle pairs. This would affect things like the number of particles in the gas, the relationship between temperature and pressure, the specific heat of the gas, etc.
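As a rough back-of-envelope for those thresholds (my own sketch; the exact prefactor depends on the criterion you pick, so treat it as order-of-magnitude only):

```python
# Setting the average kinetic energy (3/2) k_B T equal to the rest-mass energy m c^2
# gives a rough "relativistic temperature" for each particle species.
K_B = 8.617e-5  # Boltzmann constant in eV/K

rest_energy_ev = {
    "proton (hydrogen nuclei)": 938.3e6,  # rest-mass energy in eV
    "electron": 0.511e6,
}

for particle, mc2 in rest_energy_ev.items():
    T = (2.0 / 3.0) * mc2 / K_B
    print(f"{particle}: T ~ {T:.1e} K")

# proton:   ~7e12 K  (the ~10^12 K scale quoted above, to within an order of magnitude)
# electron: ~4e9 K   (the ~10^9 K scale quoted above)
```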
You mention the early history of the Universe in your other comment. You can look through this table on Wikipedia to see the temperature range during each of the epochs of the early Universe, as well as a description of what happened. The temperatures become non-relativistic for electrons at some point during the photon epoch.
This is the first I’ve heard of the effect Mars has on Earth’s Milankovitch cycles (unsurprising, given that the paper is recent and the effect is quite small with a very long period). Earth presumably has a similar effect on Mars, but measuring this would be quite difficult. Keep in mind that we’re able to do this for Earth by analyzing drill cores (that paper uses data from 293 scientific deep-sea drill holes), which we can’t really do for Mars currently. Using other methods, we’ve been able to measure the effects of axial tilt and precession for Mars, but the effect from orbital interactions with Earth would be much more subtle. I’d be surprised if you could find anything on it in the literature.
I also would not expect the Moon to make much of a difference. The Earth-Moon distance is <1% of the Mars-Earth distance even at closest approach, so the Earth-Moon system is essentially a point mass to first order. Additionally, the mass of the Moon is ~1% that of the Earth, so the effect there is quite small as well. As I mentioned, measuring Earth-Mars Milankovitch cycles is already difficult for Earth (we apparently only did so recently) and likely infeasible for Mars (currently), and detecting the contribution from the Moon would be harder still.
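For the curious, plugging in rough published values backs up those two percentages:

```python
# Rough sanity check of the two ratios above (approximate values).
earth_moon_km = 384_400        # mean Earth-Moon distance
mars_earth_min_km = 54.6e6     # approximate minimum Mars-Earth distance
moon_mass_kg = 7.35e22
earth_mass_kg = 5.97e24

print(f"distance ratio: {earth_moon_km / mars_earth_min_km:.2%}")  # ~0.70%
print(f"mass ratio:     {moon_mass_kg / earth_mass_kg:.2%}")       # ~1.2%
```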
Assuming we’re talking about refractive index here, metals technically still have a refractive index despite being reflective (light can penetrate a very short distance through metals). In the UV, the refractive index of mercury is <1 and of course it’s very dense. But that’s probably not going to be useful to you.
For transparent materials, water actually has a lower refractive index than most liquids (around 1.34 in the UV). You can check this website to see if there’s anything better (probably an organic), but I doubt it would be by much.
I don’t know much about 3D resin printing, but I assume you focus an image (in the UV) onto a resin layer to selectively cure it. As you suggest, the presence of a liquid would refract the focusing light rays and change the position of the focal plane. This could in principle be accounted for by changing the distance from the focusing optic, though there could be some (perhaps minor) blurring of the image.
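To put a rough number on that focal-plane shift: in the paraxial approximation, a flat layer of index n and thickness t in a converging beam pushes the focus downstream by roughly t(1 - 1/n). The layer thickness below is a made-up example value, not something from your setup:

```python
# Paraxial estimate of the focal-plane shift caused by focusing through a flat liquid layer:
# the focus moves away from the optic by roughly delta = t * (1 - 1/n).
n = 1.34      # e.g. refractive index of water in the UV (from the comment above)
t_mm = 5.0    # hypothetical liquid-layer thickness in mm

delta_mm = t_mm * (1.0 - 1.0 / n)
print(f"focal shift ~ {delta_mm:.2f} mm")  # ~1.3 mm for these numbers
```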
Depending on what you use on your TV, SmartTube may be an option. It even blocks sponsored segments within YouTube videos.
Most experimental research in matter under extreme pressures is concerned with recreating conditions within the interiors of planets and stars (the latter falls under the field of high energy density physics). The temperatures involved therefore tend to be very high. However, there’s no inherent conflict between high pressures and low temperatures; it’s just that temperature tends to increase when you compress something. Compress an ideal gas, for example, and it will heat up. Let it sit in its compressed state for a while, though, and it will cool back down despite remaining under high pressure (see the quick numbers below).
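Here are the quick numbers, assuming a monatomic ideal gas (e.g. argon) compressed adiabatically by a factor of 10 and then left to cool at fixed volume; the values are just illustrative:

```python
# Adiabatic compression heats an ideal gas, but after it cools back to room temperature
# at the compressed volume, the pressure remains well above ambient.
gamma = 5.0 / 3.0      # heat capacity ratio for a monatomic ideal gas
T1, P1 = 300.0, 1.0    # initial temperature (K) and pressure (atm)
compression = 10.0     # V1 / V2

T2 = T1 * compression ** (gamma - 1.0)  # temperature right after adiabatic compression
P2 = P1 * compression ** gamma          # pressure right after adiabatic compression
P3 = P1 * compression                   # pressure after cooling back to T1 at constant volume (P = nRT/V)

print(f"after compression: T = {T2:.0f} K, P = {P2:.1f} atm")  # ~1390 K, ~46 atm
print(f"after cooling:     T = {T1:.0f} K, P = {P3:.1f} atm")  # 300 K, 10 atm
```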
This is true for solids and liquids too (putting any phase transitions aside), though they are much less compressible. The core of the Earth will eventually cool too, though it’s currently kept at high temperature by the radioactive decay of heavy elements. Diamond anvil cells, however, can reach pressures exceeding those at the center of the Earth in a laboratory setting, and some DACs can even be cooled to cryogenic temperatures. This figure on Wikipedia suggests cryo-DACs can be used to reach pressures up to 350 GPa at cryogenic temperatures. As an example, a quick search turns up a paper (arxiv version) that makes use of a DAC to study media at liquid nitrogen temperatures and pressures up to 10 GPa (~3% the pressure at the center of the Earth). Search around and I’m sure you can find others.
Yes, he’s right that bringing the like poles of two magnets together puts the system in a state of higher potential energy. And, yes, you could use this as an explanation for “why” the magnets repel by invoking the principle of minimum energy. You can even show that this results in a force, as a gradient in the potential energy is mathematically equivalent to a conservative force. I do think, though, that you can give further justification for the principle of minimum energy than he gives in the video, as it follows from the second law of thermodynamics (see Wikipedia article). Regarding the exchange of virtual photons and using this to explain how the electromagnetic force arises: I would avoid this entirely.
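In symbols (with a concrete example that isn’t from the video: two small magnets treated as coaxial dipoles with like poles facing each other, on-axis and in the dipole approximation):

```latex
\mathbf{F} = -\nabla U,
\qquad
U(r) = \frac{\mu_0 m_1 m_2}{2\pi r^3}
\;\Rightarrow\;
F_r = -\frac{dU}{dr} = \frac{3\mu_0 m_1 m_2}{2\pi r^4} > 0 \quad \text{(repulsive)}
```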
One side nitpick though: I wouldn’t say that the energy came from “the chemical bonds in the food [you ate]”, but rather the formation of new bonds as you digest the food. Chemical bonds are states of lower potential energy, so breaking them in the sense of separating the constituent atoms requires energy. It’s just that different bonds can have even lower potential energy and therefore release energy when they’re formed.
N2 is (mostly) inert when it comes to respiration. What your body needs is oxygen and low concentrations of anything that might also be metabolically active. For scuba diving, N2 is used to dilute the oxygen, specifically because of how non-reactive it is. At high partial pressures (i.e. at depth), though, it can result in nitrogen narcosis - helium is sometimes used as the diluent gas instead to mitigate this.
As far as habitability is concerned, atmospheric nitrogen is essential for life on Earth at least, as it’s a major part of the nitrogen cycle (specifically, nitrogen fixation). Without it, we wouldn’t have nitrogen-containing organic compounds like amino acids (and, therefore, proteins), at least not nearly in the same quantities that we currently do. This doesn’t mean it’s essential for life outside Earth, but it is for life as we know it, so its presence should increase our credence (if only a little) that a given planet is habitable. However, when looking for signs of life, it’s better to look for atmospheric signatures that are heavily influenced by life, rather than just those that facilitate it. The oxygen in Earth’s atmosphere was largely produced by life, and so its presence in the atmospheres of other planets would be a good (though not definitive) indication of life.
I’m late to this, but I’d like to bring up something I haven’t seen anyone else mention. But first, some more details regarding what has been discussed:
In most situations, it’s correct to say that EM waves basically don’t interact with one another. You can cross two laser beams, and they’ll just continue on their way without caring that the other one was present. A mathematically equivalent scenario is waves on a string: the propagation of a wave isn’t affected by the propagation of another, even when they overlap. Another way to put this is that they obey the principle of superposition: the total amplitude at any given point on the string is just the sum of the amplitudes of the individual waves at that point. You may want to argue that the waves do interact because there are interference effects, but interference is exactly what you get when they don’t interact, i.e. when the principle of superposition holds.
However, this is only true for so-called linear systems. I won’t delve too deep into the math of what this means, but I think looking at the wave on a string example can give you some intuition. The behavior of waves on a string can be explained mathematically by treating the string as a large number of tiny points connected by springs. If the force on a given point from a neighboring spring is directly proportional (i.e. linear) to the spring displacement (Hooke’s law), then you find that the entire system obeys the wave equation, which is a linear equation. This is the idealized model of a string, and the principle of superposition holds for it perfectly. If, however, the forces acting on points within the string have a non-linear dependence on displacement, then the equation describing the overall motion of the string will be non-linear and the principle of superposition will no longer hold perfectly. In such a case, two propagating waves could interact with one another, as the properties of the wave medium (the “stretchiness” of the string) would be influenced by the presence of a wave. In other words, the stretchiness of the string would change depending on how much it’s stretched (e.g. if a wave is propagating on it), and the stretchiness influences the propagation of waves.
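If you want to see this numerically, here’s a toy sketch of my own (the cubic term is arbitrary, chosen only to break linearity): it propagates two pulses separately and together on a simulated string and checks how well superposition holds.

```python
import numpy as np

def evolve(u0, steps=400, c=1.0, dx=0.01, dt=0.005, beta=0.0):
    """Leapfrog integration of u_tt = c^2 u_xx - beta * u^3 with fixed ends.
    beta = 0 is the ideal (linear) string; beta != 0 adds a toy non-linearity."""
    u_prev, u_curr = u0.copy(), u0.copy()  # start from rest
    r2 = (c * dt / dx) ** 2
    for _ in range(steps):
        lap = np.zeros_like(u_curr)
        lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
        u_next = 2.0 * u_curr - u_prev + r2 * lap - dt**2 * beta * u_curr**3
        u_next[0] = u_next[-1] = 0.0       # fixed ends
        u_prev, u_curr = u_curr, u_next
    return u_curr

x = np.linspace(0.0, 1.0, 101)
pulse_a = np.exp(-((x - 0.3) / 0.05) ** 2)  # pulse starting on the left
pulse_b = np.exp(-((x - 0.7) / 0.05) ** 2)  # pulse starting on the right

for beta in (0.0, 50.0):
    a = evolve(pulse_a, beta=beta)
    b = evolve(pulse_b, beta=beta)
    both = evolve(pulse_a + pulse_b, beta=beta)
    print(f"beta = {beta}: max |u_both - (u_a + u_b)| = {np.max(np.abs(both - (a + b))):.1e}")

# beta = 0:  the deviation is at floating-point round-off (superposition holds).
# beta = 50: the deviation is many orders of magnitude larger (superposition fails).
```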
Something analogous can happen with EM waves, and has been mentioned by others. In so-called non-linear media, the electromagnetic wave equation becomes non-linear and two beams of light (propagating EM waves) can influence one another through the medium. This makes sense when you consider that the optical properties of a material can be changed, even just temporarily*, when enough light is passed through it (for example, by influencing the state of the electrons in the material). It makes sense then that this modification to the optical properties of the material would influence the propagation of other waves through it. In the string example, this is analogous to the string itself being modified by the presence of a wave (even just temporarily) and thereby influencing the propagation of other waves. Such effects require sufficiently large wave amplitudes to be noticeable, i.e. the intensity of the light needs to be high enough to appreciably modify the medium.
What about the case of light propagating in a vacuum? If the vacuum itself is the medium, surely it can’t be altered and no non-linear effects could arise, right? In classical electromagnetism (Maxwell’s equations), this is true. But within quantum electrodynamics (QED), it is possible for the vacuum itself to become non-linear when the strength of the electromagnetic field is great enough. This is known as the Schwinger limit, and reaching it requires extremely high field strengths, orders of magnitude higher than what we can currently achieve with any laser.
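For a sense of scale, here’s the standard expression for the Schwinger critical field with rounded constants plugged in:

```python
# Schwinger critical field E_S = m_e^2 c^3 / (e * hbar), the field strength at which
# the QED vacuum becomes significantly non-linear (electron-positron pairs can be
# pulled out of the vacuum).
m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s
e = 1.602e-19     # elementary charge, C
hbar = 1.055e-34  # reduced Planck constant, J*s

E_schwinger = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger field ~ {E_schwinger:.1e} V/m")  # ~1.3e18 V/m
```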
*I want to emphasize that we’re not necessarily talking about permanent changes to the medium. In the case of waves on a string for example, the string doesn’t need to be stretched to the point of permanent deformation; non-permanent changes to its stretchiness are sufficient.
They do lol
A Dyson swarm is basically just a huge number of solar collectors orbiting the sun. Humanity could put some individual collectors in space if we wanted to, but we don’t have anywhere near enough resources to make a full swarm.
Near-relativistic spacecraft are conceivable and not too far beyond current technology (though they would still require significant advancements). The catch is that they would be very tiny and we would have to send a stream of them to their destination.
Retinal projectors are currently under development, and advanced ones could in principle be higher quality than current VR headsets while having a very small form-factor. Optical metamaterials such as metalenses would be very useful for this, particularly if they could be designed to work at all three RGB wavelengths simultaneously (not easy).
Putting aside the issue that it requires a negative energy density, there’s still the problem that it would necessarily violate causality, which is the reason FTL travel is considered problematic in the first place. Maybe that’s ultimately okay, but it may also mean that warp drives are fundamentally impossible.
Depends on what you consider reasonable. If you’re a researcher, Thorlabs has a couple for <$30k. You could also build your own, but you probably wouldn’t be asking if you had the experience necessary to do this.
If you’re a hobbyist, building your own would be an impressive project that would teach you a lot (look up spontaneous parametric down-conversion, a common way to create entangled pairs). It would also be pricey, as you would need an appropriate laser source (probably a nanosecond pulsed laser), a non-linear crystal like BBO, and a lot of miscellaneous optical components, etc. You can get this stuff second hand online for a lot cheaper than new, but it would still cost a lot for an individual. You would also need to characterize your output to ensure you’re actually getting correlated pairs, which is outside of my expertise.
Can’t explore the scales of the Universe quite as easily with letter paper!
Drew mentions this and points out that it’s a new OS design and will therefore take a long time. He argues that an OS based on the Linux design would be much easier.