Edited by NoelKannagi: 5/26/2015 1:03:08 PM

SCIENCE!

This thread is about science!

  • Everybody's ganging up on OP and I'm just sitting here.

    40 Replies
    • Noelle, this thread is now about science.

      1 Reply
      • I know there is no point in saying this, but it's in Recent, so we probably assume it's new and don't look at the date. That's what I do, at least.

      • K.

      • Oh how I missed you. Now let me suck on that thing.

      • Edited by Atomic Tea: 7/12/2014 12:24:10 PM
        How many times???

      • [b] [/b]

      • I really love science. Quality science thread, Noelle.

        1 Reply
        • I knew you'd be back. Haha.

        • Kugelblitz (astrophysics), from Wikipedia: In theoretical physics, a kugelblitz (German for "ball lightning", not to be confused with actual ball lightning) is a concentration of light so intense that it forms an event horizon and becomes self-trapped: according to general relativity, if enough radiation is aimed into a region, the concentration of energy can warp spacetime enough for the region to become a black hole (although this would be a black hole whose original mass-energy had been in the form of radiant energy rather than matter). In simpler terms, a kugelblitz is a black hole formed from energy as opposed to mass. According to Einstein's general theory of relativity, once an event horizon has formed, the type of mass-energy that created it no longer matters. A kugelblitz is so hot it surpasses the Planck temperature, the temperature of the universe 5.4×10^-44 seconds after the Big Bang. The best-known reference to the kugelblitz idea in English is probably John Archibald Wheeler's 1955 paper "Geons",[1] which explored the idea of creating particles (or toy models of particles) from spacetime curvature. Wheeler's paper on geons also introduced the idea that lines of electric charge trapped in a wormhole throat might be used to model the properties of a charged particle-pair. A kugelblitz is an important plot element in Frederik Pohl's novel Heechee Rendezvous.
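
          A back-of-the-envelope sketch in Python (my own illustration, not from the article): using E = mc^2 together with the Schwarzschild radius r_s = 2Gm/c^2, it estimates how much radiant energy would have to be focused into a proton-sized region (an arbitrary example radius) before that region traps itself behind a horizon.

            # Illustrative sketch (not from the post): the mass-energy needed to form a
            # kugelblitz of a chosen radius. The proton-sized target is an arbitrary example.
            G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
            c = 2.998e8        # speed of light, m/s
            L_sun = 3.828e26   # solar luminosity, W (for scale only)

            r_s = 1e-15                      # target horizon radius: roughly a proton radius, m
            m = r_s * c**2 / (2 * G)         # mass whose Schwarzschild radius is r_s
            E = m * c**2                     # the same requirement expressed as radiant energy

            print(f"mass equivalent : {m:.2e} kg")
            print(f"energy required : {E:.2e} J")
            print(f"  = total solar output for about {E / L_sun:.0f} s")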

          1 Reply
          • Every time I see your account I think of this, and how it relates to the Flood and you. The first line is the bit.

          • Science is the best.

          • You're welcome. *revive*

          • ( ͡° ͜ʖ ͡°)

          • We should have a title for Noelle. Like how we call kinder Assmad Kinder.

          • Well, I just got like-bombed by Goji and Noelle at the same time.

          • Virtual black hole, from Wikipedia: In quantum gravity, a virtual black hole is a black hole that exists temporarily as a result of a quantum fluctuation of spacetime.[1] It is an example of quantum foam and is the gravitational analog of the virtual electron–positron pairs found in quantum electrodynamics. Theoretical arguments suggest that virtual black holes should have mass on the order of the Planck mass, lifetime around the Planck time, and occur with a number density of approximately one per Planck volume.[2]

            The emergence of virtual black holes at the Planck scale is a consequence of the uncertainty relation \(\Delta R_{\mu}\,\Delta x_{\mu} \ge \ell_{P}^{2} = \frac{\hbar G}{c^{3}}\), where \(R_{\mu}\) is the radius of curvature of a small domain of spacetime, \(x_{\mu}\) is the coordinate extent of that domain, \(\ell_{P}\) is the Planck length, \(\hbar\) is the reduced Planck (Dirac) constant, \(G\) is Newton's gravitational constant, and \(c\) is the speed of light. These uncertainty relations are another form of Heisenberg's uncertainty principle at the Planck scale.

            If virtual black holes exist, they provide a mechanism for proton decay. This is because when a black hole's mass increases via mass falling into the hole, and then decreases when Hawking radiation is emitted from the hole, the elementary particles emitted are, in general, not the same as those that fell in. Therefore, if two of a proton's constituent quarks fall into a virtual black hole, it is possible for an antiquark and a lepton to emerge, thus violating conservation of baryon number.[2] The existence of virtual black holes aggravates the black hole information loss paradox, as any physical process may potentially be disrupted by interaction with a virtual black hole.[5]
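
            To put numbers on the Planck-scale quantities the excerpt refers to, here is a short Python sketch of my own (standard definitions, not part of the quoted article) deriving the Planck length, time and mass from ħ, G and c, the same combination ħG/c³ that appears in the uncertainty relation above.

              # Minimal sketch: the Planck-scale quantities at which virtual black holes
              # are expected, computed directly from hbar, G and c.
              import math

              hbar = 1.0546e-34   # reduced Planck constant, J s
              G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
              c = 2.998e8         # speed of light, m/s

              l_P = math.sqrt(hbar * G / c**3)   # Planck length ~ 1.6e-35 m
              t_P = math.sqrt(hbar * G / c**5)   # Planck time   ~ 5.4e-44 s
              m_P = math.sqrt(hbar * c / G)      # Planck mass   ~ 2.2e-8 kg

              print(f"Planck length: {l_P:.3e} m")
              print(f"Planck time:   {t_P:.3e} s")
              print(f"Planck mass:   {m_P:.3e} kg  (~{m_P * c**2:.2e} J as energy)")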

          • Fuzzball (string theory), from Wikipedia: Fuzzballs are theorized by some superstring theory scientists to be the true quantum description of black holes. The theory resolves two intractable problems that classic black holes pose for modern physics: (1) the information paradox, wherein the quantum information bound in in-falling matter and energy entirely disappears into a singularity; that is, the black hole would undergo zero physical change in its composition regardless of the nature of what fell into it; and (2) the singularity at the heart of the black hole, where conventional black hole theory says there is infinite spacetime curvature due to an infinitely intense gravitational field from a region of zero volume. Modern physics breaks down when such parameters are infinite and zero.[1]

            Fuzzball theory replaces the singularity at the heart of a black hole by positing that the entire region within the black hole's event horizon is actually a ball of strings, which are advanced as the ultimate building blocks of matter and energy. Strings are thought to be bundles of energy vibrating in complex ways in both the three physical dimensions of space as well as in compact directions: extra dimensions interwoven in the quantum foam (also known as spacetime foam).

            Physical characteristics: In some types of superstring theory, the basis of fuzzball theory, the extra dimensions of spacetime are thought to take the form of a 6-dimensional Calabi–Yau manifold. Samir Mathur of Ohio State University, with postdoctoral researcher Oleg Lunin, proposed via two papers in 2002 that black holes are actually spheres of strings with a definite volume; they are not a singularity, which the classic view holds to be a zero-dimensional, zero-volume point into which a black hole's entire mass is concentrated.[2] String theory holds that the fundamental constituents of subatomic particles, including the matter particles (e.g. quarks and leptons) and the force carriers (e.g. photons and gluons), are all composed of a one-dimensional string of energy that takes on its identity by vibrating in different modes and/or frequencies. Quite unlike the view of a black hole as a singularity, a small fuzzball can be thought of as an extra-dense neutron star whose neutrons have decomposed, or "melted," liberating the quarks (strings, in string theory) composing them. Accordingly, fuzzballs can be regarded as the most extreme form of degenerate matter.

            Whereas the event horizon of a classic black hole is thought to be very well defined and distinct, Mathur and Lunin further calculated that the event horizon of a fuzzball would, at an extremely small scale (likely on the order of a few Planck lengths),[3] be very much like a mist: fuzzy, hence the name "fuzzball." They also found that the physical surface of the fuzzball would have a radius equal to that of the event horizon of a classic black hole; for both, the Schwarzschild radius of a median-size stellar-mass black hole of 6.8 solar masses is 20 kilometers.

            With classical black holes, objects passing through the event horizon on their way to the singularity are thought to enter a realm of curved spacetime where the escape velocity exceeds the speed of light, a realm that is devoid of all structure. Further, at the singularity, the heart of a classic black hole, spacetime is thought to have infinite curvature (that is, gravity is thought to have infinite intensity), since its mass is believed to have collapsed to zero (infinitely small) volume, where it has infinite density. Such infinite conditions are problematic with known physics because key calculations utterly collapse. With a fuzzball, however, the strings comprising an object are believed to simply fall onto and be absorbed into the surface of the fuzzball, which corresponds to the event horizon: the threshold at which the escape velocity equals the speed of light.

            A fuzzball is a black hole; spacetime, photons, and all else that is not exquisitely close to the surface of a fuzzball are thought to be affected in precisely the same fashion as with a classic black hole featuring a singularity at its center. Classic black holes and fuzzballs differ only at the quantum level; that is, they differ only in their internal composition as well as in how they affect virtual particles that form close to their event horizons (see "Information paradox" below). Fuzzball theory is thought by its proponents to be the true quantum description of black holes.

            [Figure caption: Cygnus X-1, an 8.7-solar-mass black hole only 6,000 light years away in our own Milky Way galaxy, belongs to a binary system along with a blue supergiant variable star. If Cygnus X-1 is actually a fuzzball, its surface has a diameter of 51 kilometers.[4] Credit: ESA (artist's rendition).]

            Since the volume of fuzzballs is a function of the Schwarzschild radius (2,954 meters per solar mass), fuzzballs have a variable density that decreases as the inverse square of their mass (twice the mass is twice the diameter, which is eight times the volume, resulting in one-quarter the density). A typical 6.8-solar-mass fuzzball would have a mean density of 4.0×10^17 kg/m^3.[5] A bit of such a fuzzball the size of a drop of water would have a mass of twenty million metric tons, which is the mass of a granite ball 240 meters in diameter.[6] Though such densities are almost unimaginably extreme, they are, mathematically speaking, infinitely far from infinite density. Although the densities of typical stellar-mass fuzzballs are quite great (about the same as neutron stars[7]), their densities are many orders of magnitude less than the Planck density (5.155×10^96 kg/m^3), which is equivalent to the mass of the universe packed into the volume of a single atomic nucleus.

            Fuzzballs become less dense as their mass increases due to fractional tension. When matter or energy (strings) falls onto a fuzzball, more strings aren't simply added to the fuzzball; strings fuse together, and in doing so, all the quantum information of the in-falling strings becomes part of larger, more complex strings. Due to fractional tension, string tension decreases exponentially as the strings become more complex, with more modes of vibration, relaxing to considerable lengths. The "mathematical beauty" of the string theory formulas Mathur and Lunin employed lies in how the fractional-tension values produce fuzzball radii that precisely equal Schwarzschild radii, which Karl Schwarzschild calculated using an entirely different mathematical technique 87 years earlier.

            Because of this inverse-square relation between mass and density, not all fuzzballs need have unimaginable densities. There are also supermassive black holes, which are found at the center of virtually all galaxies. Sagittarius A*, the black hole at the center of our Milky Way galaxy, is 4.3 million solar masses. If it is actually a fuzzball, it has a mean density that is "only" 51 times that of gold. At 3.9 billion solar masses, a fuzzball would have a radius of 77 astronomical units, about the same size as the termination shock of our solar system's heliosphere, and a mean density equal to that of the Earth's atmosphere at sea level (1.2 kg/m^3).

            Irrespective of a fuzzball's mass and resultant density, the determining factor establishing where its surface lies is the threshold at which the fuzzball's escape velocity precisely equals the speed of light.[8] Escape velocity, as its name suggests, is the velocity a body must achieve to escape from a massive object; for Earth, this is 11.2 km/s. In the other direction, a massive object's escape velocity is equal to the impact velocity achieved by a body that has fallen from the edge of the object's sphere of gravitational influence. Thus, event horizons, for both classic black holes and fuzzballs, lie precisely at the point where spacetime has warped to such an extent that falling bodies just reach the speed of light. According to Albert Einstein's special theory of relativity, the speed of light is the maximum permissible velocity in spacetime. At this velocity, infalling matter and energy impacts the surface of the fuzzball, and its now-liberated, individual strings contribute to the fuzzball's makeup.

            Information paradox: Classic black holes create a problem for physics known as the black hole information paradox, an issue first raised in 1972 by Jacob Bekenstein and later popularized by Stephen Hawking. The information paradox is born out of the realization that all the quantum nature (information) of the matter and energy that falls into a classic black hole is thought to entirely vanish from existence into the zero-volume singularity at its heart. For instance, a black hole that is feeding on the stellar atmosphere (protons, neutrons, and electrons) of a nearby companion star should, if it obeyed the known laws of quantum mechanics, technically grow to be increasingly different in composition from one that is feeding on light (photons) from neighboring stars. Yet the implications of classic black hole theory are inescapable: other than the fact that the two classic black holes would become increasingly massive due to the infalling matter and energy, they would undergo zero change in their relative composition because their singularities have no composition. Bekenstein noted that this theorized outcome violated the quantum mechanical law of reversibility, which essentially holds that quantum information must not be lost in any process. This field of study is today known as black hole thermodynamics. Even if quantum information were not extinguished in the singularity of a classic black hole and somehow still existed, quantum data would be unable to climb up against infinite gravitational intensity to reach the surface of its event horizon and escape.
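
            The density figures quoted above are easy to check. Here is a small Python sketch of my own (standard constants, not from the article) reproducing the two numbers: roughly 4×10^17 kg/m^3 for a 6.8-solar-mass fuzzball and about 51 times the density of gold for Sagittarius A*.

              # Sanity check of the quoted densities: mean density inside the Schwarzschild
              # radius r_s = 2 G M / c^2, which falls off as 1/M^2.
              import math

              G, c = 6.674e-11, 2.998e8
              M_sun = 1.989e30          # kg
              rho_gold = 19300.0        # kg/m^3, for the Sagittarius A* comparison

              def mean_density(mass_kg):
                  r_s = 2 * G * mass_kg / c**2                 # Schwarzschild radius, m
                  volume = 4.0 / 3.0 * math.pi * r_s**3
                  return mass_kg / volume

              stellar = mean_density(6.8 * M_sun)              # the 6.8-solar-mass example
              sgr_a_star = mean_density(4.3e6 * M_sun)         # Sagittarius A*

              print(f"6.8 M_sun fuzzball: {stellar:.1e} kg/m^3")               # ~4e17, as quoted
              print(f"Sgr A*:             {sgr_a_star / rho_gold:.1f} x gold")  # ~51-52x, matching the quoted figure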

          • Tipler cylinder, from Wikipedia: A Tipler cylinder, also called a Tipler time machine, is a hypothetical object theorized to be a potential mode of time travel, although results have shown that a Tipler cylinder could only allow time travel if its length were infinite or with the existence of negative energy (see the discussion of Hawking's proof below). The Tipler cylinder was discovered as a solution to the equations of general relativity by Willem Jacob van Stockum[1] in 1936 and Kornel Lanczos[2] in 1924, but not recognized as allowing closed timelike curves[3] until an analysis by Frank Tipler[4] in 1974. Tipler showed in his 1974 paper, "Rotating Cylinders and the Possibility of Global Causality Violation," that in a spacetime containing a massive, infinitely long cylinder spinning along its longitudinal axis, the cylinder should create a frame-dragging effect. This frame-dragging effect warps spacetime in such a way that the light cones of objects in the cylinder's proximity become tilted, so that part of the light cone then points backwards along the time axis on a spacetime diagram. Therefore, a spacecraft accelerating sufficiently in the appropriate direction can travel backwards through time along a closed timelike curve, or CTC.[4]

            CTCs are associated, in Lorentzian manifolds which are interpreted physically as spacetimes, with the possibility of causal anomalies such as going back in time and potentially shooting your own grandfather, although paradoxes might be avoided using some constraint such as the Novikov self-consistency principle. They have an unnerving habit of appearing in some of the most important exact solutions in general relativity, including the Kerr vacuum (which models a rotating black hole) and the van Stockum dust (which models a cylindrically symmetric configuration of rotating pressureless fluid, or dust).

            An objection to the practicality of building a Tipler cylinder was discovered by Stephen Hawking, who provided a proof that, according to general relativity, it is impossible to build a time machine in any finite region that satisfies the weak energy condition, meaning that the region contains no exotic matter with negative energy. The Tipler cylinder, on the other hand, does not involve any negative energy. Tipler's original solution involved a cylinder of infinite length, which is easier to analyze mathematically, and although Tipler suggested that a finite cylinder might produce closed timelike curves if the rotation rate were fast enough,[5] he did not prove this. But Hawking comments, "it can't be done with positive energy density everywhere! I can prove that to build a finite time machine, you need negative energy."[6] Hawking's proof appears in his 1992 paper on the chronology protection conjecture (though the proof is distinct from the conjecture itself, since the proof shows that classical general relativity predicts a finite region containing closed timelike curves can only be created if there is a violation of the weak energy condition in that region, whereas the conjecture predicts that closed timelike curves will prove to be impossible in a future theory of quantum gravity which replaces general relativity). In the paper, he examines "the case that the causality violations appear in a finite region of spacetime without curvature singularities" and proves that "[t]here will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon."[7]

          • Frame-dragging, from Wikipedia: Einstein's general theory of relativity predicts that non-static, stationary mass–energy distributions affect spacetime in a peculiar way, giving rise to a phenomenon usually known as frame-dragging. The first frame-dragging effect was derived in 1918, in the framework of general relativity, by the Austrian physicists Josef Lense and Hans Thirring, and is also known as the Lense–Thirring effect.[1][2][3] They predicted that the rotation of a massive object would distort the spacetime metric, making the orbit of a nearby test particle precess. This does not happen in Newtonian mechanics, for which the gravitational field of a body depends only on its mass, not on its rotation. The Lense–Thirring effect is very small: about one part in a few trillion. To detect it, it is necessary to examine a very massive object, or to build an instrument that is very sensitive. More generally, the subject of effects caused by mass–energy currents is known as gravitomagnetism, in analogy with classical electromagnetism.

            Frame-dragging effects: Rotational frame-dragging (the Lense–Thirring effect) appears in the general principle of relativity and similar theories in the vicinity of rotating massive objects. Under the Lense–Thirring effect, the frame of reference in which a clock ticks the fastest is one which is revolving around the object as viewed by a distant observer. This also means that light traveling in the direction of rotation of the object will move past the massive object faster than light moving against the rotation, as seen by a distant observer. It is now the best-known frame-dragging effect, partly thanks to the Gravity Probe B experiment. Qualitatively, frame-dragging can be viewed as the gravitational analog of electromagnetic induction.

            Also, an inner region is dragged more than an outer region. This produces interesting locally rotating frames. For example, imagine that a north–south-oriented ice skater, in orbit over the equator of a black hole and rotationally at rest with respect to the stars, extends her arms. The arm extended toward the black hole will be "torqued" spinward due to gravitomagnetic induction ("torqued" is in quotes because gravitational effects are not considered "forces" under GR). Likewise, the arm extended away from the black hole will be torqued anti-spinward. She will therefore be rotationally sped up, in a counter-rotating sense to the black hole. This is the opposite of what happens in everyday experience. There exists a particular rotation rate such that, should she be initially rotating at that rate when she extends her arms, inertial effects and frame-dragging effects will balance and her rate of rotation will not change. Due to the principle of equivalence, gravitational effects are locally indistinguishable from inertial effects, so this rotation rate, at which nothing happens when she extends her arms, is her local reference for non-rotation. This frame is rotating with respect to the fixed stars and counter-rotating with respect to the black hole. This effect is analogous to the hyperfine structure in atomic spectra due to nuclear spin. A useful metaphor is a planetary gear system, with the black hole being the sun gear, the ice skater being a planetary gear and the outside universe being the ring gear. See Mach's principle.

            Another interesting consequence is that, for an object constrained in an equatorial orbit but not in freefall, it weighs more if orbiting anti-spinward and less if orbiting spinward. For example, in a suspended equatorial bowling alley, a bowling ball rolled anti-spinward would weigh more than the same ball rolled in a spinward direction. Note that frame-dragging will neither accelerate nor slow down the bowling ball in either direction; it is not a "viscosity." Similarly, a stationary plumb bob suspended over the rotating object will not list: it will hang vertically. If it starts to fall, induction will push it in the spinward direction.

            Linear frame-dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentum. Although it arguably has equal theoretical legitimacy to the "rotational" effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging (but see Einstein, 1921).[4] Static mass increase is a third effect noted by Einstein in the same paper.[5] The effect is an increase in inertia of a body when other masses are placed nearby. While not strictly a frame-dragging effect (the term frame-dragging is not used by Einstein), it is demonstrated by Einstein to derive from the same equation of general relativity. It is also a tiny effect that is difficult to confirm experimentally.

            Experimental tests of frame-dragging (proposals): In 1976 Van Patten and Everitt[6][7] proposed a dedicated mission aimed at measuring the Lense–Thirring node precession of a pair of counter-orbiting spacecraft to be placed in terrestrial polar orbits with drag-free apparatus. A somewhat equivalent, cheaper version of such an idea was put forth in 1986 by Ciufolini,[8] who proposed to launch a passive, geodetic satellite in an orbit identical to that of the LAGEOS satellite, launched in 1976, apart from the orbital planes, which should have been displaced by 180 degrees: the so-called butterfly configuration. The measurable quantity was, in this case, the sum of the nodes of LAGEOS and of the new spacecraft, later named LAGEOS III, LARES, WEBER-SAT. Although extensively studied by various groups,[9][10] such an idea has not yet been implemented. The butterfly configuration would, in principle, allow measurement not only of the sum of the nodes but also of the difference of the perigees,[11][12][13] although such Keplerian orbital elements are more affected by non-gravitational perturbations like direct solar radiation pressure: the use of active, drag-free technology would be required. Other proposed approaches involved the use of a single satellite to be placed in a near-polar orbit of low altitude,[14][15] but such a strategy has been shown to be unfeasible.[16][17][18] In order to enhance the possibility of being implemented, it has recently been claimed that LARES/WEBER-SAT would be able to measure the effects[19] induced by the multidimensional braneworld model of Dvali, Gabadadze and Porrati[20] and to improve by two orders of magnitude the present-day level of accuracy of the equivalence principle.[21] Iorio claimed these improvements were unrealistic.[22][23]
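
            To get a feel for how small the effect is, here is a rough Python sketch of my own; the standard Lense–Thirring node-precession formula and the approximate Earth/LAGEOS values are assumptions I am adding for illustration, not numbers from the post.

              # Lense-Thirring nodal precession rate for a LAGEOS-like satellite:
              # dOmega/dt = 2 G J / (c^2 a^3 (1 - e^2)^(3/2)). All inputs are approximate.
              import math

              G, c = 6.674e-11, 2.998e8
              J_earth = 5.86e33        # Earth's spin angular momentum, kg m^2/s (approx.)
              a = 1.227e7              # LAGEOS semi-major axis, m (approx.)
              e = 0.0045               # orbital eccentricity

              omega_dot = 2 * G * J_earth / (c**2 * a**3 * (1 - e**2) ** 1.5)   # rad/s

              year = 3.156e7                            # seconds per year
              mas_per_rad = math.degrees(1) * 3600e3    # milliarcseconds per radian
              print(f"{omega_dot * year * mas_per_rad:.0f} mas/yr")   # ~31 mas/yr: the tiny
                                                                      # predicted nodal drift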

          • Antineutron, from Wikipedia: The antineutron is the antiparticle of the neutron, with symbol n̄. It differs from the neutron only in that some of its properties have equal magnitude but opposite sign. It has the same mass as the neutron and no net electric charge, but has opposite baryon number (+1 for the neutron, −1 for the antineutron). This is because the antineutron is composed of antiquarks, while neutrons are composed of quarks. In particular, the antineutron consists of one up antiquark and two down antiquarks.

            Summary of properties (from the article's infobox): classification: antibaryon; composition: 1 up antiquark, 2 down antiquarks; statistics: fermionic; interactions: strong, weak, gravity, electromagnetic; status: discovered (Bruce Cork, 1956); mass: 939.565560(81) MeV/c^2; electric charge: 0; magnetic moment: 1.91 µN; spin: 1/2; isospin: 1/2.

            Since the antineutron is electrically neutral, it cannot easily be observed directly. Instead, the products of its annihilation with ordinary matter are observed. In theory, a free antineutron should decay into an antiproton, a positron and a neutrino in a process analogous to the beta decay of free neutrons. There are theoretical proposals that neutron–antineutron oscillations exist, a process which would occur only if there is an undiscovered physical process that violates baryon number conservation.[1][2][3] The antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork in 1956, one year after the antiproton was discovered.

            Magnetic moment: The magnetic moment of the antineutron is the opposite of that of the neutron.[4] It is 1.91 µN for the antineutron but −1.91 µN for the neutron (relative to the direction of the spin). Here µN is the nuclear magneton.
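
            As a quick consistency check of the composition quoted above, here is a tiny Python sketch of my own (not from the article): adding up the antiquark quantum numbers recovers the antineutron's zero charge and baryon number of −1.

              # Quark content of the antineutron: one up antiquark and two down antiquarks.
              from fractions import Fraction as F

              # (electric charge, baryon number) per quark; antiquarks flip both signs.
              quarks = {"u": (F(2, 3), F(1, 3)), "d": (F(-1, 3), F(1, 3))}

              def anti(q):
                  charge, baryon = quarks[q]
                  return (-charge, -baryon)

              antineutron = [anti("u"), anti("d"), anti("d")]
              charge = sum(q for q, _ in antineutron)
              baryon = sum(b for _, b in antineutron)
              print(charge, baryon)   # 0 -1 -> neutral, baryon number -1, as stated above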

          • World line, from Wikipedia: In physics, the world line of an object is the unique path of that object as it travels through 4-dimensional spacetime. The concept of a "world line" is distinguished from the concept of an "orbit" or "trajectory" (such as an orbit in space or the trajectory of a truck on a road map) by the time dimension, and typically encompasses a large area of spacetime wherein perceptually straight paths are recalculated to show their (relatively) more absolute position states, revealing the nature of special relativity or gravitational interactions. The idea of world lines originates in physics and was pioneered by Hermann Minkowski. The term is now most often used in relativity theories (i.e., special relativity and general relativity).

            However, world lines are a general way of representing the course of events; their use is not bound to any specific theory. Thus, in general usage, a world line is the sequential path of personal human events (with time and place as dimensions) that marks the history of a person,[1] perhaps starting at the time and place of one's birth until one's death. The log book of a ship is a description of the ship's world line, as long as it contains a time tag attached to every position. The world line allows one to calculate the speed of the ship, given a measure of distance (a so-called metric) appropriate for the curved surface of the Earth.

            Usage in physics: In physics, a world line of an object (approximated as a point in space, e.g., a particle or observer) is the sequence of spacetime events corresponding to the history of the object. A world line is a special type of curve in spacetime; equivalently, a world line is a time-like curve in spacetime. Each point of a world line is an event that can be labeled with the time and the spatial position of the object at that time. For example, the orbit of the Earth in space is approximately a circle, a three-dimensional (closed) curve in space: the Earth returns every year to the same point in space. However, it arrives there at a different (later) time. The world line of the Earth is helical in spacetime (a curve in a four-dimensional space) and does not return to the same point.

            Spacetime is the collection of points called events, together with a continuous and smooth coordinate system identifying the events. Each event can be labeled by four numbers: a time coordinate and three space coordinates; thus spacetime is a four-dimensional space. The mathematical term for spacetime is a four-dimensional manifold. The concept may be applied as well to a higher-dimensional space. For easy visualization of four dimensions, two space coordinates are often suppressed. The event is then represented by a point in a Minkowski diagram, which is a plane usually plotted with the time coordinate, say \(t\), upwards and the space coordinate, say \(x\), horizontally.

            As expressed by F. R. Harvey: "A curve M in [spacetime] is called a worldline of a particle if its tangent is future timelike at each point. The arclength parameter is called proper time and usually denoted τ. The length of M is called the proper time of the worldline or particle. If the worldline M is a line segment, then the particle is said to be in free fall."[2]

            A world line traces out the path of a single point in spacetime. A world sheet is the analogous two-dimensional surface traced out by a one-dimensional line (like a string) traveling through spacetime. The world sheet of an open string (with loose ends) is a strip; that of a closed string (a loop) is a tube. Once the object is not approximated as a mere point but has extended volume, it traces out not a world line but rather a world tube.
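
            Since the excerpt defines proper time as the arclength parameter along a world line, here is a short Python sketch of my own (the orbital-speed value is an assumed, approximate figure) estimating how much less proper time Earth's helical world line accumulates in a year than a world line at rest; only the special-relativistic term is included.

              # Proper time along a world line at constant speed v:
              # d(tau) = dt * sqrt(1 - v^2/c^2). Gravitational time dilation is ignored.
              import math

              c = 2.998e8          # m/s
              v = 29.8e3           # Earth's mean orbital speed, m/s (approx.)
              year = 3.156e7       # coordinate time elapsed, s

              tau = year * math.sqrt(1 - (v / c) ** 2)
              print(f"proper time deficit over one orbit: {year - tau:.3f} s")   # ~0.16 s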

          • Quantum teleportation, from Wikipedia: Quantum teleportation is a process by which quantum information (e.g. the exact state of an atom or photon) can be transmitted (exactly, in principle) from one location to another, with the help of classical communication and previously shared quantum entanglement between the sending and receiving locations. Because it depends on classical communication, which can proceed no faster than the speed of light, it cannot be used for superluminal transport or communication of classical bits. It also cannot be used to make copies of a system, as this would violate the no-cloning theorem. Although the name is inspired by the teleportation commonly used in fiction, current technology provides no possibility of anything resembling the fictional form of teleportation. While it is possible to teleport one or more qubits of information between two (entangled) atoms,[1][2][3] this has not yet been achieved between molecules or anything larger.

            One may think of teleportation either as a kind of transportation or as a kind of communication; it provides a way of transporting a qubit from one location to another without having to move a physical particle along with it. The seminal paper[4] first expounding the idea was published by C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres and W. K. Wootters in 1993.[5] Since then, quantum teleportation has been realized in various physical systems. Presently, the record distance for quantum teleportation is 143 km (89 mi) with photons,[6] and 21 m with material systems.[7] In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported.[8] On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation; quantum teleportation of data had been done before, but with highly unreliable methods.[9][10]

            Non-technical summary: It is known, from axiomatizations of quantum mechanics (such as categorical quantum mechanics), that the universe is fundamentally composed of two things: bits and qubits.[11][12] Bits are units of information and are commonly represented using zero or one, true or false. These bits are sometimes called "classical" bits, to distinguish them from quantum bits, or qubits. Qubits also encode a type of information, called quantum information, which differs sharply from "classical" information. For example, a qubit cannot be used to encode a classical bit (this is the content of the no-communication theorem). Conversely, classical bits cannot be used to encode qubits: the two are quite distinct and not inter-convertible. Qubits differ from classical bits in dramatic ways: they cannot be copied (the no-cloning theorem) and they cannot be destroyed (the no-deleting theorem).

            Quantum teleportation provides a mechanism for moving a qubit from one location to another without having to physically transport the underlying particle that the qubit is normally attached to. Much as the invention of the telegraph allowed classical bits to be transported at high speed across continents, quantum teleportation holds the promise that one day qubits could be moved likewise. However, as of 2013, only photons and single atoms have been teleported; molecules have not, nor does this even seem likely in the upcoming years, as the technology remains daunting. Specific distance and quantity records are stated above.

            The movement of qubits does require the movement of "things"; in particular, the actual teleportation protocol requires that an entangled quantum state, or Bell state, be created and its two parts shared between two locations (the source and destination, or Alice and Bob). In essence, a certain kind of "quantum channel" between two sites must be established first, before a qubit can be moved. Teleportation also requires a classical information link to be established, as two classical bits must be transmitted to accompany each qubit. The need for such links may, at first, seem disappointing; however, this is not unlike ordinary communications, which require wires, radios or lasers. What's more, Bell states are most easily shared using photons from lasers, and so teleportation could be done, in principle, through open space.

            Single atoms have been teleported,[1][2][3] although not in the science-fiction sense. An atom consists of several parts: the qubits in the electronic state or electron shells surrounding the atomic nucleus, the qubits in the nucleus itself, and, finally, the electrons, protons and neutrons making up the atom. Physicists have teleported the qubits encoded in the electronic state of atoms; they have not teleported the nuclear state, nor the nucleus itself. Thus, performing this kind of teleportation requires a stock of atoms at the receiving site, available for having qubits imprinted on them. The importance of teleporting the nuclear state is unclear: the nuclear state does affect the atom, e.g. in hyperfine splitting, but whether such state would need to be teleported in some futuristic "practical" application is debatable.

            The quantum world is strange and unusual; so, aside from no-cloning and no-deleting, there are other oddities. For example, quantum correlations arising from Bell states seem to be instantaneous (the Alain Aspect experiments), whereas classical bits can only be transmitted at the speed of light or slower (quantum correlations cannot be used to transmit classical bits; again, this is the no-communication theorem). Thus, teleportation as a whole can never be superluminal, as a qubit cannot be reconstructed until the accompanying classical bits arrive.

            The proper description of quantum teleportation requires a basic mathematical toolset which, although complex, is not out of reach of advanced high-school students, and indeed becomes accessible to college students with a good grounding in finite-dimensional linear algebra. In particular, the theory of Hilbert spaces and projection matrices is heavily used. A qubit is described using a two-dimensional complex-valued vector space (a Hilbert space); the formal description of the protocol does not make use of anything much more than that. Strictly speaking, a working knowledge of quantum mechanics is not required to understand the mathematics of quantum teleportation, although without such acquaintance the deeper meaning of the equations may remain quite mysterious.
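
            The protocol summarized above is concrete enough to simulate in a few dozen lines. The following numpy sketch is my own illustration, not code from the article: it teleports a random single-qubit state using a shared Bell pair, two classical bits and Bob's Pauli correction, then verifies that the output matches the input with fidelity 1.

              # Minimal simulation of the standard teleportation protocol:
              # qubit 0 = Alice's unknown state, qubits 1 and 2 = shared Bell pair.
              import numpy as np

              I2 = np.eye(2, dtype=complex)
              X = np.array([[0, 1], [1, 0]], dtype=complex)
              Z = np.array([[1, 0], [0, -1]], dtype=complex)
              H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
              P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
              P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1|

              def kron(*ops):
                  out = np.eye(1, dtype=complex)
                  for op in ops:
                      out = np.kron(out, op)
                  return out

              rng = np.random.default_rng()

              # 1. The unknown qubit |psi> to be teleported.
              psi = rng.normal(size=2) + 1j * rng.normal(size=2)
              psi /= np.linalg.norm(psi)

              # 2. Shared Bell pair (|00> + |11>)/sqrt(2) on qubits 1 (Alice) and 2 (Bob).
              bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
              state = np.kron(psi, bell)

              # 3. Alice's Bell-basis measurement circuit: CNOT(0 -> 1), then H on qubit 0.
              cnot01 = kron(P0, I2, I2) + kron(P1, X, I2)
              state = kron(H, I2, I2) @ (cnot01 @ state)

              # 4. Measure qubits 0 and 1: project onto each outcome and pick one at random.
              basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
              branches = {}
              for m0 in (0, 1):
                  for m1 in (0, 1):
                      proj = kron(np.outer(basis[m0], basis[m0]),
                                  np.outer(basis[m1], basis[m1]), I2)
                      branches[(m0, m1)] = proj @ state
              keys = list(branches)
              probs = [float(np.vdot(v, v).real) for v in branches.values()]
              idx = rng.choice(4, p=probs)
              m0, m1 = keys[idx]
              collapsed = branches[(m0, m1)] / np.sqrt(probs[idx])

              # 5. Bob receives (m0, m1) over the classical channel and applies Z^m0 X^m1.
              bob = collapsed.reshape(2, 2, 2)[m0, m1, :]
              if m1:
                  bob = X @ bob
              if m0:
                  bob = Z @ bob

              print("outcome:", (m0, m1), "fidelity:", abs(np.vdot(psi, bob)) ** 2)  # ~1.0

            Running it repeatedly shows all four measurement outcomes occurring; in every case the classically transmitted bits select the correction that restores the original state, which is why the protocol can never outrun the classical channel.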

          • Quantum entanglement, from Wikipedia: Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently; instead, a quantum state may be given for the system as a whole. Measurements of physical properties such as position, momentum, spin, polarization, etc. performed on entangled particles are found to be appropriately correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, then the spin of the other particle, measured on the same axis, will be found to be counterclockwise. Because of the nature of quantum measurement, however, this behavior gives rise to effects that can appear paradoxical: any measurement of a property of a particle can be seen as acting on that particle (e.g. by collapsing a number of superposed states); and in the case of entangled particles, such action must be on the entangled system as a whole. It thus appears that one particle of an entangled pair "knows" what measurement has been performed on the other, and with what outcome, even though there is no known means for such information to be communicated between the particles, which at the time of measurement may be separated by arbitrarily large distances.

            [Figure caption: A spontaneous parametric down-conversion process can split photons into type II photon pairs with mutually perpendicular polarization.]

            Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky and Nathan Rosen,[1] describing what came to be known as the EPR paradox, and of several papers by Erwin Schrödinger shortly thereafter.[2][3] Einstein and others considered such behavior to be impossible, as it violated the local realist view of causality (Einstein referred to it as "spooky action at a distance"),[4] and argued that the accepted formulation of quantum mechanics must therefore be incomplete. Later, however, the counterintuitive predictions of quantum mechanics were verified experimentally.[5] Experiments have been performed involving measuring the polarization or spin of entangled particles in different directions, which, by producing violations of Bell's inequality, demonstrate statistically that the local realist view cannot be correct. This has been shown to occur even when the measurements are performed more quickly than light could travel between the sites of measurement: there is no lightspeed-or-slower influence that can pass between the entangled particles.[6] Recent experiments have measured entangled particles within less than one part in 10,000 of the light travel time between them.[7] According to the formalism of quantum theory, the effect of measurement happens instantly.[8][9] It is not possible, however, to use this effect to transmit classical information at faster-than-light speeds[10] (see Faster-than-light: Quantum mechanics).

            Quantum entanglement is an area of extremely active research by the physics community, and its effects have been demonstrated experimentally with photons, electrons, molecules the size of buckyballs,[11][12] and even small diamonds.[13][14] Research is also focused on the utilization of entanglement effects in communication and computation.

            History: [Image caption: May 4, 1935 New York Times article headline regarding the imminent EPR paper.] The counterintuitive predictions of quantum mechanics about strongly correlated systems were first discussed by Albert Einstein in 1935, in a joint paper with Boris Podolsky and Nathan Rosen.[1] In this study, they formulated the EPR paradox (Einstein–Podolsky–Rosen paradox), a thought experiment that attempted to show that quantum mechanical theory was incomplete. They wrote: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete."[1] However, they did not coin the word entanglement, nor did they generalize the special properties of the state they considered. Following the EPR paper, Erwin Schrödinger wrote a letter (in German) to Einstein in which he used the word Verschränkung (translated by himself as entanglement) "to describe the correlations between two particles that interact and then separate, as in the EPR experiment."[15] He shortly thereafter published a seminal paper defining and discussing the notion, and terming it "entanglement." In the paper he recognized the importance of the concept and stated:[2] "I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought." Like Einstein, Schrödinger was dissatisfied with the concept of entanglement, because it seemed to violate the speed limit on the transmission of information implicit in the theory of relativity.[16] Einstein later famously derided entanglement as "spukhafte Fernwirkung,"[17] or "spooky action at a distance."

            The EPR paper generated significant interest among physicists and inspired much discussion about the foundations of quantum mechanics (perhaps most famously Bohm's interpretation of quantum mechanics), but relatively little other published work. So, despite the interest, the flaw in EPR's argument was not discovered until 1964, when John Stewart Bell proved that one of their key assumptions, the principle of locality, was not consistent with the hidden-variables interpretation of quantum theory that EPR purported to establish. Specifically, he demonstrated an upper limit, seen in Bell's inequality, on the strength of correlations that can be produced in any theory obeying local realism, and he showed that quantum theory predicts violations of this limit for certain entangled systems.[18] His inequality is experimentally testable, and there have been numerous relevant experiments, starting with the pioneering work of Freedman and Clauser in 1972[19] and Aspect's experiments in 1982.[20] They have all shown agreement with quantum mechanics rather than the principle of local realism. However, the issue is not finally settled, for each of these experimental tests has left open at least one loophole by which it is possible to question the validity of the results.

            The work of Bell raised the possibility of using these super-strong correlations as a resource for communication. It led to the discovery of quantum key distribution protocols, most famously BB84 by Bennett and Brassard and E91 by Artur Ekert. Although BB84 does not use entanglement, Ekert's protocol uses the violation of a Bell inequality as a proof of security. David Kaiser of MIT mentioned in his book How the Hippies Saved Physics that the possibilities of instantaneous long-range communication derived from Bell's theorem stirred interest among hippies, psychics, and even the CIA, with the counter-culture playing a critical role in its development toward practical use.[21]
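
            The Bell-inequality violations described above can be reproduced numerically. This small Python sketch (my own, not part of the quoted article) evaluates the CHSH combination of singlet-state correlations at the standard optimal angles and finds |S| = 2*sqrt(2), above the local-realist bound of 2.

              # CHSH test for the spin singlet: local realism bounds |S| by 2,
              # quantum mechanics reaches 2*sqrt(2) at the angles below.
              import numpy as np

              sx = np.array([[0, 1], [1, 0]], dtype=complex)
              sz = np.array([[1, 0], [0, -1]], dtype=complex)
              singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

              def E(a, b):
                  """Correlation <(a.sigma) x (b.sigma)> in the singlet, for in-plane angles a, b."""
                  A = np.cos(a) * sz + np.sin(a) * sx
                  B = np.cos(b) * sz + np.sin(b) * sx
                  return float(np.real(singlet.conj() @ np.kron(A, B) @ singlet))

              a, a2 = 0.0, np.pi / 2
              b, b2 = np.pi / 4, 3 * np.pi / 4
              S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
              print(S)   # ~ -2.828 = -2*sqrt(2); |S| > 2 violates the CHSH bound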

          • [i] [/i]

          • Poincaré recurrence theorem, from Wikipedia: In mathematics, the Poincaré recurrence theorem states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state. The Poincaré recurrence time is the length of time elapsed until the recurrence (this time may vary greatly depending on the exact initial state and required degree of closeness). The result applies to isolated mechanical systems subject to some constraints, e.g., all particles must be bound to a finite volume. The theorem is commonly discussed in the context of ergodic theory, dynamical systems and statistical mechanics. If the state space is quantized (discrete), an exact recurrence is possible after a period of time determined in part by the size of the space and the number of elements contained. If the state space is continuous, only approximate recurrence can be expected: the system returns arbitrarily close to, but in general not exactly to, its initial state. The theorem is named after Henri Poincaré, who published it in 1890.

            Precise formulation: Any dynamical system defined by an ordinary differential equation determines a flow map \(f^{t}\) mapping phase space onto itself. The system is said to be volume-preserving if the volume of a set in phase space is invariant under the flow. For instance, all Hamiltonian systems are volume-preserving because of Liouville's theorem. The theorem is then: if a flow preserves volume and has only bounded orbits, then for each open set there exist orbits that intersect the set infinitely often.[1] As an example, the deterministic baker's map exhibits Poincaré recurrence, which can be demonstrated in a particularly dramatic fashion when acting on 2D images. A given image, when sliced and squashed hundreds of times, turns into a snow of apparent "random noise." However, when the process is repeated thousands of times, the image reappears, although at times marred with greater or lesser bits of noise.

            Discussion of proof: The proof, speaking qualitatively, hinges on two premises:[2] (1) a finite upper bound can be set on the total potentially accessible phase-space volume (for a mechanical system, this bound can be provided by requiring that the system is contained in a bounded physical region of space, so that it cannot, for example, eject particles that never return; combined with the conservation of energy, this locks the system into a finite region in phase space); and (2) the phase volume of a finite element is conserved under the dynamics (for a mechanical system, this is ensured by Liouville's theorem).

            Imagine any finite starting volume of phase space and follow its path under the dynamics of the system. The volume "sweeps" points of phase space as it evolves, and the "front" of this sweeping has a constant size. Over time the explored phase volume (known as a "phase tube") grows linearly, at least at first. But, because the accessible phase volume is finite, the phase-tube volume must eventually saturate, because it cannot grow larger than the accessible volume. This means that the phase tube must intersect itself. In order to intersect itself, however, it must do so by first passing through the starting volume. Therefore, at least a finite fraction of the starting volume is recurring. Now consider the non-returning portion of the starting phase volume: the portion that never returns to the starting volume. Using the principle just discussed, we know that if the non-returning portion has nonzero measure, then a finite part of it must return. But that would be a contradiction, since any part of the non-returning portion that returns also returns to the original starting volume. Thus, the non-returning portion of the starting volume must have measure zero. Q.E.D.

            The theorem does not comment on certain aspects of recurrence which this proof cannot guarantee. There may be some special phases that never return to the starting phase volume, or that only return a finite number of times and then never return again; these, however, are extremely "rare," making up an infinitesimal part of any starting volume. Not all parts of the phase volume need to return at the same time: some will "miss" the starting volume on the first pass, only to make their return at a later time. Nothing prevents the phase tube from returning completely to its starting volume before all the possible phase volume is exhausted; a trivial example of this is the harmonic oscillator. Systems that do cover all accessible phase volume are called ergodic (this, of course, depends on the definition of "accessible volume"). What can be said is that for "almost any" starting phase, a system will eventually return arbitrarily close to that starting phase. The recurrence time depends on the required degree of closeness (the size of the phase volume): to achieve greater accuracy of recurrence, we need to take a smaller initial volume, which means a longer recurrence time. For a given phase in a volume, the recurrence is not necessarily periodic; the second recurrence time does not need to be double the first.

            Formal statement of the theorem: Let \((X,\Sigma,\mu)\) be a finite measure space and let \(f\colon X\to X\) be a measure-preserving transformation. Below are two alternative statements of the theorem.

            Theorem 1: For any \(E\in \Sigma\), the set of those points \(x\) of \(E\) such that \(f^{n}(x)\notin E\) for all \(n>0\) has zero measure. That is, almost every point of \(E\) returns to \(E\). In fact, almost every point returns infinitely often; i.e. \(\mu\left(\{x\in E : \text{there exists } N \text{ such that } f^{n}(x)\notin E \text{ for all } n>N\}\right)=0.\) (For a proof, see the proof of Poincaré recurrence theorem 1 at PlanetMath.org.)

            Theorem 2: The following is a topological version of the theorem: if \(X\) is a second-countable Hausdorff space and \(\Sigma\) contains the Borel sigma-algebra, then the set of recurrent points of \(f\) has full measure. That is, almost every point is recurrent. (For a proof, see the proof of Poincaré recurrence theorem 2 at PlanetMath.org.)

            Quantum mechanical version: For quantum mechanical systems with discrete energy eigenstates, a similar theorem holds. For every \(\epsilon > 0\) and \(T_{0} > 0\) there exists a time \(T\) larger than \(T_{0}\) such that \(\left\| \, |\psi(T)\rangle - |\psi(0)\rangle \, \right\| < \epsilon\), where \(|\psi(t)\rangle\) denotes the state vector of the system at time \(t\).[3][4][5]

            The essential elements of the proof are as follows. The system evolves in time according to \(|\psi(t)\rangle = \sum_{n=0}^{\infty} c_{n} \exp(-i E_{n} t)\, |\phi_{n}\rangle\), where the \(E_{n}\) are the energy eigenvalues (we use natural units, so \(\hbar = 1\)) and the \(|\phi_{n}\rangle\) are the energy eigenstates. The squared norm of the difference of the state vector at time \(T\) and at time zero can be written as \(\left\| \, |\psi(T)\rangle - |\psi(0)\rangle \, \right\|^{2} = 2\sum_{n=0}^{\infty} |c_{n}|^{2} \left[1-\cos(E_{n}T)\right].\) We can truncate the summation at some \(n = N\) independent of \(T\), because \(\sum_{n=N+1}^{\infty} |c_{n}|^{2} \left[1-\cos(E_{n}T)\right] \le 2\sum_{n=N+1}^{\infty} |c_{n}|^{2}\), which can be made arbitrarily small because the summation \(\sum_{n=0}^{\infty} |c_{n}|^{2}\), being the squared norm of the initial state, converges to 1. That the finite sum \(2\sum_{n=0}^{N} |c_{n}|^{2} \left[1-\cos(E_{n}T)\right]\) can be made arbitrarily small follows from the existence of integers \(k_{n}\) such that \(|E_{n}T - 2\pi k_{n}| < \delta\) for arbitrary \(\delta > 0\). This implies that there exist intervals for \(T\) on which \(1-\cos(E_{n}T) < \frac{\delta^{2}}{2}\). On such intervals, we have \(2\sum_{n=0}^{N} |c_{n}|^{2} \left[1-\cos(E_{n}T)\right] < \delta^{2} \sum_{n=0}^{N} |c_{n}|^{2} < \delta^{2}.\) The state vector thus returns arbitrarily closely to the initial state, infinitely often.
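
            The image-recurrence demonstration mentioned in the excerpt is easy to reproduce on a quantized state space. The sketch below is my own and swaps the baker's map named in the post for Arnold's cat map, a standard area-preserving map of the discrete torus, chosen because it reduces to an exact permutation of pixels, so the Poincaré recurrence appears as an exact, finite return time.

              # Exact recurrence of a "pixelated" state under an area-preserving map.
              import numpy as np

              def cat_map(img):
                  """One step of the cat map (x, y) -> (2x + y, x + y) mod n, a volume-preserving bijection."""
                  n = img.shape[0]
                  x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
                  out = np.empty_like(img)
                  out[(2 * x + y) % n, (x + y) % n] = img[x, y]
                  return out

              rng = np.random.default_rng(0)
              original = rng.integers(0, 256, size=(64, 64))    # stand-in for a 64 x 64 image

              state, steps = cat_map(original), 1
              while not np.array_equal(state, original):
                  state = cat_map(state)
                  steps += 1
              print("recurred exactly after", steps, "iterations")   # 48 for a 64 x 64 grid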
