• YeahBuoy@lemmy.world · 1 year ago

      From the image caption:

      Biphoton state holographic reconstruction. Image reconstruction. a, Coincidence image of interference between a reference SPDC state and a state obtained by a pump beam with the shape of a Yin and Yang symbol (shown in the inset). The inset scale is the same as in the main plot. b, Reconstructed amplitude and phase structure of the image imprinted on the unknown pump. Credit: Nature Photonics (2023). DOI: 10.1038/s41566-023-01272-3

      Edit: Sounds like this is actually observed, but I don’t believe this is something one can “see”. The images are constructed from the difference between two states: one a reference state, the other the state as shaped by the pump. As I understand it, this isn’t all that different from capturing light on film or a photo sensor: the reference state is the film/sensor detecting no light, and the second state is when it is exposed to light.

    • galilette@mander.xyz · 1 year ago

      The bigger black and white image on the left is the “double exposure” of both the reference state and the unknown state, and that is what is observed. But the Yin Yang you see is the shape of the pump beam, which is the smaller black and white inset on the left. The colored one on the right is the reconstructed unknown state (that is, it is computed from the b&w one and the reference state (not shown)).
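      To give a feel for how an unknown field can be computed from interference with a known reference, here is a minimal classical analogue (four-step phase-shifting holography, not the paper's biphoton/coincidence procedure). All field shapes and names below are made up for illustration:

```python
import numpy as np

# Classical sketch of holographic reconstruction: interfere an unknown
# field with a known reference at four reference phase shifts, then
# recover the unknown field's amplitude AND phase from the four
# recorded intensities. Everything here is illustrative, not the
# paper's actual quantum protocol.

N = 64
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]

# "Unknown" field: Gaussian amplitude with a vortex-like phase
unknown = np.exp(-(x**2 + y**2) / 0.5) * np.exp(1j * np.arctan2(y, x))

# Known reference field (flat amplitude and phase)
ref = np.ones((N, N), dtype=complex)

# Record four interference intensities with a phase-shifted reference
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [np.abs(unknown + ref * np.exp(1j * p))**2 for p in shifts]

# Standard four-step formula isolates the cross term C = unknown * conj(ref)
C = ((I[0] - I[2]) + 1j * (I[1] - I[3])) / 4
recovered = C / np.conj(ref)  # safe here since |ref| = 1 everywhere

print(np.max(np.abs(recovered - unknown)))  # ~0, up to float roundoff
```

      The point carried over to the paper is the same: intensity alone loses the phase, but interference with a known reference lets you solve for it.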

      • I hate to sound dumb … just trying to be crystal clear … with the proper equipment, can humans see the actual B&W Yin Yang shape in real time?

        Sorry… I’m just having a real hard time believing that symbol appears in real time at such a microscopic level.

        • galilette@mander.xyz · 1 year ago

          The Yin Yang is the “shape” of the light source. The point is that their technique can be used to infer the shape of an unknown light source, among other things. Insofar as the data recorded in the experiment involves two photons (or really, many identically prepared copies of two photons over time) and therefore 4 spatial dimensions (x and y for each), then yes, the 2D image they show is necessarily “interpreted” from the 4D “raw image”. Exposure time is 1 min according to the paper, so not quite “real time”, but the whole theory is time independent (no time in any equation), so I imagine it can be shortened with e.g. higher laser power.

          caveat: not an optics person so grain of salt…