Scientists hope to revive holograms

This story was originally published in the May/June 2022 issue with the title “Hope for Holograms.”


“Help me, Obi-Wan Kenobi. You’re my only hope,” pleads a pale, luminous image of Princess Leia, projected by the loyal droid R2-D2. In the memorable scene from the first Star Wars movie (1977), the heroes Obi-Wan Kenobi and Luke Skywalker are stunned by the 3D projection, which materializes before them as if a miniature Leia were in the room.

For fans of science fiction, the term hologram often evokes images of floating projections that present characters with interactive intelligence to help them navigate a futuristic world. Various iterations of the concept have popped up on screen in recent years, in everything from superhero flicks like Iron Man to the Netflix series Black Mirror. In literature, they appeared as early as Isaac Asimov’s 1942 story “Foundation,” in which prerecorded messages from a character are played back as three-dimensional projections.

While current technologies for projecting synthetic imagery into real-world environments, from head-mounted devices to theater-sized projector systems, can create compelling 3D experiences, they generally fall short of the free-floating, interactive images that filmmakers and futurists envision. However, researchers and tech companies are now working on display systems that they hope will one day fulfill the promise of science fiction.

Spoiler alert: Today’s solutions aren’t technically holograms. That’s because traditional holography involves recording a three-dimensional image on a two-dimensional medium.

Holograms originated as ghostly red or green images of inanimate objects, and are often used nowadays as security features — think shiny labels on credit cards or iridescent watermarks on various IDs, for example. In its early years, holography was generally accomplished by splitting a laser beam into two parts, then shining one beam onto an object before recombining the beams, creating an interference pattern that was imprinted on a photosensitive film. Hungarian physicist Dennis Gabor developed this technology in the 1940s using an electron beam, more than a decade before the advent of lasers.
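In textbook terms (a schematic gloss, not a description of Gabor’s specific apparatus), the film records the intensity of the superposed reference and object waves, and it is the interference cross term that preserves the object wave’s phase, and with it the scene’s three-dimensional structure:

\[ I \;=\; \lvert E_{\mathrm{ref}} + E_{\mathrm{obj}} \rvert^{2} \;=\; \lvert E_{\mathrm{ref}} \rvert^{2} + \lvert E_{\mathrm{obj}} \rvert^{2} + 2\,\operatorname{Re}\!\bigl( E_{\mathrm{ref}}^{*}\, E_{\mathrm{obj}} \bigr) \]

In this textbook picture, illuminating the developed plate with the reference beam alone reconstructs the object wave, which is why a flat plate can play back a scene with apparent depth.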

But because traditional holograms must be projected across flat surfaces such as credit cards or photographic plates, creating a Leia-like image in open air would be impossible using traditional holography, says Daniel Smalley, associate professor of electrical and computer engineering at Brigham Young University (BYU). Instead, devices that produce free-floating 3D imagery, in which light-scattering points are distributed throughout a volume of space, are generally classified as volumetric displays. “Instead of looking at them like a TV, you can look at them like a water fountain, where you can walk around and not have to stare at a screen to see those pictures,” Smalley says.

Engineers at Brigham Young University are developing volumetric displays in which symbols and images appear to float in the air. (Image source: Dan Smalley Lab / BYU)

Although these aerial imaging systems are still in the early stages of development, Smalley and other technologists envision a not-too-distant future where floating 3D projections will be widely used in video chats, medical imaging and emergency displays.

With the goal of reproducing a Leia-like image in the real world, Smalley’s team at BYU has created a technology that combines a laser with a dust-sized particle of cellulose to produce images that appear to float.

Smalley says a laser beam containing low-energy pockets, or “holes,” can be used to trap the particle and pull it through the air. By illuminating the particle with red, green and blue light as it travels along a predetermined path, the system draws full-color graphics in the air. The image is redrawn at least 10 times per second, creating the illusion of a continuous picture. In other words, “we move the particle fast enough that it looks like a continuous line,” Smalley says.
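A rough sense of what that redraw rate demands of the trapped particle can be worked out with a few lines of arithmetic. The sketch below is purely illustrative; the path, size and resulting speed are assumptions, not figures from the BYU system. It takes a sample drawing path and asks how fast a single particle must move to retrace it 10 times per second.

    import math

    def path_length(points):
        # Total length of a closed drawing path; points are (x, y) in millimetres.
        return sum(math.dist(points[i], points[(i + 1) % len(points)])
                   for i in range(len(points)))

    # Hypothetical drawing path: a 5 mm-radius circle sampled at 200 points.
    circle = [(5 * math.cos(2 * math.pi * k / 200),
               5 * math.sin(2 * math.pi * k / 200)) for k in range(200)]

    refresh_hz = 10  # minimum redraw rate cited in the article
    speed = path_length(circle) * refresh_hz
    print(f"particle must travel ~{speed:.0f} mm/s to retrace this path {refresh_hz} times per second")

Even for a centimeter-scale figure, the particle ends up covering hundreds of millimetres every second, which helps explain why scaling up, as Smalley notes below, would involve trapping and scanning many particles in parallel.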

The team has produced miniature images of a butterfly, a spacecraft, and a likeness of Princess Leia. Smalley says the system can be scaled up by trapping and scanning many particles at once. Additionally, the images themselves can be made to appear more solid by controlling how the particles reflect light in certain directions. By controlling how light is scattered at each image point, the technology could also offer different experiences to multiple viewers depending on their vantage point, he says. “Everyone in the room looking at the same space can see something completely different,” Smalley says. He adds that a sufficiently advanced device could one day let, say, a child see their homework as a projection while one parent makes a video call and the other plays a game in the same space.

And while related technologies that use lasers, as well as some that use sound waves, offer intriguing possibilities, they all still need to illuminate a physical speck of matter to produce an image. A team of researchers across the Pacific has taken the science-fiction concept a step further, creating midair images using light alone.

Picture this: You’re asleep on a Saturday morning when your device’s alarm goes off. You roll over groggily to see the word SNOOZE displayed in the air above it. You reach out and touch the virtual button with your finger, feeling a slight tingle at your fingertip, before drifting back to sleep.

Researchers from the University of Tsukuba in Japan are working on a technology that could make this scenario possible. Led by Assistant Professor Yoichi Ochiai, the team draws images in the air using a laser that emits extremely short pulses of light. The high-intensity beam ionizes air molecules, producing short-lived points of glowing plasma. The system displays images by rapidly steering the laser’s focal point in three dimensions, producing thousands of emission points, or pixels, per second.
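That emission rate sets a hard budget on what such a display can draw. The toy calculation below is illustrative only; the numbers are assumptions, not measurements from the Tsukuba system. It shows how a fixed points-per-second rate gets split between image detail and refresh rate.

    def points_per_frame(emission_rate_hz, refresh_hz):
        # Each frame can contain only as many plasma points as the laser can
        # place focal spots in one frame period.
        return emission_rate_hz // refresh_hz

    emission_rate = 4000  # assumed: "thousands of emission points per second"
    for refresh in (10, 30, 60):
        print(f"{refresh} Hz refresh -> up to {points_per_frame(emission_rate, refresh)} points per image")

Within such a budget, larger or more detailed images mean either a lower refresh rate or faster, more powerful optics, which is consistent with Ochiai’s point below that scalability depends on the optical equipment.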

An earlier version of this technology used nanosecond laser pulses, which had the unfortunate side effect of burning human skin. Nicknamed Fairy Lights, Ochiai’s setup uses much shorter femtosecond pulses and is far less dangerous despite its higher peak intensity, he says. The little hearts, stars and fairies the system projects are not only safe to touch but responsive to touch. In one demo, the tabletop system displays a checkbox that can be ticked with a finger. “It’s kind of like sandpaper or an electrostatic shockwave,” Ochiai says.

Combined with augmented-reality applications such as a floating keyboard, the system’s tactile properties could be used to create a kind of midair braille, Ochiai says. He also envisions large-scale emergency displays projected high above a city to warn residents of a natural disaster or direct them toward an escape route. While the initial images were no larger than a cubic centimeter, Ochiai says the system’s scalability depends on the size and power of the optical equipment. Large systems are currently too expensive, he says, adding that technologies that paint images with plasma will likely become more feasible within the next 10 to 20 years as their multimillion-dollar price tags come down.

BYU’s Daniel Smalley (center) and students Eric Nygaard (left) and Wesley Rogers (right) stand behind a laser table as they develop a new technology for volumetric displays. (Credit: Courtesy Nate Edwards/BYU)

While laser-based methods are still in their infancy, research groups and private companies alike have developed various screen-based displays that generate 3D imagery. Many have been marketed as holograms, although they usually rely on other technologies, such as polarized glasses, near-eye displays, or stacked LCD screens.

IKIN, a startup based in San Diego, is working on 3D displays built around a new device that connects to smartphones. Known as RYZ (pronounced “rise”), the accessory features a patented lens, explains founder and chief technology officer Taylor Scott. To a viewer looking through this portal, full-color objects appear against the real-world background.

Using AI algorithms, the device tracks the user’s eyes and other cues, not only to simulate stereoscopic depth but also to blend photons from the background environment with the light it projects toward the user’s retina. This “extra processing” allows the system to continually adjust the image in real time to compensate for the brightness and color of objects behind it, Scott says.

“We track the environment around you and use that light that’s actually going toward your eyes and add it to create the image we want,” Scott says, adding that the device can be used in daylight, unlike most goggle-based systems and theater-scale projection setups. Users can manipulate high-resolution depth fields by touching the glass-like window, and multiple users can see different perspectives of an image depending on their position relative to the device.

Scott anticipates a range of applications for the technology, from immersive video communication to advanced medical imaging, such as brain scans and echocardiograms. Developers will be able to create content for the device, while the RYZ app will let users play games and transform existing digital photos and videos into depth fields. “We give consumers the ability to take all of their memories and bring them to life in 3D,” Scott says.

Although the laws of physics will likely prevent technologists from producing Leia-like projections anytime soon, advances in volumetric display technologies are sure to bring increasingly compelling methods into the mainstream.

For example, you might see this technology used in video games. And while IKIN has made a concerted effort not to get bogged down in the gaming space, video games are at the core of the startup. Scott is an avid gamer. In fact, the device was motivated by his desire to immerse himself in the world of one of his favorite console games. “The whole reason I created this system was because I wanted to play The Legend of Zelda in 3D,” Scott says. “I love Zelda.”
