Gene therapy and optoelectronics could radically upgrade hearing for millions of people
Human hearing depends on the cochlea, a snail-shaped structure in the inner ear. A new kind of cochlear implant for people with disabling hearing loss would use beams of light to stimulate the cochlear nerve.
There’s a popular misconception that cochlear implants restore natural hearing. In fact, these marvels of engineering give people a new kind of “electric hearing” that they must learn how to use.
Natural hearing results from vibrations hitting tiny structures called hair cells within the cochlea in the inner ear. A cochlear implant bypasses the damaged or dysfunctional parts of the ear and uses electrodes to directly stimulate the cochlear nerve, which sends signals to the brain. When my hearing-impaired patients have their cochlear implants turned on for the first time, they often report that voices sound flat and robotic and that background noises blur together and drown out voices. Although users can have many sessions with technicians to “tune” and adjust their implants’ settings to make sounds more pleasant and helpful, there’s a limit to what can be achieved with today’s technology.
I have been an otolaryngologist for more than two decades. My patients tell me they want more natural sound, more enjoyment of music, and most of all, better comprehension of speech, particularly in settings with background noise—the so-called cocktail party problem. For 15 years, my team at the University of Göttingen, in Germany, has been collaborating with colleagues at the University of Freiburg and beyond to reinvent the cochlear implant in a strikingly counterintuitive way: using light.
We recognize that today’s cochlear implants run up against hard limits of engineering and human physiology. So we’re developing a new kind of cochlear implant that uses light emitters and genetically altered cells that respond to light. By using precise beams of light instead of electrical current to stimulate the cochlear nerve, we expect our optical cochlear implants to better replicate the full spectral nature of sounds and better mimic natural hearing. We aim to start clinical trials in 2026 and, if all goes well, we could get regulatory approval for our device at the beginning of the next decade. Then, people all over the world could begin to hear the light.
These 3D microscopic images of mouse ear anatomy show optical implants [dotted lines] twisting through the intricate structure of a normal cochlea, which contains hair cells; in deafness, these cells are lost or damaged. At left, the hair cells [light blue spiral] connect to the cochlear nerve cells [blue filaments and dots]. In the middle and right images, the bony housing of the mouse cochlea surrounds this delicate arrangement. Daniel Keppeler
Some 466 million people worldwide suffer from disabling hearing loss that requires intervention, according to the World Health Organization. Hearing loss mainly results from damage to the cochlea caused by disease, noise, or age and, so far, there is no cure. Hearing can be partially restored by hearing aids, which essentially provide an amplified version of the sound to the remaining sensory hair cells of the cochlea. Profoundly hearing-impaired people benefit more from cochlear implants, which, as mentioned above, skip over dysfunctional or lost hair cells and directly stimulate the cochlear, or auditory, nerve.
In the 2030s, people all over the world could begin to hear the light.
Cochlear implants are the most successful neuroprosthetic devices to date. The first was approved by the U.S. Food and Drug Administration in the 1980s, and nearly 737,000 devices had been implanted globally by 2019. Yet they make limited use of the neurons available for sound encoding in the cochlea. To understand why, you first need to understand how natural hearing works.
In a functioning human ear, sound waves are channeled down the ear canal and set the ear drum in motion, which in turn vibrates tiny bones in the middle ear. Those bones transfer the vibrations to the inner ear’s cochlea, a snail-shaped structure about the size of a pea. Inside the fluid-filled cochlea, a membrane ripples in response to sound vibrations, and those ripples move bundles of sensory hair cells that project from the surface of that membrane. These movements trigger the hair cells to release neurotransmitters that cause an electrical signal in the neurons of the cochlear nerve. All these electrical signals encode the sound, and the signal travels up the nerve to the brain. Regardless of which sound frequency they encode, the cochlear neurons represent sound intensity by the rate and timing of their electrical signals: The firing rate can reach a few hundred hertz, and the timing can achieve submillisecond precision.
Hair cells in different parts of the cochlea respond to different frequencies of sound, with those at the base of the spiral-shaped cochlea detecting high-pitched sounds of up to about 20 kilohertz, and those at the top of the spiral detecting low-pitched sounds down to about 20 Hz. This frequency map of the cochlea is also available at the level of the neurons, which can be thought of as a spiraling array of receivers. Cochlear implants capitalize on this structure, stimulating neurons in the base of the cochlea to create the perception of a high pitch, and so on.
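This place-frequency relationship is well captured by Greenwood's classic empirical function. The short Python sketch below is a back-of-the-envelope illustration using Greenwood's published constants for the human cochlea; the 16 stimulation sites are an arbitrary choice for the example, not a device specification:

```python
import numpy as np

def greenwood(x):
    """Greenwood place-frequency map for the human cochlea.

    x: normalized position along the cochlea, 0.0 = apex (top of the
    spiral), 1.0 = base. Returns the characteristic frequency in Hz.
    Constants are Greenwood's published fit for humans.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Characteristic frequencies at 16 evenly spaced stimulation sites,
# ordered from apex (low pitch) to base (high pitch).
for x in np.linspace(0.0, 1.0, 16):
    print(f"position {x:.2f} -> {greenwood(x):8.0f} Hz")
```

Evenly spaced sites thus sample the audible range on a roughly logarithmic frequency scale, from about 20 Hz at the apex to about 20 kHz at the base, which is why adding stimulation sites directly buys finer pitch resolution.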
A commercial cochlear implant today has a microphone, processor, and transmitter that are worn on the head, as well as a receiver and electrodes that are implanted. It typically has between 12 and 24 electrodes that are inserted into the cochlea to directly stimulate the nerve at different points. But the saline fluid within the cochlea is conductive, so the current from each electrode spreads out and causes broad activation of neurons across the frequency map of the cochlea. Because the frequency selectivity of electrical stimulation is limited, the quality of artificial hearing is limited, too. The natural process of hearing, in which hair cells trigger precise points on the cochlear nerve, can be thought of as playing the piano with your fingers; cochlear implants are more equivalent to playing with your fists. Even worse, this large stimulation overlap limits the way we can stimulate the auditory nerve, as it forces us to activate only one electrode at a time.
In normal hearing, sound waves travel down the ear canal and vibrate the ear drum and tiny bones in the middle ear. Those vibrations then reach the spiral-shaped cochlea and move bundles of sensory hair cells. When the hair cells respond, it triggers a neural signal that travels up the cochlear nerve to the brain. Hair cells at the base of the spiral respond to high-pitched sounds; those at the tip respond to low-pitched sounds.
With an electrical cochlear implant, a microphone, processor, and transmitter are worn behind the ear. The processor translates a sound’s pattern of frequencies into a crude stimulation pattern, which is transmitted to an implanted receiver and then to an electrode array that spirals through the cochlea. A limited number of electrodes (12 are shown here) directly stimulate the cells of the cochlear nerve. But each electrical pulse spreads out and stimulates off-target nerve cells, which results in muddier sound.
In a future optical cochlear implant, the external hardware could remain the same, though the processor could break up the sound into narrower frequency bands and transmit a more sophisticated stimulation pattern. The light source, either a flexible micro-LED array or optical fibers, would spiral through the cochlea, and the implant could have many more stimulation sites, because light is more easily confined in space than electrical current is. The user would have a gene-therapy treatment to make the cells of the cochlear nerve responsive to light, which would trigger precise signals that travel up the nerve to the brain.
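To make that stimulation pattern concrete, here is a minimal Python sketch of the filterbank-and-envelope stage at the heart of any such processor. The channel count, band edges, and filter orders are illustrative assumptions, not the algorithm of any commercial device:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def stimulation_envelopes(audio, fs, n_channels=12, f_lo=200.0, f_hi=8000.0):
    """Split audio into log-spaced bands and extract each band's envelope.

    Returns an (n_channels, n_samples) array; in a real processor each
    row would drive one stimulation site (an electrode today, a light
    emitter in the optical design).
    """
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    smoother = butter(2, 300.0, btype="low", fs=fs, output="sos")
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = sosfilt(butter(4, [lo, hi], btype="band", fs=fs,
                              output="sos"), audio)
        envelopes.append(sosfilt(smoother, np.abs(band)))  # rectify + smooth
    return np.stack(envelopes)

# A 1-kilohertz tone mostly activates the one channel whose band contains it.
fs = 22050
t = np.arange(fs) / fs
env = stimulation_envelopes(np.sin(2 * np.pi * 1000 * t), fs)
print(env.shape, env.mean(axis=1).round(4))
```

Raising n_channels from 12 toward 64 is trivial in software; the hard part, as the rest of this article describes, is delivering that many independent stimulation channels inside the cochlea.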
The idea for a better way began back in 2005, when I started hearing about a new technique being pioneered in neuroscience called optogenetics. German researchers were among the first to discover light-sensitive proteins in algae that regulated the flow of ions across a cellular membrane. Then, other research groups began experimenting with taking the genes that coded for such proteins and using a harmless viral vector to insert them into neurons. The upshot was that shining a light on these genetically altered neurons opened the new light-gated ion channels, depolarizing the cells and triggering them to fire, or activate, which allowed researchers to directly control living animals’ brains and behaviors. Since then, optogenetics has become a significant tool in neuroscience research, and clinicians are experimenting with medical applications including vision restoration and cardiac pacing.
I’ve long been interested in how sound is encoded and how this coding goes wrong in hearing impairment. It occurred to me that stimulating the cochlear nerve with light instead of electricity could provide much more precise control, because light can be tightly focused even in the cochlea’s saline environment.
We are proposing a new type of implanted medical device that will be paired with a new type of gene therapy.
If we used optogenetics to make cochlear nerve cells light sensitive, we could then precisely hit these targets with beams of low-energy light to produce much finer auditory sensations than with the electrical implant. We could theoretically have more than five times as many targets spaced throughout the cochlea, perhaps as many as 64 or 128. Sound stimuli could be electronically split up into many more discrete frequency bands, giving users a much richer experience of sound. This general idea had been taken up earlier by Claus-Peter Richter from Northwestern University, who proposed directly stimulating the auditory nerve with high-energy infrared light, though that concept wasn’t confirmed by other laboratories.
Our idea was exciting, but my collaborators and I saw a host of challenges. We were proposing a new type of implanted medical device that would be paired with a new type of gene therapy, both of which must meet the highest safety standards. We’d need to determine the best light source to use in the optogenetic system and how to transmit it to the proper spots in the cochlea. We had to find the right light-sensitive protein to use in the cochlear nerve cells, and we had to figure out how best to deliver the genes that code for those proteins to the right parts of the cochlea.
But we’ve made great progress over the years. In 2015, the European Research Council gave us a vote of confidence when it funded our “OptoHear” project, and in 2019, we spun off a company called OptoGenTech to work toward commercializing our device.
Our early proof-of-concept experiments in mice explored both the biology and the technology at play in our mission. Finding the right light-sensitive protein, or channelrhodopsin, turned out to be a long process. Many early efforts in optogenetics used channelrhodopsin-2 (ChR2), which opens an ion channel in response to blue light. We used it in an experiment that demonstrated that optogenetic stimulation of the auditory pathway provided better frequency selectivity than electrical stimulation did.
In our continued search for the best channelrhodopsin for our purpose, we tried a ChR2 variant called calcium translocating channelrhodopsin (CatCh) from the Max Planck Institute of Biophysics lab of Ernst Bamberg, one of the world pioneers of optogenetics. We delivered CatCh to the cochlear neurons of Mongolian gerbils using a harmless virus as a vector. We next trained the gerbils to respond to an auditory stimulus, teaching them to avoid a certain area when they heard a tone. Then we deafened the gerbils by applying a drug that kills hair cells and inserted a tiny optical cochlear implant to stimulate the light-sensitized cochlear neurons. The deaf animals responded to this light stimulation just as they had to the auditory stimulus.
The optical cochlear implant will enable people to pick out voices in a busy meeting and appreciate the subtleties of their favorite songs.
However, the use of CatCh has two problems: First, it requires blue light, which is associated with phototoxicity. When light, particularly high-energy blue light, shines directly on cells that are typically in the dark of the body’s interior, these cells can be damaged and eventually die off. The other problem with CatCh is that it’s slow to reset. At body temperature, once CatCh is activated by light, it takes about a dozen milliseconds to close the channel and be ready for the next activation. Such slow kinetics do not support the precise timing of neuron activation necessary to encode sound, which can require more than a hundred spikes per second. Many people said the kinetics of channelrhodopsins made our quest impossible—that even if we gained spectral resolution, we’d lose temporal resolution. But we took those doubts as a strong motivation to look for faster channelrhodopsins, and ones that respond to red light.
We were excited when a leader in optogenetics, Edward Boyden at MIT, discovered a faster-acting channelrhodopsin that his team called Chronos. Although it still required blue light for activation, Chronos was the fastest channelrhodopsin to date, taking about 3.6 milliseconds to close at room temperature. Even better, we found that it closed within about 1 ms at the warmer temperature of the body. However, it took some extra tricks to get Chronos working in the cochlea: We had to use powerful viral vectors and certain genetic sequences to improve the delivery of Chronos protein to the cell membrane of the cochlear neurons. With those tricks, both single neurons and the neural population responded robustly and with good temporal precision to optical stimulation at higher rates of up to about 250 Hz. So Chronos enabled us to elicit near-natural rates of neural firing, suggesting that we could have both frequency and time resolution. But we still needed to find an ultrafast channelrhodopsin that operated with longer wavelength light.
We teamed up with Bamberg to take on the challenge. The collaboration targeted Chrimson, a channelrhodopsin first described by Boyden that’s best stimulated by orange light. The first results of our engineering experiments with Chrimson were fast Chrimson (f-Chrimson) and very fast Chrimson (vf-Chrimson). We were pleased to discover that f-Chrimson enables cochlear neurons to respond to red light reliably up to stimulation rates of approximately 200 Hz. Vf-Chrimson is even faster but is less well expressed in the cells than f-Chrimson is; so far, vf-Chrimson has not shown a measurable advantage over f-Chrimson when it comes to high-frequency stimulation of cochlear neurons.
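A toy calculation shows why closing kinetics cap the usable stimulation rate. The Python sketch below treats channel closing as a single exponential and calls two light pulses resolvable only if the channel has mostly reset in between; the 10 percent reset threshold is an arbitrary choice, and this is a cartoon rather than a biophysical model:

```python
import numpy as np

def max_rate_hz(tau_close_ms, reset_fraction=0.1):
    """Crude ceiling on pulse rate for a channelrhodopsin.

    Models the open fraction after a pulse as exp(-t / tau_close_ms)
    and requires it to fall below reset_fraction before the next
    pulse arrives. Returns that maximum pulse rate in hertz.
    """
    t_reset_ms = tau_close_ms * np.log(1.0 / reset_fraction)
    return 1000.0 / t_reset_ms

# Closing time constants quoted above, both at body temperature:
# ~12 ms for CatCh, ~1 ms for Chronos.
for name, tau in [("CatCh", 12.0), ("Chronos", 1.0)]:
    print(f"{name}: tau = {tau:4.1f} ms -> ~{max_rate_hz(tau):3.0f} Hz")
```

Even this cartoon reproduces the right ordering: a roughly 12-millisecond channel tops out at a few tens of hertz, far too slow for sound coding, while a roughly 1-millisecond channel supports the hundreds-of-hertz firing rates described above.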
This flexible micro-LED array, fabricated at the University of Freiburg, is wrapped around a glass rod that’s 1 millimeter in diameter. The array is shown with its 144 diodes turned off [left] and operating at 1 milliamp [right]. University of Freiburg/Frontiers
We’ve also been exploring our options for the implanted light source that will trigger the optogenetic cells. The implant must be small enough to fit into the limited space of the cochlea, stiff enough for surgical insertion, yet flexible enough to gently follow the cochlea’s curvature. Its housing must be biocompatible, transparent, and robust enough to last for decades. My collaborators Ulrich Schwarz and Patrick Ruther, then at the University of Freiburg, started things off by developing the first micro-light-emitting diodes (micro-LEDs) for optical cochlear implants.
We found micro-LEDs useful because they’re a very mature commercial technology with good power efficiency. We conducted several experiments with microfabricated thin-film micro-LEDs and demonstrated that we could optogenetically stimulate the cochlear nerve in our targeted frequency ranges. But micro-LEDs have drawbacks. For one thing, it’s difficult to establish a flexible, transparent, and durable hermetic seal around the implanted micro-LEDs. Also, micro-LEDs with the highest efficiency emit blue light, which brings us back to the phototoxicity problem. That’s why we’re also looking at another way forward.
Instead of getting the semiconductor emitter itself into the cochlea, the alternative approach puts the light source, such as a laser diode, farther away in a hermetically sealed titanium housing. Optical fibers then bring the light into the cochlea and to the light-sensitive neurons. The optical fibers must be biocompatible, durable, and flexible enough to wind through the cochlea, which may be challenging with typical glass fibers. There’s interesting ongoing research in flexible polymer fibers, which might have better mechanical characteristics, but so far, they haven’t matched glass in efficiency of light propagation. The fiber-optic approach could have efficiency drawbacks, because we’d lose some light when it goes from the laser diode to the fiber, when it travels down the fiber, and when it goes from the fiber to the cochlea. But the approach seems promising, as it ensures that the optoelectronic components could be safely sealed up and would likely make for an easy insertion of the flexible waveguide array.
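To see how those losses compound, here is a trivial Python sketch of a fiber-link light budget. Every efficiency figure in it is a made-up placeholder for illustration, not a measurement from our experiments:

```python
# Multiply per-stage efficiencies to get end-to-end light delivery.
# All percentages below are illustrative placeholders only.
stages = [
    ("laser diode to fiber coupling", 0.60),
    ("propagation through the fiber", 0.85),
    ("fiber to cochlea emission", 0.80),
]
remaining = 1.0
for name, efficiency in stages:
    remaining *= efficiency
    print(f"after {name}: {remaining:.0%} of the source light remains")
```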
Another design possibility for optical cochlear implants is to use laser diodes as a light source and pair them with optical fibers made of a flexible polymer. The laser diode could be safely encapsulated outside the cochlea, which would reduce concerns about heat, while polymer waveguide arrays [left and right images] would curl into the cochlea to deliver the light to the cells. OptoGenTech
As we consider assembling these components into a commercial medical device, we first look for parts of existing cochlear implants that we can adopt. The audio processors that work with today’s cochlear implants can be adapted to our purpose; we’ll just need to split up the signal into more channels with smaller frequency ranges. The external transmitter and implanted receiver also could be similar to existing technologies, which will make our regulatory pathway that much easier. But the truly novel parts of our system—the optical stimulator and the gene therapy to deliver the channelrhodopsins to the cochlea—will require a good amount of scrutiny.
Cochlear implant surgery is quite mature and typically takes only a couple of hours at most. To keep things simple, we want to keep our procedure as close as possible to existing surgeries. But the key part of the surgery will be quite different: Instead of inserting electrodes into the cochlea, surgeons will first administer viral vectors to deliver the genes for the channelrhodopsin to the cochlear nerve cells, and then implant the light emitter into the cochlea.
Since optogenetic therapies are just beginning to be tested in clinical trials, there’s still some uncertainty about how best to make the technique work in humans. We’re still thinking about how to get the viral vector to deliver the necessary genes to the correct neurons in the cochlea. The viral vector we’ve used in experiments thus far, an adeno-associated virus, is a harmless virus that has already been approved for use in several gene therapies, and we’re using some genetic tricks and local administration to target cochlear neurons specifically. We’ve already begun gathering data about the stability of the optogenetically altered cells and whether they’ll need repeated injections of the channelrhodopsin genes to stay responsive to light.
Our roadmap to clinical trials is very ambitious. We’re working now to finalize and freeze the design of the device, and we have ongoing preclinical studies in animals to check for phototoxicity and prove the efficacy of the basic idea. We aim to begin our first-in-human study in 2026, in which we’ll find the safest dose for the gene therapy. We hope to launch a large phase 3 clinical trial in 2028 to collect data that we’ll use in submitting the device for regulatory approval, which we could win in the early 2030s.
We foresee a future in which beams of light can bring rich soundscapes to people with profound hearing loss or deafness. We hope that the optical cochlear implant will enable them to pick out voices in a busy meeting, appreciate the subtleties of their favorite songs, and take in the full spectrum of sound—from trilling birdsongs to booming bass notes. We think this technology has the potential to illuminate their auditory worlds.
Fascinating. Unclear how the "genetics" works - neurons don't regenerate, complicated, multi-process CRISPR probably ruled out near term, so possibly simple trans-wall mRNA? If so, endurance is a big question, just as in vaccines. Regarding the external DSP, modified DFT (mDFT) is chip-feasible, "channelizing" the pressure audio precisely with minimal overlap, and simultaneously in time and wavelength, supporting the idea of ultra-selective ON/OFF/pulsetrain stimuli to the disparate ion channels via some demultiplexer built into the optics. Multiple wavelength (WDM) optics might provide simultaneous, orthogonal transmission of overlapping pulsetrain channels to loci distributed across the optical cannula. This might involve annular layering or some reflection plane scheme, possibly interacting with multi-spectral wavelets cohering at differing propagation distances? Just a thought. On an entirely separate, but tightly related subject, tinnitus: is it possible that this same basic technology could be used to turn OFF (or otherwise defeat) neurochannels exciting this deleterious artifact of hearing loss? I think tinnitus could be as important an application regime as profound hearing loss, since so many more people, and so often younger, suffer from it than from the near-total deafness that is associated with clinical cochlear implantation, applied particularly to elderly patients today. I would enjoy hearing from the author on these few ideas and questions. He and his team are to be complimented and encouraged for working on a very important problem in quality of life. I would like to listen (!) along for future developments here.
I am personally very interested in this! I'm a 40-year veteran of live audio production, which I believe was the cause of an attack of Ménière's that took away most of the hearing in my right ear, along with permanent tinnitus in both ears. I do have random short-term periods where I get partial hearing back, with no apparent triggers. I also have a background in lasers and electro-optics, specializing in acousto-optic modulators and deflectors, which got me thinking about alternatives to your fiber optic "fan out" based on optical/acoustic channels. Perhaps an FM acousto-optic deflector for channel generation with a modulated laser diode source? The "refill" time of such a deflector would be in the microseconds per sweep, with up to a thousand channels being reasonable. Or possibly a grating? And to top it off, my daughter is a Dr. of Genetics and Genomics, so this research is something that I have a personal interest in, to the point of wanting to participate! I would appreciate being informed of any trials, etc. Regards, Doug Dulmage, dulmage@shaw.ca
First: excellent article. Then, not all of us cochlear implant users have "robotic" sound: mine gives me sound as I remember it before I lost my hearing in 1984, including the enjoyment of music. But that's with a Cochlear Spectra 22; my newer "upgrades" are awful, in comparison. Last, how would this implant be useful for someone who already has an implant? Would the original implant have to be removed?
The company’s Earth-2 supercomputer is taking on climate change
Kathy Pretz is editor in chief for The Institute, which covers all aspects of IEEE, its members, and the technology they're involved in. She has a bachelor's degree in applied communication from Rider University, in Lawrenceville, N.J., and holds a master's degree in corporate and public communication from Monmouth University, in West Long Branch, N.J.
Nvidia’s CTO Michael Kagan is an IEEE senior member.
In 2019 Michael Kagan was leading the development of accelerated networking technologies as chief technology officer at Mellanox Technologies, which he and eight colleagues had founded two decades earlier. Then in April 2020 Nvidia acquired the company for US $7 billion, and Kagan took over as CTO of that tech goliath—his dream job.
Nvidia is headquartered in Santa Clara, Calif., but Kagan works out of the company’s office in Israel.
At Mellanox, based in Yokneam Illit, Israel, Kagan had overseen the development of high-performance networking for computing and storage in cloud data centers. The company made networking equipment such as adapters, cables, and high-performance switches, as well as a new type of processor, the data-processing unit (DPU). The company’s high-speed InfiniBand products can be found in most of the world’s fastest supercomputers, and its high-speed Ethernet products are in most cloud data centers, Kagan says.
The IEEE senior member’s work is now focused on integrating a wealth of Nvidia technologies into accelerated computing platforms founded on three chips: the GPU, the CPU, and the DPU. The DPU offloads, accelerates, and isolates data center infrastructure workloads, freeing the CPU and GPU for applications.
“At Mellanox we worked on the data center interconnect, but at Nvidia we are connecting state-of-the-art computing to become a single unit of computing: the data center,” Kagan says. Interconnects are used to link multiple servers and combine the entire data center into one giant computing unit.
“I have access and an open door to Nvidia technologies,” he says. “That’s what makes my life exciting and interesting. We are building the computing of the future.”
Kagan was born in St. Petersburg, Russia—then known as Leningrad. After he graduated high school in 1975, his family moved to Israel. As with many budding engineers, his curiosity led him to disassemble and reassemble things to figure out how they worked. And, with many engineers in the family, he says, pursuing an engineering career was an easy decision.
He attended the Technion, Israel Institute of Technology, because “it was one of the best engineering universities in the world,” he says. “The reason I picked electrical engineering is because it was considered to be the best faculty in the Technion.”
Kagan graduated in 1980 with a bachelor’s degree in electrical engineering. He joined Intel in Haifa, Israel, in 1983 as a design engineer and eventually relocated to the company’s offices in Hillsboro, Ore., where he worked on the 80387 floating-point coprocessor. A year later, after returning to Israel, Kagan served as an architect of the i860 XP vector processor and then led and managed design of the Pentium MMX microprocessor.
During his 16 years at Intel, he worked his way up to chief architect. In 1999 he was preparing to move his family to California, where he would lead a high-profile project for the company. Then a former coworker at Intel, Eyal Waldman, asked Kagan to join him and five other acquaintances to form Mellanox.
Alma mater: Technion, Israel Institute of Technology, Haifa
Kagan had been turning down offers to join startups nearly every week, he recalls, but Mellanox, with its team of cofounders and vision, drew him in. He says he saw it as a “compelling adventure, an opportunity to build a company with a culture based on the core values I grew up on: excellence, teamwork, and commitment.”
During his more than 21 years there, he says, he had no regrets.
“It was one of the greatest decisions I’ve ever made,” he says. “It ended up benefiting all aspects of my life: professionally, financially—everything.”
InfiniBand, the startup’s breakout product, was designed for what today is known as cloud computing, Kagan says.
“We took the goodies of InfiniBand and bolted them on top of the standard Ethernet,” he says. “As a result, we became the vendor of the most advanced network for high-performance computing. More than half the machines on the TOP500 list of the world’s fastest supercomputers use the Mellanox interconnect, now the Nvidia interconnect.
“Most of the cloud providers, such as Facebook, Azure, and Alibaba, use Nvidia’s networking and compute technologies. No matter what you do on the Internet, you’re most likely running through the chip that we designed.”
Kagan says the partnership between Mellanox and Nvidia was “natural,” as the two companies had been doing business together for nearly a decade.
“We delivered quite a few innovative solutions as independent companies,” he says.
One of Kagan’s key priorities is Nvidia’s BlueField DPU. The data center infrastructure on a chip offloads, accelerates, and isolates a variety of networking, storage, and security services. Nvidia
As CTO of Nvidia for the past two years, Kagan has shifted his focus from pure networking to the integration of multiple Nvidia technologies including building BlueField data-processing units and the Omniverse real-time graphics collaboration platform.
He says Nvidia’s vision for the data center of the future is based on its three chips: CPU, DPU, and GPU.
“These three pillars are connected with a very efficient and high-performance network that was originally developed at Mellanox and is being further developed at Nvidia,” he says.
Development of the BlueField DPUs is now a key priority for Nvidia. It is a data center infrastructure on a chip, optimized for high-performance computing. It also offloads, accelerates, and isolates a variety of networking, storage, and security services.
“In the data center, you have no control over who your clients are,” Kagan says. “It may very well happen that a client is a bad guy who wants to penetrate his neighbors’ or your infrastructure. You’re better off isolating yourself and other customers from each other by having a segregated or different computing platform run the operating system, which is basically the infrastructure management, the resource management, and the provisioning.”
Kagan is particularly excited about the Omniverse, a new Nvidia product that uses Pixar’s Universal Scene Description software for creating virtual worlds—what has become known as the metaverse. Kagan describes the 3D platform as “creating a world by collecting data and making a physically accurate simulation of the world.”
Car manufacturers are using the Omniverse to test-drive autonomous vehicles. Instead of physically driving a car on different types of roads under various conditions, data about the virtual world can be generated to train the AI models.
“You can create situations that the car has to handle in the real world but that you don’t want it to meet in the real world, like a car crash,” Kagan says. “You don’t want to crash the car to train the model, but you do need to have the model be able to handle hazardous conditions on the road.”
Kagan joined IEEE in 1997. He says membership gives him access to information about technical topics that would otherwise be challenging to obtain.
“I enjoy this type of federated learning and being exposed to new things,” he says.
He adds that he likes connecting with members who are working on similar projects, because he always learns something new.
“Being connected to these people from more diverse communities helps a lot,” he says. “It inspires you to do your job in a different way.”
The Omniverse platform can generate millions of kilometers of synthetic driving data orders of magnitude faster than actually driving a car.
Nvidia is investing heavily in technology for self-driving cars, Kagan says.
The company is also building what it calls the most powerful AI supercomputer for climate science: Earth-2, a digital twin of the planet. Earth-2 is designed to continuously run models to predict climate and weather events at both the regional and global levels.
Kagan says the climate modeling technology will enable people to try mitigation techniques for global warming and see what their impact is likely to be in 50 years.
The company is also working closely with the health care industry to develop AI-based technologies. Its supercomputers are helping to identify cancer, generating synthetic data that researchers use to train their models to better spot tumors. Its AI and accelerated computing products also assist with drug discovery and genome research, Kagan says.
“We are actually moving forward at a fairly nice pace,” he says. “But the thing is that you always need to reinvent yourself and do the new thing faster and better, and basically win with what you have and not look for infinite resources. This is what commitment means.”
Standard handsets on Earth, in some locations, will soon connect directly to satellites for remote roaming
Lucas Laursen is a journalist covering global development by way of science and technology with special interest in energy and agriculture. He has lived in and reported from the United States, United Kingdom, Switzerland, and Mexico.
Lynk Tower 1 launched in April 2022, deploying the world’s first commercial cell tower in space.
The next generation of cellphone networks won’t just be 5G or 6G—they will be zero g. In April, Lynk Global launched the first direct-to-mobile commercial satellite, and on 15 August a competitor, AST SpaceMobile, confirmed plans to launch an experimental direct-to-mobile satellite of its own in mid-September. Inmarsat and other companies are working on their own low Earth orbit (LEO) cellular solutions as launch prices drop, satellite fabrication methods improve, and telecoms engineers push new network capabilities.
LEO satellite systems such as SpaceX’s Starlink and Amazon’s Kuiper envision huge constellations of satellites. However, the U.S. Federal Communications Commission just rejected SpaceX’s application for some of the US $9 billion federal rural broadband fund—in part because the Starlink system requires a $600 ground station. Space-based cell service would not require special equipment, making it a potential candidate for rural broadband funds if companies can develop solutions to the many challenges that face satellite-based smartphone service.
“The main challenge is the link budget,” says electrical engineer Symeon Chatzinotas of the University of Luxembourg, referring to the amount of power required to transmit and receive data between satellites and connected devices. “Sending signals to smartphones outdoors could be feasible by using low Earth orbit satellites with sizable antennas in the sky. However, receiving info would be even more challenging since the smartphone antennas usually disperse their energy in all directions.”
“From a nerdy engineering perspective, what’s happening is that network architectures are diverging.” —Derek Long, Cambridge Consultants
The typical distance from a phone to an LEO satellite might be 500 kilometers, at least two orders of magnitude more than typical signal-transmission distances in urban settings. Because received power falls with the square of distance, the spreading loss of the phone’s signal would be at least four orders of magnitude greater, and the problem is further complicated by the phone’s orientation. It is unlikely that a satellite-smartphone connection would work well when the handset is inside a building, for example.
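A quick Friis free-space calculation makes the gap concrete; the 2-gigahertz carrier below is an assumed, typical cellular band:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss (Friis), in decibels."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

freq = 2.0e9  # assumed ~2 GHz cellular band
for label, d in [("urban macrocell, ~5 km", 5e3),
                 ("LEO satellite, ~500 km", 500e3)]:
    print(f"{label}: {fspl_db(d, freq):.1f} dB")

# The 100x longer path costs 20*log10(100) = 40 dB: the satellite
# receives 10,000 times less power from the same handset.
```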
Lynk Global’s initial offering, which it predicts will be available in late 2022, is narrowband, meaning it supports only limited voice calls, texting, and Internet of Things (IoT) traffic. That might not allow plutocrats to make 4K video calls from their ocean-faring yachts, but it would be enough for ship insurance companies or rescue services to remain in contact with vessels in places where they couldn’t be reached before, using off-the-shelf cellular devices. AST SpaceMobile is aiming for 4G and 5G broadband service for mobiles.
AST satellites will use a phased-array antenna, which consists of many antennas fanned out around the satellite. Each portion of the antenna will transmit within a well-defined cone terminating at the Earth’s surface; that will be the space-to-Earth equivalent of a cell originating from a single ground base station. The company plans for an initial fleet of 20 satellites to cover the equator and help fund the launch of subsequent satellites providing more global coverage.
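A diffraction-limit estimate gives a feel for the size of one such cell; the aperture, carrier frequency, and altitude below are illustrative assumptions, not AST's published specifications:

```python
import math

freq_hz = 2.0e9     # assumed ~2 GHz cellular band
aperture_m = 8.0    # assumed width of the phased array
altitude_m = 500e3  # assumed low Earth orbit, looking straight down

wavelength_m = 3.0e8 / freq_hz
beamwidth_rad = wavelength_m / aperture_m  # diffraction-limited beam
cell_diameter_m = beamwidth_rad * altitude_m
print(f"beamwidth ~{math.degrees(beamwidth_rad):.1f} deg, "
      f"cell diameter ~{cell_diameter_m / 1000:.0f} km")
```

Each satellite projects many such beams at once, so its overall coverage footprint is far larger than any single cell.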
The size of the coverage zone on the ground should exceed the limited size of those created by Alphabet’s failed balloon-based Project Loon. Broader coverage areas should allow AST to serve more potential customers with the same number of antennas. The low Earth orbit AST is experimenting with yields round-trip signal travel times of around 25 milliseconds or less, an order of magnitude faster than is the case for higher-orbit geostationary satellites that have provided satellite telephony until now.
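The quoted latency follows from geometry alone. The sketch below counts four straight-down signal legs (phone up to the satellite, down to a ground gateway, and back again); real slant paths and processing overhead push the LEO figure up toward the roughly 25 ms cited above:

```python
C_KM_PER_S = 3.0e5  # speed of light

def round_trip_ms(altitude_km):
    """Four nadir legs: phone -> satellite -> gateway -> satellite -> phone."""
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"LEO (~500 km):   {round_trip_ms(500):6.1f} ms")
print(f"GEO (35,786 km): {round_trip_ms(35786):6.1f} ms")
```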
Plenty of behind-the-scenes technical work remains. The relatively high speed of LEO satellites will also cause a Doppler shift in the signals for which the network will have to compensate, according to a recent review in IEEE Access. New protocols for handoffs between satellites and terrestrial towers will also have to be created so that an active call can be carried from one cell to the next.
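The size of that Doppler shift is easy to estimate to first order: an LEO satellite moves at roughly 7.6 kilometers per second, and near the horizon much of that velocity lies along the line of sight. The 2 GHz carrier below is again an assumed band:

```python
C_M_PER_S = 3.0e8  # speed of light

def doppler_shift_hz(carrier_hz, radial_speed_m_s):
    """First-order Doppler shift for a given line-of-sight speed."""
    return carrier_hz * radial_speed_m_s / C_M_PER_S

# ~7.6 km/s orbital speed taken as a worst-case line-of-sight component.
print(f"~{doppler_shift_hz(2.0e9, 7.6e3) / 1e3:.0f} kHz at 2 GHz")
```

A shift of tens of kilohertz, sweeping as the satellite passes overhead, is far beyond what terrestrial networks are built to track, which is why the standards work described next matters.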
The international telecoms standards group 3GPP began providing guidelines for so-called nonterrestrial networks in March in the 17th iteration of its cellular standards. “Nonterrestrial networks” refers not just to LEO satellites but also high-altitude platforms such as drones or balloons. Nonterrestrial networks will need further updates to 3GPP’s standards to accommodate their new network architecture, such as the longer distances between cell base stations and devices.
For example, Stratospheric Platforms earlier this year tested a drone-based network prototype that would fly at altitudes greater than 18,000 meters. Its behavior as part of a 5G network will differ from that of a Lynk Global or AST satellite.
“From a nerdy engineering perspective, what’s happening is that network architectures are diverging. On the one hand, small cells are replacing Wi-Fi. On the other hand [telecom operators] are going to satellite-based systems with very wide coverage. In the middle, traditional macrocells, which are kind of difficult economically, are being squeezed,” says Derek Long, head of telecommunications at Cambridge Consultants. The company has advised Stratospheric Platforms and other companies working on nonterrestrial networks.
If telecom operators succeed, users won’t even notice their space-age smartphone networks.
“When you buy a phone, you expect it to work. Not just where someone says it will work, but everywhere. This is a step toward making that a possibility,” Long says.
Register for this webinar to enhance your modeling and design processes for microfluidic organ-on-a-chip devices using COMSOL Multiphysics
You will learn methods for simulating the performance and behavior of microfluidic organ-on-a-chip devices and microphysiological systems in COMSOL Multiphysics. Additionally, you will see how to couple multiple physical effects in your model, including chemical transport, particle tracing, and fluid–structure interaction. You will also learn how to distill simulation output to find key design parameters and obtain a high-level description of system performance and behavior.
There will also be a live demonstration of how to set up a model of a microfluidic lung-on-a-chip device with two-way coupled fluid–structure interaction. The webinar will conclude with a Q&A session. Register now for this free webinar!