Mild-Sunlight-Activated Safe Photodynamic Therapy Using On-Off Polymer Photosensitizers in Wearable Microneedle Patch

Highest h-index author
Unknown

That author's affiliation: National University of Singapore
Institution (first & last author): National University of Singapore

Photodynamic therapy (PDT) is often limited by its reliance on specialized light sources, the need for clinic-based treatment, and risks of phototoxicity. Here, the authors report mild-sunlight-activated PDT microneedle patches incorporating polymeric photosensitizers with an intrinsic “on-off” reactive oxygen species-generating mechanism, enabling bio-safe, deep-tissue, and self-administered PDT.

Activating p53Y220C with a mutant-specific small molecule

Highest h-index author
Xijun Zhu (h-index 4)

That author's affiliation: Stanford University
Institution (first & last author): Stanford University

The tumor suppressor protein p53 is commonly mutated in cancer and has been challenging for therapeutic reactivation. Here, the authors developed a small molecule chemical inducer of proximity to selectively activate p53 transcription and induce cellular senescence and apoptosis in p53Y220C cells.

Skeletal editing of ether-based electrolyte diluents by oxygen-distal fluorination for energy-dense Li metal battery

Highest h-index author
Unknown

That author's affiliation: Shandong University
First author institution: Shandong University
Last author institution: Shanghai University

Overcoming parasitic reactions between the electrolyte and electrodes is essential for realizing energy-dense lithium metal batteries. Here, the authors develop an oxygen-distal fluorinated diluent that promotes a diluent- and anion-rich solvation structure, forming a stable inorganic-rich interphase and enabling stable cycling in pouch cells at the 500 Wh kg⁻¹ level.

Spatially decoding genotype-associated epigenetic landscapes in human lymphoma FFPE tissues via epi-Patho-DBiT

Highest h-index author
Rong Fan (h-index 61)

That author's affiliation: Yale University
Institution (first & last author): Yale University

Li, Tao, and colleagues present epi-Patho-DBiT to spatially map chromatin accessibility and histone modifications in archived human lymphoma tissues, revealing epigenetic drivers of lymphoma development, progression and transformation.

Coherence of a hole-spin flopping-mode qubit in a circuit quantum electrodynamics environment

Highest h-index author
Unknown

That author's affiliation: University of Grenoble Alpes
Institution (first & last author): University of Grenoble Alpes

Coupling semiconductor qubit devices to microwave resonators provides a way to transfer quantum information over long distances. A flopping-mode qubit that combines strong coupling to photons with good coherence properties has now been demonstrated.

Squeezing, trisqueezing and quadsqueezing in a hybrid oscillator–spin system

Higher-order interactions in quantum harmonic oscillator systems can result in useful effects, but they are hard to engineer. An experiment on a single trapped ion now demonstrates how spin can mediate higher-order nonlinear bosonic interactions.

An analytical model to describe self-discharge rates in solid-state batteries

Internal self-discharge can compromise the shelf life of solid-state batteries. Now, physico-chemical analysis of charge loss shows that the internal self-discharge over time is not solely determined by the electronic conductivity of the solid separator but also by its electrochemical stability. This model could help guide separator and cell design.

Thin membranes with Cu-ion crosslinking for high temperature polymer electrolyte membrane fuel cells

Highest h-index author
Unknown

That author's affiliation: Beihang University
Institution (first & last author): Beihang University

High-temperature polymer electrolyte membrane fuel cells tend to use relatively thick membranes to counteract H3PO4-induced degradation, limiting performance. Here the authors introduce a dynamic metal-ion crosslinking strategy to create thin, robust membranes, achieving promising power density and durability.

Spatiotemporally localized optical links and knots

Highest h-index author
Sergey A. Ponomarenko (h-index 40)

That author's affiliation: Dalhousie University
Institution (first & last author): University of Shanghai for Science and Technology

The authors propose and experimentally demonstrate a scheme for weaving optical topological knots and links that are fully localized in space–time, thereby breaking the conventional constraint of longitudinal space-filling via Milnor polynomials.

Multi-dimensional frequency-bin entanglement-based quantum key distribution network

Highest h-index author
Laurent Vivien (h-index 55)

That author's affiliation: Centre de Nanosciences et de Nanotechnologies
First author institution: Unknown
Last author institution: Centre de Nanosciences et de Nanotechnologies

Tunable symmetry breaking in a hexagonal-stacked moiré magnet

Highest h-index author
Kai Sun (h-index 59)

That author's affiliation: University of Michigan
First author institution: University of Michigan
Last author institution: Texas Tech University

Tuning symmetry breaking in magnetic transitions via twist-angle engineering is challenging, as twisted two-dimensional magnets often inherit the magnetic ground states of their constituent parts. Now this tunability is achieved in a double-bilayer moiré magnet.

Better Hardware Could Turn Zeros into AI Heroes

Highest h-index author
Unknown
Main affiliation
Stanford University


When it comes to AI models, size matters.

Even though some artificial-intelligence experts warn that scaling up large language models (LLMs) is hitting diminishing performance returns, companies are still coming out with ever larger AI tools. Meta’s latest Llama release had a staggering 2 trillion parameters that define the model.

As models grow in size, their capabilities increase. But so do the energy demands and the time it takes to run the models, which increases their carbon footprint. To mitigate these issues, people have turned to smaller, less capable models and to lower-precision numbers for the model parameters whenever possible.

But there is another path that may retain a staggeringly large model’s high performance while reducing both its run time and its energy footprint. This approach involves befriending the zeros inside large AI models.

For many models, most of the parameters—the weights and activations—are actually zero, or so close to zero that they could be treated as such without losing accuracy. This quality is known as sparsity. Sparsity offers a significant opportunity for computational savings: Instead of wasting time and energy adding or multiplying zeros, these calculations could simply be skipped; rather than storing lots of zeros in memory, one need only store the nonzero parameters.

Unfortunately, today’s popular hardware, like multicore CPUs and GPUs, does not naturally take full advantage of sparsity. To fully leverage sparsity, researchers and engineers need to rethink and re-architect each piece of the design stack, including the hardware, low-level firmware, and application software.

In our research group at Stanford University, we have developed the first (to our knowledge) piece of hardware that’s capable of calculating all kinds of sparse and traditional workloads efficiently. The energy savings varied widely over the workloads, but on average our chip consumed one-seventieth the energy of a CPU, and performed the computation on average eight times as fast. To do this, we had to engineer the hardware, low-level firmware, and software from the ground up to take advantage of sparsity. We hope this is just the beginning of hardware and model development that will allow for more energy-efficient AI.

What is sparsity?

Neural networks, and the data that feeds into them, are represented as arrays of numbers. These arrays can be one-dimensional (vectors), two-dimensional (matrices), or more (tensors). A sparse vector, matrix, or tensor has mostly zero elements. The level of sparsity varies, but when zeros make up more than 50 percent of any type of array, it can stand to benefit from sparsity-specific computational methods. In contrast, an object that is not sparse—that is, it has few zeros compared with the total number of elements—is called dense.
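As a concrete illustration (a sketch assuming NumPy, not code from the article), sparsity is just the fraction of zero, or near-zero, elements in an array:

```python
import numpy as np

def sparsity(arr, tol=0.0):
    """Fraction of elements whose magnitude is at most tol (treated as zero)."""
    return float(np.mean(np.abs(arr) <= tol))

# A 4-by-4 matrix with three nonzero elements.
m = np.array([[0.0, 0.0, 4.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [7.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0]])
print(sparsity(m))  # 13 of 16 elements are zero -> 0.8125
```

The `tol` parameter captures the "so close to zero that they could be treated as such" case mentioned earlier: setting a small tolerance counts near-zero parameters as zero.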

Sparsity can be naturally present, or it can be induced. For example, a social-network graph will be naturally sparse. Imagine a graph where each node (point) represents a person, and each edge (a line segment connecting the points) represents a friendship. Since most people are not friends with one another, a matrix representing all possible edges will be mostly zeros. Other popular applications of AI, such as other forms of graph learning and recommendation models, contain naturally occurring sparsity as well.


[Diagram: mapping a sparse matrix to a fibertree and a compressed storage format]


Beyond naturally occurring sparsity, sparsity can also be induced within an AI model in several ways. Two years ago, a team at Cerebras showed that one can set 70 to 80 percent of the parameters in an LLM to zero without losing any accuracy. Cerebras demonstrated these results specifically on Meta’s open-source Llama 7B model, but the ideas extend to other LLMs like ChatGPT and Claude.

The case for sparsity

Sparse computation’s efficiency stems from two fundamental properties: the ability to compress away zeros and the convenient mathematical properties of zeros. Both the algorithms used in sparse computation and the hardware dedicated to them leverage these two basic ideas.

First, sparse data can be compressed, making it more memory efficient to store “sparsely”—that is, in something called a sparse data type. Compression also makes it more energy efficient to move data when dealing with large amounts of it. This is best understood by an example. Take a four-by-four matrix with three nonzero elements. Traditionally, this matrix would be stored in memory as is, taking up 16 spaces. This matrix can also be compressed into a sparse data type, getting rid of the zeros and saving only the nonzero elements. In our example, this results in 13 memory spaces as opposed to 16 for the dense, uncompressed version. These savings in memory increase with increased sparsity and matrix size.


[Diagram: comparing dense and sparse matrix–vector multiplication step by step]


In addition to the actual data values, compressed data also requires metadata. The row and column locations of the nonzero elements also must be stored. This is usually thought of as a “fibertree”: The row labels containing nonzero elements are listed and linked to the column labels of the nonzero elements, which are then linked to the values stored in those elements.

In memory, things get a bit more complicated still: The row and column labels for each nonzero value must be stored as well as the “segments” that indicate how many such labels to expect, so the metadata and data can be clearly delineated from one another.
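One common realization of this values-plus-metadata layout is the compressed sparse row (CSR) format. The sketch below (an illustration, not the article's exact fibertree format, so memory counts differ slightly) stores the nonzero values, their column labels, and per-row "segment" pointers:

```python
def to_csr(dense):
    """Compress a dense matrix (list of lists) into CSR arrays:
    nonzero values, their column indices, and row-pointer 'segments'."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # where this row's values end
    return values, col_idx, row_ptr

# The 4-by-4 matrix with three nonzero elements from the earlier example.
dense = [[0, 0, 4, 0],
         [0, 0, 0, 0],
         [7, 0, 0, 0],
         [0, 2, 0, 0]]

values, col_idx, row_ptr = to_csr(dense)
print(values)   # [4, 7, 2]
print(col_idx)  # [2, 0, 1]
print(row_ptr)  # [0, 1, 1, 2, 3]
```

Here 3 values + 3 column labels + 5 segment pointers means 11 stored numbers instead of 16 dense entries, and the savings grow with sparsity and matrix size.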

In a dense, noncompressed matrix data type, values can be accessed either one at a time or in parallel, and their locations can be calculated directly with a simple equation. However, accessing values in sparse, compressed data requires looking up the coordinates of the row index and using that information to “indirectly” look up the coordinates of the column index before finally reaching the value. Depending on the actual locations of the sparse data values, these indirect lookups can be extremely random, making the computation data-dependent and requiring the allocation of memory lookups on the fly.

Second, two mathematical properties of zero let software and hardware skip a lot of computation. Multiplying any number by zero will result in a zero, so there’s no need to actually do the multiplication. Adding zero to any number will always return that number, so there’s no need to do the addition either.

In matrix-vector multiplication, one of the most common operations in AI workloads, all computations except those involving two nonzero elements can simply be skipped. Take, for example, the four-by-four matrix from the previous example and a vector of four numbers. In dense computation, each element of the vector must be multiplied by the corresponding element in each row and then added together to compute the final vector. In this case, that would take 16 multiplication operations and 16 additions (or four accumulations).

In sparse computation, only the nonzero elements of the vector need be considered. For each nonzero vector element, indirect lookup can be used to find any corresponding nonzero matrix element, and only those need to be multiplied and added. In the example shown here, only two multiplication steps will be performed, instead of 16.
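The zero-skipping idea above can be sketched as a matrix–vector multiply over CSR data that also skips zero vector elements (illustrative Python, not the hardware's actual implementation; the CSR arrays are assumed to come from a routine like the one sketched earlier):

```python
def spmv_sparse(values, col_idx, row_ptr, x):
    """y = A @ x over CSR arrays, multiplying only where both the matrix
    entry and the corresponding vector element are nonzero."""
    y = [0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            if x[col_idx[k]] != 0:  # skip zero vector elements entirely
                y[i] += values[k] * x[col_idx[k]]
    return y

# CSR form of the 4-by-4 matrix with three nonzeros, and a sparse vector.
values, col_idx, row_ptr = [4, 7, 2], [2, 0, 1], [0, 1, 1, 2, 3]
x = [1, 0, 3, 0]  # only two nonzero vector elements
print(spmv_sparse(values, col_idx, row_ptr, x))  # [12, 0, 7, 0]
```

On this input only two multiplications are actually performed (4 × 3 and 7 × 1), versus 16 in the dense version.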

The trouble with GPUs and CPUs

Unfortunately, modern hardware is not well suited to accelerating sparse computation. For example, say we want to perform a matrix-vector multiplication. In the simplest case, in a single CPU core, each element in the vector would be multiplied sequentially and then written to memory. This is slow, because we can do only one multiplication at a time. So instead people use CPUs with vector support or GPUs. With this hardware, all elements would be multiplied in parallel, greatly speeding up the application. Now, imagine that both the matrix and vector contain extremely sparse data. The vectorized CPU and GPU would spend most of their efforts multiplying by zero, performing completely ineffectual computations.

Newer generations of GPUs are capable of taking some advantage of sparsity in their hardware, but only a particular kind, called structured sparsity. Structured sparsity assumes that two out of every four adjacent parameters are zero. However, some models benefit more from unstructured sparsity—the ability for any parameter (weight or activation) to be zero and compressed away, regardless of where it is and what it is adjacent to. GPUs can run unstructured sparse computation in software, for example, through the use of the cuSparse GPU library. However, the support for sparse computations is often limited, and the GPU hardware gets underutilized, wasting energy-intensive computations on overhead.
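For intuition, the structured-sparsity constraint described above (at least two zeros in every aligned group of four parameters) can be checked in a few lines. This is only a sketch of the pattern; real GPU kernels operate on packed binary formats:

```python
def is_2_of_4_sparse(weights):
    """True if every aligned group of four weights has at most two nonzeros,
    i.e., two out of every four adjacent parameters are zero."""
    groups = [weights[i:i + 4] for i in range(0, len(weights), 4)]
    return all(sum(w != 0 for w in g) <= 2 for g in groups)

print(is_2_of_4_sparse([0, 5, 0, 3, 1, 0, 0, 2]))  # True
print(is_2_of_4_sparse([1, 5, 2, 0, 0, 0, 0, 0]))  # False: 3 nonzeros in one group
```

Unstructured sparsity drops this alignment requirement, which is why it needs different hardware support.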


When doing sparse computations in software, modern CPUs may be a better alternative to GPU computation, because they are designed to be more flexible. Yet, sparse computations on the CPU are often bottlenecked by the indirect lookups used to find nonzero data. CPUs are designed to “prefetch” data based on what they expect they’ll need from memory, but for randomly sparse data, that process often fails to pull in the right stuff from memory. When that happens, the CPU must waste cycles calling for the right data.

Apple was the first to speed up these indirect lookups by supporting a method called an array-of-pointers access pattern in the prefetcher of their A14 and M1 chips. Although innovations in prefetching make Apple CPUs more competitive for sparse computation, CPU architectures still have fundamental overheads that a dedicated sparse computing architecture would not, because they need to handle general-purpose computation.

Other companies have been developing hardware that accelerates sparse machine learning as well. These include Cerebras’s Wafer Scale Engine and Meta’s Training and Inference Accelerator (MTIA). The Wafer Scale Engine and its corresponding sparse programming framework have demonstrated results at up to 70 percent sparsity on LLMs. However, the company’s hardware and software solutions support only weight sparsity, not activation sparsity, which is important for many applications. The second version of the MTIA claims a sevenfold sparse compute performance boost over the MTIA v1. However, the only publicly available information regarding sparsity support in the MTIA v2 is for matrix multiplication, not for vectors or tensors.

Although matrix multiplications take up the majority of computation time in most modern ML models, it’s important to have sparsity support for other parts of the process. To avoid switching back and forth between sparse and dense data types, all of the operations should be sparse.

Onyx

Instead of these halfway solutions, our team at Stanford has developed a hardware accelerator, Onyx, that can take advantage of sparsity from the ground up, whether it’s structured or unstructured. Onyx is the first programmable accelerator to support both sparse and dense computation; it’s capable of accelerating key operations in both domains.

To understand Onyx, it is useful to know what a coarse-grained reconfigurable array (CGRA) is and how it compares with more familiar hardware, like CPUs and field-programmable gate arrays (FPGAs).

CPUs, CGRAs, and FPGAs represent a trade-off between efficiency and flexibility. Each individual logic unit of a CPU is designed for a specific function that it performs efficiently. On the other hand, since each individual bit of an FPGA is configurable, these arrays are extremely flexible, but very inefficient. The goal of CGRAs is to achieve the flexibility of FPGAs with the efficiency of CPUs.

CGRAs are composed of efficient and configurable units, typically memory and compute, that are specialized for a particular application domain. This is the key benefit of this type of array: Programmers can reconfigure the internals of a CGRA at a high level, making it more efficient than an FPGA but more flexible than a CPU.

The Onyx chip, built on a coarse-grained reconfigurable array (CGRA), is the first (to our knowledge) to support both sparse and dense computations. Photo: Olivia Hsu

Onyx is composed of flexible, programmable processing element (PE) tiles and memory (MEM) tiles. The memory tiles store compressed matrices and other data formats. The processing element tiles operate on compressed matrices, eliminating all unnecessary and ineffectual computation.

The Onyx compiler handles conversion from software instructions to CGRA configuration. First, the input expression—for instance, a sparse vector multiplication—is translated into a graph of abstract memory and compute nodes. In this example, there are memories for the input vectors and output vectors, a compute node for finding the intersection between nonzero elements, and a compute node for the multiplication. The compiler figures out how to map the abstract memory and compute nodes onto MEMs and PEs on the CGRA, and then how to route them together so that they can transfer data between them. Finally, the compiler produces the instruction set needed to configure the CGRA for the desired purpose.

Since Onyx is programmable, engineers can map many different operations, such as vector-vector element multiplication, or the key tasks in AI, like matrix-vector or matrix-matrix multiplication, onto the accelerator.

We evaluated the efficiency gains of our hardware by looking at the product of energy used and the time it took to compute, called the energy-delay product (EDP). This metric captures the trade-off of speed and energy. Minimizing energy alone would lead to very slow devices, and minimizing delay alone would lead to high-area, high-power devices.
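With made-up figures purely for illustration (echoing the averages quoted earlier: roughly one-seventieth the energy at eight times the speed), the metric works like this:

```python
def edp(energy_joules, time_seconds):
    """Energy-delay product: lower is better."""
    return energy_joules * time_seconds

# Hypothetical numbers, not measured results.
cpu_edp  = edp(70.0, 8.0)  # baseline CPU run
onyx_edp = edp(1.0, 1.0)   # ~70x less energy, ~8x faster
print(cpu_edp / onyx_edp)  # -> 560.0
```

Because EDP multiplies the two costs, a device that improves both energy and speed compounds the gains, which is how improvement factors in the hundreds arise.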

Onyx achieves up to a 565-fold improvement in energy-delay product over CPUs (we used a 12-core Intel Xeon CPU) that utilize dedicated sparse libraries. Onyx can also be configured to accelerate regular, dense applications, similar to the way a GPU or TPU would. If the computation is sparse, Onyx is configured to use sparse primitives, and if the computation is dense, Onyx is reconfigured to take advantage of parallelism, similar to how GPUs function. This architecture is a step toward a single system that can accelerate both sparse and dense computations on the same silicon.

Just as important, Onyx enables new algorithmic thinking. Sparse acceleration hardware will not only make AI more performance- and energy-efficient but also enable researchers and engineers to explore new algorithms that have the potential to dramatically improve AI.

The future with sparsity

Our team is already working on next-generation chips built off of Onyx. Beyond matrix multiplication operations, machine learning models perform other types of math, like nonlinear layers, normalization, the softmax function, and more. We are adding support for the full range of computations on our next-gen accelerator and within the compiler. Since sparse machine learning models may have both sparse and dense layers, we are also working on integrating the dense and sparse accelerator architecture more efficiently on the chip, allowing for fast transformation between the different data types. We’re also looking at ways to manage memory constraints by breaking up the sparse data more effectively so we can run computations on several sparse accelerator chips.

We are also working on systems that can predict the performance of accelerators such as ours, which will help in designing better hardware for sparse AI. Longer term, we’re interested in seeing whether high degrees of sparsity throughout AI computation will catch on with more model types, and whether sparse accelerators become adopted at a larger scale.

Building the hardware to support unstructured sparsity and optimally take advantage of zeros is just the beginning. With this hardware in hand, AI researchers and engineers will have the opportunity to explore new models and algorithms that leverage sparsity in novel and creative ways. We see this as a crucial research area for managing the ever-increasing runtime, costs, and environmental impact of AI.

The current and future landscape of AI foundation models for cancer management

Highest h-index author
Chuang Niu (h-index 15)

That author's affiliation: Rensselaer Polytechnic Institute
Institution (first & last author): Rensselaer Polytechnic Institute

AI foundation models (FMs) are transforming cancer management and research. Here, we analyze the state-of-the-art FMs, discuss their impact on cancer management, and argue that the next generation of cancer AI FMs will be defined by multimodality, enhanced reasoning, maximized openness, and sustained human guidance.

Graph augmented transformers improve chemotherapy toxicity symptom extraction from clinical notes

Highest h-index author
Tina Hernandez-Boussard (h-index 101)

That author's affiliation: Stanford University
Institution (first & last author): Stanford University

Adverse events from chemotherapy are common; however, identifying related symptoms following these events from clinical documentation can be challenging. Here, the authors develop a natural language processing model to extract symptoms from clinical notes.

Magnetic resonance identification tags for ultra-flexible electrodes

Highest h-index author
Mehmet Fatih Yanik (h-index 35)

That author's affiliation: University of Zurich
Institution (first & last author): University of Zurich

Flexible electrodes offer higher biocompatibility but remain difficult to locate inside the brain, limiting data interpretation and targeting. Here, the authors demonstrate electrodes with magnetic tags for identification and localization.

Scaffold-assisted window junctions for superconducting qubit fabrication

Highest h-index author
John M. Martinis (h-index 106)

That author's affiliation: University of California, Santa Barbara
First author institution: Institute of Physics, Academia Sinica
Last author institution: Research Center for Applied Science, Academia Sinica

Sodium is not lithium

Anti-topological crystal and non-Abelian liquid in twisted semiconductor bilayers

Highest h-index author
Liang Fu (h-index 85)

That author's affiliation: Massachusetts Institute of Technology
Institution (first & last author): Massachusetts Institute of Technology

The authors show that electron crystals compete closely with non-Abelian fractional Chern insulators in the half-full second moiré band of twisted bilayer MoTe2. In particular, they find an “antitopological” electron crystal with zero Chern number C arising because contributions to C from the full first band and half-full second band cancel.

Dorsal prefrontal cortex drives perseverative behavior in mice

Highest h-index author
Kenneth D. Harris (h-index 89)

That author's affiliation: University College London
Institution (first & last author): University College London

Perseveration – repeating one choice when others would generate larger rewards – is a common behavior, but neither its purpose nor neuronal mechanisms are understood. Here the authors demonstrate a neural correlate and causal role of dorsal prefrontal cortex, specifically anterior supplementary motor cortex, in perseveration in mice performing a dynamic reward learning task.

Yong Wang Turns Information Into Insights

Highest h-index author
Unknown
Main affiliation
Nanyang Technological University


When Yong Wang recently received one of the highest honors for early-career data visualization researchers, it marked a milestone in an extraordinary journey that began far from the world’s technology hubs.

Wang was born in a small farming village in southwestern China to parents with little formal education and few electronic devices. Today the IEEE member and associate editor of IEEE Transactions on Visualization and Computer Graphics is an assistant professor of computing and data science at Nanyang Technological University, in Singapore. He studies how people can employ data visualization techniques to get more out of artificial intelligence tools.

YONG WANG


EMPLOYER

Nanyang Technological University, in Singapore

POSITION

Assistant professor of computing and data science

IEEE MEMBER GRADE

Member

ALMA MATERS

Harbin Institute of Technology in China; Huazhong University of Science and Technology in Wuhan, China; Hong Kong University of Science and Technology

“Visualization helps people understand complex ideas,” Wang says. “If we design these tools well, they can make advanced technologies accessible to everyone.”

For his work in the field, the IEEE Computer Society visualization and graphics technical committee presented him with its 2025 Significant New Researcher Award. The recognition highlights his growing influence in fields including human-computer interaction and human-AI collaboration—areas becoming more important as the world generates more data than humans can easily interpret.

Growing up in rural Hunan

Wang was born in southwestern Hunan Province. China’s economy was still developing, and life in his village was modest. Most families in Hunan grew rice, vegetables, and fruit to support themselves.

Wang’s parents worked in agriculture too, and his father often traveled to cities to earn money working in a factory or on construction jobs. The extra income helped support the family and made it possible for Wang to attend college.

“I’m very grateful to my parents,” Wang says. “They never attended university, but they strongly supported my education.”

“If we build tools that help people understand information, then more people can participate in science and innovation. That’s the real power of visualization.”

Technology was scarce in the village, he says. Computers were almost nonexistent, and televisions were considered precious, expensive household possessions.

One childhood memory still makes him laugh: During a summer vacation, he and his brother spent so many hours playing video games on a simple console connected to the family’s television that the TV screen eventually burned out.

“My mother was very angry,” he recalls. “At that time, a TV was a very valuable thing.”

He says that despite never having used a laptop or experimenting with electronic equipment, he was fascinated by the technologies he saw on TV shows.

Discovering robotics and engineering

His parents encouraged a practical career such as medicine or civil engineering, but he felt drawn to robotics and computing, he says.

“I didn’t really understand what computer science involved,” he says. “But from what I saw on TV, it looked exciting and advanced.”

He enrolled at Harbin Institute of Technology, in northeastern China. The esteemed university is known for its engineering programs. His major, automation, combined elements of electrical engineering, robotics, and control systems.

One of the defining experiences of his undergraduate years, he says, was a university robotics competition. Wang and his teammates designed a robot capable of autonomously navigating around obstacles.

The design was simple compared with professional systems, he acknowledges. But, he says, the experience was exhilarating. His team placed second, and Wang began to see engineering as both creative and collaborative.

He graduated with a bachelor’s degree in 2011 and briefly worked as an assistant at the Research Institute of Intelligent Control and Systems at Harbin.

In 2014 he took a position as a research intern working at Da Jiang Innovation in Shenzhen, China.

That experience helped him clarify his future, he says: “I realized I didn’t enjoy doing repetitive work or simply following instructions. I wanted to explore ideas that interested me, and I wanted to conduct research.” The realization pushed him toward graduate school, he says.

Building tools that help humans work with AI

Wang received a master’s degree in pattern recognition and image processing from the Huazhong University of Science and Technology, in Wuhan, China, in 2016.

He then enrolled in the computer science Ph.D. program at the Hong Kong University of Science and Technology and earned the degree in 2018. He remained there as a postdoctoral researcher until 2020, when he moved to Singapore to join Singapore Management University as an assistant professor of computing and information systems. He moved over to Nanyang Technological University as an assistant professor in 2024.

His research focuses on a challenge facing nearly every business: how to make sense of the enormous amounts of data being generated.

“We live in an era of information explosions,” Wang says. “Huge amounts of data are generated, and it’s difficult for people to interpret all of it to make better business decisions.”

Data visualization offers a solution by turning complex information into images, patterns, and diagrams that people can more readily understand.

But many visualizations still must be designed manually by experts, Wang notes. It’s a time-consuming process that creates a bottleneck, he says.

His solution is to use large language models and multimodal systems that can generate text, images, video, and sensor data simultaneously and automate parts of the process.

One system developed by his research group lets users design complex infographics through natural-language instructions combined with simple interactions such as drawing on a touchscreen with a finger. It allows nontechnical people to generate visualizations instead of hiring professional designers.

Another focus of Wang’s research is human-AI collaboration. AI systems can analyze data at enormous scale, but people still need to be the final decision-makers, he says.

Visualization helps bridge the gap between human intention and AI’s complex calculations by making the process an AI system uses to reach a result more transparent and understandable.

“If people understand how the AI system works,” Wang says, “they can collaborate with it more effectively.”

He recently explored how visualization techniques could help researchers understand quantum computing, a field where core concepts—such as superposition, where a bit can be in more than one state at a time—are abstract. In classical computing, the bit state is binary: It’s either 1 or 0. A quantum bit, or qubit, can be 1, 0, or both. The differences get more dizzying from there.
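The superposition Wang refers to can be written compactly. A standard textbook formulation (not specific to his visualization work):

```latex
% A qubit's state is a weighted combination of the two classical bit values:
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
% Measurement yields 0 with probability |alpha|^2 and 1 with probability |beta|^2,
% which is part of what makes qubit states harder to visualize than classical bits.
```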

Visualization tools could help scientists monitor quantum systems and interpret quantum machine-learning models, he says.

The importance of IEEE communities

Teaching and mentoring students remain among the most meaningful parts of Wang’s career, he says.

Professional communities such as the IEEE Computer Society, he says, play a major role in helping him transform early-stage graduate students unsure of which lines of inquiry they will pursue into independent researchers with a solid technical focus. Through conferences, publications, and technical committees, IEEE connects Wang with other researchers working in visualization, AI, and human-computer interactions, he says.

Those connections have helped him share ideas, collaborate, and stay up to date on innovations in the research community.

Receiving the Significant New Researcher award motivates him to continue pushing the field forward, he says.

Looking back, he says, the distance between his rural village in Hunan and an international research career still feels remarkable. But, he says, the journey reflects something larger about his chosen field: “If we build tools that help people understand information, then more people can participate in science and innovation.

“That’s the real power of visualization.”

A New Type of Neuroplasticity Rewires the Brain After a Single Experience

Highest h-index author
Yasemin Saplakoglu
That author's affiliation: Quanta Magazine Institution (first & last author): Quanta Magazine

“Neurons that fire together, wire together” is not the full story. A novel mechanism explains how the brain can learn across longer timescales.

The post A New Type of Neuroplasticity Rewires the Brain After a Single Experience first appeared on Quanta Magazine

szKendall: spatial-structural-zero-aware dissimilarity measures for subtype discovery using single cell Hi-C data

Highest h-index author
Shili Lin (h-index 30)

That author's affiliation: The Ohio State University First author institution: The Ohio State University Last author institution: Medical College of Wisconsin

Single-cell Hi-C contact maps reveal diverse DNA folding across cells, but the data are sparse owing to both biological mechanisms and low sequencing depth. Here, the authors introduce structural-zero-aware dissimilarity measures that separate true contact absence from missing data, enhancing cell-type clustering.

Near-term fermionic simulation with subspace noise tailored quantum error mitigation

Highest h-index author
Miha Papič (h-index 8)

That author's affiliation: IQM Quantum Computers First author institution: IQM (Germany) Last author institution: University of Würzburg

Imaging dynamic electrocatalytic processes on nano-strained MoS<sub>2</sub> using interferometric electro-optical microscopy

Highest h-index author
Rui Hao (h-index 13)

That author's affiliation: Southern University of Science and Technology Institution (first & last author): Southern University of Science and Technology

Understanding dynamic heterogeneity in hydrogen evolution electrocatalysts is essential but has been hindered by limited spatio-temporal resolution. Here the authors use interferometric electro-optical microscopy to achieve nanometre–millisecond imaging of hydrogen evolution activity on MoS2.

Field-induced superconductivity in a magnetically doped two-dimensional crystal

Highest h-index author
Adrian Llanos (h-index 3)

That author's affiliation: California Institute of Technology Institution (first & last author): California Institute of Technology

Magnetic fields usually weaken superconductivity. By contrast, a material platform is demonstrated where applying a moderate field induces superconductivity.

Author Correction: Combination of PARP and KRAS<sup>G12D</sup> inhibitors enhances therapeutic efficacy by exploiting vulnerabilities in PDAC

Engineered local polarization disorder unlocks record efficiency in antiferroelectric capacitors

The authors introduce controlled compositional heterogeneity to broaden polarization vector distributions while preserving the antiferroelectric modulation in PbZrO3-based ceramics. They reduce polarization hysteresis while maintaining high polarization strength.

Discovery of molecular glues that bind FKBP12 and structurally distinct targets using DNA-encoded libraries

In this study, authors screen a 3.2 million member FKBP scaffold-directed DNA-encoded library and identify FKBP12-binding molecular glues for both bromodomain-containing protein 9 (BRD9) and quinoid dihydropteridine reductase (QDPR).

S-atom dislocation-induced room-temperature ferroelectricity in two-dimensional α-MnS semiconductor

Room-temperature ferroelectricity with out-of-plane polarization is reported in two-dimensional α-MnS synthesized by chemical vapor deposition; the material exhibits large tunneling electroresistance, high endurance, and long retention time.

Distributed wavefront shaping in radiative near-field sub-terahertz wireless networks

Highest h-index author
Atsutse Kludze (h-index 7)
Main affiliation
Unknown

This work shows that the effective near-field range of an aperture can be much smaller than the Fraunhofer limit. It introduces distributed beam shaping, coordinating multiple transmitting apertures to extend the effective near-field region.

Surface-code hardware Hamiltonian

Highest h-index author
Mohammad H. Ansari (h-index 11)
Main affiliation
Unknown

Distributed quantum inner product estimation with structured random circuits

Highest h-index author
Zaichen Zhang (h-index 38)
Main affiliation
Unknown

Two-qubit gates using on-demand single-photons from ordered shape and size controlled large-volume superradiant quantum dots

Highest h-index author
A. Madhukar (h-index 55)
Main affiliation
Unknown

Laser-induced nucleation of magnetic hopfions

Highest h-index author
Yu Han (h-index 139)

That author's affiliation: South China University of Technology First author institution: South China University of Technology Last author institution: Nankai University

The creation of stable and isolated magnetic hopfions—three-dimensional topological solitons—has remained experimentally challenging. Now the laser-induced nucleation of hopfions has been achieved in a chiral magnet.

Two-electron quantum walks for probing entanglement and decoherence in an electron microscope

Highest h-index author
Ido Kaminer (h-index 56)

That author's affiliation: Technion – Israel Institute of Technology First author institution: Technion – Israel Institute of Technology Last author institution: University of Konstanz

Entanglement between particles offers insights into quantum behaviour, but methods for studying it in free-electron systems are lacking. Now a two-electron quantum walk is used to probe decoherence of free electrons inside an electron microscope.

The USC Professor Who Pioneered Socially Assistive Robotics



When the robotics field that Maja Matarić wanted to work in didn’t exist, she created it: in 2005 she helped define the new area of socially assistive robotics.

As an associate professor of computer science, neuroscience, and pediatrics at the University of Southern California, in Los Angeles, she developed robots to provide personalized therapy and care through social interactions.

Maja Matarić


Employer

University of Southern California, Los Angeles

Job Title

Professor of computer science, neuroscience, and pediatrics

Member grade

Fellow

Alma maters

University of Kansas and MIT

The robots could have conversations, play games, and respond to emotions.

Today the IEEE Fellow is a professor at USC. She studies how robots can help students with anxiety and depression undergo cognitive behavioral therapy. CBT focuses on changing a person’s negative thought patterns, behaviors, and emotional responses.

For her work, she received a 2025 Robotics Medal from MassRobotics, which recognizes female researchers advancing robotics. The Boston-based nonprofit provides robotics startups with a workspace, prototyping facilities, mentorship, and networking opportunities.

When receiving the award at the ceremony in Boston, Matarić was overcome with joy, she says.

“I’ve been very fortunate to be honored with several awards, which I am grateful for. But there was something very special about getting the MassRobotics medal, because I knew at least half the people in the room,” she says. “Everyone was just smiling, and there was a great sense of love.”

Seeing herself as an engineer

Matarić grew up in Belgrade, Serbia. Her father was an engineer, and her mother was a writer. After her father died when she was 16, Matarić and her mother moved to the United States.

She credits her father for igniting her interest in engineering, and her uncle who worked as an aerospace engineer for introducing her to computer science.

Matarić says she didn’t consider herself an engineer until she joined USC’s faculty, since she had always worked in computer science.

“In retrospect, I’ve always been an engineer,” Matarić says. “But I didn’t set out specifically thinking of myself as one—which is just one of the many things I like to convey to young people: You don’t always have to know exactly everything in advance.”

Maja Matarić and her lab are exploring how socially assistive robots can help improve the communication skills of children with autism spectrum disorder. National Science Foundation News

While pursuing her bachelor’s degree in computer science at the University of Kansas in Lawrence, she was introduced to industrial robotics through a textbook. After earning her degree in 1987, she had an opportunity to continue her education as a graduate student at MIT’s AI Lab (now the Computer Science and Artificial Intelligence Lab). During her first year, she explored the different research projects being conducted by faculty members, she said in a 2010 oral history conducted by the IEEE History Center. She met IEEE Life Fellow Rodney Brooks, who was working on novel reactive and behavior-based robotic systems. His work so excited her that she joined his lab and conducted her master’s thesis under his tutelage.

Inspired by the way animals use landmarks to navigate, Matarić developed Toto, the first navigating behavior-based robot. Toto used distributed models to map the AI Lab building where Matarić worked and plan its path to different rooms. Toto used sonar to detect walls, doors, and furniture, according to Matarić’s book The Robotics Primer.

After earning her master’s degree in AI and robotics in 1990, she continued to work under Brooks as a doctoral student, pioneering distributed algorithms that allowed a team of up to 20 robots to execute complex tasks in tandem, including searching for objects and exploring their environment.

Matarić earned her Ph.D. in AI and robotics in 1994 and joined Brandeis University, in Waltham, Mass., as an assistant professor of computer science. There she founded the Interaction Lab, where she developed autonomous robots that work together to accomplish tasks.

Three years later, she relocated to California and joined USC’s Viterbi School of Engineering as an assistant professor in computer science and neuroscience.

In 2002 she helped to found the Center for Robotics and Embedded Systems (now the Robotics and Autonomous Systems Center). The RASC focuses on research into human-centric and scalable robotic systems and promotes interdisciplinary partnerships across USC.

The shift in Matarić’s research came after she gave birth to her first child in 1998. When her daughter was a bit older and asked why she worked with robots, Matarić wanted to be able to “say something better than ‘I publish a lot of research papers,’ or ‘it’s well-recognized,’” she says.

“Kids don’t consider those good answers, and they’re probably right,” she says. “This made me realize I was in a position to do something different. And I really wanted the answer to my daughter’s future question to be, ‘Mommy’s robots help people.’”

Matarić and her doctoral student David Feil-Seifer presented a paper defining socially assistive robotics at the 2005 International Conference on Rehabilitation Robotics. It was the only paper that talked about helping people complete tasks and learn skills by speaking with them rather than by performing physical jobs, she says.

Feil-Seifer is now a professor of computer science and engineering at the University of Nevada in Reno.

At the same time, she founded the Interaction Lab at USC and made its focus creating robots that provide social, rather than physical, support.

“At this point in my career journey, I’ve matured to a place where I don’t want to do just curiosity-driven research alone,” she says. “Plenty of what my team and I do today is still driven by curiosity, but it is answering the question: ‘How can we help someone live a better life?’”

In 2006 she was promoted to full professor and made the senior associate dean for research in USC’s Viterbi School of Engineering. In 2012 she became vice dean for research.

“In academia, you can be in a leadership role and still do research,” she says. “It’s a wonderful and important opportunity that lets academics be on top of our field and also train the next generation of students and help the next generation of faculty colleagues.”

Research in socially assistive robotics

One of the longest research projects Matarić has led at her Interaction Lab is exploring how socially assistive robots can help improve the communication skills of children with autism spectrum disorder. ASD is a lifelong neurological condition that affects the way people interact with others, and the way they learn. Children with ASD often struggle with social behaviors such as reading nonverbal cues, playing with others, and making eye contact.

Matarić and her team developed a robot, Bandit, that can play games with a child and give the youngster words of affirmation. Bandit is 56 centimeters tall and has a humanlike head, torso, and arms. Its head can pan and tilt. The robot uses two FireWire cameras as its eyes, and it has a movable mouth and eyebrows, allowing it to exhibit a variety of facial expressions, according to IEEE Spectrum’s robots guide. Its torso is attached to a wheeled base.

The study showed that when interacting with Bandit, children with ASD exhibited social behaviors that were out of the ordinary for them, such as initiating play and imitating the robot.

Matarić and her team also studied how the robot could serve as a social and cognitive aid for elderly people and stroke patients. Bandit was programmed to instruct and motivate users to perform daily movement exercises such as seated aerobics.

Maja Matarić and doctoral student Amy O’Connell testing Blossom, which is being used to study how it can aid students with anxiety or depression. University of Southern California

Over the years, Matarić’s lab developed other robots including Kiwi and Blossom. Kiwi, which looked like an owl, helped children with ASD learn social and cognitive skills, helped motivate elderly people living alone to be more physically active, and mediated discussions among family members. Blossom, originally developed at Cornell, was adapted by the Interaction Lab to make it less expensive and personalizable for individuals. The robot is being used to study how it can aid students with anxiety or depression to practice cognitive behavioral therapy.

This line of research began when Matarić learned that large language model (LLM) chatbots were being promoted to help people with mental health struggles, she said in an episode of the AMA Medical News podcast.

“It is generally not easy to get [an appointment with a] therapist, or there might not be insurance coverage,” she said. “These, combined with the rates of anxiety and depression, created a real need.”

That made the chatbot idea appealing, she says, but she wanted to see whether chatbots were as effective as a friendly robot such as Blossom.

Matarić and her team used the same LLMs to power CBT practice with a chatbot and with Blossom. They ran a two-week study in the USC dorms, where students were randomly assigned to complete CBT exercises daily with either a chatbot or the robot. Participants filled out a clinical assessment to measure their psychiatric distress before and after each session.

The study showed that students who interacted with the robot experienced a significant decrease in psychiatric distress, Matarić said in the podcast, while students who interacted with the chatbot did not.

She and her team also reviewed transcripts of conversations between the students and the robot to evaluate how well the LLM responded to the participants. They found the robot was more effective than the chatbot, even though both were using the same model.

Based on those findings, in 2024 Matarić received a grant from the U.S. National Institute of Mental Health to conduct a six-week clinical trial to explore how effective a socially assistive robot could be at delivering CBT practice. The trial, currently underway, also is expected to study how Blossom can be personalized to adapt to each user’s preferences and progress, including the way the robot moves, which exercises it recommends, and what feedback it gives.

During the trial, the 120 students participating are wearing Fitbits to study their physiologic responses. The participants fill out a clinical assessment to measure their psychiatric distress before and after each session.

Data including the participants’ feelings of relating to the robot, intrinsic motivation, engagement, and adherence will be assessed by the research team, Matarić says.

She says she’s proud of the graduate students working on this project, and seeing them grow as engineers is one of the most rewarding parts of working in academia.

“Engineers generally don’t anticipate having to work with human study participants and needing to understand psychology in addition to the hardcore engineering,” she says. “So the students who choose to do this research are just wonderful, caring people.”

Finding a community at IEEE

Matarić joined IEEE as a graduate student in 1992, the year she published her first paper in IEEE Transactions on Robotics and Automation. The paper, “Integration of Representation Into Goal-Driven Behavior-Based Robots,” described her work on Toto.

As a member of the IEEE Robotics and Automation Society, she says she has gained a community of like-minded people. She enjoys attending conferences including the IEEE International Conference on Robotics and Automation, the IEEE/RSJ International Conference on Intelligent Robots and Systems, and the ACM/IEEE International Conference on Human-Robot Interaction, which is closest to her field of research.

Matarić credits IEEE Life Fellow George Bekey, the founding editor in chief of the IEEE Transactions on Robotics, for recruiting her for the USC engineering faculty position. He knew of her work through her graduate advisor Brooks, who published a paper in the journal that introduced reactive control and the subsumption architecture, which became the foundation of a new way to control robots. It is his most cited paper. Bekey, who was editor in chief at the time, helped guide Brooks through the challenging review process. Matarić joined Brooks’s lab at MIT two years after its publication, and her work on Toto built on that foundation.

“Joining a society has an impact, and it can be personal,” she says. “That’s why I recommend my students join the organization—because it’s important to get out there and get connected.”

Subcellular mRNA localization patterns across tissues resolved with spatial transcriptomics

Highest h-index author
Shalev Itzkovitz (h-index 67)

That author's affiliation: Weizmann Institute of Science Institution (first & last author): Weizmann Institute of Science

Subcellular RNA localization plays a central role in post-transcriptional regulation. Using high-resolution spatial transcriptomics and a computational approach, the study maps intracellular mRNA localization across tissues, revealing conserved localization patterns and enabling routine analysis of RNA distribution in cells.

Prediction and functional interpretation of inter-chromosomal genome architecture from DNA sequence with TwinC

Highest h-index author
William Stafford Noble (h-index 107)

That author's affiliation: University of Washington Institution (first & last author): University of Washington

Here the authors present TwinC, a CNN that predicts trans-chromosomal contacts with high accuracy. Trained on Hi-C and validated with DNA SPRITE, it reveals that compartments, chromatin accessibility, transcription factor clusters, and G-quadruplexes drive these interactions.

Dynamic recruitment of CaMKII into SHANK3 phase-separated condensates tunes postsynaptic density remodeling during long-term potentiation

In this study, the authors find that SHANK3’s large intrinsically disordered region mediates phase separation to support postsynaptic density remodeling during long-term potentiation, providing an insight into how autism spectrum disorder-linked SHANK3 mutations disrupt synaptic plasticity.

Engineered BCG selectively triggers trained immunity in tumor-associated macrophages and sensitizes glioblastoma to radiotherapy in mice

Highest h-index author
Ningyi Ma (h-index 1)
Main affiliation
Unknown

Radiotherapy (RT) is standard-of-care in cancer management; however, RT efficacy remains limited. Here, the authors test whether membrane-camouflaged BCG bacteria (MBCG) enhance response to RT in preclinical models of glioblastoma. MBCG efficiently targets tumor tissues, induces trained immunity in tumor-associated macrophages, and enhances the RT-induced anti-tumor responses.

Optical excitations reshape the spin-wave spectrum in antiferromagnets

Charge-transfer excitations, which define the optical bandgap in many insulators, also contribute to magnetic exchange in antiferromagnets. Femtosecond optical pumping of these transitions in canted antiferromagnet DyFeO3 reshapes the spin-wave spectrum — the set of collective spin excitations that define the dynamics of the antiferromagnet — without destroying the long-range order.

Average topological phase in a disordered Rydberg atom array

In addition to strongly protected topological phases that rely on exact symmetries, theory predicts that disorder can stabilize weakly protected phases in mixed quantum states, and an example of the latter is now observed in a Rydberg atom array.

Transverse optical torque observed at the nanoscale

Optical forces and torques on nanoparticles are difficult to measure due to the diffraction limit of light. Now, transverse optical torque is observed through the optical trapping and spatial tracking of a designed microscale structure.

Developmental system drift in dorsoventral patterning is linked to transitions to autonomous development in Annelida

Highest h-index author
David Ferrier (h-index 45)
Main affiliation
Queen Mary University of London

Here the authors show that BMP signalling is the ancestral pathway that patterns the dorsoventral (DV) axis in Annelida and Spiralia. The shift to unequal cleavage involved alternative pathways for patterning the DV axis, leading to a unique case of developmental system drift.

A midbrain circuit for high-fat-food induced conditioned taste aversion

Highest h-index author
Hao Wang
Main affiliation
Zhejiang University

Neural mechanisms underlying conditioned taste aversion are not fully understood. Here the authors identify a brain circuit that drives learned aversion to high-fat food by associating it with nausea. This circuit’s learning and memory components offer insight into how the brain forms food-avoidance behaviors.

Cholecystokinin coordinates gonadotropin-dependent and independent pathways to orchestrate zebrafish gonadal development

Highest h-index author
Hongwei Liang (h-index 58)
Main affiliation
Huazhong Agricultural University

In zebrafish, cholecystokinin drives gonadal development via two parallel pathways: direct FSH stimulation in the pituitary and local regulation of germ cell proliferation and survival within the gonad.

Quantum ‘Jamming’ Explores the Truly Fundamental Principles of Nature

Some quantum cryptographers want to find ways to keep messages secret even if the rules of quantum mechanics don’t hold. The recently rediscovered idea of quantum jamming complicates things.

The post Quantum ‘Jamming’ Explores the Truly Fundamental Principles of Nature first appeared on Quanta Magazine

Designing Broadband LPDA-Fed Reflector Antennas With Full-Wave EM Simulation



A practical guide to designing log-periodic dipole array fed parabolic reflector antennas using advanced 3D MoM simulation — from parametric modeling to electrically large structures.

What Attendees will Learn

  1. How to set design requirements for LPDA-fed reflector antennas — Understand the key specifications including bandwidth ratio, gain targets, and VSWR matching constraints across the full operating range from 100 MHz to 1 GHz.
  2. Why advanced 3D EM solvers enable simulation of electrically large multiscale structures — Learn how higher order basis functions, quadrilateral meshing, geometrical symmetry, and CPU/GPU parallelization extend MoM simulation capability by an order of magnitude.
  3. How to apply a systematic three-step design strategy — Follow a proven workflow: first optimize the stand-alone LPDA for VSWR and gain, then integrate the reflector, and finally tune parameters to satisfy all performance requirements, including gain and impedance matching.
  4. How parametric CAD modeling accelerates LPDA design — Discover how self-scaling geometry, automated wire-to-solid conversion, and multiple-copy-with-scaling features enable fully parametrized antenna models that streamline optimization across dozens of design variants.
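The self-scaling geometry mentioned in item 4 follows the classic log-periodic relations, in which each dipole is a constant factor τ shorter than its longer neighbor. A minimal sketch, assuming a half-wave longest element and illustrative values of τ and σ (these numbers are not taken from the webinar):

```python
# Hypothetical sketch (not the webinar's actual tool): generating the
# self-scaling element geometry of a log-periodic dipole array (LPDA).

C = 299.792458e6  # speed of light, m/s

def lpda_elements(f_low, f_high, tau=0.9, sigma=0.16):
    """Return (length, boom_position) pairs for each dipole, longest first.

    f_low/f_high: band edges in Hz; tau: length scaling factor;
    sigma: relative spacing factor. Default values are illustrative.
    """
    length = C / (2 * f_low)       # half-wave dipole at the lowest frequency
    shortest = C / (2 * f_high)    # needed to cover the highest frequency
    x, elements = 0.0, []
    while length >= shortest:
        elements.append((length, x))
        x += 2 * sigma * length    # boom spacing d_n = 2 * sigma * L_n
        length *= tau              # self-scaling: L_{n+1} = tau * L_n
    elements.append((length, x))   # one extra element past the band edge
    return elements

elems = lpda_elements(100e6, 1e9)  # the 100 MHz - 1 GHz band from item 1
print(f"{len(elems)} elements, longest {elems[0][0]:.2f} m, "
      f"shortest {elems[-1][0]:.3f} m")
```

A parametric CAD model in an EM solver plays the same role: changing τ or σ regenerates every element, which is what makes optimization across dozens of design variants practical.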

Author Correction: A Bayesian decision support system for automated insulin doses in adults with type 1 diabetes on multiple daily injections: a randomized controlled trial

Author Correction: Replay without sharp wave ripples in a spatial memory task

Author Correction: PanMETAI - a high performance tabular foundation model for accurate pancreatic cancer diagnosis via NMR metabolomics

<i>Mll5</i> haploinsufficiency attenuates microglial phagocytosis through dysregulated TREM2-SGK3-GSK3β signaling and recapitulates ASD-like behaviors in mice

This study shows that reduced Mll5 impairs microglial phagocytosis via TREM2-SGK3-GSK3β signaling, causing ASD-like behaviors in mice, with lithium chloride shown to rescue the deficits.

Precision cardiovascular risk prediction in type 1 diabetes: An IMI2 SOPHIA analysis

Highest h-index author
Paul W. Franks (h-index 136)
Main affiliation
KU Leuven · Universität Ulm

Risk profiling based on how BMI interacts with cardiovascular markers has been useful in the general population. In type 1 diabetes, where cardiovascular risk is already high, these profiles are especially valuable for tailored approaches, as they reveal how high glucose may mask other risk factors.

Pan-organ poly(A) atlas reveals a post-transcriptional regulatory layer independent of RNA abundance

Poly(A) tails are critical to gene regulation, yet their organism-wide patterns remain unmapped. Here, the authors provide an 18-organ mouse atlas, revealing that poly(A) dynamics are fundamentally orthogonal to mRNA abundance.

Light-induced giant random telegraph noise in CuScP<sub>2</sub>S<sub>6</sub>/MoS<sub>2</sub> heterostructures and their use in noise resilience image inference

Random telegraph noise (RTN) reveals charge-trapping dynamics in nanoscale electronic devices. Here, the authors demonstrate optically controlled RTN in a CuScP2S6/MoS₂ heterostructure, enabling field- and light-tunable defect activity and noise-resilient neuromorphic image encoding.

Higher-order neuromorphic Ising machines—autoencoders and Fowler-Nordheim annealers are all you need for scalability

The authors demonstrate that an autoencoder-based neuromorphic architecture combined with Fowler-Nordheim annealing is sufficient to implement scalable higher-order Ising machines. They show that these machines consistently produce state-of-the-art solutions with high reliability and competitive time-to-solution metrics.

Engineering a compact high-fidelity <i>Staphylococcus aureus</i> Cas9 variant with broader targeting range and mechanistic insights into its activation

CRISPR-Cas9-based genome editing is powerful but limited by target range, specificity, and delivery constraints. Here, the authors engineer a compact SaCas9 that recognizes NNG PAMs for efficient genome and base editing in cells and mice, and reveal its activation mechanism via cryo-EM structural analysis.

Crypto Faces Increased Threat from Quantum Attacks



The race to transition online security protocols to ones that can’t be cracked by a quantum computer is already on. The algorithms commonly used today to protect data online, RSA and elliptic curve cryptography, are effectively uncrackable by classical supercomputers, but a large enough quantum computer would make quick work of them. Algorithms believed to be secure against both classical and future quantum machines do exist, collectively known as post-quantum cryptography, but transitioning to them is a work in progress.

Late last month, the team at Google Quantum AI published a whitepaper that added significant urgency to this race. In it, the team showed that a quantum computer large enough to pose a cryptographic threat is approximately twenty times smaller than previously thought. Such a machine is still far beyond today’s hardware: the largest machines currently consist of approximately 1,000 quantum bits, or qubits, and the whitepaper estimated that about 500 times as many are needed. Nonetheless, this shortens the timeline to switch over to post-quantum algorithms.
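As a rough back-of-envelope illustration of the gap, using only the approximate figures quoted in this article (not the whitepaper's actual resource estimates):

```python
# Illustrative arithmetic from the figures quoted in the article;
# the whitepaper's real estimates are more nuanced than this.

current_qubits = 1_000        # largest machines today, approx.
scale_factor = 500            # "about 500 times as many are needed"
needed_qubits = current_qubits * scale_factor

reduction = 20                # threat size shrank roughly 20x vs. prior estimates
previous_estimate = needed_qubits * reduction

print(f"needed now: ~{needed_qubits:,} qubits")          # ~500,000
print(f"previous estimate: ~{previous_estimate:,} qubits")  # ~10,000,000
```

The point of the exercise: even after a twentyfold reduction, the required machine is still hundreds of times larger than anything built so far, which is why the concern is the timeline rather than an immediate threat.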

The news had a surprising beneficiary: obscure cryptocurrency Algorand jumped 44% in price in response. The whitepaper called out Algorand specifically for implementing post-quantum cryptography on their blockchain. We caught up with Algorand’s chief scientific officer and professor of computer science and engineering at the University of Michigan, Chris Peikert, to understand how this announcement is impacting cryptography, why cryptocurrencies are feeling the effects, and what the future might hold. Peikert’s early work on a particular type of algorithm known as lattice cryptography underlies most post-quantum security today.

IEEE Spectrum: What is the significance of this Google Quantum AI whitepaper?

Peikert: The upshot of this paper is that it shows that a quantum computer would be able to break some of the cryptography that is most widely used, especially in blockchains and cryptocurrencies, with much, much fewer resources than had previously been established. Those resources include the time that it would take to do so and the number of qubits (or quantum bits) that it would have to use.

This cryptography is very central to not just cryptocurrencies but more broadly, to cryptography on the internet. It is also used for secure web connections between web browsers and web servers. Versions of elliptic curve cryptography are used in national security systems and military encryption. It’s very prevalent and pervasive in all modern networks and protocols.

And not only was this paper improving the algorithms, but there was also a concurrent paper showing that the hardware itself was substantially improved. The claim here was that the number of physical qubits needed to achieve a certain kind of logical qubit was also greatly reduced. These two kinds of improvements are compounding upon each other. It’s a kind of a win-win situation from the quantum computing perspective, but a lose-lose situation for cryptography.

IEEE Spectrum: What do Google AI’s findings mean for cryptocurrencies and the broader cybersecurity ecosystem?

Peikert: There’s always been this looming threat in the distance of quantum computers breaking a large fraction of the cryptography that’s used throughout the cryptocurrency ecosystem. And I think what this paper did was really the loudest alarm yet that these kinds of quantum attacks might not be as far off as some have suspected, or hoped, in recent years. It’s caused a re-evaluation across the industry, and a moving up of the timeline for when quantum computers might be capable of breaking this cryptography.

When we think about the timelines and when it’s important to have completed these transitions [to post-quantum cryptography], we also need to factor in the unknown improvements that we should expect to see in the coming years. The science of quantum computing will not stay static, and there will be these further breakthroughs. We can’t say exactly what they will be or when they will come, but you can bet that they will be coming.

IEEE Spectrum: What is your guess on if or when quantum computers will be able to break cryptography in the real world?

Peikert: Instead of thinking about a specific date when we expect them to come, we have to think about the probabilities and the risks as time goes on. There have been huge breakthrough developments, including not only this paper, but also some last year. But even with these, I think that the chance of a cryptographic attack by quantum computers being successful in the next three years is extremely low, maybe less than a percent. But then, as you get out to several years, like 5, 6, or 10 years, one has to seriously consider a probability, maybe 5% or 10% or more. So it’s still rather small, but significant enough that we have to worry about the risk, because the value that is protected by this kind of cryptography is really enormous.

The US government has put 2035 as its target for migrating all of the national security systems to post-quantum cryptography. That seems like a prudent date, given the timelines that it takes to upgrade cryptography. It’s a slow process. It has to be done very deliberately and carefully to make sure that you’re not introducing new vulnerabilities, that you’re not making mistakes, that everything still works properly. So, you know, given the outlook for quantum computers on the horizon, it’s really important that we prepare now, or ideally, yesterday, or a few years ago, for that kind of transition.

IEEE Spectrum: Are there significant roadblocks you see to industrial adoption of post-quantum cryptography going forward?

Peikert: Cryptography is very hard to change. We’ve only had one or maybe two major transitions in cryptography since the early 1980s or late 1970s when the field first was invented. We don’t really have a systematic way of transitioning cryptography.

An additional challenge is that the performance tradeoffs are very different in post-quantum cryptography than they are in the legacy systems. Keys, ciphertexts, and digital signatures are all significantly larger in post-quantum cryptography, but the computations are typically faster. People have optimized cryptography for speed in the past, and post-quantum implementations are now very fast, but the sizes of the keys remain a challenge.
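
For context on the size gap Peikert describes, here is a rough byte-size comparison of widely used classical schemes against the NIST post-quantum standards ML-KEM (FIPS 203) and ML-DSA (FIPS 204). The figures come from the published parameter sets; treat them as illustrative, since other parameter sets differ.

```python
# Approximate sizes in bytes for one common parameter set per scheme:
# classical elliptic-curve schemes vs. the NIST post-quantum standards.
sizes = {
    "X25519 key exchange": {"public key": 32,   "ciphertext": 32},
    "ML-KEM-768 (PQC)":    {"public key": 1184, "ciphertext": 1088},
    "Ed25519 signature":   {"public key": 32,   "signature": 64},
    "ML-DSA-65 (PQC)":     {"public key": 1952, "signature": 3309},
}

for scheme, parts in sizes.items():
    total = sum(parts.values())
    print(f"{scheme:22s} {parts}  total ~ {total} B")
```

The post-quantum signature here is roughly 50 times the size of its classical counterpart, which is why space-constrained systems such as blockchains feel the transition most acutely.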

Especially in blockchain applications, like cryptocurrencies, space on the blockchain is at a premium. So it calls for a reevaluation in many applications of how we integrate the cryptography into the system, and that work is ongoing. And, the blockchain ecosystem uses a lot of advanced cryptography, exotic things like zero-knowledge proofs. In many cases, we have rudimentary constructions of these fancy cryptography tools from post-quantum type mathematics, but they’re not nearly as mature and industry ready as the legacy systems that have been deployed. It continues to be an important technical challenge to develop post-quantum versions of these very fancy cryptographic schemes that are used in cutting edge applications.

IEEE Spectrum: As an academic cryptography researcher, what attracted you to work with a cryptocurrency, and Algorand in particular?

Peikert: My former PhD advisor is Silvio Micali, the inventor of Algorand. The system is very elegant. It is a very high performing blockchain system: it uses very little energy, has fast transaction finalization, and a number of other great features. And Silvio appreciated that this quantum threat was real and was coming, and in 2021 the team approached me about helping to improve the Algorand protocol at a basic level to make it more post-quantum secure. That was a very exciting opportunity, because it was a difficult engineering and scientific challenge to integrate post-quantum cryptography into all the different technical and cryptographic mechanisms underlying the protocol.

IEEE Spectrum: What is the current status of post-quantum cryptography in Algorand, and blockchains in general?

Peikert: We’ve identified some of the most pressing issues and worked our way through some of them, but it’s a many-faceted problem overall. We started with the integrity of the chain itself, which is the transaction history that everybody has to agree upon.

Our first major project was developing a system that would add post-quantum security to the history of the chain. We developed a system called state proofs for that, which is a mixture of ordinary post-quantum cryptography and some fancier cryptography: It’s a way of taking a large number of signatures and digesting them down into a much smaller object, while still being confident that the full set of signatures actually exists and is properly formed. We followed it with other papers and projects about adding post-quantum cryptography and security to other aspects of the blockchain in the Algorand ecosystem.
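
This is not Algorand’s actual state-proof construction, but the core compression idea Peikert describes can be illustrated with a Merkle tree: commit to a large batch of signatures with one short digest, against which individual signatures can later be proven. A minimal sketch:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Digest a batch of items (e.g., signatures) into one 32-byte commitment."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# 1,000 mock "signatures" collapse into a single 32-byte digest.
sigs = [f"sig-{i}".encode() for i in range(1000)]
root = merkle_root(sigs)
print(len(root))  # prints 32
```

Because the hash tree is deterministic, anyone holding the root can verify that a claimed signature was part of the committed batch via a short inclusion proof, without storing all 1,000 signatures.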

It’s not a complete project yet. We don’t claim to be fully post-quantum secure. That’s a very challenging target to hit, and there are aspects that we will continue to work on into the near future.

IEEE Spectrum: In your view, will we adopt post-quantum cryptography before the risks actually catch up with us?

Peikert: I tend to be an optimist about these things. I think that it’s a very good thing that more people in decision making roles are recognizing that this is an important topic, and that these kinds of migrations have to be done. I think that we can’t be complacent about it, and we can’t kick the can down the road much longer. But I do see that the focus is being put on this important problem, so I’m optimistic that most important systems will eventually have good either mitigations or full migrations in place.

But it’s also a point on the horizon that we don’t know exactly when it will come. So, there is the possibility that there is a huge breakthrough, and we have many fewer years than we might have hoped for, and that we don’t get all the systems upgraded that we would like to have fixed by the time quantum computers arrive.

Author Correction: Fc-engineered large molecules targeting blood-brain barrier transferrin receptor and CD98hc have distinct central nervous system and peripheral biodistribution


Persistent organic pollutant concentrations in human pancreas correlate with markers of beta cell dysfunction

Measures of persistent organic pollutant concentrations in human pancreas remain limited; additionally, no studies have correlated pollutant concentrations with direct measures of beta cell function in humans. Here the authors show that lipophilic pollutants—including dioxins/furans, polychlorinated biphenyls, and organochlorine pesticides—accumulate in human pancreas and positively correlate with markers of beta cell dysfunction.

Asymmetric dimeric assembly of Suv3 helicase facilitates processive RNA unwinding

Human Suv3 is a mitochondrial helicase essential for RNA decay. Here, the authors present cryo-EM structures of Suv3 in multiple functional states, revealing an asymmetric dimeric architecture that coordinates ATP hydrolysis for processive RNA unwinding.

Author Correction: A proteogenomic atlas of 1032 brain metastases identifies molecular subtypes, immune landscapes, and therapeutic vulnerabilities


OpenAI Engineer Helps Companies Attract Buyers and Boost Sales



Like many engineers, Sarang Gupta spent his childhood tinkering with everyday items around the house. From a young age he gravitated to projects that could make a difference in someone’s everyday life.

When the family’s microwave plug broke, Gupta and his father figured out how to fix it. When a drawer handle started jiggling annoyingly, the youngster made sure it didn’t do so for long.

Sarang Gupta


Employer

OpenAI in San Francisco

Job

Data science staff member

Member grade

Senior member

Alma maters

The Hong Kong University of Science and Technology; Columbia

By age 11, his interest expanded from nuts and bolts to software. He learned programming languages such as Basic and Logo and designed simple programs including one that helped a local restaurant automate online ordering and billing.

Gupta, an IEEE senior member, brings his mix of curiosity, hands-on problem-solving, and a desire to make things work better to his role as member of the data science staff at OpenAI in San Francisco. He works with the go-to-market (GTM) team to help businesses adopt ChatGPT and other products. He builds data-driven models and systems that support the sales and marketing divisions.

Gupta says he tries to ensure his work has an impact. When making decisions about his career, he says, he thinks about what AI solutions he can unlock to improve people’s lives.

“If I were to sum up my overall goal in one sentence,” he says, “it’s that I want AI’s benefits to reach as many people as possible.”

Pursuing engineering through a business lens

Gupta’s early interest in tinkering and programming led him to choose physics, chemistry, and math as his higher-level subjects at Chinmaya International Residential School, in Tamil Nadu, India. As part of the high school’s International Baccalaureate chapter, students select three subjects in which to specialize.

“I was interested in engineering, including the theoretical part of it,” Gupta says, “But I was always more interested in the applications: how to sell that technology or how it ties to the real world.”

After graduating in 2012, he moved overseas to attend the Hong Kong University of Science and Technology. The university offered a dual bachelor’s program that allowed him to earn one degree in industrial engineering and another in business management in just four years.

In his spare time, Gupta built a smartphone app that let students upload their class schedules and find classmates to eat lunch with. The app didn’t take off, he says, but he enjoyed developing it. He also launched Pulp Ads, a business that printed advertisements for student groups on tissues and paper napkins, which were distributed in the school’s cafeterias. He made some money, he says, but shuttered the business after about a year.

After graduating from the university in 2016, he decided to work in Hong Kong’s financial hub and joined Goldman Sachs as an analyst in the bank’s operations division.

From finance to process optimization at scale

After two parties agree on securities transactions, the bank’s operations division ensures that the trade details are recorded correctly, the securities and payments are ready to transfer, and the transaction settles accurately and on time.

As an analyst, Gupta was tasked with finding bottlenecks in the bank’s workflows and fixing them. He identified an opportunity to automate trade reconciliation, a process in which analysts manually compared data across spreadsheets and systems to make sure a transaction’s details were consistent. The process helped ensure financial transactions were recorded accurately and settled correctly.

Gupta built internal automation tools that pulled trade data from different systems, ran validation checks, and generated reports highlighting any discrepancies.

“Instead of analysts manually checking large datasets, the tools automatically flagged only the cases that required investigation,” he says. “This helped the team spend less time on repetitive verification tasks and more time resolving complex issues. It was also my first real exposure to how software and data systems could dramatically improve operational workflows.”
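
A toy version of that kind of automated reconciliation check, with hypothetical field names standing in for the bank’s real trade records:

```python
def reconcile(system_a, system_b, key="trade_id"):
    """Compare the same trades from two systems; return only the discrepancies."""
    b_by_id = {t[key]: t for t in system_b}
    issues = []
    for trade in system_a:
        other = b_by_id.get(trade[key])
        if other is None:
            issues.append((trade[key], "missing in system B"))
            continue
        for field in trade:
            if field != key and trade[field] != other.get(field):
                # Flag only fields that disagree, so analysts review exceptions,
                # not every row.
                issues.append((trade[key], f"{field}: {trade[field]!r} != {other.get(field)!r}"))
    return issues

a = [{"trade_id": 1, "qty": 100, "price": 10.5}, {"trade_id": 2, "qty": 50, "price": 9.9}]
b = [{"trade_id": 1, "qty": 100, "price": 10.5}, {"trade_id": 2, "qty": 55, "price": 9.9}]
print(reconcile(a, b))  # prints [(2, "qty: 50 != 55")]
```

Matching rows produce no output at all, which is the point: the report surfaces only the handful of cases needing human investigation.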

“Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”

The experience made him realize he wanted to work more deeply in technology and data-driven systems, he says. He decided to return to school in 2018 to study data science and AI, when the fields were just beginning to surge into broader awareness.

He discovered that Columbia offered a dedicated master’s degree program in data science with a focus on AI. After being accepted in 2019, he moved to New York City.

Throughout the program, he gravitated to the applied side of machine learning, taking courses in applied deep learning and neural networks.

One of his major academic highlights, he says, was a project he did in 2019 with the Brown Institute, a joint research lab between Columbia and Stanford focused on using technology to improve journalism. The team worked with The Philadelphia Inquirer to help the newsroom staff better understand their coverage from a geographic and social standpoint. The project highlighted “news deserts”—underserved communities for which the newspaper was not providing much coverage—so the publication could redirect its reporting resources.

To identify those areas, Gupta and his team built tools that extracted locations such as street names and neighborhoods from news articles and mapped them to visualize where most of the coverage was concentrated. The Inquirer implemented the tool in several ways including a new web page that aggregated stories about COVID-19 by county.

“Journalism was an interesting problem set for me, because I really like to read the news every day,” Gupta says. “It was an opportunity to work with a real newsroom on a problem that felt really impactful for both the business and the local community.”

The GenAI inflection point

After earning his master’s degree in 2020, Gupta moved to San Francisco to join Asana, the company behind the work-management platform of the same name. He was drawn to the opportunity to work for a relatively small company where he could have end-to-end ownership of projects. He joined the organization as a product data scientist, focusing on A/B testing for new platform features.

Two years later, a new opportunity emerged: He was asked to lead the launch of Asana Intelligence, an internal machine learning team building AI-powered features into the company’s products.

“I felt I didn’t have enough experience to be the founding data scientist,” he says. “But I was also really interested in the space, and spinning up a whole machine learning program was an opportunity I couldn’t turn down.”

The Asana Intelligence team was given six months to build several machine learning–powered features to help customers work more efficiently. They included automatic summaries of project updates, insights about potential risks or delays, and recommendations for next steps.

The team met that goal and launched several other features including Smart Status, an AI tool that analyzes a project’s tasks, deadlines, and activity, then generates a status update.

“When you finally launch the thing you’ve been working on, and you see the usage go up, it’s exhilarating,” he says. “You feel like that’s what you were building toward: users actually seeing and benefiting from what you made.”

Gupta and his team also translated that first wave of work into reusable frameworks and documentation to make it easier to create machine learning features at Asana. He and his colleagues filed several U.S. patents.

At the time he took on that role, OpenAI launched ChatGPT. The mainstreaming of generative AI and large language models shifted much of his work at Asana from model development to assessing LLMs.

OpenAI captured the attention of people around the world, including Gupta. In September 2025 he left Asana to join OpenAI’s data science team.

The transition has been both energizing and humbling, he says. At OpenAI, he works closely with the marketing team to help guide strategic decisions. His work focuses on developing models to understand the efficiency of different marketing channels, to measure what’s driving impact, and to help the company better reach and serve its customers.

“The pace is very different from my previous work. Things move quickly,” he says. “The industry is extremely competitive, and there’s a strong expectation to deliver fast. It’s been a great learning experience.”

Gupta says he plans to stay in the AI space. With technology evolving so rapidly, he says, he sees enormous potential for task automation across industries. AI has already transformed his core software engineering work, he says, and it’s helped him enhance areas that aren’t natural strengths.

“I’m not a good writer, and AI has been huge in helping me frame my words better and present my work more clearly,” he says. “Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”

Exploring IEEE publications and connections

Gupta has been an IEEE member since 2024, and he values the organization as both a technical resource and a professional network.

He regularly turns to IEEE publications and the IEEE Xplore Digital Library to read articles that keep him abreast of the evolution of AI, data science, and the engineering profession.

IEEE’s member directory tools are another valuable resource that he uses often, he says.

“It’s been a great way to connect with other engineers in the same or similar fields,” he says. “I love sharing and hearing about what folks are working on. It brings me outside of what I’m doing day to day.

“It inspires me, and it’s something I really enjoy and cherish.”

What It’s Like to Live With an Experimental Brain Implant



Scott Imbrie vividly remembers the first time he used a robotic arm to shake someone’s hand and felt the robotic limb as if it were his own. “I still get goosebumps when I think about that initial contact,” he says. “It’s just unexplainable.” The moment came courtesy of a brain implant: an array of electrodes that let him control a robotic arm and receive tactile sensations back to the brain.

Getting there took decades. In 1985, Imbrie woke up in the hospital after a car accident with a broken neck and a doctor telling him he’d never use his hands or legs again. His response was an expletive, he says—and a decision. “I’m not going to allow someone to tell me what I can and can’t do.” With the determination of a head-strong 22-year-old, Imbrie gradually regained the ability to walk and some limited arm movement. Aware of how unusual his recovery was, the Illinois native wanted to help others in similar situations and began looking for research projects related to spinal cord injuries. For decades, though, he wasn’t the right fit, until in 2020 he was finally accepted into a University of Chicago trial.

Scott Imbrie has shaken hands with a robotic arm controlled by a brain implant. The electrodes record neural signals that enable him to move the device and receive tactile feedback. Top: 60 Minutes/CBS News; Bottom: University of Chicago

Imbrie is part of a rarefied group: More people have gone to space than have received advanced brain-computer interfaces (BCIs) like his. But a growing number of companies are now attempting to move the devices out of neuroscience labs and into mainstream medical care, where they could help millions of people with paralysis and other neurological conditions. Some companies even hope that BCIs will eventually become a consumer technology.

None of that will be possible without people like Imbrie. He’s a member of the BCI Pioneers Coalition, an advocacy group founded in 2018 by Ian Burkhart, the first quadriplegic to regain hand movement using a brain implant.

That life-changing experience convinced Burkhart that BCIs will make the leap from lab to real world only if users help shape the technology by sharing their perspectives on what works, what doesn’t, and how the devices fit into daily life. The coalition aims to ensure that companies, clinicians, and regulators hear directly from trial participants.

Ian Burkhart founded the BCI Pioneers Coalition to ensure that companies developing brain implants hear directly from the people using them. Left: Andrew Spear/Redux; Right: Ian Burkhart

The group also serves as a peer-support network for trial participants. That’s crucial, because despite the steady drumbeat of miraculous results from BCI trials, receiving a brain implant comes with significant risks. Surgical complications, such as bleeding or infection in the brain, are possible. Even more concerning is the potential psychological toll if the implant fails to work as expected or if life-changing improvements are eventually withdrawn.

Researchers spell this out upfront, and many are put off, says John Downey, an assistant professor of neurological surgery at the University of Chicago and the lead on Imbrie’s clinical trial. “I would say, the number of people I talk to about doing it is probably 10 to 20 times the number of people that actually end up doing it,” he says.

What Happens in a BCI Trial?

BCI pioneers arrive at their unique status via a number of paths, including spinal cord injuries, stroke-induced paralysis, and amyotrophic lateral sclerosis (ALS). The implants they receive come from Blackrock Neurotech, Neuralink, Synchron, and other companies, and are being tested for restoring limb function, controlling computers and robotic arms, and even restoring speech.

Many of the implants record signals from the motor cortex—the part of the brain that controls voluntary movements—to move external devices. Some others target the somatosensory cortex, which processes sensory signals from the body, including touch, pain, temperature, and limb position, to re-create tactile sensation.

BCI Designs Used by Today’s Pioneers


Diagram comparing three brain-computer interface implants from Blackrock, Neuralink, Synchron.

Ease of use depends heavily on the application. Restoring function to a user’s own limbs or controlling robotic arms involves the most difficult learning curve. In early sessions, participants watch a virtual arm reach for objects while they imagine or attempt the same movement. Researchers record related brain signals and use them to train “decoder” software, which translates neural activity into control signals for a robotic arm or stimulation patterns for the user’s nerves or muscles.
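
Production decoders are typically Kalman filters or neural networks, but the training step described above can be sketched as a simple linear fit from recorded firing rates to intended movement, on synthetic data (all numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training session: 500 time bins, 96 recording channels,
# and a 2-D intended cursor velocity (what the participant imagines doing).
n_bins, n_channels = 500, 96
true_map = rng.normal(size=(n_channels, 2))          # unknown neural-to-velocity mapping
rates = rng.poisson(5.0, size=(n_bins, n_channels))  # spike counts per bin
velocity = rates @ true_map + rng.normal(scale=0.5, size=(n_bins, 2))

# "Training the decoder": a least-squares fit from firing rates to velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# At run time, each new bin of spike counts becomes a velocity command.
new_rates = rng.poisson(5.0, size=(1, n_channels))
command = new_rates @ decoder
print(command.shape)  # prints (1, 2)
```

The same picture explains the recalibration burden researchers describe: when neural activity drifts, the fitted mapping no longer matches the brain, and the fit must be redone on fresh data.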

Paralyzed in a 2010 swimming accident, Burkhart took part in a trial conducted by Battelle Memorial Institute and Ohio State University from 2014 to 2021. His implant recorded signals from his motor cortex as he attempted to move his hand, and the system relayed those commands to electrodes in his arm that stimulated the muscles controlling his fingers.

Ian Burkhart, who is paralyzed from the chest down, received a brain implant that routed neural signals through a computer to his paralyzed muscles, enabling him to play a video game. Battelle

Getting the system to work seamlessly took time, says Burkhart, and initially required intense concentration. Eventually, he could shift his focus from each individual finger movement to the overall task, allowing him to swipe a credit card, pour from a bottle, and even play Guitar Hero.

Training a decoder is also not a one-and-done process. Systems must be regularly recalibrated to account for “neural drift”—the gradual shift in a person’s neural activity patterns over time. For complex tasks like robotic arm control, researchers may have to essentially train an entirely new decoder before each session, which can take up to an hour.

Austin Beggin says that testing a BCI is hard work, but he adds that moments like petting his dog make it all worth it. Daniel Lozada/The New York Times/Redux

Even after the system is ready, using the device can be taxing, says Austin Beggin, who was paralyzed in a swimming accident in 2015 and now participates in a Case Western Reserve University trial aimed at restoring hand movement. “The mental work of just trying to do something like shaking hands or feeding yourself is 100-fold versus you guys that don’t even think about it,” he says.

It’s also a serious time commitment. Beggin travels more than 2 hours from his home in Lima, Ohio, to Cleveland for two weeks every month to take part in experiments. All the equipment is set up in the house he stays in, and he typically works with the researchers for 3 to 4 hours a day. The majority of the experiments are not actually task-focused, he says, and instead are aimed at adjusting the control software or better understanding his neural responses to different stimuli.

But the BCI users say the hard work is worth it. Beyond the hope of restoring lost function, many feel a strong moral obligation to advance a technology that could help others. Beggin compares the pioneers to the early astronauts who laid the groundwork for the lunar landings. “We’re some of the first astronauts just to get shot up for a couple of hours and come back down to earth,” he says.

The Emotional Impact of BCIs

Speak to BCI early adopters and a pattern emerges: The biggest benefits are often more emotional than practical. Using a robotic arm to feed oneself or control a computer is clearly useful, but many pioneers say the most meaningful moments are the ones the experiment wasn’t even trying to produce. Beggin counts shaking his parents’ hands for the first time since his injury and stroking his pet dachshund as among his favorite moments. “That stuff is absolutely incredible,” he says.

Neuralink participant Alex Conley, who broke his neck in a car accident in 2021, uses his implant to control both a robotic arm and computers, enabling him to open doors, feed himself, and handle a smartphone. But he says the biggest boost has come from using computer-aided design software.

A former mechanic, Conley began using the software within days of receiving his implant to design parts that could be fabricated on a 3D printer. He has designed everything from replacement parts for his uncle’s power tools to bumpers for his brother-in-law’s truck. “I was a very big problem solver before my accident, I was able to fix people’s things,” he says. “This gives me that same little burst of joy.”

BCI user Nathan Copeland used a robotic arm to get a fist bump from then-President Barack Obama in 2016. Jim Watson/AFP/Getty Images

The outside world often underestimates those little wins, says Nathan Copeland, who holds the record for the longest functional brain implant. After breaking his neck in a car accident in 2004, he joined a University of Pittsburgh BCI trial in 2015 and has since used the device to control both computers and a robotic arm.

After he uploaded a video to Reddit of himself playing Final Fantasy XIV, one commenter criticized him for not using his device for more practical tasks. Copeland says people don’t understand that those lighthearted activities also matter. “A lot of tasks that people think are mundane or frivolous are probably the tasks that have the most impact on someone that can’t do them,” he says. “Agency and freedom of expression, I think, are the things that impact a person’s life the most.”

Nathan Copeland plays Final Fantasy XIV using his brain implant to control the game character.

When Brain Implants Become Life-Changing

This perspective resonates with Neuralink’s first user, Noland Arbaugh—paralyzed from the neck down after a swimming accident in 2016. After receiving his implant in January 2024, he was able to control a cursor within minutes of the device being switched on. A few days later, the engineers let him play the video game Civilization VI, and the technology’s potential suddenly felt real. “I played it for 8 hours or 12 hours straight,” he says. “It made me feel so independent and so free.”

Before receiving his Neuralink implant, Noland Arbaugh used mouth-operated devices to control a computer. He says the BCI is more reliable and enables him to do many more things on his own. Rebecca Noble/The New York Times/Redux

But the technology is also providing more practical benefits. Before his implant, Arbaugh relied on a mouth-held typing stick and a mouth-controlled joystick called a quadstick, which uses sip-or-puff sensors to issue commands. But the fiddliness of this equipment required constant caregiver support. The Neuralink implant has dramatically increased the number of things he can do independently. He says he finds great value in not needing his family “to come in and help me 100 times a day.”

For Casey Harrell, the technology has been even more transformative. Diagnosed with ALS in 2020, the climate activist had just welcomed a baby daughter and was in the midst of a major campaign, pressuring a financial firm to divest from companies that had poor environmental records.


Person using a brain-computer interface to control text on a monitor. Casey Harrell was able to communicate again within 30 minutes of his BCI being switched on. The device translates his neural signals quickly enough for him to hold conversations. Ian Bates/The New York Times/Redux

“Every morning we’d wake up and there’d be a new thing he couldn’t do, a new part of his body that didn’t work,” says his wife, Levana Saxon. Most alarming was his rapid loss of speech, which, among other things, left him unable to indicate when he was in pain. Then a relative alerted him to a clinical trial at the University of California, Davis, using BCIs to restore speech. He immediately signed up.

The device, implanted in July 2023, records from the brain region that controls muscles involved in talking and translates these signals into instructions for a voice synthesizer. Within 30 minutes of it being switched on, Harrell could communicate again. “I was absolutely overwhelmed with the thought of how this would impact my life and allow me to talk to my family and friends and better interact with my daughter,” he says. “It just was so overwhelming that I began to cry.”

While earlier assistive technology limited him to short, direct commands, Harrell says the BCI is fast enough that he can hold a proper conversation, and he’s been able to resume work part-time.

What’s Holding BCI Technology Back?

BCI technology still has limits. Most trial participants using Blackrock Neurotech implants can operate their devices only in the lab because the systems rely on wired connections and racks of computer hardware. Some users, including Copeland and Harrell, have had the equipment installed at home, but they still can’t leave the house with it. “That would be a big unlock if I was able to do so,” says Harrell.

The academic nature of many trials creates additional constraints. Pressure to publish and secure funding pushes researchers to demonstrate peak performance on narrow tasks rather than build more versatile and reliable systems, says Mariska Vansteensel, who runs BCI studies at the University Medical Center Utrecht in the Netherlands. She says that investigating the technology’s limits or repeating an experiment in new patients is “less rewarded in terms of funding.”

In a clinical trial, Scott Imbrie uses a BCI to control a robotic arm, using signals from his motor cortex to make it move a block. University of Chicago

One of Imbrie’s biggest frustrations is the rapid turnover in experiments. Just as he begins to get proficient at one task, he’s asked to switch to the next. Study designs also mean that much of the users’ time is spent on mundane tasks required to fine-tune the system.

Perhaps the biggest issue is that trials are often time-limited. That’s partly because scar tissue from the body’s immune response to the implant can gradually degrade signal quality. But constraints on funding and researcher availability can also make it impossible for users to keep using their BCIs after their trials end, even when the technology is still functional.

Ian Burkhart’s BCI enables him to grasp objects, pour from a bottle, and swipe a credit card.

Burkhart has firsthand experience. His trial was extended, but the implant was eventually removed after he got an infection. He always knew the trial would end, but it was nonetheless challenging. “It was a little bit of a tease where I got to see the capability of the restoration of function,” he says. “Now I’m just back to where I was.”

The Push to Commercialize BCIs

Progress is being made in transitioning the technology from experimental research devices to fully-fledged medical products that could help users in their everyday lives. Most academic BCI research has relied on Blackrock Neurotech’s Utah Arrays, which typically feature 96 needlelike electrodes that penetrate the brain’s surface. The implant is connected to a skull-mounted pedestal that’s wired to external hardware. But some of the newer devices are sleeker and less invasive.

Neuralink’s implant houses its electronics and rechargeable battery in a coin-size unit (roughly the size of a quarter or a euro) connected to flexible electrode threads inserted into the brain by a robotic “sewing machine.” The unit is mounted in a hole cut into the skull and charges and transfers data wirelessly. Synchron takes a different approach, threading a stent-like implant through blood vessels into the motor cortex. This “stentrode” connects by wire to a unit in the chest that powers the implant and transmits data wirelessly.

Man using a large on-screen keyboard to type messages on a tablet computer. Rodney Gorham can use his Synchron implant to control not just a computer, but also smart devices in his home like an air conditioner, fan, and smart speaker. Rodney Decker

Neuralink’s decoder runs on a laptop, while Synchron deploys a smartphone-size signal processing unit as a wireless bridge to the user’s devices, which allows them to use their implants at home and on the move. The companies have also developed adaptive decoders that use machine learning to adjust to neural drift on the fly, reducing the need for recalibration.

Making these devices truly user-friendly will require technology that can interpret user context, including mood, attention levels, and environmental factors like background noise and location, says Kurt Haggstrom, Synchron’s chief commercial officer. This approach will require AI that analyzes neural signals alongside other data streams such as audio and visual input.

Last year, Synchron took a first step by pairing its implant with an Apple Vision Pro headset. When trial participant Rodney Gorham looked at devices such as a fan, a smart speaker, and an air conditioner, the headset overlaid a menu that enabled him to adjust the device’s settings using his implant.

Rodney Gorham uses his Synchron implant to turn on music, feed his dog, and more. Synchron BCI

Another way to reduce cognitive load is to detect high-order signals of intent in neural data rather than low-level motor commands, says Florian Solzbacher, cofounder and chief scientific officer of Blackrock Neurotech. For instance, rather than manually navigating to an email app and typing, the user could simply think about sending an email, and the system would open the app with the content already prepopulated, he says.

Durability may prove a thornier problem to solve, UChicago’s Downey says. Current implants last around a decade—well short of a lifelong solution. And with limited real estate in the brain, replacement is only possible once or twice, he says.

Rapid technological progress also raises difficult decisions about whether to get a BCI implant now or wait for a more advanced device. This was a major concern for Gorham’s wife, Caroline. “I was hesitant. I didn’t want him to go on the trial but maybe a future one,” she says. “It was my fear of missing out on future upgrades.”

Will Brain Implants Ever Become Consumer Tech?

Some executives have raised the prospect of BCIs eventually becoming consumer devices. Neuralink founder Elon Musk has been particularly vocal, suggesting that the company’s implants could replace smartphones, let people save and replay memories, or even achieve “symbiosis” with AI.

This kind of talk inspires mixed feelings in users. The hype brings visibility and funding, says Beggin, but could divert attention from medical users’ needs. Copeland worries that consumer branding could strip the devices of insurance coverage and that rising demand may make it harder to access qualified surgeons.

A man, seen in profile, sits in a wheelchair. Noland Arbaugh, the first recipient of Neuralink’s BCI, says that using the implant to control a computer made him feel independent and free. Steve Craft/Guardian/eyevine/Redux

There are also concerns about how data collected by BCI companies will be handled if the devices go mainstream. As a trial participant, Arbaugh says he’s comfortable signing away his data rights to advance the technology, but he thinks stronger legal protections will be needed in the future. “Does that data still belong to Neuralink? Does it belong to each person? And can that data be sold?” he asks.

Blackrock’s Solzbacher says the company remains focused on the medical applications of the technology. But he also believes it is building a “universal interface to any kind of a computerized system” that may have broader applications in the future. And he says the company owes it to users not to limit them to a bare-bones assistive technology. “Why would somebody who’s got a medical condition want to get less than something that somebody who’s able-bodied would possibly also take?” says Solzbacher.

The ever-optimistic Imbrie heartily agrees. Medical devices are invariably expensive, he says, but targeting consumer applications could push companies to keep devices simple and affordable while continuing to add features. “I truly believe that making it a consumer-available product will just enhance the product’s capabilities for the medical field,” he says.

Imbrie is on a mission to refocus the conversation around BCIs on the positives. While concerns about risks are valid, he worries that the alarming language often used to describe brain implants discourages people from volunteering for trials that could help them.

“I remember laying there in the bed and not being able to move,” he says, “and it was really dehumanizing having to ask someone to do everything for you. As humans, we want to be independent.”

Temporal predictions shape somatosensory perception

Highest h-index author
Christian Büchel (h-index 115)
Main affiliation
Unknown

This study shows that expectations about when a stimulus will occur systematically increase perceived intensity of pain and non-painful sensations, independent of actual delay or prediction errors, highlighting a core role of temporal expectations in perception.

Machine learning driven discovery of low modulus biomedical titanium alloys for additive manufacturing

Researchers combined CALPHAD, machine learning, and multi-objective optimisation to design an AM-specific titanium alloy for implant and orthopaedic applications. Laser powder bed fusion produced low-stiffness (~43 GPa), high-ductility (~31%) components, with good cell compatibility.

Sparseness facilitates image encoding across visuo-frontal networks in freely moving macaque

Sparseness, a quantitative measure of coding efficiency, has only been tested under restrictive conditions using synthetic stimuli. Here, the authors employed wireless neural recordings in freely moving macaques to show that sparsification constitutes a general principle of population coding across sensory and executive cortical circuits.

Oxidation-reconstructed Li<sup>+</sup> transport enables high-tap-density single-crystal regeneration of spent LiNi<sub>0.5</sub>Co<sub>0.2</sub>Mn<sub>0.3</sub>O<sub>2</sub> positive electrodes

Direct regeneration of spent lithium-ion battery positive electrode materials is hindered by structural disorder and surface degradation. Here, authors use oxidation to reconstruct lithium transport pathways and regenerate dense single-crystal LiNi0.5Co0.2Mn0.3O2 with stable cycling performance.

Hi-Compass: a depth-aware deep learning framework for predicting cell-type-specific 3D genome organization from single-cell to spatial resolution

Cell-type-specific 3D genome maps are hard to generate experimentally. Here, the authors develop Hi-Compass, a deep-learning framework that predicts chromatin interactions from accessibility data across variable sequencing depths, recovering chromatin loops and linking disease variants to target genes.

Dual-function surface engineering for enhancing anode stability in alkaline seawater oxidation

Seawater electrolysis for green hydrogen is hindered by chloride-induced corrosion and local acidification of anodes. Here, the authors report an osmium-decorated cobalt phosphide anode that buffers protons and repels chloride, enabling stable ampere-level seawater electrolysis for 4500 h.

Phonon-scattering-induced linear magnetoresistance in the quantum limit up to room temperature

Phenomena emerging in the quantum limit of solids often stem from electron-impurity or electron-electron interactions. Here, the authors provide evidence that linear magnetoresistance in tellurium originates from high-temperature phonon scattering in the quantum limit.

Topological isomerization unlocks exceptional elasticity and strength of cellulosic triboelectric aerogels

To address the intrinsic trade-off between elasticity and strength in cellulose aerogels, a topological isomerization strategy is proposed, enabling simultaneous mechanical robustness and stable triboelectric output for reliable self-powered sensing.

Signaling cascades shape functional subpopulations of cortical astrocytes in male wild-type mice and APP/PS1dE9 Alzheimer’s disease model

How is astrocyte heterogeneity controlled? How does it impact disease? Here, the authors show that STAT3 and NF-kB pathways define astrocyte subpopulations in wild type and APP/PS1dE9 Alzheimer’s disease (AD) model mice, with different morphology, transcriptional signature, functional features, and impact on AD-related alterations.

The AI Revolution in Math Has Arrived

AI is being used to prove new results at a rapid pace. Mathematicians think this is just the beginning.

The post The AI Revolution in Math Has Arrived first appeared on Quanta Magazine

Squishy Photonic Switches Promise Fast Low Power Logic

Photonic devices, which rely on light instead of electricity, have the potential to be faster and more energy efficient than today’s electronics. They also present a unique opportunity to develop devices using soft materials, such as polymers and gels, which are poor conductors of electricity, but are easier to manufacture and more environmentally friendly. The development of these potentially squishy, flexible photonics, however, requires the ability to manipulate light using only light, not electricity.

In soft matter, that’s been done primarily by changing the physical properties of optical materials or by using intense light pulses to change the direction of light. Now, an international team of scientists has developed a new way of controlling light with light using very low light intensities and without changing any of the physical properties of materials.

Igor Muševič, a professor of physics at the University of Ljubljana who led the project, says that he first got the idea for the device while at a conference in San Francisco, listening to a talk by Stefan W. Hell about stimulated emission depletion (STED) microscopy. The imaging technique, for which Hell won a Nobel Prize in Chemistry in 2014, uses two lasers to produce an extremely small light beam to scan objects. “When I saw this, I said, this is manipulation light by light, right?” Muševič recalls.

His realization inspired a device into which a laser pulse is fired. Whether this beam makes it out of the device depends on whether a second pulse is fired less than a nanosecond afterward.

A liquid crystal photonic switch

The device consists of a spherical bead of liquid crystal, held in shape by its elastic material properties and the forces between its molecules, infused with a fluorescent dye and trapped between four upright cone-shaped polymer structures that guide light in and out of the device. When a laser pulse is sent through one of the four polymer waveguides, the light is quickly transferred into the liquid crystal, exciting the fluorescent dye. In a process known as whispering gallery mode resonance, the photons inside the liquid crystal are reflected back inside each time they hit the bead’s spherical surface. The result is that light circulates inside the cavity until it is eventually reflected into one of the waveguides, which then emits the photons as a laser beam.

The team realized that sending a second laser pulse of a different color into the waveguides, before the liquid crystal started emitting light from the first pulse, triggered stimulated emission in the excited dye molecules. The photons from the second pulse interact with the already-excited dye, causing it to emit photons identical to those in the second pulse while draining the energy deposited by the first. The second laser beam, called the STED beam, is amplified by the process, while the light from the first pulse is so diminished that it isn’t emitted at all. Because the outcome of the first laser pulse could be controlled using the second, the team had successfully demonstrated the control of light by light.

Vandna Sharma, Jaka Zaplotnik, et al.

According to the Ljubljana team, the energy efficiency of the liquid crystal approach is much better than previous soft-matter techniques, which had typically involved using intense light fields to change material properties of the soft matter, such as the index of refraction. The new method reduces the energy needed by more than a factor of a hundred. Because the STED laser pulse circulates repeatedly in the crystal, a single photon can deplete many dye molecules of the energy from the first laser pulse.

Miha Ravnik, a theoretical physicist also at the University of Ljubljana who worked on the project, explains that control of light by light is essential in soft-matter photonic logic gates. “You can very much control when [light] is generated and in which direction,” Ravnik says of the light shined into the polymer waveguides. “And this gives you, then, this capability that you create logical operations with light.”

Aside from its potential in photonic logical circuits, the team’s approach presents several technical advantages over photonics made from silicon or other hard materials, Muševič says. For example, using soft matter greatly simplifies the manufacturing process. The liquid crystal in the team’s device can be inserted in less than a second, but manufacturing a similar structure with hard materials is difficult. Additionally, soft matter devices can be manufactured at much lower temperatures than silicon and other hard materials. Muševič also points out that soft matter presents an opportunity to experiment with the geometry of the device. With liquid crystals “you can make many different kinds of cavities,” says Muševič. “You have, I would say, a lot of engineering space.”

Ravnik is excited about the potential of the team’s breakthrough, particularly as a step toward photonic computing and even photonic neural networks. But he recognizes that these developments are far down the line. “There’s no way this technology can compete with current neural network implementation at all,” he admits. Still, the possibilities are tantalizing. “The energy losses are predicted to be extremely low, the speeds for calculation extremely high.”

Author Correction: Coordination-tailored atomic interfaces for selective CH<sub>4</sub>-to-C<sub>2</sub> conversion in aqueous solution

Deep visual proteomics uncovers nociceptor diversity and pain targets

Sensory neuron subtypes are defined by transcriptomics, but their proteomic identities remain unclear. Here, authors show distinct protein signatures of electrophysiologically defined nociceptors and implicate B3GNT2 in pain sensitization.

Why Do We Tell Ourselves Scary Stories About AI?

Our tales of AI developing the will to survive, commandeer resources, and manipulate people say more about us than they do about language models.

The post Why Do We Tell Ourselves Scary Stories About AI? first appeared on Quanta Magazine

Rapid formation of non-spatial hippocampal representations consistent with behavioral timescale synaptic plasticity is modulated by entorhinal input

Spatial representations in the hippocampus can rapidly form through behavioral timescale synaptic plasticity. Here authors show rapid formation of non-spatial olfactory representations in CA1, consistent with behavioral timescale synaptic plasticity, and describe modulation by medial and lateral entorhinal cortices.