

Filling the Blind Spots in Brain Imaging

How high-performance computing enables full-volume 3D MRI reconstruction and segmentation for more accurate diagnostics

Sarah Anjum • PhD Student, Computer Science • March 2026

Seeing the Brain in Full

Accurate brain imaging can mean the difference between early tumor detection and a missed diagnosis. Yet the size and complexity of 3D MRI scans create a computational bottleneck that limits diagnostic precision. In Professor Anastasiu's lab, Sarah Anjum is tackling this problem by developing AI models capable of processing entire anatomical volumes simultaneously. Her work transforms massive raw datasets into high-contrast 3D reconstructions that radiologists can use to pinpoint abnormalities with unprecedented clarity.

The Scale of the Challenge

MRI brain scans aren't single images; they are stacks of roughly 154 slices that together form a complete 3D map of the anatomy. Previous approaches often processed only small subsets or 2D patches, which hampered model accuracy and slowed workflows.

“Splitting the images and the AI models could really hamper the accuracy of the final results,” says Sarah.

These limitations also meant longer training times and repeated checkpoint reloads. The challenge Sarah addresses is reconstructing these massive, sometimes undersampled scans into sharp, high-contrast images that distinguish gray matter, white matter, and cerebrospinal fluid—without losing the full-volume context essential for accurate diagnoses.
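
For readers unfamiliar with the data, here is a minimal sketch of what a full scan looks like in code, assuming a NIfTI-format file and the nibabel library; the file name and dimensions are illustrative, not from Sarah's dataset:

```python
# A minimal sketch of the data scale involved, assuming a NIfTI-format
# scan and the nibabel library; file name and shape are illustrative.
import nibabel as nib
import numpy as np

img = nib.load("subject01_t1.nii.gz")   # hypothetical T1-weighted scan
volume = img.get_fdata()                # e.g. shape (240, 240, 154)

# Patch-based pipelines see only fragments like this 64^3 crop...
patch = volume[:64, :64, :64]

# ...whereas a full-volume model consumes the whole stack at once,
# with batch and channel axes added in front.
full = volume[np.newaxis, np.newaxis]
print(patch.shape, full.shape)
```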


How WAVE HPC Makes Full-Volume MRI Possible

Single-Pass Volume Processing

Transitioning to WAVE HPC’s newest hardware, including NVIDIA A100 GPUs and the Grace Hopper node, has been a “game changer.” Unlike previous GPUs such as the Tesla V100, these systems provide the memory to process entire 3D scans at once.

“The AI model is able to see the entire image of the scan of the human anatomy in one single go,” Sarah explains.

This capability improves output accuracy and drastically reduces training time, enabling complex models to learn from full-volume data instead of fragments.
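
As a rough illustration of what single-pass processing means in code, here is a minimal sketch assuming PyTorch; the layers and volume shape are placeholders, not Sarah's actual model:

```python
# A toy 3D segmentation network, assuming PyTorch; illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),   # full-volume 3D convolution
    nn.ReLU(),
    nn.Conv3d(8, 4, kernel_size=3, padding=1),   # 4 hypothetical classes
)

# One whole scan as a single input: (batch, channel, H, W, slices).
scan = torch.randn(1, 1, 240, 240, 154)

with torch.no_grad():
    logits = model(scan)   # the volume plus every intermediate activation
print(logits.shape)        # must fit in GPU (or unified) memory at once
```

The memory cost scales with the whole volume times the channel width at every layer, which is why high-memory hardware like the A100 or the Grace Hopper node's unified memory makes the single-pass approach practical.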

Parallel Dataset Handling and Large Models

The Grace Hopper node's unified memory allows Sarah to run multiple datasets in parallel. She leverages large models like nnU-Net for 3D reconstruction and segmentation and applies tools including GANs, diffusion models, and transformer-based architectures. Batch jobs and scripts run in the background, allowing the cluster to manage heavy computations while she focuses on refining her models.

“I am able to think about the models without having to worry about the resources,” she notes.
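
In practice, this hands-off workflow means submitting jobs to the cluster's scheduler and moving on. A minimal sketch, assuming the cluster runs SLURM; the resource requests, script, and file names are hypothetical:

```python
# Submitting a background batch job, assuming a SLURM scheduler;
# the training script and job parameters below are hypothetical.
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=mri-recon
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00
python train_full_volume.py --dataset brain_t1
"""

with open("train_job.sh", "w") as f:
    f.write(job_script)

# sbatch returns immediately; the job runs in the background on the
# cluster while you keep working on other tasks.
subprocess.run(["sbatch", "train_job.sh"], check=True)
```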

Supporting Infrastructure and Workflow Management

HPC staff assist in setting up environments and navigating massive output files. Without these resources, Sarah says, “All I would have is the 2D image… not this 3D reconstructed one.”


From Computation to Clinical Insight

Sarah’s model improves lower-quality, pixelated scans, enhancing tissue contrast for radiologists. This allows more precise segmentation of tumors, necrotic cores, and edema, which are critical for treatment planning.

[Figure] Enhanced MRI segmentation highlighting tumor regions: necrotic core (red), enhancing tumor (green), and edema (yellow), improving clarity for more precise clinical analysis and treatment planning.
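
Because a segmentation map is just a labeled voxel grid, clinically useful quantities can be read off directly. A minimal sketch, assuming a NumPy label array; the class codes and voxel size are hypothetical:

```python
# Per-tissue volumes from a segmentation mask, assuming NumPy;
# class codes, mask contents, and voxel size are hypothetical.
import numpy as np

labels = {1: "necrotic core", 2: "enhancing tumor", 3: "edema"}
seg = np.random.randint(0, 4, size=(240, 240, 154))  # stand-in mask
voxel_mm3 = 1.0                                      # 1 mm isotropic

for code, name in labels.items():
    volume_ml = (seg == code).sum() * voxel_mm3 / 1000.0
    print(f"{name}: {volume_ml:.1f} mL")
```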

Looking ahead, Sarah plans to process raw k-space MRI data so that AI models can classify abnormalities directly, potentially lowering radiologist workload and improving diagnostic accuracy.
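
k-space is the raw frequency-domain signal the scanner actually measures; the familiar image is its inverse Fourier transform. A minimal NumPy sketch of that relationship and of undersampling, with random data standing in for a real acquisition:

```python
# Round-tripping between image space and k-space, assuming NumPy;
# the "slice" here is random data standing in for a real acquisition.
import numpy as np

image = np.random.rand(240, 240)             # one 2D slice (stand-in)
kspace = np.fft.fftshift(np.fft.fft2(image))

# Undersampling: keep only every other line of k-space, as an
# accelerated acquisition might, then reconstruct.
mask = np.zeros_like(kspace)
mask[::2, :] = 1.0
recon = np.fft.ifft2(np.fft.ifftshift(kspace * mask)).real

# The reconstruction is aliased; learning to undo this, or to classify
# from k-space directly, is the kind of task described above.
print(np.abs(image - recon).mean())
```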


Recognition and Milestones

  • Presented an advanced 3D version of her work at the Santa Clara Research Showcase
  • Abstract presented at Baylor Research Conference, September 2025
  • Papers under review for SIGKDD 2026 and Engineering Applications of Artificial Intelligence
  • Developed O-SCAN, demonstrating improved tissue contrast and segmentation accuracy over current state-of-the-art models


Advice for Students Entering HPC

Sarah encourages students not to be intimidated by technical setup. WAVE HPC provides easy access and guidance for environment management. Beyond her own research, she assists fellow students in Dr. Akbari's lab with onboarding and workflow optimization on the cluster, helping them leverage its GPU resources effectively.

"You can submit a batch job, it runs in the background, and you stay productive on other tasks. It's easier than you think to get started," she says.

She recommends leveraging tools like Jupyter Lab, Python, and the batch job system to efficiently handle large datasets while focusing on model development.

Based on an interview conducted by Ella Griffin, WAVE Student Marketing Assistant, March 11, 2026. 

High-performance computing resources supporting this work were made possible through hardware donations from NVIDIA and Supermicro, which help power the WAVE High Performance Computing Center at Santa Clara University.
