Four Texas ECE Students Awarded Amazon AI PhD Fellowships

Amazon AI Fellows

Fifteen University of Texas at Austin students have been named Amazon AI Ph.D. Fellows. Supported through Amazon’s nationwide fellowship program, these students are advancing research in machine learning, computer vision, natural language processing and more.

Nationwide, Amazon announced 100 doctoral students from nine universities as the newest fellowship recipients in a program whose goal is to help drive the innovations that will underwrite the next step in the evolution of practical AI.

“Amazon’s AI Ph.D. Fellowship Program reflects our ongoing commitment to the academic community. We’re fortunate to collaborate with some of the nation’s brightest Ph.D. students who are advancing critical areas in AI – from high-performance chips and hardware to networking, software, foundation models, applications and more,” said Rohit Prasad, SVP and head scientist of Amazon AGI. “We believe investing in future talent is essential to moving the field forward and creating truly useful AI that benefits everyone.”

Each student will receive two years of funding, cloud-computing resources and mentorship from Amazon researchers to explore new frontiers in AI innovation and impact. The following students received fellowships this year.

Parikshit Bansal
Computer Science

Parikshit Bansal is advised by Sujay Sanghavi. His research focuses on developing principled algorithms for general machine learning problems. His current work centers on diffusion models for language with a particular emphasis on improving their efficiency.

Rohit Dwivedula
Computer Science (Networked Systems group)

Rohit Dwivedula is advised by Aditya Akella and Daehyeok Kim. His research interests are at the intersection of systems and machine learning, where he focuses on developing AI-driven techniques for improving decision-making in operating systems and cloud infrastructure.

Siddhartha Jain
Computer Science

Siddhartha Jain is advised by Scott Aaronson. He works on quantum algorithms and complexity with a focus on finding applications of quantum computing with provable advantage over classical computation.

Avinash Kumar
Electrical and Computer Engineering

Avinash Kumar is advised by Poulami Das. His research focuses on improving the efficiency of machine learning models, specifically large language models (LLMs), through system-level optimizations. His recent work explores correlation-aware KV cache compression strategies and adaptive methods for serving early-exit models. Before his graduate studies, he was a GPU architect at NVIDIA and later a research associate at AMD.

Sateesh Kumar
Computer Science

Sateesh Kumar is advised by Georgios Pavlakos and Roberto Martín-Martín. His research focuses on improving the data efficiency and robustness of robot learning algorithms by leveraging large-scale robotics datasets and structured 3D representations. He earned an M.S. from the University of California, San Diego, and was previously a researcher at ByteDance Seed and Retrocausal.

Syamantak Kumar
Computer Science

Syamantak Kumar is advised by Purnamrita Sarkar of the Department of Statistics and Data Sciences and Kevin Tian of the Department of Computer Science. His research lies at the intersection of statistics, optimization and machine learning, with a focus on developing principled algorithms for high-dimensional data analysis. His interests include sparse principal component analysis, differential privacy, robust statistics and sampling methods for complex probabilistic models.

Haoyu Li
Computer Science (Networked Systems group)

Haoyu Li is advised by Aditya Akella and Venkat Arun. His research leverages AI techniques to improve the performance and usability of modern systems, with a focus on data analytics pipelines, LLM cache management and scheduling for edge computing and autonomous vehicle systems.

Junbo Li
Computer Science

Junbo Li is advised by Atlas Wang and Qiang Liu. His research focuses on advancing reasoning-driven, agentic large language models and reinforcement learning, with an emphasis on building self-evolving pipelines that can interpret instructions while dynamically leveraging external tools, environments and reasoning to solve complex real-world problems.

Kaizhao Liang
Computer Science

Kaizhao Liang is advised by Qiang Liu. He previously worked as a principal engineer at SambaNova Systems. His research focuses on efficient training methods, sparse neural networks and large language models. He received a B.S. in computer science from the University of Illinois Urbana-Champaign.

Zeping Liu
Geography

Zeping Liu is advised by Gengchen Mai of the Department of Geography and the Environment. His research focuses on advancing Geospatial AI, with emphasis on geo-foundation models and spatial representation learning. He has published 14 papers in journals and at conferences, including NeurIPS, RSE, ESSD and IEEE TGRS, and serves as a reviewer for eight journals. He is also a student technician at Esri.

Mohammad Omama
Electrical and Computer Engineering

Mohammad Omama is advised by Sandeep Chinchali in the Swarm Robotics Lab. He focuses on making machine learning for robots more efficient and adaptive. His research explores visual localization, map compression and multimodal representations, and his work has been published in top research venues. In addition, he served as an applied scientist intern at Amazon, where he mentored students, reviewed papers and pursued startup-style research ideas.

Litu Rout
Electrical and Computer Engineering

Litu Rout is advised by Constantine Caramanis and Sanjay Shakkottai. His research develops theory for generative models—diffusion, rectified flows and optimal transport—and applies them to conditional sampling, including inverse problems, image and video editing and personalization. He currently studies discrete diffusion for multimodal (image-text) generation and understanding.

Haoran Xu
Electrical and Computer Engineering

Haoran Xu is advised by Amy Zhang. His work focuses on scaling reinforcement learning methods and integrating generative AI to push toward superhuman artificial general intelligence, particularly for applications in robotics and large language models. He spent a summer as a research intern at Microsoft Research.

Chutong Yang
Computer Science

Chutong Yang is advised by Kevin Tian. He has a broad interest in the design and analysis of algorithms in theoretical computer science and trustworthy machine learning. His interests include solving problems in learning theory, differential privacy and algorithmic fairness using tools in optimization and statistics.

Xiao Zhang
Computer Science (Networked Systems group)

Xiao Zhang is advised by Daehyeok Kim. His research focuses on networked and distributed systems, with a current emphasis on enabling predictable AI performance at the 5G edge through cross-layer telemetry and resource management. He aims to build practical systems that bridge real-world deployment challenges and core AI infrastructure needs.