Amazon Research Awards recipients announced
Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating research topics across multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 63 award recipients who represent 41 universities in 8 countries. This announcement includes awards funded under five calls for proposals during the spring 2025 cycle: AI for Information Security, Amazon Ads, AWS AI: Agentic AI, Build on Trainium, and Think Big. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.

Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients are also assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.

“Amazon Research Awards are enabling incredibly impactful work to improve human health—from revolutionizing and democratizing structural biology tools, which can accelerate discovery of candidate molecules for new drugs to help patients, to predicting the etiology of a stroke in order to start the appropriate therapies, or interpreting digital phenotyping data to help with mental health services,” said Christine Silvers, AWS Principal Healthcare Advisor. “These are just three examples of projects for which recipients have received Amazon Research Awards. The potential for improving healthcare among all of the spring 2025, past, and future awardees is staggering and inspiring.”

“Academic AI researchers face a fundamental challenge: advancing machine learning research and educating the next generation requires access to cutting-edge infrastructure that’s both powerful and affordable,” said Yida Wang, AWS AI Principal Applied Scientist. “The Build on Trainium program directly addresses this barrier. We are working with leading AI research universities such as UC Berkeley, Stanford, CMU, MIT, UIUC, UCLA, and many others. At CMU, researchers achieved significant improvements over state-of-the-art FlashAttention in just one week. At MIT, researchers trained 3D medical imaging models with 50% higher throughput and lower cost, reducing training time from months to weeks. Build on Trainium represents AWS’s commitment to democratizing AI research through collaborative partnership with academia—fostering an environment where researchers experiment freely, students learn on production-scale infrastructure, and academic innovations shape the future of machine learning for everyone.”

The tables below list the spring 2025 cycle call-for-proposal recipients, grouped by research area and sorted alphabetically by last name.
AI for Information Security

Recipient | University | Research title
Christopher Fletcher | University of California, Berkeley | Design and Verification of High-Assurance Key Management Services for Stateful Confidential Computing
Zhou Li | University of California, Irvine | Precise and Analyst-friendly Attack Provenance on Audit Logs with LLM
Yu Meng | University of Virginia | Weakly-Supervised RLHF: Modeling Ambiguity and Uncertainty in Human Preferences
Jelena Mirkovic | University of Southern California | Safe and Secure API Discovery for Agentic AI
Aanjhan Ranganathan | Northeastern University | Understanding How LLMs Hack: Interpretable Vulnerability Detection and Remediation
Sanjit Seshia | University of California, Berkeley | Design and Verification of High-Assurance Key Management Services for Stateful Confidential Computing
Alexey Tregubov | University of Southern California | Safe and Secure API Discovery for Agentic AI
Ziming Zhao | Northeastern University | Understanding How LLMs Hack: Interpretable Vulnerability Detection and Remediation

Amazon Ads

Recipient | University | Research title
Xiaojing Liao | University of Illinois at Urbana–Champaign | Adversarial Misuse of Large Language Models in Digital Advertising: Benchmarking and Mitigation
Tianhao Wang | University of Virginia | Adversarial Misuse of Large Language Models in Digital Advertising: Benchmarking and Mitigation

AWS Agentic AI

Recipient | University | Research title
Faez Ahmed | Massachusetts Institute of Technology | AutoDA-Sim: A Multi-Agent Framework for Safe, Aesthetic, and Aerodynamic Vehicle Design
Fabio Anza | University of Maryland, Baltimore County | Physics Co-Pilot: An LLM-Orchestrated Scientific Assistant for Physics Research
Andrea Bajcsy | Carnegie Mellon University | Fine Grained Planning Evaluation for VLM Web Agents
Niranjan Balasubramanian | Stony Brook University | Efficient and Effective Long-Horizon Reasoning for Interactive LLM Agents
Andreea Bobu | Massachusetts Institute of Technology | Contextual Harm Mitigation and Automated Backtracking in Computer Use Agents
Joseph Campbell | Purdue University, West Lafayette | Open-World Probabilistic Theory of Mind
Cong Chen | Dartmouth College | Empowering Power Systems and Market Operations with Behavioral Generative Agents
Chunyang Chen | Technical University of Munich | Functional Bug-Aware Software Testing via Intelligent Computer Use Agents
Shay Cohen | University of Edinburgh | Diffusion-inspired chain-of-thought self-revision
Fernando De la Torre | Carnegie Mellon University | Fine Grained Planning Evaluation for VLM Web Agents
Sidong Feng | Monash University | Functional Bug-Aware Software Testing via Intelligent Computer Use Agents
James Fogarty | University of Washington, Seattle | Leveraging Multiple Representations in Multi-Agent Mobile App Interface Understanding and Task Execution
Surbhi Goel | University of Pennsylvania | Efficient and Safe Protocols for Collaborative Agentic AI
Nika Haghtalab | University of California, Berkeley | Multi-Agent AI Alignment
Irwin King | The Chinese University of Hong Kong | WebAGI: VLM-Driven Framework for Robust Web Automation and Planning in Agentic AI
Emma Lejeune | Boston University | Formidable yet Solvable: Scientific Computing Tasks for Agentic AI
Bang Liu | University of Montreal | Foundation Agents and Protocol for Collaborative Agentic AI
Harsha Madhyastha | University of Southern California | Improving the Efficiency of Web Agents
Michael Macy | Cornell University | Artificial Collective Intelligence: The Structure and Dynamics of LLM Communities
Radu Marculescu | University of Texas at Austin | Collaborative Continual Learning in Multimodal Multi-Agent Systems
Lianhui Qin | University of California, San Diego | ReaL-Agent: A Retrieval-and-Reasoning Agent for Deep, Cross-Modality Retrieval
Mahnam Saeednia | Delft University of Technology | Heterogeneous Multi-Agent Collaboration For Built-in Resilience
Maarten Sap | Carnegie Mellon University | OpenAgentSafety: Measuring and Mitigating Safety Harms of LLM-based AI Agent Interactions
Vitaly Shmatikov | Cornell University | Contextual Security for Multi-Agent Systems
Haim Sompolinsky | Harvard University | Lifelong learning in agentic AI through gated memory modules
John Torous | Harvard University | Interpreting Digital Phenotyping Data with LLM-Based Agentic Assistants for Mental Health Services
Jindong Wang | College of William & Mary | Structure Matters: Task-Optimized Topologies for LLM Agents
Xiaolong Wang | University of California, San Diego | Agentic World Representation
Zhi-Li Zhang | University of Minnesota, Twin Cities | NetGenius: Agentic AI for Next-Generation Wireless Network Autonomous Configurations and Intelligent Operations
Jiawei Zhou | Stony Brook University | Efficient and Effective Long-Horizon Reasoning for Interactive LLM Agents

Build on Trainium

Recipient | University | Research title
Saikat Dutta | Cornell University | VERA: Automated Testing for Improving the Reliability of Neuron Compiler Toolchain
Kuan Fang | Cornell University | Fast Adaptation of Multi-Modal Foundation Models for Robotic Perception and Control
Shizhong Han | Lieber Institute for Brain Development | Optimizing and scaling pretraining and preference-based fine-tuning of Large Chemical Models
Sitao Huang | University of California, Irvine | Automatic Kernel Synthesis and Tuning for AWS Trainium via Profile-Guided Graph Topology Optimization
Wataru Kameyama | Waseda University | Accelerating Vision-Language Autonomous Driving with AWS Trainium
Dong Li | University of California, Merced | Efficient Sparse Training with Adaptive Expert Parallelism on AWS Trainium
Xiaoxiao Li | University of British Columbia | Efficient MoE LLMs via Pruning and Matryoshka Quantization on AWS Trainium
Jiang Liu | Waseda University | Accelerating Vision-Language Autonomous Driving with AWS Trainium
Xiaoyi Lu | University of California, Merced | Accelerating Large Language and Reasoning Model Workloads with AWS Trainium
Satoshi Masuda | Tokyo City University | LLM for Software Modeling Brain in Multi Language
Andrew McCallum | University of Massachusetts, Amherst | Overcoming Fundamental Reasoning Limitations of LLMs by Always Thinking before Writing
Xupeng Miao | Purdue University, West Lafayette | Towards Communication-Efficient Distributed Training of Large Foundation Models by Dataflow-aware Optimizations
Michael Nagle | Lieber Institute for Brain Development | Optimizing and scaling pretraining and preference-based fine-tuning of Large Chemical Models
Jean-Christophe Nebel | Kingston University London | Efficient Architectures for Genomic Variant Interpretation: Language Models for Non-Coding DNA Variant Analysis
Farzana Rahman | Kingston University London | Efficient Architectures for Genomic Variant Interpretation: Language Models for Non-Coding DNA Variant Analysis
Rohan Sachdeva | University of California, Berkeley | Learning Host–Microbial Genetic Element Interactions with Genomic Language Models
Yanning Shen | University of California, Irvine | Automatic Kernel Synthesis and Tuning for AWS Trainium via Profile-Guided Graph Topology Optimization
Yun Song | University of California, Berkeley | Learning Host–Microbial Genetic Element Interactions with Genomic Language Models
Hoa Vo | Indiana University Bloomington | AI-Powered Travel Pattern Detection in VR for Occupant Behavior Analysis Using AWS Trainium
Minjia Zhang | University of Illinois Urbana-Champaign | Trainium-native MoE: Developing kernel and system optimizations for efficient and scalable MoE training

Think Big

Recipient | University | Research title
Tianlong Chen | University of North Carolina at Chapel Hill | Leveraging Molecular Dynamics to Empower Protein AI Models
William H. Lee | Yale School of Medicine | AI-powered prediction of ischemic stroke etiologies using multi-modal data
Piotr Sliz | Harvard Medical School | SBCloud – A Transformative Model for Scalable Structural Biology Research