GPU in EDU Seminar Series - UMN
Join Dr. Christopher S. Simmons from Cambridge Computer on Wednesday, October 29th at UMN
This one-day event, presented in collaboration with the Minnesota Supercomputing Institute at the University of Minnesota, features Dr. Christopher S. Simmons leading comprehensive sessions that equip academic researchers with a complete framework for implementing reproducible AI and scientific workflows.

The agenda topics were jointly developed by Dr. Simmons, Dr. Thomas Pengo, and Dr. Christy Henzler, a collaboration that reflects the shared research priorities of MSI and the University of Minnesota and ensures the seminars address areas of high relevance and impact.

The sessions begin with declarative software management, followed by data versioning through GitOps and DataOps methodologies, and culminate with an introduction to practical open-source LLM deployment strategies. The seminars emphasize sustainable approaches to AI workflows, combining cloud-native infrastructure principles with privacy-preserving deployment strategies. The goal is to help researchers and educators accelerate discovery while maintaining scientific rigor and keeping sensitive data secure on institutional hardware rather than relying on external commercial services.

An option to attend virtually will be available. The link will be provided prior to the event.

Location
Coffman Memorial Union
President's Room (3rd Fl. / 332)
300 Washington Ave. S.E.
Minneapolis, MN 55455
Date & Time
Wednesday, October 29, 2025, 9:30 AM - 4:30 PM
Seminar Schedule
9:30 AM
Check In / Registration & Coffee!
Make sure to check in with a team member and grab a cup of coffee and a pastry!
10:00 AM
Welcome Remarks and Introduction
10:15 AM
Ensuring Reproducibility in AI: Declarative Tools for Managing Software Stacks and Workflows
As AI workloads increasingly move into academic research environments, the challenge of maintaining reproducible computational environments becomes critical for scientific rigor, collaboration, and scalability. This presentation addresses a fundamental problem in scientific computing: how to effectively manage the complex software dependencies that modern AI workflows require while ensuring portability across different systems. This talk will explore three complementary approaches to declarative software stack management: conda/mamba for cross-language package management, Spack for HPC-optimized builds, and containerization technologies including Docker, Singularity, and Apptainer.

The session will cover best practices for version controlling software environments, avoiding common pitfalls like Anaconda's impact on parallel filesystems, and leveraging modern tools like repo2docker for automated reproducible environments. By the end of this presentation, participants will be equipped to select the most appropriate tool for their specific research needs and implement reproducible software management practices that enhance collaboration and ensure long-term accessibility of their computational work.
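
As a minimal illustration of this style of workflow (not part of the session materials), the sketch below exports a fully pinned conda environment to a file that can be committed alongside the code and rebuilt elsewhere; the environment names are placeholders, and mamba can be substituted for conda as a drop-in replacement:

```python
import subprocess

# Export the exact package versions of an existing environment ("analysis" is a
# placeholder name) to a file that can be version controlled with the code.
with open("environment.yml", "w") as f:
    subprocess.run(
        ["conda", "env", "export", "--name", "analysis", "--no-builds"],
        stdout=f,
        check=True,
    )

# On another machine (or in CI), recreate the pinned environment from that file.
subprocess.run(
    ["conda", "env", "create", "--file", "environment.yml", "--name", "analysis-repro"],
    check=True,
)
```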

Target Audience: Academic researchers currently managing software environments on HPC systems via command line who possess basic Linux proficiency and seek to implement more robust, reproducible software management practices for their AI and computational research workflows.
11:15 AM
BREAK (15 Min)
Beverages will be available throughout the day!
11:30 AM
The Reproducible Data Pipeline Playbook: Using GitOps and DataOps to Scale AI and Scientific Research
Building on the declarative and stateless principles introduced in software stack management, this session tackles the equally critical challenge of data reproducibility in AI workflows. While version controlling code has become standard practice, most researchers still manage datasets through ad-hoc file copying and manual organization, an approach that breaks down quickly when dealing with the hundreds of gigabytes to tens of terabytes typical in modern AI and scientific computing. This presentation introduces the transformative concept of "treating data like code" through GitOps and DataOps methodologies, extending the same declarative principles that ensure software reproducibility to data management.
Attendees will explore a progressive toolkit for data versioning, starting with Git LFS for seamless integration with existing Git workflows, advancing to DVC (Data Version Control) for sophisticated management of ML pipelines, and culminating with LakeFS for enterprise-scale data lake versioning with branching and merging capabilities.
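
As a minimal illustration of "treating data like code," the sketch below uses DVC's Python API to read one pinned revision of a versioned dataset; the repository URL, file path, and tag shown are hypothetical placeholders:

```python
import dvc.api

# Read a specific, pinned revision of a dataset that has been versioned in a
# DVC-enabled Git repository. The repo URL, path, and tag are placeholders.
with dvc.api.open(
    "data/measurements.csv",
    repo="https://github.com/example-lab/example-project",
    rev="v1.0",  # Git tag, branch, or commit that pins the data version
) as f:
    print(f.readline())  # e.g., inspect the header of that exact data version
```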

The session will demonstrate how S3-compatible object storage, including resources like the Open Storage Network (OSN) available to academic researchers, serves as the foundation for scalable, collaborative data management that transcends institutional boundaries. By the end of this talk, participants will be equipped to implement basic data versioning workflows immediately and understand the pathway to more sophisticated data management as their research scales.
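
For a flavor of how S3-compatible object storage fits into everyday scripts, here is a minimal boto3 sketch; the endpoint URL, bucket name, and credentials are placeholders for the values issued with a storage allocation (for example, an OSN allocation):

```python
import boto3

# Point boto3 at an S3-compatible endpoint instead of AWS. All identifiers
# below are placeholders; substitute the values from your storage allocation.
s3 = boto3.client(
    "s3",
    endpoint_url="https://example-osn-endpoint.org",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a local dataset, then list what the bucket contains.
s3.upload_file("data/measurements.csv", "my-research-bucket", "raw/measurements.csv")
for obj in s3.list_objects_v2(Bucket="my-research-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```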

Target Audience: Academic researchers working with substantial datasets in AI and scientific computing who seek to apply software development best practices to data management and establish reproducible, collaborative data workflows that complement their existing Git-based development processes.
12:30 PM
LUNCH BREAK (1 Hour)
A deli sandwich lunch will be provided to our in-person attendees!
1:30 PM
Implementing Open-Source LLMs in Research Computing: From Model Selection to On-Premises Deployment (Part 1 of 2)
This is a two-part series; the description below covers both sessions.

The rapid evolution of open-source large language models presents academic researchers with unprecedented opportunities to deploy powerful AI capabilities while maintaining complete control over sensitive data and computational workflows. This tutorial addresses the critical challenge facing research computing centers: how to navigate the complex landscape of open-source LLMs to make informed decisions about model selection, hardware requirements, and deployment strategies that align with existing HPC infrastructure and research objectives.

The session begins with a systematic exploration of leading open-source model families, including DeepSeek's reasoning-optimized variants, Meta's Llama ecosystem, Google's efficiency-focused Gemma models, and specialized options like MedGemma for healthcare applications. Attendees will learn to evaluate models based on benchmark performance, memory requirements, and domain-specific capabilities, while understanding the critical relationship between model parameters, precision formats (FP16/FP8/FP4), and GPU memory allocation. Beyond model selection, participants will explore the open-source framework ecosystem, comparing GUI tools like LM Studio for rapid prototyping against high-performance CLI engines like vLLM and SGLang suitable for SLURM integration and multi-GPU scaling. The session concludes with an introduction to Retrieval-Augmented Generation (RAG) pipelines, which enable researchers to query private document collections. This approach provides a key differentiator from commercial LLM services, ensuring data privacy and institutional control while extending model capabilities with domain-specific knowledge.
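
As a rough illustration of the relationship between parameter count, precision format, and GPU memory mentioned above, the sketch below estimates the VRAM needed just to hold model weights; the 20% overhead factor is a simplifying assumption, and the 70B-parameter example model is hypothetical:

```python
# Back-of-the-envelope VRAM estimate for holding open-weight model parameters.
# Real deployments also need memory for the KV cache and activations, which
# grow with context length and concurrent requests; the fixed 20% margin here
# is only a crude stand-in for that overhead.

BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

def estimate_weight_vram_gb(params_billions: float, precision: str,
                            overhead: float = 1.2) -> float:
    """Approximate VRAM (GB) for model weights plus a fixed overhead margin."""
    weight_gb = params_billions * BYTES_PER_PARAM[precision]  # 1e9 params * bytes/param ≈ GB
    return weight_gb * overhead

# Example: a hypothetical 70B-parameter model at different precision formats.
for precision in ("FP16", "FP8", "FP4"):
    print(f"70B @ {precision}: ~{estimate_weight_vram_gb(70, precision):.0f} GB")
```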

Target Audience: Academic researchers, HPC administrators, and research computing professionals seeking to understand open-source LLM deployment options, model selection criteria, and integration strategies for existing research computing infrastructure, with emphasis on data privacy, cost control, and workflow compatibility.
2:30 PM
SNACK BREAK (30 Min)
Snack, water, mix and mingle.
3:00 PM
Implementing Open-Source LLMs in Research Computing: From Model Selection to On-Premises Deployment (Part 2 of 2)
Part 2!
4:00 PM
(Optional) Q&A - (30 Min)
Got questions? I do! Now's the time to ask!
Speaker
Dr. Christopher S. Simmons
SCIENTIST-IN-RESIDENCE
CAMBRIDGE COMPUTER
Dr. Christopher S. Simmons is Cambridge Computer Services' Scientist-in-Residence, responsible for providing thought leadership for clients and cultivating relationships across the industry. He has over 25 years of experience in all facets of computational science and high-performance computing. Dr. Simmons earned his Ph.D. in Physical Chemistry, with an emphasis in Computational Quantum Mechanics, from the University of Texas at Austin and is the Project Lead for OpenHPC.
Join us on Wednesday, October 29
We look forward to hosting you!
