
Past Events

Training and Events

Ookami Webinar 02/29/24 - 02:00 PM - 03:00 PM EST

Whether you are interested in Ookami and considering getting an account, are a new user, or are a longtime user who wants to optimize their usage, this webinar is for you! It will cover the basics of the system and how to get an account, and for existing users it will also share tips and tricks on how to use Ookami efficiently for your research.

Ookami has available cycles (CPU only) and is welcoming new users.

Data Parallelism: How to Train Deep Learning Models on Multiple GPUs (NVIDIA Deep Learning Institute) 02/29/24 - 11:00 AM - 07:00 PM EST

Modern deep learning challenges leverage increasingly large datasets and more complex models. As a result, significant computational power is required to train models effectively and efficiently. Learning to distribute data across multiple GPUs during deep learning model training makes possible a wealth of new applications that utilize deep learning.
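
For context, a minimal sketch of single-node data parallelism with PyTorch's DistributedDataParallel is shown below. It assumes a PyTorch installation with NCCL and a launch via torchrun; the tiny model and random data are placeholders, not course material.

    # Minimal data-parallel training sketch (assumes PyTorch + NCCL), launched with:
    #   torchrun --nproc_per_node=<num_gpus> ddp_sketch.py
    # The linear model and random data are placeholders, not course content.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    def main():
        dist.init_process_group(backend="nccl")        # one process per GPU
        local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
        torch.cuda.set_device(local_rank)

        model = DDP(torch.nn.Linear(32, 1).cuda(local_rank), device_ids=[local_rank])
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
        sampler = DistributedSampler(dataset)          # shards the data per rank
        loader = DataLoader(dataset, batch_size=64, sampler=sampler)

        for epoch in range(2):
            sampler.set_epoch(epoch)                   # reshuffle shards each epoch
            for x, y in loader:
                x, y = x.cuda(local_rank), y.cuda(local_rank)
                loss = torch.nn.functional.mse_loss(model(x), y)
                optimizer.zero_grad()
                loss.backward()                        # gradients are all-reduced across GPUs
                optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()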

NCSA Quantum Tutorial: Intro to Quantum Computing with Classiq 02/29/24 - 10:00 AM - 12:00 PM EST

This is a practical introductory workshop on using the Classiq platform to model quantum algorithms with a high-level modeling language, optimize quantum circuits using a hardware-aware approach and smart synthesis, and run the optimized circuits on a variety of real quantum hardware and simulators. No previous quantum computing knowledge is required, and we encourage participation from everyone interested in learning more about quantum computing. After the workshop, attendees will be able to use the Classiq platform.

Model Parallelism: Building and Deploying Large Neural Networks (NVIDIA Deep Learning Institute) 02/28/24 - 11:00 AM - 07:00 PM EST

Large language models (LLMs) and deep neural networks (DNNs), whether applied to natural language processing (e.g., GPT-3), computer vision (e.g., huge Vision Transformers), or speech AI (e.g., Wav2Vec 2), have certain properties that set them apart from their smaller counterparts. As LLMs and DNNs become larger and are trained on progressively larger datasets, they can adapt to new tasks with just a handful of training examples, accelerating the route toward general artificial intelligence.

ACES: Introduction to Data Science in R 02/27/24 - 11:00 AM - 05:00 PM EST

This course is an introduction to the R programming language and covers the fundamental concepts needed to operate in the R environment with a particular focus on data science. This course assumes no prior experience with R.

Includes a 1-hour lunch break.

More information about this Short Course at https://hprc.tamu.edu/training/aces_intro_r.html
 

Building Transformer Based Natural Language Processing Applications (NVIDIA Deep Learning Institute) 02/22/24 - 11:00 AM - 04:00 PM EST

Applications for natural language processing (NLP) and generative AI have exploded in the past decade. With the proliferation of applications like chatbots and intelligent virtual assistants, organizations are infusing their businesses with more interactive human-machine experiences. Understanding how transformer-based large language models (LLMs) can be used to manipulate, analyze, and generate text-based data is essential.

ACES: GPU Programming 02/20/24 - 02:30 PM - 05:00 PM EST

This short course covers basic topics in CUDA programming on NVIDIA GPUs. Topics include

  • CUDA architecture
  • basic language usage of CUDA C/C++
  • writing and executing CUDA code (a minimal kernel sketch follows this entry)

More information about this Short Course at https://hprc.tamu.edu/training/intro_cuda.html
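
The short course itself teaches CUDA C/C++; purely as an illustration of writing and launching a kernel, the sketch below uses Numba's Python CUDA bindings instead, assuming Numba and an NVIDIA GPU with a working CUDA driver are available.

    # Illustrative only: a vector-add kernel via Numba's CUDA bindings (the course
    # itself covers CUDA C/C++). Assumes numba and an NVIDIA GPU are available.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)                 # global thread index
        if i < out.size:                 # guard against out-of-range threads
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)

    d_a, d_b = cuda.to_device(a), cuda.to_device(b)   # copy inputs to the GPU
    d_out = cuda.device_array_like(d_a)

    threads_per_block = 256
    blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks_per_grid, threads_per_block](d_a, d_b, d_out)  # launch kernel

    result = d_out.copy_to_host()
    assert np.allclose(result, a + b)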

Learn About the PATh Facility 02/20/24 - 02:30 PM - 04:00 PM EST

Supported by the same groups that run OSG services, the PATh Facility provides dedicated throughput computing capacity to NSF-funded researchers for longer and larger jobs than typically run on OSG services like the OSPool. This training will describe its features and how to get started. If you have found that your jobs need more resources (cores, memory, time, data) than are typically available in the OSPool, this resource might be for you!

ACES: AI/ML Techlab in Jupyter Notebooks 02/20/24 - 11:00 AM - 01:30 PM EST

Accelerating AI/ML Workflows on a Composable Cyberinfrastructure

NVIDIA GenAI/LLM Virtual Workshop Series for Higher Ed 02/16/24 - 08:00 AM - 02/29/24 - 04:00 PM EST

Join NVIDIA’s Deep Learning Institute (DLI) this February for a series of free, virtual instructor-led workshops providing hands-on experience with GPU-accelerated servers in the cloud to complete end-to-end projects in the areas of Generative AI and Large Language Models (LLMs). Each of these workshops is led by a DLI Certified Instructor and offers an opportunity to earn an industry-recognized certificate of competency based on assessments to support your career growth.

COMPLECS: HPC Security and Getting Help 02/15/24 - 02:00 PM - 03:30 PM EST

HPC systems are shared resources; therefore, all users must be aware of the complexity of working in a shared environment and the implications for resource management and security. This module also addresses two essential and related sets of skills that should be part of everyone’s toolbox but are frequently overlooked: (1) solving problems on your own by leveraging online resources and (2) working effectively with the help desk or user support by properly collecting the information needed to resolve your problem.

ACES: Using the Slurm Scheduler on Composable Resources 02/13/24 - 02:30 PM - 05:00 PM EST

This Short Course (2.5 hours) introduces researchers to the Slurm scheduler on the ACES cluster, a composable accelerator testbed at Texas A&M University. Topics covered include multiple job scheduling approaches and job management tools.

More information about this Short Course at https://hprc.tamu.edu/training/aces_slurm.html
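
To give a flavor of batch submission, the sketch below writes a small Slurm batch script and submits it with sbatch from Python. It assumes you are on a cluster login node with sbatch on the PATH; the resource values and job command are placeholders, and ACES-specific partition and account settings are covered in the course itself.

    # Minimal batch-submission sketch: resource values and the job command are
    # placeholders; real ACES partition/account settings are covered in the course.
    import subprocess
    from pathlib import Path

    script = """#!/bin/bash
    #SBATCH --job-name=demo
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --time=00:05:00
    #SBATCH --output=demo_%j.out

    hostname
    """

    path = Path("demo_job.sh")
    path.write_text(script)

    # sbatch prints something like "Submitted batch job 123456"
    result = subprocess.run(["sbatch", str(path)], capture_output=True, text=True, check=True)
    print(result.stdout.strip())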

Introduction to Composable Resources: ACES and FASTER 02/13/24 - 11:00 AM - 01:30 PM EST

Research computing on the composable ACES and FASTER clusters

This course will provide an overview of composable technology, where hardware can be reallocated between servers based on user requirements, featuring the advanced accelerators available on the composable ACES and FASTER clusters at Texas A&M University. Topics covered include hardware, access, policies, file systems, and batch processing.

AI for Science Using Delta 02/08/24 - 02:00 PM - 03:00 PM EST

NCSA is hosting a hands-on AI training series comprising ten sessions to enable researchers to become proficient in using AI techniques on modern supercomputers. The series will be taught by experienced AI practitioners and will cover basic, intermediate, and advanced AI topics. Each session will begin at 1 PM (CT) and last approximately 1 hour. Presentation materials and software ready to use on Delta will be provided. All sessions will be recorded and made available for later viewing by registrants on NCSA's HPC-Moodle.

ACES: Running Jupyter Notebook on the ACES Portal 02/06/24 - 02:30 PM - 03:30 PM EST

This 1-hour primer covers starting and using a Jupyter notebook in Open OnDemand on the ACES cluster, a composable accelerator testbed at Texas A&M University.

More information about this primer at https://hprc.tamu.edu/training/primers_popup.html

ACES: Using the Slurm Scheduler 02/06/24 - 11:00 AM - 12:00 PM EST

This 1-hour primer covers various job scheduling approaches using the Slurm Workload Manager on the ACES cluster, a composable accelerator testbed at Texas A&M University.

More information about this primer at https://hprc.tamu.edu/training/primers_popup.html

Anvil 101 02/02/24 - 02:30 PM - 04:00 PM EST

This webinar covers the basics of connecting to Anvil, managing the user environment, running jobs, and managing data. Knowledge of basic UNIX commands and submitting batch jobs to a cluster will be helpful.

Topics Overview:

AI for Science Using Delta 02/01/24 - 02:00 PM - 03:00 PM EST

NCSA is hosting a hands-on AI training series comprising ten sessions to enable researchers to become proficient in using AI techniques on modern supercomputers. The series will be taught by experienced AI practitioners and will cover basic, intermediate, and advanced AI topics. Each session will begin at 1 PM (CT) and last approximately 1 hour. Presentation materials and software ready to use on Delta will be provided. All sessions will be recorded and made available for later viewing by registrants on NCSA's HPC-Moodle.

COMPLECS: Linux Tools for File Processing 02/01/24 - 02:00 PM - 03:30 PM EST

Many computational and data-processing workloads require pre-processing of input files to get the data into a format compatible with the user’s application and/or post-processing of output files to extract key results for further analysis. While these operations could be done by hand, they tend to be time-consuming, tedious, and, worst of all, error-prone. In this session we cover the Linux tools awk, sed, grep, sort, head, tail, cut, paste, cat, and split, which help users easily automate repetitive tasks.
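
The session focuses on the Linux tools themselves; for comparison, the short Python sketch below performs the same kind of filter-extract-sort step (roughly grep + awk + sort) on a hypothetical results file whose lines look like "step 42 energy -1.234".

    # Rough Python analogue of: grep energy results.log | awk '{print $4}' | sort -n
    # Filter lines, pull out one column, and sort numerically. The file name and
    # line format ("step <n> energy <value>") are hypothetical.
    from pathlib import Path

    values = []
    for line in Path("results.log").read_text().splitlines():
        if "energy" in line:                 # grep: keep matching lines
            fields = line.split()            # awk: split on whitespace
            values.append(float(fields[3]))  # take the 4th field

    for v in sorted(values):                 # sort -n: numeric sort
        print(v)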

Open OnDemand Tips and Tricks Call 02/01/24 - 01:00 PM - 01:30 PM EST

Hosted by the community, these tips and tricks webinars share best practices for all things Open OnDemand. They take place on the first Thursday of every month at 1 p.m. ET. Recordings of previous events are available on the Open OnDemand website.
