Choose one of the following:
- Pre-training Llama 2 with the NVIDIA NeMo framework on Slurm
- Pre-training Llama 2 with the NVIDIA NeMo framework on GKE
- Fine-tuning Llama 3 with the NVIDIA NeMo framework on Slurm
- Running, optimising, and benchmarking Gemma with vLLM on GKE using L4 and H100 GPUs
- Converting Gemma from GPU to TPU with JetStream on GKE using TPU v5e