Speeding up training with FP8 and Triton
Yandex
Software & IT Services
Free
Event details
Nov 13 | 18:15 - 21:00
35 Moskovyan Street, Yerevan
Yandex

Training large language models takes massive compute power, so finding ways to make it faster and cheaper is a big deal. On November 13, Vladislav Savinov, Team Lead of YandexGPT pretraining, will share how modern research labs push the limits of GPU efficiency with FP8 precision and Triton. You’ll learn:

  • Why training in lower precision can cut costs and speed up compute
  • What recent FP8 research papers and open-source tools reveal
  • How GPUs actually work and how Triton kernels boost performance in practice (see the brief illustrative sketch after this list)
  • Real insights from YandexGPT’s production-scale FP8 pretraining
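
Not part of the event description, but as a quick orientation for attendees new to Triton: below is a minimal, illustrative sketch of a Triton kernel (an elementwise vector add) written in Python. It assumes the triton and torch packages and a CUDA-capable GPU, and is not taken from the talk materials.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance processes one contiguous block of elements.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements              # guard against out-of-bounds access
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = out.numel()
        grid = (triton.cdiv(n, 1024),)           # one program per 1024-element block
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out

Production kernels (matmuls, fused attention, FP8 GEMMs) follow the same pattern of blocked loads, masks, and stores, just with more elaborate tiling; recent PyTorch releases also expose FP8 dtypes such as torch.float8_e4m3fn that such kernels can operate on.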

Join us for a deep technical dive!
