upcarta

Benchmarking Large Language Models on NVIDIA H100 GPUs with CoreWeave (Part 1)

  • Article
  • Apr 27, 2023
  • #NaturalLanguageProcessing #ComputerProgramming
Vitaliy Chiley
@vitaliychiley
(Author)
Daya Khudia
@DayaKhudia
(Author)
Read on www.mosaicml.com
1 Recommender
1 Mention
The research and engineering teams here at MosaicML collaborated with CoreWeave, one of the leading cloud providers for NVIDIA GPU-accelerated server platforms, to provide a preview of the performance that can be achieved when training large language models (LLMs) with NVIDIA H100 GPUs on the MosaicML platform.

Mentions
Jim Fan @DrJimFan · May 3, 2023
  • Post
  • From Twitter
Excellent blog post on the speed and convergence of GPT training runs on H100 (FP8)