Curating high-quality pretraining datasets is critical for enterprise developers aiming to train state-of-the-art large language models (LLMs). To enable developers to build highly accurate LLMs, NVIDIA previously released Nemotron-CC, a 6.3-trillion-token English-language Common Crawl (CC) dataset. Today, the NVIDIA NeMo Curator team is excited to share that the pipeline used to build the…
Data curation plays a crucial role in the development of effective and fair large language models (LLMs). High-quality, diverse training data directly impacts LLM performance, addressing issues like bias, inconsistencies, and redundancy. By curating high-quality datasets, we can ensure that LLMs are accurate, reliable, and generalizable. When training a localized multilingual LLM…
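To make the redundancy point concrete, here is a minimal illustrative sketch of one common curation step: exact deduplication by content hashing. This is plain Python for illustration, not NeMo Curator's API; the `normalize` helper and the sample `docs` list are assumptions.

```python
import hashlib

def normalize(text: str) -> str:
    # Hypothetical normalization: lowercase and collapse whitespace
    # so trivially different copies hash to the same value.
    return " ".join(text.lower().split())

def exact_dedup(docs: list[str]) -> list[str]:
    """Keep only the first occurrence of each distinct document."""
    seen: set[str] = set()
    unique: list[str] = []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Hello world.", "hello   world.", "A different document."]
print(exact_dedup(docs))  # the near-identical copy is dropped
```

Production pipelines typically add fuzzy deduplication (for example, MinHash-based near-duplicate detection) on top of exact hashing, since web-scale corpora contain many almost-identical pages.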
Data curation is the first, and arguably the most important, step in the pretraining and continued training of large language models (LLMs) and small language models (SLMs). NVIDIA recently announced the open-source release of NVIDIA NeMo Curator, a data curation framework that prepares large-scale, high-quality datasets for pretraining generative AI models. NeMo Curator, which is part of…
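As an illustration of the kind of heuristic quality filtering such a framework automates, the following is a minimal sketch in plain Python. The `Document` shape, threshold values, and heuristics are assumptions chosen for illustration; they are not NeMo Curator's actual interface or defaults.

```python
from dataclasses import dataclass

@dataclass
class Document:
    # Hypothetical record shape for illustration only.
    text: str

def passes_quality_heuristics(doc: Document,
                              min_words: int = 50,
                              max_symbol_ratio: float = 0.1) -> bool:
    """Assumed heuristics: drop very short documents and documents
    dominated by non-alphanumeric symbols (markup debris, boilerplate)."""
    words = doc.text.split()
    if len(words) < min_words:
        return False
    symbols = sum(1 for ch in doc.text if not (ch.isalnum() or ch.isspace()))
    if symbols / max(len(doc.text), 1) > max_symbol_ratio:
        return False
    return True

corpus = [Document("Too short to keep."), Document("word " * 100)]
curated = [d for d in corpus if passes_quality_heuristics(d)]
print(len(curated))  # 1: the short document is filtered out
```

Real curation stacks chain many such filters (language identification, perplexity scoring, PII redaction) and run them distributed across the corpus; the value of a framework is composing and scaling these steps rather than hand-rolling each one.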