Small Language Models Are the New Rage, Researchers Say
The original version of this story appeared in Quanta Magazine.

Large language models work well because they're so large. The latest models from OpenAI, Meta, and DeepSeek use hundreds of billions of "parameters"—the adjustable knobs that determine connections among data and get tweaked during the training process. With more parameters, the models are better able