
Shelly Palmer - Are LLMs the future?

Shelly Palmer has been named LinkedIn’s “Top Voice in Technology,” and writes a popular daily business blog.

Greetings from Terminal 5 at JFK. I'm heading to San Francisco this morning to attend Google Cloud's Leaders Circle at Pebble Beach. AI and golf… heaven! Speaking of AI, I updated my Sunday essay with some new, interesting strategies that will help unlock your prompt-crafting superpowers.

Speaking of interesting strategies, there's been a lot of chatter about the "end of LLMs" or "LLMs starting to fail." Those are only headlines – clickbait, really. If you dig a bit deeper, you'll read that there are several schools of thought regarding how to efficiently scale the foundational models.

If the goal is AGI (artificial general intelligence, a term with no agreed-upon definition), then just adding compute power to pre-training may not be the best path to follow. Instead, researchers are exploring an alternative called "inference scaling" to achieve smarter AI. Inference, the process by which an AI model generates outputs and answers, can be optimized by having models "think" through multiple possibilities before settling on a response. This approach enables complex reasoning during real-time use without increasing model size.

OpenAI's recently launched o1 model is a good example. By enhancing inference, o1 can tackle tasks that demand layered decision-making, such as coding or problem-solving, in ways similar to human thought. "Test-time compute" techniques make this possible, allowing models to dedicate more processing to challenging queries as needed.
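To make the idea concrete, here is a minimal sketch of one common test-time-compute technique, best-of-n sampling: instead of accepting a model's first answer, you generate several candidates and keep the highest-scoring one. The generator and scorer below are hypothetical stand-ins (a real system would call an actual model and a verifier or reward model); this is an illustration of the pattern, not OpenAI's implementation.

```python
import random

def generate_candidates(prompt, n, seed=0):
    """Stand-in for an LLM: produce n candidate answers.
    (Toy generator; a real system would sample a model n times.)"""
    rng = random.Random(seed)
    return [f"{prompt} -> candidate {rng.randint(0, 999)}" for _ in range(n)]

def score(candidate):
    """Stand-in verifier/reward model: here we just score by length.
    In practice this would be a learned scorer or a correctness check."""
    return len(candidate)

def best_of_n(prompt, n=8):
    """Test-time compute: spend more inference on a hard query by
    sampling n candidates and returning the best-scoring one."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=score)
```

The key design point is the knob `n`: an easy query can get `n=1`, while a hard coding or reasoning query can get a much larger `n`, trading inference cycles for answer quality without changing the model itself.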

A move toward inference-focused, distributed, cloud-based servers (instead of large, centralized training clusters) might create a more competitive chip landscape. While NVIDIA is the go-to chipmaker for pre-training hardware, there are a bunch of chipmakers (AMD, Intel, etc.) that make hardware suitable for this new method of inference scaling.

The key takeaway is simple: LLMs are not failing; they are evolving. Sensationalist headlines aside, this is how product development works.

As always, your thoughts and comments are both welcome and encouraged. Just reply to this email. -s

P.S. CES® is just around the corner (Las Vegas, January 7-10, 2025). Are you going? If you are, our executive briefings and floor tours are the best way to experience the show.


 

ABOUT SHELLY PALMER

 

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business, is a regular commentator on CNN, and writes a popular daily business blog.
