Scaling data, compute, and model size alone will hit diminishing returns without new algorithmic ideas
Fei‑Fei Li acknowledges that larger models and more data have produced impressive gains, yet she warns that simply adding GPUs will not solve core limitations.
She cites tasks that current models cannot perform, such as counting chairs in a 3‑D scene or deriving Newtonian physics from raw observations.
She argues that architectural breakthroughs on the scale of the transformer, along with new training paradigms, are still needed to achieve higher-level reasoning.
This perspective aligns with calls across the research community for “efficiency‑first” and “reasoning‑first” approaches.
The insight guides investors and labs to fund exploratory work beyond brute‑force scaling.
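To make the diminishing-returns point concrete, the sketch below evaluates a Chinchilla-style power-law fit, L(N) = E + A / N^alpha, where N is the parameter count. The constants are rough published estimates used purely for illustration; they are an assumption of this sketch, not something from Li's talk.

```python
# Illustrative sketch only: under an assumed power-law scaling fit, each 10x
# increase in model size buys a smaller absolute drop in loss. Constants are
# rough Chinchilla-style estimates (E ~ 1.69, A ~ 406.4, alpha ~ 0.34), used
# here to illustrate diminishing returns, not as a claim about any real model.

E, A, ALPHA = 1.69, 406.4, 0.34

def predicted_loss(n_params: float) -> float:
    """Loss predicted by the assumed power law L(N) = E + A / N**ALPHA."""
    return E + A / n_params ** ALPHA

prev = None
for exp in range(8, 14):              # model sizes from 1e8 to 1e13 parameters
    n = 10.0 ** exp
    cur = predicted_loss(n)
    gain = "" if prev is None else f"   gain from last 10x: {prev - cur:.3f}"
    print(f"N = 1e{exp:2d} params -> loss ~= {cur:.3f}{gain}")
    prev = cur
```

Under these assumed constants, each additional order of magnitude in parameters yields a smaller loss reduction (roughly 0.42, then 0.19, then 0.09, and so on), which is the quantitative form of the claim that brute-force scaling flattens out without new ideas.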
“I definitely think we need more innovations. Scaling more data, more GPUs and bigger current model architecture is still a lot to be done, but we absolutely need to innovate more.” — Fei‑Fei Li
While data, compute, and model size have driven recent advances, Fei-Fei Li stresses that continued breakthroughs require fresh ideas, especially in world modeling, embodied intelligence, and multimodal reasoning.