
Beyond Leaderboards: LMArena’s Mission to Make AI Reliable

May 30, 2025
Listen to the episode on your favorite platforms:
  • Apple Podcasts
  • Spotify
  • Castbox
  • Pocket Casts
  • Overcast
  • Castro
  • RadioPublic

LMArena cofounders Anastasios N. Angelopoulos, Wei-Lin Chiang, and Ion Stoica sit down with a16z general partner Anjney Midha to talk about the future of AI evaluation. As benchmarks struggle to keep up with the pace of real-world deployment, LMArena is reframing the problem: what if the best way to test AI models is to put them in front of millions of users and let them vote? The team discusses how Arena evolved from a research side project into a key part of the AI stack, why fresh and subjective data is crucial for reliability, and what it means to build a CI/CD pipeline for large models.

They also explore:

  • Why expert-only benchmarks are no longer enough.
  • How user preferences reveal model capabilities — and their limits.
  • What it takes to build personalized leaderboards and evaluation SDKs.
  • Why real-time testing is foundational for mission-critical AI.

Follow everyone on X:

  • Anastasios N. Angelopoulos
  • Wei-Lin Chiang
  • Ion Stoica
  • Anjney Midha

Timestamps

  • LLM evaluation: From consumer chatbots to mission-critical systems
  • Style and substance: Crowdsourcing expertise
  • Building immunity to overfitting and gaming the system
  • The roots of LMArena
  • Proving the value of academic AI research
  • Scaling LMArena and starting a company
  • Benchmarks, evaluations, and the value of ranking LLMs
  • The challenges of measuring AI reliability
  • Expanding beyond binary rankings as models evolve
  • A leaderboard for each prompt
  • The LMArena roadmap
  • The importance of open source and openness
  • Adapting to agents (and other AI evolutions)

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.