Generative AI in the Real World: Robert Nishihara on AI and the Future of Data

We need tools for working with multimedia data at scale

By Ben Lorica and Robert Nishihara
November 21, 2024

Robert Nishihara is one of the creators of Ray and cofounder of Anyscale, a platform for high-performance distributed data analysis and artificial intelligence. Ben Lorica and Robert discuss the need for data for the next generation of AI, which will be multimodal. What kinds of data will we need to develop models for video and multimodal data? And what kinds of tools will we use to prepare that data?

Check out other episodes of this podcast or the full-length version of this episode on the O’Reilly learning platform.


About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2024, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Timestamps

  • 0:00: Introduction
  • 1:06: Are we running out of data? 
  • 1:35: There's a paradigm shift in how the machine learning community thinks about AI. The innovation is now on the data side: finding data, evaluating data sources, curating data, creating synthetic data, and filtering out low-quality data. People are curating and processing data using AI itself; filtering out low-quality text or unimportant image data is an AI task.
  • 5:02: A lot of the tools were aimed at warehouses and lakehouses. Now we increasingly have more unstructured multimodal data. What’s the challenge for tooling?
  • 5:44: Lots of companies have lots of data. They get value out of it by running SQL queries on structured data, but structured data is limited; the real insight is in unstructured data, which will be analyzed using AI. Data work will shift from SQL-centric to AI-centric, and tooling for multimodal data processing is almost nonexistent.
  • 8:24: What should we expect in 2025? Better reasoning? Multiagent architectures?
  • 9:03: One benchmark of reasoning is math. Multimodality will be ubiquitous: text, images, and video. People will use AI to get insights out of data they didn't previously have a way to use.
  • 10:20: Very few companies will pretrain large models. Companies will instead train lots of smaller models, and lots of companies will do posttraining. Much of what we like about the foundation models we use today can be attributed to posttraining.
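The AI-driven curation idea from the 1:35 discussion (filtering out low-quality data before training) can be sketched as a simple score-and-filter pipeline. This is a minimal, dependency-free illustration, not anything from the episode: `quality_score` is a hypothetical stand-in for a real learned quality model, and the word-count heuristic exists only so the example runs on its own.

```python
# Hypothetical sketch of AI-assisted data curation: score each record
# with a quality model, then keep only records above a threshold.
# quality_score stands in for a real learned model; here it's a
# trivial length heuristic so the example has no dependencies.

def quality_score(record: dict) -> float:
    """Stand-in for a learned quality model: favors longer, non-empty text."""
    text = record.get("text", "")
    return min(len(text.split()) / 20.0, 1.0)

def curate(records: list[dict], threshold: float = 0.5) -> list[dict]:
    """Keep only records whose (model-assigned) quality clears the threshold."""
    return [r for r in records if quality_score(r) >= threshold]

records = [
    {"text": "ok"},                       # too short: scored 0.05, filtered out
    {"text": " ".join(["useful"] * 30)},  # 30 words: scored 1.0, kept
]
print(len(curate(records)))  # 1
```

In practice the heuristic would be replaced by a model call, and a framework such as Ray Data would distribute the scoring step across a cluster; the filter structure stays the same.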

Post topics: AI & ML, Generative AI in the Real World podcast
Post tags: Commentary
