Generative AI in the Real World: Putting AI in the Hands of Farmers with Rikin Gandhi
Answering Practical Questions about Agriculture with AI
If you want to fine-tune your prompting skills, make sure to attend O’Reilly’s Prompt to Product Showdown virtual event, led by Lucas Soares, on August 15 from 9AM to noon PDT. You’ll see how five experts use carefully crafted prompts to build minimum viable products with generative AI.
Rikin Gandhi, CTO of Digital Green, talks with Ben Lorica about using generative AI to help farmers in developing countries become more productive. Farmer.Chat integrates information from training videos, weather and crop data, and other sources in a multimodal app that farmers can use in real time.
Check out other episodes of this podcast or the full-length version of this episode on the O’Reilly learning platform.
About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2024, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.
Timestamps
- 0:00: Introduction.
- 0:45: Digital Green helps farmers become more productive. Two years ago Digital Green developed Farmer.Chat, an app that uses generative AI to put local language training videos together with weather data, market information, and other data.
- 2:09: Our primary data source is our library of 10,000 videos in 40 languages that have been produced by farmers. We integrate additional sources for weather and market information. More recently we’ve added information support tools.
- 3:38: We have a smartphone app. Users who only have feature phones can call into a number and interact with a bot.
- 5:00: When did you realize that generative AI opened up new possibilities?
- 5:43: It was a gradual transition from offline videos on projectors. COVID didn’t allow us to get groups of farmers together. And more farmers came online in the same period.
- 7:47: We had a deterministic bot before Farmer.Chat. People were using it, but they had to traverse a decision tree to get the information they wanted. That tree was challenging to create and difficult to use. With GPT-3, we saw that we could move away from that complexity.
- 8:35: Your situation is inherently multimodal: video, speech-to-text, voice. Is this a challenge? Yes, but it’s also an opportunity. We’re now using tools like GPT Vision to get descriptive metadata about what’s in videos. It becomes part of the database. We began with text queries; we added voice support. And now people can take a photo of a crop or an animal.
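The pipeline Gandhi describes at 8:35 — extracting descriptive metadata from videos with a vision model and making it searchable — can be sketched roughly as follows. This is an illustrative assumption, not Digital Green's actual implementation: `describe_frame` is a hypothetical stand-in for a vision-model call, and the keyword index is a toy; a production system would more likely use embeddings and a vector store.

```python
from dataclasses import dataclass


@dataclass
class VideoRecord:
    video_id: str
    language: str
    description: str  # descriptive metadata, e.g. produced by a vision model


def describe_frame(frame_path: str) -> str:
    """Hypothetical stand-in for a vision-model call that returns a
    text description of a video frame."""
    raise NotImplementedError("replace with a real vision-model call")


class MetadataIndex:
    """Toy keyword index over video descriptions."""

    def __init__(self):
        self.records: list[VideoRecord] = []

    def add(self, record: VideoRecord) -> None:
        self.records.append(record)

    def search(self, query: str) -> list[VideoRecord]:
        # Match any query term as a substring of the description.
        terms = query.lower().split()
        return [r for r in self.records
                if any(t in r.description.lower() for t in terms)]


index = MetadataIndex()
index.add(VideoRecord("v1", "hi",
                      "Farmer demonstrates drip irrigation for tomato seedlings"))
index.add(VideoRecord("v2", "sw",
                      "Treating leaf rust on a wheat crop with fungicide"))

hits = index.search("irrigation for tomatoes")
print([r.video_id for r in hits])  # → ['v1']
```

The same index could then back text, transcribed-voice, or photo queries: a photo would first pass through the vision model to produce a text description, which becomes the query.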