LangChain: LLM Integration for Elixir Apps with Mark Ericksen
About this Episode
Published June 12, 2025 | Duration: 38:18
Mark Ericksen, creator of the Elixir LangChain framework, joins the Elixir Wizards to talk about LLM integration in Elixir apps. He explains how LangChain abstracts away the quirks of different AI providers (OpenAI, Anthropic’s Claude, Google’s Gemini) so you can work with any LLM through one consistent API. We dig into core features like conversation chaining, tool execution, automatic retries, and production-grade fallback strategies.
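To make the “one consistent API” idea concrete, here is a minimal sketch based on the library’s documented usage. The model names are examples, and the exact return shape of `run/1` (and whether `content` is a string or a list of content parts) varies between library versions, so treat this as illustrative rather than exact:

```elixir
# Assumes a recent release of the library, e.g. {:langchain, "~> 0.3"} in mix.exs
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.ChatModels.ChatAnthropic
alias LangChain.Message

# The chain-building code is identical across providers;
# only the chat-model struct changes.
llm = ChatOpenAI.new!(%{model: "gpt-4o"})
# llm = ChatAnthropic.new!(%{model: "claude-3-5-sonnet-latest"})

{:ok, updated_chain} =
  %{llm: llm}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Say hello in Elixir style."))
  |> LLMChain.run()

# Depending on version, content may be a string or a list of content parts.
IO.inspect(updated_chain.last_message.content)
```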
Mark shares his experiences maintaining LangChain in a fast-moving AI world: how it shields developers from API drift, manages token budgets, and handles rate limits and outages. He also reveals testing tactics for non-deterministic AI outputs, configuration tips for custom authentication, and the highlights of the new v0.4 release, including “content parts” support for thinking-style models.
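The fallback behavior discussed above looks roughly like the following sketch. The `with_fallbacks:` option and the `max_retry_count` field are taken from the library’s release notes and docs, but their exact names and semantics should be verified against your installed version:

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.{ChatOpenAI, ChatAnthropic}
alias LangChain.Message

chain =
  %{
    llm: ChatOpenAI.new!(%{model: "gpt-4o"}),
    # Assumption: max_retry_count governs automatic retries on
    # failed or invalid responses.
    max_retry_count: 3
  }
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Summarize our release notes."))

# If the primary model errors out (outage, rate limit), fall back to
# another provider so the request still completes.
{:ok, updated_chain} =
  LLMChain.run(chain,
    with_fallbacks: [ChatAnthropic.new!(%{model: "claude-3-5-sonnet-latest"})]
  )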
Key topics discussed in this episode:
• Abstracting LLM APIs behind a unified Elixir interface
• Building and managing conversation chains across multiple models
• Exposing application functionality to LLMs through tool integrations (see the tool sketch after this list)
• Automatic retries and fallback chains for production resilience
• Supporting a variety of LLM providers
• Tracking and optimizing token usage for cost control
• Configuring API keys, authentication, and provider-specific settings
• Handling rate limits and service outages with graceful degradation
• Processing multimodal inputs (text, images) in LangChain workflows
• Extracting structured data from unstructured LLM responses
• Leveraging “content parts” in v0.4 for advanced thinking-model support
• Debugging LLM interactions using verbose logging and telemetry
• Kickstarting experiments in Livebook notebooks and demos
• Comparing Elixir LangChain to the original Python implementation
• Crafting human-in-the-loop workflows for interactive AI features
• Integrating LangChain with the Ash framework for chat-driven interfaces
• Contributing to open-source LLM adapters and staying ahead of API changes
• Building fallback chains (e.g., OpenAI → Azure) for seamless continuity
• Embedding business logic decisions directly into AI-powered tools
• Summarization techniques for token efficiency in ongoing conversations
• Batch processing tactics to leverage lower-cost API rate tiers
• Real-world lessons on maintaining uptime amid LLM service disruptions
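As referenced in the tool-integration topic above, exposing application functionality to an LLM centers on the library’s `LangChain.Function` module. The sketch below is a hypothetical weather tool: the field names follow the library’s documented API, but the tool itself, the model name, and the return conventions are illustrative and may differ by version:

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Function
alias LangChain.Message

# A hypothetical tool exposing app functionality to the model.
weather_tool =
  Function.new!(%{
    name: "get_weather",
    description: "Look up the current weather for a city.",
    parameters_schema: %{
      type: "object",
      properties: %{city: %{type: "string", description: "City name"}},
      required: ["city"]
    },
    function: fn %{"city" => city}, _context ->
      # Real logic would call your own service here; this is a stub.
      {:ok, "It is 22°C and sunny in #{city}."}
    end
  })

{:ok, updated_chain} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
  |> LLMChain.new!()
  |> LLMChain.add_tools([weather_tool])
  |> LLMChain.add_message(Message.new_user!("What's the weather in Boise?"))
  # :while_needs_response keeps running until tool calls are resolved
  # and the model produces a final answer.
  |> LLMChain.run(mode: :while_needs_response)
```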
Links mentioned:
https://rubyonrails.org/
https://fly.io/
https://zionnationalpark.com/
https://podcast.thinkingelixir.com/
https://github.com/brainlid/langchain
https://openai.com/
https://claude.ai/
https://gemini.google.com/
https://www.anthropic.com/
Vertex AI Studio https://cloud.google.com/generative-ai-studio
https://www.perplexity.ai/
https://azure.microsoft.com/
https://hexdocs.pm/ecto/Ecto.html
https://oban.pro/
Chris McCord’s ElixirConf EU 2025 Talk https://www.youtube.com/watch?v=ojL_VHc4gLk
Getting started:
https://hexdocs.pm/langchain/getting_started.html
https://ash-hq.org/
https://hex.pm/packages/langchain
https://hexdocs.pm/igniter/readme.html
https://www.youtube.com/watch?v=WM9iQlQSF_g
@brainlid on Twitter and Bluesky
Special Guest: Mark Ericksen.