Exploring LangChain’s building blocks
To build practical applications, we need to know how to work with different model providers. Let’s explore the various options available, from cloud services to local deployments. We’ll start with fundamental concepts like LLMs and chat models, then dive into prompts, chains, and memory systems.
Model interfaces
LangChain provides a unified interface for working with various LLM providers. This abstraction makes it easy to switch between different models while maintaining a consistent code structure. The following examples demonstrate how to implement LangChain’s core components in practical scenarios.
Note that you should almost always prefer the newer chat models, since most model providers have adopted a chat-style interface for interacting with their language models. LangChain still offers the older LLM interface because it is very simple to use: string in, string out.