The recent 0.2.0 release of Google’s Agent Development Kit (ADK) for Java adds an integration with the LangChain4j LLM framework. This integration gives developers access to the wide range of Large Language Models (LLMs) supported by LangChain4j for building AI agents.
In addition to ADK’s built-in Google Gemini and Anthropic Claude integrations, developers can now use LangChain4j to access other models from third-party providers (like OpenAI, Anthropic, GitHub, Mistral…) or local open-weight models, e.g. via Ollama or Docker Model Runner.
LangChain4j integration for a wide choice of models
The LangChain4j LLM framework supports a wide variety of models. You can check the list of supported models in the LangChain4j documentation. Let’s have a look at a couple of concrete examples, using Gemma with Docker Model Runner, and Qwen with Ollama.
When declaring your ADK agent with the LlmAgent builder, you specify the LLM via the model() builder method. You usually pass a string representing the name of the model, like “gemini-2.5-flash”.
It’s also possible to pass an instance of a class extending the BaseLlm abstract class. This is exactly what the LangChain4j integration does to create a bridge between the two frameworks: you use the new LangChain4j class, which extends BaseLlm.
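To make the two flavors concrete, here is a minimal sketch; the chatModel variable is a placeholder for any LangChain4j chat model, like the ones created in the examples below:

```java
// 1) Built-in integration: pass the model name as a plain string.
LlmAgent geminiAgent = LlmAgent.builder()
    .name("gemini-agent")
    .model("gemini-2.5-flash")
    .build();

// 2) LangChain4j bridge: pass a BaseLlm instance wrapping a
//    LangChain4j chat model (placeholder variable `chatModel`).
LlmAgent bridgedAgent = LlmAgent.builder()
    .name("bridged-agent")
    .model(new LangChain4j(chatModel))
    .build();
```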
Running Gemma 3 with Docker Model Runner
After installing and enabling Docker Model Runner on your machine, you can pull the Gemma 3 model with this command:
```shell
docker model pull ai/gemma3
```
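To verify the download, you can list your local models, or send the model a quick prompt straight from the terminal (assuming the standard Docker Model Runner CLI subcommands):

```shell
# List the models available locally
docker model list

# Send a one-off prompt to check that the model responds
docker model run ai/gemma3 "Say hello in one sentence"
```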
As Docker Model Runner exposes an OpenAI-compatible API surface, you can use the LangChain4j module for OpenAI-compatible models, by specifying the following dependencies in your Maven pom.xml:
```xml
<dependency>
    <groupId>com.google.adk</groupId>
    <artifactId>google-adk-contrib-langchain4j</artifactId>
    <version>0.2.0</version>
</dependency>
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>1.4.0</version>
</dependency>
```
Then, create a LangChain4j chat model, specifying the model you want to use, and the local URL and port:
```java
// Docker Model Runner serves an OpenAI-compatible endpoint
// on localhost:12434 by default
OpenAiChatModel dmrChatModel = OpenAiChatModel.builder()
    .baseUrl("http://localhost:12434/engines/llama.cpp/v1")
    .modelName("ai/gemma3")
    .build();
```
Now, configure a chess coach agent using that model:
```java
LlmAgent chessCoachAgent = LlmAgent.builder()
    .name("chess-coach")
    .description("Chess coach agent")
    .model(new LangChain4j(dmrChatModel))
    .instruction("""
        You are a knowledgeable chess coach
        who helps chess players train and sharpen their chess skills.
        """)
    .build();
```
Notice how the bridge between the two frameworks is established via the model(new LangChain4j(dmrChatModel)) call. And there you go: your AI agent is powered by a local model!
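To actually converse with the agent, you can use ADK’s in-memory runner. The following is a minimal sketch based on the ADK for Java quickstart; the user ID is arbitrary:

```java
// Run the agent with an in-memory session (no persistent storage)
InMemoryRunner runner = new InMemoryRunner(chessCoachAgent);

Session session = runner
    .sessionService()
    .createSession(runner.appName(), "user-1")
    .blockingGet();

Content userMessage = Content.fromParts(
    Part.fromText("How should I respond to the Queen's Gambit?"));

// Stream the agent's events and print their content as they arrive
runner.runAsync("user-1", session.id(), userMessage)
    .blockingForEach(event -> System.out.println(event.stringifyContent()));
```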
Running Qwen 3 with Ollama
If instead you want to build a friendly science teacher agent with the Qwen 3 model running locally on your machine via Ollama, first define your dependencies inside a Maven pom.xml build file:
```xml
<dependency>
    <groupId>com.google.adk</groupId>
    <artifactId>google-adk</artifactId>
    <version>0.2.0</version>
</dependency>
<dependency>
    <groupId>com.google.adk</groupId>
    <artifactId>google-adk-contrib-langchain4j</artifactId>
    <version>0.2.0</version>
</dependency>
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-ollama</artifactId>
    <version>1.4.0</version>
</dependency>
```
Let’s assume you’ve already installed Ollama on your machine, pulled the Qwen 3 model, and have it running on the default port 11434.
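If you still need to fetch the model, it’s a one-liner (assuming a standard Ollama installation; the 1.7b tag matches the model name used below):

```shell
# Download the 1.7B-parameter variant of Qwen 3
ollama pull qwen3:1.7b
```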
With LangChain4j, in Java, you instantiate the Ollama model provider as follows:
```java
OllamaChatModel ollamaChatModel = OllamaChatModel.builder()
    .modelName("qwen3:1.7b")
    .baseUrl("http://127.0.0.1:11434")
    .build();
```
Now let’s wire this model into a simple science teacher agent:
```java
LlmAgent scienceTeacherAgent = LlmAgent.builder()
    .name("science-app")
    .description("Science teacher agent")
    .model(new LangChain4j(ollamaChatModel))
    .instruction("""
        You are a helpful science teacher
        who explains science concepts to kids and teenagers.
        """)
    .build();
```
If the model supports function calling, you can give your agent access to tools as well: for example, MCP servers, or your own functions written in code, as sketched below. You can explore the various tools at your disposal in this article diving into ADK tools, or by looking at the ADK documentation.
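As an illustration, here is a sketch of a simple function tool wired into the science teacher agent. It assumes the FunctionTool.create(Class, String) factory and the Schema annotation (Annotations.Schema) from the ADK tools package; the getPlanetFact method itself is a hypothetical example:

```java
public class PlanetFacts {
  // Hypothetical tool method, exposed to the model via function calling
  @Schema(description = "Returns a fun fact about the given planet")
  public static Map<String, Object> getPlanetFact(
      @Schema(name = "planet", description = "Name of the planet") String planet) {
    String fact = switch (planet.toLowerCase()) {
      case "mars" -> "A day on Mars lasts about 24 hours and 37 minutes.";
      case "venus" -> "Venus rotates backwards compared to most planets.";
      default -> "No fun fact recorded for " + planet + " yet.";
    };
    return Map.of("fact", fact);
  }
}

// Wire the tool into the agent alongside the LangChain4j-bridged model
LlmAgent teacherWithTools = LlmAgent.builder()
    .name("science-app")
    .description("Science teacher agent with a planet-facts tool")
    .model(new LangChain4j(ollamaChatModel))
    .instruction("You are a helpful science teacher.")
    .tools(FunctionTool.create(PlanetFacts.class, "getPlanetFact"))
    .build();
```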
New Features in this Release
Beyond the headline LangChain4j integration, version 0.2.0 brings several other powerful enhancements to the agent development workflow:
- Expanded Tooling Capabilities: We’ve significantly improved how you create and manage tools.
  - Instance-based FunctionTools: You can now create FunctionTools from object instances, not just static methods, offering greater flexibility in your agent’s architecture (see the sketch after this list).
  - Improved Async Support: FunctionTools now support methods that return a Single. This improves asynchronous operation support and makes agents more responsive.
  - Better Loop Control: The new endInvocation field in Event Actions allows programmatic interruption or stopping of the agent loop after a tool call. This provides finer control over agent execution.
- Advanced Agent Logic and Memory:
  - Chained Callbacks: We’ve added support for chained callbacks for before/after events on model, agent, and tool execution. This enables more complex and fine-grained logic within your agent’s lifecycle.
  - New Memory and Retrieval: This version introduces an InMemoryMemoryService for simple, fast memory management and implements VertexAiRagRetrieval using AI Platform APIs for more advanced RAG patterns.
- Other key enhancements include a parent POM and the Maven Wrapper (./mvnw), ensuring a consistent and straightforward build process for all contributors.
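As an example of the instance-based tools mentioned above, creating a FunctionTool from a stateful object could look like the sketch below. Note that the instance-accepting overload of FunctionTool.create is my assumption here; check the release notes and Javadoc for the authoritative signature:

```java
// Hypothetical stateful tool: the counter lives on the object instance,
// not in a static field
public class CounterTool {
  private int count = 0;

  @Schema(description = "Increments and returns a counter")
  public Map<String, Object> increment() {
    return Map.of("count", ++count);
  }
}

// Assumed overload taking an object instance rather than a Class:
// FunctionTool.create(Object instance, String methodName)
FunctionTool counter = FunctionTool.create(new CounterTool(), "increment");

LlmAgent countingAgent = LlmAgent.builder()
    .name("counting-agent")
    .model(new LangChain4j(ollamaChatModel))
    .tools(counter)
    .build();
```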
Let’s put those AI agents to work
We’re thrilled to get this new version into your hands. The integration with LangChain4j marks a major step forward in making ADK for Java a more open and flexible framework for building powerful AI agents.
To learn more about this new version of ADK for Java, read the GitHub release notes. New to developing agents in Java with ADK? Check out the ADK for Java documentation, this getting started guide (and video), or fork this GitHub template project to begin quickly.
My colleague Michael Vorburger and I were happy to work on this LangChain4j integration, in collaboration with Dmytro Liubarskyi, who created LangChain4j. So if you’re building AI agents in Java with ADK, don’t hesitate to drop us a message at @glaforge on Twitter/X or @glaforge.dev on Bluesky. We’re looking forward to hearing about your great use cases.