Building AI Agents on the JVM
Most talks and articles about AI are either about AI itself or about using AI products. Far fewer cover building something that can actually plug into an existing product and provide capabilities that were previously impractical.
In this post, I’ll build a small-but-realistic example on the JVM: a “car salesman” chat agent that can query inventory, keep a conversation with the customer, and finally prepare a draft deal for finance approval.
What we’re building: an AI agent
The agent will:
- Keep a conversation with customers in natural language
- Use provided tools to fetch cars you actually have for sale
- Keep context across the conversation (so the user doesn’t have to repeat themselves)
- Draft a deal, but not approve it (never let AI make the final decision)
The problem we’re trying to solve is the limitation of existing UI/UX options. Customers think in use cases, not filters. Someone might know they need “a family car” or a car for a “first-time driver”. Existing UI options typically force the customer to choose filters based on technical properties like “number of doors”, “acceleration”, or “engine type”. We’ll use an AI agent to translate use cases into technical requirements.
- “A family car” → 5 doors, ISOFIX, decent boot
- “First driver” → cheap-ish, easy to drive, not too powerful
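To make that translation concrete, here is a rough sketch of the mapping the agent performs implicitly. The `UseCaseFilters` type and the thresholds below are illustrative assumptions, not part of any library:

```kotlin
// Illustrative only: in the real system the LLM performs this mapping;
// the data class and the thresholds are made up for demonstration.
data class UseCaseFilters(
    val minDoors: Int? = null,
    val maxPrice: Int? = null,
    val maxHorsepower: Int? = null,
    val isofixRequired: Boolean = false,
)

fun translate(useCase: String): UseCaseFilters = when (useCase) {
    "family car" -> UseCaseFilters(minDoors = 5, isofixRequired = true)
    "first driver" -> UseCaseFilters(maxPrice = 15_000, maxHorsepower = 120)
    else -> UseCaseFilters()
}
```

The agent does the same thing, except the mapping is learned rather than hard-coded, which is exactly why it can handle phrasings we never anticipated.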
In this example we use:
- Kotlin
- Koog as the agent framework (Koog docs)
- CLI interface
Koog in two minutes
Koog is an open-source framework for building AI agents in Kotlin.
What you get out of the box:
- A consistent way to call different LLM providers
- Tool abstractions (so the model can call our code)
- Building blocks for multi-step flows (including tool loops)
- Utilities for reliability and context control
Step 1: Build tools
Let’s start by building tools. Tools are just functions that the agent can call. In Koog, an easy way (on the JVM) is to:
- Implement ToolSet
- Annotate methods with @Tool
- Optionally add @LLMDescription to give the model clearer intent on how to use it
Here’s the smallest possible tool set:
```kotlin
import ai.koog.agents.core.tools.annotations.Tool
import ai.koog.agents.core.tools.reflect.ToolSet

class CarCatalogue : ToolSet {
    @Tool
    fun getAvailableCars() {}
}
```
@LLMDescription is optional, but it noticeably reduces the probability of the agent misusing the tool.
A more complete example could look like this:
```kotlin
@LLMDescription("Tools for fetching the available car catalogue and car details")
class CarCatalogue : ToolSet {
    @Tool
    @LLMDescription("Return the list of currently available cars")
    fun getAvailableCars(
        @LLMDescription("Filter the list of cars by brands")
        brands: List<Brand>? = null,
        @LLMDescription("Filter the list of cars by the production year range")
        productionYears: Range? = null,
        @LLMDescription("Filter the list of cars by engine type")
        engines: List<Engine>? = null,
        @LLMDescription("Filter the list of cars by the number of seats")
        numberOfSeats: Range? = null,
        @LLMDescription("Filter the list of cars by the price range")
        prices: Range? = null,
    ): List<Car> {
        // Run a query on the database and return the matching cars
        TODO("query the inventory database")
    }
}
```
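The snippet refers to the domain types Brand, Engine, Range, and Car without showing them. A minimal sketch of what they might look like follows; these definitions are my assumption for illustration, not Koog types:

```kotlin
// Assumed domain types; not part of Koog, just what the tool signature needs.
enum class Brand { BMW, TOYOTA, VOLKSWAGEN }
enum class Engine { PETROL, DIESEL, ELECTRIC, HYBRID }

// A simple inclusive range, reused for years, seats, and prices
data class Range(val from: Int, val to: Int) {
    operator fun contains(value: Int) = value in from..to
}

data class Car(
    val brand: Brand,
    val model: String,
    val productionYear: Int,
    val engine: Engine,
    val seats: Int,
    val price: Int,
)
```

Reusing one Range type for all numeric filters keeps the tool signature small, which in turn keeps the schema the LLM has to reason about small.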
I won’t paste the whole mock implementation here. If you’re interested in how the rest of the class looks, check the link.
To make things slightly more interesting, let’s add one more tool: a tool that lets the agent build a deal draft and send it to finance for approval.
```kotlin
@LLMDescription("Tools for financing the car purchase")
class Finance : ToolSet {
    @Tool
    @LLMDescription("Send a deal draft to the finance department for approval")
    fun buildDealDraft(
        @LLMDescription("Name of the car brand & model")
        car: String,
        @LLMDescription("Name of the customer")
        customer: String,
        @LLMDescription("Proposed price for the car")
        price: Int,
    ) {
        // Send the draft to the finance department
    }
}
```
Step 2: Register tools and build the agent
Now that the tools are ready, let’s put them all together. Luckily, the heavy lifting is covered by Koog.
```kotlin
suspend fun main() {
    val carCatalogueRegistry = ToolRegistry {
        tools(CarCatalogue().asTools())
    }
    val financeRegistry = ToolRegistry {
        tools(Finance().asTools())
    }

    val agent = AIAgent(
        promptExecutor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY")),
        llmModel = OpenAIModels.Chat.GPT4o,
        systemPrompt = """
            You are a helpful car catalogue assistant. Be approachable,
            friendly and professional. Don't use technical jargon.
            Try to understand the user's needs and find a car that is
            available right now. Help users find the best car for their
            needs by suggesting available cars from the catalogue.
            Only use the available tools to fetch the car data. You don't know
            anything beyond that. Don't look up online sources, or make up any
            data. The user can't negotiate discounts with you.
        """.trimIndent(),
        toolRegistry = carCatalogueRegistry + financeRegistry
    )

    val response = agent.run("I need a compact electric city car")
    println(response)
}
```
This is the nice part of using a framework: you pick a model/provider, provide credentials, define a system prompt, and attach tools.
One more reminder: a system prompt is not an enforcement mechanism. Think of it as guidance for the agent. Although we state that the agent can’t give any discounts, never assume it will always follow the rules.
The real way to enforce business rules is:
- Validation in your tools
- Hard business rules in your domain layer
- Human approval
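As a sketch of the first two points, the deal-draft tool could validate its inputs before anything reaches finance. The function name, the exception type, and the price bounds below are illustrative assumptions, not part of the example's actual domain:

```kotlin
// Illustrative tool-level validation; the bounds are made up for demonstration.
class DealValidationException(message: String) : IllegalArgumentException(message)

fun validateDealDraft(car: String, customer: String, price: Int) {
    require(car.isNotBlank()) { "Car must be specified" }
    require(customer.isNotBlank()) { "Customer must be specified" }
    // Hard business rule enforced in code, not in the prompt:
    // the agent cannot submit a price outside the allowed range.
    if (price < 1_000 || price > 500_000) {
        throw DealValidationException("Price $price is outside the allowed range")
    }
}
```

Even if the model ignores the "no discounts" instruction in the prompt, a check like this guarantees an out-of-bounds draft never leaves the tool.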
Step 3: Make the agent conversational
The previous example works, but it’s limited to a single question. In a real chat you need multi-turn behavior, and when tools are involved you also need a “tool loop”.
The key part is this:
- You call the LLM
- If it returns tool calls, you execute them
- You send tool results back
- You repeat only while tool calls exist
Koog’s functional strategy examples follow this exact shape. A simplified version looks like this:
```kotlin
suspend fun main() {
    val agent = AIAgent(
        // skipping the parts repeated from the previous example
        toolRegistry = carCatalogueRegistry + financeRegistry,
        strategy = functionalStrategy { initialInput ->
            var input = initialInput
            while (input.isNotBlank()) {
                // Call the LLM, execute any tool calls, and feed results back
                val responses = callTools(requestLLMMultiple(input))
                println(responses.map { it.asAssistantMessage().content })
                // readLine() is nullable; treat EOF as an empty line to exit
                input = readLine() ?: ""
            }
            input
        }
    )
    agent.run(readLine() ?: "")
}
```
And that’s it — our example is fully functional. If you want to see the full working version, check the code here. You can run it on your machine; just remember to provide your API key.
What a conversation looks like
Here is an example of what a conversation with the AI agent can look like. Note when the agent executes a tool and with which parameters.
getAvailableCars with args {"brands":["BMW"],"engines":["ELECTRIC"]}

Here are a few compact electric options currently available.
… agent lists the cars …
buildDealDraft with args {"car":"BMW i3","customer":"Damian","price":26000}

Conclusion
As you can see, building a system this way is relatively easy. The hard parts are the ones a simple demo like this won’t show. The agent can decide on its own what tools it will use and with what parameters — which makes the whole process non-deterministic. As a developer, you have limited influence over that process. Sure, you can improve it with better hints, prompts, and different models, but in the end you don’t fully control how the model will behave.
The use case for AI agents in an existing system is limited by unpredictability, but it’s absolutely there. And using tools like Koog opens doors that weren’t practical before.
You can find the full working implementation of the example in a public repository: https://gitlab.com/garstecki/car-salesman