Tool use is the second of the three core agentic AI patterns -- the mechanism by which a language model stops being a pure text generator and starts taking action in the outside world. Reading a database, hitting an HTTP endpoint, running a shell command, posting to Slack: every one of those is a tool call. The pattern shows up alongside RAG and memory in roles like the RPM Interactive AI Product Engineer Contract. This article is the entry point: the loop, when to add a tool, when not to, and where to dive deeper.
An LLM that can only emit text is a very expressive autocomplete. An LLM that can call typed functions is an agent -- it perceives the world (function results) and acts on it (function calls), which is the textbook definition of an agent in AI. That closed perception-action loop is what makes tool use the pivot point.
Tool use is a three-phase conversation between the model, the runtime, and the outside world:
# 1. Declaration -- developer registers tool schemas
tools = [{
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]
# 2. Invocation -- model emits a structured tool call
response = llm(messages, tools=tools)
if response.tool_calls:
    call = response.tool_calls[0]
    # call.name == "get_weather", call.args == {"city": "Paris"}
# 3. Result injection -- runtime executes, feeds result back
result = registry[call.name](**call.args)
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
response = llm(messages, tools=tools) # model now has the answer
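The three phases above generalize into a driver loop: call the model, execute whatever tools it asks for, re-inject the results, and repeat until it answers in plain text. A minimal sketch -- the `llm`, `registry`, and response shapes are stand-ins mirroring the hypothetical example above, not any real provider SDK; the demo uses a scripted fake model:

```python
from types import SimpleNamespace

def run_agent(llm, registry, messages, tools, max_steps=10):
    """Drive the loop: invoke model, execute tools, re-inject, repeat."""
    for _ in range(max_steps):
        response = llm(messages, tools=tools)
        if not response.tool_calls:
            return response.content                # plain-text answer: done
        for call in response.tool_calls:
            result = registry[call.name](**call.args)
            messages.append({"role": "tool",
                             "tool_call_id": call.id,
                             "content": result})
    raise RuntimeError("agent exceeded max_steps without answering")

# Demo with a scripted fake model: one tool call, then a final answer.
script = [
    SimpleNamespace(content=None, tool_calls=[SimpleNamespace(
        id="c1", name="get_weather", args={"city": "Paris"})]),
    SimpleNamespace(content="It is 18 degrees in Paris.", tool_calls=[]),
]
fake_llm = lambda messages, tools: script.pop(0)
registry = {"get_weather": lambda city: f"18C in {city}"}
answer = run_agent(fake_llm, registry, [], tools=[])
```

The `max_steps` cap matters in practice: a model stuck in a tool-calling loop should fail loudly rather than burn tokens forever.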
Modern providers (Anthropic, OpenAI, Google) all implement variations on this protocol. The schemas are JSON Schema; the wire format differs slightly per provider; the loop is the same. Capable agents emit parallel tool calls (several at once) and the runtime fans out execution before re-injecting -- this is what turns a 10-step sequential agent into a 3-step parallel one.
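The fan-out step can be sketched with a thread pool: each call in the batch executes concurrently, and the tool messages are re-injected in the original order. The call objects mirror the hypothetical shape used above; this is illustrative, not a provider API:

```python
from types import SimpleNamespace
from concurrent.futures import ThreadPoolExecutor

def execute_parallel(registry, tool_calls):
    """Run a batch of tool calls concurrently; preserve result order."""
    with ThreadPoolExecutor(max_workers=len(tool_calls)) as pool:
        futures = [pool.submit(registry[c.name], **c.args) for c in tool_calls]
        return [{"role": "tool", "tool_call_id": c.id, "content": f.result()}
                for c, f in zip(tool_calls, futures)]

# Demo: two independent lookups fan out instead of running back-to-back.
calls = [SimpleNamespace(id="a", name="get_weather", args={"city": "Paris"}),
         SimpleNamespace(id="b", name="get_weather", args={"city": "Tokyo"})]
registry = {"get_weather": lambda city: f"18C in {city}"}
tool_messages = execute_parallel(registry, calls)
```

Threads are the simple choice here because tool calls are typically I/O-bound (HTTP, database, shell); an async runtime would work equally well.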
Reach for tool use when the model needs something it cannot produce from text alone -- the kinds of actions listed at the top: reading live or private data, calling external services, running code, causing side effects in another system.
Do not reach for tool use when plain prompting already covers it. The model can already see the conversation (a read_user_message tool adds nothing), and a summarize tool that just calls another LLM is usually a smell -- inline it.
A useful smell test: if a Python function with a clear signature would solve this sub-task, that is a tool. If you cannot write that signature, it is probably a prompt, not a tool.
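The smell test can even be made mechanical: derive the JSON Schema declaration from the function's type hints. A sketch -- the `PY_TO_JSON` mapping and `tool_schema` helper here are illustrative, not any provider's API:

```python
import inspect

def get_weather(city: str) -> str:
    """Get current weather for a city."""
    ...

# Minimal mapping from Python annotations to JSON Schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Derive a tool declaration from a typed Python signature."""
    sig = inspect.signature(fn)
    props = {name: {"type": PY_TO_JSON[p.annotation]}
             for name, p in sig.parameters.items()}
    return {"name": fn.__name__,
            "description": inspect.getdoc(fn),
            "parameters": {"type": "object",
                          "properties": props,
                          "required": list(props)}}

schema = tool_schema(get_weather)
```

If `tool_schema` cannot be applied -- no clear parameter types, no one-line description -- that is the signal the sub-task belongs in the prompt, not in a tool.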
A single general tool often subsumes many narrow ones -- for example, an agent that already has bash can edit files without a dedicated file-editing tool.
The "agentic AI patterns (RAG, tool use, memory)" trio shows up verbatim in product-engineer JDs. See the RPM Interactive AI Product Engineer Contract for an example role that lists this pattern as a hiring requirement -- pair this overview with the RAG and memory introductions to cover the full bullet.