AISDK Tools Usage: Model Generates JavaScript Code but Doesn’t Execute Tools
I’m using AISDK 4 with the following configuration:
return streamText({
  model,
  messages: context,
  tools: getTools(a, b, c, d),
  toolChoice: "auto",
  maxTokens: 30000,
  temperature: 0.0,
  abortSignal: abortSignal,
  system: getSystemPrompt(a, b),
});
I’ve encountered an issue where, after approximately four messages with the model (including tool calls), the behavior changes on the fifth message. The model generates JavaScript code as text but fails to actually call the tool to execute it.
For example:
- When I ask the model to “create a PDF with cats”, it correctly generates and streams the JavaScript code to create the PDF, but the tool is never called to execute the code.
I’ve already specified in the prompt that it should call the tool. What might be causing this behavior? Are there any limits or restrictions in AISDK that I might be missing?
The issue you’re experiencing with AI SDK 4 where the model generates JavaScript code but fails to execute tools after multiple messages is a common problem related to tool calling configuration and multi-step execution limits. This behavior typically occurs when the maxSteps parameter isn’t properly configured or when there’s a mismatch between different parts of your tool calling setup.
Contents
- Understanding the maxSteps Parameter
- Common Causes of Tool Execution Failures
- Configuration Best Practices
- Debugging and Troubleshooting
- Migration Considerations
Understanding the maxSteps Parameter
The maxSteps parameter in AI SDK 4 controls how many sequential model steps (LLM calls) are allowed within a single streamText or generateText invocation; it defaults to 1. Each tool call and its result consume a step, so once the limit is reached no further tools are executed and the model's remaining output arrives as plain text.
One reported pitfall (see the Reddit thread in the sources) is that when maxSteps values differ between components (for example, useChat on the client and streamText on the server), the tools simply don't execute, causing silent failures.
For your PDF generation scenario, note that the configuration you posted doesn't set maxSteps at all, so the default applies. If another component in your stack (for example, useChat on the client) caps the conversation at four steps, the model will generate JavaScript code as text instead of executing the tool once that cap is reached, which would explain why the behavior changes on the fifth message.
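To make the mechanism concrete, here is a minimal, self-contained sketch of the step loop that maxSteps bounds (an illustration of the idea, not the SDK's actual implementation): each step is one model turn, and once the cap is hit, the loop ends even if the model still had work to do.

```typescript
// Toy model turn: the "model" keeps requesting a tool until it sees a result.
type Turn = { type: "tool-call"; name: string } | { type: "text"; text: string };

function fakeModel(history: string[]): Turn {
  return history.some((h) => h.startsWith("tool-result"))
    ? { type: "text", text: "Here is your PDF." }
    : { type: "tool-call", name: "createPDF" };
}

// Sketch of the step loop bounded by maxSteps (simplified assumption).
function runSteps(maxSteps: number): string[] {
  const history: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const turn = fakeModel(history);
    if (turn.type === "text") {
      history.push(`text:${turn.text}`);
      break; // finished without needing another tool round-trip
    }
    history.push(`tool-call:${turn.name}`);
    history.push(`tool-result:${turn.name}:ok`); // tool executes, result fed back
  }
  return history;
}
```

With a step budget of 1, the run ends immediately after the tool round-trip and the model never produces its final answer; with a budget of 2, you get the tool call, the result, and the closing text.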
Common Causes of Tool Execution Failures
1. Mismatched maxSteps Configuration
If you’re using both client-side and server-side tool calling, ensure the maxSteps values match across all components:
// Server-side configuration (your setup, with maxSteps added)
return streamText({
  model,
  messages: context,
  tools: getTools(a, b, c, d),
  toolChoice: "auto",
  maxSteps: 10, // ensure this is sufficient for your use case
  maxTokens: 30000,
  temperature: 0.0,
  abortSignal: abortSignal,
  system: getSystemPrompt(a, b),
});
2. Tool Execution Errors
According to the AI SDK documentation, when tool execution fails, the SDK adds the failed calls as tool-error content parts so that automated LLM round-trips in multi-step scenarios can recover. Unhandled errors of this kind may be what is preventing further tool execution.
3. Context Length Limitations
After multiple messages, the conversation context might exceed the model’s token limits, causing it to switch from tool execution to code generation.
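If context growth is the culprit, trimming older messages before each call can help. Below is a rough, self-contained sketch; the ~4-characters-per-token estimate is a heuristic, not the model's real tokenizer, and `ChatMessage` is a simplified stand-in for your message type:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}

// Heuristic token estimate: roughly 4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep the most recent messages that fit within the token budget.
function trimToBudget(messages: ChatMessage[], budgetTokens: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > budgetTokens) break;
    kept.unshift(messages[i]); // prepend so chronological order is preserved
    used += cost;
  }
  return kept;
}
```

You would then pass `trimToBudget(context, budget)` as `messages` instead of the raw history, pinning the system prompt separately via the `system` option as you already do.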
Configuration Best Practices
1. Increase maxSteps Appropriately
For complex workflows involving multiple tool calls, set a higher maxSteps value:
return streamText({
  model,
  messages: context,
  tools: getTools(a, b, c, d),
  toolChoice: "auto",
  maxSteps: 20, // higher value for complex workflows
  maxTokens: 30000,
  temperature: 0.0,
  abortSignal: abortSignal,
  system: getSystemPrompt(a, b),
});
2. Implement Proper Error Handling
Handle tool execution errors gracefully:
const tools = {
  executeJavaScript: tool({
    description: "Execute JavaScript code",
    parameters: z.object({
      code: z.string().describe("The JavaScript code to execute"),
    }),
    execute: async ({ code }) => {
      try {
        // Your execution logic here
        return { result: "Code executed successfully" };
      } catch (error) {
        // Re-throwing surfaces the failure to the model so it can recover
        throw new Error(`Execution failed: ${error.message}`);
      }
    },
  }),
};
3. Use Tool Context Properly
Ensure your tools have access to the necessary context:
const tools = {
  createPDF: tool({
    description: "Create a PDF document",
    parameters: z.object({
      content: z.string().describe("The content to include in the PDF"),
    }),
    // The second execute argument carries tool-call context in AI SDK 4,
    // including the conversation messages
    execute: async ({ content }, { messages }) => {
      // Access conversation history if needed
      const conversationHistory = messages;
      // Your PDF creation logic; createPDFDocument is a placeholder for it
      const pdfId = await createPDFDocument(content);
      return { success: true, pdfId };
    },
  }),
};
Debugging and Troubleshooting
1. Inspect the Streaming Response
In your browser's network tab, open the streaming response from your chat endpoint and look for tool-call and tool-result parts. If the generated code arrives only as plain text deltas, the model never emitted a tool call at all, which points to a prompt, toolChoice, or step-limit issue rather than a failing execute function.
2. Monitor Tool Execution Flow
Add logging to track tool execution:
const tools = {
  executeJavaScript: tool({
    description: "Execute JavaScript code",
    parameters: z.object({
      code: z.string().describe("The JavaScript code to execute"),
    }),
    execute: async ({ code }) => {
      console.log("Tool execution requested:", code);
      try {
        // executeCode stands in for your own sandboxed runner
        const result = await executeCode(code);
        console.log("Tool execution successful");
        return result;
      } catch (error) {
        console.error("Tool execution failed:", error);
        throw error;
      }
    },
  }),
};
3. Validate Tool Schemas
Ensure your tool schemas are correctly defined. The AI SDK throws typed errors such as NoSuchToolError and InvalidToolArgumentsError when a tool call fails schema validation, which can prevent execution.
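As a sanity check outside the SDK, you can validate a tool call's raw arguments against the same shape you gave zod. A minimal hand-rolled validator for the hypothetical executeJavaScript tool above (illustrative only; inside the SDK, zod performs this step for you):

```typescript
// The arguments the hypothetical executeJavaScript tool expects.
interface ExecuteArgs {
  code: string;
}

// Mimics what schema validation does: reject malformed tool-call arguments.
function parseExecuteArgs(raw: unknown): ExecuteArgs {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("tool arguments must be an object");
  }
  const { code } = raw as Record<string, unknown>;
  if (typeof code !== "string" || code.length === 0) {
    throw new Error("`code` must be a non-empty string");
  }
  return { code };
}
```

Running raw arguments from your logs through a check like this tells you quickly whether the model is producing malformed calls or whether the problem lies elsewhere in the pipeline.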
Migration Considerations
AI SDK 5.0 significantly reworks tool calling. According to the migration guide, the maxSteps parameter has been replaced with stopWhen, which provides more flexible control over multi-step execution.
Key benefits of upgrading to AI SDK 5.0 include:
- Automatic Input Streaming: Tool call inputs now stream by default
- Explicit Error States: tool execution errors are scoped to the failing tool part instead of aborting the whole stream, and the message can be resubmitted
- Better Multi-step Control: More flexible stop conditions
If you’re experiencing persistent issues with tool execution after multiple messages, consider migrating to AI SDK 5.0 where the tool calling mechanism has been significantly improved.
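For reference, here is roughly what the same server-side call could look like after migrating, based on the AI SDK 5 migration guide (a sketch reusing the identifiers from your snippet; note that maxTokens was renamed maxOutputTokens in v5):

```typescript
import { streamText, stepCountIs } from "ai";

// AI SDK 5: maxSteps is replaced by stopWhen plus the stepCountIs helper
return streamText({
  model,
  messages: context,
  tools: getTools(a, b, c, d),
  toolChoice: "auto",
  stopWhen: stepCountIs(10), // stop after at most 10 steps
  maxOutputTokens: 30000, // renamed from maxTokens in v5
  temperature: 0.0,
  abortSignal,
  system: getSystemPrompt(a, b),
});
```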
Sources
- AI SDK Core: Tool Calling - Official Documentation
- Migration Guide: AI SDK 4.0 to 5.0
- Reddit: Vercel AI SDK - Silent Failure with maxSteps
- AI SDK 5 - Vercel Blog
Conclusion
The issue where your AI SDK 4 model generates JavaScript code but doesn’t execute tools after multiple messages is most likely caused by either insufficient maxSteps configuration or mismatched parameters between different components.
Key takeaways:
- Ensure your maxSteps value is high enough for your workflow complexity
- Check for consistency between client-side and server-side configurations
- Implement proper error handling and logging for tool execution
- Consider upgrading to AI SDK 5.0 for improved tool calling capabilities
Recommended actions:
- Increase the maxSteps parameter to handle more tool calls
- Audit your entire tool calling configuration for parameter mismatches
- Add comprehensive logging to debug tool execution flow
- Evaluate migrating to AI SDK 5.0 for better multi-step execution control
By addressing these configuration issues, you should be able to restore proper tool execution behavior even after multiple messages in your conversation.