Coze
Low-code agent workflow platform for fast automation delivery.
Automatically analyze PR diffs, generate concise feedback, and skip heavy analysis when estimated token usage exceeds a set threshold to control costs.
When PR {pr_number} is created in repository {repo_name}, trigger this workflow.
1. Use GitHub API to fetch PR diff.
2. Estimate token usage based on diff lines.
3. If estimated tokens > {token_limit}, post a comment "Deep analysis skipped, token limit reached." and exit.
4. Otherwise, call OpenAI to generate detailed review and comment.
5. Log actual token usage.
This section only explains placeholders. It is not an input form on this website. Copy the prompt, then replace the variables in Coze / Dify / ChatGPT.
{repo_name}: full GitHub repo name, e.g., "org/repo"
{pr_number}: pull request number
{token_limit}: maximum allowed token usage (integer)
Filling hint: replace each placeholder with your real project values.
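Filling the placeholders can be done programmatically before pasting the prompt into a tool. A minimal sketch with Python's `str.format`, using an abbreviated version of the template above; the repo name, PR number, and limit are hypothetical example values:

```python
# Abbreviated version of the workflow prompt; see the full template above.
TEMPLATE = (
    "When PR {pr_number} is created in repository {repo_name}, trigger this workflow.\n"
    "If estimated tokens > {token_limit}, post a comment and exit."
)

# Hypothetical example values; substitute your own before use.
prompt = TEMPLATE.format(
    repo_name="org/repo",   # full GitHub repo name
    pr_number=1234,         # pull request number
    token_limit=8000,       # maximum allowed token usage (integer)
)
print(prompt)
```

Any templating approach works; the point is that every `{...}` placeholder must be resolved before the prompt is run.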
Teams that need faster development output with more stable prompt quality.
Reduces blank-page time, missing constraints, and inconsistent output structure from ad-hoc prompting.
Not a fit if you need live web retrieval, database writes, or multi-step tool orchestration; use full workflow automation for that.
Keep exploring with similar templates and matching tools.
1. Listen for PR creation and extract {repo_name} and {pr_number}
2. Call GitHub API to get diff and count added/removed lines
3. Estimate tokens (lines × 0.5)
4. If the estimate exceeds {token_limit}, post the summary-template comment and stop; otherwise call OpenAI for a full review and comment
5. Record actual token usage to log or monitoring
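The gating logic in steps 2–4 can be sketched as pure functions. This is a minimal sketch, not the platform's implementation: the function names are illustrative, and the 0.5 tokens-per-line factor is the heuristic stated in the template, not a general tokenizer estimate.

```python
SKIP_COMMENT = "Deep analysis skipped, token limit reached."


def count_changed_lines(diff_text: str) -> int:
    """Count added/removed lines in a unified diff, skipping +++/--- file headers."""
    return sum(
        1
        for line in diff_text.splitlines()
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    )


def estimate_tokens(changed_lines: int) -> int:
    """Template heuristic: estimated tokens = changed lines x 0.5."""
    return int(changed_lines * 0.5)


def review_action(diff_text: str, token_limit: int) -> str:
    """Return the skip comment if over budget, else a marker to run the full review."""
    if estimate_tokens(count_changed_lines(diff_text)) > token_limit:
        return SKIP_COMMENT
    return "full_review"  # hypothetical marker: proceed to the OpenAI call
```

In a real workflow the diff would come from the GitHub REST API (`GET /repos/{owner}/{repo}/pulls/{pull_number}` with the `application/vnd.github.diff` Accept header) and the comment would be posted via the issue-comments endpoint; those network calls, and the OpenAI review call itself, are omitted here.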
Development
Tools that work well with this template.