AI-Assisted Fixing

Bob the Fixer leverages AI coding assistants to automatically analyze and fix code issues through the Model Context Protocol (MCP).

How It Works

Bob the Fixer exposes SonarQube data via MCP, enabling AI assistants to:

  1. Scan - Run SonarQube analysis on your project
  2. Analyze - Get detailed issue information with code context
  3. Fix - AI applies changes using its code editing capabilities
  4. Verify - Re-scan to confirm the fix resolves the issue
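The four steps above form a loop that an assistant drives through MCP tool calls. Here is a minimal sketch of that loop: the tool names `sonar_scan_project` and `sonar_get_issue_details` come from this page, but `callTool`, the argument shapes, and the demo issue are illustrative stand-ins for a real MCP client.

```typescript
// Sketch of the scan → analyze → fix → verify loop an assistant drives.
// callTool stands in for a real MCP client; the tool names come from this
// page, but the argument and result shapes are simplified for illustration.

type DemoIssue = { key: string; severity: string; file: string; line: number };

const trace: string[] = [];
const demoIssues: DemoIssue[] = [
  { key: "AX1", severity: "BLOCKER", file: "src/db/queries.ts", line: 45 },
];

async function callTool(name: string, args: object): Promise<any> {
  trace.push(name); // a real client would forward the call to the MCP server
  if (name === "sonar_scan_project") return { issues: demoIssues };
  if (name === "sonar_get_issue_details") return { context: "..." };
  return {};
}

async function fixWorstIssue(): Promise<string[]> {
  // 1. Scan the project (first run also sets up SonarQube config)
  const { issues } = await callTool("sonar_scan_project", { autoSetup: true });
  // 2. Analyze: pull full context for the most severe issue
  const worst = issues[0];
  await callTool("sonar_get_issue_details", { issueKey: worst.key });
  // 3. Fix: the assistant edits the file with its own editing tools (not shown)
  // 4. Verify: re-scan without re-running setup
  await callTool("sonar_scan_project", { autoSetup: false });
  return trace;
}
```

The point of the sketch is the ordering: the verification scan reuses the same tool as the initial scan, just with `autoSetup: false`.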

Supported AI Assistants

Claude Code

Claude Code has native MCP support, providing the best integration experience.

Setup:

{
  "mcpServers": {
    "bob-the-fixer": {
      "command": "node",
      "args": ["/path/to/universal-mcp-server.js"],
      "env": {
        "SONAR_URL": "http://localhost:9000",
        "SONAR_TOKEN": "squ_xxx"
      }
    }
  }
}
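If you prefer not to edit the JSON by hand, Claude Code can also register the server from its CLI. A sketch, assuming the standard `claude mcp add` syntax and the same placeholder path and token as above:

```shell
claude mcp add bob-the-fixer \
  -e SONAR_URL=http://localhost:9000 \
  -e SONAR_TOKEN=squ_xxx \
  -- node /path/to/universal-mcp-server.js
```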

Capabilities:

  • Full MCP tool access
  • Native file editing
  • Multi-turn conversations
  • Context-aware fixes

GitHub Copilot

Works via MCP configuration in ~/.copilot/mcp-config.json.

Setup:

{
  "mcpServers": {
    "bob-the-fixer": {
      "command": "node",
      "args": ["/path/to/universal-mcp-server.js"],
      "env": {
        "SONAR_URL": "http://localhost:9000",
        "SONAR_TOKEN": "squ_xxx"
      }
    }
  }
}

OpenAI Codex

Configuration in ~/.codex/config.toml with extended timeouts for long-running scans.

Setup:

[mcp_servers.bob-the-fixer]
command = "node"
args = ["/path/to/universal-mcp-server.js"]
tool_timeout_sec = 600

[mcp_servers.bob-the-fixer.env]
SONAR_URL = "http://localhost:9000"
SONAR_TOKEN = "squ_xxx"

Google Gemini

Supported through Gemini's MCP adapter.

AI Fixing Workflow

Step 1: Initial Scan

User: Scan this project with Bob the Fixer

AI: [Calls sonar_scan_project with autoSetup: true]

Found 15 issues:
- 2 BLOCKER (security vulnerabilities)
- 3 CRITICAL (bugs)
- 7 MAJOR (code smells)
- 3 MINOR (suggestions)

Step 2: Get Issue Details

User: Show me the first BLOCKER

AI: [Calls sonar_get_issue_details]

🔴 BLOCKER: SQL Injection Vulnerability

File: src/db/queries.ts:45

Affected Code:
────────────────────────────────
44 │ function getUser(id: string) {
45 │ return db.query(`SELECT * FROM users WHERE id = '${id}'`);
46 │ }
────────────────────────────────

This allows SQL injection attacks. Use parameterized queries instead.

✅ Compliant Example:
return db.query('SELECT * FROM users WHERE id = $1', [id]);

Step 3: Apply Fix

User: Fix this vulnerability

AI: [Reads file, applies fix]

I've updated src/db/queries.ts:

- return db.query(`SELECT * FROM users WHERE id = '${id}'`);
+ return db.query('SELECT * FROM users WHERE id = $1', [id]);

The query now uses parameterized input to prevent SQL injection.

Step 4: Verify Fix

User: Scan again to verify

AI: [Calls sonar_scan_project with autoSetup: false]

✅ BLOCKER resolved!

Remaining issues: 1 BLOCKER, 3 CRITICAL, 7 MAJOR, 3 MINOR
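The "resolved" summary above is simple arithmetic over the severity counts of the two scans. A sketch of that comparison, using the counts from this walkthrough (the function name and count shape are illustrative):

```typescript
// Compare severity counts from two scans to report what a fix resolved.
type Counts = Record<string, number>;

function resolvedBetween(before: Counts, after: Counts): Counts {
  const resolved: Counts = {};
  for (const [severity, n] of Object.entries(before)) {
    const delta = n - (after[severity] ?? 0); // issues gone since last scan
    if (delta > 0) resolved[severity] = delta;
  }
  return resolved;
}

// Counts from the walkthrough: one BLOCKER fixed between scans.
const before = { BLOCKER: 2, CRITICAL: 3, MAJOR: 7, MINOR: 3 };
const after = { BLOCKER: 1, CRITICAL: 3, MAJOR: 7, MINOR: 3 };
```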

Pattern-Based Fixing

Use sonar_analyze_patterns to fix multiple similar issues at once:

User: Group issues by fixability and fix all auto-fixable ones

AI: [Calls sonar_analyze_patterns with groupBy: "fixability"]

AUTO-FIXABLE (8 issues):
- 5x "Use const instead of let" (eslint:prefer-const)
- 3x "Remove unused import" (typescript:S1128)

I'll fix all 8 issues now...

[Applies fixes to multiple files]

Done! Fixed 8 issues across 6 files.
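Under the hood, fixability grouping amounts to bucketing issues by rule key and flagging mechanical rules as auto-fixable. A sketch with hypothetical issue objects — the two rule keys come from the transcript above, but the auto-fixable list itself is an illustrative assumption:

```typescript
// Bucket issues into auto-fixable vs needs-review by rule key.
type ScanIssue = { rule: string; message: string };

// Mechanical rules an AI can fix safely in bulk (illustrative list).
const AUTO_FIXABLE = new Set(["eslint:prefer-const", "typescript:S1128"]);

function groupByFixability(issues: ScanIssue[]): Map<string, ScanIssue[]> {
  const groups = new Map<string, ScanIssue[]>();
  for (const issue of issues) {
    const bucket = AUTO_FIXABLE.has(issue.rule) ? "auto-fixable" : "needs-review";
    const list = groups.get(bucket) ?? [];
    list.push(issue);
    groups.set(bucket, list);
  }
  return groups;
}
```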

Advanced Scenarios

Security Hotspot Review

User: Review all HIGH severity security hotspots

AI: [Calls sonar_get_security_hotspots]

Found 3 HIGH severity hotspots:

1. Hardcoded credentials in config.ts:12
2. Weak cryptographic algorithm in auth.ts:89
3. Open redirect vulnerability in router.ts:156

Would you like me to review each one?

Technical Debt Prioritization

User: What technical debt should I tackle first?

AI: [Calls sonar_get_technical_debt]

TECHNICAL DEBT: 3d 4h

High-Impact Opportunities:
1. Fix 5 critical bugs (saves 4h)
2. Reduce duplication in utils/ (saves 6h)
3. Add tests for auth module (reduces risk)

ROI Recommendation: Start with critical bugs for immediate impact.
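The ROI ordering the assistant produces is essentially a sort by time saved per unit of fixing effort. A toy sketch — the item names echo the transcript above, but the effort figures are invented for illustration:

```typescript
// Rank debt items by hours saved per hour of fixing effort (highest first).
type DebtItem = { name: string; savedHours: number; effortHours: number };

function prioritize(items: DebtItem[]): DebtItem[] {
  return [...items].sort(
    (a, b) => b.savedHours / b.effortHours - a.savedHours / a.effortHours,
  );
}
```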

Coverage Gap Analysis

User: What code needs more tests?

AI: [Calls sonar_get_coverage_gaps]

UNCOVERED CODE in src/services/PaymentService.ts:

Lines 45-67: processPayment()
- No tests for error handling paths
- Branch coverage: 40%

Lines 89-102: refund()
- Completely untested

Shall I generate test cases for these methods?

Best Practices

1. Review AI Changes

Always review changes before committing:

git diff  # Review AI-made changes
git add -p # Stage changes selectively

2. Fix by Severity

Let the AI prioritize:

Fix all BLOCKER and CRITICAL issues first, then move to MAJOR

3. Batch Similar Issues

Use pattern analysis for efficiency:

Find all issues of the same type and fix them together

4. Verify After Fixing

Always re-scan:

Scan again with autoSetup: false to verify all fixes

5. Run Tests

After AI fixes, run your test suite:

Run the tests to make sure the fixes didn't break anything

Capabilities by AI Assistant

The comparison covers the following capabilities for each assistant (Claude, Copilot, OpenAI, Gemini):

  • Issue Reading
  • Pattern Analysis
  • Code Modification
  • Multi-file Edits
  • Verification Scans
  • Test Generation

Limitations

What AI Can Fix Well

  • Simple bugs (null checks, type errors)
  • Security vulnerabilities (injection, XSS)
  • Code smells (complexity, duplication)
  • Style issues (formatting, naming)

What Requires Human Review

  • Architectural changes
  • Business logic modifications
  • Complex refactoring
  • Performance optimizations

When to Skip AI Fixing

  • Legacy code with no tests
  • Critical production paths
  • Code you don't understand
  • Complex dependencies

Troubleshooting

"Tool not found"

Verify MCP server is registered:

# Claude
claude mcp list

# Check server is running
node /path/to/universal-mcp-server.js

"Scan timeout"

Increase timeout in AI CLI configuration:

# For OpenAI Codex
tool_timeout_sec = 600

"Fix didn't work"

  1. Check the AI understood the issue correctly
  2. Provide more context about the codebase
  3. Try a more specific prompt

Next Steps