Analyzer69000 · Agents & LLMs

Drop in.
1 tool. Any agent.

The on-chain forensic scanner, as a single tool call. Your Claude / OpenAI / MCP agent can reason about rug risk, LP lock, holder concentration, deployer history, fair-launch status, wash-trade detection, bundle hold/sold %, team wallets, Uniswap V4 pools, and CEX origin on any Solana or EVM token — in < 1 KB per response. Plus a live Telegram bot and a cross-chain trending feed. No OAuth. No key. Ship it in 60 seconds.

scan_token() · 8 chains · < 1 KB/call · stdio MCP · 30 req/min · free

📊 Live on-chain scan volume

Not a demo — this is the exact API your agent will hit. Every number below comes from a real public endpoint.

- Tokens scanned (30d): unique contracts
- Rugs caught (30d): LIKELY_RUG verdicts
- Avg response: p50 on summary=1
- Chains supported: 8 (Solana + 6 EVM + Hyperliquid)
- Schema version: stable response shape

🧪 See it in action — real API, real token

Watch an agent round-trip end-to-end. The tool response is exactly what your LLM receives — no summarization layer, no caching tricks, just the verbatim summary=1 payload your agent would consume.

Agent conversation

User: Should I ape into JUPyiwrY…DvCN?
Agent: Let me scan it first.
Tool call: scan_token({ address: "JUPy…DvCN" })
Tool response (~1 KB): the verbatim summary=1 payload, fetched live
Agent's reasoning: the agent's interpretation of the tool response — verdict, top risks, confidence, and a one-line recommendation an LLM can hand back to the user.

🔧 Plug-and-play tool definition

Copy this verbatim into any LLM toolset — Claude's tools, OpenAI's function calling, LangChain, LlamaIndex. The input schema is strict, so hallucinated addresses get rejected upfront.

JSON
{
  "name": "scan_token",
  "description": "Safety + market summary for a token contract. Works on Solana (base58 mint) and EVM (0x... on Ethereum, Base, BSC, Arbitrum, Optimism, Polygon). Verdicts: BLUE_CHIP, ESTABLISHED, CLEAN_ON_SURFACE, CAUTION, SUSPICIOUS, HIGH_RISK, LIKELY_RUG.",
  "input_schema": {
    "type": "object",
    "required": ["address"],
    "properties": {
      "address": { "type": "string" },
      "chain": { "type": "string", "enum": ["auto","solana","eth","base","bsc","arb","op","poly"] }
    }
  }
}
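OpenAI's function-calling format wraps the same JSON Schema under `parameters` inside a `function` object. A minimal sketch of the translation — `toOpenAITool` is a hypothetical helper, not part of the API:

```javascript
// Convert the Anthropic-style definition above into OpenAI's
// function-calling format: the JSON Schema moves from `input_schema`
// to `function.parameters`, wrapped in a `type: "function"` entry.
function toOpenAITool(def) {
  return {
    type: 'function',
    function: {
      name: def.name,
      description: def.description,
      parameters: def.input_schema,
    },
  };
}
```

Pass the result in the `tools` array of a Chat Completions request; the schema itself needs no changes.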
JS · handler
async function scan_token({ address, chain = 'auto' }) {
  // Reject anything that is neither a base58 Solana mint nor a 0x EVM address
  if (!/^[1-9A-HJ-NP-Za-km-z]{32,44}$|^0x[a-fA-F0-9]{40}$/.test(address))
    return { error: 'invalid_address' };
  const r = await fetch(`https://analyzer69000.com/api/v1/token/${address}?chain=${chain}&summary=1`);
  if (!r.ok) return { error: `http_${r.status}` };
  return await r.json();
}
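The `chain: "auto"` default works because the two address formats are disjoint. A sketch of how an agent could pre-classify an address before spending a request — `detectChainFamily` is a hypothetical helper, and the API resolves the specific EVM chain itself:

```javascript
// Base58 mints (32–44 chars, no 0/O/I/l) are Solana; 0x + 40 hex
// chars is an EVM address. Anything else fails validation.
function detectChainFamily(address) {
  if (/^0x[a-fA-F0-9]{40}$/.test(address)) return 'evm';
  if (/^[1-9A-HJ-NP-Za-km-z]{32,44}$/.test(address)) return 'solana';
  return null; // neither format — reject before making the call
}
```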

🔌 Hosted MCP? No. Yours.

We deliberately don't run a hosted MCP endpoint — the spec recommends servers run adjacent to the model so credentials + trust boundaries stay local. Drop this tiny wrapper into your own MCP host and you're live.

JS · MCP stdio server
// a69-mcp.mjs — run: node a69-mcp.mjs
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'analyzer69000', version: '1.0.0' },
  { capabilities: { tools: {} } },
);

// The SDK dispatches on request schemas, not raw method strings
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: 'scan_token',
    description: 'Safety + market summary for any token contract.',
    inputSchema: { type: 'object', required: ['address'], properties: { address: { type: 'string' } } },
  }],
}));

server.setRequestHandler(CallToolRequestSchema, async (req) => {
  const { address } = req.params.arguments;
  if (!/^[1-9A-HJ-NP-Za-km-z]{32,44}$|^0x[a-fA-F0-9]{40}$/.test(address))
    return { content: [{ type: 'text', text: 'invalid_address' }], isError: true };
  const r = await fetch(`https://analyzer69000.com/api/v1/token/${address}?summary=1`);
  return { content: [{ type: 'text', text: JSON.stringify(await r.json(), null, 2) }] };
});

await server.connect(new StdioServerTransport());
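To register the wrapper with Claude Desktop, an `mcpServers` entry like the following goes in `claude_desktop_config.json` — the file path is a placeholder for wherever you saved the script:

```
JSON · claude_desktop_config.json
{
  "mcpServers": {
    "analyzer69000": {
      "command": "node",
      "args": ["/path/to/a69-mcp.mjs"]
    }
  }
}
```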

✨ Works with every major host

Same tool definition, every ecosystem. We've stayed spec-pure so the moment your agent platform supports tool calling, you're live.

- Claude: Anthropic tool use
- OpenAI: function calling & Assistants
- MCP: stdio / HTTP transport
- LangChain: tool decorators
- LlamaIndex: FunctionTool
- Vercel AI SDK: tool()

💬 Real prompts your users will send

The agent picks up scan_token implicitly — no prompt engineering required. Below: actual questions that one tool call can answer end-to-end.

🤔 "Should I buy this token my friend shipped?"
→ scan_token(addr) → check verdict & top_risks → summarize
🚨 "Is 3tuvMBt…pump safe to ape into right now?"
→ LIKELY_RUG · 100/100 rug · abandoned token
📊 "Compare the safety of JUP, BONK, and PEPE."
→ three parallel calls → agent diffs verdicts + LP lock
🕵️ "Who's bundling this launch?"
→ bundle.wallets_in_largest_bundle + pct_10m
🔥 "Is the LP actually locked or can they rug?"
→ liquidity_lock.verdict + secured_pct + pool_type
🧬 "Has this deployer rugged before?"
→ deployer.serial_rugger_severity
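The "compare three tokens" flow above is just parallel calls plus a diff. A sketch under assumptions: `compareTokens` is a hypothetical helper, and `fetchFn` is injectable so the example runs without hitting the live API:

```javascript
// Fire one summary=1 call per address in parallel, then return
// { address, verdict } pairs the agent can diff in its reasoning.
async function compareTokens(addresses, fetchFn = fetch) {
  return Promise.all(
    addresses.map(async (address) => {
      const r = await fetchFn(`https://analyzer69000.com/api/v1/token/${address}?summary=1`);
      const body = await r.json();
      return { address, verdict: body.verdict };
    }),
  );
}
```

Parallel calls matter here: three sequential scans triple the round-trip time for no benefit, since each call is independent.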

🛡️ Safe by default. Free by design.

No API key. No OAuth. No abuse vectors. The same tier that powers the scanner website powers your agent — protections live at the API layer, not in your wrapper.

- Rate limit: 30/min (burst-friendly, per-IP)
- Hourly ceiling: 300/h (per-IP cap)
- Server cache: 12h per address
- Response size: ~1 KB (summary=1 mode)
- Latency: cache hits in <200ms
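Because the limits are per-IP, a busy agent may want to stay under them client-side rather than burn requests on 429s. A minimal sliding-window throttle — the `Throttle` class is an assumption of this sketch, not part of the API:

```javascript
// Sliding-window throttle: allow at most maxPerWindow requests
// in any rolling windowMs interval (30/min matches the API tier).
class Throttle {
  constructor(maxPerWindow = 30, windowMs = 60_000) {
    this.maxPerWindow = maxPerWindow;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if a request may be sent now, false to back off.
  tryAcquire(now = Date.now()) {
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxPerWindow) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Wrap the `scan_token` handler with `tryAcquire()` and queue or retry when it returns false; the server cache means repeated scans of the same address within 12h are cheap anyway.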