The on-chain forensic scanner, as a single tool call. Your Claude / OpenAI / MCP agent can reason about rug risk, LP lock, holder concentration, deployer history, fair-launch status, wash-trade detection, bundle hold/sold %, team wallets, Uniswap V4 pools, and CEX origin on any Solana or EVM token — in < 1 KB per response. Plus a live Telegram bot and a cross-chain trending feed. No OAuth. No key. Ship it in 60 seconds.
scan_token()
8 chains
< 1 KB / call
stdio MCP
30 req/min · free
Not a demo — this is the exact API your agent will hit. Every number below is a real public endpoint.
Watch an agent round-trip end-to-end. The tool response is exactly what your LLM receives — no summarization layer, no caching tricks, just the verbatim summary=1 payload your agent would consume.
Copy this tool definition verbatim into any LLM toolset — Claude tool use, OpenAI function calling, LangChain, LlamaIndex. The input schema is strict, so hallucinated addresses are rejected up front.
{
"name": "scan_token",
"description": "Safety + market summary for a token contract. Works on Solana (base58 mint) and EVM (0x... on Ethereum, Base, BSC, Arbitrum, Optimism, Polygon). Verdicts: BLUE_CHIP, ESTABLISHED, CLEAN_ON_SURFACE, CAUTION, SUSPICIOUS, HIGH_RISK, LIKELY_RUG.",
"input_schema": {
"type": "object",
"required": ["address"],
"properties": {
"address": { "type": "string" },
"chain": { "type": "string", "enum": ["auto","solana","eth","base","bsc","arb","op","poly"] }
}
}
}
async function scan_token({ address, chain = 'auto' }) {
// Reject anything that isn't a base58 Solana mint or a 0x EVM address
// before spending a network call (and a rate-limit slot).
if (!/^[1-9A-HJ-NP-Za-km-z]{32,44}$|^0x[a-fA-F0-9]{40}$/.test(address))
return { error: 'invalid_address' };
const r = await fetch(`https://analyzer69000.com/api/v1/token/${address}?chain=${chain}&summary=1`);
if (!r.ok) return { error: `http_${r.status}` };
return await r.json();
}
We deliberately don't run a hosted MCP endpoint — the spec recommends servers run adjacent to the model so credentials + trust boundaries stay local. Drop this tiny wrapper into your own MCP host and you're live.
// a69-mcp.mjs — run: node a69-mcp.mjs
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
const server = new Server(
{ name: 'analyzer69000', version: '1.0.0' },
{ capabilities: { tools: {} } },
);
// The MCP SDK routes requests by schema object, not by method string.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [{
name: 'scan_token',
description: 'Safety + market summary for any token contract.',
inputSchema: { type: 'object', required: ['address'], properties: { address: { type: 'string' } } },
}],
}));
server.setRequestHandler(CallToolRequestSchema, async (req) => {
const { address } = req.params.arguments;
if (!/^[1-9A-HJ-NP-Za-km-z]{32,44}$|^0x[a-fA-F0-9]{40}$/.test(address))
return { content: [{ type: 'text', text: 'invalid_address' }], isError: true };
const r = await fetch(`https://analyzer69000.com/api/v1/token/${address}?summary=1`);
return { content: [{ type: 'text', text: JSON.stringify(await r.json(), null, 2) }] };
});
await server.connect(new StdioServerTransport());
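To wire it into a stdio MCP host, point the host at the script. For example, a Claude Desktop entry would look like this (assuming `a69-mcp.mjs` sits in the working directory):

```json
{
  "mcpServers": {
    "analyzer69000": {
      "command": "node",
      "args": ["a69-mcp.mjs"]
    }
  }
}
```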
Same tool definition, every ecosystem. We've stayed spec-pure so the moment your agent platform supports tool calling, you're live.
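As one concrete translation, the published schema drops into OpenAI's Chat Completions `tools` array with only a thin envelope change (a sketch; the envelope field names come from OpenAI's tool-calling format, not from this API):

```javascript
// The same JSON Schema as the scan_token definition above, unchanged.
const scanTokenSchema = {
  type: 'object',
  required: ['address'],
  properties: {
    address: { type: 'string' },
    chain: { type: 'string', enum: ['auto', 'solana', 'eth', 'base', 'bsc', 'arb', 'op', 'poly'] },
  },
};

// OpenAI wraps the schema in a { type: 'function', function: {...} } envelope
// and names the schema field "parameters" instead of "input_schema".
const openaiTool = {
  type: 'function',
  function: {
    name: 'scan_token',
    description: 'Safety + market summary for a token contract.',
    parameters: scanTokenSchema,
  },
};

console.log(openaiTool.function.name); // → scan_token
```

Only the envelope differs between ecosystems; the schema body, and therefore the strict address validation, is identical everywhere.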
The agent picks up scan_token implicitly — no prompt engineering required. Below: actual questions that one tool call can answer end-to-end.
"3tuvMBt…pump safe to ape into right now?"

No API key. No OAuth. No abuse vectors. The same tier that powers the scanner website powers your agent — protections live at the API, not in your wrapper.