Ingress, Now in Plain English: 4 New MCP Tools for the Network Edge
Ask Cursor, Claude, or Codex what's happening at your edge: error rates, latency, status codes, slow paths, scrapers.
The classic incident dance: 5xx errors climb, you tab over to Grafana to find the metric, jump to Kibana to find the failing path, then back to your code to fix it. Four tabs, three logins, and the answer is still spread across all of them.
We just shipped 4 MCP tools that put your nginx ingress straight into Cursor, Claude, or any MCP client. They close the last big gap in our MCP surface: the network edge.
What was already there
Quave ONE exposes around 70 MCP tools today. A quick tour so you can place these new ones in context:
- Accounts, apps, environments: list, create, update, get full config and status.
- Deploys: trigger builds, deploy Docker images, redeploy, rollback, stop, start, switch branch.
- Scaling and resources: zClouds, autoscaling, container count, function timeouts.
- Env vars and secrets: list, update, patch JSON fields, fetch CLI tokens.
- Observability before today: container CPU and memory time series, pod details, raw logs via get-logs, node metrics for BYOP regions.
- Network and security: custom hosts, IP allowlists, WAF, rate limits, region moves with CNAME validation.
- Credentials: container registries, TLS certs, ACME wildcards, auto-synced across regions.
- Alerts and contact points: CPU, memory, custom thresholds, with Slack, PagerDuty, webhook, email.
- Docs: search and read product docs from the same prompt.
What we were missing: the ingress edge itself. Until today, you had to leave the editor to ask "is anything failing at the edge, and where?" That ends now.
The 4 new ingress tools
- get-account-ingress-metrics-series: nginx time series for the whole account
- get-app-env-ingress-metrics-series: same, scoped to one app env
- get-account-ingress-access-log-summary: aggregated nginx access logs for the account
- get-app-env-ingress-access-log-summary: same, scoped to one app env
10 metrics: request rate, success rate %, 4xx %, 5xx %, request rate by status, request and response bytes per second, and p50/p90/p99 latency. Access log summaries group by service, host, path, method, or status, with free-text search across user agent, IP, status, and the rest.
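If you're curious what the agent actually sends, here's a minimal sketch using the official MCP TypeScript SDK. The tool name and metric value come from this post; the endpoint URL and the range argument are placeholders, since the real input schema ships in each tool's description:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the MCP URL from your Quave ONE account.
const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.quave.example/mcp"),
);
const client = new Client({ name: "ingress-demo", version: "1.0.0" });
await client.connect(transport);

// Account-wide 5xx rate as a time series. The metric name appears in this
// post; the range argument is an assumption about the input schema.
const series = await client.callTool({
  name: "get-account-ingress-metrics-series",
  arguments: { metric: "error_rate_5xx_percent", range: "1h" },
});
console.log(series.content);
```

Your editor does all of this for you; the sketch just shows there's no magic between the prompt and the Prometheus query.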
Real scenarios that work today
1. Post-deploy 5xx spike
A bad ingress reload, a missing env var, a slow migration - any of these can push 5xx errors past 10% right after you ship. The Kubernetes ingress-nginx reload storm pattern alone is a documented cause of recurring 502/504 bursts on healthy backends. Ask your editor:
What is the 5xx rate for my prod app env over the last 15 minutes? If it spiked, show me which endpoint is on fire.
That's get-app-env-ingress-metrics-series with metric: "error_rate_5xx_percent", then get-app-env-ingress-access-log-summary with groupBy: ["path", "method"] and search: "503".
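Under the hood those two steps are just tool calls. A sketch of the payloads, assuming an appEnv identifier and a range key (metric, groupBy, and search are the parameters named above):

```ts
// Step 1: confirm the spike with the 5xx rate series.
const confirmSpike = {
  name: "get-app-env-ingress-metrics-series",
  arguments: { appEnv: "wa-api-prod", metric: "error_rate_5xx_percent", range: "15m" },
};

// Step 2: group 503s by path and method to find the endpoint on fire.
const findEndpoint = {
  name: "get-app-env-ingress-access-log-summary",
  arguments: { appEnv: "wa-api-prod", groupBy: ["path", "method"], search: "503", range: "15m" },
};
```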
2. AI scrapers hammering one path
GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, and Meta-WebIndexer all show up in 2026 nginx logs. A public 48-day study found Meta-WebIndexer alone made 1,833 requests against a single site. They're not malicious, but they can blow your egress budget on /blog or your sitemap. Ask:
Group account access logs by path for the last 24h where user agent contains "GPTBot". Then repeat for "ClaudeBot".
get-account-ingress-access-log-summary with search: "GPTBot", groupBy: ["path"], topN: 20. Decide whether to block, rate limit, or cache.
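Since it's one query per crawler, the agent can fan out. A sketch reusing the search, groupBy, and topN parameters named above (the range key is an assumption):

```ts
// One account-wide access log summary per crawler user agent.
const bots = ["GPTBot", "ClaudeBot", "PerplexityBot", "OAI-SearchBot", "Meta-WebIndexer"];
const crawlerQueries = bots.map((bot) => ({
  name: "get-account-ingress-access-log-summary",
  arguments: { search: bot, groupBy: ["path"], topN: 20, range: "24h" },
}));
```

If one path dominates across every bot, a cache rule there is usually cheaper than a blanket block.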
3. 404 storm from a vulnerability scanner
Someone is probing /.env, /wp-admin, /phpmyadmin. With one prompt:
Show me the top paths returning 404 in the last hour, account wide.
get-account-ingress-access-log-summary with groupBy: ["path"] and search: "404". If most 404s come from a handful of paths, it's a scanner, not a UI bug. Add the offending IP to your allowlist via update-app-env-allowlist and move on, all from the same chat.
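Same pattern, sketched as a payload (groupBy and search are the parameters named above; the range key is an assumption):

```ts
// Account-wide 404s grouped by path. A scanner shows up as a short list
// of probe paths (/.env, /wp-admin, /phpmyadmin) with high counts.
const scannerCheck = {
  name: "get-account-ingress-access-log-summary",
  arguments: { groupBy: ["path"], search: "404", range: "1h" },
};
// The follow-up goes through update-app-env-allowlist; its payload isn't
// shown in this post, so let the agent read the tool schema and fill it in.
```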
4. P99 latency regression
A slow database query lands in last night's deploy. p99 doubles, p50 looks fine, the generic dashboard glance misses it.
Show me p99 latency for wa-api-prod over the last 24h, then group access logs by path so I can see which route slowed down.
get-app-env-ingress-metrics-series with metric: "latency_p99", then get-app-env-ingress-access-log-summary with groupBy: ["path"] to find the regression's footprint.
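As payloads, assuming the same appEnv and range keys as in the earlier sketches (latency_p99 and groupBy are named in this post):

```ts
// Step 1: the p99 series over 24h. p50 staying flat while p99 doubles
// points at a slow tail, not a global slowdown.
const p99Series = {
  name: "get-app-env-ingress-metrics-series",
  arguments: { appEnv: "wa-api-prod", metric: "latency_p99", range: "24h" },
};

// Step 2: per-path summary to see which route carries the regression.
const perPath = {
  name: "get-app-env-ingress-access-log-summary",
  arguments: { appEnv: "wa-api-prod", groupBy: ["path"], range: "24h" },
};
```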
Why this matters
You ask a question in plain English. The agent picks the tool, runs the query, returns numbers in seconds. No new tab, no new login, no waiting for the SRE on call who knows where the dashboard lives. Same Prometheus and Elasticsearch data that powers your alerts, just where you already work.
Available now on Quave ONE. Add the MCP server to Cursor, Claude, or Codex and ask away.
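For Cursor, that's typically a one-time entry in ~/.cursor/mcp.json. The server name and URL below are placeholders; use the values from your Quave ONE dashboard:

```json
{
  "mcpServers": {
    "quave-one": {
      "url": "https://mcp.quave.example/mcp"
    }
  }
}
```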
BTW, are you following our status page? https://quaveone.statuspage.io/
We post updates there whenever something needs your attention, so it's worth a follow.