AI Visibility — Whether ChatGPT, Claude, and Perplexity Actually Surface You
This proposal deploys the same AI Visibility platform you saw on the call, under your accounts, configured for your ICP and your clients, ready to scan the day after handover. Competitor and keyword setup, automatic prompt generation by funnel stage, parallel scans across the major AI surfaces, per-response analysis, content gap analysis, and a generated action plan with technical recommendations. One coherent measurement layer for a discovery surface that didn't have one.
What you walked through on the call
One client. One click per stage. Roughly nine minutes from configured to a complete read on where you stand inside AI search.
1. Configure competitors with short descriptions and tracked keywords for the category
2. One click generates fifty prompts per funnel stage (awareness, reputation, consideration, comparison, recommendation), hundreds in total
3. One click runs every prompt across ChatGPT, Claude, Perplexity, and Gemini in parallel; a full client scan finished in roughly nine minutes
4. Per-response analysis: mentioned or not, where in the answer, sentiment, hallucinated facts flagged, clarification requests counted
5. Aggregate metrics: visibility score, share of voice, average position, sentiment score, hallucination rate, clarification rate
6. Content gap analyzer pinpoints the funnel stages and prompts where competitors appear and you don't, with the exact content to write to close each gap
7. One click generates an action plan: priority articles, sequential roadmap, quick wins separated from longer-term plays
8. Three technical layers flagged for results: crawler accessibility, site structure for AI understanding, content that answers the surfaced questions
9. Configurable cadence (daily, weekly, or monthly scans), so the read isn't a one-time snapshot but a trend
The platform is already in production. Node AI uses it on its own brand and on a paying client's. Plug-and-play means we deploy it under your accounts, configure it for the ghostwriting category and the clients you serve, run the first full scan with you, and hand you the keys. The same setup we use for our own visibility work, calibrated for yours.
You're operating as a solo ghostwriter at a moment when buyer discovery is shifting. The founders, agencies, and operators you write for are increasingly running their initial research through ChatGPT, Claude, and Perplexity before they land on a search results page or a referral. The question isn't whether AI search matters to your prospects — it's whether you and your clients have any read on what those surfaces are saying about you, and whether that picture is moving in the right direction.
Strengths
- You're already AI-native. You work in Claude every day. You understand the difference between "we use AI" and "we operate in AI." That's the cultural prerequisite for taking the measurement layer seriously rather than treating it as a curiosity.
- You understand SEO history. You know what it looks like when a discovery surface gets crowded and the early movers carry a structural advantage long after. AI search is at that early-mover window now; the people measuring before they need to compete will be the ones with the data when they do.
- Your content is already strong enough to rank if surfaced. The bottleneck isn't quality; it's whether the AIs can find it, parse it, and choose it over the next option. That's a measurement and structure problem, not a writing problem.
But three things are unmeasured right now.
Limitations
- AI search is a real discovery surface for your prospects, and you have no instrumentation on it. Founders, agencies, and operators asking ChatGPT for ghostwriters or content help are getting answers. You don't know what those answers say, whether your name appears, or which competitors are getting recommended in your place.
- Auditing AI visibility manually is not feasible at the cadence the surface moves at. A proper read requires hundreds of prompts run across multiple AI surfaces, every cycle. By hand, that's days of work per audit. So it doesn't happen, and the surface stays unmeasured.
- Generic SEO advice doesn't translate cleanly to this surface. The AIs scrape SEO-ranked content, so SEO still matters, but the questions that decide whether you get surfaced are different, the content shape that gets cited is different, and the technical layer (whether AI crawlers can even reach your site, whether the structure makes you understandable) is a layer most SEO advice doesn't address.
Opportunities
- Measure before competing on the surface becomes the only option. The cost of starting to track is low today and will not be low in eighteen months. The data you accumulate now is the baseline you make decisions against later.
- Close content gaps proactively rather than retroactively. Once you know exactly which funnel-stage prompts surface competitors and not you, you know exactly what to write. The gap analyzer turns "we should make more content" into a ranked list of specific pieces with measurable impact.
- You already work in Claude all day. The platform's generation costs run against tokens you're already paying for, which changes the operating economics versus a competing tool that charges its own per-call premium.
The same AI Visibility platform you saw demoed, deployed under your accounts and configured for the ghostwriting category and the kinds of clients you serve. Three layers. One scanning engine. One generated plan at the end.
The Engine
Layer 1 · Scanning Engine
Hundreds of prompts, four AI surfaces, one click, one parallel run
- Competitor and keyword configuration. You define who you're tracked against and which keywords matter for your category. The configuration travels with the brand.
- Automated prompt generation by funnel stage. The system generates roughly fifty prompts each across awareness, reputation, consideration, comparison, and recommendation. Hundreds of realistic user queries, written to match how prospects actually ask. No cold start, no hand-crafted prompt lists to maintain.
- Parallel multi-surface scans across ChatGPT, Claude, Perplexity, and Gemini. A full scan completes in roughly nine minutes for a client's full prompt set, not days. Each surface queried independently, so one platform's slowness or failure doesn't block the others.
- Configurable cadence. Daily, weekly, or monthly scans, scheduled and queued automatically. AI search is a moving target; the trend matters more than any single read.
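For the technically curious, here is a minimal sketch of the shape of that one-click parallel run. The surface names are the real products; the client function and the provider-call stub are illustrative stand-ins, not the platform's actual code.

```python
# Illustrative sketch only. `call_provider` is a stub so the example runs on
# its own; in a real deployment it would be the per-provider API call.
import asyncio
from dataclasses import dataclass

SURFACES = ["chatgpt", "claude", "perplexity", "gemini"]

@dataclass
class RawResponse:
    surface: str
    prompt: str
    text: str | None          # None when the surface errored or timed out
    error: str | None = None

async def call_provider(surface: str, prompt: str) -> str:
    # Stand-in for the real API call to each provider.
    await asyncio.sleep(0.01)
    return f"[{surface}] answer to: {prompt}"

async def query_surface(surface: str, prompt: str) -> RawResponse:
    try:
        return RawResponse(surface, prompt, await call_provider(surface, prompt))
    except Exception as exc:
        # One slow or failing surface is recorded as an error, never allowed
        # to block the rest of the run.
        return RawResponse(surface, prompt, None, error=str(exc))

async def run_scan(prompts: list[str]) -> list[RawResponse]:
    # Every prompt x surface pair is fired concurrently.
    tasks = [query_surface(s, p) for p in prompts for s in SURFACES]
    return await asyncio.gather(*tasks)

# asyncio.run(run_scan(["best ghostwriter for SaaS founders"]))
```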
Layer 2 · Analysis & Metrics
Every response classified, every metric aggregated, every gap surfaced
- Per-response analysis. Each AI answer is examined on multiple dimensions: are you mentioned, where in the response, with what sentiment, with what level of detail, was a fact hallucinated, did the model ask for clarification instead of recommending you.
- Aggregate metrics. A unified visibility score across the four surfaces. Share of voice against your tracked competitors. Average position in response. Sentiment score. Hallucination rate. Clarification-request rate. The six numbers that matter, tracked over time.
- Content gap analyzer. The most actionable view in the platform: for each funnel stage, exactly which prompts surface competitors and not you. Each gap is mapped to the content piece that would close it. You stop guessing what to write next.
- Trend tracking. Scan-over-scan deltas on every metric. The visibility line chart that shows whether what you're shipping is moving the number, per platform.
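A sketch of what one classified response and the roll-up math might look like. The dimensions mirror the list above; the formulas (simple rates over the scan) are illustrative assumptions, not the platform's actual scoring.

```python
# Sketch only: field names and roll-up formulas are assumptions chosen to
# mirror the metrics named in this proposal, not the platform's internals.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ResponseAnalysis:
    mentioned: bool                     # does the brand appear at all?
    position: int | None                # 1 = first brand named; None = absent
    sentiment: float                    # -1.0 (negative) .. 1.0 (positive)
    hallucinated: bool                  # a factual claim about the brand is wrong
    clarification: bool                 # the model asked a question instead of answering
    competitors: list[str] = field(default_factory=list)

def aggregate(results: list[ResponseAnalysis]) -> dict[str, float]:
    if not results:
        return {}
    n = len(results)
    mentions = [r for r in results if r.mentioned]
    competitor_hits = sum(len(r.competitors) for r in results)
    positions = [r.position for r in mentions if r.position is not None]
    return {
        "visibility_score": len(mentions) / n,
        "share_of_voice": len(mentions) / max(len(mentions) + competitor_hits, 1),
        "avg_position": mean(positions) if positions else 0.0,
        "sentiment": mean(r.sentiment for r in mentions) if mentions else 0.0,
        "hallucination_rate": sum(r.hallucinated for r in results) / n,
        "clarification_rate": sum(r.clarification for r in results) / n,
    }
```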
Layer 3 · Action Engine
Content plans, technical recommendations, generated as documents you can act on
- One-click action plan generation. From the surfaced gaps, the platform produces a prioritized roadmap: quick wins separated from long-term plays, sequential ordering so you know what to ship first, expected impact tagged on each piece.
- Article and brief generation. For any recommendation, the action engine drafts the piece in your voice or generates a brief a human writer can take. Output is shaped specifically for AI extractability: answer-first structure, schema-tagged, the format the surfaces preferentially cite.
- Agent-driven competitor audits. The agent layer can run technical audits on competitor sites (how they're structured, what they do for crawler accessibility, where their content has been shaped for AI surfacing) and produce the findings as a document you can act on yourself or hand to your client.
- Three technical layers flagged on every site assessed. Whether AI crawlers can reach the site at all (often a site that looks fine to humans is invisible to bots), whether the structure makes the content understandable to AI, whether the content actually answers the questions the scan surfaces. The action plan covers all three, not just the third.
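To make the sequencing concrete, here is a minimal sketch of how surfaced gaps might be ordered into quick wins and longer-term plays. The ranking heuristic and the effort cutoff are placeholders; the platform's own weighting may differ.

```python
# Sketch only: competitor presence as the impact signal and a one-day effort
# cutoff are placeholder heuristics, not the platform's logic.
from dataclasses import dataclass

@dataclass
class ContentGap:
    funnel_stage: str        # awareness / reputation / consideration / comparison / recommendation
    prompt: str              # the query where competitors surface and you don't
    competitors_seen: int    # how many tracked competitors appear in the answers
    effort_days: float       # rough production effort for the closing piece

def build_plan(gaps: list[ContentGap]) -> dict[str, list[ContentGap]]:
    # Rank by how much visibility there is to reclaim on each prompt.
    ranked = sorted(gaps, key=lambda g: g.competitors_seen, reverse=True)
    return {
        "quick_wins": [g for g in ranked if g.effort_days <= 1],
        "longer_term": [g for g in ranked if g.effort_days > 1],
    }
```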
The Outcome
You have, for the first time, an honest read on whether ChatGPT, Claude, Perplexity, and Gemini are surfacing you to the buyers your prospects are becoming. The read is not a snapshot — it's a trend, run on the cadence that matches how often the surface moves. Every scan ends not with a number but with a list: the exact prompts you're losing, the exact pieces that would close them, the exact technical work that needs to happen to your site for any of it to land.
End-to-end visibility pipeline, all configurable, all yours after handover. Four stages, each surfaced in the workspace you saw demoed.
Configure
Brand, competitors, keywords, cadence
You set up your brand profile. Add up to a handful of tracked competitors with short descriptions of who they are. Add tracked keywords for the category. Pick the cadence: daily, weekly, or monthly. Setup is once-per-brand; from there the configuration travels with every scan.
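For reference, the once-per-brand configuration might look roughly like this. Field names and example values are illustrative placeholders, not the actual setup screen.

```python
# Illustrative placeholders only; the real configuration lives in the workspace.
from dataclasses import dataclass, field

@dataclass
class Competitor:
    name: str
    description: str                    # one line: who they are, who they serve

@dataclass
class BrandProfile:
    brand: str
    category: str
    competitors: list[Competitor] = field(default_factory=list)
    keywords: list[str] = field(default_factory=list)
    cadence: str = "weekly"             # "daily" | "weekly" | "monthly"

profile = BrandProfile(
    brand="Your Studio",                # placeholder name
    category="ghostwriting for founders and operators",
    competitors=[Competitor("Competitor A", "LinkedIn ghostwriting agency for SaaS founders")],
    keywords=["ghostwriter", "executive ghostwriting", "founder content"],
    cadence="weekly",
)
```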
Scan
Prompts generated, surfaces queried in parallel
One click generates the prompt library: roughly fifty prompts each across awareness, reputation, consideration, comparison, and recommendation. Another click runs every prompt across ChatGPT, Claude, Perplexity, and Gemini in parallel. A full scan completes in minutes, not hours. Real-time progress visible in the workspace.
Scheduled runs happen in the background on the cadence you set — no manual trigger required.
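The cadence logic itself is simple; the production scheduler runs on shared queue infrastructure, but the shape of the decision is roughly this sketch.

```python
# Sketch only: a real scheduler would use calendar months and a job queue.
from datetime import datetime, timedelta

CADENCE = {
    "daily": timedelta(days=1),
    "weekly": timedelta(weeks=1),
    "monthly": timedelta(days=30),      # simplification for the sketch
}

def next_run(last_run: datetime, cadence: str) -> datetime:
    return last_run + CADENCE[cadence]

def is_due(last_run: datetime, cadence: str, now: datetime | None = None) -> bool:
    return (now or datetime.now()) >= next_run(last_run, cadence)
```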
Analyze
Per-response classification, aggregate metrics, gap surfacing
Every AI response is classified across the dimensions that matter: mention, position, sentiment, hallucination, clarification. The dashboard rolls everything up into the metrics you'll track week over week: visibility score, share of voice, sentiment, hallucination rate, position. The content gap analyzer surfaces the exact prompts and funnel stages where competitors win and you don't.
Act
Plan generation, content production, technical recommendations
One click turns the gap data into a sequenced action plan: priority order, quick wins first, longer-term plays mapped. From any recommendation, generate the article or the brief. From any competitor, generate a technical audit document. The next scan tells you whether the work is moving the metrics, and the loop continues.
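The loop closes on the scan-over-scan comparison. A sketch of the delta math behind the trend view, using metric names from this proposal and placeholder values:

```python
# Placeholder numbers; the metric keys follow this proposal's dashboard.
def deltas(previous: dict[str, float], current: dict[str, float]) -> dict[str, float]:
    return {k: round(current[k] - previous.get(k, 0.0), 4) for k in current}

last_scan = {"visibility_score": 0.18, "share_of_voice": 0.12, "avg_position": 3.4}
this_scan = {"visibility_score": 0.24, "share_of_voice": 0.15, "avg_position": 2.9}

print(deltas(last_scan, this_scan))
# {'visibility_score': 0.06, 'share_of_voice': 0.03, 'avg_position': -0.5}
```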
Plug-and-play means three things stop being separate jobs:
- Manual hundred-prompt audits across multiple AI surfaces → one scanning engine. The work that, done properly by hand, takes days per cycle (running the prompts, capturing the responses, classifying each one, aggregating the result) collapses to one click and roughly nine minutes of compute. The audit moves from "we should do that someday" to "it ran on Monday morning, the brief is in our inbox."
- Generic SEO advice → AI-search-specific content roadmaps. The recommendations don't come from a checklist — they come from the actual gaps the scan surfaces in your category. Each one tied to a specific prompt where a specific competitor is winning, with the specific piece that would close it. The output is ranked, sequenced, and tied back to a metric the next scan will measure.
- Treating visibility as a content problem → treating it as a three-layer problem. The platform makes the technical layers legible: whether AI crawlers can actually access the site (the most common silent failure: the site looks fine to humans and is invisible to bots), whether the structure is shaped for AI understanding, and whether the content itself answers the surfaced questions. Most advice covers only the third layer. The agent can audit all three, on your site or a competitor's, and write up the findings.
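On the first of those three layers, the check is mechanical enough to sketch: read robots.txt and ask whether the published AI crawler user agents are allowed through. The user-agent list changes over time and the platform's audit agent goes well beyond this, so treat it as an illustration only.

```python
# Sketch of the crawler-accessibility check only; the audit agent also covers
# structure and content. User-agent strings are the commonly published ones
# and change over time.
from urllib import robotparser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def crawler_access(site: str) -> dict[str, bool]:
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()                                        # fetches the live robots.txt
    return {bot: rp.can_fetch(bot, site) for bot in AI_CRAWLERS}

# crawler_access("https://example.com")
# -> {"GPTBot": True, "ClaudeBot": True, "PerplexityBot": True}
```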
What doesn't change: your taste, your voice, your editorial judgment. The platform measures and recommends; you decide what's worth writing and how it should sound. The point isn't to remove you from the loop — it's to make sure the loop spends your time only on the parts that require it.
Plug-and-play deploys in roughly two weeks. Each phase ends with something concrete: the engine deployed and scanning by the end of the first phase, tuned for your category by the end of the second, and fully handed over by the end of the third.
Deploy & configure
Days 1–5
What you have at the end of this week: the engine deployed under your accounts, brand profiles configured for you and seeded for the kinds of clients you plan to onboard, your tracked competitor set and category keywords loaded, prompt libraries generated and reviewed, and the first full scan run end-to-end across all four AI surfaces.
Tune the analysis & the plan templates
Days 5–9
What you have at the end of this phase: the analysis pipelines calibrated for the ghostwriting category, with the right competitors recognized, the right keywords weighted, and the funnel-stage definitions tightened to match how your prospects actually research. Content plan templates customized to the kinds of outputs you and your clients will produce. The technical-audit agent calibrated on a sample competitor site so you know what its output looks like before you depend on it.
Walkthrough, training & handover
Days 9–14
What you have at the end of this phase: a recorded walkthrough of every screen, a live training call covering the daily and weekly operating cadence (how to run a scan, how to read the dashboard, how to ship a plan to a client), full credentials handover, and 30 days of post-launch async support if anything we built breaks. From this point on, the engine is yours to run.
Two operating models on the same engine. The choice is whether you want to drive it yourself after handover, or have us tune it alongside you as you start running it for clients. Pricing in EUR.
Recommended for the start
Tier 1 is the plug-and-play option. We deploy the engine under your accounts, configure it for your category, run the first full scan with you, and hand you the keys. Right if you want to drive it yourself, learn it from the inside, and decide later whether to extend.
- Engine deployed under your accounts — you own the deployment forever
- Brand profile configured for you
- Competitor set and category keywords loaded; prompt libraries generated across all five funnel stages
- First full scan run end-to-end with you, across all four AI surfaces
- Analysis pipelines tuned to the ghostwriting category
- Content plan templates customized to your output style
- Technical-audit agent calibrated and tested on a sample competitor
- Cadence (daily / weekly / monthly) configured per brand
- Live training call + recorded walkthrough of every screen
- 30 days of post-launch async support
Tier 2 is the same build plus a light monthly retainer. We tune scans as your roster grows, refine the prompt library and competitor sets as your category shifts, produce a monthly read-out on visibility movement, and adjust the technical recommendation templates as you see what's landing. Right if you'd rather have us tuning the engine alongside you than run it entirely solo after handover.
- Everything in Tier 1 setup, plus ongoing system support
- Monthly scan tuning: prompt library refinement, competitor-set updates, keyword profile expansion as your category shifts
- Monthly read-out: what's moving, what isn't, what to ship next
- Technical recommendation template refinement based on what's actually closing gaps
- Bi-weekly working call: what's working, what's not, what to ship next
- Async support, prioritized response
Operating cost (pass-through, paid by you)
The engine runs on infrastructure billed directly to your accounts — no markup. Everything under your ownership.
- AI generation — usage-based, runs on your own provider tokens (incl. your existing Claude where it applies): variable
- Database & queue hosting — shared infrastructure for scan persistence and scheduled runs: €20–50/mo
- Domain / hosting — if you want the engine on your own subdomain: ~€15/year
We build it right, or we fix it.
30 days of post-launch support at no cost. If anything we built breaks, we fix it. No exceptions.
You approve every milestone.
Nothing moves forward without your written sign-off. If a deliverable doesn't match what we promised, we revise it until it does.
No surprises.
2 revision rounds per deliverable included. Timeline and scope locked once we start.
What we guarantee — and what we don't.
We guarantee the work: the engine deployed under your accounts, configured for your category, calibrated against your competitor and keyword sets, with scans running end-to-end across the four AI surfaces. We don't guarantee a specific visibility score — that depends on the writing, the site, and the technical foundations underneath it, all of which stay with you. The engine tells you exactly where you stand and exactly what to do about it; the editorial and technical decisions stay yours.
AI search is moving from "marginal discovery surface" to "primary discovery surface for high-intent buyers" inside the next eighteen months. The measurement layer is cheap to stand up now, while the window where you can build a baseline and start moving against it is still open. Eighteen months from now, every serious player in your category will be measuring this, and the buyers will expect their ghostwriter to be one of them.
Ready to move forward? Here's how we get going.
01 · Review & reply
Read through the proposal. Reply with your tier preference. If anything in scope, timeline, or pricing feels off, push back — we'd rather adjust now than ship the wrong shape of engagement.
02 · Deposit invoice
50% deposit on Tier 1. First month + setup on Tier 2. Payment confirms the start date and locks in the timeline.
03 · Account provisioning
We send a checklist for the accounts the engine runs against: AI provider tokens, database, scheduling infrastructure. All under your ownership, billed direct, you own everything.
04 · Kickoff call
60 minutes within 48 hours of deposit. We walk through your ICP, identify the competitor set and keyword profile, lock the funnel-stage definitions for the ghostwriting category, and decide which client brands (if any) to configure alongside your own at launch. We come with a pre-built starter; you sharpen it.
05 · Build & handover
Deployment and configuration in week 1; analysis tuning and template customization in week 2; recorded walkthrough plus live training before handover. From handover, the engine is yours to run.