Joshua Vogel

Customer Success leader · AI builder · Sydney, AU · Open to work

joshua.ad.vogel@gmail.com · LinkedIn · The CS PRESS

Senior CS operator. Current book: $10.5M ARR enterprise portfolio at 147% NRR, with 100% logo retention and $934K of upsell pipeline generated in two quarters. Across 10+ years I've built Customer Success programs from scratch, run $25M ARR books at 120%+ NRR, and led customer save plays recognized by industry panels, most recently with the 2025 Creative Customer Success Leader Award.

What's different about the past 18 months: I've built an AI-augmented operating model for Customer Success that returns hours of capacity per week to high-value customer engagement. Patterns I built for myself have been picked up across the broader Customer Success organization. The result is a working template for what AI-native Customer Success looks like in practice. Operator judgment in front, agentic infrastructure underneath. Open to bringing this operating model to the next team.

10+ years · CS leadership
$33M+ · ARR managed across career
147% · current net retention
1,200+ · customers served

The thesis

Reactive Customer Success vs proactive Customer Success.

Most CS organizations operate reactively: an inbox plus a Salesforce login. The teams hitting 130%+ NRR aren't doing more of the same. They're operating with a fundamentally different model. Here's what changes.

Reactive baseline

Where most CS functions live today.

Proactive operating model

The work I do, scaled across teams I've led.

Reactive: Inbox-driven engagement. Reply when prompted; manage what's loud, miss what's quiet.
Proactive: Scheduled cadence and signal-driven outreach. Quiet accounts surface as engagement gaps before they become churn risks.

Reactive: QBR data extracted manually the day before each meeting. The whole prep day burns on spreadsheets.
Proactive: Always-on telemetry feeding executive-ready charts. QBR cycle compressed 75%. Prep time goes to narrative and strategy, not data wrangling.

Reactive: Single CSM thread to the customer. One relationship, one perspective, one point of failure.
Proactive: Account pod model (CSM + AE + SE + RAM). Twelve-stakeholder customer mapped to a four-function vendor pod, single shared roadmap.

Reactive: Renewal scramble in the last 90 days. Sponsor change in month nine derails the conversation.
Proactive: Renewal posture from day one. Stakeholder map maintained; sponsor changes caught early; expansion conversation always on the table.

Reactive: CSM workflow ad hoc per person. Knowledge stays in heads; one departure resets the team.
Proactive: Codified workflows as reusable systems. Account memory, voice profile, governance rules. New CSMs onboard onto an operating standard, not into a fog.

Reactive: AI used as a side tool. ChatGPT for drafts. Tools added with no measurement.
Proactive: AI as production infrastructure. Cost-tiered model routing. Value reported quarterly. Capacity returned to high-leverage human work.

Selected work

What I've built and what it produced

Five systems I've built and run in production against an enterprise portfolio. Each started as a recurring problem in my own CSM workflow. Each produces measurable capacity, consistency, or coverage that wouldn't exist otherwise, and travels with me to the next team.

The Customer Success operating surface I run on

Hours per day of context-switching eliminated. Faster response to at-risk signals. Better preparation for executive customer conversations.

A single web surface that aggregates everything a senior CSM needs to make daily decisions: today's customer meetings with prep brief, escalations needing my attention, customer signals from the past 24 hours, my prioritized daily plan, and recap drafts after meetings.

Before this existed, the same context lived across 8 different tools and tabs. The cost of switching between them was the dominant tax on the role. Compressing that into one surface is the single biggest productivity unlock I've shipped.

Stack Cloudflare Workers · D1 · Pages · Custom sync-daemon · Daily production use
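
As a sketch of the shape (not the production code), the aggregation can be a single Cloudflare Worker route reading from D1. The table and field names below are illustrative assumptions, not the real schema:

```ts
// Hypothetical sketch: one Worker route assembling the morning view from D1.
interface Env {
  DB: D1Database; // D1 binding, typed via @cloudflare/workers-types
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    const today = new Date().toISOString().slice(0, 10);

    // Today's customer meetings, with prep status for each
    const meetings = await env.DB
      .prepare("SELECT account, start_time, type, prep_status FROM meetings WHERE date = ?1 ORDER BY start_time")
      .bind(today)
      .all();

    // Signals from the past 24 hours: escalations, engagement gaps, pipeline
    const signals = await env.DB
      .prepare("SELECT kind, account, summary FROM signals WHERE seen_at >= datetime('now', '-1 day')")
      .all();

    return Response.json({ meetings: meetings.results, signals: signals.results });
  },
};
```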

A library of repeatable Customer Success workflows

The same job, done in a fraction of the time, every time. Consistent customer experience regardless of CSM workload. Best-practice execution baked into the workflow itself, not relying on memory.

Fifty-plus self-contained workflows that handle the recurring jobs CS practitioners do manually: account research before customer meetings, follow-up emails after meetings, churn-risk scoring, escalation tracking, QBR data preparation, executive briefing decks, customer reactivation outreach.

CS practitioners spend an enormous share of their time on knowable, repeatable work. Capturing that work as reusable systems returns capacity to the part of the job AI can't do: the human relationship work that actually drives renewal and expansion.

Stack OpenCode + Anthropic Claude · Model Context Protocol (MCP) · Versioned, evaluated, self-improving over time
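
To make "self-contained, versioned, evaluated" concrete: a skill in this style can carry a small manifest the orchestrator reads before running it. The shape below is an illustration, not the library's actual format; the maturity values mirror the draft → tested → trial → crystallized progression described later:

```ts
// Illustrative manifest shape only; the real library's format isn't shown here.
type Maturity = "draft" | "tested" | "trial" | "crystallized";

interface SkillManifest {
  name: string;            // e.g. "csm-meeting-prep"
  version: string;         // git-versioned, semver
  maturity: Maturity;      // gate for what the skill is trusted to touch
  modelTier: 1 | 2 | 3;    // routing hint for the cost-tiered layer described below
  inputs: string[];        // what the skill needs to run
  neverAutomate: string[]; // hard guardrails inherited from the governance codex
}

const meetingPrep: SkillManifest = {
  name: "csm-meeting-prep",
  version: "1.4.0",
  maturity: "crystallized",
  modelTier: 2,
  inputs: ["account_slug", "meeting_id"],
  neverAutomate: ["pricing commitments", "contract language"],
};
```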

Patterns adopted across the broader CS organization

Tools I built for myself, picked up by peers across the global CS team where I work. The same time-saving and consistency benefits I get, scaled.

Multiple workflows from my personal library (a customer usage analytics skill, a QBR report generator, a repository bootstrap tool, a doc-drift audit skill) sanitized and contributed to my employer's central Customer Success tooling library. Reviewed via the same merge-request process used by engineering teams.

Internal AI tooling at scale is valuable when one practitioner's improvements compound across an organization, not when each person reinvents the wheel. Showing I can lead this kind of upstream contribution while still hitting my portfolio number is the differentiated profile.

Scope Maintainer-level access on the shared library · Active peer collaboration on a unified hub

An operating standard for AI in customer-facing work

AI assistance that's reliable enough to run against real customer accounts without erosion of trust, voice, or accuracy. Mistakes that would have damaged customer relationships, codified as guardrails before they happen.

A written-down, version-controlled set of rules and protocols defining how AI systems handle customer-facing work safely. Covers voice and tone discipline, when to escalate to a human, what to never automate, how to handle customer data, how to attribute work honestly, and how to surface uncertainty rather than fake confidence.

The biggest risk in AI-augmented customer-facing work isn't model capability. It's drift between AI behavior and the standards a senior operator would apply. This standard closes that gap operationally, not philosophically. It's the kind of governance work most organizations will need within the next 12 months and almost none have written down today.

Format ~3,000 lines of git-managed governance · Reviewed and tightened on a fortnightly autonomous audit cadence

An always-on layer that surfaces what matters

Important customer signals get caught early. The morning plan reflects current reality, not yesterday's. Nothing important falls through the cracks because it was buried in a notification stream.

Forty-plus scheduled jobs running before, during, and after my working day to surface relevant signals: meeting prep before customer calls, engagement scans across the book of business, escalation triage, end-of-day wrap-ups, fortnightly audits of what I might have missed.

CSM workflows are fundamentally driven by external signal. Operating reactively means missing signals. This layer makes the signals visible before I have to ask, which is what separates senior IC operating from junior IC reaction.

Reliability Auth-aware · Idempotent · Fail-safe · Production-grade
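
A minimal sketch of what idempotent and fail-safe mean in this layer, assuming a hypothetical job registry and dedupe key (the production scheduling runs on launchd/cron):

```ts
// Sketch of the reliability contract, not the production scheduler.
interface Job {
  name: string;
  schedule: string; // cron expression, e.g. "30 8 * * 1-5" for a morning brief
  run: () => Promise<void>;
}

const completed = new Set<string>(); // production would use a durable store

async function runOnce(job: Job, windowKey: string): Promise<void> {
  const key = `${job.name}:${windowKey}`; // e.g. "morning-brief:2026-05-08"
  if (completed.has(key)) return; // idempotent: a duplicate trigger is a no-op

  try {
    await job.run();
    completed.add(key); // only mark done on verified success
  } catch (err) {
    // fail-safe: surface the failure, leave the window unmarked for retry
    console.error(`[${job.name}] failed; will retry next trigger`, err);
  }
}
```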


Selected outputs

What the work actually looks like

Visualizations of the systems above. All examples use synthetic customer data; actual customer information is never shown publicly. The structure, cadence, and voice are real.

CSM Hub Dashboard · Daily view

The single surface I open every morning

Today's customer meetings · 3
09:30
Acme Industries
QBR · 6 attendees · PREP READY
11:00
Globex Logistics
Weekly sync · RECAP DUE
14:30
Helios Energy
Renewal kickoff · AT-RISK
Portfolio signals · last 24h
Engagement gaps · 3 accounts > 21 days quiet
Hot escalations · 2 unresolved · INC-1142, INC-1156
Renewals next 90 days · 7 accounts · $4.2M ARR
Pipeline activity · +$180K added (Q4 cross-sell)

Morning brief · 09:45 AEST · Daily output

How the day starts, before I do

FRIDAY · 2026-05-08 · 09:45 AEST · 7 customer signals · 3 meetings · 2 escalations

Top of mind today

  • Helios Energy renewal kickoff at 14:30. $1.2M annual contract; sponsor moved last quarter, new contact has 2 weeks of context. Brief draft prepped. Risk score: 6/10.
  • Acme Industries QBR at 09:30. Multi-product expansion ready to discuss. Bot Management adoption up 40% QoQ; ready to position Application Security upsell.
  • Globex Logistics weekly sync at 11:00. Recap from last week shows 3 unresolved action items. Following up on integration timeline.

Engagement gaps · 3 accounts

  • Polaris Financial: 24 days since last meaningful contact. Last 3 emails one-way (mine). Suggest re-engagement outreach.
  • Nimbus Software: 22 days quiet. Renewal in 73 days. Worth a check-in.
  • Atlas Retail: 21 days quiet. No meetings booked. Possible sponsor change; verify on LinkedIn.

Hot escalations · 2 unresolved

  • INC-1142 (Globex): 5 days unresolved. Ticket sitting on Engineering. Last update 48h ago. Worth nudging in QBR.
  • INC-1156 (Acme): 2 days unresolved. New escalation; product team aware. No action needed yet.

CS Skills Library · 54 production skills

The library I built one workflow at a time

Each skill is a self-contained, versioned, evaluated workflow that handles a recurring CSM job. Highlighted skills have been contributed upstream to the broader CS organization. Click any skill to see what it does and an example of what it produces.

43 of 54 shown. Each skill is git-versioned and follows a maturity progression: draft → tested → trial → crystallized.

QBR slide · Business priorities · Real rendered output

How I open every executive QBR

Before any usage data, before any product roadmap. The slide that says "I understand what your business is trying to do." Proves the research that earned the seat at the executive table.

Built from the sources every prepared CSM should be using: prior customer conversations, internal account history, recent public announcements, support escalation history, and stakeholder conversations across multiple customer teams. The slide below is a real rendered output from my QBR generator, run against an anonymized profile (customer name and team identifiers replaced with synthetic placeholders; structure, density, and rendering unchanged from production).

Most CSMs open with usage charts. The customer's CFO doesn't care about usage charts. They care whether you understand the business and whether their investment is working. This slide answers both questions in 30 seconds, and is the reason I get invited back to the executive table quarter after quarter.

Stack Custom HTML/CSS slide template · JSON-driven content · Synthesized from public filings + internal research

Cross-functional governance · Pod model

How I run accounts when one CSM isn't enough

Stakeholder-aligned outcomes instead of single-thread relationships. $2.5M upsell ARR delivered through pod execution in one role; 100% logo retention across high-risk enterprise accounts in another. The cross-functional motion is the durable lever, not the heroic CSM.

At the enterprise tier, no CSM gets to be a single point of contact. The customer has six departments and the vendor has six functions, and the work is keeping all twelve aligned on the same roadmap. I've stood up two versions of this: a "Rescue Squad" for high-risk accounts in my current role (CSM + AE + SE + RAM operating off a single shared plan), and an account pod model in a previous role that drove $2.5M in upsell ARR by replacing fragmented engagement with a unified cadence.

↓ Internal pod (vendor side)

CSM · Success outcomes · Adoption, retention, expansion narrative, exec relationships
AE · Commercial growth · Renewal, upsell, contract structure, exec sponsor
SE · Technical credibility · Architecture conversations, deep technical evaluation
RAM · Renewal mechanics · Quote-to-close, paper, procurement orchestration

Single shared customer roadmap

↑ Customer stakeholders (typical enterprise)

Executive (CTO / CISO / VP) · Outcomes, board narrative, exec sponsor
Platform / DevOps (daily operators) · Reliability, configuration, runbooks
Security (compliance + threat) · Posture, incident response, audit
Procurement / Finance (renewal owner) · Contract economics, paper cycle

Beyond the account pod, the work spans the whole vendor org: Finance (reconciled 5+ years of legacy revenue data with the CFO's team for 99.9% ARR accuracy), Product (Customer Advisory Board feedback that shaped roadmap), Engineering (escalations, feature requests), and Marketing (case study development). Senior CS isn't customer-facing. It's everywhere-facing.

Operating cadence Weekly internal pod sync · Monthly customer pod-to-pod · Quarterly executive QBR · Single shared notebook of record

Memory layer · Context continuity

Persistent memory architecture for AI-augmented Customer Success

Every conversation builds on the last. Account context survives sessions. Voice consistency persists across every AI interaction. The first time AI tooling actually compounds, instead of starting from zero every morning.

For most CSMs using AI tools, every session starts from a blank slate. Last week's customer conversation, the sponsor's preferences, the team's running concerns, prior commitments. All of it has to be re-established on every prompt. I built a persistent memory architecture that solves this: structured account memory, a voice profile that travels with me across all AI assistants, depth-control protocols, and session handoff patterns that make AI tooling actually compound over time.

Layer 1

Account memory

Per-customer JSON file. Stakeholders, running concerns, prior interactions, outstanding commitments, sentiment, sponsor changes. Updated after every customer touch.

Layer 2

Voice profile

Codified version of how I write. Tone calibration, vocabulary constraints, AI-tells to avoid, structural patterns. Applied automatically to every customer-facing draft.

Layer 3

Session handoff

Long work survives across AI sessions through structured handoff protocols. Resume yesterday's analysis exactly where it left off, with all context intact.

What an account memory entry actually looks like (synthetic):

"account": "Acme Industries", "slug": "acme-industries", "last_updated": "2026-05-08", "stakeholders": [ "name": "Sarah Chen", "role": "CTO", "tenure_in_role": "3 months", "engagement_style": "data-driven, prefers brevity", "outstanding": ["Bot Mgmt review by Tuesday"] ], "running_concerns": [ "Renewal at 90 days", "INC-1142 unresolved (engineering team aware)", "Sponsor change Feb 2026 (Sarah replaced Mark)" ], "voice_calibration": "tone": "warm but direct, no jargon", "avoid": ["dwelling on past missteps", "over-promising roadmap"] , "recent_interactions": [ "date": "2026-05-07", "type": "QBR", "summary": "expansion deferred; asked for Bot Mgmt deep-dive" ]

Memory is what separates a useful AI assistant from one you have to babysit. Every customer follow-up email I draft today reads the customer's last interaction, knows the sponsor by name, references prior commitments accurately, and applies my voice consistently. That's not magic. That's deliberate architecture, and it's the layer most CS teams haven't built yet.

Stack JSON-backed account memory · Cloudflare Agent Memory product (dogfooded) · Voice profile in markdown · Session handoff via structured project logs
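
A hedged sketch of the read path, using the synthetic entry above. The file locations and context format are assumptions for illustration:

```ts
// Sketch: load per-account memory plus the voice profile before any draft.
import { readFile } from "node:fs/promises";

interface AccountMemory {
  account: string;
  stakeholders: { name: string; role: string; outstanding: string[] }[];
  running_concerns: string[];
  voice_calibration: { tone: string; avoid: string[] };
}

async function buildDraftContext(slug: string): Promise<string> {
  const memory: AccountMemory = JSON.parse(
    await readFile(`memory/accounts/${slug}.json`, "utf8")
  );
  const voiceProfile = await readFile("memory/voice-profile.md", "utf8");

  // Everything a draft needs to continue yesterday's conversation accurately
  return [
    `Account: ${memory.account}`,
    `Open commitments: ${memory.stakeholders.flatMap(s => s.outstanding).join("; ")}`,
    `Running concerns: ${memory.running_concerns.join("; ")}`,
    `Tone: ${memory.voice_calibration.tone}. Avoid: ${memory.voice_calibration.avoid.join(", ")}`,
    voiceProfile,
  ].join("\n");
}
```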

Cost engineering · Model selection discipline

AI cost discipline through model-tier routing · Architecture designed

Cost reduction of 50-65% projected on operational AI spend, with no quality drop on customer-facing output. The architecture work and playbook to make it executable across an entire CS team.

Most AI tooling at the operator level uses a single premium model for everything. That works until the bills arrive. I designed a model-tier routing layer that classifies each skill by its actual reasoning requirement and routes accordingly: cheap models for the structured, deterministic, repetitive work; mid-tier for synthesis and drafting; premium models reserved for customer-facing judgment, voice-sensitive output, and architecture decisions.

Tier 1 · Cheap · Kimi-K, Haiku
Data extraction. Formatting. Deterministic transforms. Sub-task structuring. Telemetry parsing.
~50-60% of workload

Tier 2 · Balanced · Sonnet
Synthesis. Multi-step research. Structured drafts. Internal-comms framing. Brief-tier work.
~25-30% of workload

Tier 3 · Premium · Opus
Customer-facing emails. Voice-sensitive drafting. Architecture decisions. Final judgment calls.
~10-20% of workload

AI cost is moving from a small operational expense to a top-five budget line item across Customer Success organizations. The teams that invest in model-tier routing now will have 2-3x more usable AI capacity per dollar than teams that don't. The architecture and policy work to make this real are non-trivial: every skill needs a classified reasoning requirement, every routing rule needs to be verified for quality regression, and the team needs a playbook for when to override defaults. That foundation work is the high-leverage piece, and it's done.

Stack OpenCode model-pinning · Anthropic Claude (Opus, Sonnet, Haiku) · Kimi-K · Gateway-level routing · Per-skill model selection metadata
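
The routing policy itself is small code; the leverage is in classifying every skill correctly. A sketch with illustrative model identifiers:

```ts
// Sketch of the routing policy. Model names are placeholders, not exact IDs.
type Tier = 1 | 2 | 3;

const MODEL_FOR_TIER: Record<Tier, string> = {
  1: "kimi-k",        // cheap: extraction, formatting, deterministic transforms
  2: "claude-sonnet", // balanced: synthesis, research, internal drafts
  3: "claude-opus",   // premium: customer-facing voice and final judgment
};

function resolveModel(skillTier: Tier, override?: Tier): string {
  // Defaults win unless a human deliberately overrides (and the override is logged)
  return MODEL_FOR_TIER[override ?? skillTier];
}

resolveModel(1); // telemetry parsing routes cheap
resolveModel(3); // a customer-facing email always routes premium
```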

Measurement · ROI defensibility

Quantifying the value of AI-augmented Customer Success · Architecture designed

Designed an executive-ready measurement framework that correlates AI investment with operational outcomes (hours returned, retention impact, expansion correlation, response-time improvements). The answer to "what did AI deliver this quarter?" with numbers a CFO takes seriously.

AI investment without measurement is a cost center waiting to be cut. I designed a structured value-reporting framework that pulls execution telemetry from every skill and automation, correlates it with business outcomes from the CRM, and produces monthly executive reports calibrated for finance and CS leadership audiences. The framework also surfaces underutilized skills (capacity not yet captured) and high-leverage skills (where investment is paying back fastest), turning the report into a planning tool, not just a scorecard.

CS Value Report · Q4 2026 (illustrative)

8 CSMs · 44 enterprise accounts

Hours returned to customer engagement ~720 hrs
QBR cycle compression (vs FY25 baseline) -75%
Customer follow-up time-to-send -64%
NRR delta vs portfolios without AI tooling +8 pts
Underutilized skills flagged for re-investment 7 skills

Across the CS industry, finance teams are starting to ask hard questions about AI spend. The CS leaders who can answer "here's what AI delivered last quarter" with concrete numbers will keep their budgets and grow them. Those who can't will see investment cut. This is measurement work that becomes existential within 12 months, and the framework is the hard part. The reports themselves are the easy output.

Stack Skill execution telemetry · CRM outcome correlation · Monthly executive report generator · Skill maturity scoring
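
The framework rests on one primitive: every skill run emits a telemetry event that can be rolled up into the report and joined to CRM outcomes. The event shape below is illustrative, not the production schema:

```ts
// Illustrative telemetry event and two report-level rollups.
interface SkillExecution {
  skill: string;
  startedAt: string;    // ISO timestamp
  minutesSaved: number; // vs. the measured manual baseline for this job
  outcomeRef?: string;  // CRM record the run touched (renewal, opportunity, case)
}

// Headline number for the quarterly report: hours returned to engagement
function hoursReturned(events: SkillExecution[]): number {
  return events.reduce((sum, e) => sum + e.minutesSaved, 0) / 60;
}

// Planning signal: skills in the catalog that nobody is running yet
function underutilized(events: SkillExecution[], catalog: string[]): string[] {
  const used = new Set(events.map(e => e.skill));
  return catalog.filter(skill => !used.has(skill));
}
```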

Agentic Codex · ~3,000 lines of governance

How AI handles customer-facing work, written down

A version-controlled rulebook that AI assistants read at every customer-facing touchpoint. Excerpts from the table of contents:

01
CRM is the source of truth for the account team. No inferences. Helping doesn't transfer ownership.
02
Customer-facing email output format. Plain text in fenced code blocks; structured by ALL CAPS section headers and bullet items; no markdown rendering tricks.
03
Date-day verification, non-negotiable. Every date in customer output must be machine-verified against `cal`, never inferred.
04
Time-of-day verification. Run `date` before stating elapsed time, current time, or remaining time. Hallucinated time is the most common LLM failure mode.
05
Tool-first context retrieval. When the answer exists in a tool, grab it. Don't ask the user for what an MCP can answer in seconds.
06
Declaration discipline. Never claim "done" without verifying user-level success, not just file-state success.
07
Reproduce-first debugging. Before opening any source file in response to a UI bug, reproduce the symptom with DevTools open.
08
Sub-agent delegation gates. 5 mechanical gates fire BEFORE inline tool calls; main session reserved for judgment-heavy work.
12 more rules covering security, attribution, voice and tone, customer-artefact destination, sensitive file handling, and process-state-vs-file-state debugging.
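
To show what a rule like 03 looks like as an operational gate rather than a guideline: a pre-send check can machine-verify every date-day pair in a draft before it reaches a customer. This sketch computes the weekday in-process instead of shelling out to `cal`; the pattern and helper names are illustrative:

```ts
// Sketch of a date-day verification gate for customer-facing drafts.
const WEEKDAYS = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"];

function verifyDateDayPairs(draft: string): string[] {
  const errors: string[] = [];
  // Matches pairs like "Friday, 2026-05-08" in the draft text
  const pattern = /(\w+day),\s*(\d{4}-\d{2}-\d{2})/g;
  for (const [, claimedDay, iso] of draft.matchAll(pattern)) {
    const actualDay = WEEKDAYS[new Date(`${iso}T00:00:00`).getDay()];
    if (actualDay !== claimedDay) {
      errors.push(`"${claimedDay}, ${iso}" is wrong: ${iso} is a ${actualDay}`);
    }
  }
  return errors; // non-empty → block the send, escalate to the human
}
```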

QBR report generator · End-to-end deck automation

The biggest single capacity unlock I've shipped

What used to take 4-6 hours of manual chart work per QBR now compresses by 75%. Across a 44-account enterprise portfolio that's 150-180 hours returned per quarter, or roughly four full working weeks of senior CSM time redirected from spreadsheet wrangling to strategic preparation, executive relationship work, and team coaching.

Most CSMs lose the day before a QBR to data extraction, copy-paste, chart formatting, and slide assembly. The QBR report generator turns that day into a 30-minute pipeline run: telemetry pulled, charts rendered, slides assembled, deck exported as PDF and PPTX. The CSM's job becomes the only thing the customer actually values: framing the story, refining the narrative, anticipating the executive question.

The dollar math is real. At a fully-loaded senior-CSM cost, the time recovered is six-figure annual capacity per CSM, and it compounds across a team. But the bigger impact is qualitative. CSMs who aren't burning the day before a QBR show up to the meeting prepared to think, not just present. That's the difference between a tactical reporting cadence and a strategic partnership.

The charts below are real renders from the generator, anonymized for public display. Data shapes, peak values, and visual treatment are unchanged from production.

Stack Node.js + Puppeteer (headless Chrome) · Custom HTML/CSS chart templates · JSON-driven data model · 14-slide deck output, PDF + PPTX exported
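
The render step is plain Node.js + Puppeteer. A sketch under stated assumptions (the template path, data-injection marker, and slide dimensions are illustrative; PPTX export would be a separate step):

```ts
// Sketch: render a JSON-driven HTML deck to PDF with headless Chrome.
import puppeteer from "puppeteer";
import { readFile } from "node:fs/promises";

async function renderDeck(dataPath: string, outPath: string): Promise<void> {
  const data = JSON.parse(await readFile(dataPath, "utf8"));
  const template = await readFile("templates/qbr-deck.html", "utf8");

  // JSON-driven content: inject the data object before the template's scripts run
  const html = template.replace(
    "/*__DATA__*/",
    `window.DECK_DATA = ${JSON.stringify(data)};`
  );

  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setContent(html, { waitUntil: "networkidle0" });
  // 16:9 slide geometry, one slide per page
  await page.pdf({ path: outPath, width: "13.33in", height: "7.5in", printBackground: true });
  await browser.close();
}

await renderDeck("data/acme-q4.json", "out/acme-q4-qbr.pdf");
```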

aboutGOLF · 2023–2025 · Director of Customer Success

Salesforce-native CS operating system, built from scratch

98% logo retention. 44% reactivation of churned ARR ($400K). 70% of the subscription base captured on auto-pay. 99.9% ARR record accuracy in partnership with Finance.

I walked into a Customer Success function with no playbook: no health scoring, no renewal automation, no engagement tracking, no executive reporting. Over 26 months I designed and shipped the Salesforce-native CS operating system that ran 1,200 customer accounts. Custom objects for customer journeys. Formula fields for a 1-10 health score aggregating NPS, CSAT, CES, and engagement data. Automation flows for renewal kickoffs and churn-risk alerts. Executive dashboards in Salesforce + Power BI that gave the C-suite real-time visibility for the first time.

Same operator instinct as the AI builds above, applied with a CRM-native stack and no AI to lean on. Building Customer Success from scratch in a SaaS environment that hasn't run CS before is a genuinely different skill from optimizing an existing CS function. It's the work that earned the 2025 Creative Customer Success Leader Award.
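
For readability, here is the aggregation logic sketched in TypeScript; the production version lived in Salesforce formula fields, and the weights and normalizations below are illustrative:

```ts
// Sketch of a 1-10 health score aggregating NPS, CSAT, CES, and engagement.
interface HealthInputs {
  nps: number;        // -100..100
  csat: number;       // 1..5
  ces: number;        // 1..7
  engagement: number; // 0..1, derived from meeting/email/training activity
}

function healthScore(h: HealthInputs): number {
  const nps = (h.nps + 100) / 200;  // normalize each input to 0..1
  const csat = (h.csat - 1) / 4;
  const ces = (h.ces - 1) / 6;
  const raw = 0.3 * nps + 0.3 * csat + 0.2 * ces + 0.2 * h.engagement;
  return Math.round((raw * 9 + 1) * 10) / 10; // map 0..1 onto 1.0..10.0
}

healthScore({ nps: 40, csat: 4.5, ces: 5, engagement: 0.6 }); // → 7.5
```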

Acme Industries · Account 360
$2.4M ARR · Renewal Mar 2026

Health Score

7.5 / 10
↑ +0.8 last 30 days · trending up

Engagement (last 30 days)

14 emails · 3 meetings · 2 training sessions
Active sponsor engagement

Voice of Customer

NPS 8 (promoter) · CSAT 4.7 / 5 · CES 5 / 7

Renewal status

Auto-pay configured · Renews 14 Mar 2026
+120 days renewal runway · QBR scheduled Dec 12

Next actions

  • Q4 QBR scheduled Dec 12; prep brief drafted by csm-meeting-prep
  • Expansion opportunity: aG Leagues platform pilot (estimated $180K ARR)
  • Health score improved 0.8 in 30 days; share with sponsor in Q4 review
22 mo Tenure · $2.4M ARR · 100% Adoption

Stack Salesforce (custom objects, formula fields, automation flows) · QuickBooks · DocuSign · Power BI · HubSpot · Pardot

System architecture

How the pieces fit together

SIGNAL: Customer signals (Email · Calendar · Slack) · CRM systems (Salesforce · CRM data) · Escalation streams (Jira · PagerDuty) · Schedules + cadence (launchd · cron)
ORCHESTRATION: OpenCode + Anthropic Claude (MCP) · 54 production skills · Multi-agent orchestration · Agentic codex governance · 42 scheduled jobs · Self-healing maturity progression
INFRASTRUCTURE: Cloudflare Workers (sync-daemon · skill-bridge) · Cloudflare D1 (source of truth) · Cloudflare Pages (dashboard frontend)
SURFACE: CSM Hub Dashboard · Daily decisions, single surface

User → Signal collection → Agentic orchestration → Cloudflare infrastructure → Operating surface


Career snapshot

Where I've worked

2025 — Now
Cloudflare · Customer Success Manager (Enterprise) · Sydney, AU
$10.5M ARR @ 147% NRR · 44 enterprise accounts · $934K Q3-Q4 pipeline · 100% logo retention · QBR cycle compressed 75% via AI tooling I built
2023 — 2025
aboutGOLF · Director of Customer Success and Support · US (Remote)
Built CS from scratch · 1,200 accounts · 98% retention · 44% churn reactivation ($400K) · $2.5M upsell ARR · 1-10 health scoring system in Salesforce · Cart-to-Curb e-commerce automation
2019 — 2023
WithYouWithMe · Head of Enterprise Account Management · Sydney, AU
$25M ARR @ 120% NRR · Promoted twice in 3.5 years · Grew Accenture from $1.3M → $5M ARR in 90 days · Led an 8-person CSM team across global expansion (UK + Canada launches)
2012 — 2019
U.S. Navy · IT Infrastructure Project Manager · Naples, Italy
140+ infrastructure projects across EMEA · Navy Achievement Medal · DISA Facility Control Office of the Year (2017) · The technical foundation underneath every CS role since

↓ Download resume (PDF)


Voice and recognition

Writing, speaking, awards


Get in touch

Talk to me

I'm actively exploring next-chapter opportunities in senior Customer Success leadership, AI-native CS, and roles that bridge customer relationship work with AI infrastructure. The fastest path to a conversation is a direct email.

joshua.ad.vogel@gmail.com  ·  LinkedIn  ·  Subscribe to The CS PRESS

Sydney-based with Australian Permanent Residency. Open to senior IC and leadership conversations in Customer Success and AI-native CS / Solutions Engineering globally, on-site in Sydney, hybrid, or fully remote.