Senior CS operator. Most recent book of business: a $10.5M ARR enterprise
portfolio at 147% NRR, with 100% logo retention and
$934K of upsell pipeline generated in two quarters.
Across 10+ years I've built Customer Success programs from scratch,
run $25M ARR books at 120%+ NRR, and led customer save plays recognized
by industry panels, most recently with the
2025 Creative Customer Success Leader Award.
What's different about the past 18 months: I've built an AI-augmented operating
model for Customer Success that returns hours of capacity per week to
high-value customer engagement. Patterns I built for myself have been picked up
across the broader Customer Success organization. The result is a working
template for what AI-native Customer Success looks like in practice. Operator
judgment in front, agentic infrastructure underneath.
Open to bringing this operating model to the next team.
10+ Years CS Leadership
$33M+ ARR Managed Across Career
147% Current Net Retention
1,200+ Customers Served
The thesis
Reactive Customer Success vs proactive Customer Success.
Most CS organizations operate reactively: an inbox plus a Salesforce login.
The teams hitting 130%+ NRR aren't doing more of the same. They're operating
with a fundamentally different model. Here's what changes.
Reactive baseline
Where most CS functions live today.
Proactive operating model
The work I do, scaled across teams I've led.
Inbox-driven engagement. Reply when prompted; manage what's loud, miss what's quiet.
Scheduled cadence and signal-driven outreach. Quiet accounts surface as engagement gaps before they become churn risks.
QBR data extracted manually the day before each meeting. The whole prep day burns on spreadsheets.
Always-on telemetry feeding executive-ready charts. QBR cycle compressed 75%. Prep time goes to narrative and strategy, not data wrangling.
Single CSM thread to the customer. One relationship, one perspective, one point of failure.
Account pod model (CSM + AE + SE + RAM). Twelve-stakeholder customer mapped to four-function vendor pod, single shared roadmap.
Renewal scramble in the last 90 days. Sponsor change in month nine derails the conversation.
Renewal posture from day one. Stakeholder map maintained; sponsor changes caught early; expansion conversation always on the table.
CSM workflow ad hoc per person. Knowledge stays in heads; one departure resets the team.
Codified workflows as reusable systems. Account memory, voice profile, governance rules. New CSMs onboard onto an operating standard, not into a fog.
AI used as a side tool. ChatGPT for drafts. Tools added with no measurement.
AI as production infrastructure. Cost-tiered model routing. Value reported quarterly. Capacity returned to high-leverage human work.
Selected work
What I've built and what it produced
Five systems I've built and run in production against an enterprise portfolio.
Each started as a recurring problem in my own CSM workflow. Each produces
measurable capacity, consistency, or coverage that wouldn't exist otherwise,
and travels with me to the next team.
The Customer Success operating surface I run on
Hours per day of context-switching eliminated. Faster response to at-risk
signals. Better preparation for executive customer conversations.
A single web surface that aggregates everything a senior CSM needs to make
daily decisions: today's customer meetings with prep brief, escalations
needing my attention, customer signals from the past 24 hours, my prioritized
daily plan, and recap drafts after meetings.
Before this existed, the same context lived across 8 different tools and tabs.
The cost of switching between them was the dominant tax on the role.
Compressing that into one surface is the single biggest productivity unlock
I've shipped.
Stack
Cloudflare Workers · D1 · Pages
· Custom sync-daemon
· Daily production use
A library of repeatable Customer Success workflows
The same job, done in a fraction of the time, every time. Consistent customer
experience regardless of CSM workload. Best-practice execution baked into the
workflow itself, not relying on memory.
Fifty-plus self-contained workflows that handle the recurring jobs CS
practitioners do manually: account research before customer meetings, follow-up
emails after meetings, churn-risk scoring, escalation tracking, QBR data
preparation, executive briefing decks, customer reactivation outreach.
CS practitioners spend an enormous share of their time on knowable, repeatable
work. Capturing that work as reusable systems returns capacity to the part of
the job AI can't do: the human relationship work that actually drives renewal
and expansion.
Stack
OpenCode + Anthropic Claude ·
Model Context Protocol (MCP) ·
Versioned, evaluated, self-improving over time
Patterns adopted across the broader CS organization
Tools I built for myself, picked up by peers across the global CS team
where I work. The same time-saving and consistency benefits I get, scaled.
Multiple workflows from my personal library (a customer usage analytics
skill, a QBR report generator, a repository bootstrap tool, a doc-drift
audit skill) sanitized and contributed to my employer's central Customer
Success tooling library. Reviewed via the same merge-request process
used by engineering teams.
Internal AI tooling at scale is valuable when one practitioner's improvements
compound across an organization, not when each person reinvents the wheel.
Showing I can lead this kind of upstream contribution while still hitting
my portfolio number is the differentiated profile.
Scope
Maintainer-level access on the shared library
· Active peer collaboration on a unified hub
An operating standard for AI in customer-facing work
AI assistance that's reliable enough to run against real customer accounts
without erosion of trust, voice, or accuracy. Mistakes that would have
damaged customer relationships, codified as guardrails before they happen.
A written-down, version-controlled set of rules and protocols defining how
AI systems handle customer-facing work safely. Covers voice and tone
discipline, when to escalate to a human, what to never automate, how to
handle customer data, how to attribute work honestly, and how to surface
uncertainty rather than fake confidence.
The biggest risk in AI-augmented customer-facing work isn't model capability.
It's drift between AI behavior and the standards a senior operator would apply.
This standard closes that gap operationally, not philosophically. It's the
kind of governance work most organizations will need within the next 12 months
and almost none have written down today.
Format
~3,000 lines of git-managed governance
· Reviewed and tightened on a fortnightly autonomous
audit cadence
An always-on layer that surfaces what matters
Important customer signals get caught early. The morning plan reflects current
reality, not yesterday's. Nothing important falls through the cracks because
it was buried in a notification stream.
Forty-plus scheduled jobs running before, during, and after my working day to
surface relevant signals: meeting prep before customer calls, engagement scans
across the book of business, escalation triage, end-of-day wrap-ups,
fortnightly audits of what I might have missed.
CSM workflows are fundamentally driven by external signal. Operating reactively
means missing signals. This layer makes the signals visible before I have to
ask, which is what separates senior IC operating from junior IC reaction.
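The gap-detection logic itself is simple; a minimal Python sketch, with thresholds, field names, and the synthetic accounts all illustrative rather than the production values:

```python
from datetime import date

# Illustrative thresholds; real values would be tuned per segment.
QUIET_DAYS = 21          # days of silence before an account is flagged
RENEWAL_WINDOW = 90      # renewal proximity that raises urgency

def engagement_gaps(accounts, today):
    """Flag quiet accounts, ordered by renewal urgency."""
    flagged = []
    for a in accounts:
        quiet = (today - a["last_contact"]).days
        if quiet >= QUIET_DAYS:
            days_to_renewal = (a["renewal"] - today).days
            flagged.append({
                "account": a["name"],
                "quiet_days": quiet,
                "days_to_renewal": days_to_renewal,
                "urgent": days_to_renewal <= RENEWAL_WINDOW,
            })
    # Most urgent first: renewal proximity, then length of silence.
    flagged.sort(key=lambda f: (f["days_to_renewal"], -f["quiet_days"]))
    return flagged

gaps = engagement_gaps(
    [
        {"name": "Polaris Financial", "last_contact": date(2026, 4, 14), "renewal": date(2026, 11, 1)},
        {"name": "Nimbus Software", "last_contact": date(2026, 4, 16), "renewal": date(2026, 7, 20)},
        {"name": "Atlas Retail", "last_contact": date(2026, 5, 6), "renewal": date(2026, 8, 1)},
    ],
    today=date(2026, 5, 8),
)
```

Renewal proximity sorts first because a quiet account inside the renewal window is a different class of risk than one with months of runway.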
Five visualizations of the systems above. All examples use synthetic customer
data. Actual customer information is never shown publicly. The structure,
cadence, and voice are real.
Helios Energy renewal kickoff at 14:30. $1.2M annual contract; sponsor moved last quarter, new contact has 2 weeks of context. Brief draft prepped. Risk score: 6/10.
Acme Industries QBR at 09:30. Multi-product expansion ready to discuss. Bot Management adoption up 40% QoQ; ready to position Application Security upsell.
Globex Logistics weekly sync at 11:00. Recap from last week shows 3 unresolved action items. Following up on integration timeline.
Engagement gaps · 3 accounts
Polaris Financial: 24 days since last meaningful contact. Last 3 emails one-way (mine). Suggest re-engagement outreach.
Nimbus Software: 22 days quiet. Renewal in 73 days. Worth a check-in.
Atlas Retail: 21 days quiet. No meetings booked. Possible sponsor change; verify on LinkedIn.
Hot escalations · 2 unresolved
INC-1142 (Globex): 5 days unresolved. Ticket sitting on Engineering. Last update 48h ago. Worth nudging in QBR.
INC-1156 (Acme): 2 days unresolved. New escalation; product team aware. No action needed yet.
CS Skills Library · 54 production skills
The library I built one workflow at a time
Each skill is a self-contained, versioned, evaluated workflow that handles
a recurring CSM job. Highlighted skills have been contributed upstream
to the broader CS organization. Click any skill to see
what it does and an example of what it produces.
43 of 54 shown. Each skill is git-versioned and follows a maturity
progression: draft → tested → trial → crystallized.
Daily orchestration Featured · Contributed upstream
csm-morning-brief
Runs: Every weekday at 9:45 AM AEST
Prioritized daily briefing of meetings, escalations, customer signals, and engagement gaps. Pulls from calendar, email, escalation systems, and account memory. Surfaces what changed in the past 24 hours and prioritizes the day before I open my laptop.
Example output
FRIDAY 2026-05-08 09:45 AEST · 7 customer signals · 3 meetings · 2 escalations
TOP OF MIND TODAY
• Helios Energy renewal kickoff at 14:30. $1.2M annual; sponsor moved last quarter, new contact has 2 weeks of context. Brief draft prepped. Risk score: 6/10.
• Acme Industries QBR at 09:30. Bot Management adoption up 40% QoQ; ready to position Application Security upsell.
ENGAGEMENT GAPS · 3 accounts
• Polaris Financial: 24 days quiet. Re-engagement outreach.
• Nimbus Software: 22 days quiet. Renewal in 73 days.
Daily orchestration
csm-midday
Runs: Every weekday at 12:00 PM
Mid-day checkpoint. Reconciles what got done against the morning plan, scans for new inbound signals, re-prioritizes the afternoon, and surfaces anything overdue.
Example output
MIDDAY · 12:00 PM · 4 of 7 brief items addressed
BLOCKED
• Helios renewal: waiting on legal review (raised at 10:15)
• Acme expansion: SE availability for technical scoping
NEW INBOUND
• Globex CTO replied to Wednesday brief: positive
• 1 new escalation: INC-1162
Daily orchestration
csm-eod-wrap
Runs: Every weekday at 5:00 PM
End-of-day wrap. Reviews the meetings I had, drafts any missing follow-up emails, extracts action items, and checks on-site attendance for the next 7 days.
Example output
EOD WRAP · 17:00 AEST · 3 meetings · 2 follow-ups drafted
MEETINGS RECAPPED
• Acme QBR (09:30): expansion conversation landed. Drafted follow-up to CFO
• Helios renewal (14:30): risk reduced; sponsor introduced to AE
ACTION ITEMS CARRIED FORWARD
• Get SE on Acme call next Tuesday
• Draft executive briefing memo for Helios board (Tuesday)
Daily orchestration Featured · Contributed upstream
csm-meeting-prep
Runs: Pre-meeting trigger or scheduled 24h ahead
Pre-meeting briefing. Reads the calendar 24h ahead, pulls prior conversation history, recent customer signals, account memory, escalation status, and produces a one-pager calibrated to the meeting type (kickoff, QBR, executive briefing, follow-up).
Example output
MEETING PREP · Helios Energy renewal kickoff · 14:30
WHO
New sponsor: Sarah Chen (CTO, joined Q1). Previous: Mark Tao (departed Feb).
CONTEXT
· Renewal due 90 days. $1.2M ACV.
· Escalation INC-1142 unresolved 5 days; raise gently.
· Last QBR (Nov): 8/10 satisfaction, expansion conversation deferred.
MEETING OBJECTIVES
1. Establish relationship with Sarah
2. Confirm renewal trajectory
3. Surface INC-1142 if not raised first
Daily orchestration Featured · Contributed upstream
csm-meeting-recap
Runs: Auto-fires 20 minutes after each customer-facing meeting
Post-meeting recap generator. Reads the meeting transcript, extracts discussed topics, action items, and key decisions, filters out internal-only content, and produces a customer-ready recap with markdown, email draft, and PDF-ready data.
Example output
MEETING RECAP · Acme Industries Q4 QBR · 09:30 AEST
WHAT WE DISCUSSED
· Bot Management adoption (40% QoQ growth)
· Application Security upsell positioning
· Q1 roadmap alignment
ACTION ITEMS
· Josh: send Bot Management adoption summary by Tuesday
· Sarah Chen: bring Head of Security into next conversation
DRAFT EMAIL READY (in Gmail Drafts)
Customer engagement Featured · Contributed upstream
csm-customer-followup
Runs: Triggered by a meeting transcript or direct request
Drafts customer follow-up emails from meeting transcripts. Cross-references prior conversations, verifies live system state, applies my voice profile, surfaces any factual claims for verification, and delivers as a Gmail draft. Voice-aligned, not AI-tell-laden.
Example output
TO: sarah.chen@acme.example
SUBJECT: Following up · Q4 QBR
Hi Sarah,
Thanks for the time today. Three things from our conversation worth tracking:
· Bot Management has come a long way since Q3 (40% QoQ adoption). I'll send the snapshot by Tuesday.
· On Application Security — happy to scope a deeper conversation with our SE team next week if useful.
· INC-1142 — pinging product engineering tomorrow for a status update; you'll hear from me by EOD.
Let me know if I missed anything.
Best,
Josh
Customer engagement Featured · Contributed upstream
csm-account-research
Runs: Pre-engagement, ad-hoc
Multi-source account research. Pulls public news, recent product announcements, leadership changes, industry context, and combines with internal account history (prior QBRs, escalations, support tickets) to produce a one-pager that gets a CSM up to speed on a customer in 5 minutes.
Example output
ACME INDUSTRIES · ACCOUNT BRIEF
WHAT THEY DO
National specialist provider of financial services solutions to mid-market and enterprise customers.
RECENT NEWS
· Q3 earnings beat (revenue +12% YoY)
· New CTO appointed Feb 2026 (Sarah Chen, ex-Stripe)
· Announced AI-platform initiative; mentioned cloud spend in earnings call
INTERNAL HISTORY
· Customer 22 months · 17 products · Premium B support
· Last 3 QBRs: positive trajectory, expansion deferred twice
· 0 unresolved escalations as of today
Customer engagement
csm-churn-risk
Runs: Daily scan + ad-hoc
Churn-risk scoring across the portfolio. Combines engagement velocity, escalation density, sponsor tenure, sentiment signals, and renewal proximity into a numeric score. Generates save plays for accounts above threshold.
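A minimal sketch of a composite score like this, assuming normalized 0-1 feature inputs; the weights, feature names, and the synthetic account are illustrative, not the production calibration:

```python
# Illustrative weights; production values would be calibrated against
# historical churn outcomes rather than hand-picked.
WEIGHTS = {
    "engagement_velocity": 0.25,  # decline in meetings/emails per month
    "escalation_density":  0.20,  # open escalations relative to account size
    "sponsor_risk":        0.20,  # new or departed sponsor raises this
    "sentiment":           0.15,  # negative-sentiment share in recent threads
    "renewal_proximity":   0.20,  # closer renewal amplifies everything else
}

def churn_risk(features):
    """Composite 0-10 risk score from normalized (0-1) feature inputs."""
    return round(10 * sum(WEIGHTS[k] * features[k] for k in WEIGHTS), 1)

# Synthetic account resembling a sponsor-change renewal scenario.
helios = {
    "engagement_velocity": 0.4,
    "escalation_density": 0.6,
    "sponsor_risk": 0.9,
    "sentiment": 0.3,
    "renewal_proximity": 0.8,
}
score = churn_risk(helios)
```

Accounts scoring above threshold trigger a generated save play; the weighting makes sponsor change and renewal proximity compound, which matches how the risk actually behaves.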
Drafts customer-facing emails for non-meeting scenarios: onboarding welcome, check-in, renewal nudge, escalation response, re-engagement. Different from meeting follow-up; this is for outreach without a transcript driver.
Customer engagement
csm-jargon-translate
Runs: Triggered by an internal escalation ticket
Translates internal escalation tickets (engineering language, product naming, internal status) into customer-friendly status updates that a CSM can send without paraphrasing or risking misrepresentation.
Reporting Featured · Contributed upstream
qbr-report-generator
Runs: Pre-QBR, on demand
End-to-end QBR generation. Synthesizes telemetry data, business context, and stakeholder-specific framing. Renders charts via headless Chrome, assembles slides via deck templates, exports PDF and PPTX. The same skill that produced the WAF and Bot Management charts above.
Example output
QBR GENERATED · Acme Industries · Q4 2026
14 slides · 11 charts · PDF + PPTX exported
INCLUDED CHARTS
· Total request volume (CDN)
· Firewall activity by rule type
· Bot Management classification
· DNS query patterns
· DLP event volume
NEXT STEPS
· Review and personalize narrative slides
· Send to Sarah Chen 24h before meeting
Generates customer-facing platform-update briefs filtered to that customer's contracted services. Translates raw changelog entries into "what this means for you" framing.
Reporting
cs-value-report
Runs: Monthly
Measures and reports the CS function's value contribution: capacity returned, hours saved, retention impact, expansion driven. Used in management one-on-ones and team-level reviews.
Account intelligence
csm-account-tracker
Runs: Always-on
Per-account task aggregator. Pulls items from the personal tracker, internal tickets, follow-up commitments, and the morning brief. Filters by account, buckets into a workflow model (Today / Planned / Blocked / Snoozed / Nurture / Internal), and surfaces overdue items.
Account intelligence
csm-account-enrichment
Runs: Daily
Batch-enriches the customer portfolio with engagement signals: last contact, email volume, sentiment, meeting frequency. Pushes structured context to the dashboard for the Accounts view.
Account intelligence
csm-account-intelligence-hub
Runs: Daily
Reads existing account memory and signal sources, merges them into a unified intelligence object per account, generates a sentiment narrative (Key Wins, Risks, Action Items), and writes it to the dashboard.
Account intelligence
csm-portfolio-data-sync
Runs: Daily
Pulls the customer portfolio from internal CRM systems in a single API call, enriches each account with per-account data (segment, industry, risk indicators, propensity), writes per-account state to the dashboard.
Account intelligence
csm-business-reviews-live
Runs: Daily
Scans the calendar for customer-facing meetings in the quarter, classifies meeting types, normalizes company names, and pushes enriched JSON to the dashboard for the Business Reviews view.
Account intelligence
csm-upcoming-meetings
Runs: Daily
Scans the calendar for customer-facing meetings in the next 14 days, filters out internal events, classifies meeting types, suggests relevant skills for prep, and pushes structured JSON to the dashboard.
Escalations
csm-jira-tracker-hub
Runs: Daily
Searches Jira broadly across escalation projects, checks for stalled support tickets, analyzes ticket status, and writes structured cards with deep-links for the Escalations dashboard.
Escalations
csm-policy-check
Runs: On demand
Self-fact-check skill. Answers questions like "did I cross a line?" or "what's the policy on X?" by mapping the situation to canonical references, scrubbing relevant channels for precedent, and producing a structured report. Refuses to soft-pedal.
Example output
POLICY CHECK · Question: Can I share QBR slides via my personal Drive?
FOUR LAYERS REVIEWED
· Data classification: Customer data (Tier 2)
· Sharing surface: External link
· Identity: Personal account
· Lifecycle: 30+ day retention
VERDICT: NO
Recommended path: share via shared drive with explicit Cloudflare-domain restriction.
Reference: [internal policy URL]
Escalations
csm-internal-comms
Runs: Ad-hoc
Generates executive summaries, product-feedback escalations, account handoff documents, and cross-functional updates. Translates raw account context into stakeholder-specific framing.
Escalations
csm-task-command-center
Runs: Always-on
Aggregates tasks from multiple sources (personal tracker, calendar action items, brief items, escalations, account intelligence) into a unified, deduplicated, prioritized task view. Pushes to the dashboard.
Operational
csm-tracker
Runs: Always-on
Manages the persistent CSM action item tracker. Bridged to the dashboard via sync-daemon. Action items, deadlines, snoozes, completion history.
Operational
csm-deploy
Runs: On demand
Safe deployment mechanics for personal infrastructure. Enforces pre-publish folder audit, access-before-deploy sequencing, and internal-vs-public file separation.
Operational
csm-doc-audit
Runs: On demand
Audits a repository's documentation for drift between the description, README, code, declared paths, file references, and the project registry. Reports Red/Yellow/Green per layer with file:line evidence.
Operational
csm-repo-bootstrap
Runs: On demand
Scaffolds a new repository with industry-best-practice documentation: README, LICENSE, CHANGELOG, AGENTS.md, .gitignore, MR templates, and optional catalog file.
Operational
csm-scrub
Runs: On demand
PII scrub. Scans a directory or repo for customer names, customer-affiliated zones, IDs, case numbers, and other identifiers that should be redacted before publishing.
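A sketch of the scrub mechanic: pattern-driven redaction with a hit report. The patterns shown are illustrative stand-ins; a real scrub would load a maintained deny-list of customer names, zones, and identifiers:

```python
import re

# Illustrative patterns; a real scrub loads these from a maintained
# deny-list rather than hard-coding them.
PATTERNS = [
    (re.compile(r"\bINC-\d+\b"), "[CASE-ID]"),                 # escalation case numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\bHelios Energy\b"), "[CUSTOMER]"),          # known customer names
]

def scrub(text):
    """Replace identifiers with placeholders and report every hit found."""
    hits = []
    for pattern, placeholder in PATTERNS:
        hits.extend(pattern.findall(text))
        text = pattern.sub(placeholder, text)
    return text, hits

clean, hits = scrub("Ping sarah.chen@acme.example about INC-1142 before the Helios Energy QBR.")
```

Returning the hit list alongside the cleaned text matters: a publish gate can fail the pipeline on any hit instead of silently shipping a redacted-but-leaky file.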
Operational
csm-self-healing
Runs: Continuous
Recursive self-healing framework for skill improvement. Scores executions, detects failures, auto-repairs skills, and tracks maturity progression from draft through crystallization.
Operational
csm-skill-maintenance
Runs: Continuous
Standards, quality gates, and periodic health audit for the entire skill library. Governs how new skills are created, audits existing skills (scores, staleness, dependencies, token budgets, dead weight, command wiring, eval coverage).
Operational
csm-session-handoff
Runs: On demand
Appends a session entry to the project log and prints a copy-paste prompt for the next session. The "/handoff" pattern that makes long-running work survive across sessions.
Operational
csm-session-review
Runs: Weekly
Reviews session data to generate work logs, achievement summaries, value assessments, and time-savings estimates across sessions.
Operational
csm-tooling-watch
Runs: Weekly (Monday)
Weekly tooling-discovery scan. Scans relevant docs, changelogs, releases, plugin ecosystems, and package registries. Outputs a Monday-morning digest of what changed in the tools I depend on.
Operational
csm-audit-runner
Runs: Fortnightly
Orchestrates the fortnightly asks-vs-state audit. Extracts user asks from the session database, runs per-target sub-agent sweeps, reconciles against current state, triages findings into Tier 1/2/3, auto-fixes safe items.
Executive views
csm-exec-portfolio-hub
Runs: Weekly (Friday)
Generates compact per-account summary cards (ARR, health score, segment, renewal date, weekly narrative, watch items) for all accounts in the portfolio. Pushes to the Accounts dashboard tab.
Executive views
csm-exec-this-week-hub
Runs: Weekly (Friday)
Identifies 1-3 notable account events for the week (renewals, upsells, churn risk, escalation resolutions) and generates exec summaries plus a portfolio rollup. Pushes to the This Week dashboard tab.
Executive views
csm-exec-manager-1on1-hub
Runs: Weekly (Thursday)
Generates a structured 7-section prep brief for the weekly manager 1:1 meeting. Researches calendar, prior 1:1 notes, recent context, escalations, portfolio signals, and prep tips.
Executive views
csm-meeting-prep-auto
Runs: Daily (9:30 AM)
Automated meeting-prep orchestrator. Reads the calendar for upcoming customer meetings, pulls account context, runs the appropriate prep skill per meeting type, verifies all claims against sources, pushes verified prep output to the dashboard.
Slack
csm-slack-digest
Runs: On demand (paste-driven)
Ingests pasted Slack conversation context and structures it into account memory for use by meeting-prep, morning-brief, and EOD-wrap skills.
Slack
csm-slack-ingest
Runs: On demand (programmatic)
Programmatic Slack thread ingestion via a hardened wrapper. Reads customer Slack Connect threads through a Keychain-backed user OAuth token and writes structured account memory entries.
Engineering hygiene
pre-push-review
Runs: Pre-push
Mechanical pre-push review for non-trivial MRs. Runs an adversarial-input sweep, parallel-implementation symmetry audit, project-AGENTS.md compliance check, dead-code sweep, and engineering-codex compliance before git push.
Engineering hygiene
codex
Runs: On demand
Engineering codex skill covering Rust, TypeScript, Python, Go, Workers, reliability, AI, and governance standards. Use for any service development, code review, or architecture decisions.
QBR slide · Business priorities · Real rendered output
How I open every executive QBR
Before any usage data, before any product roadmap: the slide that says
"I understand what your business is trying to do." It proves the
research that earns the seat at the executive table.
Built from the sources every prepared CSM should be using: prior customer
conversations, internal account history, recent public announcements,
support escalation history, and stakeholder conversations across multiple
customer teams. The slide below is a real rendered output from my QBR
generator, run against an anonymized profile (customer name and team
identifiers replaced with synthetic placeholders; structure, density,
and rendering unchanged from production).
↑ Slide example · Customer Overview / Business Priorities · Mid-cycle review
Most CSMs open with usage charts. The customer's CFO doesn't care about
usage charts. They care whether you understand the business and whether
their investment is working. This slide answers both questions in 30
seconds, and is the reason I get invited back to the executive table
quarter after quarter.
Stack
Custom HTML/CSS slide template ·
JSON-driven content ·
Synthesized from public filings + internal research
Cross-functional governance · Pod model
How I run accounts when one CSM isn't enough
Stakeholder-aligned outcomes instead of single-thread relationships.
$2.5M of upsell ARR delivered through pod execution in one role; 100%
logo retention across high-risk enterprise accounts in another. The
cross-functional motion is the durable lever, not the heroic CSM.
At the enterprise tier, no CSM gets to be a single point of contact.
The customer has six departments and the vendor has six functions, and
the work is keeping all twelve aligned on the same roadmap. I've stood
up two versions of this: a "Rescue Squad" for high-risk accounts in my
current role (CSM + AE + SE + RAM operating off a single shared plan),
and an account pod model in a previous role that drove $2.5M in upsell
ARR by replacing fragmented engagement with a unified cadence.
SE
Architecture conversations, deep technical evaluation
RAM
Renewal mechanics
Quote-to-close, paper, procurement orchestration
Single shared customer roadmap
↑ Customer stakeholders (typical enterprise)
Executive · CTO / CISO / VP
Outcomes, board narrative, exec sponsor
Platform / DevOps · Daily operators
Reliability, configuration, runbooks
Security · Compliance + threat
Posture, incident response, audit
Procurement / Finance · Renewal owner
Contract economics, paper cycle
Beyond the account pod, the work spans the whole vendor org: Finance
(reconciled 5+ years of legacy revenue data with the CFO's team for
99.9% ARR accuracy), Product (Customer Advisory Board feedback that
shaped roadmap), Engineering (escalations, feature requests), and
Marketing (case study development). Senior CS isn't customer-facing.
It's everywhere-facing.
Operating cadence
Weekly internal pod sync ·
Monthly customer pod-to-pod ·
Quarterly executive QBR
· Single shared notebook of record
Memory layer · Context continuity
Persistent memory architecture for AI-augmented Customer Success
Every conversation builds on the last. Account context survives sessions.
Voice consistency persists across every AI interaction. The first time
AI tooling actually compounds, instead of starting from zero every morning.
For most CSMs using AI tools, every session starts from a blank slate.
Last week's customer conversation, the sponsor's preferences, the team's
running concerns, prior commitments. All of it has to be re-established
on every prompt. I built a persistent memory architecture that solves
this: structured account memory, a voice profile that travels with me
across all AI assistants, depth-control protocols, and session handoff
patterns that make AI tooling actually compound over time.
Layer 1
Account memory
Per-customer JSON file. Stakeholders, running concerns, prior
interactions, outstanding commitments, sentiment, sponsor changes.
Updated after every customer touch.
Layer 2
Voice profile
Codified version of how I write. Tone calibration, vocabulary
constraints, AI-tells to avoid, structural patterns. Applied
automatically to every customer-facing draft.
Layer 3
Session handoff
Long work survives across AI sessions through structured handoff
protocols. Resume yesterday's analysis exactly where it left off,
with all context intact.
What an account memory entry actually looks like (synthetic):
"account": "Acme Industries",
"slug": "acme-industries",
"last_updated": "2026-05-08",
"stakeholders": ["name": "Sarah Chen",
"role": "CTO",
"tenure_in_role": "3 months",
"engagement_style": "data-driven, prefers brevity",
"outstanding": ["Bot Mgmt review by Tuesday"]],
"running_concerns": ["Renewal at 90 days",
"INC-1142 unresolved (engineering team aware)",
"Sponsor change Feb 2026 (Sarah replaced Mark)"],
"voice_calibration": "tone": "warm but direct, no jargon",
"avoid": ["dwelling on past missteps", "over-promising roadmap"] ,
"recent_interactions": ["date": "2026-05-07", "type": "QBR", "summary": "expansion deferred; asked for Bot Mgmt deep-dive"]
Memory is what separates a useful AI assistant from one you have to
babysit. Every customer follow-up email I draft today reads the
customer's last interaction, knows the sponsor by name, references
prior commitments accurately, and applies my voice consistently. That's
not magic. That's deliberate architecture, and it's the layer most CS
teams haven't built yet.
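A sketch of how a drafting skill consumes that memory, assuming a per-account JSON file shaped like the synthetic entry above; the directory layout and field names are illustrative:

```python
import json
from pathlib import Path

MEMORY_DIR = Path("memory/accounts")  # hypothetical layout: one JSON file per account

def render_context(memory):
    """Flatten an account-memory record into prompt context for a drafting skill."""
    lines = [f"Account: {memory['account']}"]
    for s in memory.get("stakeholders", []):
        lines.append(f"Stakeholder: {s['name']} ({s['role']}), {s['engagement_style']}")
    for concern in memory.get("running_concerns", []):
        lines.append(f"Concern: {concern}")
    voice = memory.get("voice_calibration", {})
    if voice:
        lines.append(f"Voice: {voice['tone']}; avoid: {', '.join(voice['avoid'])}")
    return "\n".join(lines)

def load_context(slug):
    """Read the per-account file and render it (path is illustrative)."""
    return render_context(json.loads((MEMORY_DIR / f"{slug}.json").read_text()))
```

Every drafting skill reads this context block before writing a word, which is how the sponsor's name, prior commitments, and voice constraints stay consistent across sessions.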
AI cost discipline through model-tier routing
Architecture designed
Cost reduction of 50-65% projected on operational AI spend, with no
quality drop on customer-facing output. The architecture work and
playbook to make it executable across an entire CS team.
Most AI tooling at the operator level uses a single premium model
for everything. That works until the bills arrive. I designed a
model-tier routing layer that classifies each skill by its actual
reasoning requirement and routes accordingly: cheap models for the
structured, deterministic, repetitive work; mid-tier for synthesis
and drafting; premium models reserved for customer-facing judgment,
voice-sensitive output, and architecture decisions.
Tier 1 · Cheap
Kimi-K · Haiku
Data extraction. Formatting. Deterministic transforms. Sub-task structuring. Telemetry parsing.
Tier 2 · Mid
Synthesis and drafting.
Tier 3 · Premium
Customer-facing emails. Voice-sensitive drafting. Architecture decisions. Final judgment calls.
~10-20% of workload
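The routing rule itself reduces to a per-skill lookup with a customer-facing override; a sketch, with the tier assignments and model names illustrative rather than the production table:

```python
# Illustrative tier map; every skill carries a classified reasoning
# requirement in its metadata, verified for quality regression.
TIERS = {
    "cheap":   "haiku",   # extraction, formatting, deterministic transforms
    "mid":     "sonnet",  # synthesis and drafting
    "premium": "opus",    # customer-facing judgment, voice-sensitive output
}

SKILL_TIER = {
    "csm-portfolio-data-sync": "cheap",
    "csm-morning-brief":       "mid",
    "csm-customer-followup":   "premium",
}

def route(skill, customer_facing=False):
    """Pick a model for a skill; customer-facing work never routes below premium."""
    tier = "premium" if customer_facing else SKILL_TIER.get(skill, "mid")
    return TIERS[tier]
```

Defaulting unknown skills to mid-tier is the conservative choice: new skills get acceptable quality until someone classifies them, and the override guarantees no cheap model ever touches a customer-facing draft.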
AI cost is moving from a small operational expense to a top-five
budget line item across Customer Success organizations. The teams
that invest in model-tier routing now will have 2-3x more usable AI
capacity per dollar than teams that don't. The architecture and
policy work to make this real are non-trivial: every skill needs a
classified reasoning requirement, every routing rule needs to be
verified for quality regression, and the team needs a playbook for
when to override defaults. That foundation work is the high-leverage
piece, and it's done.
Stack
OpenCode model-pinning ·
Anthropic Claude (Opus, Sonnet, Haiku) ·
Kimi-K ·
Gateway-level routing ·
Per-skill model selection metadata
Measurement · ROI defensibility
Quantifying the value of AI-augmented Customer Success
Architecture designed
Designed an executive-ready measurement framework that correlates AI
investment with operational outcomes (hours returned, retention
impact, expansion correlation, response-time improvements). The
answer to "what did AI deliver this quarter?" with numbers a CFO
takes seriously.
AI investment without measurement is a cost center waiting to be
cut. I designed a structured value-reporting framework that pulls
execution telemetry from every skill and automation, correlates it
with business outcomes from the CRM, and produces monthly executive
reports calibrated for finance and CS leadership audiences. The
framework also surfaces underutilized skills (capacity not yet
captured) and high-leverage skills (where investment is paying back
fastest), turning the report into a planning tool, not just a
scorecard.
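A sketch of the rollup at the core of the framework, assuming per-execution telemetry records carrying an estimated minutes-saved figure; the record shape, threshold, and numbers are illustrative:

```python
# Illustrative telemetry shape: one record per skill execution, with an
# estimated minutes-saved figure attached at run time.
RUNS = [
    {"skill": "csm-meeting-recap",    "minutes_saved": 25},
    {"skill": "qbr-report-generator", "minutes_saved": 240},
    {"skill": "csm-meeting-recap",    "minutes_saved": 25},
    {"skill": "csm-churn-risk",       "minutes_saved": 10},
]

def value_rollup(runs):
    """Aggregate execution telemetry into the headline numbers of a value report."""
    by_skill = {}
    for r in runs:
        by_skill[r["skill"]] = by_skill.get(r["skill"], 0) + r["minutes_saved"]
    hours = sum(by_skill.values()) / 60
    # Skills that exist but rarely fire are flagged as uncaptured capacity.
    underutilized = [s for s, mins in by_skill.items() if mins < 30]
    return {"hours_returned": round(hours, 1), "underutilized": underutilized}

report = value_rollup(RUNS)
```

The underutilized list is what turns the report into a planning tool: it names the skills where capacity exists but hasn't been captured yet.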
CS Value Report · Q4 2026 (illustrative)
8 CSMs · 44 enterprise accounts
Hours returned to customer engagement: ~720 hrs
QBR cycle compression (vs FY25 baseline): -75%
Customer follow-up time-to-send: -64%
NRR delta vs portfolios without AI tooling: +8 pts
Underutilized skills flagged for re-investment: 7 skills
Across the CS industry, finance teams are starting to ask hard
questions about AI spend. The CS leaders who can answer "here's what
AI delivered last quarter" with concrete numbers will keep their
budgets and grow them. Those who can't will see investment cut.
This is measurement work that becomes existential within 12 months,
and the framework is the hard part. The reports themselves are the
easy output.
A version-controlled rulebook that AI assistants read at every customer-facing
touchpoint. Excerpts from the table of contents:
01
CRM is the source of truth for the account team. No inferences. Helping doesn't transfer ownership.
02
Customer-facing email output format. Plain text in fenced code blocks; structured by ALL CAPS section headers and bullet items; no markdown rendering tricks.
03
Date-day verification, non-negotiable. Every date in customer output must be machine-verified against `cal`, never inferred.
04
Time-of-day verification. Run `date` before stating elapsed time, current time, or remaining time. Hallucinated time is the most common LLM failure mode.
05
Tool-first context retrieval. When the answer exists in a tool, grab it. Don't ask the user for what an MCP can answer in seconds.
06
Declaration discipline. Never claim "done" without verifying user-level success, not just file-state success.
07
Reproduce-first debugging. Before opening any source file in response to a UI bug, reproduce the symptom with DevTools open.
08
Sub-agent delegation gates. 5 mechanical gates fire BEFORE inline tool calls; main session reserved for judgment-heavy work.
⋯
12 more rules covering security, attribution, voice and tone, customer-artefact destination, sensitive file handling, and process-state-vs-file-state debugging.
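Rules 03 and 04 are the most mechanical of the set, and they can be enforced in code. The rulebook mandates checking against `cal` and `date`; this sketch does the equivalent date-day check programmatically in TypeScript, which is an assumption about implementation, not the production enforcement path.

```typescript
// Machine-verify a weekday claim before it reaches customer output.
// Equivalent to the rulebook's `cal` check, done in code: the day is
// computed, never inferred.
const DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
              "Thursday", "Friday", "Saturday"];

function verifyDateDay(isoDate: string, claimedDay: string): boolean {
  // Parse at UTC midnight so the check is timezone-independent.
  const actual = DAYS[new Date(`${isoDate}T00:00:00Z`).getUTCDay()];
  return actual === claimedDay;
}

console.log(verifyDateDay("2026-03-14", "Saturday")); // true
console.log(verifyDateDay("2026-03-14", "Friday"));   // false: claim rejected
```

The gate is boolean on purpose: a failed check blocks the send rather than silently correcting the text, which keeps the human in the loop on anything customer-facing.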
QBR report generator · End-to-end deck automation
The biggest single capacity unlock I've shipped
What used to take 4-6 hours of manual chart work per QBR now compresses
by 75%. Across a 44-account enterprise portfolio that's 150-180 hours
returned per quarter, or roughly four full working weeks of senior CSM
time redirected from spreadsheet wrangling to strategic preparation,
executive relationship work, and team coaching.
Most CSMs lose the day before a QBR to data extraction, copy-paste,
chart formatting, and slide assembly. The QBR report generator turns
that day into a 30-minute pipeline run: telemetry pulled, charts rendered,
slides assembled, deck exported as PDF and PPTX. The CSM's job becomes
the only thing the customer actually values: framing the story, refining
the narrative, anticipating the executive question.
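The middle of that pipeline — JSON data model into an HTML/CSS chart template — can be sketched simply. This is a minimal illustration with assumed field names; the production generator renders the resulting HTML to PDF and PPTX via Puppeteer, which is omitted here.

```typescript
// Sketch of the JSON-driven data model feeding an HTML chart
// template. Field names are illustrative; the Puppeteer render step
// that exports PDF/PPTX is not shown.
interface SeriesPoint {
  month: string;
  requests: number;
}

function renderBars(series: SeriesPoint[]): string {
  const peak = Math.max(...series.map((p) => p.requests));
  return series
    .map((p) => {
      // Scale each bar relative to the peak month.
      const pct = Math.round((p.requests / peak) * 100);
      return `<div class="bar" style="height:${pct}%" title="${p.month}: ${p.requests}"></div>`;
    })
    .join("\n");
}

const html = renderBars([
  { month: "2026-01", requests: 120_000 },
  { month: "2026-02", requests: 180_000 },
]);
console.log(html.includes("height:100%")); // peak month fills the axis
```

Keeping charts as HTML/CSS rather than a charting library is the design choice that matters: the same template renders identically in the headless-Chrome export and in a browser preview, so visual QA is a page refresh.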
The dollar math is real. At a fully-loaded senior-CSM cost, the
time recovered is six-figure annual capacity per CSM, and it compounds
across a team. But the bigger impact is qualitative. CSMs who aren't
burning the day before a QBR show up to the meeting prepared to
think, not just present. That's the difference between a
tactical reporting cadence and a strategic partnership.
The charts below are real renders from the generator, anonymized for
public display. Data shapes, peak values, and visual treatment are
unchanged from production.
↑ Slide example · Web Application Firewall · 13-month request volume by rule type
↑ Slide example · Bot Management · automated vs human traffic classification
Stack
Node.js + Puppeteer (headless Chrome) ·
Custom HTML/CSS chart templates ·
JSON-driven data model ·
14-slide deck output, PDF + PPTX exported
aboutGOLF · 2023–2025 · Director of Customer Success
Salesforce-native CS operating system, built from scratch
98% logo retention. 44% reactivation of churned ARR ($400K). 70% of the
subscription base captured on auto-pay. 99.9% ARR record accuracy in
partnership with Finance.
I walked into a Customer Success function with no playbook: no health
scoring, no renewal automation, no engagement tracking, no executive
reporting. Over 26 months I designed and shipped the Salesforce-native CS
operating system that ran 1,200 customer accounts. Custom objects for
customer journeys. Formula fields for a 1-10 health score aggregating NPS,
CSAT, CES, and engagement data. Automation flows for renewal kickoffs and
churn-risk alerts. Executive dashboards in Salesforce + Power BI that gave
the C-suite real-time visibility for the first time.
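The health-score aggregation lives in Salesforce formula fields in production; the sketch below shows the same shape in TypeScript. The weights and normalizations here are illustrative assumptions, not the production formula.

```typescript
// Illustrative 1-10 health score aggregating NPS, CSAT, CES, and
// engagement. Weights and input ranges are assumptions; the real
// version is implemented as Salesforce formula fields.
interface HealthInputs {
  nps: number;        // aggregate NPS, -100..100
  csat: number;       // 1..5
  ces: number;        // 1..7
  engagement: number; // 0..1 normalized activity index
}

function healthScore(h: HealthInputs): number {
  // Normalize every input to 0..1 before weighting.
  const parts = [
    (h.nps + 100) / 200,
    (h.csat - 1) / 4,
    (h.ces - 1) / 6,
    h.engagement,
  ];
  const weights = [0.3, 0.25, 0.15, 0.3]; // must sum to 1
  const blended = parts.reduce((sum, p, i) => sum + p * weights[i], 0);
  return Math.round(blended * 100) / 10; // 0..10, one decimal
}

console.log(healthScore({ nps: 80, csat: 4.7, ces: 5, engagement: 0.8 }));
```

Normalizing before weighting is what makes the score debuggable: any single input can be traced from raw value to contribution without re-deriving the whole formula.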
Same operator instinct as the AI builds above, applied with a CRM-native
stack and no AI to lean on. Building Customer Success from scratch in a
SaaS environment that hasn't run CS before is a genuinely different skill
from optimizing an existing CS function. It's the work that earned
the 2025 Creative Customer Success Leader Award.
Acme Industries · Account 360
$2.4M ARR · Renewal Mar 2026
Health Score
7.5 / 10
↑ +0.8 last 30 days · trending up
Engagement (last 30 days)
14 emails · 3 meetings · 2 training sessions
Active sponsor engagement
Voice of Customer
NPS 8 (promoter) · CSAT 4.7 / 5 · CES 5 / 7
Renewal status
Auto-pay configured · Renews 14 Mar 2026
+120 days renewal runway · QBR scheduled Dec 12
Next actions
Q4 QBR scheduled Dec 12; prep brief drafted by csm-meeting-prep
Expansion opportunity: aG Leagues platform pilot (estimated $180K ARR)
Health score improved 0.8 in 30 days; share with sponsor in Q4 review
22 mo Tenure · $2.4M ARR · 100% Adoption
Stack
Salesforce (custom objects, formula fields, automation flows) ·
QuickBooks · DocuSign ·
Power BI ·
HubSpot · Pardot
System architecture
How the pieces fit together
User → Signal collection → Agentic orchestration → Cloudflare infrastructure → Operating surface
Career snapshot
Where I've worked
2025 — Now
Cloudflare · Customer Success Manager (Enterprise) · Sydney, AU
$10.5M ARR @ 147% NRR · 44 enterprise accounts · $934K Q3-Q4 pipeline · 100% logo retention · QBR cycle compressed 75% via AI tooling I built
2023 — 2025
aboutGOLF · Director of Customer Success and Support · US (Remote)
Built CS from scratch · 1,200 accounts · 98% retention · 44% churn reactivation ($400K) · $2.5M upsell ARR · 1-10 health scoring system in Salesforce · Cart-to-Curb e-commerce automation
2019 — 2023
WithYouWithMe · Head of Enterprise Account Management · Sydney, AU
$25M ARR @ 120% NRR · Promoted twice in 3.5 years · Grew Accenture from $1.3M → $5M ARR in 90 days · Led an 8-person CSM team across global expansion (UK + Canada launches)
2012 — 2019
U.S. Navy · IT Infrastructure Project Manager · Naples, Italy
140+ infrastructure projects across EMEA · Navy Achievement Medal · DISA Facility Control Office of the Year (2017) · The technical foundation underneath every CS role since
2025 Creative Customer Success Leader Award ·
Customer Success Collective. Selected by an industry panel of CS thought leaders
for the playbook used to reactivate 44% of churned accounts and drive 98%
retention across 1,200 customers.
Author · The CS PRESS · Substack newsletter
on practical CS leadership: building from scratch, AI-augmented Customer
Success, and the operator-engineer hybrid role.
Guest expert · The Customer Success Podcast with Irit Eizips ·
Featured episodes on reactivating churned accounts (44% recovery, 98%
retention across 1,200 accounts) and supporting major SaaS product launches
in mid-sized organizations.
TORCHED mentorship framework ·
A coaching framework I developed and applied with 100+ Customer Success
practitioners across career stages. Coach for the Catalyst Growth Coaching
program (2023–2024).
7,900+ LinkedIn followers ·
Active community around the realities of building Customer Success in
under-resourced environments. Posts regularly reach 500-2,500 impressions.
Get in touch
Talk to me
I'm actively exploring next-chapter opportunities in senior
Customer Success leadership, AI-native CS, and roles that bridge customer
relationship work with AI infrastructure. The fastest path to a conversation
is a direct email.
Sydney-based with Australian Permanent Residency. Open to senior IC and leadership
conversations in Customer Success and AI-native CS / Solutions Engineering globally,
on-site in Sydney, hybrid, or fully remote.