Hey, it’s Pavel
A few days ago, I got access to the Beehiiv MCP integration for Claude. After spending one day with it, I recommended it to every Beehiiv customer I work with that same evening.
My first thought when I saw it was: Can it take my job? No, it can't. But it can make my job easier. I've been doing newsletter deliverability consulting long enough to know that no tool replaces thinking. Beehiiv MCP is a big time saver. It instantly surfaces signals I would otherwise spend 30 minutes hunting for across six dashboard pages.
It still requires experience to understand what these metrics mean, how they impacted previous campaigns, and how they will impact future ones.
For me, that means I can spot deliverability problems faster, explain them more clearly to clients, and spend my time on the work that actually moves the needle. For my customers, it means that the issues dragging down their performance are identified and addressed before they compound into a real problem. Revenue goes up when you fix what's broken. This helps you find it.
The goal of this tool (any AI tool), really, is to help you. The machine can only help you as well as you help it. Ask a vague question, get a vague answer. Ask a sharp, specific question about your bounce rate trend over the last 6 sends, and you get something you can actually act on. The prompts below are my attempt to give you the sharp questions so you don't have to figure them out from scratch.
So today I'm sharing the exact prompts I've been using: 25 of them, organized from beginner to advanced. Copy them, run them against your own publication, and see what comes back.
⚠️ Note: some of these are more reliable at larger list sizes. I'll explain exactly where that line is at the end.
Get your baseline in 5 minutes
Run these first. They work immediately after connecting the MCP and give you a foundation for everything else.
"Pull my publication stats for all time. Summarize active subscribers, open rate, click rate, and top acquisition sources in plain language."

"List all my published posts with their open rates and click rates, oldest to newest. Are the numbers trending up or down?"

"What's my current active subscriber count and how many have gone inactive? What percentage of my total list is still engaged?"

"Which of my posts got the highest open rate? Which got the lowest? What's different about the subject lines?"

Spot inbox placement problems early
These are the questions I check after every send. Delivery rate and bounce trends are the earliest warning signs of reputation damage, and some operators may not look at them consistently.
"Pull delivery rates across all my published posts in chronological order. Flag any post where delivery rate dropped below 97%."

"Show me bounce rates per post over time. Separate hard bounces from soft bounces. Is there an upward trend?"

"How many total hard bounces have accumulated across all my sends combined? Which single post had the most?"

"Did my delivery rate or open rate change significantly after any long gap between sends? Show me the before and after numbers."

"Which posts had zero unsubscribes? Which had the most? Do the high-churn posts share anything — topic, send timing, list size at the time?"

Look past the vanity metrics
Open rate is noisy. Click-to-open rate, re-open rate, and the raw vs. verified click gap tell you much more about what's actually resonating.
"Calculate click-to-open rate (CTOR) for each post — that's unique clicks divided by unique opens. Which editions had the most engaged readers after opening?"

"Compare raw click rate vs verified click rate for each post. Which posts have the biggest gap between the two numbers?"

"For each post, divide total opens by unique opens. Which posts had the highest re-open rate — people going back to read it more than once?"

"Which posts got the most web views relative to how many emails were sent? A high ratio suggests organic sharing beyond the list."

💡 On raw vs. verified clicks: Corporate security systems pre-click links in emails to scan them for malware. If your audience is operators or B2B, expect 30–60% of your raw clicks to be bots. Always use verified click rate when measuring content performance — the raw number is mostly noise. P.S. Beehiiv has a powerful anti-fraud system that filters bot clicks out for you.
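The metrics in these prompts all reduce to simple ratios. Here's a minimal sketch of the math, so you can sanity-check what the MCP returns. The field names (`unique_opens`, `raw_clicks`, etc.) and numbers are illustrative, not the actual Beehiiv API schema:

```python
# Sketch of the engagement ratios behind the prompts above.
# All field names and sample values are made up for illustration.

def ctor(unique_clicks: int, unique_opens: int) -> float:
    """Click-to-open rate: of the people who opened, how many clicked."""
    return unique_clicks / unique_opens if unique_opens else 0.0

def reopen_rate(total_opens: int, unique_opens: int) -> float:
    """Opens per opener; anything above 1.0 means readers came back."""
    return total_opens / unique_opens if unique_opens else 0.0

def bot_click_share(raw_clicks: int, verified_clicks: int) -> float:
    """Share of raw clicks that didn't survive verification (likely bots)."""
    return (raw_clicks - verified_clicks) / raw_clicks if raw_clicks else 0.0

post = {"unique_opens": 1200, "total_opens": 1500,
        "unique_clicks": 180, "raw_clicks": 400, "verified_clicks": 190}

print(f"CTOR: {ctor(post['unique_clicks'], post['unique_opens']):.1%}")          # CTOR: 15.0%
print(f"Re-open rate: {reopen_rate(post['total_opens'], post['unique_opens']):.2f}")  # Re-open rate: 1.25
print(f"Bot click share: {bot_click_share(post['raw_clicks'], post['verified_clicks']):.1%}")  # Bot click share: 52.5%
```

Note the last number: in this made-up B2B-style example, more than half the raw clicks vanish after verification, which is exactly why the raw number is mostly noise.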
List health and subscriber quality
These prompts get more powerful as your list grows past 2–3k. At smaller sizes they're directional; at scale they're genuinely predictive.
"List all inactive subscribers. When did each one subscribe, what was their acquisition source, and roughly how long before they went inactive?"

"Compare engagement rates by acquisition source. Do subscribers from LinkedIn open more than subscribers from direct traffic? Which source brings the most loyal readers?"

"Show me subscribers who joined in the last 90 days. How many have opened at least one email? What percentage have never opened anything?"

"Flag any subscribers who signed up via import rather than organically. How does their engagement compare to everyone else?"

Content and cadence patterns
Once you have 8–10 posts, these prompts start surfacing real editorial patterns rather than just noise.
"What was the average number of days between my sends? Did open rate on a given post correlate with how long it had been since the previous send?"

"Group my posts by topic type — news-reactive vs. evergreen guides vs. opinion. Does one category consistently outperform the others on engagement?"

"Look at subject line patterns across my highest and lowest performing posts. Any consistent differences in word choice, length, or format?"

"Which posts drove the most clicks per opener? What did those posts have in common — single CTA, multiple links, specific topic?"

Set these to run automatically
In Claude Cowork, you can schedule these to run on a cadence — weekly, after each send, whatever fits your workflow. The output saves to a file so context builds over time.
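One of the scheduled checks in this section, "flag anything that's more than 10% off baseline," is just a relative-change comparison against your all-time averages. A minimal sketch of that logic (metric names and values are made up for illustration):

```python
# Sketch of the "more than 10% off baseline" flag.
# Baselines and latest-send numbers are illustrative, not real data.

BASELINE = {"open_rate": 0.42, "click_rate": 0.045, "delivery_rate": 0.985}

def flag_outliers(latest: dict, baseline: dict, threshold: float = 0.10) -> dict:
    """Return metrics whose relative change from baseline exceeds the threshold."""
    flags = {}
    for name, base in baseline.items():
        change = (latest[name] - base) / base
        if abs(change) > threshold:
            flags[name] = round(change, 3)
    return flags

latest_send = {"open_rate": 0.33, "click_rate": 0.047, "delivery_rate": 0.984}
print(flag_outliers(latest_send, BASELINE))  # only open_rate (down ~21%) gets flagged
```

The design choice worth noting: the comparison is relative, not absolute. A one-point drop in open rate is a big deal; a one-point drop in delivery rate is a five-alarm fire, and a relative threshold treats them accordingly.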
"Check my subscriber count vs. last week. What's the net change? Flag if churn exceeded new signups this period."

"Pull stats for my most recent post. Compare open rate, click rate, bounce rate, and delivery rate against my all-time averages. Flag anything that's more than 10% off baseline."

"Have any subscribers gone inactive since my last check? If so, what was their acquisition source and when did they subscribe?"

"Compare my delivery rate across the last 3 sends vs. the 3 before that. Plain-language verdict: improving, stable, or declining?"

A note on reliability
These questions are worth asking at any list size, but confidence in the answers varies a lot. Here's the honest version:
Under 1k subscribers: Directional signals only. Single events swing percentages wildly. Build habits, not conclusions.

1k–10k subscribers: Most metrics become reliable. Segmentation and source analysis start making sense.

10k+ subscribers: Everything is statistically solid. Delivery rate changes of 0.5% are meaningful. Run experiments.
The exception: send cadence, infrastructure questions, and the raw vs. verified click gap are meaningful at any size. Those don't depend on statistical volume; they're about how email works, not how your specific numbers shake out.
Start with the baseline prompts. Run one of the deliverability checks after your next send. Let it compound from there.
Before You Go: Here's How I Can Help
Work with me directly — If you have a deliverability problem that needs fixing, I take on clients through Upwork. Audits, troubleshooting, ongoing support.
Start your newsletter on beehiiv — Send Point runs on beehiiv and I'm a beehiiv partner. If you're looking for a platform, get 20% off for 3 months with code PAGTH7YX at beehiiv.com. I can help you with setup and migration.
Stay in the loop — Issues go out weekly. Each one covers a specific deliverability problem: concrete signals, concrete fixes. Forward this to someone who needs it.
Until next time,
Pavel

