Most companies send a press release the same way every time. They write it, hit send through a distribution service, and wait for journalists to bite. Some do. Most don’t. The companies that get real pickup take a different approach: they test.
A/B testing press releases sounds complicated, but it’s straightforward. You change one element, measure what happens, and do it again. Over six months, disciplined testing can double or triple your media pickup rate.
Why Test Press Releases at All
A journalist’s inbox gets hundreds of pitches each week. Your release is competing against venture announcements, product launches, earnings reports, and paid distribution spam. The difference between “opens and reads” and “deletes without opening” often comes down to five words in a headline.
Testing gives you data instead of guesses. Instead of wondering why a release bombed, you know that your subject line didn’t work but your distribution timing was solid. Next time, you fix the subject line and keep the timing. That’s how habits get built.
The stakes are real. If 100 journalists open your release instead of 90, and your conversion rate stays constant, your expected media hits rise by about 11 percent. Stack a few wins like that across subject line, timing, and targeting, and your pickup rate can double. That’s not theoretical. That’s your founder’s name in more publications.
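To make the arithmetic concrete, here’s a minimal sketch. The 5 percent conversion rate and the lift figures are illustrative assumptions, not benchmarks:

```python
# Expected placements scale linearly with opens when the
# open-to-placement conversion rate holds constant.
# All figures here are illustrative assumptions, not benchmarks.
conversion_rate = 0.05  # assume 5% of openers write a story

for opens in (90, 100):
    print(f"{opens} opens -> {opens * conversion_rate:.1f} expected placements")
# 90 opens -> 4.5 expected placements
# 100 opens -> 5.0 expected placements

# Stacking modest wins multiplies: ~11% more opens, 20% from better
# timing, 25% from a sharper CTA.
combined_lift = 1.11 * 1.20 * 1.25
print(f"combined lift: {combined_lift:.2f}x")  # ~1.66x
```

No single test doubles your results. Stacked tests do.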
What You Can Test
Not all elements of a press release are worth testing. Focus on the variables that affect whether journalists open and read what you sent.
Headlines and Subject Lines
The headline is everything. Journalists decide whether to read in two seconds. They scan the subject line in your email, then they glance at the first paragraph. If neither one gives them a reason to care, your release gets archived or deleted.
Test two versions of a headline with different angles. One version might emphasize speed or efficiency: “New API Cuts Onboarding Time by 60 Percent.” Another emphasizes market impact: “Startup Takes on $2B Enterprise Integration Market.” Send both to different segments of your media list. Same distribution method, same timing, different headlines.
Track open rates and click-through rates. A strong headline can pull a 30 to 40 percent open rate. A weak one sits at 15 to 20 percent. The gap is obvious.
Read more: How to Write Headlines That Journalists Actually Open
Email Subject Lines
Your subject line in the press release email is separate from your headline inside the release. Many companies treat them as the same thing. That’s a mistake. The email subject line lives in the inbox. The headline lives in the release itself. They serve different functions.
Test whether a direct subject line outperforms a narrative one. “Acme Launches New Developer API” is direct. “We Listened to Your Feedback (and Built This)” is narrative. Different journalists respond to different approaches. Some want the news upfront. Others want context first.
Track email open rates. That’s your metric. If Variation A has a 38 percent open rate and Variation B has a 29 percent open rate, Variation A wins. Use it next time.
Read more: Email Subject Lines That Convert Press Releases Into Coverage
Lead Paragraph Angle
After opening your email and reading the headline, a journalist reads the first paragraph. The lead paragraph should deliver news. It should answer one clear question: “Why should anyone care about this announcement?”
Some lead paragraphs anchor on the market problem. “Enterprise teams waste an average of 12 hours per week on manual API integration. Acme fixes that in minutes.” Some anchor on product capability. “Acme’s new API supports 500 pre-built integrations across CRM, billing, and support platforms.” Some anchor on company growth. “Acme raised $15M Series B, bringing total funding to $42M, to expand enterprise API coverage.”
Test different lead angles. Keep everything else identical: same headline, same subject line, same distribution list, same timing. Change only the first paragraph. Measure which one gets more positive journalist responses, clicks, and media mentions.
Distribution Timing
When you send matters. Tuesday and Wednesday mornings (9 AM to 11 AM in the journalist’s time zone) see higher open rates than Monday or Friday. Friday releases often get buried in inbox noise right before the weekend. Monday morning is chaos. Tuesday and Wednesday are the sweet spot.
But not all news breaks Tuesday. Sometimes news is time-sensitive. Sometimes you’re working around embargoes. Within your constraints, test different send times. Send Variation A on Tuesday at 10 AM. Send Variation B on Wednesday at 10 AM. Keep the headline, subject line, and distribution list identical. The only change is timing.
Open rates and click rates will shift. Some journalist segments check email heavily on Tuesday. Others are more active Wednesday. The data shows you the pattern. Over time, you’ll notice Tuesday often wins. But for your specific beat and audience, you might find Wednesday dominates. Test to know.
Read more: What Time to Send a Press Release for Maximum Coverage
Distribution Channel and List Segment
Some companies distribute their releases broadly to every journalist on their list. Others segment by beat or geography or publication size. Both approaches work. The question is which works better for your news.
When you have new product news targeted at finance teams, send it to finance journalists. When you have a local story, send it to local business media. When you have an enterprise product update, skip the consumer tech blogs.
Test whether segmented distribution outperforms broad distribution. Send the same release to a broad list (Variation A) and a curated list of 50 relevant journalists (Variation B). Track open rates, response rates, and media placements. You might find that 50 good contacts beat 500 generic ones. Or you might find the opposite. Test to know.
Call-to-Action Timing and Phrasing
Some releases include a call-to-action asking the journalist to request a demo, interview a founder, or visit your website. Others let the release stand alone. Some CTAs are explicit: “Request an interview with Sarah Chen, CEO, here.” Others are soft: “Learn more about the new API at [URL].”
Test whether an explicit CTA outperforms no CTA. Test whether a URL for demos outperforms a URL for interviews. Track how many journalists click through, request something, or actually reach out to a contact person.
This is where real business results happen. A small change to phrasing can move a journalist from passive reading to active engagement. Test and measure.
How to Run a Valid Test
Run a clean test. Change one thing. Keep everything else locked.
Step 1: Define Your Variable
Pick one element to test. Not multiple. One. Decide whether you’re testing the headline, the subject line, the lead angle, the send time, the distribution list, or the CTA.
Step 2: Create Variations
Write two versions of that element. Make them meaningfully different. If you’re testing headlines, don’t test “New API Cuts Onboarding Time by 60 Percent” against “New API Reduces Onboarding Time by 60 Percent.” (That’s not a test, that’s a synonym swap.)
Test something with contrast. “New API Cuts Onboarding Time by 60 Percent” vs. “Acme Launches First No-Code API Gateway.” Different angles. Different emotion. Different value props. That’s a test.
Step 3: Segment Your Audience
Divide your media list into two equal groups. Make sure they’re roughly similar in publication size, beat, geography, or whatever factors matter to your business. If you’re dividing 100 journalists, send Variation A to 50 and Variation B to 50. Try to keep the segments balanced.
Don’t accidentally bias the test. Don’t send Variation A to top-tier outlets and Variation B to bloggers. That skews results. Random or systematic division both work. Just keep it balanced.
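Here’s a minimal sketch of a balanced split, assuming your list is a set of records with a tier field. The records, addresses, and field names are hypothetical; substitute whatever attributes matter for your beat:

```python
import random

# Minimal sketch: split a media list into two balanced halves.
# Shuffle within each tier, then alternate assignments so both
# halves get a similar mix of top-tier outlets and smaller ones.

def split_media_list(journalists, seed=42):
    rng = random.Random(seed)
    by_tier = {}
    for j in journalists:
        by_tier.setdefault(j["tier"], []).append(j)
    group_a, group_b = [], []
    for members in by_tier.values():
        rng.shuffle(members)
        for i, j in enumerate(members):
            (group_a if i % 2 == 0 else group_b).append(j)
    return group_a, group_b

media_list = [
    {"email": "reporter1@example.com", "tier": "top"},
    {"email": "reporter2@example.com", "tier": "top"},
    {"email": "blogger1@example.com", "tier": "niche"},
    {"email": "blogger2@example.com", "tier": "niche"},
]
group_a, group_b = split_media_list(media_list)
print(len(group_a), len(group_b))  # 2 2 -- balanced by tier
```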
Step 4: Send and Track
Send both variations on the same day and same time. Track open rates, click-through rates, journalist responses, and media placements. Note everything.
Use your email tracking tool to monitor opens. Use your CRM to track journalist responses and conversations. Use Google Alerts and your media monitoring service to track mentions. Get all the data.
Step 5: Analyze and Decide
After 48 to 72 hours, look at the numbers. Which variation had higher open rates? Which got more clicks? Which generated more journalist interest? Which led to actual media placements?
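With 50 recipients per arm, raw percentages are noisy, so it’s worth sanity-checking that the gap is real before declaring a winner. Here’s a minimal sketch of a two-proportion z-test; the counts are illustrative:

```python
import math

# Minimal sketch: two-proportion z-test on open rates.
# The counts below are illustrative, not real campaign data.

def open_rate_z(opens_a, sent_a, opens_b, sent_b):
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

z = open_rate_z(opens_a=19, sent_a=50, opens_b=14, sent_b=50)
print(f"z = {z:.2f}")  # |z| >= 1.96 -> significant at the 5% level
```

A 19-versus-14 split on 50 sends each comes out around z ≈ 1.1, well under 1.96, so a gap that looks decisive in percentages can still be noise. When that happens, keep your current default and retest the same variable on the next release.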
The winner becomes your new default. You use that headline, that subject line, that angle, that timing, or that CTA for your next release.
Then you test something else.
What to Track
Clean data makes clean decisions. Track the right metrics.
Email metrics: Open rate, click-through rate, click-to-open rate (clicks divided by opens, which shows reading quality), and response rate (how many journalists replied to your email).
Business metrics: Media mentions by publication, article placement type (news, feature, opinion), article URL, and audience reach (traffic or impressions generated by the article).
Qualitative metrics: Tone of coverage (positive, neutral, negative), accuracy of reporter’s understanding, and whether the story hit your key message.
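The email metrics reduce to simple ratios. A minimal sketch, with illustrative counts:

```python
# Minimal sketch: the email metrics as simple ratios.
# The counts are illustrative.
sent, opens, clicks, replies = 50, 19, 8, 3

open_rate = opens / sent        # did the subject line work?
ctr = clicks / sent             # click-through rate
ctor = clicks / opens           # click-to-open: reading quality
response_rate = replies / sent  # journalist engagement

print(f"open rate {open_rate:.0%}, CTR {ctr:.0%}, "
      f"CTOR {ctor:.0%}, response rate {response_rate:.0%}")
```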
Read more: What to Measure and How to Measure Press Release Performance
Build the Habit
One test doesn’t build anything. Five tests over five months builds a discipline. Twelve tests over a year builds a system.
Keep a simple spreadsheet: Test name, element tested, Variation A, Variation B, open rate A, open rate B, winner, and notes. After twelve tests, patterns emerge. You learn that headlines focusing on market size outperform headlines focusing on speed. You learn that your tech journalist segment opens emails on Tuesday but your business journalist segment opens Wednesday. You learn that explicit CTAs get 15 percent more responses than soft CTAs.
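A plain CSV works as well as a spreadsheet and is easy to automate. Here’s a minimal sketch; the file name and the sample row are illustrative:

```python
import csv

# Minimal sketch: append each test to a CSV log so patterns emerge
# over time. File name and sample values are illustrative.
FIELDS = ["test_name", "element", "variation_a", "variation_b",
          "open_rate_a", "open_rate_b", "winner", "notes"]

def log_test(row, path="press_release_tests.csv"):
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header once
            writer.writeheader()
        writer.writerow(row)

log_test({
    "test_name": "2025-03-api-launch",
    "element": "headline",
    "variation_a": "New API Cuts Onboarding Time by 60 Percent",
    "variation_b": "Acme Launches First No-Code API Gateway",
    "open_rate_a": 0.38,
    "open_rate_b": 0.29,
    "winner": "A",
    "notes": "speed angle beat category angle",
})
```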
Those patterns become your company’s press release playbook. New team members learn them. You move faster. Your media pickup rate climbs.
That’s how A/B testing moves from a thing you do once to a thing that changes your media results. Start with one test. One variable. One clear win. Then do it again.
Key Takeaways
- Test one variable at a time. Change the headline OR the subject line OR the timing OR the segment. Not multiple things.
- Split your media list evenly. Variation A goes to half your list, Variation B to the other half, same time and date.
- Track open rates, clicks, and journalist responses. These are your signals. Media placements are the results.
- Run tests consistently. One test is interesting. Twelve tests build a system.
- Document your winner. Use winning variations as your new baseline for your next release.
Press release A/B testing isn’t complex. It’s math. It’s measurement. It’s the difference between sending something and hoping versus sending something and knowing. Smart teams test. Smart teams win more media.