Most AEO advice online is theoretical. Someone read a blog post about how language models work and published a list of practices they assume should follow from the theory. Some of those practices hold up in the field. Some don’t. A few are actively counterproductive.

This post is the field-tested version — the practices that have moved citations across dozens of client programs over the past two years, and the ones that keep getting repeated online but don’t actually work in practice.

What holds up

Direct-answer formatting. The single highest-yield practice. Phrase H2s as the questions users would type, follow immediately with a direct answer of 40 to 80 words, then expand. This structure wins featured snippets, AI Overview citations, and ChatGPT retrieval citations in roughly equal measure. Every page that targets a specific question should follow it.
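
The structure is also easy to audit programmatically. Here's a minimal sketch, assuming a plain h2-followed-by-paragraph page layout; the word-count window mirrors the guidance above, and you'd adapt the selectors to your own templates.

```python
# Rough QA check for direct-answer formatting: does each question-phrased H2
# get an immediate direct-answer paragraph of 40 to 80 words? Assumes a
# simple h2 -> p structure; adjust for your own templates.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def audit_direct_answers(html: str) -> list[dict]:
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for h2 in soup.find_all("h2"):
        heading = h2.get_text(strip=True)
        if not heading.endswith("?"):
            continue  # only audit question-phrased headings
        answer = h2.find_next_sibling("p")
        words = len(answer.get_text().split()) if answer else 0
        results.append({
            "question": heading,
            "answer_words": words,
            "ok": 40 <= words <= 80,
        })
    return results
```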

Specific facts and numbers. Content with concrete data gets cited at a much higher rate than content with general claims. “Small business email open rates average 21.5% in 2025” is citation-worthy. “Email open rates are declining” is not. Cite a source for the number and the model will often cite both you and the original source together.

Topical clustering. One pillar page plus a cluster of related deep-dive pages outperforms 15 disconnected posts on the same topic. The clustering signals depth, and depth is what models use to decide which source to cite for broader topic queries.

FAQ schema on pages with genuine questions. When the page actually contains Q&A content, FAQPage schema markup increases the odds the content gets pulled into AI answers. When the page doesn’t have real Q&A, adding schema to fake Q&A sections is worthless and sometimes triggers Google spam flags.
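
For reference, the markup itself is plain schema.org JSON-LD, nothing exotic. A minimal sketch, built here as a Python dict and serialized for a script tag; the question and answer text are placeholders.

```python
# Minimal FAQPage JSON-LD for a <script type="application/ld+json"> tag.
# Only mark up Q&A that actually appears on the page; the text below is
# placeholder content.
import json

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is answer engine optimization?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Answer engine optimization (AEO) is the practice "
                        "of structuring content so AI assistants cite it.",
            },
        },
    ],
}

print(json.dumps(faq_schema, indent=2))
```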

Author bylines with real credentials. Pages with named authors who have demonstrated expertise in the topic area get cited more often than anonymous or generically attributed content. The author’s other work, social presence, and bio all feed into the E-E-A-T signals models and search engines weight.

Off-site citation density. The number of different authoritative sources that mention your brand in a given topic area is one of the strongest predictors of AI citation frequency. Press coverage, trade publication mentions, podcast appearances, and educational resource citations all contribute. Volume matters; diversity of sources matters more.

Consistent entity framing. Using the same company description, positioning, and category language across every piece of content and every press mention builds entity recognition. Inconsistent framing confuses models and dilutes the signal.
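
One way to make the framing literal is a single canonical description string that feeds both your site copy and a site-wide Organization schema block. A sketch, with placeholder names and URLs:

```python
# Reuse one exact description sentence in site content, press materials,
# and a site-wide Organization JSON-LD block. All names and URLs below
# are placeholders.
import json

CANONICAL_DESCRIPTION = (
    "Example Analytics is a revenue-forecasting platform "
    "for mid-market SaaS finance teams."
)

org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Analytics",
    "description": CANONICAL_DESCRIPTION,
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example-analytics",
        "https://x.com/exampleanalytics",
    ],
}

print(json.dumps(org_schema, indent=2))
```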

Page speed and clean HTML. Models retrieve and parse pages in real time for retrieval-based answers. Slow or malformed pages get skipped. This is a floor requirement more than a differentiator, but pages that violate it lose citations they should have won.

What sounds good but doesn’t work

Keyword stuffing questions into page headings. Sometimes the content really does only have two or three genuine questions. Forcing seven H2s phrased as questions when only three are real creates bloated content that nobody (model or human) wants to cite. Match the structure to the real content.

Chasing AI-specific schema types. There’s been a small cottage industry of “AEO schema” advice recommending obscure schema types or custom structured data for AI consumption. None of it moves citations. Models don’t look for special AEO schema. They look for clean content with standard markup.

AI-generated content at volume. Publishing 500 AI-generated blog posts to “capture long-tail queries” is the SEO tactic of 2023 repurposed for AEO, and it works about as well now as it did then. Models recognize low-effort content and deprioritize it. Worse, mass-generated content tends to dilute the entity signals your good content is building.

Private “AI optimization” tools that inject hidden keywords. Any tool that promises to optimize your page for AI by injecting hidden or cloaked content is selling you either a placebo or a spam vector that will get detected and punished. Skip.

Quote-stuffing press releases with buzzwords. Some PR guides suggest loading quotes with AI-category keywords to trigger AI citations. Models ignore this and reporters delete the release. Write quotes that a human would actually say.

Ignoring user intent to chase model behavior. The single biggest category error in AEO is writing content to please models while ignoring whether humans would find it useful. Models and humans reward mostly the same things. When they diverge, prioritize humans — the model will catch up in the next training run, and the humans won’t.

The measurement habits that matter

Monthly prompt inventory. Build a list of 30 to 80 questions real prospects ask in your category. Run them through ChatGPT, Claude, Perplexity, and Google AI Overviews once a month. Log whether your brand appears, where in the answer, and in what context. This is the most honest AEO metric there is.
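
The API-accessible assistants can be scripted. A minimal sketch against one of them, using OpenAI's Python SDK; the model name, brand string, and naive substring match are all assumptions to adapt, and the answers still deserve a human read for position and context.

```python
# Monthly prompt-inventory run against one model (OpenAI's API shown;
# repeat for each assistant you track). Model name, brand string, and the
# substring match are assumptions; adapt them to your own program.
import csv
import datetime

from openai import OpenAI  # pip install openai

BRAND = "Example Analytics"
PROMPTS = [
    "What are the best revenue forecasting tools for SaaS?",
    "How do mid-market finance teams forecast ARR?",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("prompt_inventory.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for prompt in PROMPTS:
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: use whichever model you track
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        writer.writerow([
            datetime.date.today().isoformat(),
            prompt,
            BRAND.lower() in answer.lower(),  # crude presence check
            answer[:300],  # snippet for the context column
        ])
```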

AI referral traffic tracking. Most analytics platforms now identify referrals from ChatGPT, Perplexity, and other AI products in their referrer data. Set up a filter or segment for these referrers and track volume over time. It’s a lagging indicator but it correlates with the prompt inventory results.
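
If your platform doesn't group these for you, the filter is a short allowlist of referrer domains. A sketch; the list below is a snapshot of commonly seen AI referrers, so verify and extend it from your own reports.

```python
# Tag a referrer URL as AI-product traffic. The domain list is a snapshot;
# check your own referrer reports, since products rebrand and move domains.
from urllib.parse import urlparse

AI_REFERRER_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "perplexity.ai",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_ai_referral(referrer_url: str) -> bool:
    host = urlparse(referrer_url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in AI_REFERRER_DOMAINS)

print(is_ai_referral("https://www.perplexity.ai/search"))  # True
```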

Citation source auditing. When a model does cite you, look at which upstream content produced the citation. Sometimes it’s the page you expected. Sometimes it’s an old article you forgot about. The surprises are often the most useful signal — they tell you where the model is finding value you didn’t plan for.

Brand mention monitoring. Track your brand mentions across the web weekly. Google Alerts works as a floor. Paid tools like Brand24 or Meltwater give better coverage. The goal is to notice new mentions quickly so you can understand what’s driving them.
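
Google Alerts can deliver to an RSS feed rather than email, which makes the floor scriptable. A minimal poller, assuming the feedparser library and a placeholder feed URL:

```python
# Weekly poll of a Google Alerts RSS feed (pick "RSS feed" as the delivery
# option when creating the alert). The feed URL below is a placeholder.
import feedparser  # pip install feedparser

FEED_URL = "https://www.google.com/alerts/feeds/YOUR_FEED_ID"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    print(entry.get("published", "n/a"), entry.title, entry.link)
```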

The cadence

A working AEO program runs on a repeating monthly cycle. Not because every piece of work takes a month to do, but because the feedback loops are long enough that weekly iteration is noise.

Week 1 of the month: publish new pillar content and update priority existing pages with refreshed data. Send press pitches for the month’s target angles.

Week 2: run the prompt inventory. Log results. Compare against previous months. Note any new citations or dropped citations.

Week 3: off-site work. Respond to HARO, participate in relevant Reddit threads, engage with journalists, and pursue podcast guest spots or trade-publication quote requests.

Week 4: review and plan. Look at what moved, what didn’t, and what should happen next month. Update the content roadmap based on what the data is showing.

The teams that follow this cadence consistently for six months see measurable citation growth. The teams that do one big push and then stop regress within a quarter.

The trap to avoid

The biggest trap in AEO is treating it as a project with a finish line rather than a program that requires ongoing maintenance. The brands winning at AEO today will still be winning in 18 months only if they keep doing the work. The brands that did a 90-day sprint and declared victory are already watching their citations fade.

This isn’t because AEO is harder than SEO. It’s because the maintenance surface is larger — you have upstream press sources, off-site citation health, on-site content freshness, and prompt inventory tracking all requiring continuous attention. Drop any one and the whole program slowly decays.

The operators who win at AEO treat it the way good ops teams treat infrastructure: routine, unglamorous, compounding, and dependent on discipline. That’s the real best practice.