The freshness signal in AI search is more aggressive and less forgiving than in traditional Google rankings. A page that ranked top three in Google for five years can get passed over by Perplexity in favor of a fresher article from a less authoritative source. The retrieval logic underneath AI products treats freshness as a strong proxy for relevance because users asking AI products tend to want current answers. The publishers and brands that figure out how to maintain content freshness across their library are getting an outsized share of AI citations. The ones who treat content as a one-and-done asset are losing ground week by week.

This piece is about how freshness actually works inside AI search. It covers what signals the retrieval systems read, how different AI products weight freshness, what kinds of content are most freshness-sensitive, and the practical tactics that keep a content library competitive in AI search over time.

What freshness means inside an AI retrieval system

Freshness is not just about the publish date. AI retrieval systems read multiple signals to decide whether content is current.

The most basic signal is the set of dates on the page itself: the datePublished and dateModified fields in schema markup, the visible byline date, and the “last updated” notation if present. These get parsed during retrieval and used as ranking inputs.

The next signal is the dates referenced in the content. A 2026 guide that cites a 2024 study without acknowledging more recent research looks stale. A page that refers to “recent” or “the latest” developments without anchoring them in time gets discounted. Pages that reference current events, current pricing, current feature sets, and current data carry more freshness signal than pages that float without temporal anchors.

The signal that most publishers ignore is the freshness of outbound links. A page that cites only sources from 2018 and earlier reads as a 2018 page even if the publish date says 2026. A page that cites a mix of foundational sources and recent (2024 to 2026) sources reads as currently maintained. The retrieval system can read the link patterns and adjust accordingly.
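One crude way to audit your own pages for this signal is to scan each outbound citation for a publication year and compute what share of dated sources is recent. This is a heuristic sketch, not how any retrieval system actually works; the function name and the two-year window are my own assumptions.

```python
import re

YEAR_RE = re.compile(r"\b(19|20)\d{2}\b")

def citation_recency(link_texts, this_year, recent_window=2):
    """Rough share of cited sources that look recently published.

    link_texts: one string per outbound citation (URL plus anchor text).
    A source counts as 'recent' if the latest year it mentions falls
    within `recent_window` years of `this_year`. Sources with no
    detectable year are ignored. Returns None if no source is dated.
    """
    dated, recent = 0, 0
    for text in link_texts:
        years = [int(m.group(0)) for m in YEAR_RE.finditer(text)]
        years = [y for y in years if 1990 <= y <= this_year + 1]
        if not years:
            continue
        dated += 1
        if max(years) >= this_year - recent_window:
            recent += 1
    return recent / dated if dated else None
```

A page scoring near zero on this check is likely to read as stale regardless of its publish date.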

There is also a secondary freshness signal from the page’s own update history. A page that has visible signs of regular maintenance (new sections added, updated data, refreshed examples) gets treated more favorably than a page that has not been touched in years. This signal is harder to read directly but emerges from comparing successive crawls.

How different AI products weight freshness

The major AI products handle freshness differently, and understanding the differences helps with optimization.

Perplexity is the most freshness-aggressive of the major products. The retrieval pipeline weights recently updated content heavily for almost any query, especially queries that involve products, pricing, news, or anything that changes over time. A Perplexity answer for “best CRM for small business” will tilt strongly toward articles published or substantially updated in the past 6 to 9 months. Older articles get cited only when they are uniquely authoritative on the topic.

ChatGPT with browsing or connected web search pulls from Bing and inherits Bing’s freshness signals. The pattern is similar to Perplexity but slightly less aggressive, and ChatGPT will sometimes default to its training data for questions it considers stable. For evolving topics, it runs a fresh search and prioritizes recent results.

Google AI Overviews uses Google’s full ranking system with adjustments. Freshness matters more for AI Overviews than for traditional rankings because the queries that trigger AI Overviews tend to have higher time sensitivity. A page that ranks first for a query in the standard SERP can be passed over for a fresher page in the AI Overview citation.

Bing Copilot follows Bing’s rankings with similar dynamics. The freshness sensitivity is moderate.

Claude with web search uses a search backend and applies its own filtering. The behavior is similar to other LLM-based products, with a slight bias toward authoritative older sources for foundational topics and toward fresher sources for current topics.

The pattern across all products: freshness matters most for commercial queries, news queries, and any topic where the underlying facts evolve. Freshness matters least for stable definitional content, mathematical truths, and historical facts that have been settled.

The freshness-sensitive content categories

Some content categories require near-constant freshness work. Others are more forgiving.

Pricing pages and content that references specific prices need updating whenever the underlying prices change. AI products are sensitive to outdated pricing because users who ask about pricing want current information. A pricing comparison from 2024 cited in a 2026 query is worse than no answer at all from the user’s perspective.

Software and product feature content evolves quickly. SaaS feature lists, product capability descriptions, and “how to” content for specific products need updates on the product cycle. A guide to a SaaS product written in 2024 is likely to be substantially out of date by 2026 because the product has shipped a year of feature updates.

Regulatory and compliance content changes when regulations change. AI products check freshness aggressively for regulatory queries because giving outdated regulatory information could mislead users. A page about California privacy regulations needs to reflect the current state of CCPA and any subsequent amendments.

Industry trend and forecast content has a short shelf life. Predictions made in 2023 about 2025 lose value rapidly once 2025 data is available. Trend pieces should be either explicitly dated and treated as historical, or refreshed annually with current data.

News and event coverage decays quickly. Coverage of a 2024 event has very limited use by 2026. The exception is retrospective analysis explicitly framed as historical context.

Local information about businesses, services, and amenities ages quickly. A local guide written in 2023 likely references businesses that have since closed, omits newer entrants, and describes conditions that have changed. AI products surface fresher local content when available.

The categories that age slowly: foundational education content, mathematical and scientific principles, historical events, biographical information about historical figures, and stable cultural reference material. These can run for years without updates.

The update strategies that work

There are three update strategies that produce reliable freshness signals.

The minor refresh: updating a few specific pieces of content (statistics, examples, screenshots) and bumping the modification date. The page structure stays the same. The byline gets a “last updated” notation reflecting the recent change. This is appropriate for pages that are still substantively correct but have aged at the margins.

The major rewrite: substantially updating the page content (new sections, updated framework, refreshed examples and citations throughout) and updating the publication date. This is appropriate for pages where the underlying topic has evolved enough that the original treatment is no longer the strongest version. The URL stays the same, so the page keeps its accumulated authority, and the rewrite gets re-indexed as current content.

The replacement: archiving the old page and publishing a new piece on the topic at a new URL. This is appropriate when the angle has shifted enough that the old framing no longer makes sense. The old page redirects to the new one. This strategy is rarely the best choice because it loses the URL’s accumulated authority, but it is sometimes necessary.

Most publishers should run a content audit at least annually and identify which pages need a minor refresh, which need a major rewrite, and which should be sunset. The work compounds because the audit reveals patterns: certain topic clusters age faster than others, certain authors produce more time-sensitive work, certain page types degrade more quickly.
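The three-tier decision above can be sketched as a simple triage function. The thresholds (six months for a minor refresh, two years for a major rewrite) and the field names are illustrative assumptions, not rules from the article; calibrate them against your own audit data.

```python
from datetime import date

def triage(page, today=date(2026, 1, 1)):
    """Classify a page into one of the three update tiers.

    `page` is a dict with:
      'date_modified'  - date of the last substantive update
      'still_accurate' - bool: is the core treatment still correct?
      'angle_holds'    - bool: does the original framing still make sense?
    Thresholds are illustrative, not canonical.
    """
    age_months = (today.year - page["date_modified"].year) * 12 + (
        today.month - page["date_modified"].month
    )
    if not page["angle_holds"]:
        return "replacement"       # new URL, redirect the old page
    if not page["still_accurate"] or age_months >= 24:
        return "major rewrite"     # new sections, refreshed citations
    if age_months >= 6:
        return "minor refresh"     # update stats, bump dateModified
    return "leave as is"
```

Running a function like this over an exported page inventory turns the annual audit into a sortable worklist.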

The publication date question

There is an active debate about whether to update the visible publication date when refreshing content. The conservative answer is to keep the original publication date and add a “last updated” notation. The aggressive answer is to update both dates so the page reads as current.

The trade-off is between credibility and freshness signal. Updating only the modification date is honest about the page’s history. Updating the publication date can mislead users into thinking the page is brand new when it is actually a revision.

The pragmatic middle path is to update the publication date when the rewrite is substantial (more than 40 percent of the content has changed) and use the “last updated” notation when changes are minor. The schema markup should always reflect the actual modification date in dateModified, regardless of how the visible date is presented.
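One way to make the “more than 40 percent changed” test concrete is to diff the old and new revisions at the word level with Python’s standard difflib. The 40 percent threshold comes from the paragraph above; treating 1 minus the similarity ratio as “share changed” is my own approximation.

```python
import difflib

def change_ratio(old_text, new_text):
    """Rough fraction of the page that changed between revisions (0.0-1.0).

    Uses difflib's similarity over word tokens; 1 - ratio is an
    approximate 'share of content changed'. Fine for triage decisions,
    not forensic diffing.
    """
    matcher = difflib.SequenceMatcher(None, old_text.split(), new_text.split())
    return 1.0 - matcher.ratio()

def bump_published_date(old_text, new_text, threshold=0.4):
    """True if the rewrite is substantial enough to update datePublished."""
    return change_ratio(old_text, new_text) > threshold
```

Minor refreshes fall well under the threshold and keep the original datePublished with a “last updated” notation.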

For AI search specifically, both approaches can work. The retrieval systems read dateModified from schema if it is present, and they cross-check against the visible date and the content itself. A page that claims a 2026 publication date but references only 2018 sources will be flagged as inconsistent and discounted. A page that has a 2024 publication date with a 2026 last-updated notation, current citations, and fresh examples will be treated as current.

The outbound link freshness signal

A page’s outbound links shape how fresh it appears to retrieval systems. Pages with stale links read as stale even if the body has been updated.

The most common link-staleness issue is broken links. A 2026 guide that cites three sources, one of which leads to a 404, looks neglected. The retrieval system does not have a direct signal for broken links, but the absence of a target page where one is expected reduces the value of the citation. Run a broken link checker on your top-cited pages quarterly.
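A quarterly check like that can be a short script. This is a minimal sketch using only the standard library; the fetch function is injectable so the filtering logic can be tested without network access, and the user-agent string is a placeholder.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def default_fetch(url, timeout=10):
    """Return the HTTP status code for a URL, or None if unreachable."""
    try:
        req = Request(url, headers={"User-Agent": "link-audit/1.0"})  # placeholder UA
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as err:
        return err.code          # 404, 410, 500, ...
    except URLError:
        return None              # DNS failure, timeout, refused connection

def broken_links(urls, fetch=default_fetch):
    """Return the outbound links that no longer resolve cleanly."""
    return [u for u in urls if (status := fetch(u)) is None or status >= 400]
```

A production version would also want rate limiting and retry logic before crawling a large library.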

The second issue is links to outdated sources. A page on AEO that cites only 2019-era SEO articles looks like it has not engaged with how the field has evolved. Replacing or supplementing those citations with current sources improves the freshness signal even when the body of the page has not changed substantially.

The third issue is the absence of links to recent, authoritative sources on the topic. A 2026 page about a topic where 2024-2026 has produced major new research should reference that research. Pages that engage with current discourse rank better than pages that ignore it.

The structured data freshness signals

Schema markup includes specific freshness signals that should be used precisely.

The datePublished property should reflect the original publication date in ISO 8601 format. The dateModified property should reflect the most recent substantive update. The two often differ, and that is fine. Both should be present in the markup.

For Article and BlogPosting schema, these dates appear inside the schema graph and are read by retrieval systems. For pages that lack Article schema (some commercial pages, product pages), consider adding it where appropriate to expose freshness signals.

The author property should be present and consistent. Pages with named authors get treated more credibly than anonymous content. The author schema should include the author’s name and ideally a sameAs link to their LinkedIn or professional profile.
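Putting the date and author properties together, a minimal Article JSON-LD block can be generated like this. The headline, name, dates, and profile URL below are placeholders; the schema.org property names (datePublished, dateModified, author, sameAs) are real.

```python
import json
from datetime import date

def article_schema(headline, published, modified, author_name, author_profile):
    """Build a minimal Article JSON-LD dict with explicit freshness dates."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "datePublished": published.isoformat(),   # ISO 8601, original date
        "dateModified": modified.isoformat(),     # last substantive update
        "author": {
            "@type": "Person",
            "name": author_name,
            "sameAs": [author_profile],           # e.g. a professional profile URL
        },
    }

# Embed the result as: <script type="application/ld+json"> ... </script>
markup = json.dumps(
    article_schema(
        "Content freshness in AI search",            # placeholder headline
        date(2024, 3, 5),                            # placeholder dates
        date(2026, 1, 12),
        "Jane Author",                               # placeholder name
        "https://www.linkedin.com/in/example",       # placeholder profile
    ),
    indent=2,
)
```

Note that datePublished and dateModified legitimately differ here, matching the pragmatic middle path described earlier.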

For evolving content, additional schema can help. The Dataset schema includes temporalCoverage, which states what time period the data covers. Where the content type has a dedicated schema with temporal properties, use them to expose that context.

Building the freshness operation

A publisher serious about content freshness operates on a quarterly cadence with annual deep audits.

Quarterly: review the top 50 pages by traffic and citations. For each, check the publication date, the modification date, the cited sources, and the visible content. Flag any page where the temporal anchors are out of date. Schedule the appropriate refresh (minor, major, or replacement) based on the assessment.

Annually: run a complete content audit. List every published page with its current traffic, citations, and freshness signals. Identify the topic clusters where the library has aged. Plan the rewrite calendar for the year. Sunset pages that no longer serve any audience.

The work is not glamorous. It is also one of the highest-leverage uses of editorial time in 2026. A library of 200 evergreen articles, all maintained on a freshness cycle, will outperform a library of 1,000 articles that decay without intervention. The compounding nature of the work means it gets more efficient over time as the library stabilizes.

The publishers winning AI search citations are the ones who treat the content library as a maintained asset rather than a series of one-time investments. The work is a discipline, not a project. The discipline pays back.