SEO Glossary

Technical SEO Terms

Infrastructure, crawling, indexing, and site performance optimization — 86 terms.

.htaccess File

A configuration file for Apache web servers that controls URL redirects, access permissions, and other server behaviors. The .htaccess file is commonly used for implementing 301 redirects and managing URL rewriting rules.
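As an illustrative sketch, a .htaccess file might combine both uses mentioned above — a 301 redirect and a rewrite rule (the paths and domain are placeholders; the rewrite rule assumes mod_rewrite is enabled):

```apache
# Permanently redirect a single moved page (mod_alias)
Redirect 301 /old-page/ https://www.example.com/new-page/

# Rewrite rule example: force HTTPS for all requests (mod_rewrite)
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```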

2xx Status Codes

HTTP response codes in the 200 range indicating successful requests. The most common is 200 OK, confirming the server delivered the requested page successfully.

301 Redirect

A permanent server-side redirect that passes nearly all link equity from the original URL to the destination. Essential for preserving SEO value during site migrations, URL changes, and domain consolidations.

302 Redirect

A temporary redirect indicating a page has moved temporarily. Unlike 301 redirects, search engines may continue indexing the original URL and may not transfer full link equity to the destination.

4xx Status Codes

HTTP response codes in the 400 range indicating client-side errors. Common examples include 401 Unauthorized, 403 Forbidden, and 404 Not Found. Monitoring these codes helps identify broken links and access issues.

5xx Status Codes

HTTP response codes in the 500 range indicating server-side errors. These codes signal that the server failed to fulfill a valid request, potentially blocking crawlers from accessing and indexing content.
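The status-code ranges covered in the entries above can be summarized in a small sketch (the function name is illustrative, not part of any standard library):

```python
def status_class(code: int) -> str:
    """Return the broad HTTP status class for a numeric response code."""
    if 200 <= code < 300:
        return "2xx success"
    if 300 <= code < 400:
        return "3xx redirect"
    if 400 <= code < 500:
        return "4xx client error"
    if 500 <= code < 600:
        return "5xx server error"
    return "unknown"

# A crawl report might bucket observed codes like this:
codes = [200, 301, 302, 404, 500]
print({c: status_class(c) for c in codes})
```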

ADA Website Compliance

Ensuring websites meet Americans with Disabilities Act standards for accessibility. ADA-compliant sites provide better experiences for all users and can benefit from improved crawlability and structured content.

AhrefsBot

The web crawler operated by Ahrefs that discovers and indexes backlinks across the web. It is one of the most active crawlers on the internet, building Ahrefs' extensive link database.

AJAX

Asynchronous JavaScript and XML — a technique for loading content dynamically without full page reloads. AJAX-heavy sites can create crawling challenges if search engines cannot execute the JavaScript needed to render content.

Bingbot

Microsoft's web crawler that discovers and indexes pages for Bing search results. Bingbot follows crawling protocols similar to Googlebot's but has its own rendering and indexing characteristics.

Breadcrumb Navigation

A secondary navigation system showing the user's location within a site's hierarchy. Breadcrumbs improve user experience, help search engines understand site structure, and can appear as rich results in SERPs.

Browser

Software application used to access and navigate websites on the internet. Browser rendering behavior affects how search engines process JavaScript-heavy sites and influences Core Web Vitals measurements.

Canonical Tag

An HTML element that specifies the preferred version of a page when duplicate or near-duplicate content exists. Canonical tags consolidate link equity to a single URL and prevent duplicate content issues in search results.
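A canonical tag is a single `<link>` element placed in the page's `<head>`; for example (the URL is a placeholder):

```html
<head>
  <!-- Tell search engines that the preferred URL for this content is the one below -->
  <link rel="canonical" href="https://www.example.com/preferred-page/">
</head>
```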

Canonical URL

The preferred URL that search engines should index when multiple URLs serve the same or similar content. Setting canonical URLs correctly prevents dilution of ranking signals across duplicate pages.

ccTLD

Country code top-level domain — a two-letter domain extension associated with a specific country, such as .uk, .de, or .jp. ccTLDs send strong geographic signals to search engines for local and international SEO targeting.

Click Depth

The number of clicks required to reach a specific page from the homepage. Pages with shallow click depth are crawled more frequently and tend to receive more link equity, making site architecture a critical SEO factor.

Client-Side and Server-Side Rendering

Two approaches to generating web page HTML. Server-side rendering produces complete HTML on the server for easy crawling, while client-side rendering builds pages in the browser with JavaScript, which can create indexing challenges.

Core Web Vitals

Google's set of user experience metrics measuring loading performance (LCP), interactivity (INP), and visual stability (CLS). Core Web Vitals are confirmed ranking signals and essential benchmarks for technical SEO.

Crawl Budget

The number of pages a search engine will crawl on your site within a given timeframe. For ecommerce sites with large catalogs, crawl budget determines whether new products get indexed fast enough to generate revenue — or sit invisible for weeks.

Crawl Error

An issue preventing search engine crawlers from accessing a page, such as server errors, broken links, or blocked resources. Monitoring and resolving crawl errors ensures important content remains accessible for indexing.

Crawlability

The ease with which search engine bots can discover and access pages on a website. Good crawlability requires clean site architecture, proper internal linking, XML sitemaps, and correctly configured robots.txt files.

Crawler

An automated program that systematically browses the web to discover and index content. Google's crawler (Googlebot), Bing's crawler (Bingbot), and third-party crawlers from SEO tools all traverse the web following links.

Crawler Directives

Instructions that tell search engine crawlers how to interact with a website, including what to crawl, index, or ignore. Common directives include robots.txt rules, meta robots tags, and canonical declarations.
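A robots.txt file is the most common place these directives live; a minimal sketch (the paths and sitemap URL are placeholders):

```
# robots.txt — tell all crawlers to skip low-value sections
User-agent: *
Disallow: /cart/
Disallow: /internal-search/

Sitemap: https://www.example.com/sitemap.xml
```

Meta robots tags and canonical declarations are page-level HTML equivalents, covered in their own entries.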

Crawler Traps

Website structures that cause search engine crawlers to get stuck in infinite loops or waste crawl budget on low-value pages. Common traps include infinite calendars, faceted navigation, and session-based URLs.

Crawling

The process by which search engine bots discover new and updated web pages by following links. Crawling is the first step in getting content indexed and ranked in search results.

Critical Rendering Path

The sequence of steps a browser takes to convert HTML, CSS, and JavaScript into rendered pixels on screen. Optimizing the critical rendering path reduces time to first meaningful paint and improves page speed metrics.

CSS

Cascading Style Sheets — the language used to control the visual presentation of web pages. CSS optimization impacts page load speed, and render-blocking CSS can delay content visibility to both users and search engines.

De-Index

The removal of a page or site from a search engine's index, making it no longer appear in search results. De-indexing can occur through manual penalties, noindex tags, or technical misconfigurations.

Dead-End Page

A webpage with no outgoing links to other pages on the site. Dead-end pages trap users and crawlers, preventing the flow of link equity and reducing both user engagement and crawl efficiency.

DNS

Domain Name System — the internet's phonebook that translates domain names into IP addresses. DNS configuration affects site accessibility, page load speed, and can impact crawling if resolution is slow or misconfigured.

DOM

Document Object Model — a programming interface representing HTML documents as a tree structure. Search engines interact with the DOM to understand page content, making DOM rendering critical for JavaScript-heavy websites.

Duplicate Content

Substantially similar content appearing at multiple URLs on the same or different websites. Duplicate content confuses search engines about which version to index and rank, diluting potential ranking signals across copies.

Dwell Time

The amount of time a user spends on a page before returning to search results. Longer dwell times can indicate content that effectively satisfies search intent, though Google has not confirmed it as a direct ranking factor.

Dynamic URL

A URL generated dynamically based on database queries, typically containing parameters like question marks and ampersands. Dynamic URLs can create crawling challenges and duplicate content issues if not properly managed.

Edge SEO

Implementing SEO changes at the CDN or edge server level rather than modifying the origin server. Edge SEO enables rapid deployment of redirects, header modifications, and rendering optimizations without backend development cycles.

Faceted Navigation

A filtering system that lets shoppers narrow product listings by attributes like size, color, and price. Every filter combination generates a unique URL — and without proper handling, those URLs destroy crawl budget, fragment ranking signals, and create thousands of duplicate pages.

Findability

How easily users and search engines can discover content on a website. Findability depends on site architecture, internal linking, navigation design, and proper indexation of important pages.

Follow

The default directive for links, indicating search engines should crawl the linked page and pass link equity. A followed link transfers ranking signals from the source page to the destination.

HTTP

HyperText Transfer Protocol — the foundational protocol for data transfer on the web. HTTP defines how messages are formatted and transmitted between web servers and browsers.

Image Compression

Reducing image file sizes without significant quality loss to improve page load times. Image optimization is one of the highest-impact performance improvements, as images often account for the majority of page weight.

Index Bloat

When a search engine indexes far more pages than a site intends, including low-value or duplicate pages. Index bloat dilutes crawl budget and overall site quality signals, potentially depressing rankings for important pages.

Index Coverage Report

A Google Search Console report showing which pages are indexed, excluded, or experiencing errors. This report is essential for diagnosing indexation issues and ensuring important content appears in search results.

Indexability

Whether a page meets the technical requirements for search engines to include it in their index. Factors affecting indexability include noindex tags, canonical signals, crawl accessibility, and content quality thresholds.

Indexed Page

A web page that has been crawled, processed, and added to a search engine's database. Only indexed pages can appear in search results, and the site: search operator can verify a page's index status.

Indexing

The process by which search engines analyze crawled pages and store them in their database for retrieval. Indexing involves parsing content, evaluating quality, and organizing information for efficient search result generation.

Infinite Scroll

A web design pattern that continuously loads new content as users scroll down, eliminating traditional pagination. Infinite scroll can create SEO challenges if not implemented with proper URL structures and crawlable link paths.

Information Retrieval

The science of searching for and extracting relevant information from large datasets. Search engines are fundamentally information retrieval systems, using algorithms to match queries with the most relevant documents.

Interaction to Next Paint

A Core Web Vitals metric measuring page responsiveness by tracking the time between a user interaction and the next visual update. INP replaced First Input Delay as the primary interactivity metric in March 2024.

IP Address

A unique numerical identifier assigned to every device connected to the internet. IP addresses are relevant to SEO for server location signals, CDN configuration, and identifying potentially manipulative link networks.

JavaScript SEO

The practice of ensuring search engines can properly crawl, render, and index JavaScript-generated content. JavaScript SEO involves techniques like server-side rendering, dynamic rendering, and careful resource management.

JSON-LD

JavaScript Object Notation for Linked Data — Google's recommended format for implementing structured data markup. JSON-LD allows you to add rich data about entities, products, and content without modifying the visible HTML.
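JSON-LD lives in a `<script type="application/ld+json">` block, typically in the `<head>`; a minimal sketch using the schema.org Product type (all values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "A placeholder product used to illustrate JSON-LD markup.",
  "brand": { "@type": "Brand", "name": "ExampleBrand" }
}
</script>
```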

Lazy Loading

A technique that defers loading of images and other resources until they are needed — typically when they enter the viewport. Lazy loading improves initial page load performance but must be implemented carefully to ensure search engines can access all content.
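Modern browsers support lazy loading natively via the `loading` attribute; a minimal example (the image path is a placeholder — explicit width and height also help avoid layout shift):

```html
<!-- The browser defers fetching this image until it approaches the viewport -->
<img src="/images/product.jpg" alt="Product photo" loading="lazy" width="600" height="400">
```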

Link Profile

The complete collection of backlinks pointing to a website, including their sources, anchor text, link attributes, and quality distribution. A healthy, diverse link profile signals genuine authority to search engines.

Links (Internal)

Hyperlinks connecting pages within the same website domain. Internal links establish site hierarchy, distribute authority, aid user navigation, and help search engines discover and understand the relationships between content.

Local Business Schema

Structured data markup specifically designed for local businesses that provides search engines with details like address, hours, phone number, and service areas. Local business schema enhances visibility in local search results and map packs.

Log File Analysis

The process of examining server log files to understand how search engine bots crawl a website. Log file analysis reveals crawl frequency, crawl budget allocation, and potential issues that aren't visible in standard SEO tools.
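A minimal sketch of the idea, assuming Apache combined log format (the log lines below are fabricated samples; a real analysis would read your actual access log and match crawler user agents more carefully, since "Googlebot" in a user-agent string can be spoofed):

```python
from collections import Counter

# Fabricated sample lines in Apache combined log format
sample_log = """\
66.249.66.1 - - [01/Jan/2024:00:00:00 +0000] "GET /products/ HTTP/1.1" 200 512 "-" "Googlebot/2.1"
66.249.66.1 - - [01/Jan/2024:00:00:05 +0000] "GET /products/ HTTP/1.1" 200 512 "-" "Googlebot/2.1"
66.249.66.1 - - [01/Jan/2024:00:00:09 +0000] "GET /blog/ HTTP/1.1" 200 512 "-" "Googlebot/2.1"
203.0.113.7 - - [01/Jan/2024:00:00:12 +0000] "GET /blog/ HTTP/1.1" 200 512 "-" "Mozilla/5.0"
"""

def googlebot_hits(log_text: str) -> Counter:
    """Tally request paths for lines whose user-agent string mentions Googlebot."""
    hits = Counter()
    for line in log_text.splitlines():
        if "Googlebot" in line:
            # Whitespace-split field 7 (index 6) of a combined-format line is the path
            hits[line.split()[6]] += 1
    return hits

print(googlebot_hits(sample_log))
```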

Minification

Removing unnecessary characters from code files — whitespace, comments, and line breaks — without changing functionality. Minifying HTML, CSS, and JavaScript reduces file sizes and improves page load performance.

Mirror Site

An exact copy of a website hosted on a different domain or server. While mirror sites serve legitimate disaster recovery purposes, they create duplicate content issues and should use canonical tags to designate the primary version.

Mobile-First Indexing

Google's approach of using the mobile version of a page's content for indexing and ranking. Since Google predominantly crawls with a mobile user agent, sites must ensure their mobile experience contains all critical content and functionality.

Noopener

A link attribute that prevents a newly opened page from accessing the original page's window object. Noopener improves security and performance for links that open in new tabs.

Noreferrer

A link attribute that prevents the browser from sending the referring page's URL to the destination site. Noreferrer provides privacy but also means the destination won't see referral traffic data in their analytics.
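The noopener and noreferrer attributes are often combined on external links that open in new tabs; a minimal example (URL is a placeholder):

```html
<!-- New tab cannot access window.opener, and no Referer header is sent -->
<a href="https://example.com/" target="_blank" rel="noopener noreferrer">Example</a>
```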

Orphan Page

A page with no internal links pointing to it from other pages on the same website. Orphan pages are difficult for both users and crawlers to discover, resulting in wasted content potential and poor indexation.

Pagination

Dividing content across multiple pages using numbered navigation links. Proper pagination implementation ensures search engines can discover all content while consolidating ranking signals to avoid dilution across paginated sequences.

Protocol

A set of rules governing data transmission on the internet, such as HTTP and HTTPS. The protocol portion of a URL affects security signals, with HTTPS being a confirmed ranking factor and modern web standard.

Referrer

The URL of the page that linked to the current page, sent as an HTTP header when users follow links. Referrer data helps track traffic sources but can be restricted by noreferrer attributes and browser privacy settings.

Rel Canonical

An HTML element that tells search engines which URL is the master version when duplicate pages exist. In ecommerce, canonical tags prevent product variants, filtered URLs, and platform-generated duplicates from splitting your ranking power across dozens of near-identical pages.

Relative URL

A URL path that doesn't include the full domain, relying on the current page's context to resolve. While relative URLs work for internal links, absolute URLs are generally preferred for canonical tags and structured data.

Render-Blocking Scripts

JavaScript and CSS files that must be loaded and processed before a page can render, delaying visual content display. Eliminating or deferring render-blocking resources is a key performance optimization for improving Core Web Vitals.
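The two standard HTML attributes for deferring scripts look like this (file paths are placeholders):

```html
<!-- defer: download in parallel, execute only after HTML parsing finishes -->
<script src="/js/app.js" defer></script>

<!-- async: download in parallel, execute as soon as the file arrives -->
<script src="/js/analytics.js" async></script>
```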

Rendering

The process of converting HTML, CSS, and JavaScript code into the visual page that users see. Search engines must render pages to understand JavaScript-generated content, creating a second wave of processing beyond initial crawling.

Scrape

Extracting data from websites using automated tools. While scraping has legitimate uses in SEO research and competitive analysis, unauthorized scraping can violate terms of service and copyright protections.

Search Engine Bot

An automated program operated by a search engine to crawl and index web content. Search engine bots follow links, read sitemaps, and process page content to build the index that powers search results.

Search Engine Poisoning

A cyberattack technique where malicious actors manipulate search results to redirect users to harmful websites. SEO professionals should monitor for compromised pages and implement security measures to prevent exploitation.

SEO Silo

A site architecture strategy that groups related content into distinct thematic sections with tight internal linking. Siloing creates clear topical clusters that help search engines understand a site's expertise areas.

Server Log Analysis

Examining server access logs to understand how search engine crawlers interact with a website. Server log analysis reveals actual crawl behavior, crawl frequency patterns, and technical issues not visible through standard SEO tools.

Structured Data

Standardized code formats that help search engines understand and categorize page content. Implementing structured data through schema markup enables rich results, knowledge panels, and enhanced SERP features.

Subdomain

A prefix added before a domain name creating a separate section of a website, such as blog.example.com. Subdomains are treated as somewhat independent entities by search engines, which affects how authority and rankings are distributed.

Taxonomy

The classification system used to organize website content into categories, tags, and hierarchical groupings. Well-structured taxonomies improve navigation, internal linking, and help search engines understand content relationships.

Top-Level Domain

The last segment of a domain name, such as .com, .org, or .edu. While generic TLDs don't directly impact rankings, country-code TLDs send geographic signals, and domain extension choices can influence user trust perceptions.

URL Folders

Subdirectory segments within a URL path that organize content hierarchically, such as /blog/category/post-title. URL folder structure communicates site architecture to search engines and affects how authority flows between content sections.

URL Parameter

Query strings appended to URLs using ? and & characters that modify page content or tracking. URL parameters can create duplicate content and crawl waste if search engines index multiple parameter combinations of the same content.

URL Slug

The human-readable portion of a URL that identifies a specific page, typically appearing after the domain and folder path. Optimized URL slugs are concise, descriptive, include target keywords, and use hyphens to separate words.

User Agent

A string identifying the software making a request to a web server, used by search engine crawlers and browsers. SEO professionals analyze user agent data to understand which crawlers are accessing their sites and how frequently.

WebP Image Format

A modern image format developed by Google that provides superior compression compared to JPEG and PNG. WebP images are typically 25-35% smaller at equivalent quality, improving page load speeds and Core Web Vitals scores.

Website Structure

The overall organization and hierarchy of a website's pages, including URL structure, internal linking patterns, and content groupings. A well-planned website structure helps both users and search engines navigate content efficiently.

X-Robots-Tag

An HTTP header directive that controls search engine crawling and indexing behavior for non-HTML resources like PDFs and images. The X-Robots-Tag provides the same functionality as meta robots tags but works at the server level.
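On Apache, a sketch of applying the header to all PDFs might look like this (assumes mod_headers is enabled):

```apache
# Send a noindex directive with every PDF response
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```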

XML Sitemap

An XML file listing all important URLs on a website that search engines should crawl and index. XML sitemaps can include metadata about each URL, such as last modification date, change frequency, and priority level.
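A minimal sitemap following the sitemaps.org protocol (URL and date are placeholders; note that Google has said it largely ignores the changefreq and priority fields):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>
```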

Need help putting these concepts into practice?

Digital Commerce Partners builds organic growth systems for ecommerce brands.