---
title: "Crawl-Delay"
description: "Crawl-delay is a robots.txt directive that tells crawlers to wait a specified number of seconds between consecutive requests, used to prevent server overload from aggressive crawling."
category: "AI & Bot Detection"
date: "2026-03-05"
url: "https://getbeast.io/glossary/crawl-delay/"
type: "glossary"
---

# Crawl-Delay

**Category:** AI & Bot Detection | **Updated:** 2026-03-05

Crawl-delay is a robots.txt directive that tells crawlers to wait a specified number of seconds between consecutive requests, used to prevent server overload from aggressive crawling.

---

## What Is Crawl-Delay?
Crawl-delay is a nonstandard robots.txt directive that specifies the minimum number of seconds a crawler should wait between consecutive requests. For example, `Crawl-delay: 10` tells a bot to wait at least 10 seconds between requests. It was never part of the original Robots Exclusion Protocol, so support varies: Bingbot, Yandex, and some other crawlers honor it, but **Googlebot ignores it entirely**.
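
In a robots.txt file, the directive sits inside a user-agent group. A minimal example (the bot name and 10-second value are illustrative, not recommendations):

```
User-agent: Bingbot
Crawl-delay: 10
```

A compliant crawler reading this file would make at most one request to the site every 10 seconds.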

## Why Crawl-Delay Matters
Crawl-delay helps protect servers from being overwhelmed by aggressive crawlers. If a bot is making hundreds of requests per second, it can degrade performance for real users. Crawl-delay provides a simple way to throttle well-behaved bots without implementing complex rate limiting.

## How to Use Crawl-Delay
Add `Crawl-delay: N` to the relevant user-agent group in robots.txt. Googlebot ignores the directive, and Google retired Search Console's crawl rate limiter in January 2024; to slow Googlebot, temporarily return 503 or 429 responses, or report overcrawling to Google directly. Be careful not to set the value too high: `Crawl-delay: 10` caps a single crawler at 8,640 requests per day (86,400 seconds ÷ 10), which can slow indexing on large sites. Monitor compliance in your server logs with [LogBeast](/logbeast/).
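
For crawlers that honor the directive, the applicable value can also be read programmatically. Python's standard-library `urllib.robotparser` exposes it via `crawl_delay()`; the sketch below (the domain, URLs, and user-agent string are placeholders) shows how a polite crawler would read and respect it:

```python
import time
import urllib.robotparser

# Fetch and parse the site's robots.txt (example.com is a placeholder).
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# crawl_delay() returns the Crawl-delay value for the matching
# user-agent group, or None if no delay is set.
delay = rp.crawl_delay("Bingbot") or 0

for url in ["https://example.com/a", "https://example.com/b"]:
    if rp.can_fetch("Bingbot", url):
        # fetch(url) would go here
        time.sleep(delay)  # wait between consecutive requests
```

The same `crawl_delay()` call is a quick way to sanity-check what delay a given bot should be observing when you audit request spacing in your access logs.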

---

## Related Terms

- [Robots.txt](/glossary/robots-txt/)
- [Crawl Rate](/glossary/crawl-rate/)
- [Rate Limiting](/glossary/rate-limiting/)
- [Bingbot](/glossary/bingbot/)
- [Crawler Management](/glossary/crawler-management/)

## Further Reading

- [The Ultimate robots.txt Guide](/blog/robots-txt/)

---

*Part of the [GetBeast SEO Glossary](/glossary/). Visit [GetBeast.io](https://getbeast.io) for professional SEO and log analysis tools.*
