web-crawler-research v1.0
What it does
web-crawler-research starts from one seed page, follows a limited number of links on the same domain (capped by --limit), and returns each page's title for a quick research snapshot. It is designed for fast discovery work when you want a lightweight view of a site without building a full scraper.
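As a rough illustration of the crawl logic described above, here is a minimal standard-library sketch (the shipped scripts/crawl_research.py may be implemented differently; the `PageParser` and `crawl` names are hypothetical):

```python
# Hypothetical sketch of the crawl logic described above; the real
# scripts/crawl_research.py may differ. Standard library only, matching
# the listed requirements.
import json
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class PageParser(HTMLParser):
    """Collects the <title> text and all <a href> values on one page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def fetch(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def crawl(seed_url, limit=8):
    parser = PageParser()
    parser.feed(fetch(seed_url))
    seed_host = urlparse(seed_url).netloc
    result = {
        "seed_url": seed_url,
        "seed_title": parser.title.strip(),
        "pages": [],
    }
    seen = set()
    for href in parser.links:
        url = urljoin(seed_url, href)
        # Follow same-domain links only, and skip repeats.
        if urlparse(url).netloc != seed_host or url in seen:
            continue
        seen.add(url)
        child = PageParser()
        try:
            child.feed(fetch(url))
        except OSError:
            continue  # skip unreachable pages
        result["pages"].append({"url": url, "title": child.title.strip()})
        if len(result["pages"]) >= limit:
            break
    return result

if __name__ == "__main__":
    print(json.dumps(crawl("https://example.com", limit=8), indent=2))
```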
Use cases
- Researchers who need a quick same-domain map before deciding what deeper pages to inspect
- Content and marketing teams gathering headline-level context from a company site or documentation hub
- Builders prototyping lightweight web research workflows with simple local tooling
Requirements
- Python 3
- No external dependencies, standard library only
- No API keys required
- Internet access to fetch pages
- OpenClaw v2026.3.23+
Example usage
python3 scripts/crawl_research.py https://example.com --limit 8 --json
Expected output
{
  "seed_url": "https://example.com",
  "seed_title": "Example Domain",
  "pages": []
}
When the seed page links to other pages on the same domain, the pages array includes each discovered URL and its page title.
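For illustration, output for a seed with two discovered child links might look like the following (URLs, titles, and the per-page key names are hypothetical):

```json
{
  "seed_url": "https://example.com",
  "seed_title": "Example Domain",
  "pages": [
    {"url": "https://example.com/docs", "title": "Documentation"},
    {"url": "https://example.com/about", "title": "About Us"}
  ]
}
```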
Price
$4.99