Public API¶
The main entry points for Silkweb. Functions and types below are available via `import silkweb` (see `__all__` in the package for the canonical list).
Fetching¶
fetch
¶
async_fetch
async
¶
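No docstring is rendered for fetch here, so this sketch leans on details from elsewhere on this page: the `tier=` keyword is mentioned under async_query, and the SilkPage return type appears in the AsyncCrawler signature. The `"auto"` tier value and the `.html` attribute on SilkPage are assumptions.

```python
import silkweb

# tier= is the fetch-tier selector mentioned under async_query below;
# "auto" as its value and SilkPage.html are assumptions.
page = silkweb.fetch("https://example.com", tier="auto")
print(page.html[:200])  # hypothetical attribute on the returned SilkPage
```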
Extraction¶
ask
¶
async_ask
async
¶
async_ask(url: str, prompt: str, *, output: str = 'auto', dataframe_engine: str = 'auto', explain: bool = False, **fetch_kwargs: Any)
Ask a natural-language question of a URL.
Pipeline:

- fetch (auto tier)
- hydration-first (optionally use hydration JSON as the cleaned content)
- otherwise clean → synthesize schema → extract → compile selectors → cache
- output selection:
    - `output="python"`: `list[dict]`
    - `output="df"`: DataFrame (pandas/polars) if available
    - `output="auto"`: backward-compatible auto-conversion when the caller has already imported pandas/polars
Source code in silkweb/__init__.py
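A minimal usage sketch; the URL and prompt are placeholders, and `output="df"` expects pandas or polars to be installed:

```python
import asyncio
import silkweb

async def main():
    # output="python": plain list[dict] rows.
    rows = await silkweb.async_ask(
        "https://example.com/products",
        "product name and price for every listing",
        output="python",
    )
    for row in rows:
        print(row)

    # output="df": a pandas/polars DataFrame when either library is available.
    df = await silkweb.async_ask(
        "https://example.com/products",
        "product name and price for every listing",
        output="df",
    )
    print(df)

asyncio.run(main())
```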
extract
¶
async_extract
async
¶
async_extract(url: str, schema, prompt: str, *, output: str = 'python', dataframe_engine: str = 'auto', explain: bool = False, **kwargs: Any)
Extract typed data from a URL using a provided Pydantic schema.
- selector cache fast-path
- self-heal on validation failure
`output` controls the return shape:

- `"python"` / `"list"` / `"dict"`: `list[BaseModel]`, with `__silk_meta__` when present
- `"df"` / `"dataframe"`: pandas or polars DataFrame (see `dataframe_engine`), else falls back to the list
- `"auto"`: the historical behavior; DataFrame only if `auto_detect_dataframe` is set and pandas/polars is already imported
Source code in silkweb/__init__.py
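A sketch of the typed-extraction contract; the Product schema, URL, and prompt are illustrative:

```python
import asyncio
from pydantic import BaseModel
import silkweb

class Product(BaseModel):
    name: str
    price: float

async def main():
    # Default output="python" returns list[Product]; a selector-cache hit
    # skips the LLM, and validation failures trigger the self-heal path.
    items = await silkweb.async_extract(
        "https://example.com/products",
        Product,
        "Extract every product's name and price",
    )
    for item in items:
        print(item.name, item.price)

asyncio.run(main())
```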
async_extract_from_html
async
¶
async_extract_from_html(url: str, html: str, *, schema, prompt: str, output: str = 'python', dataframe_engine: str = 'auto', **kwargs: Any)
Same extraction contract as async_extract, but uses pre-fetched HTML (no network fetch).
Returns list[BaseModel] by default, or a DataFrame when output="df" / "dataframe",
or auto-converts like async_extract when output="auto".
Source code in silkweb/__init__.py
SilkQL¶
query
¶
Compile and run a SilkQL query (sync). Arguments and return type match :func:async_query.
async_query
async
¶
async_query(url: str, silkql_string: str, *, provider=None, cache: SelectorCache | None = None, follow_pagination: bool = False, max_pages: int = 20, **fetch_kwargs: Any) -> QueryResult
Compile and run a SilkQL query against url.
Fetches the page (tier "auto" by default; pass tier= like :func:fetch),
extracts with the compiled schema, caches CSS/XPath selectors per domain, and returns
a :class:QueryResult whose data is a one-element list containing the merged root
model (list collections are merged across pages when follow_pagination is true).
- `provider`: extraction LLM; defaults to `configure(extraction_model=...)`.
- `cleaner_model` / `selector_model`: optional model strings (popped from `**fetch_kwargs`), defaulting to config; the same split as :func:extract.
- `cache`: selector cache instance; defaults to `CacheManager.from_config().selectors`.
- `follow_pagination`: when the SilkQL AST includes `pagination { next_page_url }`, follow relative/absolute next links up to `max_pages`.
- `force_llm`: skip the selector cache (popped from `fetch_kwargs`, default `configure(force_llm=...)`).
- `cached` on the result is true if any scraped page used a selector-cache hit.
Source code in silkweb/__init__.py
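A sketch of the sync entry point. The SilkQL string is illustrative: only `pagination { next_page_url }` is confirmed by the docstring above; the field-block syntax around it is an assumption.

```python
import silkweb

# Hypothetical query: the products/name/price block syntax is assumed;
# pagination { next_page_url } is the construct the docstring confirms.
silkql = """
products {
  name
  price
  pagination { next_page_url }
}
"""

result = silkweb.query(
    "https://example.com/products",
    silkql,
    follow_pagination=True,  # merge list collections across next-page links
    max_pages=5,
)
root = result.data[0]       # one-element list holding the merged root model
print(root, result.cached)  # cached: any page served from the selector cache
```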
Crawling¶
crawl
¶
AsyncCrawler
dataclass
¶
AsyncCrawler(start_url: str, allowed_domains: set[str] | None = None, url_pattern: str | None = None, max_pages: int = 100, max_depth: int = 2, concurrency: int = 10, per_domain_concurrency: int = 2, max_pending_urls: int = 5000, schema: type[BaseModel] | None = None, prompt: str | None = None, dedup: SeenSet = SeenSet(), on_page: OnPage = None, on_item: OnItem = None, on_error: OnError = None, fetch_func: Callable[..., Awaitable[SilkPage]] | None = None, extract_func: Callable[..., Awaitable[list[BaseModel]]] | None = None, _pattern_re: Pattern[str] | None = None, _domain_sems: dict[str, Semaphore] = dict(), _pages_lock: Lock = asyncio.Lock(), _pages_fetched: int = 0)
run
async
¶
Crawl starting at start_url, yielding extracted items.
Requires schema and prompt both set or both omitted; mismatched
configuration raises ValueError.
Source code in silkweb/crawl/crawler.py
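A sketch of driving the crawler class directly (async_crawl below wraps the same machinery). Assumes AsyncCrawler is re-exported at the package top level, per the note at the top of this page:

```python
import asyncio
from pydantic import BaseModel
from silkweb import AsyncCrawler  # top-level export assumed per the __all__ note

class Article(BaseModel):
    title: str

async def main():
    crawler = AsyncCrawler(
        start_url="https://example.com/blog",
        max_pages=20,
        max_depth=1,
        schema=Article,   # schema and prompt must be set together
        prompt="Extract the article title",
    )
    async for item in crawler.run():  # run() yields extracted items
        print(item.title)

asyncio.run(main())
```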
SeenSet
dataclass
¶
SeenSet(backend: DedupBackend = 'sqlite', sqlite_path: str | None = None, _mem: set[str] | None = None, _con: Connection | None = None)
URL deduplication set.
Backends:

- sqlite: persistent set backed by a single table (single persistent connection).
- memory: in-process set.
add
¶
Add `url` to the seen-set. Returns True if it was newly added, False if it was already present.
Source code in silkweb/crawl/dedup.py
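A small sketch of the dedup set on its own, using the in-memory backend so nothing touches disk:

```python
from silkweb import SeenSet  # top-level export assumed per the __all__ note

seen = SeenSet(backend="memory")          # backend="sqlite" persists instead
print(seen.add("https://example.com/a"))  # True: newly added
print(seen.add("https://example.com/a"))  # False: already present
```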
async_crawl
async
¶
async_crawl(start_url: str, *, allowed_domains: set[str] | None = None, url_pattern: str | None = None, max_pages: int = 100, max_depth: int = 2, concurrency: int = 10, per_domain_concurrency: int = 2, max_pending_urls: int = 5000, schema=None, prompt: str | None = None, on_page=None, on_item=None, on_error=None, **fetch_kwargs: Any)
Breadth-first crawl from start_url with URL dedup, global and per-domain concurrency,
and optional structured extraction on each page.
- `schema` / `prompt`: both required together for extraction; if both are omitted, only `on_page` / link discovery run and the returned list is empty.
- `max_pages`: hard cap on fetched pages.
- `max_depth`: link-following depth from the start URL (0 = start page only).
- `max_pending_urls`: best-effort cap on the crawl work-queue size to limit memory.
- `on_page`, `on_item`, `on_error`: optional async callbacks (page after fetch, each extracted model, errors per URL).
- Remaining keyword arguments are passed to the fetcher (same as :func:fetch).
Source code in silkweb/__init__.py
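A sketch of a bounded crawl with extraction; the schema, prompt, and URLs are placeholders:

```python
import asyncio
from pydantic import BaseModel
import silkweb

class Article(BaseModel):
    title: str

async def main():
    # With schema + prompt both set, the returned list holds the
    # extracted models from every fetched page.
    items = await silkweb.async_crawl(
        "https://example.com/blog",
        allowed_domains={"example.com"},
        max_pages=50,
        max_depth=2,
        schema=Article,
        prompt="Extract the article title",
    )
    print(len(items))

asyncio.run(main())
```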
crawl_sitemap
¶
async_crawl_sitemap
async
¶
async_crawl_sitemap(sitemap_url: str, *, schema=None, prompt: str | None = None, max_pages: int = 100, max_sitemap_files: int = 20, concurrency: int = 10, per_domain_concurrency: int = 2, **fetch_kwargs: Any)
Fetch a sitemap (urlset or sitemapindex), collect page <loc> URLs via XML
parsing, then run :func:async_crawl on each (max_depth=0, max_pages=1 per URL).
allowed_domains for each crawl defaults to the sitemap URL host. Pass max_sitemap_files
to cap nested sitemap documents when the root is an index.
Source code in silkweb/__init__.py
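A sketch using the sync wrapper, assuming it mirrors the async signature above:

```python
from pydantic import BaseModel
import silkweb

class Page(BaseModel):
    title: str

# Each <loc> URL from the sitemap is crawled with max_depth=0 /
# max_pages=1, so this stays a flat page-by-page pass.
items = silkweb.crawl_sitemap(
    "https://example.com/sitemap.xml",
    schema=Page,
    prompt="Extract the page title",
    max_pages=100,
)
print(len(items))
```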
API Discovery¶
discover_api
¶
Fetch replay (observability)¶
silkweb.replay(session_file) reloads a recorded HTTP fetch session for debugging. It is not the same as replay_session, which replays browser actions from a saved SilkSession (see Sessions & authentication).
replay
¶
Load an HTTP fetch replay bundle (JSON *.silkweb + HTML sibling) written when
configure(replay_dir=...) is set. Returns :class:observability.replay.ReplaySession
with .html / .ask() / .extract() / .query() helpers.
This is not the same as :func:replay_session, which replays a Playwright
recording from record_session (cookies and actions under ~/.silkweb/sessions).
Source code in silkweb/__init__.py
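A sketch of the record-then-replay loop. The bundle filename is a placeholder, and the single-prompt `.ask()` call shape is an assumption:

```python
import silkweb

# 1) Record: with replay_dir set, each fetch writes a *.silkweb JSON
#    bundle plus an HTML sibling into the directory.
silkweb.configure(replay_dir="./replays")

# 2) Reload a bundle offline and re-run pipelines against the saved HTML.
session = silkweb.replay("./replays/example.silkweb")  # placeholder filename
print(session.html[:200])
rows = session.ask("product name and price")  # .extract()/.query() also exist
```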
Sessions (browser)¶
Interactive recording and headless replay use async Playwright helpers:
record_session
async
¶
Open a real (non-headless) browser and record navigations, clicks, and fills.
Persists to ~/.silkweb/sessions/<name>.silkweb (cookies, storage, actions).
This is Playwright session recording — not the same as HTTP replay_dir /
:func:silkweb.replay, which stores raw HTML + metadata for a single fetch.
Source code in silkweb/session/recorder.py
replay_session
async
¶
Replay a Playwright session by name (see :func:record_session) in headless mode.
Unlike :func:silkweb.replay, this does not load replay_dir HTML snapshots;
it uses the session JSON written by record_session.
Source code in silkweb/session/recorder.py
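A sketch of the record/replay pair. The (name, start URL) signature for record_session is an assumption; replay_session takes the session name per the docstring above:

```python
import asyncio
import silkweb

async def main():
    # Opens a real browser; interact (log in, click around), then close it.
    # Persists to ~/.silkweb/sessions/login.silkweb.
    await silkweb.record_session("login", "https://example.com/login")

    # Re-run the recorded cookies/actions headlessly, by name.
    await silkweb.replay_session("login")

asyncio.run(main())
```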
SilkSession
dataclass
¶
SilkSession(name: str, url: str | None = None, created_at: str | None = None, ua: str | None = None, cookies: list[dict[str, Any]] | None = None, localStorage: dict[str, Any] | None = None, sessionStorage: dict[str, Any] | None = None, actions: list[dict[str, Any]] | None = None, _playwright: Any | None = None, _browser: Any | None = None, _context: Any | None = None, _page: Any | None = None)
Persisted Playwright session (cookies + localStorage + sessionStorage).
Storage format (JSON):
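The serialized keys mirror the dataclass fields above: name, url, created_at, ua, the cookies and actions lists, and the localStorage / sessionStorage maps (inferred from the signature; see silkweb/session/session.py for the exact layout).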
fetch
async
¶
Navigate to a URL using a persisted session (Playwright).
tier and proxy are accepted for signature alignment with the HTTP fetch tiers but are currently unused; the browser context uses global configure() defaults (user agent, etc.). Playwright-level proxy wiring may arrive in a future release.
Source code in silkweb/session/session.py
save
async
¶
Serialize cookies + localStorage + sessionStorage to disk.
Source code in silkweb/session/session.py
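A sketch of using a session for authenticated fetches. Whether constructing by name reloads the saved file, and the single-URL fetch signature, are assumptions:

```python
import asyncio
from silkweb import SilkSession  # top-level export assumed per the __all__ note

async def main():
    session = SilkSession(name="login")  # may need explicit loading; assumption
    page = await session.fetch("https://example.com/account")
    print(page)
    await session.save()  # re-serialize cookies + localStorage + sessionStorage

asyncio.run(main())
```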
Change watching¶
watch
¶
Bundled recipes¶
The silkweb.recipes object is a RecipeRegistry loaded from built-in YAML recipes:
RecipeRegistry
¶
Pre-fetched HTML¶
When you already have HTML (no network fetch), you can run the same pipelines against a string:
ask_from_html
¶
extract_from_html
¶
Sync wrapper around async_extract_from_html (same return contract as extract).
Source code in silkweb/__init__.py
query_from_html
¶
Sync SilkQL on existing HTML. Same pipeline as :func:async_query for a single page; see :func:async_query for options.
Source code in silkweb/__init__.py
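A sketch of the sync wrapper against an HTML string you already hold; the schema and URL are illustrative:

```python
from pydantic import BaseModel
import silkweb

class Product(BaseModel):
    name: str
    price: float

html = "<html><body><h1>Widget</h1>$9.99</body></html>"  # pre-fetched HTML

# No network fetch; the URL still identifies the page (e.g. for the
# per-domain selector cache noted under async_query).
items = silkweb.extract_from_html(
    "https://example.com/products",
    html,
    schema=Product,
    prompt="Extract every product's name and price",
)
print(items)
```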
Configuration¶
get_config
¶
configure
¶
Update global Silkweb configuration.
Known fields are set on :class:SilkwebConfig; unknown keys go into extra.
When environment variable SILKWEB_STRICT_CONFIG is 1 / true / yes,
unknown top-level keys raise :class:SilkwebConfigError instead of being stored
in extra (helps catch typos like configure(timeouts=30)).
Source code in silkweb/config.py
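A sketch touching only the configure fields named elsewhere on this page; SilkwebConfig has more:

```python
import silkweb

silkweb.configure(
    extraction_model="provider/model-name",  # placeholder model string
    replay_dir="./replays",                  # enables fetch replay bundles
    force_llm=False,                         # keep the selector cache active
)
print(silkweb.get_config())

# With SILKWEB_STRICT_CONFIG=1 in the environment, a typo such as
# configure(timeouts=30) raises SilkwebConfigError instead of landing
# in SilkwebConfig.extra.
```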