Fetch a Single Page

Extract content from any URL as clean Markdown:
from tinyfish import TinyFish

client = TinyFish()
result = client.fetch.get_contents(
    urls=["https://example.com"],
    format="markdown",
)

page = result.results[0]
print(f"Title: {page.title}")
print(page.text)
Output:
{
  "results": [
    {
      "url": "https://example.com",
      "final_url": "https://example.com/",
      "title": "Example Domain",
      "language": "en",
      "text": "This domain is for use in documentation examples without needing permission...",
      "format": "markdown"
    }
  ],
  "errors": []
}
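If you are working with the raw JSON rather than the SDK's result objects, the same shape can be handled with the standard library alone. A minimal sketch, assuming the response matches the structure shown above (the field names come from the sample output):

```python
import json

# Raw response body, mirroring the sample output above.
raw = """
{
  "results": [
    {
      "url": "https://example.com",
      "final_url": "https://example.com/",
      "language": "en",
      "text": "This domain is for use in documentation examples...",
      "format": "markdown"
    }
  ],
  "errors": []
}
"""

response = json.loads(raw)

# Index each fetched page's text by the URL it was requested with.
texts = {page["url"]: page["text"] for page in response["results"]}
print(texts["https://example.com"][:40])
```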

Batch Fetch

Fetch up to 10 URLs in a single request. Failed URLs appear in errors[] without affecting the rest:
from tinyfish import TinyFish

client = TinyFish()
result = client.fetch.get_contents(
    urls=[
        "https://example.com",
        "https://httpbin.org/html",
        "https://nonexistent.invalid",
    ],
    format="markdown",
)

for page in result.results:
    print(f"OK: {page.url} → {page.text[:80]}")

for err in result.errors:
    print(f"FAIL: {err.url} → {err.error}")
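Because failures are isolated in errors[], a common follow-up is to collect the failed URLs and pass them to a second get_contents() call. A sketch of just the bookkeeping, using plain dicts to stand in for the SDK's result and error objects (the url, text, and error field names come from the loops above; the sample values are illustrative):

```python
# Simulated batch outcome, standing in for result.results / result.errors.
results = [
    {"url": "https://example.com", "text": "Example Domain..."},
    {"url": "https://httpbin.org/html", "text": "Herman Melville..."},
]
errors = [
    {"url": "https://nonexistent.invalid", "error": "DNS resolution failed"},
]

# URLs to hand to a retry call, and the set that already succeeded.
retry_urls = [err["url"] for err in errors]
succeeded = {page["url"] for page in results}

print(f"{len(succeeded)} fetched, {len(retry_urls)} to retry")
```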
Extract Links

Get all hyperlinks and image URLs from a page:
from tinyfish import TinyFish

client = TinyFish()
result = client.fetch.get_contents(
    urls=["https://www.tinyfish.ai/"],
    format="markdown",
    links=True,
    image_links=True,
)

page = result.results[0]
print(f"Found {len(page.links)} links and {len(page.image_links)} images")

for link in page.links[:5]:
    print(f"  → {link}")
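Often you only want the links that stay on the site you fetched. A sketch using urllib.parse, assuming page.links is a list of absolute URL strings; the sample list here is illustrative, not real output:

```python
from urllib.parse import urlparse

# Illustrative link list, standing in for page.links.
links = [
    "https://www.tinyfish.ai/pricing",
    "https://www.tinyfish.ai/docs",
    "https://twitter.com/tinyfish",
]

# Keep only links whose host matches the fetched page's host.
host = urlparse("https://www.tinyfish.ai/").netloc
internal = [u for u in links if urlparse(u).netloc == host]

for u in internal:
    print(f"  → {u}")
```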

Fetch as HTML

Get semantic HTML instead of Markdown — useful for preserving structure:
from tinyfish import TinyFish

client = TinyFish()
result = client.fetch.get_contents(
    urls=["https://httpbin.org/html"],
    format="html",
)

page = result.results[0]
print(page.text[:300])
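Once you have HTML back, the standard library's html.parser is enough for light post-processing. A sketch that pulls heading text out of a fetched page, assuming page.text holds the returned HTML; the sample markup below is illustrative:

```python
from html.parser import HTMLParser

class HeadingCollector(HTMLParser):
    """Collect the text found inside h1-h6 tags."""

    def __init__(self):
        super().__init__()
        self.headings = []
        self._in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self._in_heading = True

    def handle_endtag(self, tag):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self._in_heading = False

    def handle_data(self, data):
        if self._in_heading:
            self.headings.append(data.strip())

# Illustrative HTML, standing in for page.text.
html = "<html><body><h1>Herman Melville - Moby-Dick</h1><p>...</p></body></html>"

parser = HeadingCollector()
parser.feed(html)
print(parser.headings)
```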

Fetch Reference

Full parameter, response, and content type docs

Search Examples

Web search with geo-targeting