llama-index readers apify integration

Apify Loaders

Apify Actor Loader

Apify is a cloud platform for web scraping and data extraction, which provides an ecosystem of more than a thousand ready-made apps called Actors for various scraping, crawling, and extraction use cases.

This loader runs a specific Actor and loads its results.

Usage

In this example, we’ll use the Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, and extract text content from the web pages. The extracted text can then be fed to a vector index or a language model such as GPT to answer questions about the crawled content.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.

from llama_index.core import Document
from llama_index.readers.apify import ApifyActor


# Converts a single record from the Actor's resulting dataset to the LlamaIndex format
def transform_dataset_item(item):
    return Document(
        text=item.get("text"),
        extra_info={
            "url": item.get("url"),
        },
    )


reader = ApifyActor("<My Apify API token>")
documents = reader.load_data(
    actor_id="apify/website-content-crawler",
    run_input={
        "startUrls": [{"url": "https://gpt-index.readthedocs.io/en/latest"}]
    },
    dataset_mapping_function=transform_dataset_item,
)
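
The resulting documents can then be indexed and queried directly. Below is a minimal sketch assuming llama-index-core's default OpenAI-backed VectorStoreIndex (an OpenAI API key must be configured); the question string is only illustrative.

from llama_index.core import VectorStoreIndex

# Build an in-memory vector index over the crawled pages and ask a question about them
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What does LlamaIndex do?")
print(response)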

This loader is designed to load data into LlamaIndex and can also be used to power a Tool in a LangChain agent; one possible setup is sketched below.
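
One possible wiring for the LangChain case (a sketch, not part of this package) is to build a LlamaIndex query engine from the loaded documents and expose it as a LangChain Tool. The tool name, its description, and the use of LangChain's classic initialize_agent API are assumptions made for illustration.

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from llama_index.core import VectorStoreIndex

# Build a query engine over the crawled documents (as in the previous example)
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Wrap the query engine as a tool the agent can call (hypothetical name and description)
tools = [
    Tool(
        name="website_content_qa",
        func=lambda q: str(query_engine.query(q)),
        description="Answers questions about the crawled website content.",
    )
]

agent = initialize_agent(
    tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
print(agent.run("Summarize the documentation that was crawled."))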

Apify Dataset Loader

Apify is a cloud platform for web scraping and data extraction, which provides an ecosystem of more than a thousand ready-made apps called Actors for various scraping, crawling, and extraction use cases.

This loader loads documents from an existing Apify dataset.

Usage

In this example, we’ll load a dataset generated by the Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, and extract text content from the web pages. The extracted text can then be fed to a vector index or a language model such as GPT to answer questions about the crawled content.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.

from llama_index.core import Document
from llama_index.readers.apify import ApifyDataset


# Converts a single record from the Apify dataset to the LlamaIndex format
def transform_dataset_item(item):
    return Document(
        text=item.get("text"),
        extra_info={
            "url": item.get("url"),
        },
    )


reader = ApifyDataset("<Your Apify API token>")
documents = reader.load_data(
    dataset_id="<Apify Dataset ID>",
    dataset_mapping_function=transform_dataset_item,
)
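
As with the Actor loader, the documents loaded from the dataset can be indexed. The sketch below additionally persists the index to disk so the dataset does not have to be re-fetched on every run; the ./storage directory and the default storage context are assumptions, not requirements of this loader.

from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage

# Index the dataset items once and persist the index locally
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./storage")

# Later, reload the persisted index instead of re-reading the Apify dataset
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
print(index.as_query_engine().query("What topics does the crawled site cover?"))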
