The Raccoon AI Python library provides convenient access to the Raccoon AI REST API from any Python 3.8+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx.

Installation

pip install --pre raccoonai

Usage

import os
from raccoonai import RaccoonAI

client = RaccoonAI(
    secret_key=os.environ.get("RACCOON_SECRET_KEY"),
)

response = client.lam.run(
    query="Find the price of iphone 16 on Amazon.",
    raccoon_passcode="<end-user-raccoon-passcode>",
)
print(response.message)

While you can provide a secret_key keyword argument, we recommend using python-dotenv to add RACCOON_SECRET_KEY="My Secret Key" to your .env file so that your secret key is not checked into source control.
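To illustrate what loading a .env file does, here is a rough stdlib-only sketch of the behavior python-dotenv provides (load_env_file is a hypothetical helper written for this example, not part of either library):

```python
import os
import tempfile

def load_env_file(path):
    # Rough stdlib-only stand-in for python-dotenv's load_dotenv():
    # read KEY=VALUE lines and export them into os.environ without
    # overwriting variables that are already set.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

# Write a throwaway .env file so the sketch is self-contained.
with tempfile.TemporaryDirectory() as d:
    env_path = os.path.join(d, ".env")
    with open(env_path, "w") as f:
        f.write('RACCOON_SECRET_KEY="My Secret Key"\n')
    load_env_file(env_path)

print(os.environ["RACCOON_SECRET_KEY"])  # My Secret Key
```

In practice you would call python-dotenv's load_dotenv() once at startup, before constructing the client.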

Async usage

Simply import AsyncRaccoonAI instead of RaccoonAI and use await with each API call:

import os
import asyncio
from raccoonai import AsyncRaccoonAI

client = AsyncRaccoonAI(
    secret_key=os.environ.get("RACCOON_SECRET_KEY"),
)


async def main() -> None:
    response = await client.lam.run(
        query="Find the price of iphone 16 on Amazon.",
        raccoon_passcode="<end-user-raccoon-passcode>",
    )
    print(response.message)


asyncio.run(main())

Functionality between the synchronous and asynchronous clients is otherwise identical.
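Because each async call is awaitable, the async client also makes it easy to issue several requests concurrently with asyncio.gather. A minimal sketch, with a placeholder coroutine standing in for await client.lam.run(...) (fake_run is hypothetical, used here only so the example is self-contained):

```python
import asyncio

# Stand-in for `await client.lam.run(...)`: simulates a network
# round trip so the two "requests" can overlap in time.
async def fake_run(query):
    await asyncio.sleep(0.05)
    return f"answer for {query!r}"

async def main():
    # Both calls are in flight at the same time.
    return await asyncio.gather(
        fake_run("first query"),
        fake_run("second query"),
    )

results = asyncio.run(main())
print(results)
```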

Streaming responses

from raccoonai import RaccoonAI

client = RaccoonAI()

stream = client.lam.run(
    query="Find the price of iphone 16 on Amazon.",
    raccoon_passcode="<end-user-raccoon-passcode>",
    stream=True,
)
for response in stream:
    print(response.message)

The async client uses the exact same interface.

from raccoonai import AsyncRaccoonAI

client = AsyncRaccoonAI()

stream = await client.lam.run(
    query="Find the price of iphone 16 on Amazon.",
    raccoon_passcode="<end-user-raccoon-passcode>",
    stream=True,
)
async for response in stream:
    print(response.message)

Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the max_retries option to configure or disable retry settings:

from raccoonai import RaccoonAI

# Configure the default for all requests:
client = RaccoonAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).lam.run(
    query="Find the price of iphone 16 on Amazon.",
    raccoon_passcode="<end-user-raccoon-passcode>",
)
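The schedule behind that policy can be pictured with a small sketch; with_retries and flaky below are hypothetical illustrations of the pattern, not the library's actual retry machinery:

```python
import random
import time

def with_retries(fn, max_retries=2, base_delay=0.05):
    # Call fn, retrying failures with a short exponential backoff
    # plus jitter. Illustrative only -- not the SDK's internals.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))

calls = {"count": 0}

def flaky():
    # Fails twice with a transient error, then succeeds.
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("temporary network problem")
    return "ok"

result = with_retries(flaky)
print(result, calls["count"])  # ok 3
```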

Timeouts

By default, requests time out after 1 minute. You can configure this with a timeout option, which accepts a float or an httpx.Timeout object:

from raccoonai import RaccoonAI

# Configure the default for all requests:
client = RaccoonAI(
    # 300 seconds (default is 1 minute)
    timeout=300.0,
)

# Override per-request:
client.with_options(timeout=300.0).lam.run(
    query="Find the price of iphone 16 on Amazon.",
    raccoon_passcode="<end-user-raccoon-passcode>",
)

On timeout, an APITimeoutError is raised.

Note that requests that time out are retried twice by default.
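Since the timeout option also accepts an httpx.Timeout, you can set phase-specific limits rather than one overall value. The numbers below are illustrative, not recommendations:

```python
import httpx
from raccoonai import RaccoonAI

# Allow 5 seconds to establish the connection, but up to ten
# minutes overall for long-running tasks.
client = RaccoonAI(
    timeout=httpx.Timeout(600.0, connect=5.0),
)
```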

Advanced

Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is garbage collected. If desired, you can close the client manually with the .close() method, or use it as a context manager so that connections are closed when the with block exits.

from raccoonai import RaccoonAI

with RaccoonAI() as client:
    # make requests here
    ...

# HTTP client is now closed
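What the with block guarantees is that close() runs even if an exception escapes the body. A generic sketch of that behavior (ManagedClient is a toy class written for this example, not the SDK client):

```python
class ManagedClient:
    # Toy resource demonstrating the context-manager pattern:
    # __exit__ runs on any exit from the with block.
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # don't swallow the exception

client = ManagedClient()
try:
    with client:
        raise RuntimeError("request failed mid-flight")
except RuntimeError:
    pass

print(client.closed)  # True
```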

Determining the installed version

If you’ve upgraded to the latest version but aren’t seeing the new features you expected, your Python environment is likely still using an older version.

You can determine the version that is being used at runtime with:

import raccoonai
print(raccoonai.__version__)
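If you need to branch on the runtime version, compare parsed tuples rather than raw strings (string comparison would sort "1.10.0" before "1.9.2"). parse_version here is a hypothetical helper for simple cases:

```python
def parse_version(v):
    # Naive "major.minor.patch" parser for simple comparisons; real
    # code should prefer packaging.version.Version, which also
    # understands pre-release and dev suffixes.
    return tuple(int(part) for part in v.split(".")[:3])

print(parse_version("1.10.0") > parse_version("1.9.2"))  # True
```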

Requirements

Python 3.8 or higher.
