This library provides convenient access to the Raccoon AI REST API from server-side TypeScript or JavaScript.

Installation

npm install raccoonai

Usage

import RaccoonAI from 'raccoonai';

const client = new RaccoonAI({
  secretKey: process.env['RACCOON_SECRET_KEY']
});

async function main() {
  const response = await client.lam.run({
    query: 'Find the price of iphone 16 on Amazon.',
    raccoon_passcode: '<end-user-raccoon-passcode>'
  });

  console.log(response.message);
}

main();

While you can provide a secretKey keyword argument, we recommend using environment variables: add RACCOON_SECRET_KEY="My Secret Key" to your .env file so that your Secret Key is not stored in source control.
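For example, a minimal sketch that loads the .env file with the third-party dotenv package (an assumption; any mechanism that populates process.env works):

// Populate process.env from .env before constructing the client (requires `npm install dotenv`).
import 'dotenv/config';
import RaccoonAI from 'raccoonai';

const client = new RaccoonAI({
  secretKey: process.env['RACCOON_SECRET_KEY'],
});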

Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

import RaccoonAI from 'raccoonai';

const client = new RaccoonAI();

const stream = await client.lam.run({
  query: 'Find the price of iphone 16 on Amazon.',
  raccoon_passcode: '<end-user-raccoon-passcode>',
  stream: true,
});
for await (const lamRunResponse of stream) {
  console.log(lamRunResponse.message);
}

If you need to cancel a stream, you can break from the loop or call stream.controller.abort().
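As a sketch, you could abort a run that is still streaming after 10 seconds (the cutoff is arbitrary; aborting typically ends the loop with an error you may want to catch):

const stream = await client.lam.run({
  query: 'Find the price of iphone 16 on Amazon.',
  raccoon_passcode: '<end-user-raccoon-passcode>',
  stream: true,
});

// Abort the underlying request if it has not finished within 10 seconds.
const timer = setTimeout(() => stream.controller.abort(), 10_000);

for await (const lamRunResponse of stream) {
  console.log(lamRunResponse.message);
}

clearTimeout(timer);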

Retries

Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.

You can use the maxRetries option to configure or disable this:

// Configure the default for all requests:
const client = new RaccoonAI({
  maxRetries: 0 // default is 2
});

// Or, configure per-request:
await client.lam.run({ query: 'Find the price of iphone 16 on Amazon.', raccoon_passcode: '<end-user-raccoon-passcode>' }, {
  maxRetries: 5
});

Timeouts

Requests time out after 10 minutes by default. You can configure this with a timeout option:

// Configure the default for all requests:
const client = new RaccoonAI({
  timeout: 20 * 1000, // 20 seconds (default is 10 minutes)
});

// Override per-request:
await client.lam.run({ query: 'Find the price of iphone 16 on Amazon.', raccoon_passcode: '<end-user-raccoon-passcode>' }, {
  timeout: 5 * 1000,
});

On timeout, an APIConnectionTimeoutError is thrown.

Note that requests which time out will be retried twice by default.
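If you want to handle a timeout explicitly, you can catch the error. The sketch below assumes APIConnectionTimeoutError is exported from the package root, which is a common convention for SDKs of this style but not confirmed here:

import RaccoonAI, { APIConnectionTimeoutError } from 'raccoonai';

const client = new RaccoonAI({ timeout: 5 * 1000 });

try {
  const response = await client.lam.run({
    query: 'Find the price of iphone 16 on Amazon.',
    raccoon_passcode: '<end-user-raccoon-passcode>',
  });
  console.log(response.message);
} catch (err) {
  if (err instanceof APIConnectionTimeoutError) {
    // Thrown once the request (including its automatic retries) keeps timing out.
    console.error('The request timed out.');
  } else {
    throw err;
  }
}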

Advanced

Customizing the fetch client

By default, this library uses node-fetch in Node, and expects a global fetch function in other environments.

If you would prefer to use a global, web-standards-compliant fetch function even in a Node environment (for example, if you are running Node with --experimental-fetch or using NextJS, which polyfills with undici), add the following import before your first import from "raccoonai":

// Tell TypeScript and the package to use the global web fetch instead of node-fetch.
// Note, despite the name, this does not add any polyfills, but expects them to be provided if needed.
import 'raccoonai/shims/web';
import RaccoonAI from 'raccoonai';

To do the inverse, add import "raccoonai/shims/node" (which does import polyfills). This can also be useful if you are getting the wrong TypeScript types for Response.
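For example:

// Tell TypeScript and the package to use node-fetch, importing the needed polyfills.
import 'raccoonai/shims/node';
import RaccoonAI from 'raccoonai';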

Logging and middleware

You may also provide a custom fetch function when instantiating the client, which can be used to inspect or alter the Request or Response before/after each request:

import { fetch } from 'undici'; // as one example
import RaccoonAI from 'raccoonai';

const client = new RaccoonAI({
  fetch: async (url: RequestInfo, init?: RequestInit): Promise<Response> => {
    console.log('About to make a request', url, init);
    const response = await fetch(url, init);
    console.log('Got response', response);
    return response;
  },
});

Note that if the DEBUG=true environment variable is set, this library will log all requests and responses automatically. This is intended for debugging purposes only and may change in the future without notice.

Requirements

TypeScript >= 4.5 is supported.

The following runtimes are supported:

  • Web browsers (Up-to-date Chrome, Firefox, Safari, Edge, and more)
  • Node.js 18 LTS or later (non-EOL) versions.
  • Deno v1.28.0 or higher.
  • Bun 1.0 or later.
  • Cloudflare Workers.
  • Vercel Edge Runtime.
  • Jest 28 or greater with the "node" environment ("jsdom" is not supported at this time).
  • Nitro v2.6 or greater.

Note that React Native is not supported at this time.
