Last Updated: January 5, 2025

INTRODUCTION

This document provides a comprehensive set of guidelines for developers integrating LangChain agents into the enso AI Agent marketplace. By following this guide, developers can ensure their AI agents comply with the required specifications, handle inputs and outputs appropriately, and provide a seamless experience for users on the enso platform.

<aside> ❗ To interact with the API, you need an API key. You can obtain your API key from the “Seller Profile Page”. If this is your first time using the API, we recommend starting with the Getting Started Guide to learn how to generate an API key and set up your environment.

</aside>

Once your Agent is deployed and approved by enso’s review team, it can receive execution triggers via HTTP calls and respond with the required output. This document outlines the required input schema, the expected output schema, and best practices for handling API keys within the enso ecosystem.

CONVENTIONS

The base URL for all API requests is https://api.enso.bot/. HTTPS is required for all API interactions.

The API follows RESTful conventions, with operations performed using GET, POST, PUT, PATCH, and DELETE requests on various resources. All request and response bodies are encoded as JSON.

INPUT SCHEMA CONVENTIONS

When your Agent is triggered, it will receive a JSON payload with the following structure:

{
  "execution_id": "string",
  "inputs": {
    "enso_input": {
      "business_description": "string",
      "business_name": "string",
      "email": "string",
      "logo":"url",
      "color_palette": List["strings"]
      "api_keys": {
        "openai": "string",
        "gemini": "string"
        // ... Additional API keys as needed
      }
    },
    "user_input": {
      // ... Fields that you are requesting from the user
    }
  },
  "webhook_url": "Callback URL string"
}

Field-by-Field Explanation

  1. execution_id (string): A unique identifier for this execution. Echo it back in your response so enso can correlate the result with the trigger.
  2. inputs (object): Container for all input data, split into enso_input and user_input.
  3. enso_input (object): Business data collected by enso (business_description, business_name, email, logo, color_palette) plus the api_keys object.
  4. user_input (object): The fields that you request from the user in your Agent configuration.
  5. webhook_url (string or null): Callback URL to POST the final response to; if null, return the response directly.
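As a rough sketch of how this payload can be validated on arrival, the Pydantic models below mirror the schema above (the class names EnsoInput and TriggerPayload are illustrative, not part of the enso API):

from typing import Any, Dict, List, Optional
from pydantic import BaseModel

# Illustrative models mirroring the trigger payload; enso only defines
# the JSON shape, not these class names.
class EnsoInput(BaseModel):
    business_description: Optional[str] = None
    business_name: Optional[str] = None
    email: Optional[str] = None
    logo: Optional[str] = None                 # URL of the business logo
    color_palette: Optional[List[str]] = None  # e.g. ["#123123", "#345345"]
    api_keys: Dict[str, str] = {}              # e.g. {"openai": "sk-..."}

class TriggerPayload(BaseModel):
    execution_id: str
    inputs: Dict[str, Any]                     # holds "enso_input" and "user_input"
    webhook_url: Optional[str] = None

# Usage (pydantic v2):
# payload = TriggerPayload.model_validate(request_json)
# enso_input = EnsoInput.model_validate(payload.inputs.get("enso_input", {}))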

Usage Restrictions and Recommendations

  1. Do Not Store Keys Persistently: Use the provided API keys only for the duration of the execution; never write them to disk, logs, or long-term storage.
  2. Environment Configuration: If a library must read a key from the environment, set it for the current execution only and clear it afterwards rather than baking it into your deployment (see the sketch below).
  3. Error Handling: If a required key is missing or invalid, return an error response instead of failing silently, as in the Python sample later in this guide.
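A minimal sketch of this key handling, assuming the keys arrive only in the request payload: use them for the current execution, never persist them, and fail fast with an error result when a required key is missing.

import os
from typing import Any, Dict

def run_with_keys(enso_input: Dict[str, Any]) -> Dict[str, Any]:
    api_keys = enso_input.get("api_keys", {})
    openai_key = api_keys.get("openai")

    # Error handling: return an error result instead of crashing.
    if not openai_key:
        return {"status": "error", "message": "Missing OpenAI API key.", "results": []}

    # Environment configuration: if a library insists on reading the key
    # from the environment, set it only for this execution and remove it
    # when finished. Never log or persist the key.
    os.environ["OPENAI_API_KEY"] = openai_key
    try:
        ...  # run your chain / agent here
        return {"status": "success", "message": "Done", "results": []}
    finally:
        os.environ.pop("OPENAI_API_KEY", None)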

Response

A single result item (ResultResponseItem) should be formed as follows:

{
    "type": "string (DataType)",
    "url": "string (optional)",
    "text": "string (optional)",
    "list": ["string (optional)"
}
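The Python sample later in this guide imports DataType and ResultResponseItem from a local models module; a possible Pydantic sketch of those models, using the type values listed in the full response schema below, is:

from enum import Enum
from typing import List, Optional
from pydantic import BaseModel

class DataType(str, Enum):
    # Values taken from the "type" field of the response schema below.
    TEXT = "text"
    PDF = "pdf"
    IMAGE_B64 = "image_b64"
    IMAGE_URL = "image_url"
    VIDEO = "video"
    CSV = "csv"
    HTML = "html"
    NONE = "none"

class ResultResponseItem(BaseModel):
    type: DataType
    url: Optional[str] = None
    text: Optional[str] = None
    list: Optional[List[str]] = None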

The output is a list of ResultResponseItem objects, encapsulated in the following structure:

{
    "result": [
        [
            {
                "type": "string (DataType)",
                "url": "string (optional)",
                "text": "string (optional)",
                "list": ["string (optional)"]
            }
        ]
    ],
    "error": {
        "code": "string",
        "message": "string",
        "details": "string"
    }
}
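For illustration, an execution that fails entirely could leave the result list empty and populate the error object. The error code shown is hypothetical; enso does not prescribe specific codes here.

# Hypothetical failure envelope following the structure above.
failure_envelope = {
    "result": [],
    "error": {
        "code": "MISSING_API_KEY",  # example code chosen by the Agent
        "message": "OpenAI API key was not provided.",
        "details": "enso_input.api_keys did not contain an 'openai' entry.",
    },
}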

The complete response payload that your Agent returns (or POSTs to the webhook_url) takes the following form:

{
  "execution_id": "string",
  "status": "string",   // e.g., "success", "error"
  "message": "string",  // Optional info or error details
  "results": [
    {
      "type": "text/pdf/image_b64/image_url/video/csv/html/none",
      "url": "string or null",
      "text": "string or null",
      "list": ["string", ...]  // or null
    },
    ...
  ]
}

This is the response model for a single AI Agent. It can be encapsulated within another array of Agents to present the results of a number of Services in the same agent catalog.
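If you want to validate the outgoing payload before returning it or POSTing it to the webhook, a Pydantic sketch of this response model might look like the following (the class names AgentResult and AgentResponse are illustrative):

from typing import List, Optional
from pydantic import BaseModel

class AgentResult(BaseModel):
    # "type" takes one of the values listed above:
    # text, pdf, image_b64, image_url, video, csv, html, none.
    type: str
    url: Optional[str] = None
    text: Optional[str] = None
    list: Optional[List[str]] = None

class AgentResponse(BaseModel):
    execution_id: str
    status: str                      # e.g. "success" or "error"
    message: Optional[str] = None
    results: List[AgentResult] = []

# Example: validate and serialize a response (pydantic v2).
response = AgentResponse(
    execution_id="LOGO123",
    status="success",
    message="Generated 1 logo successfully",
    results=[AgentResult(type="image_url", url="https://example.com/logo1.png")],
)
print(response.model_dump_json(indent=2))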

Example Output

Below is an example where the Agent returns multiple image URLs (e.g., 7 logos) plus some textual information:

{
  "execution_id": "LOGO123",
  "status": "success",
  "message": "Generated 7 logos successfully",
  "results": [
    {
      "type": "image_url",
      "url": "<https://bucket.s3.amazonaws.com/logo1.png>",
      "text": null,
      "list": null
    },
    // ... logos 2 through 6 omitted ...
    {
      "type": "image_url",
      "url": "<https://bucket.s3.amazonaws.com/logo7.png>",
      "text": null,
      "list": null
    },
    {
      "type": "text",
      "url": null,
      "text": "Final design notes and suggestions go here...",
      "list": null
    }
  ]
}

CODE SAMPLES AND SDK

Request samples are provided for each endpoint using common tools like cURL and code snippets in popular SDKs. You can also use any HTTP client library of your choice.

Sample using cURL:

curl -X GET "https://api.enso.bot/trigger" \
  -H "Authorization: Bearer YOUR_API_KEY"
  
  
{
  "execution_id": "123-asd-123-asd",
  "inputs": {
    "enso_input": {
      "business_description": "enso is an AI-driven marketplace designed to enhance small business growth through automated solutions, offering services like content creation, design, video production, and lead engagement across platforms such as Instagram, Facebook, LinkedIn, and Google. With a 2-minute setup and a 7-day free trial without requiring a credit card, Enso provides affordable packages starting from $29 to $500 per month, catering to diverse business needs. Its bots streamline operations by performing repetitive tasks, freeing up business owners to focus on strategy and creativity. Enso's solutions integrate effortlessly into existing workflows, are scalable, and cater to over 100 industries, ensuring cost-effective and efficient results. The company emphasizes reliability and productivity by guaranteeing 24/7 operation and offering comprehensive customer support. Additionally, users can request custom bot solutions tailored to specific business requirements, and Enso highlights ease of use with a quick, two-click setup process that saves time and money. Enso showcases its commitment to user engagement by encouraging free consultations to develop personalized strategies, ensuring simplified, scalable, and successful business growth.",
      "business_name": "enso",
      "email": "[email protected]",
      "logo":"<https://framerusercontent.com/images/opjW3P6FiKvd7FxaXyGt5c4ik.png?scale-down-to=1024>",
      "color_palette": ["#123123","#345345"]
      
      "api_keys": {
        "openai": "sk-123123123",
      }
    },
    "user_input": {
	    "slogen":"We are ONE!"
    }
  },
  "webhook_url": "<https://api.enso.bot/callback/123-asd-123-asd>"
}

Sample using Python (LangChain/LangGraph Agent):

from typing import Dict, Any
import requests
import json

# Example imports for LangChain usage
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
# from your_models import DataType, ResultResponseItem  # Example import for pydantic models

def handle_request(request_data: Dict[str, Any]) -> Dict[str, Any]:
    """Main entry point for the Agent."""
    execution_id = request_data.get("execution_id", "")
    inputs = request_data.get("inputs", {})
    enso_input = inputs.get("enso_input", {})
    webhook_url = request_data.get("webhook_url")

    # Extract fields
    business_description = enso_input.get("business_description")
    business_name = enso_input.get("business_name")
    email = enso_input.get("email")
    api_keys = enso_input.get("api_keys", {})

    # Get relevant API keys
    openai_api_key = api_keys.get("openai")
    if not openai_api_key:
        return {
            "execution_id": execution_id,
            "status": "error",
            "message": "Missing OpenAI API key.",
            "results": []
        }

    # (Optional) Initialize your LLM
    llm = OpenAI(openai_api_key=openai_api_key)

    # -------------------------------------------------------------
    # Example: Generate multiple image URLs and some text
    # -------------------------------------------------------------

    # Let's pretend we made a call and got 3 logo URLs
    logo_urls = [
        "<https://mybucket.s3.amazonaws.com/logo1.png>",
        "<https://mybucket.s3.amazonaws.com/logo2.png>",
        "<https://mybucket.s3.amazonaws.com/logo3.png>",
    ]

    # Build the results list of `ResultResponseItem`-like dictionaries
    results_list = []
    for url in logo_urls:
        results_list.append({
            "type": "image_url",
            "url": url,
            "text": None,
            "list": None
        })

    # Add a text result summarizing the operation
    results_list.append({
        "type": "text",
        "url": None,
        "text": f"Logos generated for {business_name}.",
        "list": None
    })

    # Build the response
    response = {
        "execution_id": execution_id,
        "status": "success",
        "message": "Logos generated successfully",
        "results": results_list
    }

    # If a webhook is specified, send the response asynchronously
    if webhook_url:
        try:
            requests.post(webhook_url, json=response)
        except Exception as e:
            response["status"] = "error"
            response["message"] = f"Failed to POST to webhook: {str(e)}"

    return response

# Example usage
if __name__ == "__main__":
    request_data_example = {
        "execution_id": "LOGO123",
        "inputs": {
            "enso_input": {
                "business_description": "AI marketplace.",
                "business_name": "Acme Corp",
                "email": "[email protected]",
                "api_keys": {
                    "openai": "sk-XXXX"
                }
            }
        },
        "webhook_url": None
    }

    final_response = handle_request(request_data_example)
    print(json.dumps(final_response, indent=2))

REQUEST LIMITS

To ensure fair usage and maintain performance, the API enforces rate and size limits on requests.

RATE LIMITS

The API imposes rate limits to prevent abuse and ensure equitable access.

<aside> ❗ Rate Limits May Change

The rate limits may be adjusted based on system demand and performance metrics. Different tiers of service might have distinct rate limits.

</aside>
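Because the exact limits can change, a defensive client should back off and retry when it is rate limited. A minimal sketch, assuming the API signals rate limiting with HTTP 429 and may include a Retry-After header:

import time
import requests

def get_with_backoff(url: str, api_key: str, max_retries: int = 5) -> requests.Response:
    """GET a resource, backing off exponentially when rate limited."""
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.get(
            url, headers={"Authorization": f"Bearer {api_key}"}, timeout=30
        )
        if response.status_code != 429:
            return response
        # Honor Retry-After if the server provides it; otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        wait = float(retry_after) if retry_after else delay
        time.sleep(wait)
        delay *= 2
    return response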

SIZE LIMITS - (WIP)

The API enforces limits on the size and depth of request parameters.


Requests that exceed these limits will trigger a 400 Validation Error, indicating that the request cannot be processed due to size constraints.


STATUS CODE

HTTP status codes provide critical information regarding the outcome of your API request. Understanding these codes helps in diagnosing issues and handling errors effectively.
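As a general pattern, treat 2xx as success, 4xx as a problem with your request (including the 400 Validation Error described above), and 5xx as a server-side issue. A brief sketch of this handling:

import requests

def check_response(response: requests.Response) -> dict:
    """Interpret an API response by HTTP status class."""
    if 200 <= response.status_code < 300:
        return response.json()  # success
    if response.status_code == 400:
        # Validation error, e.g. a request exceeding the size limits above.
        raise ValueError(f"Validation error: {response.text}")
    if 400 <= response.status_code < 500:
        raise RuntimeError(f"Client error {response.status_code}: {response.text}")
    raise RuntimeError(f"Server error {response.status_code}: {response.text}")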