By Revanth Reddy Tondapu

How to Deploy and Integrate LangGraph Cloud for AI-Powered Applications


Integrate LangGraph Cloud for AI-Powered Applications

In this blog post, we'll explore LangGraph Cloud, a powerful platform for deploying AI agents and monitoring their performance. LangGraph Cloud lets you expose your AI applications as API endpoints, enabling seamless interaction between agents and tools. We'll walk through the steps to create, test, deploy, and integrate your LangGraph application.


What is LangGraph Cloud?

LangGraph Cloud is a cloud-based platform that offers:

  • Visual Monitoring: Track the performance and interactions of your AI agents.

  • API Endpoints: Deploy your AI applications as API endpoints for easy integration.

  • Agent Management: Create and manage AI agents that utilize various tools and models.

Let's dive into the steps to create a LangGraph application, test it locally, deploy it to LangGraph Cloud, and integrate it into your own application.


Step 1: Create a LangGraph Application

First, let's set up the folder structure for your LangGraph application. You will need the following files:

  • agent.py: Define your AI agents.

  • requirements.txt: List all dependencies.

  • langgraph.json: Configuration file for LangGraph Cloud.

  • .env: Environment variables for API keys.


Folder Structure

langgraph_app/
│
├── agent.py
├── requirements.txt
├── langgraph.json
└── .env

requirements.txt

langgraph
langchain_anthropic
tavily-python
langchain_community
langgraph-cli

.env

TAVILY_API_KEY=your_tavily_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
LANGCHAIN_API_KEY=your_langchain_api_key_here

agent.py

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.prebuilt import create_react_agent

# Initialize the Claude 3.5 Sonnet model from Anthropic
model = ChatAnthropic(model="claude-3-5-sonnet-20240620")

# Define the tools to be used by the agent
tools = [TavilySearchResults(max_results=2)]

# Create the agent graph
graph = create_react_agent(model, tools)

langgraph.json

{
    "dependencies": ["."],
    "graphs": {
        "agent": "./agent.py:graph"
    },
    "env": ".env"
}
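
Each entry under "graphs" is a reference in path:variable form: the file that defines the graph, then the exported variable name after the colon. The helper below is a small illustrative sketch of how that reference decomposes; it is not part of LangGraph itself.

```python
import json

# langgraph.json maps a graph name to "path/to/file.py:variable_name".
config = json.loads('{"graphs": {"agent": "./agent.py:graph"}}')

def split_graph_ref(ref):
    # Split an entry-point style reference into a module path and variable name.
    module_path, variable = ref.rsplit(":", 1)
    return module_path, variable

module_path, variable = split_graph_ref(config["graphs"]["agent"])
print(module_path)  # ./agent.py
print(variable)     # graph
```

So "./agent.py:graph" tells the platform to import agent.py and serve the `graph` object it exports.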

Explanation

  • requirements.txt: Lists the packages required for the application.

  • .env: Stores API keys needed for various services.

  • agent.py: Defines the AI agent using the Claude 3.5 Sonnet model and the Tavily Search tool.

  • langgraph.json: Configuration file that specifies dependencies and the agent graph.


Step 2: Test Locally

To test the application locally, we use the LangGraph CLI. First, install the dependencies:

pip install -r requirements.txt

Then, test the application locally:

langgraph test

This command will build the application and provide a local URL. You can test the endpoint using a tool like curl:

curl --request POST \
    --url http://localhost:8123/runs/stream \
    --header 'Content-Type: application/json' \
    --data '{
    "assistant_id": "agent",
    "input": {
        "messages": [
            {
                "role": "user",
                "content": "Give me latest AI News"
            }
        ]
    },
    "metadata": {},
    "config": {
        "configurable": {}
    },
    "multitask_strategy": "reject",
    "stream_mode": [
        "values"
    ]
}'
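
If you prefer Python over curl, the same request can be sketched with the requests library. This assumes the local server from the step above is running on port 8123; the helper names here are illustrative, not part of any SDK.

```python
import requests  # third-party HTTP client: pip install requests

def build_run_payload(user_text, assistant_id="agent"):
    # Same JSON body as the curl example above.
    return {
        "assistant_id": assistant_id,
        "input": {"messages": [{"role": "user", "content": user_text}]},
        "metadata": {},
        "config": {"configurable": {}},
        "multitask_strategy": "reject",
        "stream_mode": ["values"],
    }

def stream_run(base_url, user_text):
    # POST to /runs/stream and print each server-sent-event line as it arrives.
    response = requests.post(
        f"{base_url}/runs/stream",
        headers={"Content-Type": "application/json"},
        json=build_run_payload(user_text),
        stream=True,
    )
    response.raise_for_status()
    for line in response.iter_lines():
        if line:
            print(line.decode("utf-8"))

# Usage (with the local server running):
# stream_run("http://localhost:8123", "Give me latest AI News")
```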

Step 3: Deploy to LangGraph Cloud

To deploy your application to LangGraph Cloud, follow these steps:

  1. Push your code to a GitHub repository.

  2. Go to the LangGraph Cloud deployment page.

  3. Link your GitHub repository.

  4. Configure the deployment settings.

  5. Deploy the application.

Once deployed, you will get a URL for your API endpoint. You can use this URL to integrate with your own applications.


Step 4: Integrate with Your Application

To integrate the deployed API with your application, you can use a framework like Chainlit for creating a user interface.

Install Chainlit

pip install chainlit

Create a main.py for Chainlit Integration

import os
import chainlit as cl
from dotenv import load_dotenv
import requests
import json

load_dotenv()

# Load API keys from .env file
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
TAVILY_API_KEY = os.getenv("TAVILY_API_KEY")
LANGCHAIN_API_KEY = os.getenv("LANGCHAIN_API_KEY")

def parse_sse(line):
    if line.startswith("data:"):
        try:
            return json.loads(line[5:].strip())
        except json.JSONDecodeError:
            return None
    return None

def extract_content(message):
    if isinstance(message.get("content"), list):
        for content_item in message["content"]:
            if content_item.get("type") == "text":
                yield content_item.get("text", "")
            elif content_item.get("type") == "tool_use":
                yield f"\n**Tool Use:** {content_item.get('name', 'Unknown Tool')}\n"
                yield f"```json\n{json.dumps(content_item.get('partial_json', {}), indent=2)}\n```\n"
    elif isinstance(message.get("content"), str):
        yield message["content"]

@cl.on_message
async def main(message: cl.Message):
    response = requests.post(
        "http://your_deployed_url_here/runs/stream",
        headers={
            "Content-Type": "application/json",
            # LangGraph Cloud authenticates with your LangSmith key, not the model key
            "X-Api-Key": LANGCHAIN_API_KEY,
        },
        json={
            "assistant_id": "agent",
            "input": {
                "messages": [{"role": "user", "content": message.content}]
            },
            "metadata": {},
            "config": {"configurable": {}},
            "multitask_strategy": "reject",
            "stream_mode": ["values"],
        },
        stream=True
    )

    try:
        response.raise_for_status()

        msg = cl.Message(content="")
        await msg.send()

        for line in response.iter_lines():
            if line:
                line = line.decode('utf-8')
                if line.startswith("event:"):
                    event_type = line.split(":", 1)[1].strip()
                    await msg.stream_token(f"\n\n**Event:** {event_type}\n\n")
                elif line.startswith("data:"):
                    data = parse_sse(line)
                    if data and "messages" in data:
                        # Distinct name so we don't shadow the Chainlit `message` parameter
                        for item in data["messages"]:
                            if item["type"] == "human":
                                await msg.stream_token(f"\n**Human:** {item['content']}\n\n")
                            elif item["type"] == "ai":
                                await msg.stream_token("**AI:** ")
                                for content in extract_content(item):
                                    await msg.stream_token(content)
                            elif item["type"] == "tool":
                                await msg.stream_token(f"\n**Tool Result ({item['name']}):**\n")
                                await msg.stream_token(f"```json\n{item['content']}\n```\n")
                    else:
                        prettified = json.dumps(data, indent=2)
                        await msg.stream_token(f"```json\n{prettified}\n```\n")
                else:
                    await msg.stream_token(f"{line}\n")

        await msg.update()

    except requests.exceptions.HTTPError as http_err:
        await cl.Message(content=f"HTTP error occurred: {http_err}").send()
    except requests.exceptions.RequestException as req_err:
        await cl.Message(content=f"Request error occurred: {req_err}").send()
    except ValueError:
        await cl.Message(content="Error: Invalid JSON response").send()

# Note: no __main__ block is needed; Chainlit starts the server itself
# when you launch the app with `chainlit run main.py`.

Explanation

  • Load API Keys: Load necessary API keys from the .env file.

  • Parse SSE: Helper functions to parse server-sent events and extract content.

  • Main Function: Handle incoming messages, send requests to the LangGraph API, and stream responses back to the user.

  • Run Chainlit: Start the server with the chainlit run command.
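
To see what the SSE helper actually does, here is the same parse_sse function from main.py exercised against the three kinds of lines the stream can contain:

```python
import json

def parse_sse(line):
    # Same helper as in main.py: decode the JSON after a "data:" prefix.
    if line.startswith("data:"):
        try:
            return json.loads(line[5:].strip())
        except json.JSONDecodeError:
            return None
    return None

print(parse_sse('data: {"messages": []}'))  # {'messages': []}
print(parse_sse("event: values"))           # None: event lines carry no JSON
print(parse_sse("data: not json"))          # None: malformed payloads are ignored
```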


Run Chainlit

chainlit run main.py

Test the User Interface

Open the provided URL in your browser and interact with your AI agent by asking questions such as "Give me the latest AI news."


Conclusion

In this blog post, we walked through the steps to create, test, deploy, and integrate a LangGraph Cloud application. By leveraging LangGraph Cloud, you can easily manage and deploy AI agents, monitor their performance, and integrate them into your own applications.

We hope you found this tutorial helpful. Happy coding! 🚀

If you have any questions or need further assistance, feel free to reach out. Don't forget to like, share, and subscribe for more content!
