How to Set Up LLM Analytics and Monitoring to Enhance Your AI Chatflows

If you’ve ever wondered how to improve your AI applications and see what’s going on under the hood of your LLMs, you need a good LLM analytics and monitoring platform.

At this point, Large Language Models (LLMs) have gotten really good at predictive text generation and answering questions based on embedded documents. In a production environment, however, you still want to be able to troubleshoot problems and improve the quality of your chatflow responses. LLM analytics and monitoring can provide real-time insights that help you in five important ways.

  1. Debugging: LLM applications can be complex, with many interconnected parts, making it challenging to find and fix issues. However, with tools like logging and telemetry data that act like breadcrumbs, you can track down the source of a problem.

  2. Testing: Testing is how we check that our application performs well. Testing LLMs can be a bit of a rollercoaster because they’re trained on such large datasets, but while you may not need to test the LLM itself, you can test the different components of your Flowise chatflow, including memory, custom tools, and prompt chains.

  3. Evaluation: Since there’s no one-size-fits-all way to determine LLM quality, it’s up to you to decide what counts as a quality response for your application. Even with rapidly improving LLMs, the quality of your embedded data, especially for Conversational Retrieval, will largely determine the quality of your output responses.


  4. Tracing: Tracing follows your chatflow’s journey from the initial prompt to the output response. Think of it as a way to ensure the LLM’s answers can be traced back to the original documents, so it isn’t making things up or hallucinating. In Flowise, you can also have your chatflow return its source documents so you can see where it’s getting its information. This can dramatically improve the quality of your responses.

  5. Usage Metrics: Monitoring how an LLM is used provides valuable insights for enhancing performance and usability. Common usage metrics include the number of requests per second, average usage time, and popular LLM tasks (see the sketch after this list for one way these metrics can be derived). Comparing metrics across different models helps you select the most suitable model for a given scenario.
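To make the telemetry idea concrete, here’s a minimal sketch of the kind of trace record an analytics platform collects, and how a usage metric like requests per second might be derived from a batch of them. The field names are hypothetical, not any particular platform’s schema:

```python
from dataclasses import dataclass

@dataclass
class TraceRecord:
    """One step of a chatflow run, as an analytics platform might log it."""
    run_id: str        # groups all steps belonging to one query
    step: str          # e.g. "retriever", "llm", "output-parser"
    started_at: float  # Unix timestamp when the step began
    ended_at: float    # Unix timestamp when the step finished
    tokens_used: int   # tokens consumed by this step (0 for non-LLM steps)

def requests_per_second(records: list[TraceRecord]) -> float:
    """Derive a simple usage metric from a batch of trace records."""
    window = max(r.ended_at for r in records) - min(r.started_at for r in records)
    runs = len({r.run_id for r in records})
    return runs / window if window > 0 else 0.0

# Example: one run with two steps, spanning 1.2 seconds
records = [
    TraceRecord("run-1", "retriever", 100.0, 100.4, 0),
    TraceRecord("run-1", "llm", 100.4, 101.2, 350),
]
print(requests_per_second(records))  # ~0.83 runs per second
```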

How to Install LLM Analytics for Flowise

Flowise comes integrated with three analytics platforms: Langfuse, LLM Monitor (Lunary), and LangSmith.

At the time of this writing, LangSmith is still invite-only, so I was only able to try Langfuse and LLM Monitor, now called Lunary. Both are open-source projects, so you can self-host them on your own server; however, I used the cloud versions of both platforms.

Accessing both tools in Flowise is pretty straightforward. Once you sign up for an account at either Langfuse or Lunary, you copy and paste the API keys into Flowise as credentials. After that, you can select those credentials when choosing which analytics tool to use for your chatflow.

Once we have entered our credentials and activated the platform we want (in this case Lunary), we can start monitoring our prompts and output responses, as well as an estimate of the tokens used during a query and response.
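As a point of reference, here’s a minimal sketch of querying a Flowise chatflow through its prediction REST API; once your analytics credential is attached to the chatflow, each call like this should appear as a trace in Lunary. The base URL and chatflow ID are placeholders for your own instance:

```python
import requests

# Placeholders: substitute your own Flowise instance and chatflow ID
FLOWISE_URL = "http://localhost:3000"
CHATFLOW_ID = "your-chatflow-id"

def query_chatflow(question: str) -> dict:
    """Send a question to the chatflow; the run is traced by the analytics platform."""
    response = requests.post(
        f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}",
        json={"question": question},
    )
    response.raise_for_status()
    return response.json()

print(query_chatflow("What does the employee handbook say about remote work?"))
```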

Once connected, we can track calls to our LLM in the Lunary dashboard and see exactly how our prompts are working.

Lunary also provides traces, so we can see each step of the execution chain, including the elapsed time in seconds. This can help us spot performance issues and reduce latency in our chatflows.

Using an analytics tool also helps you see how your chatflow’s history works and how previous responses are added to the prompt’s context window to answer related questions.

Another real-time insight is the number of tokens consumed. If your chatflow uses a chat history, for example, that can significantly increase your token usage and affect your LLM costs.
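If you want a rough sense of how much a chat history inflates your usage before it shows up on a bill, you can estimate token counts locally with OpenAI’s tiktoken library (counts are model-specific, and other providers tokenize differently):

```python
import tiktoken

# Tokenizer for the model your chatflow uses (assumed here to be gpt-3.5-turbo)
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

question = "What does the refund policy say about digital products?"
chat_history = (
    "User: What products do you sell?\n"
    "Assistant: We sell ebooks, video courses, and software licenses.\n"
)

# The new question alone...
print(len(enc.encode(question)))
# ...versus the question with prior turns prepended to the context window
print(len(enc.encode(chat_history + question)))
```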

Both Langfuse and LLM Monitor/Lunary do a great job of showing you your traces, although I preferred Lunary’s UI and its ability to estimate my token usage.

Depending on your LLM and chatflow, you may get different results, so you’ll have to test it for yourself. I’ve found these tools incredibly useful for sorting out issues in my chatflows.

Langfuse:
langfuse.com

LLM Monitor (Lunary):
lunary.ai

 

Flowise Chain Node Mastery (Mini-Course)

Flowise Training | Level: Intermediate

Discover How to Use Chain Nodes in Flowise to Build Powerful AI Applications

Chain nodes are the building blocks of Flowise, a powerful tool for creating conversational AI applications. These nodes enable you to perform a wide range of tasks, from retrieving data from APIs and databases to generating responses using large language models. In this mini course, you will learn about the different chain nodes available in Flowise and how to use them to build sophisticated conversational AI applications.

Here’s What You’ll Learn:

  • How each of the chain nodes works and how to use them in Flowise.
  • Which nodes have been replaced by newer tools and updates.
  • How to connect to a variety of APIs to extend the power of your chatflows and AI applications.
  • How to use the SQL Database Chain to interact with SQL databases using natural language.
  • When to use prompt chains vs. retrieval chains in your workflows.
  • How to add chat models to your chains that might not be supported yet in Flowise.
  • How to create multi-prompt chains as well as retrieval QA chains that can handle multiple documents.

By the end of this mini course, you will have a better understanding of the different chain nodes available in Flowise and how to use them to build sophisticated conversational AI applications.

You will also learn best practices for improving the performance of your chains, handling errors, formatting your data, and loading documents for your RAG (Retrieval Augmented Generation) applications.

 

Course Content

8 Sections | 4 Lectures | 50 Minutes Total Length

Lesson 1: API Chains

In this lesson, we’ll look at the API Chains, including the GET and POST Chains as well as the OpenAI API Chain. These chains are used to connect to a variety of third-party APIs for integration with Flowise.

Project Files: Open Movie Review Chatbot (POST | GET Request)
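If you’re curious what a GET Chain does behind the scenes, here’s a rough sketch of the kind of request the project makes against the Open Movie Database (OMDb) API. The endpoint is real, but you’ll need your own free API key; the key below is a placeholder:

```python
import requests

OMDB_API_KEY = "your-api-key"  # placeholder; get a free key at omdbapi.com

def get_movie(title: str) -> dict:
    """Fetch movie details by title, the kind of lookup a GET chain performs."""
    response = requests.get(
        "http://www.omdbapi.com/",
        params={"apikey": OMDB_API_KEY, "t": title},
    )
    response.raise_for_status()
    return response.json()

movie = get_movie("The Matrix")
print(movie.get("Title"), movie.get("Year"), movie.get("imdbRating"))
```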
Lesson 2: Database Chains

In this lesson, we’ll explore the database chains, including the SQL Database Chain, the Vectara QA Chain, and the VectorDB QA Chain. These chains help you connect to and query a database.

In the case of the SQL Database Chain, the LLM writes SQL queries against your database based on your questions. The Vectara and VectorDB chains connect to a vector database in order to summarize and answer questions based on your documents.
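Flowise is built on LangChain, so a rough Python equivalent of what the SQL Database Chain does looks something like the sketch below. LangChain’s module paths shift between versions, so treat the imports as indicative; the database path is a placeholder, and an OpenAI API key is assumed to be set in the environment:

```python
from langchain_openai import ChatOpenAI
from langchain_community.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

# Connect to a local SQLite database of movie data (path is a placeholder)
db = SQLDatabase.from_uri("sqlite:///movies.db")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# The chain has the LLM write SQL for your question, executes it, and
# turns the result back into a natural-language answer
chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
print(chain.invoke({"query": "How many movies were released after 2010?"}))
```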

Project Files: SQL Database Chain

In this project, we’ve created our own SQL database with data from the Open Movie Database to test the SQL Database Chain in Flowise.

Lesson 3: Retrieval QA Chains

Retrieval QA Chains are a core feature of Flowise that allow you to build powerful Retrieval Augmented Generation (RAG) applications.

In this lesson, we’ll look at each of the Retrieval QA chains and show you how to use them in your chatflows.

Project Files: Retrieval QA Chains

Lesson 4: LLM Prompt Chains

LLM Prompt Chains allow us to add prompt templates to our chatflows and customise exactly how we want our LLMs to respond to queries or user input.

Prompt templates also let you pull in information from previous nodes, allowing you to create multistep workflows based on what you need your application to do.

In this lesson, we’ll look at three nodes: the LLM Chain, the Conversation Chain, and the Multi Prompt Chain. Each one can be connected to chat prompt templates so you can create complex conversational workflows that leverage the power of large language models.
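To make the chaining idea concrete, here’s a rough LangChain-style sketch of two prompt templates feeding into each other, similar in spirit to the App Idea Generator project below. The imports reflect recent LangChain Python packages and may differ in your version, and an OpenAI API key is assumed:

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")
parser = StrOutputParser()

# Step 1: generate an app idea for a given topic
idea_prompt = PromptTemplate.from_template(
    "Suggest one mobile app idea for the topic: {topic}"
)
# Step 2: the first step's output becomes the {idea} variable of the next prompt
pitch_prompt = PromptTemplate.from_template(
    "Write a two-sentence elevator pitch for this app idea: {idea}"
)

chain = (
    idea_prompt | llm | parser
    | (lambda idea: {"idea": idea})  # map step 1's text into step 2's input
    | pitch_prompt | llm | parser
)
print(chain.invoke({"topic": "home gardening"}))
```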

Project Files: LLM Prompt Chains

In this project, we'll create an App Idea Generator using LLM prompt chaining

Review Q+A

In this session, we'll answer some common questions about Flowise including:
1. How can I improve the performance of a chain in Flowise? (i.e., improving the quality of responses, reducing hallucinations, increasing speed, etc.)
2. What is the best way to connect multiple chain nodes in Flowise?
3. How can I use chat models in my chains that might not be supported yet in Flowise?
4. What is the best way to load documents for retrieval (PDF, CSV, TXT, etc.)?
5. How do I handle errors in a chain in Flowise?
6. How do I format the output of a Retrieval QA Chain in Flowise?
7. How do I create a Retrieval QA Chain that can handle multiple documents in Flowise?
8. What do I need to do to make sure values are passed between chain nodes?
9. Is it possible to use a form with many inputs in Flowise instead of the chat input method?
10. How do I create a chain that can handle large datasets in Flowise?
