A question that comes up often: what is a good use case for an LLM? Below is one such use case.
TL;DR
LLMs excel at understanding natural language. You can use them to interpret unstructured user queries, figure out the correct API to call, and respond with precise results.
DETAILS
A picture is worth a thousand words, so let’s follow this flow:
Figure: Flow between User, Your App, and LLM
Imagine you have a repository of 10 APIs. While clients can technically call them directly (if public), you want to allow natural, conversational interaction instead of requiring them to know API details or write code.
For example, a client might ask: “Hey, can you tell me the best discount you can give me if the quote is $1200?”
Old way (pre-GenAI):
The user enters $1200 into a text box, clicks “Calculate discount,” the backend calls “calculateDiscount(quoteAmount)”, and the frontend displays the result.
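The old flow boils down to a single typed endpoint behind a form. A minimal sketch in Python, where calculate_discount and its discount tiers are hypothetical stand-ins for whatever your real backend does:

```python
# Pre-GenAI flow, sketched: the frontend posts the number from the
# text box, the backend runs one typed function, the result goes back.
# The function name and tier thresholds here are made up for illustration.

def calculate_discount(quote_amount: float) -> float:
    """Return a discount percentage based on the quote size."""
    if quote_amount >= 1000:
        return 10.0
    if quote_amount >= 500:
        return 5.0
    return 0.0

# The "$1200 in a text box" example from above:
print(calculate_discount(1200))  # → 10.0
```

The user never phrases anything; the form constrains the input to exactly one parameter of one function.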
New way (LLM-powered):
The user simply types or says the request in plain English. Your app sends that unstructured query to an LLM that already “knows” your API catalog and expected parameters. The LLM responds in a predefined structure specifying:
Which API to call
The exact arguments to use
Your backend uses that structured output to call the right API, get the result, and present it back to the user.
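The steps above can be sketched as follows. The LLM call itself is mocked here: in a real app you would send the user's query plus a description of your API catalog to the model and instruct it to answer in exactly this JSON shape. The catalog entries and the calculate_discount logic are hypothetical.

```python
import json

# Hypothetical API catalog: name -> callable. Only one API is
# fleshed out; a real catalog would map all ten.
def calculate_discount(quote_amount: float) -> float:
    return 10.0 if quote_amount >= 1000 else 5.0

API_CATALOG = {"calculateDiscount": calculate_discount}

# Mocked LLM output: the predefined structure naming which API to
# call and the exact arguments, extracted from the plain-English query
# "Best discount you can give me if the quote is $1200?"
llm_response = '{"api": "calculateDiscount", "arguments": {"quote_amount": 1200}}'

# Your backend parses the structure and dispatches the call.
choice = json.loads(llm_response)
result = API_CATALOG[choice["api"]](**choice["arguments"])
print(f"Best discount: {result}%")  # → Best discount: 10.0%
```

Because the model's reply is constrained to a known structure, the dispatch step stays plain, deterministic backend code.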
Alternatively, with tool (function) calling, the LLM itself can invoke the chosen API with the expected parameters, receive the response, and relay the answer back to the user.
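That alternative is a round-trip loop: the model asks for a tool to be run, your app executes it, and the result is fed back so the model can phrase the final answer. A sketch with the model mocked out (fake_llm, the message shapes, and the discount logic are all illustrative assumptions, not a specific vendor's API):

```python
# Tool-calling loop, sketched. fake_llm stands in for a real model:
# first turn it emits a tool call; after a tool result arrives,
# it phrases the answer for the user.

def calculate_discount(quote_amount: float) -> float:
    return 10.0 if quote_amount >= 1000 else 5.0

TOOLS = {"calculateDiscount": calculate_discount}

def fake_llm(messages):
    last = messages[-1]
    if last["role"] == "user":
        return {"role": "assistant",
                "tool_call": {"name": "calculateDiscount",
                              "arguments": {"quote_amount": 1200}}}
    # A tool result came back: turn it into a user-facing sentence.
    return {"role": "assistant",
            "content": f"The best discount on that quote is {last['content']}%."}

messages = [{"role": "user", "content": "Best discount on a $1200 quote?"}]
reply = fake_llm(messages)
while "tool_call" in reply:                 # model wants a tool executed
    call = reply["tool_call"]
    outcome = TOOLS[call["name"]](**call["arguments"])
    messages.append({"role": "tool", "content": outcome})
    reply = fake_llm(messages)
print(reply["content"])
```

The only difference from the previous sketch is who closes the loop: here the tool result goes back to the model, which writes the reply, instead of your backend formatting it.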
Result:
Users can now interact with your system naturally -- asking things like “How many types of paracetamol tablets are in stock?” or “What’s the max discount I can get on a $1200 quote?” -- without needing to navigate forms or learn API syntax.
Closing Remark
By bridging human-friendly language and machine-friendly API calls, LLMs can turn any app into a conversational interface -- making your systems both more accessible and more powerful.