Approach
Rather than retrofitting a chatbot or plugging in something generic, we started with the business workflow and use case — not the tech.
Our goal: identify where AI could actually solve a real problem without overwhelming the stack or the user.
Step 1: Identify AI-worthy Use Cases
We started with user behavior analytics and feedback. Three use cases stood out:
- Smart form assistance: guide users through long data input steps
- Auto-generated summaries of dashboard reports
- Natural language Q&A on platform features
These were pain points where automation could enhance the experience rather than distract from it.
Step 2: Add AI Logic Without Bloating the Frontend
We kept the frontend lean by using:
- fetch() and axios to call backend routes securely
- debounce and async/await for optimal UX timing
- Suspense boundaries for loading fallback handling
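The debounce-plus-async pattern above can be sketched as follows. This is a minimal illustration, not our production code: the `/api/ai/assist` route name and the response shape are hypothetical, and the frontend only ever talks to our own backend, never to the LLM provider.

```typescript
// Minimal debounce: delays the call until the user pauses typing,
// so we don't fire a backend request on every keystroke.
function debounce<T extends unknown[]>(fn: (...args: T) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Hypothetical backend route; auth and prompt logic live server-side.
async function fetchAssist(fieldValue: string): Promise<string> {
  const res = await fetch("/api/ai/assist", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: fieldValue }),
  });
  if (!res.ok) throw new Error(`Assist request failed: ${res.status}`);
  const data = await res.json();
  return data.suggestion; // assumed response field
}

// Fires at most once per 300 ms pause in typing.
const onInput = debounce((value: string) => {
  fetchAssist(value).then((s) => console.log(s)).catch(console.error);
}, 300);
```

The same shape works with axios by swapping `fetch` for `axios.post`; the debounce wrapper is what keeps the UX responsive either way.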
On the backend, we built middleware in Laravel (Node would work just as well) that handled:
- Prompt engineering per use case
- Authentication and rate-limiting
- Context injection (user, plan, past usage)
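Since the text notes the backend could equally be Node, here is a TypeScript sketch of two of those middleware concerns: per-user rate limiting and context injection. The types, limits, and field names are illustrative assumptions, not our actual schema.

```typescript
// Hypothetical shapes; the real app pulls these from its database/session.
interface User { id: string; plan: "free" | "pro"; recentQueries: string[]; }

interface AiRequestContext {
  userId: string;
  plan: string;
  history: string[];
  prompt: string;
}

// Simple fixed-window rate limiter keyed by user id
// (assumption: 20 AI requests per user per minute).
const windowMs = 60_000;
const maxPerWindow = 20;
const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(userId: string, now = Date.now()): boolean {
  const entry = hits.get(userId);
  if (!entry || now - entry.windowStart >= windowMs) {
    hits.set(userId, { count: 1, windowStart: now });
    return true;
  }
  entry.count++;
  return entry.count <= maxPerWindow;
}

// Context injection: fold user, plan, and past usage into the request
// before any prompt template sees it.
function buildContext(user: User, prompt: string): AiRequestContext {
  return {
    userId: user.id,
    plan: user.plan,
    history: user.recentQueries.slice(-5), // only the last few queries
    prompt,
  };
}
```

In Laravel the equivalent would be route middleware plus a context service; the shape of the logic is the same.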
We routed requests to OpenAI or Anthropic depending on the function: OpenAI for summaries, Claude for broader answers.
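That per-function routing can be as simple as a lookup table. A minimal sketch, assuming three task labels of our own invention (the endpoint URLs are the providers' public API paths, but nothing here is pinned to a specific model):

```typescript
// Task labels are ours, not a provider feature.
type AiTask = "summary" | "form-assist" | "qa";

// Map each task to a provider and its API endpoint.
function routeModel(task: AiTask): { provider: "openai" | "anthropic"; endpoint: string } {
  switch (task) {
    case "summary":
      return { provider: "openai", endpoint: "https://api.openai.com/v1/chat/completions" };
    case "form-assist":
    case "qa":
      return { provider: "anthropic", endpoint: "https://api.anthropic.com/v1/messages" };
  }
}
```

Keeping this table in one place makes it cheap to re-route a task when one provider's quality or pricing changes.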
Step 3: Keep Control With Custom Prompts
We didn’t use base LLMs directly. Instead, we created our own prompt templates using:
- Internal terminology
- Guardrails to avoid hallucinations
- Data-specific context pulled from the app database
Each prompt was dynamically assembled server-side. This kept the agent both smart and safe.
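Server-side assembly might look like the sketch below. The template text, internal terms, and context fields are all placeholders standing in for what the real app pulls from its database; the structure (system guardrails + context block + user input) is the point.

```typescript
// Assumed template shape; actual templates live in the app, one per use case.
interface PromptTemplate {
  system: string;      // internal terminology + anti-hallucination guardrails
  userPrefix: string;
}

const summaryTemplate: PromptTemplate = {
  system: [
    "You are the reporting assistant for this platform.",
    "Use internal terms: 'workspace' (not 'project'), 'insight card' (not 'widget').",
    "If the answer is not in the provided context, say so; never invent numbers.",
  ].join("\n"),
  userPrefix: "Summarize the following dashboard data for the user:",
};

// Assemble the final prompt server-side from template + app-database context.
function assemblePrompt(
  template: PromptTemplate,
  context: Record<string, string>,
  userInput: string
): { system: string; user: string } {
  const contextBlock = Object.entries(context)
    .map(([key, value]) => `${key}: ${value}`)
    .join("\n");
  return {
    system: template.system,
    user: `${template.userPrefix}\n\n[Context]\n${contextBlock}\n\n[Input]\n${userInput}`,
  };
}
```

Because assembly happens server-side, the guardrails and injected context can never be stripped or overridden by the client.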