Maia's New Backend!
We built a new backend for Maia! We did this for two reasons: 1) OpenAI signalled they would be deprecating the Assistants API, and 2) new features were increasingly difficult to implement on the previous stack. After a couple of months of building and testing, it is now live and serving all of our traffic. Here are a few of the more notable changes:
1) Model changes: we've long wanted to deploy a few different models, particularly for certain sector-specific questions that are complicated and require multi-step calculations and multi-turn online research. In addition to GPT-4.1, we now also use Gemini 2.5 Pro (see the routing sketch after this list).
2) Business profiles: now that we're managing conversation history directly instead of using OpenAI's threads, we're improving our context engineering so that important business information is surfaced to the responding model as a detailed business profile rather than buried in a long transcript (see the context-assembly sketch after this list). These LLM-optimized business profiles are only available within that user's conversation and follow our strict privacy policy.
3) Usability improvements: our users aren't familiar with AI and expect the tool to work like a human conversation on WhatsApp. That means processing multi-message bursts as a single message and queueing incoming messages, rather than forcing users into the usual send-one-message-then-wait-for-a-response structure of AI chatbots (see the burst-handling sketch after this list).
4) Image generation: we're adding LLM-generated images for particular business cases where they're useful (e.g. ideating on logos and branding, social media promotions), while teaching users other tools for use cases that are less appropriate for LLM images (e.g. adding the Yape logo to their sign).
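For the curious, here's a rough sketch of what per-question model routing can look like. The keyword heuristic, function names, and model identifiers below are simplified stand-ins for illustration, not our production routing logic.

```python
# Minimal sketch of per-question model routing (illustrative, not our exact code).
# Assumes the official OpenAI and Google Generative AI Python SDKs are installed.
import os
from openai import OpenAI
import google.generativeai as genai

openai_client = OpenAI()                                  # reads OPENAI_API_KEY from the environment
genai.configure(api_key=os.environ["GEMINI_API_KEY"])     # assumed environment variable

COMPLEX_HINTS = ("projection", "cash flow", "regulation", "import", "tax")

def pick_model(question: str) -> str:
    """Route complicated, research-heavy sector questions to Gemini 2.5 Pro,
    everything else to GPT-4.1. Real routing would use a classifier, not keywords."""
    if any(hint in question.lower() for hint in COMPLEX_HINTS):
        return "gemini-2.5-pro"
    return "gpt-4.1"

def answer(question: str) -> str:
    model = pick_model(question)
    if model.startswith("gemini"):
        response = genai.GenerativeModel(model).generate_content(question)
        return response.text
    response = openai_client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```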
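The business-profile idea boils down to a context-assembly step: keep only the most recent turns verbatim and carry everything else as a compact, structured profile in the system prompt. The sketch below is a simplified illustration under assumed names (BusinessProfile, build_messages), not our actual schema.

```python
# Sketch of context assembly: surface a compact business profile plus recent turns,
# instead of replaying the full conversation transcript. Names are illustrative.
from dataclasses import dataclass

@dataclass
class BusinessProfile:
    name: str
    sector: str
    location: str
    notes: list[str]          # durable facts extracted from past conversations

    def render(self) -> str:
        facts = "\n".join(f"- {n}" for n in self.notes)
        return (f"Business: {self.name}\nSector: {self.sector}\n"
                f"Location: {self.location}\nKey facts:\n{facts}")

def build_messages(profile: BusinessProfile, history: list[dict], user_msg: str,
                   recent_turns: int = 10) -> list[dict]:
    """Keep only the last few turns verbatim; older context lives in the profile."""
    system = ("You are Maia, a business assistant on WhatsApp.\n\n"
              "Known information about this user's business:\n" + profile.render())
    return ([{"role": "system", "content": system}]
            + history[-recent_turns:]
            + [{"role": "user", "content": user_msg}])
```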
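And here's the shape of the burst handling: after each incoming message we wait a short quiet period, and if nothing else arrives, everything received so far is processed as one combined turn. The 3-second window and function names below are assumptions for illustration, not our tuned values.

```python
# Sketch of WhatsApp-style burst handling: restart a quiet-period timer on every
# incoming message, then respond once to the whole burst. Illustrative only.
import asyncio

QUIET_SECONDS = 3.0
pending: dict[str, list[str]] = {}                 # user_id -> messages awaiting a reply
debounce_tasks: dict[str, asyncio.Task] = {}

async def on_incoming_message(user_id: str, text: str) -> None:
    pending.setdefault(user_id, []).append(text)
    # A new message in the burst resets the quiet-period timer.
    if task := debounce_tasks.get(user_id):
        task.cancel()
    debounce_tasks[user_id] = asyncio.create_task(flush_after_quiet(user_id))

async def flush_after_quiet(user_id: str) -> None:
    try:
        await asyncio.sleep(QUIET_SECONDS)
    except asyncio.CancelledError:
        return                                     # a newer message restarted the timer
    burst = "\n".join(pending.pop(user_id, []))
    await respond(user_id, burst)                  # single model call for the whole burst

async def respond(user_id: str, combined_text: str) -> None:
    print(f"[{user_id}] responding to:\n{combined_text}")   # stand-in for the LLM call
```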
Along with these and other improvements for users, the new backend is also highly scalable and modular (implemented as a set of Google Cloud Functions), which makes it easier to share with white-label partners.
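Each piece of the pipeline lives in its own function. As a flavor of what one of those functions looks like, here is a minimal HTTP-triggered entry point using Google's Python functions framework; the endpoint name and payload fields are illustrative assumptions, not the production interface.

```python
# Minimal sketch of one function in the modular backend: an HTTP-triggered
# webhook entry point. Payload fields here are assumptions for illustration.
import functions_framework
from flask import Request, jsonify

@functions_framework.http
def whatsapp_webhook(request: Request):
    """Receive an incoming WhatsApp message and hand it to the conversation pipeline."""
    payload = request.get_json(silent=True) or {}
    user_id = payload.get("user_id")
    text = payload.get("text", "")
    if not user_id:
        return jsonify({"error": "missing user_id"}), 400
    # In production this would enqueue the message for the burst-handling and
    # routing steps sketched above; here we just acknowledge receipt.
    return jsonify({"status": "queued", "user_id": user_id, "chars": len(text)}), 200
```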