This post is cowritten with Harrison Hunter, CTO and co-founder of MaestroQA.
MaestroQA augments call center operations by empowering the quality assurance (QA) process and customer feedback analysis to increase customer satisfaction and drive operational efficiencies. They assist with operations such as QA reporting, coaching, workflow automations, and root cause analysis.
In this post, we dive deeper into one of MaestroQA’s key features—conversation analytics, which helps support teams uncover customer concerns, address points of friction, adapt support workflows, and identify areas for coaching through the use of Amazon Bedrock. We discuss the unique challenges MaestroQA overcame and how they use AWS to build new features, drive customer insights, and reduce operational inefficiencies.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies, such as AI21 Labs, Anthropic, Cohere, Meta, Mistral, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
The opportunity for open-ended conversation analysis at enterprise scale
MaestroQA serves a diverse clientele across various industries, including ecommerce, marketplaces, healthcare, talent acquisition, insurance, and fintech. All of these customers have a common challenge: the need to analyze a high volume of interactions with their customers. Analyzing these customer interactions is crucial to improving their products, improving their customer support, increasing customer satisfaction, and identifying key industry signals. However, customer interaction data such as call center recordings, chat messages, and emails are highly unstructured and require advanced processing techniques in order to accurately and automatically extract insights.
When customers receive incoming calls at their call centers, MaestroQA employs its proprietary transcription technology, built by enhancing open source transcription models, to transcribe the conversations. After the data is transcribed, MaestroQA uses technology they have developed in combination with AWS services such as Amazon Comprehend to run various types of analysis on the customer interaction data. For example, MaestroQA offers sentiment analysis for customers to identify the sentiment of their end customer during the support interaction, enabling MaestroQA’s customers to sort their interactions and manually inspect the best or worst ones. MaestroQA also offers a logic- and keyword-based rules engine for classifying customer interactions based on other factors, such as timing or process steps, including metrics like Average Handle Time (AHT), compliance or process checks, and SLA adherence.
MaestroQA’s customers love these analysis features because they allow them to continuously improve the quality of their support and identify areas where they can improve their product to better satisfy their end customers. However, they were also interested in more advanced analysis, such as asking open-ended questions like “How many times did the customer ask for an escalation?” MaestroQA’s existing rules engine couldn’t always answer these types of queries because end-users could ask for the same outcome in many different ways. For example, “Can I speak to your manager?” and “I would like to speak to someone higher up” don’t share the same keywords, but are both asking for an escalation. MaestroQA needed a way to accurately classify customer interactions based on open-ended questions.
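To illustrate why keyword rules fall short here, consider a minimal sketch of a keyword-based escalation classifier (the keyword list and function are hypothetical, for illustration only): it catches phrasings that contain a known keyword but misses paraphrases that express the same intent.

```python
import re

# Hypothetical keyword list for an "escalation request" rule
ESCALATION_KEYWORDS = {"manager", "supervisor", "escalate"}

def keyword_rule_matches(utterance: str) -> bool:
    """True if any escalation keyword appears as a word in the utterance."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    return bool(words & ESCALATION_KEYWORDS)

# The rule catches the first phrasing but misses the paraphrase,
# even though both are escalation requests.
print(keyword_rule_matches("Can I speak to your manager?"))                # True
print(keyword_rule_matches("I would like to speak to someone higher up"))  # False
```

An LLM-based classifier, by contrast, can recognize that both utterances express the same intent regardless of wording, which is exactly the gap the rules engine couldn’t close.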
MaestroQA faced an additional hurdle: the immense scale of customer interactions their clients manage. With clients handling anywhere from thousands to millions of customer engagements monthly, there was a pressing need for comprehensive analysis of support team performance across this vast volume of interactions. Consequently, MaestroQA had to develop a solution capable of scaling to meet their clients’ extensive needs.
To start developing this product, MaestroQA first rolled out a product called AskAI. AskAI allowed customers to run open-ended questions on a targeted list of up to 1,000 conversations. For example, a customer might use MaestroQA’s filters to find customer interactions in Oregon within the past two months and then run a root cause analysis query such as “What are customers frustrated about in Oregon?” to find churn risk anecdotes. Their customers really liked this feature and surprised MaestroQA with the breadth of use cases they covered, including analyzing marketing campaigns, service issues, and product opportunities. Customers started to request the ability to run this type of analysis across all of their transcripts, which could number in the millions, so they could quantify the impact of what they were seeing and find instances of important issues.
Solution overview
MaestroQA decided to use Amazon Bedrock to address their customers’ need for advanced analysis of customer interaction transcripts. Amazon Bedrock’s broad choice of FMs from leading AI companies, along with its scalability and security features, made it an ideal solution for MaestroQA.
MaestroQA integrated Amazon Bedrock into their existing architecture using Amazon Elastic Container Service (Amazon ECS). The customer interaction transcripts are stored in an Amazon Simple Storage Service (Amazon S3) bucket.
The following architecture diagram demonstrates the request flow for AskAI. When a customer submits an analysis request through MaestroQA’s web application, an ECS cluster retrieves the relevant transcripts from Amazon S3, cleans and formats them into a prompt, sends the prompt to Amazon Bedrock for analysis using the customer’s selected FM, and stores the results in a database hosted in Amazon Elastic Compute Cloud (Amazon EC2), where they can be retrieved by MaestroQA’s frontend web application.
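The prompt-assembly step in this flow might look like the following sketch. The function name, prompt template, and transcript separator are illustrative assumptions, not MaestroQA’s actual code; the request body follows the Anthropic Messages format that Bedrock’s InvokeModel API expects for Claude models, and the boto3 call is shown commented out so the snippet stays self-contained.

```python
import json

def build_analysis_request(question: str, transcripts: list[str],
                           max_tokens: int = 1024) -> str:
    """Assemble transcripts and an open-ended question into an
    Anthropic Messages request body for Bedrock InvokeModel."""
    joined = "\n\n---\n\n".join(transcripts)
    prompt = (
        f"Here are customer support transcripts:\n\n{joined}\n\n"
        f"Question: {question}"
    )
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_analysis_request(
    "What are customers frustrated about in Oregon?",
    ["Transcript 1 ...", "Transcript 2 ..."],
)

# The ECS task would then send this body to Bedrock, for example:
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-5-sonnet-20240620-v1:0", body=body)
```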
MaestroQA offers their customers the flexibility to choose from multiple FMs available through Amazon Bedrock, including Anthropic’s Claude 3.5 Sonnet, Anthropic’s Claude 3 Haiku, Mistral 7B and Mixtral 8x7B, Cohere’s Command R and R+, and Meta’s Llama 3.1 models. This allows customers to select the model that best suits their specific use case and requirements.
The following screenshot shows how the AskAI feature allows MaestroQA’s customers to use the wide variety of FMs available on Amazon Bedrock to ask open-ended questions such as “What are some of the common issues in these tickets?” and generate useful insights from customer service interactions.
To handle the high volume of customer interaction transcripts and provide low-latency responses, MaestroQA takes advantage of the cross-Region inference capabilities of Amazon Bedrock. Originally, they were doing the load balancing themselves, distributing requests between available AWS US Regions (us-east-1, us-west-2, and so on) and available EU Regions (eu-west-3, eu-central-1, and so on) for their North American and European customers, respectively. Now, the cross-Region inference capability of Amazon Bedrock enables MaestroQA to achieve twice the throughput compared to single-Region inference, a critical factor in scaling their solution to accommodate more customers. MaestroQA’s team no longer has to spend time and effort to predict their demand fluctuations, which is especially key when usage increases for their ecommerce customers around the holiday season. Cross-Region inference dynamically routes traffic across multiple Regions, providing optimal availability for each request and smoother performance during these high-usage periods. MaestroQA monitors this setup’s performance and reliability using Amazon CloudWatch.
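The shift can be sketched as follows. The round-robin reconstruction of the original client-side load balancing is a hypothetical illustration (MaestroQA’s actual routing logic isn’t described in detail), while the geography-prefixed inference profile ID in the comment reflects how cross-Region inference is invoked on Bedrock.

```python
import itertools

# Hypothetical reconstruction of the original client-side load balancing:
# rotate each request across the Regions available to the customer's geography.
US_REGIONS = ["us-east-1", "us-west-2"]
EU_REGIONS = ["eu-west-3", "eu-central-1"]

def region_cycler(regions):
    """Round-robin over a fixed Region list, one Region per request."""
    return itertools.cycle(regions)

us_cycle = region_cycler(US_REGIONS)
picks = [next(us_cycle) for _ in range(4)]
print(picks)  # ['us-east-1', 'us-west-2', 'us-east-1', 'us-west-2']

# With cross-Region inference, this routing moves into Bedrock itself:
# instead of picking a Region per request, the caller passes a
# geography-prefixed inference profile ID and Bedrock routes the request, e.g.:
# model_id = "us.anthropic.claude-3-5-sonnet-20240620-v1:0"
```

Moving the routing into the service is what removes the capacity-forecasting burden: the client code no longer needs to know which Regions have headroom at any given moment.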
Benefits: How Amazon Bedrock added value
Amazon Bedrock has enabled MaestroQA to innovate faster and gain a competitive advantage by offering their customers powerful generative AI features for analyzing customer interaction transcripts. With Amazon Bedrock, MaestroQA can now provide their customers with the ability to run open-ended queries across millions of transcripts, unlocking valuable insights that were previously inaccessible.
The broad choice of FMs available through Amazon Bedrock allows MaestroQA to cater to their customers’ diverse needs and preferences. Customers can select the model that best aligns with their specific use case, finding the right balance between performance and price.
The scalability and cross-Region inference capabilities of Amazon Bedrock enable MaestroQA to handle high volumes of customer interaction transcripts while maintaining low latency, regardless of their customers’ geographical locations.
MaestroQA takes advantage of the robust security features and responsible AI practices of Amazon Bedrock to bolster customer confidence. These measures help ensure that client data remains secure during processing and isn’t used for model training by third-party providers. Additionally, the availability of Amazon Bedrock in Europe, coupled with its geographic control capabilities, allows MaestroQA to seamlessly extend AI services to European customers. This expansion is achieved without introducing additional complexity, maintaining operational efficiency while adhering to regional data regulations.
The adoption of Amazon Bedrock proved to be a game changer for MaestroQA’s compact development team. Its serverless architecture allowed the team to rapidly prototype and refine their application without the burden of managing complex hardware infrastructure. This shift enabled MaestroQA to channel their efforts into optimizing application performance rather than grappling with resource allocation. Moreover, Amazon Bedrock offers seamless compatibility with their existing AWS environment, allowing for a smooth integration process and further streamlining their development workflow. MaestroQA was able to use their existing authentication process with AWS Identity and Access Management (IAM) to securely authenticate their application to invoke large language models (LLMs) within Amazon Bedrock. They were also able to use the familiar AWS SDK to quickly and effortlessly integrate Amazon Bedrock into their application.
Overall, by using Amazon Bedrock, MaestroQA is able to provide their customers with a powerful and flexible solution for extracting valuable insights from their customer interaction data, driving continuous improvement in their products and support processes.
Success metrics
The early results have been remarkable.
A lending company uses MaestroQA to detect compliance risks on 100% of their conversations. Before, agents would raise internal escalations if a consumer complained about the loan or expressed being in a vulnerable state. However, this process was manual and error prone, and the lending company would miss many of these risks. Now, they are able to detect compliance risks with almost 100% accuracy.
A medical device company, which is required to report device issues to the FDA, no longer relies solely on agents to internally report customer-reported issues; instead, it uses this service to analyze all of its conversations and make sure every complaint is flagged.
An education company has been able to replace their manual survey scores with an automated customer sentiment score that increased their sample size from 15% to 100% of conversations.
The best is yet to come.
Conclusion
Using AWS, MaestroQA was able to innovate faster and gain a competitive advantage. Companies from different industries such as financial services, healthcare and life sciences, and EdTech all share the common desire to provide better customer services for their clients. MaestroQA was able to enable them to do that by quickly pivoting to offer powerful generative AI features that solved tangible business problems and enhanced overall compliance.
Check out MaestroQA’s feature AskAI and their LLM-powered AI Classifiers if you’re interested in better understanding your customer conversations and survey scores. For more about Amazon Bedrock, see Get started with Amazon Bedrock and learn about features such as cross-Region inference to help scale your generative AI features globally.
About the Authors
Carole Suarez is a Senior Solutions Architect at AWS, where she helps guide startups through their cloud journey. Carole specializes in data engineering and holds an array of AWS certifications on a variety of topics including analytics, AI, and security. She is passionate about learning languages and is fluent in English, French, and Tagalog.
Ben Gruher is a Generative AI Solutions Architect at AWS, focusing on startup customers. Ben graduated from Seattle University where he obtained bachelor’s and master’s degrees in Computer Science and Data Science.
Harrison Hunter is the CTO and co-founder of MaestroQA, where he leads the engineering and product teams. Prior to MaestroQA, Harrison studied computer science and AI at MIT.