Overview
As part of the BOSC contract for the Centers for Medicare & Medicaid Services (CMS), I led the design of an AI-powered chatbot to alleviate the strain on External User Support (EUS). By integrating Amazon Lex with CXOne, we aimed to automate high-volume, low-risk tasks for Medicare providers struggling with account access.
My Role: Sr. Product Designer
Duration: August 2025 - January 2026
Platform: Amazon Lex, CXOne
A Spike in Volume
During peak periods, support call and chat volumes for Medicare systems (IDM and I&A) exceeded projections and agent capacity. With limited resources to hire new staff, wait times spiked and CSAT scores declined.
The Goal:
Automate the "soft-locked account" password reset process.
Deflect common knowledge base questions.
Optimize live agent time by collecting user data and identity verification documents before hand-off.
Lean Research
Because time and access to users were limited, I used ServiceNow case data and CXOne end-of-session surveys to identify the most common pain points. A competitive audit of chatbots (especially in healthcare support) helped set a baseline for best practices and user expectations.
De-risking user-bot interactions was a primary concern. To keep bad actors from probing the bot for user information, we updated our flow to send sensitive details to account-connected emails and to never confirm wrong answers in the chat portal.
Technical Constraints
Amazon Lex Limitations: Providing users with a review step (letting them check their data before it's submitted for a database query) worked against Lex's built-in confirmation logic, so we had to be strategic about when to offer it.
Integration Hurdles: Bridging the gap between the Lex automated portal and the CXOne live agent portal was a significant risk to our January launch. Luckily, we're finally making headway with the team managing that integration.
APIs for Days: CMS was able to provide us with APIs to search for I&A accounts by username as well as by identifying information. We learned late in design that we'd also need an API to automate closing ServiceNow tickets; otherwise, an agent would have to manually close every ticket the bot resolved.
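To illustrate the review-step constraint above: in Lex V2, a Lambda dialog code hook can summarize everything collected and ask for one confirmation just before the database query, rather than confirming slot by slot. This is a minimal sketch assuming illustrative intent and slot names (not the real bot's), using the standard Lex V2 event/response shapes:

```python
# Sketch of a review step as a Lex V2 Lambda dialog code hook.
# Summarizes all filled slots, then asks the user to confirm once
# via a ConfirmIntent dialog action before any database query runs.

def review_step(event):
    intent = event["sessionState"]["intent"]
    slots = intent.get("slots") or {}
    # Keep only slots the user has actually filled.
    collected = {
        name: slot["value"]["interpretedValue"]
        for name, slot in slots.items()
        if slot and slot.get("value")
    }
    summary = "; ".join(f"{name}: {value}" for name, value in collected.items())
    return {
        "sessionState": {
            "dialogAction": {"type": "ConfirmIntent"},  # yes/no review gate
            "intent": intent,
        },
        "messages": [{
            "contentType": "PlainText",
            "content": f"Please review what I have ({summary}). Is this correct?",
        }],
    }
```

Because the hook controls when ConfirmIntent fires, the review can be offered only at the strategic moments the flow calls for instead of after every slot.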
Seamless and Secure
Designing for a government entity meant that security and process were of the utmost importance. Even so, we wanted the experience to align with expectations Medicare providers might have of any non-government chatbot. I worked through several iterations of the user flow to ensure we were protecting Medicare provider PII and following processes that aligned with live agent support.
Critical Pivots:
No PII in the Portal: As mentioned above, we considered displaying usernames in the chat. For security reasons, we shifted the flow to have the bot trigger an automated email instead.
Eliminating the Pre-Chat Form: To avoid asking for too much from users up-front, we moved from a static pre-chat form to conversational data collection. Down the line, I'd love to A/B test whether autofill makes the pre-chat form the more convenient and accurate option.
Guided Navigation: Because NLU (Natural Language Understanding) can be unpredictable, I added suggested option buttons to guide users toward "happy paths" and act as guardrails against unrecognized inputs.
The challenge was making a rigid security verification feel like a natural conversation.
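The guided-navigation pivot maps directly onto Lex V2 response cards, which render as tappable buttons in the chat portal. A minimal sketch, with hypothetical option labels and intent values standing in for the real bot's:

```python
# Sketch of guided navigation: present suggested options as buttons
# on a Lex V2 ImageResponseCard so users stay on "happy paths"
# instead of free-typing input the NLU might not recognize.

def guided_options(title, options):
    # Lex caps response-card buttons at 5, so trim defensively.
    buttons = [{"text": label, "value": value} for label, value in options][:5]
    return {
        "sessionState": {"dialogAction": {"type": "ElicitIntent"}},
        "messages": [{
            "contentType": "ImageResponseCard",
            "imageResponseCard": {"title": title, "buttons": buttons},
        }],
    }
```

Each button's value is sent back as the user's utterance, so a tap reliably matches the intended intent even when typed phrasing would not.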
Dead Ends and Infinite Loops
Initial testing revealed two major friction points:
Premature Handoff: The bot was "too shy"—it frequently routed to live support for questions it actually had the data to answer, such as knowledge base questions. We wanted to route to agents for anything the bot couldn't support but still alleviate enough volume to impact our metrics.
Lack of Flexibility: Occasionally, if a user tried to decline a required field (like contact phone) or typed an answer like "I don't know" instead of selecting one of the offered options, the bot would repeat the question verbatim until it received an acceptable answer, even if the user requested an agent.
The Solution: We loosened the NLU thresholds and added contextual messaging. If a user was hesitant to provide a phone number, the bot now explains why it's needed (for verification/follow-up) rather than simply asking again.
What to Track After Launch
While the project launches in mid-January 2026, we have established clear metrics for success:
Increased CSAT scores
Decreased average wait times
Decreased time on call/chat
Decreased time to resolution
We also have a number of backlog features we’re looking to implement as fast-follows or as we’re able:
Confirming that ServiceNow ticket closure is automated via API for bot-resolved cases
Having the bot look up duplicate accounts when an initial search finds an archived account, rather than transferring to an agent to do so
Expanding account access support to IDM accounts
A more nuanced response when the user says they still need help, so the bot can handle follow-ups on the same issue differently from new issues
Expanding automated services to other platforms in the contract beyond EUS
Key Deliverables:
Comprehensive High-Fidelity Figma Mockups
Complex User Logic Flows & Journey Maps in FigJam
Documentation for the CMS AI Governance Board
User stories, Acceptance Criteria, and Scenario Tests for engineering and client-side UAT
Reflections
If given an unlimited budget and more time, I would have pushed for "User-First" discovery—speaking with providers and agents first to validate assumptions and find out what we don’t know.
This project taught me the importance of "getting my hands dirty" with the tech stack. In the future, I plan to dive deeper into the technical capabilities of platforms like Lex myself, rather than relying solely on engineering reports, to better advocate for the user's experience within technical constraints.