UX for GEN AI

Transformed repetitive enterprise tasks into automated workflows, freeing up time for high-value work

Duration

Sept 2024 - Dec 2024

My Role

Extensive UX research, UX writing, iterating on designs based on feedback, and design handoff to developers

Team

2 UX Designers, 2 Product Managers

IMPACT

Reduced Manual Effort by 40%

Cut repetitive task execution by 40% by automating multi-step workflows through AI.

Increased Work Focus Time by 30%

Freed up 30% more time for strategic work by shifting focus to decision-making.

Reduced Cognitive Load

Minimized mental effort by 35% by preserving context and eliminating repetitive actions.

OVERVIEW

Introduced a company-wide AI interface that transforms how employees access, reuse, and act on information across workflows.

It answers company-specific questions, assists with documentation, synthesizes reports, and more. To make these interactions more efficient, it also supports the creation of persistent, task-specific assistants. These assistants help employees maintain continuity in their AI workflows.

PROCESS

Translated user frustrations into a more seamless, context-aware AI workflow over a 3-month design cycle

GOAL

We set out to design a buildable assistant framework: a way for users to…

♻️

Create Once, Re-use Always

Create assistants once and reuse them continuously.

🎚️

Flexible Creation Path

Choose between conversational or form-based creation.

🧪

Test & Refine

Test and iterate on prompts before finalizing.

We believed a dual-path creation model (guided vs manual) with persistent memory would unlock long-term value and deepen user trust in AI.

RESEARCH

Shaped the product direction through direct user insights, hands-on testing, and collaborative exploration

Interviewed 10+ employees across 5 teams to uncover workflow gaps and expectations from AI.

Conducted moderated usability tests with the existing builder to identify friction in setup, context handling, and task flow.

Benchmarked tools like ChatGPT, Claude, and Perplexity to understand patterns, gaps, and opportunities.

Held co-design workshops to explore workflows, improve clarity, and validate early ideas.


INSIGHTS

From interviews with 10+ users across 5+ departments, along with the rest of our research, we identified the following themes

🎯

Most platforms didn’t offer persistent assistant creation.

🤖

'Conversational' creation felt natural, but lacked structure and support.

💬

Form experiences worked better when labels felt like questions.

🔍

Users preferred clear guidance over excessive flexibility, especially when working with AI.

Users needed clarity, not just control.

Explored a wide range of ideas, then focused on the ones that delivered the most impact with practical feasibility.

To move forward, we needed to identify what to build first. We conducted a 3-axis prioritization exercise, scoring each idea based on:

  1. User Impact: How much value it would bring to employees.

  2. Design Effort: Time and complexity to prototype and test.

  3. Development Effort: Engineering cost and feasibility.

This helped us map ideas into quick wins vs. long-term investments.

Idea Prioritization based on Scoring User Impact, Design & Dev Effort

SOLUTION

Combined conversational and structured flows to help users create, test, and refine assistants without starting from scratch

Flowchart for Creating Assistant

Final Design for Creating Assistant

KEY DESIGN DECISIONS

Our goal was to reduce friction and cognitive load without sacrificing flexibility, so we made several key decisions to deliver clarity at every step.

Switchable paths
⦿ Some users preferred guidance; others wanted control. So we allowed seamless switching between bot-led and form-led creation, without loss of progress.
⦿ This met users where they were and respected their comfort levels.

Prompt scaffolding with starter templates
⦿ We introduced structured prompts with pre-filled starter text to help users articulate what they wanted their assistant to do.
⦿ This removed ambiguity and gave the LLM a better foundation to work from, resulting in clearer, more grounded outputs.

Preview first, not after submitting
⦿ Rather than asking users to “submit and hope,” we enabled live, inline previews of assistant responses.
⦿ This helped users test prompts incrementally, building trust through visibility and reducing fear of getting it wrong.

Progressive disclosure of capabilities
⦿ In early versions, users couldn’t find advanced settings. We redesigned this to surface capabilities contextually.
⦿ This approach kept the UI clean for novices but still powerful for advanced users.

CHALLENGES & COLLABORATION

Advocated for long-term scalability while navigating business, technical, and timeline constraints.

We were working toward reorganizing the IA and making space for advanced capabilities, but the sprint was focused on shipping refinements for immediate rollout.

So we held off on foundational improvements we felt were critical for scale. Still, we optimized what we could, ensuring the experience remained clear, flexible, and ready for future evolution.

NEXT STEPS

Identified next steps to evolve the assistant from a reactive tool into a more proactive system

Automatically generate assistants from repeated queries and usage patterns, minimizing manual effort.

Move toward an Agentic Assistant model, where the assistant uses context to act and respond proactively.

LEARNINGS & TAKEAWAYS

How this project helped me grow as a designer

01

Prompt Design is UX too

Designing structured, supportive prompts isn’t just about backend performance. It's about empowering users to communicate clearly and get reliable outcomes.

I honed my communication skills through direct client engagement and gained the ability to understand and address client needs effectively.

02

Users Need Trust Cues

AI outputs aren’t always predictable, so we designed for reassurance. Real-time previews helped users see what the assistant “understood,” and fallback options ensured users never felt stuck or misled by the system.


©2026 Sakshi Sonawani 🤍