RADARs
2024 - 2025 Responsible Innovation Labs
Design Team Lead
RADARs is a 0-to-1 web product designed to support early-stage AI startups navigating Responsible AI. As the design team lead, I worked in a space where expectations were high, guidance was fragmented, and direction was unclear.
Working on this six-month project was deeply rewarding. Throughout, I focused on reframing the problem and shaping product direction—so Responsible AI wasn’t just a vague concept, but something startup teams could actually use in their day-to-day product decisions. I guided the UW design team on product strategy and user experience design, in collaboration with our NPO partner, Responsible Innovation Labs, to bring RADARs from concept to a functional web app.



THE
Problem Space
Early-stage AI startups are increasingly expected to think about responsibility, transparency, and risk. But most existing Responsible AI guidance is abstract, fragmented, or written for large organizations with legal and policy teams.
For small teams moving fast, this often creates friction instead of traction. Responsibility in AI feels important, but hard to interpret in practice. Startup teams know they should care, but don’t know where to start, what applies to them, or how to translate principles into real product decisions while still moving quickly.
RADARs lives in that gap between expectation and action.
THE
Roadblock
While the problem space was clear at a high level, it was not clear how to design for it.
Early on, the biggest challenge wasn’t building—it was deciding who we were designing for and what would actually help them move forward. The same ambiguity that frustrated startup teams also made it hard for our design team to define what the product should be.
Until we understood why startup teams were stuck, and what kind of support would actually be valuable for them, progress on the product itself remained blocked.
That tension became the core challenge RADARs needed to resolve.

1. Stepping into the ambiguity
How the work unfolded
At the start of RADARs, the problem space was broad and ill-defined. Responsible AI touched ethics, regulation, technology, and business, but there was no clear entry point. Jumping into solution design too early would have meant guessing what mattered.
I decided to slow down visible progress and prioritize understanding the landscape as deeply as possible before committing to any product direction.
Why?
Without shared grounding, any solution risked being superficial or misaligned. I believed that grounding the team in the broader context was necessary to avoid locking the team into the wrong problem.
We ended up building a shared understanding and research base of the problem space, and that became the foundation for all our future decisions.


2. Finding the anchor
How the work unfolded
As research expanded, the scope grew rapidly. Everything felt relevant, and the team risked getting lost in abstraction. Progress was slowing down because there were just too many possible directions.
I narrowed the scope based on our core research insight—“They don’t know what they don’t know”—and anchored the project around Responsible AI risks: helping startup teams identify, understand, and explain which risks apply to their product and why those risks matter.
Why?
We couldn’t “boil the ocean,” and we couldn’t afford to get lost in open-ended exploration. A research-rooted anchor would give the team a kick-start on design work.
This way, we established a clearer sense of who we were helping and what problem we were solving first. We started design with intention.

3. When the plan broke
How the work unfolded
Our primary research plan depended on interviews with early-stage AI startups. After 400+ outreach attempts, only a few responded, and none agreed to participate. We hit a dead end.
I decided to pivot the research strategy rather than wait indefinitely for startup access. I reassessed the ecosystem and chose to engage investors as a secondary stakeholder.
Why?
Waiting would have stalled the project indefinitely. With our target user group inaccessible, the next-closest stakeholder group was our best shot. Investors turned out to have a different but deeply connected perspective on Responsible AI—one tied to trust, credibility, and survival.
This pivot unlocked new insights and access to more startup teams: startups care about Responsible AI primarily when it affects customer trust and sales. This led us to a stronger anchor—helping startup teams not just understand but demonstrate AI transparency to support their business outcomes. We restored momentum.
“I don’t care.”
“The last thing we need is for you to walk in the door and educate me about ‘responsible AI.’”
- AI startup founders

4. Reframing the problem
How the work unfolded
As prototyping and early testing progressed, it became clear that simply providing an AI transparency report was not enough. Startup teams still struggled to act on the information.
I reframed the product from an AI transparency reporting tool into a decision-support system.
Why?
Information alone is not enough to change behavior. What AI startup teams needed was help deciding what to do next—quickly and confidently.
This reframing turned into our central design principle. It guided our research synthesis, system structure, and how value would be delivered through the product.

5. Creating structure to move forward
How the work unfolded
Even with a clear anchor and a reframed design question, the next steps were still blurry—and we were running out of time. The challenge was translating the idea of decision support, quickly, into a concrete and usable product experience.
I approached this through systems thinking and designed a structured, time-bounded product flow that prioritized focus and efficiency for startup teams.
Why?
Startup teams are time-constrained. Anything that requires long onboarding or heavy effort would fail in practice. The product needed to guide users through the right journey, at the right pace.
This resulted in a clear product structure which then led to a concrete path to build and launch.


6. Build
How the work unfolded
Time was ticking: we needed an MVP, but we had no engineering resources, and finding additional technical collaborators was impossible under our budget constraints.
After weighing our goal, resources, and constraints, I made the call to build the product ourselves using existing AI tools and no-code platforms.
Why?
Waiting for perfect resources would have stalled the project entirely. A quick, functional MVP was far more valuable than a polished one—it would let us validate the product direction quickly and maintain momentum within the time constraint.
We successfully built a working MVP using Webflow, Zapier, Firebase, and AI-assisted workflows in four weeks. This allowed us to test with users, demonstrate core value to our stakeholders, and launch as a coherent 0-to-1 web application.




THE
Outcome
Decision-support Tool
By the end of RADARs, we had a real 0-to-1 product in the world.
What started as a broad, abstract conversation about Responsible AI became a focused decision-support experience for early-stage startups. The product helps startup teams move from “we should care about this” to “here’s what we need to do next,” without adding unnecessary complexity or overhead.

Team Synergy
For me, one of the most meaningful outcomes was seeing our team move from disagreement and uncertainty to alignment. At the beginning of the project, we all had different interpretations and assumptions. The ambiguity of the problem space itself made it easy for us to drift in different directions.
Throughout the six months, I focused on creating a shared understanding—through research synthesis, reframing discussions, and structuring the product direction in ways everyone could reference. As the project took shape, our conversations shifted from debating what we should do to collaborating on how we would do it. We got on the same page and worked in unison toward the launch of our product. It was the best teamwork experience I’ve ever had.


THE
Reflection
Navigate ambiguity
This project changed how I feel about ambiguity.
Early on, the problem space felt overwhelming. There were no clear answers, and there were many moments when I wasn’t sure what the “right” direction was. Instead of rushing to solutions, I learned to sit with that ambiguity—to slow down, ask more questions, and understand what was actually blocking progress.
Leading this project pushed me to make decisions with incomplete information, to pivot when plans failed, and to take responsibility for direction even when there wasn’t a clear path forward. It was uncomfortable, but also incredibly formative. I learned that progress doesn’t always come from having better answers, but from asking better questions and making thoughtful tradeoffs.
Create clarity through structure
What this project taught me about product work is that clarity doesn’t just show up—you have to create it.
As the project progressed, I started turning messy insights and debates into shared frameworks, product structures, and decision points. Once that structure was in place, everything else became easier: collaboration, design, and execution.
This reinforced something that now feels core to how I approach product work: meaningful impact often comes from creating structure in complex spaces—structure that helps people decide, act, and move forward with confidence.



