Microsoft: UXR for Python & AI

From quick-pulse usability to org-level strategy, I convert noisy signals into decision-grade insights and operating frameworks that mature Microsoft’s developer and AI experiences.

2019–Present

I joined Microsoft in 2019 as a contract UX researcher supporting Python developers and data scientists in VS Code. Over time, my scope expanded from tactical usability testing to strategic research shaping cloud services, AI-powered developer tools, and intelligent application experiences. I evolved from running weekly usability studies as an individual contributor to leading multi-study research programs that informed cross-org product direction and long-term roadmap decisions.

Each anchor project reflects sustained, multi-phase research efforts tied to major product investments.

Anchor Projects

  • When AI reshaped the roadmap, the UXR team led a research-intensive PM/customer workshop series and synthesized findings into a Customer Capability Model (CCM), defining staged maturity with measurable indicators. I owned end-to-end synthesis (reviewed interviews, built customer cards, codified patterns) and co-led operationalization of the model across teams, guiding investment decisions across Copilot, Foundry, AI Toolkit, and intelligent application initiatives.

    Discovery work on “AI playgrounds” directly informed early AI Toolkit concepts. As agentic use cases emerged, I led a large-scale survey of agentic development to map practices, risks, and prioritization. In parallel, I established lean, repeatable research loops to accelerate decision-making without reducing rigor. Within ~18 months, the AI Toolkit evolved into the primary code-first experience in VS Code, was renamed Foundry Toolkit at GA, and grew MAU 30x.

    Led two industry-signal studies within Foundry: one shaping “lift” (cloud partnerships and startup acquisitions) with recommendations surfaced to senior leadership that directly influenced partnership direction and investment sequencing; and another identifying priority industries for new investment, delivered across marketing and product leadership to align go-to-market and investment strategy.

    Scope & responsibilities

    • Established weekly AI Toolkit research studies with a globally distributed team, spanning both long-term strategy and emergent usability needs

    • Co-defined capability model for agentic and intelligent app development, aligning PM, Eng, and Design across product areas

    • Leveraged quantitative signals (surveys, telemetry, dashboards) to identify adoption drivers, friction points, and strategic opportunities

    • Led competitive analysis across the agentic development landscape to surface trends, gaps, and strategic risks

    • Synthesized cross-study insights into customer cards, patterns, and actionable recommendations

    • Partnered with leadership on AI strategy workshops, translating findings into frameworks and roadmaps

    • Led discovery for Copilot and LLM development workflows, maintaining tight learn → decide loops

    • Designed and operationalized large-scale surveys to inform strategic prioritization and sequencing

    Impact

    • AI Toolkit/Foundry Toolkit became the primary code-first experience in VS Code, growing MAU 30x within ~18 months

    • Established CCM as a shared decision framework across teams, standardizing language for capability maturity and reducing ambiguity in investment decisions

    • Drove product changes across onboarding, setup, authentication, and deployment flows for Python and Node developer experiences

    • Introduced a reusable CVT + lean research system that accelerated cross-org decision velocity

    • Surfaced demand for model customization and fine-tuning, establishing a new Foundry opportunity area

    • Shaped cloud partnership and acquisition strategy through industry-signal studies and executive-facing synthesis

    • Consolidated fragmented research streams into shared cross-org planning inputs used for prioritization and sequencing

  • Multi-year focus on the DS notebook experience. I drove definition, validation, and iteration for core features in our VS Code development environment—history diffing, Gather (extracting the minimal code needed to reproduce a cell’s result), line-by-line debugging—and led the research track behind Data Wrangler to reduce data-cleaning pain.

    Facilitated Python team workshops that generated 40+ concepts, stood up a weekly validation cadence, and introduced a lightweight concept spec to align language and fidelity—it works with a basic description, a user flow, or a working prototype.

    When strategy shifted toward AI, I adapted priority concepts for the new paradigm and evolving customer needs.

    Scope & responsibilities

    • Own DS research across VS Code (Python/Jupyter) and Azure Notebooks; partner with PM, Eng, and Design on problem framing and decision criteria.

    • Facilitate concept ideation workshops; drive backlog triage and sequencing; coach teams from idea → testable artifact → decision.

    • Manage recruitment/screeners for varied DS personas; keep a steady learn–build loop with weekly testing.

    • Triangulate qual with surveys/telemetry; maintain a concept portfolio and funnel for evidence-based prioritization.

    • Operationalize a reusable, lightweight concept spec and playbooks for rapid research.

    Impact

    • Direct product changes: shipped history diffing, Gather, line-by-line debugging, and drove the research behind Data Wrangler—reducing copy-paste thrash and accelerating data prep.

    • Established a reusable CVT framework and weekly test cadence that accelerated decision cycles and cut concept rework.

    • Prioritized and matured a portfolio of 40+ concepts; coached teams to move faster with clearer problem statements and acceptance criteria.

    • Adapted the concept set for the AI era, preserving user value while aligning with new capabilities and guardrails.

    • Improved notebook UX (state clarity, debugging flow, reproducibility) and reduced friction in the E2E data-cleaning pipeline.

  • Building on Quick Pulse insights, I connected patterns across multiple studies to surface cohort-level friction across VS Code and the Azure end-to-end developer journey. This synthesis highlighted systemic onboarding and workflow issues across Python and Node developers. As a result, leadership stood up a dedicated E2E team; I led the Python and Node research tracks while continuing to operate the Quick Pulse pipeline.

    This work resulted in a cross-product adoption framework used to prioritize, interpret, and track developer experience signals across teams.

    Scope & responsibilities

    • Own research for Python/Node end-to-end developer experience across Azure services; define learning goals with PM, Eng, and Design.

    • Facilitate research sessions and cross-functional workshops to drive synthesis and decision alignment.

    • Manage recruitment and screening across cohorts (new-to-cloud, migrating, advanced developers).

    • Triangulate telemetry, survey data, and product usage with qualitative findings.

    • Communicate insights, implications, and recommendations to leadership and partner teams.

    Impact

    • Drove direct product changes to onboarding, environment setup, authentication, and deployment flows for Python and Node developers, improving first-run experience and end-to-end workflow continuity.

    • Established a cross-product E2E adoption framework used by partner teams to prioritize and interpret developer experience signals.

    • Consolidated fragmented research and telemetry into shared org-wide playbooks and decision guardrails, improving consistency and reducing time-to-decision across teams.

  • Quick Pulse is one of Microsoft’s continuous-learning mechanisms: a lightweight weekly study that engages customers, creating a reliable pulse on product health and opportunity. I initially ran QPs solo end-to-end; as scope expanded, I onboarded and managed a contractor while staying accountable for quality, rigor, and synthesis. I partner with PM, Engineering, and Design to set learning goals, pressure-test hypotheses, and turn evidence into roadmap moves across VS Code, Azure/Notebooks in VS Code, Documentation, Azure, and more.

    Scope & responsibilities

    • Own research planning, protocols, and success criteria; ensure methodological fit and signal quality.

    • Facilitate sessions, drive rapid synthesis, and align cross-functional stakeholders on decisions.

    • Manage recruitment/screeners and lab/remote setups; standardize templates/playbooks; oversee contractor execution.

    • Make sense of quantitative signals in light of qualitative feedback.

    Impact

    • Direct product impact: shipped new features, refined/renamed patterns, and tightened workflows based on week-over-week evidence.

    • Connected “disconnected” team study sessions into cross-study patterns and emergent cohorts—unlocking clear prioritization and net-new opportunities.

    • Shortened the learn→build loop and de-risked releases via a repeatable, closed-loop research cadence adopted by partner teams.

Guiding Principles

  • I can be a scrappy researcher when needed—pragmatic, not paralyzed by perfection. My goal is to design approaches that are rigorous and grounded while still being achievable within constraints.

  • Insights don’t have to wait until something ships: The earlier we learn, the smarter we build. I make an effort to embed myself within teams—joining standups, reviews, and design discussions—to stay close to what’s emerging and to surface opportunities to help teams learn as they build. By working alongside engineers and PMs from the start, I identify unknowns earlier, shape features with evidence, and keep learning continuous, not episodic.

  • I leverage signals from every source available—telemetry, dashboards, surveys, hands-on explorations, and more—to illuminate what’s really happening. In addition to our users, I engage with engineers, designers, and other researchers to connect the dots and make sense of the mess. Research isn’t just about collecting data; it’s about driving clarity through the convergence of data, context, and human insight.

  • My approach to collaboration is grounded in my belief that, as a UXR, I’m here to help people succeed. That includes my colleagues. I treat every individual, regardless of role, with the same care and curiosity I would a customer in a user study. I need to understand their needs, pain points, and concerns so I can help them navigate uncertainty and design better experiences together.

  • When things feel uncomfortable or challenging, I view it as an opportunity for growth. I believe in intentionally taking on new problems, goals, and skills, and I enjoy ambiguity that lets me stretch my expertise, experiment, and learn from other disciplines. It is in these uncomfortable moments that I learn, adapt, and adopt new ways of thinking.