Designing AI-Integrated Research Enablement Systems for Scalable, Governance-Aligned Decision-Making

Overview

Across product and operations teams, research practices were inconsistent, difficult to scale, and often disconnected from AI and product development workflows. Teams lacked clear guidance on how to conduct research, evaluate tools, and translate insights into actionable decisions.

To address this, I designed and implemented a structured research enablement system that standardized research practices, integrated insights into AI and product lifecycles, and improved governance-aligned decision-making. The system scaled across product, operations, and cross-functional AI workflows, positioning research as core decision-making infrastructure rather than a supporting function.

Problem

Teams faced several challenges, including:

  • Inconsistent research practices leading to unreliable insights

  • Limited understanding of when to use moderated vs. unmoderated methods

  • Difficulty translating research into product and AI decision-making

  • Lack of governance alignment, auditability, and traceability

  • Low confidence among cross-functional teams conducting research independently

These gaps slowed decision-making and introduced risk in both product development and AI-related workflows.

Approach

Designed a comprehensive enablement system focused on four core areas:

1. Training & Capability Building

Developed and delivered training programs to improve how teams design studies, conduct interviews, and synthesize insights.

  • Taught end-to-end research design and best practices

  • Introduced ethical considerations and responsible data collection

  • Trained teams to produce reliable, reproducible research outputs

2. Decision Frameworks & Governance Integration

Created clear frameworks to guide research decisions while aligning with governance and compliance requirements.

  • Defined when to use moderated vs. unmoderated studies based on risk and context

  • Established criteria for selecting research platforms, including auditability and data governance compliance

  • Integrated research checkpoints into AI and product development lifecycles
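As an illustration, the moderated-vs-unmoderated decision rule described above can be encoded as a small, auditable function. The risk levels and criteria names here are hypothetical, a sketch of the idea rather than the actual framework:

```python
# Hypothetical sketch of a study-method decision rule: high-risk or
# exploratory contexts favor moderated sessions; low-risk evaluative
# work can run unmoderated. Criteria names are illustrative only.

def choose_method(risk: str, needs_live_probing: bool) -> str:
    """Return 'moderated' or 'unmoderated' for a planned study.

    risk: 'low', 'medium', or 'high' -- e.g. an AI feature touching
    sensitive data would be 'high'.
    needs_live_probing: True when follow-up questions matter more
    than sample size (early discovery, ambiguous workflows).
    """
    if risk not in {"low", "medium", "high"}:
        raise ValueError(f"unknown risk level: {risk!r}")
    if risk == "high" or needs_live_probing:
        return "moderated"      # richer context, tighter oversight
    return "unmoderated"        # faster, larger samples, lower cost

print(choose_method("high", False))   # moderated
print(choose_method("low", False))    # unmoderated
```

Making the rule explicit like this is what allows it to be reviewed, versioned, and audited alongside other governance artifacts.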

3. Knowledge Systems & Documentation

Built structured documentation practices to ensure insights were accessible, traceable, and reusable.

  • Designed centralized research repositories (Confluence-based)

  • Implemented taxonomy and version control for research artifacts

  • Linked qualitative insights to performance metrics, risk assessments, and post-deployment monitoring
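The taxonomy and version-control practices above imply a structured record for each research artifact. A minimal sketch of such a record follows; the field names are illustrative, not the actual Confluence schema:

```python
# Hypothetical shape of a research-artifact record in a centralized
# repository: controlled-vocabulary tags plus a version stamp make
# findings traceable and reusable. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ResearchArtifact:
    title: str
    method: str                  # e.g. "moderated interview"
    taxonomy: list[str]          # controlled-vocabulary tags
    version: int = 1
    linked_metrics: list[str] = field(default_factory=list)

    def revise(self) -> "ResearchArtifact":
        """Bump the version when findings are updated, preserving
        an auditable history of the artifact."""
        self.version += 1
        return self

study = ResearchArtifact(
    title="Onboarding friction study",
    method="unmoderated usability test",
    taxonomy=["onboarding", "activation"],
    linked_metrics=["activation_rate"],
)
study.revise()
print(study.version)  # 2
```

The `linked_metrics` field is what connects qualitative insights to performance metrics and post-deployment monitoring, as described above.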

4. Adoption, Communication & Change Management

Ensured successful adoption by aligning stakeholders and embedding practices into daily workflows.

  • Trained teams to translate research into clear business and AI impact

  • Facilitated structured discussions to drive alignment across teams

  • Supported leadership in transitioning from manual to data-driven and AI-supported workflows

Key Contributions

  • Designed and scaled a research enablement system across cross-functional teams

  • Established governance-aligned research practices supporting AI and product decision-making

  • Improved consistency and reliability of research outputs

  • Enabled non-researchers to confidently conduct and apply research

  • Embedded research into AI and product lifecycles for continuous insight generation

  • Introduced repeatable frameworks that standardized research and decision-making practices across teams

Impact

  • Increased team autonomy and reduced reliance on centralized research support

  • Improved quality and consistency of insights used in product and AI decisions

  • Strengthened governance readiness through improved documentation and traceability

  • Accelerated decision-making by making research more accessible and actionable

  • Enabled scalable, repeatable research practices across teams

How This Connects to AI Enablement

This work established the foundation for designing AI learning and enablement systems.

By improving how teams gather information, evaluate outputs, and make decisions, I developed a repeatable approach to training humans to interact effectively with complex, AI-driven systems, enabling more accurate, confident, and responsible decision-making.

This same approach now extends to:

  • AI literacy and prompting

  • responsible AI usage

  • human-AI collaboration

  • behavioral risk awareness

Approach to AI Enablement & Decision Systems

Across projects, I apply a consistent approach to building AI-enabled systems:

1. Understand how decisions are made in real-world environments

2. Structure AI outputs into usable, auditable decision flows

3. Embed human-in-the-loop validation and oversight

4. Align systems with governance, risk, and compliance requirements

5. Enable adoption through training, workflows, and operational integration

This approach ensures AI systems are not only technically effective but also trustworthy, usable, and scalable.

Testimonials

  • "I highly recommend Selena for mentoring and research leadership roles due to her exceptional expertise in UX design. Under her guidance, my professional growth was significantly influenced, and my skills in UX design and strategic thinking were greatly enhanced."

    Diego Rivera, Visual Designer Engineer

  • "Selena is a natural mentor, inspiring me and those around them to continuously strive for excellence and always reminding everyone that humans come first. With their wealth of experience and exceptional research skills, Selena is a valuable asset to any organization."

    Daniel Johnson, UX/UI Designer

  • "As a UX Researcher, she cares about the user’s goals and experiences when using an application, and strives to help make a product better. She was my mentor, and I learned so much. Her instructional material and presentation style are clear and easy to understand."

    Vespera Palmeras, UX Product Designer