Generative Research | Exploratory Research
Making the Invisible Visible: Turning Unstructured Text into Actionable Insights with LLMs and Visualization
Overview
I developed a solution that transforms unstructured text into quantifiable, personalized feedback using large language models (LLMs) and prompt engineering—making abstract concepts like inclusive teaching measurable and actionable. I led the full research and development cycle, including iterative UX research, data visualization design, and the creation of custom LLM prompts to generate accurate, explainable outputs.
Working closely with engineers, subject matter experts, and designers, I ensured the system aligned with the product team's goals while remaining grounded in users' real-world needs and constraints.
This project tackled two core challenges:
Translating qualitative teaching practices into scalable, machine-interpretable feedback using LLMs and carefully engineered prompts.
Delivering feedback in a format that users trust, understand, and are motivated to act on—without overwhelming or alienating them.
The result was a working prototype that not only provided meaningful insights to real users but also inspired reflection and behavior change among instructors. This case study highlights my ability to combine ethical research, technical implementation, and strategic communication to drive product decisions and empower users.
The Problem
Instructors—the enterprise users of teaching tools—navigate high cognitive load and time pressure, often with limited support for translating institutional goals (e.g., equity and inclusion) into daily teaching practices. While these values are emphasized at the organizational level, they’re often abstract, difficult to measure, and hard to act on in context.
Stakeholders needed a solution that could:
Analyze instructional documents (e.g., syllabi) at scale
Deliver feedback grounded in inclusive teaching principles
Avoid judgmental language, instead supporting reflection and actionable change
The core challenge was designing a system that could make complex, qualitative practices both machine-interpretable and human-usable—balancing the precision of automation with the nuance required for meaningful reflection.
My Role
I led the end-to-end UX research and prototyping process, from initial concept development to iterative user validation. My responsibilities included:
Designing and conducting concept testing and user interviews to validate the idea and surface new opportunities
Developing a functional prototype that used LLMs and prompt engineering to convert qualitative instructional content into structured, actionable data
Creating interactive data visualizations to support user reflection and personalized feedback
Leading the design of research studies, synthesizing findings, and informing product direction through evidence-based insights
This project was a highly collaborative effort. I worked closely with faculty researchers, subject matter experts, engineers, designers, and a product manager to ensure the solution aligned with both institutional goals and real user needs.
Research Process
To identify opportunities for innovation, I began by conducting a literature review and field observations to understand the core challenges instructors face when implementing inclusive teaching practices. I observed two faculty groups engaged in curriculum redesign, analyzed working documents, and attended team meetings. Across both sources, a clear pattern emerged: instructors believe in the value of inclusive teaching but lack concrete, actionable guidance—and institutions struggle to evaluate these efforts at scale. These insights revealed a critical gap and strategic opportunity: to design a solution that translates abstract values into measurable practices and feedback, empowering both individual instructors and program-level decision-makers.
A. Research Framing & Methodology Design
Translated institutional equity and student success frameworks into clear, measurable components that could drive product features.
Adapted a validated inclusive teaching instrument into an LLM-compatible format, enabling the transformation of unstructured text (syllabi) into quantifiable data for feedback.
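To illustrate what an "LLM-compatible format" can look like in practice, the sketch below recasts a single rubric item as structured data with an explicit scoring scale and anchor examples. All names here (InstrumentItem, the sample criterion, the anchors) are hypothetical stand-ins, not items from the actual validated instrument used in the project.

```python
from dataclasses import dataclass, field

@dataclass
class InstrumentItem:
    """One criterion from an inclusive-teaching instrument, restructured
    so an LLM can score it directly from raw syllabus text."""
    item_id: str
    criterion: str                 # plain-language statement of the practice
    scale: tuple = (0, 1, 2)       # e.g., absent / partial / explicit
    anchors: dict = field(default_factory=dict)  # score -> example evidence

    def to_prompt_fragment(self) -> str:
        """Render the item as text that can be embedded in a scoring prompt."""
        lines = [f"Criterion {self.item_id}: {self.criterion}",
                 f"Score on a scale of {self.scale[0]}-{self.scale[-1]}."]
        for score, example in sorted(self.anchors.items()):
            lines.append(f"  Score {score} example: {example}")
        return "\n".join(lines)

# Hypothetical item, invented for illustration
item = InstrumentItem(
    item_id="B3",
    criterion="The syllabus names concrete channels for students to seek help.",
    anchors={0: "No mention of support resources.",
             2: "Office hours, tutoring center, and email norms all listed."},
)
print(item.to_prompt_fragment())
```

Structuring each criterion this way keeps the instrument's scale and evidence anchors attached to the text the model sees, which is what makes the resulting scores explainable.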
B. Prompt Engineering & LLM Development
Applied few-shot learning and chain-of-thought prompting techniques to optimize LLM accuracy for nuanced, context-dependent tasks.
Designed an automated scoring pipeline for analyzing inclusive teaching practices across syllabi, enabling feedback at scale.
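A minimal sketch of how few-shot demonstrations and chain-of-thought prompting can be combined in a scoring prompt like the ones described above. The example criterion, the worked examples, and the JSON response format are assumptions for illustration; a real pipeline would substitute its own model client and the validated rubric.

```python
import json
import re

FEW_SHOT = [  # hypothetical worked examples (few-shot demonstrations)
    {"excerpt": "Late work is accepted within 48 hours at no penalty.",
     "reasoning": "A flexible deadline policy lowers barriers for students "
                  "facing outside obligations, a marker of inclusive design.",
     "score": 2},
    {"excerpt": "No late submissions will be accepted for any reason.",
     "reasoning": "A rigid policy with no accommodation signals low flexibility.",
     "score": 0},
]

def build_prompt(criterion: str, syllabus_excerpt: str) -> str:
    """Compose a few-shot, chain-of-thought scoring prompt for one criterion."""
    parts = [f"You are scoring a syllabus for: {criterion}",
             "Think step by step, then give a score from 0 to 2.",
             'Answer with JSON: {"reasoning": "...", "score": N}', ""]
    for ex in FEW_SHOT:  # worked examples teach both format and reasoning style
        parts.append(f"Excerpt: {ex['excerpt']}")
        parts.append(json.dumps({"reasoning": ex["reasoning"],
                                 "score": ex["score"]}))
        parts.append("")
    parts.append(f"Excerpt: {syllabus_excerpt}")
    return "\n".join(parts)

def parse_score(llm_response: str) -> dict:
    """Extract the JSON verdict from a model response, tolerating extra text."""
    match = re.search(r"\{.*\}", llm_response, re.DOTALL)
    return json.loads(match.group(0))

prompt = build_prompt("flexibility in assessment policies",
                      "Extensions are granted case by case; just ask.")
verdict = parse_score('Sure. {"reasoning": "Case-by-case extensions show '
                      'willingness to accommodate.", "score": 1}')
```

Requiring a reasoning field before the score is what makes the output explainable to instructors, not just a number; looping build_prompt and parse_score over every criterion and syllabus yields the scoring pipeline at scale.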
C. Data Visualization & Interaction Design
Developed interactive, hierarchical visualizations (e.g., sunburst charts, bar graphs) using Python (Plotly) and Svelte/D3.js.
Translated complex data into intuitive displays that helped users quickly identify strengths and gaps.
Designed abstraction layers (e.g., “Teaching Function”) to enhance usability and interpretability for diverse users.
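To make the hierarchical view concrete, here is a sketch of rolling per-criterion scores up through an abstraction layer into the parallel labels/parents/values arrays that a sunburst trace consumes. The "Teaching Function" groupings and the scores below are invented for illustration, not project data.

```python
# Hypothetical per-criterion scores, grouped under an abstraction layer
# ("Teaching Function") to keep the chart readable for non-expert users.
SCORES = {
    "Support": {"Office hours listed": 2, "Tutoring resources named": 1},
    "Flexibility": {"Late-work policy": 2, "Multiple assessment formats": 0},
}

def to_sunburst_arrays(scores: dict):
    """Flatten a two-level score hierarchy into parallel labels/parents/values
    arrays, the input format expected by plotly.graph_objects.Sunburst."""
    labels, parents, values = ["Syllabus"], [""], [0]
    total = 0
    for function, criteria in scores.items():
        subtotal = sum(criteria.values())
        total += subtotal
        labels.append(function)          # inner ring: rolled-up function total
        parents.append("Syllabus")
        values.append(subtotal)
        for criterion, score in criteria.items():
            labels.append(criterion)     # outer ring: individual criterion score
            parents.append(function)
            values.append(score)
    values[0] = total  # root must equal the sum of its children
    return labels, parents, values

labels, parents, values = to_sunburst_arrays(SCORES)
```

These arrays plug straight into go.Figure(go.Sunburst(labels=labels, parents=parents, values=values, branchvalues="total")), giving users the drill-down from a function-level overview to individual criteria.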
D. Concept Testing & User Feedback
Led remote concept testing and user interviews with instructors to validate core product ideas and uncover emotional responses.
Used visual probes and scenario-based walkthroughs to gather actionable feedback on usefulness, clarity, and trust.
Synthesized insights to drive iterative improvements in both interface design and system logic.
E. Impact Measurement
Evaluated early-stage impact by tracking instructor engagement, behavioral intent, and reflective change after prototype use.
Identified key motivators and friction points for adopting inclusion-supportive practices—informing both product roadmap and messaging strategy.
Key Challenges
Translating abstract values into actionable product features
Tackled the complexity of making intangible concepts like “belonging” and “inclusiveness” computable and measurable, enabling structured feedback from unstructured content (e.g., syllabi).
Designing for trust in a sensitive feedback space
Navigated ethical UX challenges by crafting non-judgmental, reflective user experiences—addressing potential defensiveness, resistance, and emotional responses to automated feedback.
Aligning cross-functional stakeholders around shared goals
Bridged communication gaps between researchers, engineers, designers, and faculty experts, translating insights across disciplines and ensuring alignment despite differing vocabularies and priorities.
Outcome & Impact
Delivered a functional, LLM-powered prototype that was tested and used by instructors across multiple disciplines.
Received positive feedback from users, who found the tool helpful for reflection and identifying actionable areas for growth.
Observed qualitative behavior change, including increased self-awareness, thoughtful planning, and institutional interest in scaling the solution.
Created a reusable design and research framework with strong potential for adaptation in other domains—such as HR, healthcare, or compliance—where unstructured content must be translated into actionable feedback.
What I Learned
This project deepened my understanding of how to design AI-powered systems that users trust and engage with, especially in high-stakes, values-driven domains like education. I learned how to translate abstract concepts like inclusion into measurable behaviors, and how to make those metrics meaningful—both to users and to institutional stakeholders.
I also strengthened my ability to balance automation with empathy, crafting feedback that supports reflection rather than judgment. Collaborating across disciplines challenged me to communicate research insights clearly to engineers, faculty, and designers—bridging technical complexity with human-centered needs.
Finally, I saw firsthand how thoughtful UX research can shape not just product decisions, but also organizational mindsets and long-term strategy—reinforcing the value of research as both a design driver and a system-level change agent.