
Dr. Kavita Mittapalli is an education researcher and program evaluator with over 20 years of experience. She founded MN Associates, Inc. in 2004 to support equity-focused work in PK–16 education through meaningful evaluation and research.

Kavita and her team of mixed methodologists and statisticians work with schools, colleges, universities, and organizations across the U.S., evaluating programs and initiatives funded by federal agencies like the National Science Foundation, US Department of Education, Department of Defense, NASA, National Oceanic and Atmospheric Administration, and US Department of Agriculture.

MNA’s focus areas include STEM education, teacher preparation, arts integration, digital learning, and workforce development.

Known for her practical and culturally responsive approach, she brings data, context, and equity together to inform real change.

Learn more at www.mnassociatesinc.com

Just published: The Hidden Career – an illustrated storybook introducing program evaluation to high schoolers: Read it here


In this candid and insightful conversation, Kavita Mittapalli, PhD, shares her unconventional journey into evaluation, revealing how real-world experience, interdisciplinary thinking, and a commitment to social justice can shape a deeply impactful career.

What emerges is a philosophy of evaluation that is both pragmatic and purpose-driven. For her, evaluation is not just about generating data; it’s about making that data meaningful, usable, and centered on the communities it’s meant to serve. Whether working with under-resourced schools or national research projects, she brings systems thinking, contextual awareness, and a deep respect for human relationships into every evaluation.


  1. Given that the US education sector has historically been a birthplace and testing ground for evaluation as a profession, I’d love to hear how your journey as an evaluator in this sector began and how your specialization has evolved over time. What were the key turning points that shaped your career? Were they strategic moves, or did unexpected opportunities play a role?

My path into evaluation didn’t begin in a traditional way. I actually started in agriculture—studying plants, systems, sustainability, and how people interact with the land. That early exposure taught me that data matters, but context matters more. From there, I pivoted to applied sociology and education research, where I learned to see the structures and systems shaping people’s lives. That transition opened my eyes to research as a tool for social change, not just scholarship.

It was during my time in education research that I began to see the limitations of theory without practice. I was part of a large-scale project working with under-resourced schools across the country, and the educators weren’t asking for another literature review—they needed timely, usable insights to help serve their students better. They wanted to see data! That was the spark. Evaluation gave me the ability to ask, “How is this working for the people it’s meant to serve?” and then do something with the answer. Enter mixed methodology!

As for key turning points, some were strategic, like building partnerships with 2-year and 4-year institutions across the nation. Others were beautifully unexpected: a call from a colleague who thought I’d be a good fit for an NSF project or a last-minute invitation to observe a program on student retention that turned into a multi-year collaboration. Over time, my specialization evolved to focus on STEM education and workforce development with an equity lens—particularly for minoritized and underserved communities. That blend of systems thinking, sociological insights, and educational grounding has become my anchor.

  2. Building on your unique path and the turning points that shaped your career, what three concrete actions would you recommend to young evaluators who are just starting out and hoping to break into the highly competitive field of evaluation?
  1. Start evaluating before someone pays you to do it. Volunteer for nonprofits, community programs, or university initiatives. Create logic models, draft survey questions, or help interpret data. Experience builds credibility.
  2. Get into communities of practice. Join AEA (American Evaluation Association), attend a TIG (Topical Interest Group) meeting, participate in webinars, or just show up to virtual coffee chats. You’ll learn just by listening—and eventually, you’ll be contributing. [Interviewer’s note: For those based in Europe, the European Evaluation Society (EES) offers similar opportunities for connection and learning.]
  3. Find a mentor—or three. The field is built on relationships. Reach out to someone whose work you admire. Ask thoughtful questions. Don’t just look up—also build lateral peer relationships. Your cohort becomes your community.
  3. What are common mistakes to avoid when starting out?
  • Confusing evaluation with research. They overlap, yes, but evaluation is about utilization and use. If your report is gorgeous but no one knows what to do with it, you’ve missed the mark.
  • Overpromising and underdelivering. New evaluators sometimes want to prove themselves by doing everything. It’s better to scope realistically and overdeliver.
  • Ignoring the politics and people. Evaluation happens in real-world systems, with real people and real power dynamics. If you’re only looking at the logic model and not the relationships, your findings may miss the context.
  4. How long does it typically take to become an established evaluator? How does the progress of an evaluator typically unfold over the course of the first three to five years?

Evaluation is a long game. You can start building credibility in the first year or two, but it usually takes 5–7 years to become “established”—meaning you’re recognized, trusted, and have a solid portfolio. The first year is often about learning the ropes: methods, terminology, and client dynamics. In years 2–3, you’re usually refining your skills, gaining confidence, and starting to shape a niche. By years 4–5, if you’ve been intentional, you’re likely leading evaluations, mentoring others, or managing stakeholder relationships.

Progress isn’t linear—but it builds. Every project adds to your story.

  5. How do you stay updated with the latest trends and methodologies in evaluation, and what resources would you recommend to newcomers?

I treat learning as part of the job, not as an afterthought. Here’s how I stay current:

  • AEA’s EvalTalk listserv and AEA365 blog are good reads. They bring new methods, ethical debates, and voices.
  • Conferences—AEA, CREA, Claremont, Evaluator’s Institute—aren’t just about content; they’re about community. I always walk away with at least one idea and five new contacts.
  • Podcasts and newsletters like The Evaluators’ Collective, BetterEvaluation, and EvalCafe keep things fresh and accessible. I have to be honest, though; I haven’t been too involved with the podcasts due to a lack of time.
  • And honestly, my clients and stakeholders are some of my best teachers. Their questions push me to think differently, adapt faster, and stay grounded.

For newcomers, I say: be curious and learn as much as you can. Don’t ignore theory! Evaluation is a field where the questions matter as much as the answers.