Themes & Takeaways

Cross-cutting themes that emerged across all three breakout groups — Responsible Research in Academia, Measuring Social Indicators, and Public Sector AI Procurement — during the Metrics Makerspace.

Theme 1: Perspective Shapes What Gets Measured

Across all three groups, a consistent finding was that the choice of metrics depends heavily on who is doing the choosing. Researchers, administrators, affected communities, procurers, and vendors each prioritize different principles and different forms of evidence, so what counts as a "good" metric is rarely neutral.

Theme 2: The Gap Between Principles and Practice

Each group surfaced a tension between high-level principles (integrity, well-being, accountability) and the practical difficulty of operationalizing them. Academic institutions cite responsible research principles but may lack concrete measures for them. Social indicators may be technically sound yet miss what communities actually care about. AI procurement standards may exist on paper but go unverified in practice.

Theme 3: Accountability Requires Ongoing Engagement

All three domains framed accountability as an ongoing process rather than a checkbox — one requiring sustained engagement with affected parties, not just upfront consultation.

Recurring Questions & Tensions

Open Problems

These are areas where further work is needed: