Research Summit Participant Questions


Summit participants generated questions on how we can use research and data to:

  1. make better use of research on learning,
  2. conduct better research and development to continuously improve products and practices, and
  3. conduct better and more useful evaluation research.

Basic Research on Learning

  • How might we: align on the purpose → goals of school? e.g., non-cognitive skills vs. content knowledge?
  • How might we: make research more accessible – not behind paywalls?
  • How might we: explore uniqueness bias + networks’ impact on transfer of research outcomes to practice?
  • How might we: align academicians’ incentives to practitioners’ incentives?
  • How do we close the technology connectivity gap?
  • How might we: Bridge research + practice → make research useful
  • There are many purple southern states, but right here in Silicon Valley we have multiple families living in 1 house w/o connectivity.
  • What are the core competencies educators need to understand research?
  • How do we pair research on learning with research on poverty? Who will ultimately take responsibility for making changes based on the research?
  • Companies are reporting that they want more women and employees from various racial + cultural backgrounds. What is industry’s responsibility to help foster education for underrepresented groups?
  • How do we account for diverse perspectives in our research? (in particular re: race & class)
  • What are the best approaches to centering students, teachers & families in research?
  • How might we: rethink “scalability” + “generalizability” as desirable/useful?
  • How do we help teachers see data as a flashlight rather than a hammer?
  • What are the best leading indicators of long-term student success? (that educators can impact?)
  • What do we know today from the learning sciences?
  • How do we blend research on learning to approach questions from various viewpoints? (i.e., research on brain development, with research on learning strategies, with the impact of tech in the classroom, etc.)
  • How can educators in under-financed districts get access to research in paid publications?
  • How can we leverage the less formal research that companies do with the data they collect through their products to support more academically rigorous research + conclusions?
  • How do we shift the question to: How can we best leverage tech + research to advance learning?
  • Can we create a common language between researchers, educators + developers?
  • How might we: redefine the “gold-standard?”
  • How might we: leverage behavioral economics to better understand decision-making (research based)?
  • How can researchers make their work accessible to practitioners who don’t attend AERA or subscribe to education journals?
  • How do we ensure the student and family priorities are paramount?
  • How can we test research concepts and theories in real world settings?
  • I’m a teacher. How can I access research if I cannot pay for the journals?
  • We focus on why kids are not ready for college. What if we started thinking about why colleges aren’t ready for the kids we send?
  • How many people in ed tech startups have an education background, or have actually taught?
  • How can we create real time assessment communities in higher education?
  • How do we develop research competence among future educational researchers to further the work?
  • What designs most effectively bridge learning in informal and formal contexts?
  • How do we make basic research on learning less intimidating to the public?
  • How do we assess our current state of technology in education + find ways to reuse/avoid reinventing the wheel?
  • How can we involve teachers, students, administrators, families, and communities in these evaluation processes?
  • How might we develop an ecosystem to support teachers to grow and innovate with technology?
  • How can research be used effectively by educators, and do we know of any limitations for particular audiences around research/practice (e.g., grade level, technology, etc.)?
  • How can we alter the higher education reward system to allow more academic researchers to pursue projects of need in schools?
  • How can we use technology to invite more participation in the process of research?
  • How can we continue to honor the participant/student voice in an era of “big data”?
  • What can we do to get research out from behind paywalls/restricted access?
  • How does leadership impact the culture of learning & innovation?
  • How can teacher prep providers prepare future teachers to understand + use research to inform their practice?
  • How do we help community members and politicians to understand how complicated teaching is, in order to give teachers room to effectively implement research?
  • How do we help schools/teachers use research results effectively?
  • Why can’t kids read well?
  • Where is R+D taking place? Who is defining what questions are being investigated?
  • How can we disseminate research findings in an understandable and actionable way for education practitioners?
  • How can we speed up research + communication of research without sacrificing quality?
  • How can we test our scientific hypotheses in real world educational settings given the barriers?
  • What are promising new approaches for conducting basic research in learning?
  • What are the gold standard approaches for conducting basic research in learning?
  • How do we ensure the integrity of research as it gets implemented in the field?
  • How do we forge partnerships between academic and commercial groups to further research knowledge?

Research and Development

  • How do we ensure the learning sciences support R&D from the start?
  • How do we create opportunities for educators to digest data and collaborate?
  • How do we ensure that research is asking relevant questions?
  • How do we ensure that practitioners are prepared to make use of data?
  • How do we promote research that addresses “problems of practice”?
  • What new systems do we need to protect those with the data who dare to ask the toughest questions (ones researchers may not know yet)?
  • How might we seed a research marketplace?
  • How do we advance research with practitioners and students at the center?
  • Are there key learning theories that best inform the design of socio-tech learning systems?
  • What incentives are there for companies to build products based on learning science?
  • How do community expectations about the nature of learning and school shift as needs change?
  • How do we translate research for practitioners so that it is put to use quickly? + translate practitioners’ needs to drive meaningful research?
  • How do we utilize existing research to generate new questions?
  • How do we effectively incorporate educators into the R+D process?
  • How can we hand ownership of learning to the learner, while still addressing community expectations?
  • How do we educate the decision makers who buy educational technology? What do they need to make buying decisions?
  • How can we help parents learn how to leverage digital media to support interests and learning (their child’s and their own)?
  • How do we update or help create authentic research and authentic teaching environments?
  • How do ed tech companies understand the challenges that students have in the daily classroom environment?
  • How many researchers have ever taught? How many teachers have ever conducted research? How do we move past this division?
  • How can we find out what educators need and will use – not just what they want or think they want?
  • How do we encourage ed researchers to do their homework and not try to recreate what has already been done?
  • How do we use open sourcing to support R&D?
  • When can research be generalized and applied, and when should it not?
  • How do you put research into practice in real world settings? How does continuous improvement factor into this?
  • How do we get folks to be patient enough with iterating toward successful change?
  • Where is design research in this process? Not just front-end & formative evaluation traditions. And not just design research coming out of learning sciences.
  • How do we get more of the research community doing outreach to entrepreneurs so their insights can help more people, and vice versa?
  • How can entrepreneurs begin to co-create with researchers?
  • How should schools of education prepare teachers and administrators to understand and use efficacy research properly?
  • Are good buying decisions contextually specific? What determines what a “good” buying decision is?
  • A lot of research in education fails to meet fundamental principles of good treatment of evidence. How do we vet the current/future body of research for rigor and balance?
  • What can developers do (best practices) to elicit input from teachers (practitioners), and use that feedback to inform development of new products?
  • How can we find the R+D folks in ed tech companies so they can form a learning community? (and see how their research applies to their products)
  • How are we measuring the efficacy of the ed tech buying decisions? What indicators are we using to determine effectiveness of buying decisions?
  • How can we generate funding that will help capture innovation that is happening in schools, families, and community organizations?
  • What kinds of evidence, rationale, or support will administrators accept to allow teachers to try new approaches, models, and pedagogy they don’t recognize as “traditional” learning models?
  • What do we do when demand outstrips the research?
  • How do we enable the most troublesome data to be analyzed?
  • How do we promote research of “socio-technical solutions” rather than just “technical solutions”?
  • What are the best ways to ensure that practitioners, researchers, and industry (designers, entrepreneurs, etc.) communicate with each other?
  • How can data/research be taught to and interpreted for district leaders in an effective manner?
  • How do we get this country and media to embrace the value of empirically sound research-based practice?
  • How do we resolve the inherent conflict between researchers and ed tech developers vis-a-vis their sense of time (i.e., developers think in time in terms of weeks, whereas researchers plan in terms of years)?
  • How do we get education decision-makers to move out of the “best practice” comfort zone and into the empirical abyss?
  • How do we prototype use cases in a light-touch manner as much as possible?

Evaluation Research

  • What non-cognitive skills should we measure, and how should we measure them?
  • How can crowdsourcing support practice and research/evaluation at the same time?
  • What if the questions are big and can’t be answered quickly? For example, there is a hypothesis that problem-based learning promotes the development of interpersonal skills, but this will likely take a long time to demonstrate (i.e., participation in PBL → social-emotional outcomes). Or, supporting teachers’ understanding of data (whatever data that may be) and their ability to act on those data effectively?
  • How should teacher credentialing change to meet the needs/demands of modern teachers? (increase quality of teaching → stronger data)
  • What outcome measures make the most sense? (re: impact & implementation)
  • How do we encourage ed tech researchers to systematically develop evidence of promise à la the Common Guidelines?
  • How can we ensure research authentically and inclusively engages a diverse range of perspectives and challenges?
  • If we expand our samples to approach population sizes, can we move quickly to causal (or high-confidence) conclusions?
  • Does “inferential statistics” still work for us in the social sciences? Should we explore or develop other paradigms?
  • How can we help educators use evaluation research to strengthen instruction and increase learning?
  • How to leverage existing datasets to reduce time to results?
  • What would incentivize developers of ed tech to partner w/ researchers to evaluate technology?
  • What is a reasonable turnaround time for research that is still powerful and relevant?
  • Comment: these categories are limiting. How do we generate new models of research and collaboration that will advance learning communities supported by the tech industry?
  • How can/should developers partner with teachers (practitioners) to design evaluation plans that answer the questions that teachers care about?
  • What are the best ways to do evaluation research on emerging education technology?
  • Who should pay for the research needed to demonstrate efficacy (or not) of growing ed tech companies?
  • What are the key barriers to rapid-fire or real-time evaluations?
  • How can we identify high-quality research or evaluations that get presented but never published in peer-reviewed venues? What is quality research outside academia?
  • How do we maintain forward momentum when we know most new ideas will fail? (>90% of IES-sponsored RCTs did not find effects)
  • What changes (measures) are most important/valuable to learn (“what is working”)? Which technology, pedagogy, information, etc.?
  • How do we capitalize on short cycle trials to add to causal evidence?
  • How do we best define what is not working?
  • What is the difference between substantive + significant research findings in practice?
  • How might we: align research timelines to school-based decision timelines + ed tech company timelines?
  • What effect sizes matter?
  • How might we: embed evaluative thinking in teacher practice – on-going/iterative versus one-off or short feedback loops?
  • What does a gold-standard product really mean in ed tech, and why is that a purchase criterion when most products don’t have it?
  • How do we avoid having ed research done under unrealistically rosy scenarios?
  • Why not think about developmental evaluation? Why long-term evaluation? Quick evaluations w/o compromising rigor?
  • Can we support a common language between researchers, educators, and developers? How do we accelerate evaluation research to be relevant?
  • How might we: rethink summative assessment, and leverage multi data points over time?
  • How do we know what is a meaningful action in an online learning environment?
  • How are teachers currently using technology?
  • Data/ed tech/change management coach → innovation specialist is needed.
  • Teachers use a variety of strategies daily to meet individual + group needs.
  • What evidence of student learning should be expected for making decisions?
  • How can parents learn whether/how their children’s schools use research-based practices + tools?
  • What data can meet the needs of practitioners and evaluators? When are they interested in the same things?
  • Stanford + SFUSD → California education partners co-develop research questions.
  • What are the most high-leverage education technologies available to teachers and schools, and what aspects of these technologies can or should be replicated in future technologies?
  • If current research + evaluation practices are not working, what are the emerging and promising alternatives? What are the audiences for R/E? Teachers? Other researchers? Who is the consumer of resulting evidence?
  • How do we reimagine evaluation research to be predictive?