Over the years, I've been involved in developing research programs and projects in education technology, games, and virtual reality. As I've developed my thinking around funding and conducting research in learning technologies, I keep coming back to an unpublished technical report written by one of my early mentors in the Navy. Norm Lane was a brilliant behavioral scientist who was commissioned as a Navy Aerospace Experimental Psychologist in 1963 and had retired by the time I first met him in 1994. After he retired, he wrote a technical report on issues related to validating the efficacy of virtual reality-based training systems. Like the Ark of the Covenant at the end of Raiders of the Lost Ark, the report was lost in the vast government archives, but I saved a copy, and it has shaped the way I now see efficacy research and behavioral science research writ large.

The premise of the paper is quite simple, and although he was evaluating virtual reality systems that were state of the art in 1994, the same issues exist today. Dr. Lane reasoned that every virtual interface (and the virtual environment itself) introduces noise into the evaluation data because no interface perfectly replicates real-world action or perception. So, if you have a headset, haptics, and a locomotion interface connected to a virtual environment, each element, including the virtual environment, adds a layer of noise to the research data. Change or upgrade any element and the complex relationships are altered.

This logic has far-reaching consequences for behavioral science research because it calls into question the extent to which research in virtual environments or learning technologies can be generalized beyond highly specific contexts. Human beings are innately noisy and variable creatures, so behavioral science research that does not involve technology is already highly variable. If each additional layer of technology we introduce adds to the variability of human data, we might well question how we fund, conduct, and interpret research involving humans and technology.

Since behavior is highly context specific, we can't reasonably expect to generalize across different games, virtual environments, intelligent tutoring systems, or any other interfaces that humans use. Change one element and the context changes. In short, it's a leap to think that a research study examining a single technology, or even a group of technologies, generalizes beyond those specific technologies (or specific products). The results MIGHT be applicable in another context, but then again, they may not. An argument might be made that if we continue to conduct this type of reductive research, we will refine our understanding over time. However, fields like education, training, and healthcare require immediate solutions.

What is the alternative? As I've written before, I advocate working within Donald Stokes's concept of Pasteur's Quadrant: basic research that iteratively and intentionally pursues applied goals. If the immediate necessity is to develop a product or application that is effective, developing insights into the fundamental nature and complexities of human behavior is a secondary concern. The primary goal is to progress rapidly toward a solution and to conduct fundamental research as needed to reach that goal. In noisy contexts, such as varied learning environments and variable student populations, you can't expect basic research to consistently translate into an effective application if you disconnect the basic research from the applied goal. So, you conduct basic research to answer the questions you need answered in the environments of interest, and then you iterate between basic and applied research to make your specific application effective. In the process, you will learn things that may be generalizable to other applications, but you treat that information as an assumption, not a certainty.

The other important benefit of this approach is that you transition concepts from basic research into a product more rapidly, because basic and applied research become one continuous process, freed from the disconnects caused by publication lag and from the failures that occur when basic research studies don't translate into practice.

Creating research programs within Pasteur's Quadrant requires a different funding model than is currently applied to most edtech development projects. Funding levels for an individual project are necessarily higher, because they must cover both the development of a professional-quality application AND a series of iterative research projects to inform the design and refine the application's effectiveness. In the long term, however, the cost should be lower due to improved research efficiency and product effectiveness.

This is one reason why we have been proposing an advanced research program, modeled roughly on the Defense Advanced Research Projects Agency (DARPA) and funded by both private and public sources, that would tackle challenging technical problems that have eluded the learning technology community under traditional R&D and funding mechanisms. We are not proposing this model as a replacement for the traditional basic/applied research model, but as another tool for finding solutions to hard education challenges that have remained elusive under traditional models.

Lane, N. E. (1994). Measuring performance in virtual environment training applications: Issues and approaches (Special Report). U.S. Army Research Laboratory Scientific Services Program, TCN 93-176.

Shilling, R. (2015). Why we need a DARPA for education. Scientific American. Retrieved July 6, 2017, from https://www.scientificamerican.com/article/why-we-need-a-darpa-for-education/

Stokes, D. E. (1997). Pasteur's quadrant: Basic science and technological innovation. Washington, DC: Brookings Institution Press.

About Russell D. Shilling, Ph.D.

Russell D. Shilling, Ph.D. is the Senior Innovation Fellow for Education R&D at Digital Promise.
