
Science, Evidence, and Inference in Education: Report of a Workshop (2001)


Suggested Citation:"Theme 1: Supporting Scientific Quality at the Federal Level: Consistency and Variation." National Research Council. 2001. Science, Evidence, and Inference in Education: Report of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/10121.

Theme 1.

Supporting Scientific Quality at the Federal Level: Consistency and Variation

To help the committee consider “design principles” that support science in a federal education research agency, the first workshop session was designed to engage a select set of experienced federal research managers in a discussion of how their institutions define and support scientifically based research. Panelists represented agencies that support a range of sciences—primarily social and behavioral—with an emphasis on institutions that fund the lion's share of federally sponsored research in education: the Office of Educational Research and Improvement (OERI), the National Science Foundation (NSF), and the National Institute of Child Health and Human Development (NICHD).

WHAT IS QUALITY? CONSENSUS ON BASIC PRINCIPLES, DIVERGENCE ON SPECIFICS

Across agencies and disciplines, panelists agreed that research quality at a conceptual level consists of “rigor, vigor, and vision.” How these broad concepts translate into specific standards of quality varied; no single definition of quality emerged. One NSF panelist offered specific elements of quality that the Education and Human Resources Directorate looks for in evaluating research proposals: a conceptually deep, internally consistent plan; evidence that the work stretches the boundaries of what is known; clarity about connections to prior research and related disciplines; and relevance to practice. A panelist from the Office of Naval Research dismissed the question altogether, asserting bluntly that true scientists do not need definitions to conduct high-quality research. In a later session, this sentiment was reinforced by a guest speaker who suggested he did not think it would be difficult to define research quality “standards.”

Real scientists just do it. They don't brood about whether what they are doing counts as science.

—Susan Chipman

Some panelists suggested that certain principles of quality apply to all sciences, including scientific research in education. In contrast, another panelist emphasized the apparent absence of a well-accepted definition of quality among those conducting research on education. He argued that “quality is an elusive concept...very much in the eye of the beholder...”

It's a lot easier to think about... quality if you can be very explicit and clear about the things you are trying to do... it at least gives you a context [to understand] evidence, and methods, and use.

—Kent McGuire

The definition of quality also varies according to which aspect of the research enterprise is considered. Panelists discussed quality with respect to research proposals, research output or results, research use, and research programs. For example, one panelist distinguished between quality in projects and programs, while another reinforced this distinction by arguing that even when most individual projects in a program of work are good, there can still be a lack of coherence in the program as a whole.

Sometimes the small steps add up and sometimes they don't. And sometimes you don't know that for ten to twenty years. It's very difficult to judge...

—Richard Suzman

The accumulation of knowledge is a primary goal of sustained research efforts in all the agencies represented at the workshop. Panelists argued that gauging quality against this goal is more difficult than assessing individual proposals or reports. One National Institutes of Health (NIH) speaker suggested that while all projects add to the cumulative body of knowledge, the key question is which ones contribute major findings and breakthroughs. He went on to say that it is very difficult to judge whether individual findings add up to something greater, and that this accumulation often takes decades.

In a University of Chicago Medical School post-doctoral program in research...I was astounded...because applicants wanted to learn evidence-based medicine. I said... ‘what are you practicing?' And [they] said ‘we basically do what our chief has been doing for the last 20 years... '

—Norman Bradburn

Raising issues that would be central in a later workshop session, some panelists specifically referenced quality from the perspective of the application of the research in practice. As one Department of Agriculture panelist noted, the struggle is “how best to get information out to the consumers.” Another panelist argued for a balance between scientific rigor and practical utility.

THE PURSUIT OF QUALITY: NO ONE MODEL

Quality is supported in federal research agencies through a range of mechanisms. While some common strategies emerged, it was clear that no one model is envisioned as ideal. Indeed, one panelist singled out this diversity as a positive aspect of the federal research enterprise, arguing that the multiplicity of processes is a strength of the U.S. system because it increases the likelihood that good projects will be funded.

It's nonsense to think everybody knows good science even if they are from out of field. They don't. [We need people who can] consider multiple perspectives but always have quality in mind.

—Reid Lyon

Amid this variability, peer review emerged as the most common and most trusted mechanism federal agencies use to promote quality in the research they fund. However, it is not universally used, and the goals, structure, and composition of peer-review panels vary substantially from agency to agency. Two speakers explained that peer review serves as both a screen and a feedback mechanism: even applicants whose proposals are rejected receive feedback that can strengthen their work and improve their future projects.

Many panelists emphasized the importance of selecting peer reviewers carefully and matching their expertise and perspectives to the proposals being evaluated. One NIH speaker was especially insistent on using reviewers who are widely recognized experts in the field, and he warned against the mistaken belief that “everybody knows good science.”

We don't honor the sacred cow of peer review...we do like to consider ourselves peers. We are held accountable in terms of being able to tell a reasonable story about how what we are funding adds to something within 30 years...

—Susan Chipman

At the Office of Naval Research, by contrast, peer review is not the screening mechanism of choice; agency staff exercise greater discretion in selecting proposals. While the other panelists relied heavily on peer review, they agreed that it has limitations and that it alone cannot ensure that an agency will consistently generate good work. One panelist summarized the discussion by noting the difficulty of attributing an agency's successes and failures to any single factor such as peer review.

Indeed, the research managers noted several other strategies for bolstering the quality of the work they support. For example, most panelists targeted resources toward centers that engage scientists from a range of disciplines in solving clearly defined problems. Another common way panelists support quality is through sustained interaction with both the research community and the communities (stakeholders) the research is intended to inform. These communications serve a number of goals in research agencies, including identifying researchable problems and substantive gaps in knowledge.

ENABLING CONDITIONS: AN ENVIRONMENT SUPPORTING RESEARCH QUALITY

What are the conditions under which quality is best supported? Panelists offered a range of responses, with a fair amount of consensus on some key requirements: flexibility and autonomy, quality staff, and sustained efforts requiring a critical mass of resources.

I have it easier [than OERI and NSF because] I work for an organization that can literally turn on a dime. We [NIH] can make rapid decisions ...[when] scientific opportunity arises. We are not encumbered... by the amount of regulation that OERI is. ... OERI should never be held to a quality standard until [regulations] are out of there.

—Reid Lyon

The flexibility and autonomy of research agencies were cited as necessary conditions for institutional effectiveness. The ability to make quick decisions, to shift resources toward particularly promising opportunities, and to maintain a supple approach to proposals and projects was identified as a key determinant of quality.

Many panelists cited the importance of recruiting and retaining a high-quality staff who could think across domains and methodologies. The staff characteristics that were identified as most critical were similar to those of effective peer reviewers: respected and agile thinkers.

The importance of sustained research effort on short-, mid-, and long-term problems and possibilities was emphasized repeatedly. For example, at the Office of Naval Research, research on advanced educational technology took 30 years to yield applicable products; similarly, a reading program at NIH started with three centers and gradually grew into a 42-site research network that has substantially improved our understanding of how children learn to read.

The first contract in artificially intelligent tutoring was awarded in 1969...today we are just beginning to see the first practical applications ... it takes a long term investment.

—Susan Chipman

OBSTACLES TO THE PURSUIT OF QUALITY

Mirroring and reinforcing the earlier discussion of flexibility as an enabling condition for supporting quality, panelists repeatedly cited a lack of flexibility and autonomy as an obstacle. Two panelists specifically derided the prescriptive nature of OERI's authorizing statute, arguing that it prevents agency leadership from making necessary professional judgments about scientific opportunity. This issue resurfaced repeatedly throughout the session. Other participants cited the abundance of service-oriented programs that OERI administers as a hindrance to research quality.

Another problem with the research agenda [at OERI] is ...a disparate assorted bunch of reform activities... None of these...activities was based on research. Their negative evaluations have absolutely no impact on their continuing to be in place.

—Diane Ravitch

Another obstacle commonly mentioned by panelists, particularly acute in education research, was the lack of resources to conduct sustained, high-quality work. One panelist estimated that as a nation we invest just two-tenths of one percent of total annual K-12 education spending in research and development, compared with roughly 5 to 10 percent in other sectors. The lack of, and need for, a “critical mass” of resources also came up in a later discussion of education research and practice: just as panelists in this first session argued that resources were insufficient at the federal level, later participants argued that the proportion of dollars spent on research and evaluation at the local level was also inadequate.

If a school district gets $100,000... they spend 2% on formative evaluation and research. We way underfund research. We don't find out what works... [then] not only can we not improve them [the programs] but they don't last...as soon as the next [leadership] group comes in, [it is abandoned because the programs] haven't grown roots that can withstand an administrative change...

—Steven Ross

Another problem commonly cited by the panelists is a lack of candor and accountability. One panelist, arguing that accountability and candor with sponsors are essential elements of an effective federal research organization, suggested that researchers “are too keen on making promissory notes in the political system in order to get resources” when they cannot possibly deliver on those promises. This lack of candor, panelists suggested, contributes to unrealistic expectations among researchers, practitioners, and policy makers about how quickly rigorous research can be conducted, as well as how quickly the results of research can “go to scale” in practice.

On the issue of accountability, two panelists argued that the incentive structures for improving quality in education research are lacking, citing strong pressure from lobbyists to fund established institutions regardless of the quality of their work. This sentiment was echoed by a panelist from the field of health, who suggested that it is very difficult to identify, and reduce funding for, unfruitful areas of research when that research is performed by or backed by politically influential groups.

...where there hasn't been much [research] progress, there is a tremendous reluctance to state that clearly...to diagnose it... and then look why it hasn't happened...

—Richard Suzman

Panelists across such diverse fields as health, agriculture, and education cited a lack of consensus on goals as an impediment to promoting and sustaining quality. They argued that this absence of agreement on how to define and measure success results in fragmented research and an inability to sustain programs that can grow knowledge in a specific area.

I found that there was almost no discretion in the [OERI] budget because everything was either allocated to a specific entity or earmarked...

—Diane Ravitch

Finally, several panelists argued that political interference, often in the form of congressional earmarks, detracts from the development of sustained programs of research that can accumulate research-based knowledge over time.

REVITALIZING FEDERAL EDUCATION RESEARCH AND DEVELOPMENT: IMPROVING THE R&D CENTERS, REGIONAL EDUCATIONAL LABORATORIES, AND THE 'NEW' OERI

The workshop session on federal research agencies concluded with a luncheon discussion of a forthcoming book5 on the history of OERI. Maris Vinovskis, the book's author, presented a brief overview of its major findings, including an evaluation of past OERI R&D Centers and Regional Laboratories.

My fear is that [policymakers working on OERI reauthorization] will prescribe a whole series of activities that don't make sense...but they are grappling out of frustration...they don't trust you and me [education researchers] to do the high quality work.

—Maris Vinovskis

Vinovskis discussed the significant variation he found in the quality of the work conducted by the Centers and Labs. Relatedly, he noted that a lack of coherence typified several of the institutions he evaluated, even when several of their individual projects were excellent. He also described in detail how Congress and lobbying groups have contributed to the politicization of education research. Echoing themes that emerged from earlier panels, Vinovskis repeatedly called for more candor and accountability, arguing that the academic community has been unwilling to engage in a dialogue about the criticisms he has made of the agency. He also agreed that funding for education research is inadequate.


5 Vinovskis, M. (In press). Revitalizing Federal Education Research and Development: Improving the R&D Centers, Regional Educational Laboratories, and the 'New' OERI. Ann Arbor: University of Michigan Press.

In a lively discussion spurred by Vinovskis's presentation, participants raised a number of detailed questions about how he assessed the centers and labs described in his book. Following his observation that many of the programs lacked coherence, participants also discussed the disincentives that discourage collaborative work and ways those disincentives can be overcome. During the discussion, Vinovskis and members of the audience also engaged in a heated debate about the merits of various funding and staffing decisions made by OERI leadership.


Research on education has come into the political spotlight as demand grows for reliable and credible information to guide policy and practice in the education reform environment. Many debates within the education research community center on questions about the nature of evidence, and these questions have also appeared in broader policy and practice arenas. Over the years, scientific inquiry has created bodies of knowledge with profound implications for education: dramatic advances in understanding how people learn, how young children acquire early reading skills, and how to design and evaluate educational and psychological measurements are good examples. However, the highly contextualized nature of education and the wide range of disciplinary perspectives that bear on it have made the identification of reducible, generalizable principles difficult and slow to achieve.

In response, the U.S. Department of Education's National Educational Research Policy and Priorities Board (NERPPB) asked the NRC to establish a study committee to consider the scientific underpinnings of research in education. The committee includes members with expertise in statistics, psychology, sociology, anthropology, philosophy of science, history of education, economics, chemistry, biology, and education practice. The committee worked with three questions in mind: What are the principles of scientific quality in education research? How can research-based knowledge in education accumulate? And how can a federal research agency promote and protect scientific quality in the education research it supports?

A workshop held on March 7-8, 2001, was organized into three main sessions: Supporting Scientific Quality at the Federal Level, The Interface of Research and Practice in Education, and Evidence and Inference. Science, Evidence, and Inference in Education: Report of a Workshop summarizes the workshop around these three themes. The report also describes what the committee plans to do next, the workshop agenda, and information on the workshop's participants and speakers.
