3
Assessment

Assessment is commonly thought of as the means to find out whether individuals have learned something—that is, whether they can demonstrate that they have learned the information, concepts, skills, procedures, etc., targeted by an educational effort. In school, examinations or tests are a standard feature of students’ experience, intended to measure the degree to which someone has, for example, mastered a subtraction algorithm, developed a mental model of photosynthesis, or appropriately applied economic theory to a set of problems. Other products of student work, such as reports and essays, also serve as the basis for systematic judgments about the nature and degree of individual learning.

Informal settings for science learning typically do not use tests, grades, class rankings, and other practices commonly used in schools and workplace settings to document achievement. Nevertheless, the informal science community has embraced the cause of assessing the impact of out-of-school learning experiences, seeking to understand how everyday, after-school, museum, and other types of settings contribute to the development of scientific knowledge and capabilities.1 This chapter discusses the evidence for outcomes from engagement in informal environments for science learning,

1 The educational research community generally makes a distinction between assessment—the set of approaches and techniques used to determine what individuals learn from a given instructional program—and evaluation—the set of approaches and techniques used to make judgments about a given instructional program, approach, or treatment, improve its effectiveness, and inform decisions about its development. Assessment targets what learners have or have not learned, whereas evaluation targets the quality of the intervention.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





focusing on the six strands of scientific learning introduced earlier and addressing the complexities associated with what people know based on their informal learning experiences.

In both informal and formal learning environments, assessment requires plausible evidence of outcomes and, ideally, is used to support further learning. The following definition reflects current theoretical and design standards among many researchers and practitioners (Huba and Freed, 2000, p. 8):

Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning.

Whether assessments have a local and immediate effect on learning activities or are used to justify institutional funding or reform, most experts in assessment agree that the improvement of outcomes should lie at the heart of assessment efforts. Yet assessing learning in ways that are true to this intent often proves difficult, particularly in informal settings. After reviewing some of the practical challenges associated with assessing informal learning, this chapter offers an overview of the types of outcomes that research in informal environments has focused on to date, describes how these outcomes are observed in research, and groups them according to the strands of science learning. Appendix B includes discussion of some technical issues related to assessment in informal environments.

DIFFICULTIES IN ASSESSING SCIENCE LEARNING IN INFORMAL ENVIRONMENTS

Despite general agreement on the importance of collecting more and better data on learning outcomes, the field struggles with theoretical, technical, and practical aspects of measuring learning.
For the most part, these difficulties are the same ones confronting the education community more broadly (Shepard, 2000; Delandshere, 2002; Moss, Girard, and Haniford, 2006; Moss, Pullin, Haertel, Gee, and Young, in press; Wilson, 2004; National Research Council, 2001). Many have argued that the diversity of informal learning environments for science learning further contributes to the difficulties of assessment in these settings; they share the view that one of the main challenges is the development of practical, evidence-centered means for assessing learning outcomes of participants across the range of science learning experiences (Allen et al., 2007; Falk and Dierking, 2000; COSMOS Corporation, 1998; Martin, 2004).

For many practitioners and researchers, concerns about the appropriateness of assessment tasks in the context of the setting are a major constraint on assessing science learning outcomes (Allen et al., 2007; Martin, 2004).2 It stands roughly as a consensus that the standardized, multiple-choice test—what Wilson (2004) regrets has become a “monoculture” species for demonstrating outcomes in the K-12 education system—is at odds with the types of activities, learning, and reasons for participation that characterize informal experiences. Testing can easily be viewed as antithetical to common characteristics of the informal learning experience. Controlling participants’ experiences to isolate particular influences, to arrange for pre- and posttests, or to attempt other traditional measures of learning can be impractical, disruptive, and, at times, impossible given the features, norms, and typical practices in informal environments.

To elaborate: Visits to museums and other designed informal settings are typically short and isolated, making it problematic to separate the effects of a single visit from the confluence of factors contributing to positive science learning outcomes. The very premise of engaging learners in activities largely for the purposes of promoting future learning experiences beyond the immediate environment runs counter to the prevalent model of assessing learning on the basis of a well-defined educational treatment (e.g., the lesson, the unit, the year’s math curriculum). In addition, many informal learning spaces, by definition, provide participants with a leisure experience, making it essential that the experience conforms to expectations and that events in the setting do not threaten self-esteem or feel unduly critical or controlling—factors that can thwart both participation and learning (Shute, 2008; Steele, 1997).
Other important features of informal environments for science learning include the high degree to which contingency typically plays a role in the unfolding of events—that is, much of what happens in these environments emerges during the course of activities and is not prescribed or predetermined. To a large extent, informal environments are learner-centered specifically because the agenda is mutually set across participants—including peers, family members, and any facilitators who are present—making it difficult to consistently control the exposure of participants in the setting to particular treatments, interventions, or activities (Allen et al., 2007). It may well be that contingency, insofar as it allows for spontaneous alignment of personal goals and motivations to situational resources, lies at the heart of some of the most powerful learning effects in the informal domain. Put somewhat differently, the freedom and flexibility that participants have in working with people and materials in the environment often make informal learning settings particularly attractive.

Another feature that makes many informal learning environments attractive is the consensual, collaborative aspect of deciding what counts as success: for example, what children at a marine science camp agree is a good design for a submersible or an adequate method for measuring salinity. In some instances, determining a workable standard for measuring success ahead of time—that is, before the learning activities among participants take place—can be nearly impossible. The agenda that arises, say, in a family visit to a museum may include unanticipated episodes of identity reinforcement, the telling of stories, remindings of personal histories, rehearsals of new forms of expression, and other nuanced processes—all of which support learning yet evade translation into many existing models of assessment.

The type of shared agency that allows for collaborative establishment of goals and standards for success can extend to multiple aspects of informal learning activities. Participants in summer camps, science centers, family activities, hobby groups, and such are generally encouraged to take full advantage of the social resources available in the setting to achieve their learning goals. The team designing a submersible in camp or a playgroup engineering a backyard fort can be thought of as having implicit permission to draw on the skills, knowledge, and strengths of those present as well as any additional resources available to get their goals accomplished. “Doing well” in informal settings often means acting in concert with others. Such norms are generally at odds with the sequestered nature of the isolated performances characteristic of school. Research indicates that these sequestered assessments lead to systematic undermeasurement of learning precisely because they fail to allow participants to draw on material and human resources in their environment, even though making use of such resources is a hallmark of competent, adaptive behavior (Schwartz, Bransford, and Sears, 2005).

Despite the difficulties of assessing outcomes, researchers have managed to do important and valuable work.

2 This is also an issue of great importance among educators and education researchers concerned with classroom settings.
In notable ways, this work parallels the “authentic assessment” approaches taken by some school-based researchers, employing various types of performances, portfolios, and embedded assessments (National Research Council, 2000, 2001). Many of these approaches rely on qualitative interpretations of evidence, in part because researchers are still in the stages of exploring features of the phenomena rather than quantitatively testing hypotheses (National Research Council, 2002). Yet, as a body of work, assessment of learning in informal settings draws on the full breadth of educational and social scientific methods, using questionnaires, structured and semistructured interviews, focus groups, participant observation, journaling, think-aloud techniques, visual documentation, and video and audio recordings to gather data. Taken as a whole, existing studies provide a significant body of evidence for science learning in informal environments as defined by the six strands of science learning described in this report.

TYPES OF OUTCOMES

A range of outcomes are used to characterize what participants learn about science in informal environments. These outcomes—usually described as particular types of knowledge, skills, attitudes, feelings, and behaviors—can be clustered in a variety of ways, and many of them logically straddle two or more categories. For example, the degree to which someone shows persistence in scientific activity could be categorized in various ways, because this outcome depends on the interplay between multiple contextual and personal factors, including the skills, disposition, and knowledge the person brings to the environment. Similarly, studies focusing on motivation might emphasize affect or identity-related aspects of participation.

In Chapter 2, we described the goals of science learning in terms of six interweaving conceptual strands. Here our formulation of the strands focuses on the science-related behaviors that people are able to engage in because of their participation in science learning activities and the ways in which researchers and evaluators have studied them.

Strand 1: Developing Interest in Science

Nature of the Outcome

Informal environments are often characterized by people’s excitement, interest, and motivation to engage in activities that promote learning about the natural and physical world. A common characteristic is that participants have a choice or a role in determining what is learned, when it is learned, and even how it is learned (Falk and Storksdieck, 2005). These environments are also designed to be safe and to allow exploration, supporting interactions with people and materials that arise from curiosity and are free of the performance demands that are characteristic of schools (Nasir, Rosebery, Warren, and Lee, 2006). Engagement in these environments creates the opportunity for learners to experience a range of positive feelings and to attend to and find meaning in relation to what they are learning (National Research Council, 2007).
Participation is often discussed in terms of interest, conceptualized as both the state of heightened affect for science and the predisposition to reengage with science (see Hidi and Renninger, 2006).3 Interest includes the excitement, wonder, and surprise that learners may experience and the knowledge and values that make the experience relevant and meaningful. Recent research on the relationship between affect and learning shows that the emotions associated with interest are a major factor in thinking and learning, helping people learn as well as helping with what is retained and how long it is remembered (National Research Council, 2000). Interest may even have a neurological basis (termed “seeking behavior,” Panksepp, 1998), suggesting that all individuals can be expected to have and to be able to develop interest.4 In addition, interest is an important filter for selecting and focusing on relevant information in a complex environment (Falk and Dierking, 2000). In this sense, the psychological state of mind referred to as interest can be viewed as an evolutionary adaptation to select what is perceived as important or relevant from the environment. People pay attention to the things that interest them, and hence interest becomes a strong filter for what is learned.

When people have a more developed interest for science—sometimes described in terms of hobbies or personal excursions (Azevedo, 2006), islands of expertise (Crowley and Jacobs, 2002), passions (Neumann, 2006), or identity-related motivations (Ellenbogen, Luke, and Dierking, 2004; Falk and Storksdieck, 2005; Falk, 2006)—they are inclined to draw more heavily on available resources for learning and use systematic approaches to seek answers (Engle and Conant, 2002; Renninger, 2000). This line of research suggests that the availability or existence of stimulating, attractive learning environments can generate the interest that leads to participation (Falk et al., 2007). People with an interest in science are also likely to be motivated learners in science; they are more likely to seek out challenge and difficulty, use effective learning strategies, and make use of feedback (Csikszentmihalyi, Rathunde, and Whalen, 1993; Lipstein and Renninger, 2006; Renninger and Hidi, 2002). These outcomes help learners continue to develop interest, further engaging in activity that promotes enjoyment and learning.

3 Whereas motivation is used to describe the will to succeed across multiple contexts (see Eccles, Wigfield, and Schiefele, 1998), interest is not necessarily focused on achievement and is always linked to a particular class of objects, events, or ideas, such as science (Renninger, Hidi, and Krapp, 1992; Renninger and Wozniak, 1985).
People who come to informal environments with developed interests are likely to set goals, self-regulate, and exert effort easily in the domains of their interests, and these behaviors often come to be habits, supporting their ongoing engagement (Lipstein and Renninger, 2006; Renninger and Hidi, 2002; Renninger, Sansone, and Smith, 2004).

Methods of Researching Strand 1 Outcomes

Although self-report data are susceptible to various forms of bias on the part of the research participant, they are nonetheless frequently used in studying outcomes with affective and attitudinal components because of the subjective nature of these outcomes. Self-report studies are typically based on questionnaires or structured interviews developed to target attitudes, beliefs, and interests regarding science among respondents in particular age groups, with an emphasis on how these factors relate to school processes and outcomes (e.g., Renninger, 2003; Moore and Hill Foy, 1997; Weinburgh and Steele, 2000). Methods linking prior levels of interest and motivation to outcomes have been used in research as well.

4 It should be noted that all normatively functioning individuals might be expected to have interest; Travers (1978) points out that lack of interest accompanies pathology.
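Questionnaire-based interest measures of the kind described above are often scored as simple averaged scales. The Python sketch below shows one generic way such scoring can work; the items, the five-point response scale, and the reverse-coded item are hypothetical and are not drawn from any instrument cited in this chapter.

```python
# Minimal sketch of Likert-style scale scoring (hypothetical instrument).
# Negatively worded items are reverse-coded before averaging, so that a
# higher scale score always indicates greater interest.

def score_interest(responses, reverse_items, scale_max=5):
    """Return the mean item score on a 1..scale_max Likert-type scale.

    responses: dict mapping item id -> integer response (1..scale_max)
    reverse_items: item ids whose wording is negative (e.g., "science bores me")
    """
    adjusted = []
    for item, value in responses.items():
        if not 1 <= value <= scale_max:
            raise ValueError(f"response out of range for {item}: {value}")
        # Reverse-code: a 2 on a negative item becomes scale_max + 1 - 2 = 4.
        adjusted.append(scale_max + 1 - value if item in reverse_items else value)
    return sum(adjusted) / len(adjusted)

# Hypothetical respondent: three positively worded items, one negative item.
answers = {"q1": 4, "q2": 5, "q3": 3, "q4_science_bores_me": 2}
print(score_interest(answers, reverse_items={"q4_science_bores_me"}))  # 4.0
```

Real instruments add validated item wording, reliability checks, and norms, none of which this sketch attempts.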

Researchers have also used self-report techniques to investigate whether prior levels of interest were related to learning about conservation (Falk and Adelman, 2003; Taylor, 1994). Falk and Adelman (2003), for example, showed significant differences in knowledge, understanding, and attitudes for subgroups of participants based on their prior levels of knowledge and attitudes. Researchers replicated this approach with successful results in a subsequent study at Disney’s Animal Kingdom (Dierking et al., 2004).

Studies of public understanding of science have used questionnaires to assess levels of interest in particular topics. For example, they have documented variation in people’s reported levels of interest in science topics: The general adult population in both the United States and Europe is mildly interested in space exploration and nuclear energy; somewhat more than mildly interested in new scientific discoveries, new technologies, and environmental issues; and fairly interested in medical discoveries (European Commission, 2001; National Science Board, 2002).

An important component of interest, as noted, is positive affect (Hidi and Renninger, 2006). Whereas positive affect toward science is often regarded as a primary outcome of informal learning, this outcome is notoriously difficult to assess. Positive affect can be transient and can develop even when conscious attention is focused elsewhere, making it difficult for an observer to assess. Various theoretical models have attempted to map out a space of emotional responses, either in terms of a small number of basic emotions or emotional dimensions, such as pleasure, arousal, and dominance, and to apply these in empirical research (Plutchik, 1961; Russell and Mehrabian, 1977; Isen, 2004). Analysis of facial expressions has been a key tool in studying affect, with mixed results.
Ekman’s seven facial expressions have been used to assess fleeting emotional states (Ekman and Rosenberg, 2005). Dancu (2006) used this method in a pilot study to assess emotional states of children as they engaged with exhibits and compared these observations to reports by children and their caregivers, finding low agreement among all measures. Kort, Reilly, and Picard (2001) have created a system of analyzing facial expressions suited to capturing emotions relevant to learning (such as flow, frustration, confusion, eureka), but their methods require special circumstances (e.g., the subject must sit in a chair) and do not allow for naturalistic study in large spaces, thus complicating application of this approach in many informal settings. Ma (2006) used a combination of open-ended and semantic-differential questions, in conjunction with a self-assessment mannequin. Physiological measures (skin conductance, posture, eye movements, EEG, EKG) relevant to learning are being developed (Mota and Picard, 2003; Lu and Graesser, in press; Jung, Makeig, Stensmo, and Sejnowski, 1997).

Discourse analysis has been another important method for naturalistic study of emotion during museum visits. Allen (2002), for example, coded visitors’ spontaneous articulations of their emotions using three categories of affect: positive, negative, and neutral. Both spontaneous comments and comments elicited by researchers have similarly been coded to show differences in emotional response during museum visits. Clipman (2005), for example, used the Positive and Negative Affect Schedule to show that visitors leaving a Chihuly exhibit of art glass reported being more happy and inspired than visitors to a quilting exhibit in the same museum. Myers, Saunders, and Birjulin (2004) used Likert and semantic-differential measures to show that zoo visitors had stronger emotional responses to gorillas than to other animals on display. Raphling and Serrell (1993) asked visitors to complete the sentence “It reminded me that . . .” as part of an exit questionnaire for exhibitions on a range of topics, and they reported that this prompt tends to elicit affective responses from visitors, including wonderment, imagining, reminiscences, convictions, and even spiritual connection (such as references to the power of God or nature).

In studies of informal learning, interest and related positive affect are also often inferred on the basis of behavior displayed. That is, participants who seem engaged in informal learning activities are presumed to be interested. In this sense, interest and positive affect are often not treated as outcomes, but rather as preconditions for engagement. Studies that document children spontaneously asking “why” questions, for example, take as a given that children are curious about, interested in, and positively predisposed to engaging in activity that entails learning about the natural world (e.g., Heath, 1999). Studies that focus on adult behavior, such as engaging in hobbies, are predicated on a similar assumption—that interest can be assumed for the people and the context being studied (e.g., Azevedo, 2006).
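Once visitor talk has been hand-coded into a scheme like the positive/negative/neutral affect categories mentioned above, the codes are typically summarized as counts or proportions. A minimal Python sketch of that tallying step, using an invented transcript and invented codes:

```python
from collections import Counter

# Sketch: summarize a hand-coded transcript as each affect category's share
# of all coded utterances. The utterances and codes below are hypothetical.

AFFECT_CODES = ("positive", "negative", "neutral")

def affect_profile(coded_utterances):
    """coded_utterances: iterable of (utterance, code) pairs."""
    counts = Counter(code for _, code in coded_utterances)
    unknown = set(counts) - set(AFFECT_CODES)
    if unknown:
        raise ValueError(f"unknown affect categories: {unknown}")
    total = sum(counts.values())
    return {code: counts[code] / total for code in AFFECT_CODES}

transcript = [
    ("Whoa, look at that!", "positive"),
    ("I don't get this part.", "negative"),
    ("It spins when you push the lever.", "neutral"),
    ("That is so cool.", "positive"),
]
print(affect_profile(transcript))
# {'positive': 0.5, 'negative': 0.25, 'neutral': 0.25}
```

In practice such counts are only as good as the coding itself, which is why studies of this kind typically report agreement between independent coders.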
A meta-analysis of the types of naturally occurring behavior thought to provide evidence of individuals’ interest in informal learning activities could be useful for developing systematic approaches to studying interest. Such an analysis also could be useful in showing how interest is displayed and valued among participants in informal learning environments, providing an understanding of interest as it emerges and is made meaningful in social interaction.

Strand 2: Understanding Science Knowledge

Nature of the Outcome

As progressively more research shows, learning about natural phenomena involves ordinary, everyday experiences for human beings from the earliest ages (National Research Council, 2007). The types of experiences common across the spectrum of informal environments, including everyday settings, do more than provide enjoyment and engagement: they provide substance on which more systematic and coherent conceptual understanding and content structures can be built. Multiple models exist of the ways in which scientific understanding is built over time. Some (e.g., Vosniadou and Brewer, 1992) argue that learners build coherent theories, much like scientists, by integrating their experiences, and others (e.g., diSessa, 1988) argue that scientific knowledge is often constructed of many small fragments that are brought to mind in relevant situations. Either way, small pieces of insight, inferences, or understanding are accepted as vital components of scientific knowledge-building.

Most traditionally valued aspects of science learning fall into this strand: models, facts, factual recall, and application of memorized principles. These aspects of science learning can be abstract and highly curriculum-driven; they are often not the primary focus of informal environments. Assessments that focus on Strand 2 frequently show little or no positive change in Strand 2 outcomes for learners. However, several studies have shown positive learning outcomes, suggesting that even a single visit to an informal learning setting (e.g., an exhibition) may support development or revision of knowledge (Borun, Massey, and Lutter, 1993; Fender and Crowley, 2007; Guichard, 1995; Korn, 2003; McNamara, 2005). At the same time, studies of informal environments for science learning have explored cognitive outcomes that are more compatible with experiential and social activities: perceiving, noticing, and articulating new aspects of the natural world, understanding concepts embedded in interactive experiences, making connections between scientific ideas or experiences and everyday life, reinforcing prior knowledge, making inferences, and building an experiential basis for future abstractions to refer to. Informal experiences have also been shown to be quite memorable over time (see, e.g., Anderson and Piscitelli, 2002; Anderson and Shimizu, 2007).
While the knowledge of most learners is often focused on topics of personal interest, it is important to note that most people do not learn a great deal of science in the context of a single, brief “treatment.” However, this ought not to be considered an entirely negative finding. Consider that learning in school is rarely assessed on the basis of a one- or two-hour class, yet science learning in informal environments is often assessed after exposures that do not exceed one to two hours. Falk and Storksdieck (2005) found that a single visit to an exhibition did increase the scientific content knowledge of at least one-third of the adult visitors, particularly those with low prior knowledge. However, even participants whose learning is not evident in a pre-post design may take away something important: the potential to learn later—what How People Learn refers to as preparation for future learning (National Research Council, 2000). For example, visitors whose interest is sparked (Strand 1) presumably are disposed to build on this experience in the months that follow a science center visit by engaging in other informal learning experiences.
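A pre-post comparison of the kind just discussed reduces, at its simplest, to paired scores per visitor. The Python sketch below computes the mean gain and the share of participants who improved; the scores are invented for illustration, and the sketch omits the significance testing and prior-knowledge controls a real study would require.

```python
# Sketch of a paired pre-post comparison (hypothetical data). Each position
# in the two lists is the same visitor's knowledge score before and after.

def summarize_gains(pre, post):
    """Return the mean score gain and the share of visitors who improved."""
    if len(pre) != len(post) or not pre:
        raise ValueError("pre and post must be nonempty and pair up per visitor")
    gains = [after - before for before, after in zip(pre, post)]
    return {
        "mean_gain": sum(gains) / len(gains),
        "share_improved": sum(1 for g in gains if g > 0) / len(gains),
    }

pre_scores = [3, 5, 4, 6, 2, 5]
post_scores = [5, 5, 6, 6, 4, 4]
result = summarize_gains(pre_scores, post_scores)
print(result["share_improved"])  # 0.5
```

Note that a "share improved" figure, like the one-third reported by Falk and Storksdieck (2005), says nothing about visitors whose gains are delayed or simply not captured by the test items.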

Methods of Researching Strand 2 Outcomes

Outcomes in this category can be the most “loaded” for learners. If not carefully designed, assessments of content knowledge can make learners feel inadequate, which throws the validity of the assessment into question by going against the expectations of learners in relation to the norms of the setting and the social situation. The traditional method for measuring learning (or science literacy) has been to ask textbook-like questions and to judge the nearness of an individual’s answer to the expert’s version of the scientific story. In terms of what researchers know about the nature of learning, this is a limited approach to documenting what people understand about the world around them. This outcome category is also vulnerable to false negatives, because cognitive change is highly individual and difficult to assess in a standardized way. An essential element of informal environments is that learners have some choice in what they attend to, what they take away from an experience, and what connections they make to their own lives. Consequently, testing students only on recall of knowledge can cause researchers to miss key learning outcomes for any particular learner, since these outcomes are based on the learner’s own experience and prior knowledge.

To avert the ethical, practical, and educational pitfalls related to assessing content knowledge, many researchers and evaluators working in informal environments put effort into generating assessments that have nonthreatening content, a breadth of possible responses, comfortable delivery mechanisms, a conversational tone, and appropriateness to the specific audience being targeted. Also, these assessments leave room for unexpected and emergent outcomes.
Questions asked with an understanding of the ways in which people are likely to have incorporated salient aspects of a scientific idea into their own lives appropriately measure their general level of science knowledge and understanding. Yet we also acknowledge that while such measures are well aligned with the goals of informal environments, they lack the objectivity of standardized measures.

An important method for assessing scientific knowledge and understanding in informal environments is the analysis of participants’ conversations. Researchers interested in everyday and after-school settings study science-related discourse and behavior as it occurs in the course of ordinary, ongoing activity (Bell, Bricker, Lee, Reeve, and Zimmerman, 2006; Callanan, Shrager, and Moore, 1995; Sandoval, 2005). Researchers focused on museums and other designed environments have used a variety of schemes to classify these conversations into categories that show that people are doing cognitive work and engaging in sense-making. The categories used in these classification schemes have included: identify, describe, interpret/apply (Borun, Chambers, and Cleghorn, 1996); list, personal synthesis, analysis, synthesis, explanation (Leinhardt and Knutson, 2004); perceptual, conceptual, affective, connecting, strategic (Allen, 2002); and levels of metacognition (Anderson and Nashon, 2007). Most of these categorizations have some theoretical basis, but they are also partly emergent from the data.

A great deal of research has been conducted on the new information, ideas, concepts, and even skills acquired in museums and other designed settings. Some museum researchers have measured content knowledge using think-aloud protocols. In these protocols, a participant goes through a learning experience and talks into a microphone while doing it. O’Neil and Dufresne-Tasse (1997) used a talk-aloud method to show that visitors were very cognitively active when looking at objects, even objects passively displayed. The principal limitation of this method is that it is likely to disrupt the learning process to some degree, not least by eliminating conversation within a visiting group. Beaumont (2005) used a variation of this technique with whole groups by inviting families to think aloud “when appropriate” during their visit to an exhibition.

When studying children, clinical interviews may be helpful for eliciting the ways in which they think about concepts embedded in exhibits, as well as the ways in which their understanding may be advanced or hindered. For example, Feher and Rice (1987, 1988) interviewed children using a series of museum exhibits about light and color to identify common conceptions and suggest modifications to the exhibits.

Several methods are used to elicit the concepts, explanations, arguments, models, and facts related to science that participants generate, understand, and remember after engaging in science learning experiences. These include structured self-reports, in the form of questionnaires, interviews, and focus groups (see Appendix B for a discussion of individual and group interviews).
Self-reports can be used to assess understanding and recall of an individual's experiences, syntheses of big ideas, and information that the respondent says he or she "never knew." For example, a summative evaluation of Search for Life (Korn, 2006) showed that visitors had understood a challenging big idea (that the search for life on other planets begins by looking at extreme environments on Earth that may be similar) but also that they had not thought deeply about issues regarding space exploration or life on other planets.

Researchers also sometimes engage visitors to museums, science centers, and other designed environments in conversations, asking them to talk about their experience in relation to particular issues of interest to the institution, in order to better understand the overlap between the agendas of the institution's staff and the visitors. For example, for each of an exhibition's five primary themes, Leinhardt and Knutson (2004) gave visitors a picture and a statement and coded the ensuing discussion as part of their assessment of learning in the exhibition. Rubrics have been used to code the quality of visitors' descriptions of a particular topic or concept of interest; Perry (1993) called these "knowledge hierarchies" and used them to characterize both baseline understandings and learning from an exhibition. One important underlying assumption in this research is the relationship between thought
References

Anderson, D., and Piscitelli, B. (2002). Parental recollections of childhood museum visits. Museum National, 10(4), 26-27.
Anderson, D., and Shimizu, H. (2007). Factors shaping vividness of memory episodes: Visitors' long-term memories of the 1970 Japan world exposition. Memory, 15(2), 177-191.
Anderson, D., Lucas, K.B., Ginns, I.S., and Dierking, L.D. (2000). Development of knowledge about electricity and magnetism during a visit to a science museum and related post-visit activities. Science Education, 84(5), 658-679.
Astor-Jack, T., Whaley, K.K., Dierking, L.D., Perry, D., and Garibay, C. (2007). Understanding the complexities of socially mediated learning. In J.H. Falk, L.D. Dierking, and S. Foutz (Eds.), In principle, in practice: Museums as learning institutions. Walnut Creek, CA: AltaMira Press.
Azevedo, F.S. (2006). Personal excursions: Investigating the dynamics of student engagement. International Journal of Computers for Mathematical Learning, 11(1), 57-98.
Barron, B. (2006). Interest and self-sustained learning as catalysts of development: A learning ecology perspective. Human Development, 49(4), 193-224.
Bartholomew, H., Osborne, J., and Ratcliffe, M. (2004). Teaching students ideas about science: Five dimensions of effective practice. Science Education, 88(5), 655-682.
Beals, D.E. (1993). Explanatory talk in low-income families' mealtime. Hispanic Journal of Behavioral Sciences, 19(1), 3-33.
Beane, D.B., and Pope, M.S. (2002). Leveling the playing field through object-based service learning. In S. Paris (Ed.), Perspectives on object-centered learning in museums (pp. 325-349). Mahwah, NJ: Lawrence Erlbaum Associates.
Beaumont, L. (2005). Summative evaluation of Wild Reef-Sharks at Shedd. Report for the John G. Shedd Aquarium. Available: http://www.informalscience.com/download/case_studies/report_133.doc [accessed October 2008].
Bell, P., and Linn, M.C. (2002). Beliefs about science: How does science instruction contribute? In B.K. Hofer and P.R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing. Mahwah, NJ: Lawrence Erlbaum Associates.
Bell, P., Bricker, L.A., Lee, T.F., Reeve, S., and Zimmerman, H.H. (2006). Understanding the cultural foundations of children's biological knowledge: Insights from everyday cognition research. In A. Barab, K.E. Hay, and D. Hickey (Eds.), 7th international conference of the learning sciences, ICLS 2006 (vol. 2, pp. 1029-1035). Mahwah, NJ: Lawrence Erlbaum Associates.
Bitgood, S., Serrell, B., and Thompson, D. (1994). The impact of informal education on visitors to museums. In V. Crane, H. Nicholson, M. Chen, and S. Bitgood (Eds.), Informal science learning: What the research says about television, science museums, and community-based projects (pp. 61-106). Dedham, MA: Research Communication.
Borun, M., Chambers, M., and Cleghorn, A. (1996). Families are learning in science museums. Curator, 39(2), 123-138.
Borun, M., Massey, C., and Lutter, T. (1993). Naive knowledge and the design of science museum exhibits. Curator, 36(3), 201-219.
Brickhouse, N.W., Lowery, P., and Schultz, K. (2000). What kind of a girl does science? The construction of school science identities. Journal of Research in Science Teaching, 37(5), 441-458.
Brickhouse, N.W., Lowery, P., and Schultz, K. (2001). Embodying science: A feminist perspective on learning. Journal of Research in Science Teaching, 38(3), 282-295.
Brossard, D., and Shanahan, J. (2003). Do they want to have their say? Media, agricultural biotechnology, and authoritarian views of democratic processes in science. Mass Communication and Society, 6(3), 291-312.
Brossard, D., Lewenstein, B., and Bonney, R. (2005). Scientific knowledge and attitude change: The impact of a citizen science program. International Journal of Science Education, 27(9), 1099-1121.
Brown, B. (2004). Discursive identity: Assimilation into the culture of science and its implications for minority students. Journal of Research in Science Teaching, 41(8), 810-834.
Brown, B., Reveles, J., and Kelly, G. (2004). Scientific literacy and discursive identity: A theoretical framework for understanding science learning. Science Education, 89(5), 779-802.
Calabrese Barton, A. (2003). Teaching science for social justice. New York: Teachers College Press.
Callanan, M.A., and Oakes, L. (1992). Preschoolers' questions and parents' explanations: Causal thinking in everyday activity. Cognitive Development, 7, 213-233.
Callanan, M.A., Shrager, J., and Moore, J. (1995). Parent-child collaborative explanations: Methods of identification and analysis. Journal of the Learning Sciences, 4(1), 105-129.
Campbell, P. (2008, March). Evaluating youth and community programs: In the new ISE framework. In A. Friedman (Ed.), Framework for evaluating impacts of informal science education projects (pp. 69-75). Available: http://insci.org/docs/Eval_Framework.pdf [accessed October 2008].
Carey, S., and Smith, C. (1993). On understanding the nature of scientific knowledge. Educational Psychologist, 28(3), 235-251.
Chaput, S.S., Little, P.M.D., and Weiss, H. (2004). Understanding and measuring attendance in out-of-school time programs. Issues and Opportunities in Out-of-School Time Evaluation, 7, 1-6.
Clipman, J.M. (2005). Development of the museum affect scale and visit inspiration checklist. Paper presented at the 2005 Annual Meeting of the Visitor Studies Association, Philadelphia. Available: http://www.visitorstudiesarchives.org [accessed October 2008].
COSMOS Corporation. (1998). A report on the evaluation of the National Science Foundation's informal science education program. Washington, DC: National Science Foundation. Available: http://www.nsf.gov/pubs/1998/nsf9865/nsf9865.htm [accessed October 2008].
Crowley, K., and Jacobs, M. (2002). Islands of expertise and the development of family scientific literacy. In G. Leinhardt, K. Crowley, and K. Knutson (Eds.), Learning conversations in museums (pp. 333-356). Mahwah, NJ: Lawrence Erlbaum Associates.
Csikszentmihalyi, M., and Hermanson, K. (1995). Intrinsic motivation in museums: Why does one want to learn? In J.H. Falk and L.D. Dierking (Eds.), Public institutions for personal learning: Establishing a research agenda. Washington, DC: American Association of Museums.
Csikszentmihalyi, M., Rathunde, K., and Whalen, S. (1993). Talented teenagers: The roots of success and failure. New York: Cambridge University Press.
Dancu, T. (2006). Comparing three methods for measuring children's engagement with exhibits: Observations, caregiver interviews, and child interviews. Poster presented at the 2006 Annual Meeting of the Visitor Studies Association, Grand Rapids, MI.
Delandshere, G. (2002). Assessment as inquiry. Teachers College Record, 104(7), 1461-1484.
Diamond, J. (1999). Practical evaluation guide: Tools for museums and other educational settings. Walnut Creek, CA: AltaMira Press.
Dierking, L.D., Adelman, L.M., Ogden, J., Lehnhardt, K., Miller, L., and Mellen, J.D. (2004). Using a behavior change model to document the impact of visits to Disney's Animal Kingdom: A study investigating intended conservation action. Curator, 47(3), 322-343.
diSessa, A. (1988). Knowledge in pieces. In G. Forman and P. Pufall (Eds.), Constructivism in the computer age (pp. 49-70). Mahwah, NJ: Lawrence Erlbaum Associates.
Driver, R., Leach, J., Millar, R., and Scott, P. (1996). Young people's images of science. Buckingham, England: Open University Press.
Eccles, J.S., Wigfield, A., and Schiefele, U. (1998). Motivation to succeed. In N. Eisenberg (Ed.), Handbook of child psychology: Social, emotional, and personality development (5th ed., pp. 1017-1095). New York: Wiley.
Ekman, P., and Rosenberg, E. (Eds.). (2005). What the face reveals: Basic and applied studies of spontaneous expression using the facial action coding system. New York: Oxford University Press.
Ellenbogen, K.M. (2002). Museums in family life: An ethnographic case study. In G. Leinhardt, K. Crowley, and K. Knutson (Eds.), Learning conversations in museums. Mahwah, NJ: Lawrence Erlbaum Associates.
Ellenbogen, K.M. (2003). From dioramas to the dinner table: An ethnographic case study of the role of science museums in family life. Dissertation Abstracts International, 64(3), 846A. (University Microfilms No. AAT30-85758.)
Ellenbogen, K.M., Luke, J.J., and Dierking, L.D. (2004). Family learning research in museums: An emerging disciplinary matrix? Science Education, 88(Suppl. 1), S48-S58.
Engle, R.A., and Conant, F.R. (2002). Guiding principles for fostering productive disciplinary engagement: Explaining an emergent argument. Cognition and Instruction, 20(4), 399-483.
European Commission. (2001). Eurobarometer 55.2: Europeans, science and technology. Brussels, Belgium: Author.
Fadigan, K.A., and Hammrich, P.L. (2004). A longitudinal study of the educational and career trajectories of female participants of an urban informal science education program. Journal of Research in Science Teaching, 41(8), 835-860.
Falk, J.H. (2006). The impact of visit motivation on learning: Using identity as a construct to understand the visitor experience. Curator, 49(2), 151-166.
Falk, J.H. (2008). Calling all spiritual pilgrims: Identity in the museum experience. Museum (Jan./Feb.). Available: http://www.aam-us.org/pubs/mn/spiritual.cfm [accessed March 2009].
Falk, J.H., and Adelman, L.M. (2003). Investigating the impact of prior knowledge and interest on aquarium visitor learning. Journal of Research in Science Teaching, 40(2), 163-176.
Falk, J.H., and Dierking, L.D. (2000). Learning from museums. Walnut Creek, CA: AltaMira Press.
Falk, J.H., and Storksdieck, M. (2005). Using the "contextual model of learning" to understand visitor learning from a science center exhibition. Science Education, 89(5), 744-778.
Falk, J.H., Moussouri, T., and Coulson, D. (1998). The effect of visitors' agendas on museum learning. Curator, 41(2), 107-120.
Falk, J.H., Reinhard, E.M., Vernon, C.L., Bronnenkant, K., Deans, N.L., and Heimlich, J.E. (2007). Why zoos and aquariums matter: Assessing the impact of a visit. Silver Spring, MD: Association of Zoos and Aquariums.
Falk, J.H., Scott, C., Dierking, L.D., Rennie, L.J., and Cohen-Jones, M.S. (2004). Interactives and visitor learning. Curator, 47(2), 171-198.
Feher, E., and Rice, K. (1987). Pinholes and images: Children's conceptions of light and vision. I. Science Education, 71(4), 629-639.
Feher, E., and Rice, K. (1988). Shadows and anti-images: Children's conceptions of light and vision. II. Science Education, 72(5), 637-649.
Fender, J.G., and Crowley, K. (2007). How parent explanation changes what children learn from everyday scientific thinking. Journal of Applied Developmental Psychology, 28(3), 189-210.
Gallenstein, N. (2005). Never too young for a concept map. Science and Children, 43(1), 45-47.
Garibay, C. (2005, July). Visitor studies and underrepresented audiences. Paper presented at the 2005 Visitor Studies Conference, Philadelphia.
Garibay, C. (2006, January). Primero la Ciencia remedial evaluation. Unpublished manuscript, Chicago Botanic Garden.
Gration, M., and Jones, J. (2008, May/June). Learning from the process: Developmental evaluation within "agents of change." ASTC Dimensions, From Intent to Impact: Building a Culture of Evaluation. Available: http://www.astc.org/blog/2008/05/16/from-intent-to-impact-building-a-culture-of-evaluation/ [accessed April 2009].
Guichard, H. (1995). Designing tools to develop the conception of learners. International Journal of Science Education, 17(2), 243-253.
Gupta, P., and Siegel, E. (2008). Science career ladder at the New York Hall of Science: Youth facilitators as agents of inquiry. In R.E. Yaeger and J.H. Falk (Eds.), Exemplary science in informal education settings: Standards-based success stories. Arlington, VA: National Science Teachers Association.
Gutiérrez, K., and Rogoff, B. (2003). Cultural ways of learning: Individual traits or repertoires of practice. Educational Researcher, 32(5), 19-25.
Hammer, D., and Elby, A. (2003). Tapping epistemological resources for learning physics. Journal of the Learning Sciences, 12(1), 53-90.
Heath, S.B. (1999). Dimensions of language development: Lessons from older children. In A.S. Masten (Ed.), Cultural processes in child development: The Minnesota symposium on child psychology (vol. 29, pp. 59-75). Mahwah, NJ: Lawrence Erlbaum Associates.
Hein, G.E. (1995). Evaluating teaching and learning in museums. In E. Hooper-Greenhill (Ed.), Museums, media, message (pp. 189-203). New York: Routledge.
Hein, G.E. (1998). Learning in the museum. New York: Routledge.
Hidi, S., and Renninger, K.A. (2006). The four-phase model of interest development. Educational Psychologist, 41(2), 111-127.
Holland, D., and Lave, J. (Eds.). (2001). History in person: Enduring struggles, contentious practice, intimate identities. Albuquerque, NM: School of American Research Press.
Holland, D., Lachicotte, W., Skinner, D., and Cain, C. (1998). Identity and agency in cultural worlds. Cambridge, MA: Harvard University Press.
Huba, M.E., and Freed, J. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Needham Heights, MA: Allyn and Bacon.
Hull, G.A., and Greeno, J.G. (2006). Identity and agency in nonschool and school worlds. In Z. Bekerman, N. Burbules, and D.S. Keller (Eds.), Learning in places: The informal education reader (pp. 77-97). New York: Peter Lang.
Humphrey, T., and Gutwill, J.P. (2005). Fostering active prolonged engagement: The art of creating APE exhibits. San Francisco: The Exploratorium.
Isen, A.M. (2004). Some perspectives on positive feelings and emotions: Positive affect facilitates thinking and problem solving. In A.S.R. Manstead, N. Frijda, and A. Fischer (Eds.), Feelings and emotions: The Amsterdam symposium (pp. 263-281). New York: Cambridge University Press.
Jackson, A., and Leahy, H.R. (2005). "Seeing it for real?" Authenticity, theater and learning in museums. Research in Drama Education, 10(3), 303-325.
Jacoby, S., and Gonzales, P. (1991). The constitution of expert-novice in scientific discourse. Issues in Applied Linguistics, 2(2), 149-181.
Jolly, E.J., Campbell, P.B., and Perlman, L. (2004). Engagement, capacity, and continuity: A trilogy for student success. St. Paul: GE Foundation and Science Museum of Minnesota.
Jung, T., Makeig, S., Stensmo, M., and Sejnowski, T.J. (1997). Estimating alertness from the EEG power spectrum. Biomedical Engineering, 44(1), 60-69.
Korn, R. (2003). Summative evaluation of "Vanishing Wildlife." Monterey, CA: Monterey Bay Aquarium. Available: http://www.informalscience.org/evaluations/report_45.pdf [accessed October 2008].
Korn, R. (2006). Summative evaluation for "Search for Life." Queens: New York Hall of Science. Available: http://www.informalscience.org/evaluation/show/66 [accessed October 2008].
Kort, B., Reilly, R., and Picard, R.W. (2001). An affective model of interplay between emotions and learning: Reengineering educational pedagogy—Building a learning companion. In Proceedings of the IEEE International Conference on Advanced Learning Technologies, Madison, WI.
Lave, J. (1996). Teaching, as learning, in practice. Mind, Culture, and Activity, 3(3), 149-164.
Leinhardt, G., and Knutson, K. (2004). Listening in on museum conversations. Walnut Creek, CA: AltaMira Press.
Leinhardt, G., Tittle, C., and Knutson, K. (2002). Talking to oneself: Diaries of museum visits. In G. Leinhardt, K. Crowley, and K. Knutson (Eds.), Learning conversations in museums (pp. 103-133). Mahwah, NJ: Lawrence Erlbaum Associates.
Lipstein, R., and Renninger, K.A. (2006). "Putting things into words": The development of 12-15-year-old students' interest for writing. In P. Boscolo and S. Hidi (Eds.), Motivation and writing: Research and school practice (pp. 113-140). New York: Kluwer Academic/Plenum.
Loomis, R.J. (1989). The countenance of visitor studies in the 1980s. Visitor Studies, 1(1), 12-24.
Lu, S., and Graesser, A.C. (in press). An eye tracking study on the roles of texts, pictures, labels, and arrows during the comprehension of illustrated texts on device mechanisms. Submitted to Cognitive Science.
Ma, J. (2006). Philosopher's corner. Unpublished report. Available: http://www.exploratorium.edu/partner/pdf/philCorner_rp_02.pdf [accessed October 2008].
Martin, L.M. (2004). An emerging research framework for studying informal learning and schools. Science Education, 88(Suppl. 1), S71-S82.
McCreedy, D. (2005). Youth and science: Engaging adults as advocates. Curator, 48(2), 158-176.
McDermott, R., and Varenne, H. (1995). Culture as disability. Anthropology and Education Quarterly, 26(3), 324-348.
McNamara, P. (2005). Amazing feats of aging: A summative evaluation report. Portland: Oregon Museum of Science and Industry. Available: http://www.informalscience.org/evaluation/show/82 [accessed October 2008].
Meisner, R., vom Lehn, D., Heath, C., Burch, A., Gammon, B., and Reisman, M. (2007). Exhibiting performance: Co-participation in science centres and museums. International Journal of Science Education, 29(12), 1531-1555.
Melton, A.W. (1935). Problems of installation in museums of art. Washington, DC: American Association of Museums.
Myers, O.E., Saunders, C.D., and Birjulin, A.A. (2004). Emotional dimensions of watching zoo animals: An experience sampling study building on insights from psychology. Curator, 47(3), 299-321.
Nasir, N.S. (2002). Identity, goals, and learning: Mathematics in cultural practices. Mathematical Thinking and Learning, 4(2-3), 213-247.
Nasir, N.S., Rosebery, A.S., Warren, B., and Lee, C.D. (2006). Learning as a cultural process: Achieving equity through diversity. In R.K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 489-504). New York: Cambridge University Press.
National Research Council. (1996). National science education standards. National Committee on Science Education Standards and Assessment. Washington, DC: National Academy Press.
National Research Council. (2000). How people learn: Brain, mind, experience, and school (expanded ed.). Committee on Developments in the Science of Learning. J.D. Bransford, A.L. Brown, and R.R. Cocking (Eds.). Washington, DC: National Academy Press.
National Research Council. (2001). Knowing what students know: The science and design of educational assessment. Committee on the Foundations of Assessment. J.W. Pellegrino, N. Chudowsky, and R. Glaser (Eds.). Washington, DC: National Academy Press.
National Research Council. (2002). Scientific research in education. Committee on Scientific Principles for Education Research. R.J. Shavelson and L. Towne (Eds.). Washington, DC: National Academy Press.
National Research Council. (2007). Taking science to school: Learning and teaching science in grades K-8. Committee on Science Learning, Kindergarten Through Eighth Grade. R.A. Duschl, H.A. Schweingruber, and A.W. Shouse (Eds.). Washington, DC: The National Academies Press.
National Science Board. (2002). Science and engineering indicators—2002 (NSB-02-1). Arlington, VA: National Science Foundation. Available: http://www.nsf.gov/statistics/seind02/pdfstart.htm [accessed October 2008].
Neumann, A. (2006). Professing passion: Emotion in the scholarship of professors in research universities. American Educational Research Journal, 43(3), 381-424.
O'Neill, M.C., and Dufresne-Tasse, C. (1997). Looking in everyday life/Gazing in museums. Museum Management and Curatorship, 16(2), 131-142.
Osborne, J., Collins, S., Ratcliffe, M., Millar, R., and Duschl, R. (2003). What "ideas-about-science" should be taught in school science? A Delphi study of the expert community. Journal of Research in Science Teaching, 40(7), 692-720.
Panksepp, J. (1998). Affective neuroscience: The foundations of human and animal emotions. New York: Oxford University Press.
Penner, D., Giles, N.D., Lehrer, R., and Schauble, L. (1997). Building functional models: Designing an elbow. Journal of Research in Science Teaching, 34(2), 125-143.
Perry, D.L. (1993). Measuring learning with the knowledge hierarchy. Visitor Studies: Theory, Research and Practice: Collected Papers from the 1993 Visitor Studies Conference, 6, 73-77.
Plutchik, R. (1961). Studies of emotion in the light of a new theory. Psychological Reports, 8, 170.
Randol, S.M. (2005). The nature of inquiry in science centers: Describing and assessing inquiry at exhibits. Unpublished doctoral dissertation, University of California, Berkeley.
Raphling, B., and Serrell, B. (1993). Capturing affective learning. Current Trends in Audience Research and Evaluation, 7, 57-62.
Reich, C., Chin, E., and Kunz, E. (2006). Museums as forum: Engaging science center visitors in dialogue with scientists and one another. Informal Learning Review, 79, 1-8.
Renninger, K.A. (2000). Individual interest and its implications for understanding intrinsic motivation. In C. Sansone and J.M. Harackiewicz (Eds.), Intrinsic motivation: Controversies and new directions (pp. 373-404). San Diego: Academic Press.
Renninger, K.A. (2003). Effort and interest. In J. Guthrie (Ed.), The encyclopedia of education (2nd ed., pp. 704-707). New York: Macmillan.
Renninger, K.A., and Hidi, S. (2002). Interest and achievement: Developmental issues raised by a case study. In A. Wigfield and J. Eccles (Eds.), Development of achievement motivation (pp. 173-195). New York: Academic Press.
Renninger, K.A., and Wozniak, R.H. (1985). Effect of interests on attentional shift, recognition, and recall in young children. Developmental Psychology, 21(4), 624-631.
Renninger, K.A., Hidi, S., and Krapp, A. (1992). The role of interest in learning and development. Mahwah, NJ: Lawrence Erlbaum Associates.
Renninger, K.A., Sansone, C., and Smith, J.L. (2004). Love of learning. In C. Peterson and M.E.P. Seligman (Eds.), Character strengths and virtues: A handbook and classification (pp. 161-179). New York: Oxford University Press.
Robinson, E.S. (1928). The behavior of the museum visitor. New Series, No. 5. Washington, DC: American Association of Museums.
Rockman Et Al. (1996). Evaluation of Bill Nye the Science Guy: Television series and outreach. San Francisco: Author. Available: http://www.rockman.com/projects/124.kcts.billNye/BN96.pdf [accessed October 2008].
Rockman Et Al. (2007). Media-based learning science in informal environments. Background paper for the Learning Science in Informal Environments Committee of the National Research Council. Available: http://www7.nationalacademies.org/bose/Rockman_et%20al_Commissioned_Paper.pdf [accessed October 2008].
Rogoff, B. (2003). The cultural nature of human development. New York: Oxford University Press.
Rosenberg, S., Hammer, D., and Phelan, J. (2006). Multiple epistemological coherences in an eighth-grade discussion of the rock cycle. Journal of the Learning Sciences, 15(2), 261-292.
Roth, E.J., and Li, E. (2005, April). Mapping the boundaries of science identity in ISME's first year. Paper presented at the annual meeting of the American Educational Research Association, Montreal.
Rounds, J. (2006). Doing identity work in museums. Curator, 49(2), 133-150.
Russell, J.A., and Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11, 273-294.
Sachatello-Sawyer, B., Fellenz, R.A., Burton, H., Gittings-Carlson, L., Lewis-Mahony, J., and Woolbaugh, W. (2002). Adult museum programs: Designing meaningful experiences. American Association for State and Local History Book Series. Blue Ridge Summit, PA: AltaMira Press.
Sandoval, W.A. (2005). Understanding students' practical epistemologies and their influence on learning through inquiry. Science Education, 89(4), 634-656.
Schauble, L., Glaser, R., Duschl, R., Schulze, S., and John, J. (1995). Students' understanding of the objectives and procedures of experimentation in the science classroom. Journal of the Learning Sciences, 4(2), 131-166.
Schreiner, C., and Sjoberg, S. (2004). Sowing the seeds of ROSE. Background, rationale, questionnaire development and data collection for ROSE (relevance of science education)—A comparative study of students' views of science and science education. Department of Teacher Education and School Development, University of Oslo.
Schwartz, D.L., Bransford, J.D., and Sears, D. (2005). Efficiency and innovation in transfer. In J.P. Mestre (Ed.), Transfer of learning from a modern multidisciplinary perspective (pp. 1-51). Greenwich, CT: Information Age.
Schwartz, R.S., and Lederman, N.G. (2002). "It's the nature of the beast": The influence of knowledge and intentions on learning and teaching nature of science. Journal of Research in Science Teaching, 39(3), 205-236.
Serrell, B. (1998). Paying attention: Visitors and museum exhibitions. Washington, DC: American Association of Museums.
Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.
Shute, V.J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189.
Smith, C.L., Maclin, D., Houghton, C., and Hennessey, M.G. (2000). Sixth-grade students' epistemologies of science: The impact of school science experiences on epistemological development. Cognition and Instruction, 18, 349-422.
Songer, N.B., and Linn, M.C. (1991). How do students' views of science influence knowledge integration? Journal of Research in Science Teaching, 28(9), 761-784.
Spock, M. (2000). On beyond now: Strategies for assessing the long-term impact of museum experiences. Panel discussion at the American Association of Museums Conference, Baltimore.
St. John, M., and Perry, D.L. (1993). Rethink role, science museums urged. ASTC Newsletter, 21(5), 1, 6-7.
Steele, C.M. (1997). A threat in the air: How stereotypes shape the intellectual identities and performance of women and African Americans. American Psychologist, 52(6), 613-629.
Stevens, R. (2007). Capturing ideas in digital things: The Traces digital annotation medium. In R. Goldman, B. Barron, and R. Pea (Eds.), Video research in the learning sciences. Cambridge: Cambridge University Press.
Stevens, R., and Hall, R.L. (1997). Seeing the "tornado": How "video traces" mediate visitor understandings of natural spectacles in a science museum. Science Education, 81(6), 735-748.
Stevens, R., and Toro-Martell, S. (2003). Leaving a trace: Supporting museum visitor interpretation and interaction with digital media annotation systems. Journal of Museum Education, 28(2), 25-31.
Tai, R.H., Liu, C.Q., Maltese, A.V., and Fan, X. (2006). Planning early for careers in science. Science, 312(5777), 1143-1144.
Taylor, R. (1994). The influence of a visit on attitude and behavior toward nature conservation. Visitor Studies, 6(1), 163-171.
Thompson, S., and Bonney, R. (2007, March). Evaluating the impact of participation in an on-line citizen science project: A mixed-methods approach. In J. Trant and D. Bearman (Eds.), Museums and the web 2007: Proceedings. Toronto: Archives and Museum Informatics. Available: http://www.archimuse.com/mw2007/papers/thompson/thompson.html [accessed October 2008].
Travers, R.M.W. (1978). Children's interests. Kalamazoo: Michigan State University, College of Education.
Van Luven, P., and Miller, C. (1993). Concepts in context: Conceptual frameworks, evaluation and exhibition development. Visitor Studies, 5(1), 116-124.
vom Lehn, D., Heath, C., and Hindmarsh, J. (2001). Exhibiting interaction: Conduct and collaboration in museums and galleries. Symbolic Interaction, 24(2), 189-216.
vom Lehn, D., Heath, C., and Hindmarsh, J. (2002). Video-based field studies in museums and galleries. Visitor Studies, 5(3), 15-23.
Vosniadou, S., and Brewer, W.F. (1992). Mental models of the earth: A study of conceptual change in childhood. Cognitive Psychology, 24(4), 535-585.
Warren, B., Ballenger, C., Ogonowski, M., Rosebery, A., and Hudicourt-Barnes, J. (2001). Rethinking diversity in learning science: The logic of everyday sense-making. Journal of Research in Science Teaching, 38, 529-552.
Weinburgh, M.H., and Steele, D. (2000). The modified attitude toward science inventory: Developing an instrument to be used with fifth grade urban students. Journal of Women and Minorities in Science and Engineering, 6(1), 87-94.
Wilson, M. (Ed.). (2004). Towards coherence between classroom assessment and accountability. Chicago: University of Chicago Press.