The National Academies of Sciences, Engineering, and Medicine
500 Fifth St. N.W. | Washington, D.C. 20001

Copyright © National Academy of Sciences. All rights reserved.


APPENDIX B
Some Technical Considerations in Assessment

Interviewing Groups Versus Individuals

Learners in informal environments, such as museums, generally participate in multigenerational groups rather than as individuals, and these groups often move loosely through the environment, splitting and reforming as members make new discoveries and share what they are experiencing with one another. This makes interviewing particularly challenging. It is difficult to craft interview questions that are suitable for a broad range of people, and the learning experience shifts frequently between an individual and a group focus. There are trade-offs between interviewing individuals and groups:

(a) Interviewing individuals has the advantage that participants do not influence each other's opinions, that the resulting data are amenable to statistical methods with equal weighting for each person, and that the time taken to conduct an interview is relatively short. However, such interviews require selecting an individual interviewee (and often individuals prefer to self-nominate rather than accept a random sampling method), locating the rest of the group to explain what is happening, ensuring that minors have appropriate child care, and finding a nearby yet quiet location to conduct the interview. Also, parents often respond to questions by reporting on what they think their children learned rather than what they themselves learned, because their children's experience is often their framing reason for attending. Unless parents' interpretations of children's experience are the focus of the study or the child is too young to be interviewed, this approach is problematic because it relies on indirect inference rather than self-report.

(b) Interviewing groups has the advantage of not separating group members, so families are more likely to agree to participate. Also, the responses they give, as a group, reflect the actual learning that occurred while the group members were jointly engaged in activity. One disadvantage of group interviews is that one member may dominate (typically an adult), and group members often fall into agreement with each other's opinions. One way to reduce this tendency is to question members individually but in inverse order of status. However, some researchers feel that unequal power dynamics are likely to be representative of the learning dynamic, and that interviews with asymmetrical participation are therefore authentic. Group interviews also present the problem of how to quantify data from groups of different sizes, particularly if the study attempts to characterize frequencies of responses. Some researchers code response frequencies into three categories: "1," "2," and "many." Finally, although group interviews are more relaxing for participants, there is rarely time to ask all questions equitably before the group becomes restless, so most persons in the group typically do not complete the interview. Alternatively, interviews conducted from a more qualitative or naturalistic perspective may allow for a much looser participation structure by the group members, but they require extended and careful analysis by the researcher afterward.

Control Groups

Because informal environments emphasize learning by choice, random assignment of learners to treatment and control groups may sometimes be logistically impossible, upsetting to the learners, threatening to the study's validity, or all of the above. In such cases, it may be desirable to reference a comparison group that is not a strict control but that provides some sense of plausible baseline behavior (data from visitors to other museums or exhibitions, literature that cites common knowledge, behaviors, or attitudes toward a topic, etc.).

Video- and Audiotaping

With increasing interest in such process-based outcomes as engagement, conversations, and actions, research in informal environments has made increasing use of recording systems, such as audio- and videotape. These raise both technical and ethical issues. Technically, the main challenge is often to obtain audio of sufficiently high quality to hear what people are saying above the ambient noise. Attempted solutions include using a Dictaphone (Borun, Chambers, and Cleghorn, 1996), having participants wear cordless microphones (e.g., Leinhardt and Knutson, 2004), or placing microphones on individual exhibits (e.g., Gutwill, 2003). The ethical issues, namely the need to have visitors give informed consent to being recorded, have been addressed by posting signs, augmenting posted signs (Gutwill, 2003), asking for consent when visitors arrive and placing a sticker on their clothing to alert the videographer (Crowley and Callanan, 1998), or getting explicit consent as visitors enter a space. Such methods are generally compromises, and researchers should always refer to their local institutional review board for approval of their specific data collection method.

Time as a Measure of Learning

In environments such as museums, botanical gardens, and zoos, where learners move freely through a physical space of options, time spent ("holding time" or "dwell time") is a commonly used measure of impact in summative evaluations. At the same time, there is controversy about what exactly it assesses in relation to learning. There are various approaches to thinking about time, including:

(a) Some researchers regard it as a necessary but not sufficient condition for learning. In this view, learners need to pause and engage with objects, people, or activities in order to have a chance to learn from them, but learning is not necessarily linearly related to time spent. Some researchers have interpreted histograms of holding time as bimodal or multimodal, revealing different audience characteristics in terms of background or motivation (browsers, grazers, etc.), but these interpretations are controversial: most exhibitions show a single peak at the short end of the spectrum of time spent (Serrell, 1998, 2001).

(b) Some regard it as an indicator of learning, invoking the well-established principle that time on task is the most universal correlate of learning across contexts. However, the meaning of "on task" is particularly ambiguous in free-choice environments (Shettel, 1997), as is the definition of learning. A few studies have shown direct evidence that time spent in exhibitions correlates with learning, as measured by previsit questionnaires on the exhibit topic (Abler, 1968) or free recall of objects seen (Barnard and Loomis, 1994).
(c) Some regard time spent as a direct measure of learning, defined as engagement in socially sanctioned collaborative activity. From this sociocultural perspective, participants are learning throughout their engagement, although the exact nature of what they learn may be quite different from institutional expectations.

Internet Surveys

Increasingly, the Internet is being used to conduct surveys of learners. These may be assessments of online resources, or they may ask about previous experiences in another setting (such as a museum visit, viewing of a TV series, etc.). They may be contained within emails or, increasingly, be web-based. Compared with paper surveys, Internet surveys are inexpensive and generate quick responses, but they often raise concerns about response rates and biased populations of respondents. For recent reviews of the literature on web surveys in informal science learning environments, including suggestions for effective design and usage, see Parsons (2007), Yalowitz and Ferguson (2007), and Storksdieck (2007).

REFERENCES

Abler, T.S. (1968). Traffic patterns and exhibit design: A study of learning in the museum. In S.F. DeBorhegyi and I. Hanson (Eds.), The museum visitor (vol. 3, pp. 103-141). Milwaukee, WI: Milwaukee Public Museum.
Barnard, W.A., and Loomis, R.J. (1994). The museum exhibit as a visual learning medium. Visitor Behavior, 9 (2), 14-17.
Borun, M., Chambers, M., and Cleghorn, A. (1996). Families are learning in science museums. Curator, 39 (2), 123-138.
Crowley, K., and Callanan, M. (1998). Describing and supporting collaborative scientific thinking in parent-child interactions. Journal of Museum Education, 17 (1), 12-17.
Gutwill, J. (2003). Gaining visitor consent for research II: Improving the posted sign method. Curator, 46 (2), 228-235.
Leinhardt, G., and Knutson, K. (2004). Listening in on museum conversations. Walnut Creek, CA: AltaMira Press.
Parsons, C. (2007). Web-based surveys: Best practices based on the research literature. Visitor Studies, 10 (1), 13-33.
Serrell, B. (1998). Paying attention: Visitors and museum exhibitions. Washington, DC: American Association of Museums.
Serrell, B. (2001). In search of the elusive bimodal distribution. Visitor Studies Today, 4 (2), 4-9.
Shettel, H.H. (1997). Time—is it really of the essence? Curator, 40, 246-249.
Storksdieck, M. (2007). Using web surveys in early front-end evaluations with open populations: A case study of amateur astronomers. Visitor Studies, 10 (1), 47-54.
Yalowitz, S., and Ferguson, A. (2007). Using web surveys in summative evaluations: A case study at the Monterey Bay Aquarium. Visitor Studies, 10 (1), 34-46.
