NOTE: A revised and shorter version of this report has been published:

Taylor, R. B., Anderson, T., and McConnell, P. (2003). Competencies and interest in a problem-focused undergraduate research methods criminal justice course: Two assessments. Journal of Criminal Justice Education 14: 133-148.

 

Competencies and Interest in a Problem-Focused

Introduction to Research Methods in Criminal Justice Course: Two Assessments*

 

Ralph B. Taylor

 
Department of Criminal Justice

Temple University

1115 West Berks St.

Philadelphia, PA 19122

voice: 215.204.7169

email: ralph@blue.temple.edu

 

* The views expressed here are solely those of the author, and do not necessarily reflect the views of Temple University, the College of Liberal Arts, or the Department of Criminal Justice. The author thanks Todd Anderson and Patrick McConnell who served as teaching assistants in 1999 and 2000, respectively, and provided project support. Ruth Karras, Associate Dean, College of Liberal Arts, provided support to the Department of Criminal Justice during spring semester 2000 for a competencies assessment focused on Introduction to Criminal Justice. That support, and the resulting activities, contributed substantively to this effort. Pat Jenkins and Ron Davis provided helpful comments on earlier drafts. For the two offerings described here, details on the course structure, assignments, and resources can be found at: http://www.rbtaylor.net.

Abstract

Two competencies assessments were carried out in two sections of an undergraduate criminal justice research methods course a year apart. The specific competency areas assessed were the ability to interpret a complex graphical data display, understanding of a quasi-experimental research design, and use of and views toward web-based research tools. The course used two specific criminal justice problems - drunk driving and guns - as vehicles to increase students' interest in research methods topics. Results showed that in both years students performed significantly better than chance at the end of the semester in understanding the data display and in interpreting the research design. Use of and views about the utility of web research sources increased over the semester for the first course, but did not shift during the second course offering. Self-reports on interest in the topic areas showed both classes reporting increased interest in the problems addressed. Implications for an interest-enhancement vs. a competencies-enhancement view of undergraduate criminal justice education in a liberal arts tradition are discussed.

Competencies and Interest in a Problem-Focused

Introduction to Research Methods in Criminal Justice Course: Two Assessments

 

The present investigation reports on two competencies assessments made a year apart for an undergraduate criminal justice research methods course in a liberal arts college at a large, multi-ethnic urban university. The purpose of the assessments was to gauge skills students had gained in specific areas, and self-reported shifts in their interests. The course format introduced students to two specific policy-relevant criminal justice issues, and used these issues, as reflected in the literature, class discussions, and students' responses to a mass questionnaire, as a vehicle for introducing research methods topics. The introduction briefly summarizes some current discussions about different assessment vehicles, and about assessment in the broader context of the goals of liberal arts education. Next, the reader is oriented to specific developments in the college, the home department, and the specific course that set the context for the current effort. The introduction concludes with a specification of the outcomes of interest.

 

Questions of How to Assess

Any university professor committed to instruction is also committed to assessment. We can't know how well we are teaching unless we gather systematic feedback. At many universities end-of-semester evaluations are widely if not universally implemented. Such evaluations are useful for a broad range of purposes. The feedback provides a rough evaluation of course and instructor effectiveness. The ratings also can be used to suggest different dimensions of teacher effectiveness (Patrick & Smart 1998). But such broad assessment instruments provide results that depend on far more than course content, personability of instructor, and presentation effectiveness. Ratings are influenced consistently by received and expected grades (Hamilton 1980; Greenwald 1997), gender of instructor and students (Dukes & Victoria 1989; Tieman & Rankin-Ullock 1985), instructor status (Hamilton 1980), interactions between instructor and student characteristics (Dukes & Victoria 1989), and broader factors such as class size (Hamilton 1980). But most importantly, such end-of-semester evaluation efforts cannot clarify what it is that students have learned in a specific course. Although the latter has traditionally been viewed as reflected in students' grades, there are at least as many factors beyond what students have learned that affect their grades as affect their ratings of professors. To take just one example, different students excel at different types of assessments, and courses that fail to incorporate assessment strategies that play to a student's strengths make it more difficult for him/her to do well on grades, regardless of how much has been learned (Hedley 1978). In part as a result of these various limitations, and of growing concerns about the value of higher education generally, efforts to assess the educational impacts of college courses more systematically have widened in the last two decades.

 

Questions about the Goals of Liberal Arts Instruction

The question of specifying and assessing educational impacts has proven particularly intriguing within a liberal arts context. Although criminal justice departments around the country find themselves in a variety of academic homes, and sometimes in separate colleges, many do reside in liberal arts colleges or institutions. So questions about the impact of criminal justice courses are bound up with broader questions about the goals of a liberal arts education generally.

There appear to be three perspectives on the value of a liberal arts education. Dating from the 18th and 19th centuries, and holding sway until at least the 1970s, was the idea that liberal arts was an end in itself (Mann 1999). Usually this end was gained by mastering a specified set of content in a specified set of areas (Flanagan 2000). Of course in this country, with the rise of technical schools from the 1830s to the 1850s as part of efforts to provide skilled labor to rapidly changing industries, technical or polytechnic education also appeared, with institutions devoted solely to those areas (Katz 1975). So the polytechnic model has stood in opposition to the liberal arts model for quite some time, and this opposition or complementarity (depending on one's perspective) has been an enduring feature of higher education.

But the idea of an intrinsically valuable liberal arts education -- learning for learning's sake, appropriate for a large segment of society -- has received extensive criticism over the past three if not five decades. Part of the shift in the U.S. may be linked to the massive increase in higher education opportunities linked to the post-WW II GI bill (Bennett 1994), and the accompanying increased emphasis on higher education as a bridge to work (Kerr 1994). In the U.K. the rationale for liberal arts courses shifted in the 1970s to a focus on competencies, or core skills, or key skills (Drew 1998; Mann 1999). "This was the start of the decline of education being viewed as an end in itself, available only to the wealthy who could afford such educational advancement without it necessarily leading to career advancement. This was the start of the replacement of 'liberal education' for the few with 'skills approaches' for the many" (Mann 1999:439). Beyond the expansion of higher education opportunities generally here and abroad, numerous other factors undoubtedly contributed to the shift as well: higher unemployment among liberal arts graduates than among graduates in the areas of computers, business, and other specifically marketable skills; and the increasing pace of technological and societal change, for example. One could argue how significant the shift has been, which schools have been most affected, and how much specific factors have contributed to it. But as a starting point perhaps we can agree that to some extent in some number of schools there is increasing concern about the marketability of liberal arts graduates, and that a reflection of that concern is an effort to specify competencies that these students acquire en route to their degrees.

A third perspective on liberal arts (Freedman 1996) also focuses on outcomes, like the skills approach, but concentrates on psychological and behavioral transformation. As a result of a liberal arts education one should be more interested in a problem or issue or idea, have enhanced skills to critically examine a topic and be more willing to do so, and generally be more curious. The presumption is that these psychological changes lead to changes in how one leads one's life and contributes to society.

At the present time the second of these approaches, the focus on competencies or core skills, seems to hold sway in higher education both in the U.S. and in Britain. There are, however, critics of the focus on competencies. For example, some suggest that such a view places too much emphasis on assessment without taking the time to define what learning is in the first place (Bowden & Marton 1998), and thereby sidetracks discussion away from deeper issues relevant to university education. Such criticisms are worthwhile, and deserve attention, but are not the focus here.

 

Defining Competencies

With the idea of competencies or core skills holding main stage, much discussion has focused on defining what those competencies are. Educators who criss-cross the country speaking with university faculty argue that broad agreement exists about the list of specific skills or competencies required for a "successful" college graduate.

 

They typically include skills in communicating (writing, speaking, reading, and listening), mathematics (especially basic statistics), problem solving, and critical thinking; interpersonal skills (such as working in groups and leading them); computer literacy; and, most recently, appreciation of cultural diversity and the ability to adapt to innovation and change. Of course, most of us would add to this list knowledge in the student's major discipline and some general knowledge of other core disciplines in the humanities and sciences. (Diamond 1997: A14)

 

According to this view, curriculum reform and review builds on the identification and assessment of these competencies (Diamond 1998). Only by stating explicitly what skills are sought as an end product can one systematically examine and revise what one teaches and how one teaches it in a particular course. In short, according to this view, the only way to improve curricula is to identify the skill-based outcomes, gauge progress toward those outcomes, and modify curricula accordingly.

 

Competencies for A Research Methods Course in Criminal Justice

The concern about specifying competencies and gauging progress toward the acquisition of them is relevant to criminal justice because criminal justice is at root a liberal arts curriculum. Flanagan (2000) identifies four core themes that deserve emphasis in criminal justice curricula and which, if emphasized, would bring about a more successful integration between criminal justice and other liberal arts disciplines. One of those themes touches on "a frame for analyzing the change process - at the individual, organizational, institutional, and community levels" (Flanagan 2000:8). And part of developing that framework is helping students develop "a keen appreciation of the properties and limits of the scientific method" (Flanagan 2000:9).

In our undergraduate criminal justice curriculum at Temple University we have two courses whose goals include deepening that appreciation: an introduction to research methods in criminal justice, and an introduction to statistics in criminal justice. The present article describes two efforts to assess competencies acquisition in the first of these two courses.

More specifically, in two different large sections of an undergraduate research methods course, a year apart, students completed both beginning and end of semester competencies assessments. The focus was on three areas of interest: use of web-based resources, interpretations of graphical data display, and ability to understand the research methods section of a journal article. Information on the first of these areas was gathered both at the beginning and end of the semester. Information on the latter two areas was gathered just at the end of the semester. In accord with the third perspective described above on liberal arts education, students also were asked to report on shifts in their interest in the topics covered.

 

More Focused Description of Outcomes

The first area addressed in these assessments was views about and utilization of web-based resources for completing course assignments in research methods. Using the world wide web (WWW or the web) is a key component of resource-based learning (MacDonald, Mason & Heap 1999). In mastering web resources for specific course purposes, two general issues are learning how to most efficiently access the materials sought, and learning how to evaluate those sources (Hammett 1999).

During the first of the two classes assessed (Spring 1999), several course components were introduced to help students gain these web-based competencies.(1) First, a large webliography was included on the course web site, and an entire class session was devoted to showing students how to use it. The webliography included keyword and usage instructions for each web source cited. The cited sources addressed topics related to drunk driving and guns. Two of the four short paper assignments in the course addressed this competency. In one assignment students found information on drunk driving fatalities, and interpreted the data presented (paper 1). In another assignment (paper 4) students were required to evaluate the qualities of a website (Hammett 1999). Students were given Hammett's (1999) article to read, and part of a class session was devoted to explaining her evaluation criteria.

The above web-based elements were dropped from the course in Spring 2000 for several reasons. First, in feedback cards after class sessions explaining web usage, a significant fraction of students claimed they already were familiar with the material in question. Second, although the assignment on evaluating websites produced many excellent papers, the management of the process, with groups of students picking websites for evaluation and me or the TA evaluating suitability, turned out to be extremely time consuming. Third, in completing the first paper assignment, a significant fraction of students, despite extremely detailed instructions, generated the wrong data table from the Fatal Accident Reporting System (FARS). Finally, in the Spring 2000 course the class questionnaire was expanded to include more outcome behaviors linked to drunk driving and guns, leading more class time to be spent with those data.

The outcomes for web usage, described more specifically below, looked at frequency of use, ease of use, and perceived utility of web sources. How students respond to such questions obviously is limited by resource constraints both on campus and at home. And between 1999 and 2000 several developments on campus addressed some of those constraints. Student use of computer resources generally increased, as shown by central help desk requests logged by the Vice President for Computing Services and by the number of email accounts. Campus enhancements also were taking place. A new computerized classroom building opened in Fall 1999, and the library released a major upgrade for access to electronic resources. In another shift, the university announced it was going out of the ISP business, limiting off-campus access to two hours a day, and encouraging students, faculty, and administration to select a separate dialup vendor for off-campus access. All of these changes suggest that the results seen in this outcome area with the first course may not be seen in the second.

The second outcome area assessed was comprehension of graphical data displays (Taylor 1994: Chapter 5). Throughout the semester students had been presented with a variety of line charts, bar charts, and scatterplots; in the accompanying presentations I emphasized focusing specifically on what those displays communicated, while simultaneously being aware of the limitations of such presentations. Therefore as an outcome I asked them to answer several questions after examining a clustered bar chart showing race and city effects on handgun ownership.

The third outcome area was in the interpretation of quasi-experimental designs (Taylor 1994: Chapter 13). Quasi-experiments have proved exceedingly important for criminal justice program evaluation and theory assessment, in a number of areas. Threats to internal validity, and in-class reviews of major quasi-experiments in each topic area were covered in depth in the classroom presentations and accompanying readings. Understanding the purposes behind and limitations of various quasi-experimental designs was one of the key goals of the methods course.

Finally, although not strictly a competency outcome, and in line with Freedman's description of liberal arts, the end-of-semester assessments asked about students' interest, and whether it had increased during the semester, either in the research methods topics addressed or in the problems investigated.

In both the 1999 and the 2000 end-of-semester assessments, the outcome variables were held constant. Therefore, by collecting data from two different classes a year apart, we can see to what extent the results obtained with the first class are replicated with the second class.

 

College and Departmental Context Surrounding Competencies

During the 1998-1999 Academic Year, in the College of Liberal Arts at Temple University, administrators encouraged faculty to more closely specify the competencies students were acquiring in their various courses. Under the guidance of the Dean, Carolyn Adams, and the Associate Dean for Graduate Studies, Chuck Weitz, the college prodded departments to initiate a process of stating as clearly as possible specific skills advanced by specific courses. In the Department of Criminal Justice at Temple University our Undergraduate Committee, headed by Associate Professor Pat Jenkins, polled faculty on specific competencies and collated that information. These competencies were then reduced to a smaller number, and cross-tabulated with specific courses, so faculty could see which particular competencies applied to which particular courses. At a faculty meeting in the spring, faculty reviewed the matrix of courses by competencies.

During the 1999-2000 academic year, interest in the department in competencies focused on one course: introduction to criminal justice. This course had been in the university "core curriculum" for some time. Students from around the university can take core courses to satisfy a number of different distributional requirements. In the latter half of the academic year a group of faculty collaborated with instructors in the introductory course to specify a number of detailed competencies, and to get information on them at the end of the semester.

 

Course Orientation

In my undergraduate research methods course, starting in Spring 1999, I revised my overall approach in an effort to increase student interest in the material. Up until that time I had generally followed the sequence of topics appearing in my research methods textbook (Taylor 1994): logic of scientific inquiry, ethics, graphical data display, benchmarks of scientific quality, and different research approaches. There were two key elements in the revised approach. First, students gained detailed information about two serious criminal justice problems: drunk driving, and guns.(2) I hoped that viewing the research methods topics in the context of a specific issue would facilitate learning abstract topics. The specific problems provide a common set of issues around which to structure discussions of major ideas in research methods: concepts, variables, operationalization, predictors, outcomes, propositions, hypotheses, measurement error, surveys, experiments and so on. In addition, I hoped learning about the problems would increase interest. If students were interested in the issue, they might be more motivated to learn the concepts.

Such an approach represents the reverse of what has been advocated for content courses. For content-focused courses, writers have suggested grounding them more closely in using and learning about research methods and statistics (Gulley 1982; Johnson & Steward 1997; Markham 1991). Here, I sought to illustrate research methods topics by exploring in depth two specific criminal justice problems.

By grounding research methods topics in two specific criminal justice problems, a spiral approach can be applied. The spiral approach has been widely documented in secondary school science education for at least three decades (Downing 1995; Murphy 1973). The central idea is that students are repeatedly exposed to the same topic or core ideas. As we proceeded through each topic, we reviewed basic ideas in research methods (constructs, hypotheses, measurement, independent and dependent variables, reliability and validity, types of data collection, and so on). When returning to a topic, students bring greater understanding of and insight into the surrounding complexities. In short, they learn something the first time around and learn more deeply the second time around. So, for example, students would be asked to generate a hypothesis about the causes of a particular outcome relevant to drunk driving; later in the semester they would be asked to generate a hypothesis for an outcome like gun ownership. In both exercises students could see the results of their hypothesis tests, based either on national survey data, or data from student questionnaires completed by classmates at the beginning of the semester (Lorenz & Bruton 1996).

 

Comment on Quasi-Experimental Design Selected

     Agreeing on what competencies are being learned is one thing. Verifying that those competencies are being acquired is another. The verification problem is made difficult by several factors. Verification of competencies acquisition can be empirically investigated through any number of different quasi-experimental research designs (Cook, T. D. & Campbell 1979). Each data collection design has unique strengths and weaknesses. The weaknesses or threats to internal validity interfere with our ability to conclude that a treatment caused a change on an outcome.

     For an ideal competencies assessment, I would administer a pretest at the beginning of the semester to a random half of the students in the course. The questions on the pretest would gauge how well students scored at the beginning of the semester on the competencies specific to the course. At the end of the semester a posttest, similar in content to the pretest, would be administered again to the same students. It also would be administered to the random half of students who had not taken the pretest.

      The "gain" shown by students on the competencies from the pretest to the posttest would represent the competencies acquired due to being in the class. As further checks against various threats to internal validity I also could do the following. (1) I could ask people how much they think they have learned in this class, how often they attended class, how much time they spent working on the course, and the grade they expected to get for the course. Hopefully those more faithfully attending class, putting in more hours per week on the course, and receiving higher grades, would have "gained" more in competencies through the semester. (2) There is the possibility that students did better on the posttest than on the pretest just because it was the second time they had seen the test -- an instrumentation effect. To control for this threat I could compare posttest scores of the random half of students who took the pretest with the posttest scores of the random half of students who did not take the pretest. If scores of those taking the test twice are higher than the scores of those taking it just once, then I have an instrumentation effect that I could then subtract out. (3) Even though this research design is pretty powerful, there are two threats to internal validity that it cannot rule out. One is maturation. Students may score higher on the posttest than on the pretest just because they are one semester more mature. Although maturation seems an unlikely explanation for the gain, especially when students in the class are so spread out on age (from 19 to 43), it is plausible. (4) More troublesome are other courses. Some students in the research methods class may be taking another course, in criminal justice or outside the department, that helps them acquire some of the same competencies that are targeted in the research methods class. Enough students in research methods may be taking that other course (or those other courses) that the group average score on competencies is significantly pushed up by the end of the semester, compared to where it started. (5) the number of students creates problems of low statistical power (Cohen 1977) if I am using a design that give the pretest to some and not others. When I teach research methods I usually have about 60-80 students. On any given class day about 35 - 55 students are present. If I were to divide those few students into two groups, the statistical analyses I could apply would have extremely low statistical power. That would limit my chances to find pretest vs. posttest differences, even if those differences "really" existed.

       Because of those limitations, and since this was the first time I had tried to assess competencies, I opted for a simpler design. In the Spring 1999 semester I constructed a posttest to be administered at the end of the semester, during the final examination period. I told students that completing the competencies assessment was part of the regular class requirements. Consequently, all registered students, save one mother-to-be who took the test ahead of time, showed up at the appointed time. This was a one-group, post-test only design.

For the set of outcomes addressing web usage, as part of a broader student survey, about three weeks into the semester students reported on their usage of web resources for course work for the preceding semester. These same items were repeated at the end of the course, in the post-test, asking about the current semester. So for this one outcome, we can compare beginning- and end-of-the-semester scores. In the 1999 analysis, no linked identifiers were available, so it was not possible to control at the individual level for beginning-of-the-semester scores. But in the 2000 analysis, linked identifiers were available, so it was possible to control for initial scores at the individual level.

STUDY 1: SPRING 1999

Method

For both this and the subsequent offering, the competencies assessment was administered during the final exam period so students would have no conflicts in attending and completing the protocol. Students were told that by completing the assessment, they would automatically receive full credit for that work, and that credit counted as ten percent of the final grade. They were told that their performance on the competencies would not be graded. It took students between 40 and 80 minutes to complete the assessment.

 

Respondents

Fifty-eight students completed the posttest on competencies. Thirty-two (56%) were female and 25 (43%) were male; gender was missing for one respondent. Ages ranged from 19 to 44; almost half the respondents (49%) were 21 or younger; 9% of the respondents were 30 or older. Fifteen of the respondents (26%) identified themselves as African-American. About half of the respondents reported living in Philadelphia (32; 55%).

 

Dependent Variables

Comprehension of Quasi-Experimental Design. Several questions gauged students' comprehension of a specific quasi-experimental research study. Students read the entire research design section from an article describing a national evaluation of a delinquency prevention program (GREAT: Gang Resistance Education and Training) (Esbensen & Osgood 1999). The evaluation design in the study was a "post-test only comparison between students who participated in the GREAT program the previous year and students who did not" (Esbensen & Osgood 1999:202). On my post-test I also included the section of the paper on the outcome analysis, describing the various factors researchers planned to control so as to increase experimental and control comparability. Seven questions about the study followed, all in true-false format:

1. In this evaluation of GREAT, students were randomly assigned to either participate in the program, or to not participate in the program (Real experiment).

2. In this evaluation of GREAT, students who had participated in GREAT were compared to a group of non-participating students who were almost exactly equivalent to the participating group (Equivalence).

3. The schools where the students answered the questionnaires were a representative random sample of public schools; thus the results obtained here can be justifiably generalized to the population of students in public schools in the continental US (Representativeness).

4. In this study the researchers "controlled for" several background factors. They did this to try and reduce pre-existing differences between the treatment and control groups (Controls).

5. In the study, if the researchers had found significant differences between treatment and control students on the outcomes, and they had NOT controlled for background differences between the two groups, these background differences would represent threats to internal validity (TIV) of the results; the background differences, rather than the program, might have been responsible for the differences on the outcomes between experimentals and controls (TIV).

6. The study relies on one and only one outcome variable (Outcome).

7. One of the major outcomes examined in this study is whether or not the respondent reports being a gang member (Membership).

 

Comprehension of a Bar Chart

Students were presented with a clustered bar chart, based on 1972-1996 national data from the General Social Survey, a national survey completed yearly. In class, earlier in the semester, as part of our review of work on guns, students had developed hypotheses about handgun ownership, and we had tested those hypotheses with these GSS data. We had reviewed the resulting bar graphs over a couple of class periods. Therefore, students were familiar with the dependent variable, the independent variables, and the data source.(3) The results were portrayed in the form of simple bar charts. I encouraged students to use a rough rule of thumb when interpreting the bar chart results: If two percentages differed by more than 5%, they should think of that difference as "significant."

When students' hypotheses about the predictors of handgun ownership failed to gain support from the GSS data, I encouraged them to think more carefully about the topic, and suggest an additional variable that might condition the expected relationship; i.e., what other factors might be involved? After they sent me their modified hypotheses, I generated clustered bar charts, showing the relationships of both independent variables to the outcome, the percentage owning a handgun. In class we had reviewed a couple of these two-way or clustered bar charts.

The particular bar graph shown in the post-test, however, had not previously been shown or discussed in class.(4) The students working on race had originally hypothesized that handgun ownership would be more prevalent among African-Americans compared to whites (Hispanics were coded as whites): "African-Americans are more likely to own a pistol because they are more likely to live in urban areas and therefore need pistols for protection," students hypothesized. When they viewed the national results they saw that the reverse was true; handgun ownership was more prevalent among whites than African-Americans. Results showed(5) that whereas 23% of whites reported handgun ownership, only 18% of African-Americans reported ownership. Students, in their discussions, reminded me of their interest in African-Americans in urban locations. So I constructed the follow-up chart, which looked at race and pistol ownership in big cities, with over 250,000 population, versus in other locations. The second inquiry turned out in the direction expected by the students; whereas 13% of African-Americans living in big cities reported pistol ownership, only 10% of whites reported ownership. Outside big cities, about 24% of whites reported ownership compared to about 20% of African-Americans. The figure was introduced as follows:

 

Take a look at Figure 1 that appears at the end of the assessment. It is based on data from the General Social Survey, 1972 through 1996 (added together). This is a nationally representative survey done yearly. The respondents represent a cross section of households in the United States. The dependent variable is pistol ownership - whether the person reports owning a pistol or handgun. The height of each bar shows you the proportion in that group reporting that they own a pistol. Respondents are classified in two ways. First, they are separated into two racial groups: African-American vs. not African-American. (In the chart "not African-American" is labeled WHITE.) Second, they are separated into those that live in big cities vs. those that do not. "Pistol" refers to any type of handgun.

 

The students were asked to answer the following true-false questions based on their examination of the chart:

1. A higher proportion of African-Americans report pistol ownership in the big cities than outside the big cities.

2. A higher proportion of Whites report pistol ownership in the big cities than outside the big cities.

3. Among those living outside the big cities, a slightly higher proportion of Whites than African-Americans report pistol ownership.

4. Among those living in the big cities, a slightly higher proportion of Whites than African-Americans report pistol ownership.

5. The effects of race on pistol ownership are reversed when we switch from those living outside the big cities to those living in the big cities.

6. Your colleagues who were hypothesizing about these data suggested that African-Americans were more likely to own handguns in the big city than whites, because they lived in more dangerous neighborhoods than did the Whites. It looks as if these data support that hypothesis.

 

Computer Usage

For computer usage, I wanted to get at issues of comfort, ease of access, and perceived utility of web sources. To get at perceived utility and comfort for criminal justice course-related research, I asked about current sentiments with the following items:

In general, as a research tool for helping you with your criminal justice courses, how useful do you think the web is? (0) Not at all useful; (1) Somewhat useful; (2) Useful; (3) Very useful; and (4) Do not have enough experience to judge, coded as missing (Utility).

In general, when you are using the web as a research tool for helping you with your criminal justice courses, how comfortable do you feel? (0) Extremely uncomfortable; (1) Uncomfortable; (2) Somewhat uncomfortable; (3) Somewhat comfortable; (4) Comfortable; (5) Extremely comfortable; and (6) Do not have enough experience to judge, coded as missing (Comfort).

For access, I asked separately about access on campus and off campus, since those are distinct computing environments. For on campus, I asked separately about going to the library, and using web access. One item asked how often they tried to gain access, and the second asked how much difficulty they experienced when making these attempts. The pretest asked about usage during the prior semester; the post-test item asked about the current semester.

The items for on-campus were as follows:

LAST/THIS semester how many times did you try to access the web from a computer lab ON CAMPUS, for the purpose of gathering research information from the web for a class assignment FOR ANY CLASS/THIS CLASS. The assignment may or may not have REQUIRED you to use the web. (DO NOT count visits to the lab to use computers for word processing, email, checking unrelated news, or playing games.) (0) Never; (1) Once a semester; (2) A couple times a semester; (3) A few times a semester (about 3 - 5); (4) More than a few times, but less than once a week (about 6 - 10 times); (5) About once a week on average (about 11 - 13 times); (6) More than once a week, on average. (Access-On Campus-Frequency). The same item was repeated asking about off-campus usage (Access-Off Campus-Frequency).

LAST/THIS semester, how many times did you go to A COMPUTER LAB ON CAMPUS and have as one purpose of that trip to locate one or more research articles in a criminal justice journal using an on-line search? (0) Never; (1) Once a semester; (2) A couple times a semester; (3) A few times a semester (about 3 - 5); (4) More than a few times, but less than once a week (about 6 - 10 times); (5) About once a week on average (about 11 - 13 times); (6) More than once a week, on average.(6) (Online search-On Campus-Frequency).

Students who were not enrolled at Temple the previous semester were allowed to report that, and were coded as missing.

 

RESULTS

Interpreting Quasi-Experimental Research Design

The questions they were asked, and the percent getting the correct answers, are shown in Table 1. All these questions were true/false items. The percent correct was calculated with missing data included in the total.

- Insert Table 1 about here -

Across the seven true-false items, respondents averaged 61% correct (median=66%); the percent correct per item ranged from a low of 38% to a high of 74%.(7)

Do these scores represent "success" or not? The z test for a difference in proportions (Blalock 1979: 195-199) was used to see if the results were significantly better than chance guessing, wherein students would have averaged about 50% correct. Results suggested that the students' average of 61% correct, across the seven items, was significantly better than chance (z = 1.675, p < .05). So student comprehension exceeds random guessing, but certainly could be much higher.
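To illustrate the arithmetic behind this test, a minimal sketch follows. It assumes, consistent with the reported z of 1.675, that the class mean proportion correct is compared against a chance value of .50 with n = 58 students; the exact formula Blalock (1979) presents may differ in detail.

```python
from math import sqrt

def z_test_proportion(p_hat, p_null, n):
    """One-sample z test: is an observed proportion better than a chance value?"""
    se = sqrt(p_null * (1 - p_null) / n)  # standard error under the null hypothesis
    return (p_hat - p_null) / se

# Spring 1999, quasi-experiment items: 61% correct on average, 58 students
z = z_test_proportion(0.61, 0.50, 58)
print(round(z, 3))  # ~1.675, p < .05 one-tailed, matching the value reported above
```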

 

Bar Chart Comprehension

Again, as with the items on the quasi-experimental evaluation of GREAT, comprehension of the clustered bar chart was assessed with true-false items. The questions, and the percent correct responses, appear in Table 2. Across the six items on the clustered bar graph, the proportion supplying the correct answer ranged from 62% to 81%, and averaged 70.7% (median=70.7% also). Treating 50% correct - the level students would have attained by randomly guessing at these items - as the null hypothesis, the z test of the difference in proportions rejects that null (z=4.11; p < .001). Students did significantly better than random guessing on these items.

 

- Insert Table 2 -

Computer Usage

Evaluation of the post-test items on computer usage is facilitated because pretest scores were available from the Time 1 questionnaire. Of course, I cannot attribute changes that appear solely to this course, given other changes that also were taking place in the students' lives, and other courses they were taking. The analyses employed were one-way analyses of variance (ANOVAs) with pretest vs. posttest as the independent variable.(8) I also checked to be sure that homogeneity of variance existed between pretest and posttest scores. It did except in the instances noted. Table 3 shows results from the ANOVAs and the means and standard deviations at the pretest and the posttest.
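As an illustration of the analysis strategy just described, the sketch below shows how a two-group one-way ANOVA and a homogeneity-of-variance check could be run. The variable names and values are hypothetical placeholders, not the actual class data.

```python
# Hypothetical sketch of the pretest-vs.-posttest ANOVA described above.
from scipy import stats

pretest = [2, 1, 3, 2, 0, 4, 2, 3, 1, 2]    # e.g., web-utility ratings at Time 1 (invented)
posttest = [3, 2, 3, 4, 2, 4, 3, 3, 2, 3]   # same item at the end of the semester (invented)

# Levene's test: are the variances homogeneous across the two administrations?
lev_stat, lev_p = stats.levene(pretest, posttest)

# One-way ANOVA with pretest vs. posttest as the two-level factor;
# with only two groups this is equivalent to an independent-samples t test.
f_stat, p_value = stats.f_oneway(pretest, posttest)

print(f"Levene p = {lev_p:.3f}; F = {f_stat:.2f}, p = {p_value:.3f}")
```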

Utility. At the beginning of the semester, students rated the web as "useful" for doing research in criminal justice courses (mean=2.00). By the end of the semester that rating had shifted up to almost halfway between "useful" and "very useful" (mean=2.41). That shift was significant.

Comfort. But despite seeing the web as a more useful research tool, students' comfort with the web as a research tool did not increase. Students started out scoring between "somewhat comfortable" and "comfortable" and remained there at the end of the semester. It appears that students already had enough exposure to web research that the additional exposure gained in the class was only marginally useful. Based on this pattern of findings, presentation of web tools per se was de-emphasized in the second (Spring 2000) class offering.

Frequency of use. On-campus use shifted up significantly (p < .001), going from between once a semester and a couple times a semester (mean=1.76) to between a few times (3-5) and more than a few times but less than once a week (6-10) (mean=3.21).(9) Off-campus use shifted up marginally (p < .10) to almost a few times during the semester (pretest mean=2.22; posttest mean=2.98). Temple students are not well off; many do not have computers at home. About 1/3 of respondents reported "never" using a computer off campus for a research assignment at either the pretest or the posttest. Limited access may help explain why scores on this question did not increase more dramatically.

 

- Insert Table 3 -

 

STUDY 2: SPRING 2000

A replication of the first study was completed a year later to see if the pattern of results observed could be obtained a second time. External validity, as it always must be, is an empirical matter (Taylor 1994: 164). Only one assessment leaves open the question of whether the results could be replicated with another class. Of course, replication with another class at the same university would still leave open the question of replication elsewhere, with a different student population. In addition, it is not clear if the skills demonstrated at the end of the semester were specific to the particular mix of course content offered. Given student comments offered during the Spring 1999 course, if less class time and fewer assignments were devoted to web sources and web evaluation, would students view these sources less positively?

The following differences were planned between the Spring 1999 and the Spring 2000 offerings of the research methods course. Two changes in content and assignments reflected a de-emphasis on explicit instruction about web sources. The first paper assignment for the Spring 1999 course required students to query the Fatal Accident Reporting System database, generate a table, and report on the results. Numerous students queried the database in a way that resulted in the wrong table. Since the main purpose of the assignment had been to interpret the table, in Spring 2000 I generated a data table for them from the FARS data, and required simply that they interpret the table and link their discussion to our main text on drunk driving (Jacobs 1989: 32-33), and his discussion of the problem with accident reporting. Second, the last paper assignment for Spring 1999 requested that students locate and evaluate a web site devoted to gun-related issues. In addition, I had reviewed guidelines for evaluating websites (Hammett 1999). Although students found this exercise helpful, and a number of intriguing papers resulted (see papers presented online at the course site), that assignment precluded assigning a paper on quasi-experimentation. In early April 2000 Maryland passed a widely-hailed gun control law. Given the publicity surrounding that legislation, and its close connection with the second problem studied in the course, I assigned a paper in which students designed a quasi-experiment to evaluate specific program features in the legislation.

Another change was the data source used to illustrate gun ownership issues. For the latter, in Spring 1999, as described above, students evaluated data (aggregated) from the 1972-1996 General Social Surveys. As in-class work groups reported out on their exercises in which they interpreted these data, it became apparent that their skepticism toward answers to the outcome question -- do you own a pistol? -- was a major stumbling block. Many remained extremely doubtful that most handgun owners would answer this item truthfully. Class discussions about these doubts, although hopefully illuminating the topic of response bias in surveys, took time away from generating hypotheses to predict gun ownership and interpreting results. Since the Spring 2000 class was slightly larger than the previous year's, I opted instead to use the data from the first class assessment. Rather than focusing on personal handgun ownership, the focus shifted to an item that asked "Does any member of your immediate family currently own one or more guns of any type (handgun, shotgun, or long rifle)?" Of the 63 students responding to this item at the initial assessment, 33 (52.4 percent) reported family gun ownership. This item corresponded fairly closely to the concept of household gun possession, a topic discussed at length in the main gun reading (Cook, P. & Ludwig 1995) used in Spring 2000. Since the students themselves generated answers to this question, it was hoped they would be less skeptical.

A final change from 1999 to 2000 was the main reading used on gun topics. I substituted Cook and Ludwig's Guns in America for Kates and Kleck's The Great American Gun Debate, since the latter was out of print by Spring 2000 (Cook, P. & Ludwig 1995; Kates & Kleck 1997).

These changes in course assignments and readings could affect how students scored on competency-related outcomes, or reported levels of interest. Stated differently, the 2000 assessment was not an effort to complete a simple replication, with different students, of the 1999 results. Rather, the point was to see if the results were robust enough so that comparable findings emerged given these course differences.

For the 2000 initial assessment, students used a private ID code - the last six digits of their SSN - in the class survey completed about three weeks into the semester. The same code was used at the posttest. Thus for items on computer access, web use, and web views, the repeated measures analysis is fully appropriate.

 

Respondents

Fifty-eight students completed both the initial and follow-up assessments. Thirty-three out of 58 were women (56.9 percent). At the initial assessment, 31 of the 65 respondents (47.7 percent) were women. Of the 58 completing both assessments, 18 (31 percent) were African-American and 29 (50 percent) were non-Caucasian. This compared with 23 out of 65 (35.4 percent) African-Americans, and 31 out of 65 (47.7 percent) non-Caucasians, completing just the initial assessment. Of those completing both assessments, 34 of the 58 (58.6 percent) lived in Philadelphia. This compared with 41 out of 65 (63.1 percent) living in Philadelphia among those completing just the initial assessment. Age ranged from 18 to 51 years. Slightly more than half the 59 respondents (57.6 percent) were aged 21 or younger at the time of the initial assessment.

If we focus on those completing both assessments, the proportion women, proportion African-American and the proportion living in Philadelphia were about the same as the proportions in the Spring 1999 class. The age distribution was slightly different in 2000. There were slightly more younger students (21 and under), and the highest age was higher, at the beginning of the semester. But in general, the two sets of respondents look roughly comparable. Further, those completing just the initial assessment, and those completing both assessments, look quite similar.

 

Dependent Variables

The dependent variables were the same as used in the 1999 end-of-semester assessment. The same quasi-experiment was used, and the same clustered bar chart.

 

RESULTS

Comprehension of Quasi-Experiment

Table 4 shows the results of students' answers after reading the method section of the same gang-reduction quasi-experiment as was used in Spring 1999, based on the roughly 70 students who completed the post-test. The percentage answering correctly for the various items ranged from a low of 39% to a high of 82%; the proportion getting the correct answer, averaged across all the items, was 68%, and by the z test this performance was superior to chance (p < .001), and also appeared to be slightly higher than in Spring 1999. Paralleling the 1999 results, the two questions with the lowest percentage of correct answers in 2000 were the same as in 1999: one item asked if the control group was comparable, and another asked if the sample was a representative, random sample of public schools.

 

- Insert Table 4 about here -

 

So Spring 2000 students, reading the same article, and being asked the same questions, answered significantly better than chance. The overall comprehension level appears roughly equivalent to the level among Spring 1999 students.

 

Bar Chart Comprehension

The results with the Spring 2000 students on bar chart comprehension were virtually identical to the results from a year earlier. Averaged across all the items, about 70% of the students provided a correct answer, and this performance was significantly better than chance (z=4.37; p < .001). This comparable level of performance should perhaps be considered more impressive than the results from the Spring 1999 class, since the Spring 2000 class did not work directly with the GSS data on which the clustered bar chart was based.

 

- Insert Table 5 about here -

 

Computer Usage

The mean differences between pre- and post-test answers on computer utilization, reflecting differences between the Fall 1999 semester, the period asked about in the pretest, and the Spring 2000 semester, the period asked about in the posttest, appear in Table 6. There were no significant differences from pretest to posttest on any of the individual items.

There are probably two sets of factors responsible for the lack of change in the 2000 class on these items. First, as mentioned above, in this class there were fewer assignments specifically devoted to finding, using and evaluating web sources, and less class time devoted to the topic.

In addition to differences in course assignments and selection of class presentation material, the changes, described above, taking place on the campus itself may have played a role. Looking at the 2000 results alongside the 1999 results from a year earlier shows that the students in this class have had comparable usage patterns, comfort levels, and views about utility since the end of the Spring 1999 semester (Figure 1). So both the class differences, and the changes on campus, may explain the lack of a shift for the 2000 students. Of course, it also is possible that there is a cohort effect operating. As better prepared, more computer literate students enter as first-year and transfer students, we are less likely to see an impact of any one course on their computer web searching uses or attitudes.

One final point about these results: for the 2000 class there were no differences by gender. It was not the case that male as compared to female students were more comfortable using the computers for websearching on course topics, or that they used the resources more frequently for these purposes.

 

CLASSES 1 & 2: EVALUATIVE REACTIONS, ASSESSMENTS OF INTEREST

The competencies assessments for both classes included a series of questions asking the students to react evaluatively to a number of features in the course, to decide if my presentations had been biased or unbiased on the two topics of interest, and to tell me if their interest had been elevated or not during the course. For all these items students used a 6-point Likert scale with answers ranging from "strongly disagree" (1) to "strongly agree" (6), with no midpoint provided.

I address first the question of appreciation and increased interest in the area. In these courses, the problems of drunk driving and guns were used as vehicles to convey specific research methods topics. But a beneficial side effect would be that students became more interested in these social issues, and reported that they had a deeper understanding of them. Such a shift would be congruent with Freedman's discussion of the purposes of liberal arts education.

Students in both classes agreed slightly that at class end they had a better appreciation of the problem of drunk driving (means = 4.44 and 4.76 for 1999 and 2000), between "agree slightly" (4) and "agree" (5). For the problem of guns the average answers were roughly comparable; the means were 4.25 for 1999 and 4.49 for 2000. So it appears that in both courses students on average reported that their insight into these pressing problems had been somewhat enhanced.

Students also reported slight increases in their interest in the problems areas themselves. When asked if they were "more interested" in either guns or drunk driving than they were at the beginning of the semester, responses for the gun issue were above "agree slightly" for both classes (mean = 4.19 for 1999, 4.43 for 2000). For the drunk driving problem, 1999 students' average response to this item fell just below "slightly agree" (mean=3.86) and 2000 students' average response fell just above "slightly agree" (mean=4.14). But for three out of these four interest items, the means were at least two standard errors above a response indicating no change in interest.(10)

Another way to get at students' reactions to the two problems used as the vehicle to discuss research methods was to ask them about the amount of time spent on the topic. For both years, and for both topics, students disagreed that "in the course, we spent too much time learning about" drunk driving or guns. The drunk driving means were 2.86 (1999) and 2.73 (2000), between "disagree" (2) and "disagree slightly" (3). The means for guns were 2.56 in 1999 and 2.46 in 2000, about halfway between "disagree" and "disagree slightly." So students in neither semester judged that the amount of time devoted to these topics was excessive.

A final way to get at students' thoughts about the two social problems and the content related to them was to ask students if they thought "the course would have been a lot more interesting if the professor had just concentrated on research methods and NOT spent so much time teaching us about guns and drunk driving." For both semesters the mean response was around "disagree" (2) (1999: 2.24; 2000: 2.20). So students, on average, did not feel that the problem content crowded out fundamental research method issues.

In sum, it appears that students in both semesters expressed increased appreciation of the complexities of these two problems, expressed slightly increased interest compared to the beginning of the semester, and did not feel that the coverage of the problems "shortchanged" them on core topics in research methods.

 

DISCUSSION

The present study reports on two assessments of competencies and interest in two offerings of an undergraduate, problem-focused introduction to criminal justice research methods. The results show that: during the first but not the second offering students reported increased use of and comfort with using web-based research tools; during both offerings students interpreted a complex graphical display of data, and a quasi-experimental research design, at levels significantly better than chance; and students generally reported that their interest in the two problems assessed - drunk driving and guns - increased as a result of the course. Although some changes in content and assignment were made between the first and second offering, the results, except for those in the area of web resources, were quite parallel across both offerings. The discussion addresses the limitations of the present investigation, ways that outcomes can be expanded, procedures for implementing more effective quasi-experimental assessment designs, and the broader question of tradeoffs between enhancing interest and honing competencies in undergraduate criminal justice courses.

The limitations surrounding the current investigation are numerous. Most importantly, pretest assessments were not completed in two of the three outcome areas. For interpretation of graphical displays and interpretation of quasi-experiments, then, improvement during the course itself was not assessed. All we know is how well students did at the end, and that they did better than random guessing. In the web utilization area, by contrast, pretest and posttest measures were gathered, and improvement was seen over the course of the first offering but not the second. The lack of change during the second offering may have been linked to changes in the campus computing environment, cohort changes, a de-emphasis in the second offering on web-based research tools, or some combination thereof.

The second largest limitation, even for changes surfacing during the course of an offering, arises from the potential internal validity threats of history and maturation (Taylor 1994). As discussed earlier, because a more rigorous quasi-experimental design was not implemented, the results seen generally may not have been due to the course per se, but could be attributed to these two other sources. Stronger quasi-experimental designs can help with this problem, but may create problems of low statistical power.

One way around the low statistical power emerging from a stronger quasi-experimental design would be to complete pretests and posttests in more than one course offering, using the same instrument, and analyze results with multilevel models (Kreft & de Leeuw 1998; Snijders & Bosker 1999). With multilevel models, data from several assessments can be joined while simultaneously attending to the grouped nature of the data and individual starting scores. The optimal assessment design for implementation in each class would be a pretest-posttest design combined with a posttest-only design, with students randomly assigned to receive or not receive a pretest, and with several quasi-experiments from different classes combined into a multilevel design. At a minimum, such a design can rule out instrumentation or testing effects.
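To make the suggested analysis concrete, the sketch below shows one way such a combined design might be analyzed. It is illustrative only: the file name (assessments.csv), the column names (score, pretested, offering), and the use of Python's statsmodels package are assumptions for the sketch, not part of the assessments reported here.

    # A minimal sketch, assuming a hypothetical long-format file with one row
    # per student: "score" (posttest score), "pretested" (1 = randomly assigned
    # a pretest, 0 = posttest only), and "offering" (which course offering,
    # i.e., which quasi-experiment, the student sat in).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("assessments.csv")  # hypothetical combined file from several offerings

    # A random intercept for each course offering respects the grouped nature
    # of the data; the fixed effect for "pretested" tests whether simply taking
    # a pretest (a testing/instrumentation effect) shifted posttest scores.
    model = smf.mixedlm("score ~ pretested", data=df, groups=df["offering"])
    result = model.fit()
    print(result.summary())

With only a handful of offerings the number of level-two units is small, so the offering-level variance would be estimated imprecisely; the sketch is meant only to show how the grouped data and the random assignment of the pretest enter a single model.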

It is important that further assessments of an undergraduate criminal justice research methods course expand the outcomes examined. One of the goals of such a course is to equip students to read research articles in journals. The current assessments focused on interpreting the research methods section. Future assessments should have students assess the methods used in different types of articles, and also interpret the results emerging from different types of articles. A potential problem with such an expansion is that the time it takes students to read and decode entire journal articles makes it difficult to shoehorn such comprehension assessments into an in-class assessment exercise. Comprehension assessments of journal articles could be complemented by assessments of detailed research reports appearing in newspapers.

In closing, the tension between an interest-enhancement view of undergraduate liberal arts education, as articulated by Freedman and others, and a competencies-acquisition focus, as articulated by Diamond and others, deserves recognition. In the course described here I have tried to use one goal - enhancing students' interest by providing detail on specific problems that affect them - to support the second goal - gaining competency in assessing criminal justice research and research results. But one can envision how these different goals move curriculum revision, at the course or the program level, in different directions. If interest enhancement is the primary goal, then reform moves toward describing the issues of interest, and the surrounding policy, practice, and theory complexities, in sufficient detail that students are inevitably drawn in. Particularly appropriate here might be problems on which students already have fairly set views (the death penalty, prison overcrowding), and that are described simplistically in public discourse. The tactic to pursue is issue elaboration, with students reconsidering their opinions and suggested solutions as the issue unfolds.

By contrast, if acquisition of competencies is the primary goal, then course or curriculum revision moves away from in-depth issue exploration and toward practicing the criterion skills. The focus here might be exercises across a wide range of topic areas that ask students to interpret methods and results. In the case of an introduction to research methods, after discussing quasi-experimentation, for example, one would ask students to read methods sections of articles addressing a wide array of concerns; similarly for graphical data display. The strategy would be to practice to the criterion, across a broad enough range of settings, so that students increase their proficiency. Decoding and interpretation become key, rather than reflection leading to enhanced interest.

Future competencies assessments will hopefully explore this potential tension. The tradeoff discussed here may not be inevitable in the context of curriculum reform or course revision. But if there is a tradeoff, future work will hopefully clarify how it is enlarged or minimized by course-specific or program-specific factors.

References

Bennett, M. J. 1994 "The Law that worked." Educational Record fall:6-14.

Bowden, J. and F. Marton 1998 The University of Learning: Beyond Quality and Competencies in Higher Education. London: Kogan Page.

Cohen, J. 1977 Statistical power analysis for the behavioral sciences. Second ed. New York: Academic Press.

Cook, P. and J. Ludwig 1995 Guns in America. Washington, DC: Police Foundation.

Cook, T. D. and D. T. Campbell 1979 Quasi-experimentation. Chicago: Rand-McNally.

Diamond, R. M. 1997 "Curriculum reform needed if students are to master core skills." The Chronicle of Higher Education: A14.

Diamond, R. M. 1998 Designing and assessing courses and curricula: A Practical Guide. second ed. San Francisco: Jossey-Bass.

Downing, C. 1995 "Science in a S.A.C.K.: A Spiral approach to content knowledge." Science Teacher 62:46-48.

Drew, S. 1998 Key Skills in Higher Education: Background and Rationale. London: SEDA.

Dukes, R. L. and G. Victoria 1989 "The Effects of gender, status and effective teaching on the evaluation of college instruction." Teaching Sociology 17:447-457.

Esbensen, F. A. and D. W. Osgood 1999 "Gang resistance education and training (GREAT): Results from the national evaluation." Journal of Research in Crime and Delinquency 36:194-225.

Flanagan, T. J. 2000 "Liberal education and the criminal justice major." Journal of Criminal Justice Education 11:1-14.

Freedman, J. O. 1996 Idealism and liberal education. Ann Arbor: University of Michigan Press.

Greenwald, A 1997 American Psychologist

Gulley, W. 1982 "Integrating theory, methods, and statistics." Teaching Sociology 10:65-70.

Hamilton, L. L. 1980 "Grades, class size and faculty status predict teaching evaluations." Teaching Sociology 8:47-62.

Hammett, P. 1999 "Teaching tools for evaluating world wide web sources." Teaching Sociology 27:31-37.

Hedley, R. A. 1978 "Measurement: Social research strategies and their relevance to grading." Teaching Sociology 6:21-29.

Johnson, M. A. and G. Steward 1997 "Integrating research methods into substantive courses: A Course project to identify social backgrounds of political elites." Teaching Sociology 25:168-175.

Kates, D. and G. Kleck 1997 The Great American Gun Debate. San Francisco: Pacific Research Institute for Public Policy.

Katz, M. 1975 Class, bureaucracy, and Schools: The Illusion of Educational Change in America. New York: Norton.

Kerr, C. 1994 "Expanding access and changing missions: The Federal role in U.S. higher education." Educational Record fall:27-31.

Kreft, I. and J. de Leeuw 1998 Introducing multilevel modeling. Thousand Oaks: Sage.

Lorenz, F. O. and B. T. Bruton 1996 "Experiments in surveys: Linking mass class questionnaires to introductory research methods." Teaching Sociology 24:264-271.

MacDonald, J., R. Mason, and N. Heap 1999 "Refining assessment for resource based learning." Assessment and Evaluation in Higher Education 24:345-355.

Mann, S. 1999 "Review of "Key Skills in Higher Education" (1998), by Sue Drew." Assessment and Evaluation in Higher Education 24:439-440.

Markham, W. T. 1991 "Research methods in the introductory course: To be or not to be?" Teaching Sociology 19:464-471.

Murphy, P. D. 1973 "Modules for consumer education: A Spiral-process approach to curriculum development." American Vocational Journal 48:7,52.

Patrick, J. and R. M. Smart 1998 "An Empirical evaluation of teacher effectiveness: The Emergence of three critical factors." Assessment and Evaluation in Higher Education 23:165-179.

Snijders, T. A. B. and R. J. Bosker 1999 Multilevel analysis: An Introduction to basic and advanced multilevel modeling. Thousand Oaks: Sage.

Taylor, R. B. 1994 Research methods in criminal justice. New York: McGraw Hill.

Tieman, C. R. and B. Rankin-Ullock 1985 "Student evaluations of teachers." Teaching Sociology 12:177-191.

 

Table 1

Comprehension of Quasi-Experiment Methods Section: Spring 1999 Students

Question               Percent (n) with correct answer
Real experiment        63.8 (37/58) (false)
Equivalence            43.1 (25/58) (false)
Representativeness     37.9 (22/58) (false)
Controls               65.5 (38/58) (true)
TIV                    72.4 (42/58) (true)
Outcome                74.1 (43/58) (false)
Membership             70.7 (41/58) (true)

Across all seven items
Average percent correct: 61%
Median percent correct: 66%
Comparison against chance: z = 1.68; p < .05

Note. For wording of specific items, see text.

Table 2

Comprehension of Clustered Bar Chart: Spring 1999 Students

Item (each followed by % correct (n) and the correct answer)
A higher proportion of African-Americans report pistol ownership in the big cities than outside the big cities.  69% (40/58) (false)
A higher proportion of Whites report pistol ownership in the big cities than outside the big cities.  72.4% (42/58) (false)
Among those living outside the big cities, a slightly higher proportion of Whites than African-Americans report pistol ownership.  74.1% (43/58) (true)
Among those living in the big cities, a slightly higher proportion of Whites than African-Americans report pistol ownership.  62.1% (36/58) (false)
The effects of race on pistol ownership are reversed when we switch from those living outside the big cities to those living in the big cities.  81% (47/58) (true)
Your colleagues who were hypothesizing about these data suggested that African-Americans were more likely to own handguns in the big city than whites, because they lived in more dangerous neighborhoods than did the Whites. It looks as if these data support that hypothesis.  65.5% (38/58) (true)

Across all six items
Average percent correct: 70.7%
Median percent correct: 70.7%
Comparison against chance: z = 4.11; p < .001

    

Table 3

Changes in Uses of Web Resources: Spring 1999 Students

Item                                 Pretest mean (sd)   Posttest mean (sd)   F
Utility                              2.00 (.88)          2.41 (.76)           F(1, 101) = 6.45; p < .05
Comfort                              3.34 (1.43)         3.40 (1.26)          F < 1; ns
Access-On Campus-Frequency           1.76 (2.16)         3.21 (2.01)          F(1, 107) = 13.05; p < .001
Access-Off Campus-Frequency          2.22 (2.25)         2.98 (2.28)          F(1, 106) = 3.05; p < .10
Online Search-On Campus-Frequency    0.84 (1.49)         1.83 (1.55)          F(1, 106) = 11.33; p < .01

 

Note. For question wording, see text. Pretest refers to the previous semester, all criminal justice courses. Posttest refers to the semester in question for the course in question. For utility the response categories were: (0) Not at all useful; (1) Somewhat useful; (2) Useful; (3) Very useful; and (4) Do not have enough experience to judge, coded as missing. For comfort the response categories were: (0) Extremely uncomfortable; (1) Uncomfortable; (2) Somewhat uncomfortable; (3) Somewhat comfortable; (4) Comfortable; (5) Extremely comfortable; and (6) Do not have enough experience to judge, coded as missing. For all three frequency items the response format was: (0) Never; (1) Once a semester; (2) A couple times a semester; (3) A few times a semester (about 3 - 5); (4) More than a few times, but less than once a week (about 6 - 10 times); (5) About once a week on average (about 11 - 13 times); (6) More than once a week, on average.

Table 4

Comprehension of Quasi-Experiment Methods Section: Spring 2000 Students

 

Question               % correct (n/N) (correct answer)
Real experiment        77.8 (56/72) (false)
Equivalence            40.3 (29/72) (false)
Representativeness     39.4 (28/71) (false)
Controls               81.9 (59/72) (true)
TIV                    79.2 (57/72) (true)
Outcome                73.2 (52/71) (false)
Membership             80.6 (58/72) (true)

Across all seven items
Median percent correct: 77.8%
Average percent correct: 67.5%
Comparison against chance: z = 3.17, p < .001




Table 5

Comprehension of Clustered Bar Chart: Spring 2000 Students

Item (each followed by % correct (n) and the correct answer)
A higher proportion of African-Americans report pistol ownership in the big cities than outside the big cities.  69.4% (50/72) (false)
A higher proportion of Whites report pistol ownership in the big cities than outside the big cities.  70.8% (51/72) (false)
Among those living outside the big cities, a slightly higher proportion of Whites than African-Americans report pistol ownership.  63.9% (46/72) (true)
Among those living in the big cities, a slightly higher proportion of Whites than African-Americans report pistol ownership.  68.1% (49/72) (false)
The effects of race on pistol ownership are reversed when we switch from those living outside the big cities to those living in the big cities.  93% (66/71) (true)
Your colleagues who were hypothesizing about these data suggested that African-Americans were more likely to own handguns in the big city than whites, because they lived in more dangerous neighborhoods than did the Whites. It looks as if these data support that hypothesis.  72.2% (52/72) (true)

Across all six items
Average percent correct: 72.9%
Median percent correct: 70.1%
Comparison against chance: z = 4.37; p < .001

    

Table 6

 

Pre- vs. Post-test means for Reported Web-Use: Spring 2000

 

Item                                 Pretest mean (sd)   Posttest mean (sd)   F
Utility                              2.64 (.97)          2.47 (.71)           F(1, 56) = 1.08; ns
Comfort                              3.46 (1.56)         3.72 (.93)           F(1, 56) = 1.14; ns
Access-On Campus-Frequency           3.02 (2.83)         2.88 (1.95)          F(1, 56) < 1; ns
Access-Off Campus-Frequency          3.48 (2.58)         3.55 (1.82)          F(1, 56) < 1; ns
Online Search-On Campus-Frequency    1.93 (2.96)         1.76 (1.65)          F(1, 56) < 1; ns


Note. For question wording, see text. Pretest refers to the previous semester, all criminal justice courses. Posttest refers to the semester in question for the course in question. For utility the response categories were: (0) Not at all useful; (1) Somewhat useful; (2) Useful; (3) Very useful; and (4) Do not have enough experience to judge, coded as missing. For comfort the response categories were: (0) Extremely uncomfortable; (1) Uncomfortable; (2) Somewhat uncomfortable; (3) Somewhat comfortable; (4) Comfortable; (5) Extremely comfortable; and (6) Do not have enough experience to judge, coded as missing. For all three frequency items the response format was: (0) Never; (1) Once a semester; (2) A couple times a semester; (3) A few times a semester (about 3 - 5); (4) More than a few times, but less than once a week (about 6 - 10 times); (5) About once a week on average (about 11 - 13 times); (6) More than once a week, on average.

Note. For all items, comparing the pre- vs. post-test results, Multivariate F(5,52) < 1; ns. Similarly, the multivariate F's for both gender and the interaction of gender and pre- vs. post-test were nonsignificant (both multivariate F's < 1).

Table 7

 

Evaluative Reactions, and Assessments of Interest at Course End:

Spring 1999 and Spring 2000

 


Item (each followed by Spring 1999 mean (se), n, then Spring 2000 mean (se), n)

In this course we spent too much time learning about research resources on the worldwide web.  Spring 1999: 2.70 (.14), n = 57; Spring 2000: 2.47 (.11), n = 72
In this course we spent too much time learning about drunk driving.  Spring 1999: 2.86 (.15), n = 58; Spring 2000: 2.73 (.14), n = 70
In this course we spent too much time learning about guns.  Spring 1999: 2.56 (.16), n = 57; Spring 2000: 2.46 (.12), n = 70
I think this course would have been a lot more interesting if the professor had just concentrated on research methods, and NOT spent so much time teaching us about guns and drunk driving.  Spring 1999: 2.24 (.19), n = 55; Spring 2000: 2.20 (.16), n = 66
In this course I have learned a lot about the logic of scientific inquiry, including topics like independent variables, dependent variables, and testing hypotheses.  Spring 1999: 4.47 (.16), n = 58; Spring 2000: 4.60 (.13), n = 72
In this course I have learned a lot about different types of research tools like surveys and quasi-experiments.  Spring 1999: 4.47 (.14), n = 58; Spring 2000: 4.54 (.11), n = 72
As a result of this course, I have a better appreciation of the problem of drunk driving from a social science perspective.  Spring 1999: 4.44 (.14), n = 57; Spring 2000: 4.76 (.12), n = 71
I did NOT like having to go to the course website all the time to find documents for this course.  Spring 1999: 2.53 (.18), n = 57; Spring 2000: 2.59 (.14), n = 69
As a result of this course, I have a better appreciation of the costs and benefits of guns from a social science perspective.  Spring 1999: 4.25 (.14), n = 57; Spring 2000: 4.49 (.14), n = 72
The professor was UNable to present information about drunk driving effectively because his biases on the topic were so strong.  Spring 1999: 1.96 (.13), n = 55; Spring 2000: 1.72 (.11), n = 69
The professor was UNable to present information about guns and handguns effectively because his biases on the topic were so strong.  Spring 1999: 1.89 (.13), n = 54; Spring 2000: 1.71 (.10), n = 69
In class, the professor allowed students TOO MUCH time to ask questions about lecture material being presented.  Spring 1999: 2.34 (.18), n = 56; Spring 2000: 1.75 (.11), n = 67
We did a multi-class exercise on family gun ownership, where groups of students made hypotheses about what variables predicted ownership, and those ideas were tested using the class survey. I learned a lot from this exercise.  Spring 1999: 4.29 (.16), n = 56; Spring 2000: 4.72 (.13), n = 71
Because of this class I am more interested in the problem of drunk driving than I was at the beginning of the semester.  Spring 1999: 3.86 (.16), n = 58; Spring 2000: 4.14 (.15), n = 72
Because of this class I am more interested in the issue of guns and handguns than I was at the beginning of the semester.  Spring 1999: 4.19 (.14), n = 42; Spring 2000: 4.43 (.14), n = 68

Note. The response format used was:

1 = disagree strongly

2 = disagree

3 = disagree slightly

4 = agree slightly

5 = agree

6 = agree strongly

Note. The wording of the specific items was the same for both classes, with two exceptions. The wording on the class exercise question is what was used for the Spring 2000 class. For the Spring 1999 class the wording was: "We did a multi-class exercise on pistol ownership, where groups of students made hypotheses about what variables predicted pistol ownership, and those ideas were tested using a national data file. I learned a lot from this exercise."

Footnotes

1. To see the course materials for the Spring 1999 course, go to: http://www.rbtaylor.net/research_methods.htm. To see the course materials for Spring 2000, go to: http://www.rbtaylor.net/160s00.htm.

2. In choosing what two specific topics to address, I considered a number of issues. First, I sought issues that had an "everyday" aspect - these were not abstract problems, but rather something with which students had experience. In addition, I sought problems that had proved troublesome for the criminal justice system in the past, suggesting we needed to learn more about these problems in order to address them effectively. And, third, I chose topics that were not addressed in depth in other criminal justice undergraduate courses.

3. An outline of the basic exercise can be found online at: http://blue.temple.edu/~ralph/gunex.html. That page also shows the hypotheses that the students developed in their work groups, and provides links displaying the results.

4. It can be found on-line at http://www.rbtaylor.net/gunexrace2.htm.

5. The chart can be found online at http://www.rbtaylor.net/gunexbreniser.htm.

6. The following clarification appeared after the question: * you can count a trip where you looked up the journal article using an on-line source, as long as you either read the article in its entirety, or printed out the article in its entirety for later reading, or saved the text of the whole article in a file on a disk for later reading; * you could have done this for ANY class you were taking this semester; * do not count trips where you just scanned articles on-line, or looked at abstracts of articles, if that's all you did.

7. These percentages were calculated with missing data included in the totals and thus treat no answer as a wrong answer; there were two to five missing answers across the different items.

8. The ANOVAs presented here are not technically correct. Students completing the posttest were not completely independent of the students completing the pretest; there was overlap between the two groups, and thus correlated errors. But I did not have linked identifiers, and so could not link individual pretests and posttests or carry out a repeated measures design in which each student serves as his or her own control. As a rough cross check, I carried out nonparametric Mann-Whitney U exact tests; all results significant in the ANOVAs were significant at a similar or higher level in the nonparametric tests.
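For readers who want to run this kind of cross check themselves, a sketch follows. It is illustrative only: the file name (web_use.csv) and the column names (score, wave) are hypothetical rather than taken from the study's data files, and Python's SciPy package (1.7 or later) is assumed.

    # A minimal sketch, assuming a hypothetical file with one row per response,
    # a numeric "score" column, and a "wave" column coded "pre" or "post".
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("web_use.csv")
    pre = df.loc[df["wave"] == "pre", "score"]
    post = df.loc[df["wave"] == "post", "score"]

    # Nonparametric comparison of the pretest and posttest groups, paralleling
    # the one-way ANOVA; exact p-values are requested (SciPy 1.7+).
    u_stat, p_value = stats.mannwhitneyu(pre, post, alternative="two-sided",
                                         method="exact")
    print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")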

9. This was the one outcome where the Levene test of homogeneity of variance was significant (p < .05), suggesting that the variances in the two groups were not homogeneous. Nonetheless, the same result was confirmed by a nonparametric test.

10. A score of 3.5, midway between "disagree slightly" (3) and "agree slightly" (4), would reflect no change in interest since the beginning of the semester.