Who do you trust? College students’ attribution of stigma to peers with incarceration histories

Journal of Experimental Criminology

Abstract

Objectives

This brief communication tests how an undergraduate student’s incarceration history (i.e., previous incarceration vs. no previous incarceration) affects evaluations by their peers on several scales (e.g., desired social distance, warmth, competence, expected immoral behaviors).

Methods

The experimental conditions were presented in a survey delivered to a sample of MTurk respondents currently enrolled in undergraduate classes (N = 400). OLS regression was used to estimate the impact of the experimental manipulation on respondents’ feelings toward formerly incarcerated peers.
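As a rough illustration of this design, the core estimate reduces to a regression of each outcome scale on the condition indicator. The sketch below is not the authors’ code; the column names (social_distance, incarcerated) are hypothetical stand-ins for the study’s actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent; column names are hypothetical.
df = pd.read_csv("responses.csv")

# 'incarcerated' is the 0/1 condition indicator; 'social_distance' is one
# of the outcome scales. The coefficient on 'incarcerated' estimates the
# average treatment effect of the manipulation on that scale.
model = smf.ols("social_distance ~ incarcerated", data=df).fit()
print(model.summary())
```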

Results

Formerly incarcerated students were rated as less warm and less moral by respondents, and incarceration history increased respondents’ desired social distance. Mediation analysis indicates that perceived warmth is the primary driver of desired social distance.
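The article does not specify which mediation estimator was used, but a minimal product-of-coefficients sketch (with the same hypothetical column names as above) illustrates the logic: the indirect effect of the condition through warmth is the product of the condition-to-warmth and warmth-to-outcome coefficients.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # same hypothetical columns as above

# Path a: condition -> mediator (perceived warmth)
a_fit = smf.ols("warmth ~ incarcerated", data=df).fit()
# Paths b and c': mediator and condition -> outcome
b_fit = smf.ols("social_distance ~ warmth + incarcerated", data=df).fit()

indirect = a_fit.params["incarcerated"] * b_fit.params["warmth"]  # a * b
direct = b_fit.params["incarcerated"]                             # c'
print(f"indirect effect via warmth: {indirect:.3f}, direct effect: {direct:.3f}")
```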

Conclusions

The results show that formerly incarcerated undergraduate students are stigmatized by their peers in significant ways. However, concerns about morality and competence do not affect desired social distance. The behavioral penalties assessed against students with incarceration histories are driven by concerns about warmth. Further research on the mechanisms that create (as well as reduce) stigma for formerly incarcerated college students is necessary.


Data availability

The data used in this study are available to readers upon request.

Notes

  1. However, Jones Young and Powell (2015) developed hypotheses about how offense type influences stereotype content.

  2. Parameter estimates are virtually identical to what we obtained from OLS models.


Funding

This project was funded by the Farris Family Innovation Fund through Kent State University.

Author information

Corresponding author

Correspondence to Jon Overton.

Ethics declarations

Ethics approval

The data collection procedures in this study were approved by the Kent State University Institutional Review Board. All participants consented to participate in this study.

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Vignette text

Participants read the following vignette (note that the text in brackets appeared only in the condition where the target had been incarcerated):

For the next portion of the study, please imagine that you are in a college writing course. The professor has asked everyone to bring a short piece of writing about themselves and then read it in front of the class. One of the students in your class shares the following:

“I am very excited to be going to school with all of you. [I recently got out of prison, after spending two years inside for a felony conviction.] I grew up around here and my parents were very proud that I graduated high school with a 3.4 GPA and got into school here. I’m not sure what I want to major in, but I look forward to figuring that out.”

GSS comparison

When using crowdsourced samples, Thompson and Pickett (2020) recommend asking a few questions identical to those used in national probability surveys as a point of comparison. Therefore, in Table 6 we compare our sample to a sample representative of public opinion in the United States (the 2018 General Social Survey, weighted to account for nonresponse bias). We compare our sample to the full GSS and to the following subsample of the GSS: people with at least a junior college degree who are under 40 years old. We use this subsample to help assess the cause of differences between the MTurk and GSS samples: Do college students (our MTurk sample and the GSS subsample) hold different views than the general population (the full GSS)? Or is our platform of choice (MTurk) producing different results than the GSS (the GSS subsample)? The subsample is a reasonable, albeit rough, proxy for current college students, because it captures people who have been exposed to college and who fall in the same age range as the typical college student. We chose 40 years as our age cutoff because 39.5 years is the 95th age percentile in our MTurk sample, giving us a comparison group of roughly similar age and educational background.
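A comparison of this kind amounts to computing weighted shares on the full file and on the filtered subsample. The sketch below assumes hypothetical file and item names; WTSSALL and DEGREE are the GSS’s weight and education variables (DEGREE >= 2 indicates junior college or higher).

```python
import numpy as np
import pandas as pd

# Hypothetical file and item names; 'wtssall' is the GSS survey weight.
gss = pd.read_csv("gss_2018.csv")

def weighted_share(x, w):
    """Weighted proportion endorsing an item, ignoring missing values."""
    mask = x.notna()
    return np.average(x[mask], weights=w[mask])

full = weighted_share(gss["courts_too_harsh"], gss["wtssall"])
sub = gss[(gss["degree"] >= 2) & (gss["age"] < 40)]  # junior college+, under 40
subset = weighted_share(sub["courts_too_harsh"], sub["wtssall"])
print(f"full GSS: {full:.2f}; college-educated, under-40 subsample: {subset:.2f}")
```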

Table 6 Comparison of GSS and MTurk samples

The MTurk respondents were less supportive of harsh criminal sentencing and spending on law enforcement than either the general population or the college-educated under-40 subset of the GSS, as seen in Table 6. At the same time, the MTurk respondents also exhibit greater fear of crime, with just under half reporting they would be afraid to walk alone at night in the area where they live.

Randomization check

If participants are randomly assigned to their experimental conditions, then, given a sufficiently large sample, we should not expect to see meaningful differences in participant demographics between the two conditions. Following Austin (2011), we compute standardized difference scores to capture the degree of demographic imbalance between our two conditions. The results are reported in Table 7. Scores around .10 indicate negligible differences, while scores approaching .20 mark the threshold of non-negligible imbalance. As none of our balance scores crosses that threshold, we conclude that our randomization procedure succeeded.
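For reference, Austin’s (2011) standardized difference scales the between-condition difference in means (or proportions) by a pooled standard deviation. A small sketch, with an illustrative (not actual) imbalance value:

```python
import numpy as np

def std_diff_continuous(x_treat, x_ctrl):
    """Austin (2011): mean difference / pooled SD for a continuous covariate."""
    pooled_sd = np.sqrt((np.var(x_treat, ddof=1) + np.var(x_ctrl, ddof=1)) / 2)
    return (np.mean(x_treat) - np.mean(x_ctrl)) / pooled_sd

def std_diff_binary(p_treat, p_ctrl):
    """Austin (2011): standardized difference for a binary covariate."""
    return (p_treat - p_ctrl) / np.sqrt(
        (p_treat * (1 - p_treat) + p_ctrl * (1 - p_ctrl)) / 2
    )

# e.g., 52% female in one condition vs. 48% in the other (hypothetical values):
print(std_diff_binary(0.52, 0.48))  # ~0.08 -- negligible imbalance
```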

Table 7 Balance of demographic characteristics across experimental conditions, before imputation

MTurk procedures

Following recommendations by other researchers, we took extensive precautions to ensure we obtained high-quality data from MTurk. Within the Amazon platform, we allowed participants to submit work only if they were in the United States, had completed at least 50 Human Intelligence Tasks (or HITs) for other requesters, and had a greater than 95% approval rating on those HITs. We also took extra steps to screen out participants from outside the United States. Kennedy and colleagues (2020) find that responses from outside the United States are a major source of low-quality data. We followed their procedures for screening out participants who were masking their location with a VPN (virtual private network) or whose IP address indicated they were outside the United States.
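The kind of IP check this implies can be sketched as follows. The endpoint and response fields below are hypothetical placeholders for whichever commercial IP-lookup service is used; this is not the authors’ implementation.

```python
import requests

def outside_us_or_vpn(ip_address: str, api_key: str) -> bool:
    """Flag respondents whose IP is outside the US or routed through a VPN/VPS.

    The URL and response fields are hypothetical placeholders for a
    commercial IP-lookup service of the kind Kennedy et al. (2020) describe.
    """
    resp = requests.get(
        "https://api.example-iplookup.com/v1/lookup",  # hypothetical URL
        params={"ip": ip_address, "key": api_key},
        timeout=10,
    )
    info = resp.json()
    return info["country"] != "US" or info["is_vpn"]
```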

Data cleaning

We coded responses as invalid if they included nonsensical answers to open-response questions (following Chmielewski & Kucker, 2019) (N = 85). When participants were asked to summarize the scenario they had read, answers that were nonsensical or completely off-topic were coded as invalid (e.g., responding with “very good study” or explaining how to paraphrase a segment of text). These respondents tended to give similarly incomprehensible answers to diagnostic open-response questions at the end of the study and often gave impossible answers to other questions, such as college GPAs of “5000.”

Some participants were also clearly not actual college students (N = 11). These participants would begin the study, answer our screening questions, and then see a screen stating that they were not eligible. We suspect they cleared their browser cookies to restart the survey in Qualtrics and then answered the screening questions differently so that they could participate. We knew these were the same participants because they had to enter their MTurk worker ID before being directed to the “ineligible” screen. Fortunately, as indicated above, this was a very small number of participants. Those who did this also frequently gave nonsensical answers and performed poorly on the attention check items, which would have removed them from our analysis anyway.

Among otherwise valid responses (participants who were from the United States, gave comprehensible answers to open-response questions, and did not attempt to bypass the screening questions), we removed from the analysis any participant who failed one of our two attention checks (N = 29). The first check came immediately after the vignette and asked participants to select one of four options summarizing what happened in the scenario. The second check was a question placed in the middle of subsequent survey items instructing the respondent to select a specific response option. Participants were excluded from the final analysis if they answered either attention check incorrectly. The resulting exclusion pipeline is sketched below.
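Taken together, the cleaning steps reduce to a simple sequence of filters. The flag columns below are hypothetical names mirroring the steps described above, not the authors’ actual variables.

```python
import pandas as pd

# Hypothetical flag columns mirroring the cleaning steps described above.
df = pd.read_csv("raw_responses.csv")

clean = df[
    ~df["nonsense_open_response"]     # incomprehensible answers (N = 85)
    & ~df["bypassed_screener"]        # restarted survey to pass screening (N = 11)
    & df["passed_attention_check_1"]  # vignette summary question
    & df["passed_attention_check_2"]  # instructed-response item
]
print(f"{len(df)} responses -> {len(clean)} retained for analysis")
```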

Table 8 Comparison of sample demographics with and without imputation
Table 9 OLS coefficients of demographic variables predicting prejudice against formerly incarcerated persons from imputed and non-imputed data


About this article


Cite this article

Overton, J., Fretwell, M.D. & Dum, C.P. Who do you trust? College students’ attribution of stigma to peers with incarceration histories. J Exp Criminol 18, 847–870 (2022). https://doi.org/10.1007/s11292-021-09463-0
