Abstract
Objectives
This brief communication tests how an undergraduate student’s incarceration history (i.e., previous incarceration vs. no previous incarceration) affects evaluations by their peers on several scales (e.g., desired social distance, warmth, competence, expected immoral behaviors).
Methods
The experimental conditions were presented in a survey delivered to a sample of MTurk respondents currently enrolled in undergraduate classes (N = 400). OLS regression was used to estimate the impact of the experimental manipulation on respondents’ feelings toward formerly incarcerated peers.
Results
Formerly incarcerated students were rated as less warm and less moral by respondents, and incarceration history increased respondents’ desired social distance. Mediation analysis indicates that perceived warmth is the primary driver of desired social distance.
Conclusions
The results show that formerly incarcerated undergraduate students are stigmatized by their peers in significant ways. However, concerns about morality and competence do not affect desired social distance. The behavioral penalties assessed against students with incarceration histories are driven by concerns about warmth. Further research on the mechanisms that create (as well as reduce) stigma for formerly incarcerated college students is necessary.
Data availability
The data used in this study are available to readers upon request.
Notes
However, Jones Young and Powell (2015) developed hypotheses about how offense type influences stereotype content.
Parameter estimates are virtually identical to what we obtained from OLS models.
References
Austin, P. C. (2011). An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behavioral Research, 46(3), 399–424. https://doi.org/10.1080/00273171.2011.568786.
Awale, A., Chan, C. S., & Ho, G. T. S. (2019). The influence of perceived warmth and competence on realistic threat and willingness for intergroup contact. European Journal of Social Psychology, 49(5), 857–870. https://doi.org/10.1002/ejsp.2553.
Campbell, D. T., & Stanley, J. C. (1973). Experimental and quasi-experimental designs for research. Rand McNally College Publishing Company.
Casciaro, T., & Lobo, M. S. (2008). When competence is irrelevant: The role of interpersonal affect in task-related ties. Administrative Science Quarterly, 53(4), 655–684. https://doi.org/10.2189/asqu.53.4.655.
Chmielewski, M., & Kucker, S. C. (2019). An MTurk crisis? Shifts in data quality and the impact on study results. Social Psychological and Personality Science, 11(4), 464–473. https://doi.org/10.1177/1948550619875149.
Courtright, K. E., Mackey, D. A., & Packard, S. H. (2005). Empathy among college students and criminal justice majors: Identifying predispositional traits and the role of education. Journal of Criminal Justice Education, 16(1), 125–144. https://doi.org/10.1080/1051125042000333514.
Cuddy, A. J. C., Glick, P., & Beninger, A. (2011). The dynamics of warmth and competence judgments, and their outcomes in organizations. Research in Organizational Behavior, 31, 73–98. https://doi.org/10.1016/j.riob.2011.10.004.
Dum, C. P., Socia, K. M., & Rydberg, J. (2017). Public support for emergency shelter housing interventions concerning stigmatized populations. Criminology & Public Policy, 16(3), 835–877. https://doi.org/10.1111/1745-9133.12311.
Dum, C. P., Socia, K. M., Long, B. L., & Yarrison, F. (2019). Would God forgive? Public attitudes toward sex offenders in places of worship. Sexual Abuse, 32(5), 567–590. https://doi.org/10.1177/1079063219839498.
Duwe, G., & Clark, V. (2014). The effects of prison-based educational programming on recidivism and employment. The Prison Journal, 94(4), 454–478. https://doi.org/10.1177/0032885514548009.
Falco, D. L., & Martin, J. S. (2012). Examining punitiveness: Assessing views toward the punishment of offenders among criminology and non-criminology students. Journal of Criminal Justice Education, 23(2), 205–232. https://doi.org/10.1080/10511253.2011.631931.
Farnworth, M., Longmire, D. R., & West, V. M. (1998). College students’ views on criminal justice. Journal of Criminal Justice Education, 9(1), 39–57. https://doi.org/10.1080/10511259800084171.
Fiske, S. T., Cuddy, A. J. C., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878–902. https://doi.org/10.1037/0022-3514.82.6.878.
Frana, J. F., & Schroder, R. D. (2013). An analysis of student opinions on former convicts as professors. Contemporary Journal of Anthropology and Sociology, 3(1), 29–40.
Giguere, R., & Dundes, L. (2002). Help wanted: A survey of employer concerns about hiring ex-convicts. Criminal Justice Policy Review, 13(4), 396–408. https://doi.org/10.1177/088740302237806.
Goffman, E. (1963). Stigma: Notes on the management of spoiled identity. Simon & Schuster.
Halkovic, A., & Greene, A. C. (2015). Bearing stigma, carrying gifts: What colleges can learn from students with incarceration experience. The Urban Review, 47(4), 759–782. https://doi.org/10.1007/s11256-015-0333-x.
Harper, C. A., Bartels, R. M., & Hogue, T. E. (2018). Reducing stigma and punitive attitudes toward pedophiles through narrative humanization. Sexual Abuse: A Journal of Research and Treatment, 30(5), 533–555. https://doi.org/10.1177/1079063216681561.
Hensley, C., Miller, A., Koscheski, M., & Tewksbury, R. (2003). Student attitudes toward inmate privileges. American Journal of Criminal Justice, 27(2), 249–262.
Hirschfield, P. J., & Piquero, A. R. (2010). Normalization and legitimation: Modeling stigmatizing attitudes toward ex-offenders. Criminology, 48(1), 27–55.
Jones Young, N. C., & Powell, G. N. (2015). Hiring ex-offenders: A theoretical model. Human Resource Management Review, 25(3), 298–312. https://doi.org/10.1016/j.hrmr.2014.11.001.
Kennedy, R., Clifford, S., Burleigh, T., Waggoner, P. D., Jewell, R., & Winter, N. J. G. (2020). The shape of and solutions to the MTurk quality crisis. Political Science Research and Methods, 8(4), 614–629. https://doi.org/10.1017/psrm.2020.6.
Krings, F., Sczesny, S., & Kluge, A. (2011). Stereotypical inferences as mediators of age discrimination: The role of competence and warmth. British Journal of Management, 22(2), 187–201. https://doi.org/10.1111/j.1467-8551.2010.00721.x.
Lageson, S. E., Denver, M., & Pickett, J. T. (2019). Privatizing criminal stigma: Experience, intergroup contact, and public views about publicizing arrest records. Punishment & Society, 21(3), 315–341. https://doi.org/10.1177/1462474518772040.
Lawler, E. J., & Thye, S. R. (1999). Bringing emotions into social exchange theory. Annual Review of Sociology, 25, 217–244.
LeBel, T. P. (2012). Invisible stripes? Formerly incarcerated persons’ perceptions of stigma. Deviant Behavior, 33(2), 89–107. https://doi.org/10.1080/01639625.2010.538365.
Link, B. G., & Phelan, J. C. (2001). Conceptualizing stigma. Annual Review of Sociology, 27, 363–385. https://doi.org/10.1146/annurev.soc.27.1.363.
Lockwood, S., Nally, J. M., Ho, T., & Knutson, K. (2012). The effect of correctional education on postrelease employment and recidivism: A 5-year follow-up study in the state of Indiana. Crime & Delinquency, 58(3), 380–396. https://doi.org/10.1177/0011128712441695.
Lovett, B. J., Jordan, A. H., & Wiltermuth, S. S. (2012). Individual differences in the moralization of everyday life. Ethics & Behavior, 22(4), 248–257. https://doi.org/10.1080/10508422.2012.659132.
Mackey, D. A., & Courtright, K. E. (2000). Assessing punitiveness among college students: A comparison of criminal justice majors with other majors. The Justice Professional, 12(4), 423–441. https://doi.org/10.1080/1478601X.2000.9959561.
Mackey, D. A., Courtright, K. E., & Packard, S. H. (2006). Testing the rehabilitative ideal among college students. Criminal Justice Studies, 19(2), 153–170. https://doi.org/10.1080/14786010600764534.
MacKinnon, D. P., Fairchild, A. J., & Fritz, M. S. (2007). Mediation analysis. Annual Review of Psychology, 58, 593–614. https://doi.org/10.1146/annurev.psych.58.110405.085542.
Mancini, C., & Mears, D. P. (2010). To execute or not to execute? Examining public support for capital punishment of sex offenders. Journal of Criminal Justice, 38(5), 959–968. https://doi.org/10.1016/j.jcrimjus.2010.06.013.
McTier Jr., T. S., Briscoe, K. L., & Davis, T. J. (2020). College administrators’ beliefs and perceptions of college students with criminal records. Journal of Student Affairs Research and Practice, 57(3), 296–308. https://doi.org/10.1080/19496591.2019.1648273.
Miller, A. J., Tewksbury, R., & Hensley, C. (2004). College students’ perceptions of crime, prison and prisoners. Criminal Justice Studies, 17(3), 311–328. https://doi.org/10.1080/1478601042000281132.
Miner-Romanoff, K. (2014). Student perceptions of juvenile offender accounts in criminal justice education. American Journal of Criminal Justice, 39(3), 611–629. https://doi.org/10.1007/s12103-013-9223-5.
Moore, K. E., & Tangney, J. P. (2017). Managing the concealable stigma of criminal justice system involvement: A longitudinal examination of anticipated stigma, social withdrawal, and post–release adjustment. Journal of Social Issues, 73(2), 322–340. https://doi.org/10.1111/josi.12219.
Moore, K. E., Stuewig, J. B., & Tangney, J. P. (2016). The effect of stigma on criminal offenders’ functioning: A longitudinal mediational model. Deviant Behavior, 37(2), 196–218. https://doi.org/10.1080/01639625.2014.1004035.
National Center for Education Statistics. (2019). Digest of Education Statistics, 2019. National Center for Education Statistics. https://nces.ed.gov/programs/digest/d19/tables/dt19_303.70.asp. Accessed 19 March 2021
Ott, M., & McTier Jr., T. S. (2020). Faculty attitudes toward college students with criminal records. Journal of Diversity in Higher Education, 13(4), 297–308. https://doi.org/10.1037/dhe0000138.
Quadlin, N. (2018). The mark of a woman’s record: Gender and academic performance in hiring. American Sociological Review, 83(2), 331–360. https://doi.org/10.1177/0003122418762291.
Rade, C. B., Desmarais, S. L., & Mitchell, R. E. (2016). A meta-analysis of public attitudes toward ex-offenders. Criminal Justice and Behavior, 43(9), 1260–1280. https://doi.org/10.1177/0093854816655837.
Ridener, R., & Kuehn, S. (2017). College education, major, or criminology classes? An examination of what drives students’ level of punitiveness. Criminal Justice Studies, 30(1), 1–16. https://doi.org/10.1080/1478601X.2016.1269325.
Selke, W. L. (1980). The impact of higher education on crime orientations. Journal of Criminal Justice, 8(3), 175–184. https://doi.org/10.1016/0047-2352(80)90024-0.
Solomon, A. L., Osborne, J. W. L., LoBuglio, S. F., Mellow, J., & Mukamal, D. A. (2008). Life after lockup: Improving reentry from jail to the community. Urban Institute. https://www.urban.org/research/publication/life-after-lockup-improving-reentry-jail-community. Accessed 19 March 2021
Strayhorn, T. L., Johnson, R. M., & Barrett, B. A. (2013). Investigating the college adjustment and transition experiences of formerly incarcerated Black male collegians at predominantly White institutions. Spectrum: A Journal on Black Men, 2(1), 73–98.
Tajalli, H., Soto, W. D., & Dozier, A. (2013). Determinants of punitive attitudes of college students toward criminal offenders. Journal of Criminal Justice Education, 24(3), 339–356. https://doi.org/10.1080/10511253.2012.740055.
Thompson, A. J., & Pickett, J. T. (2020). Are relational inferences from crowdsourced and opt-in samples generalizable? Comparing criminal justice attitudes in the GSS and five online samples. Journal of Quantitative Criminology, 36, 907–932. https://doi.org/10.1007/s10940-019-09436-7.
Unnever, J. D., Cochran, J. K., Cullen, F. T., & Applegate, B. K. (2010). The pragmatic American: Attributions of crime and the hydraulic relation hypothesis. Justice Quarterly, 27(3), 431–457. https://doi.org/10.1080/07418820902855362.
Weinberg, J., Freese, J., & McElhattan, D. (2014). Comparing data characteristics and results of an online factorial survey between a population-based and a crowdsource-recruited sample. Sociological Science, 1, 292–310. https://doi.org/10.15195/v1.a19.
Zellner, A. (1962). An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. Journal of the American Statistical Association, 57(298), 348–368. https://doi.org/10.1080/01621459.1962.10480664.
Funding
This project was funded by the Farris Family Innovation Fund through Kent State University.
Ethics declarations
Ethics approval
The data collection procedures in this study were approved by the Kent State University Institutional Review Board. All participants consented to participate in this study.
Conflict of interest
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Vignette text
The text of the vignette read by participants is as follows (note that the bracketed text appeared only in the condition where the target had been incarcerated):
For the next portion of the study, please imagine that you are in a college writing course. The professor has asked everyone to bring a short piece of writing about themselves and then read it in front of the class. One of the students in your class shares the following:
“I am very excited to be going to school with all of you. [I recently got out of prison, after spending two years inside for a felony conviction.] I grew up around here and my parents were very proud that I graduated high school with a 3.4 GPA and got into school here. I’m not sure what I want to major in, but I look forward to figuring that out.”
GSS comparison
When using crowdsourced samples, Thompson and Pickett (2020) recommend asking a few questions identical to those used in national probability surveys as a point of comparison. Therefore, in Table 6 we compare our sample to a sample representative of public opinion in the United States (the 2018 General Social Survey, weighted to account for nonresponse bias). We compare our sample both to the full GSS and to a GSS subsample of people who have at least a junior college degree and are no more than 40 years old. This subsample helps us assess the source of differences between the MTurk and GSS samples: do college students (our MTurk sample and the GSS subsample) hold different views than the general population (the full GSS)? Or is our platform of choice (MTurk) producing different results than the GSS (the GSS subsample)? The subsample is a reasonable, albeit rough, proxy for current college students because it captures people who have been exposed to college and who fall in the same age range as the typical college student. We chose 40 years as the age cutoff because 39.5 years is the 95th age percentile in our MTurk sample. This yields a rough comparison group of similar age and educational background across the two samples.
The MTurk respondents were less supportive of harsh criminal sentencing and spending on law enforcement than either the general population or the college-educated under-40 subset of the GSS, as seen in Table 6. At the same time, the MTurk respondents also exhibit greater fear of crime, with just under half reporting they would be afraid to walk alone at night in the area where they live.
Randomization check
If participants are randomly assigned to their experimental conditions, then, given a sufficiently large sample, we should not expect to see statistically significant demographic differences between the two conditions. Following Austin (2011), we compute standardized difference scores to capture the degree of imbalance across demographic categories in our two conditions. Our results are reported in Table 7. Scores around .10 indicate negligible differences, while those near .20 approach the threshold for non-negligible imbalance. Because none of our balance scores crosses that threshold, we conclude that our randomization procedure succeeded.
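For a binary covariate, Austin’s (2011) standardized difference is the difference in group proportions divided by the pooled standard deviation. A minimal sketch (the proportions below are hypothetical illustrations, not values from our sample):

```python
import math

def standardized_difference(p_treat: float, p_control: float) -> float:
    """Standardized difference for a binary covariate (Austin, 2011):
    absolute difference in proportions scaled by the pooled SD."""
    pooled_var = (p_treat * (1 - p_treat) + p_control * (1 - p_control)) / 2
    return abs(p_treat - p_control) / math.sqrt(pooled_var)

# Hypothetical example: 52% female in the incarceration condition
# vs. 48% in the control condition.
d = standardized_difference(0.52, 0.48)
print(round(d, 3))  # roughly 0.08, below the ~.10 "negligible" benchmark
```

Values computed this way for each demographic category can then be compared against the .10 and .20 benchmarks described above.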
MTurk procedures
Following recommendations by other researchers, we took extensive precautions to ensure that we obtained high-quality data from MTurk. Within the Amazon platform, we allowed participants to submit work only if they were in the United States, had completed at least 50 Human Intelligence Tasks (HITs) for other requesters, and had a greater than 95% approval rating on those HITs. We also took extra steps to screen out participants from outside the United States, as Kennedy and colleagues (2020) find that responses originating outside the country are a major source of low-quality data. We followed their procedures for screening out participants who masked their location with a VPN (virtual private network) or whose IP address indicated they were outside the United States.
Data cleaning
We coded responses as invalid if they included nonsensical answers to open-response questions (following Chmielewski & Kucker, 2019) (N = 85). When participants were asked to summarize the scenario they read, answers that were nonsensical or completely off-topic were coded as invalid (e.g., responding with “very good study” or explaining how to paraphrase a segment of text). These respondents tended to give similarly incomprehensible answers to diagnostic open-response questions at the end of the study and often gave impossible answers to other questions, such as college GPAs of “5000.”
Some participants also were clearly not actual college students (N = 11). These participants would begin the study, answer our screening questions, and then see a screen stating that they were not eligible. We suspect they cleared the cookies in their browser to restart the survey in Qualtrics and then answered the screening questions differently so that they could participate. We knew these were the same participants because they had to enter their MTurk worker ID before they were directed to the “ineligible” screen. Fortunately, as indicated above, this comprised a very small number of participants. Those who did this also frequently gave nonsensical answers and performed poorly on the attention check items, which would have removed them from our analysis anyway.
Among the valid responses (participants in the United States who gave comprehensible answers to open-response questions and did not attempt to bypass the screening questions), we removed from the analysis any participant who failed one of our two attention checks (N = 29). The first check, presented immediately after the vignette, asked participants to select one of four options summarizing what happened in the scenario. The second was a question embedded among subsequent survey items instructing the respondent to select a specific response option. Participants who answered either attention check incorrectly were excluded from the final analysis.
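The exclusion sequence above (invalid open responses, screening bypassers, failed attention checks) amounts to a conjunction of filters. A minimal sketch, with hypothetical records and field names that are illustrative rather than our actual variable names:

```python
# Hypothetical respondent records; field names are illustrative only.
respondents = [
    {"id": "A1", "open_ok": True,  "screened": True,  "check1": True,  "check2": True},
    {"id": "A2", "open_ok": True,  "screened": True,  "check1": False, "check2": True},
    {"id": "A3", "open_ok": False, "screened": True,  "check1": True,  "check2": True},
    {"id": "A4", "open_ok": True,  "screened": False, "check1": True,  "check2": True},
]

def keep(r: dict) -> bool:
    """Retain only respondents with comprehensible open responses, an honest
    pass through screening, and both attention checks answered correctly."""
    return r["open_ok"] and r["screened"] and r["check1"] and r["check2"]

analytic_sample = [r for r in respondents if keep(r)]
print([r["id"] for r in analytic_sample])  # ['A1']
```

Each exclusion criterion maps to one boolean field, so a respondent failing any single criterion is dropped from the analytic sample.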
About this article
Cite this article
Overton, J., Fretwell, M.D. & Dum, C.P. Who do you trust? College students’ attribution of stigma to peers with incarceration histories. J Exp Criminol 18, 847–870 (2022). https://doi.org/10.1007/s11292-021-09463-0