Influencer Management Tools: Algorithmic Cultures, Brand Safety, and Bias
Social Media + Society (IF 4.636), Pub Date: 2021-03-30, DOI: 10.1177/20563051211003066
Sophie Bishop
This article explores algorithmic influencer management tools, designed to support marketers in selecting influencers for advertising campaigns based on categorizations such as brand suitability, “brand friendliness,” and “brand risk.” I argue that, by approximating these values, such tools reify existing social inequalities in influencer industries, particularly along the lines of sexuality, class, and race. They also deepen surveillance of influencer content by brand stakeholders, who are concerned that influencers will err and be “cancelled” (risking their investments in content). My critical framework synthesizes feminist critiques of ostensibly participatory influencer industries with close attention to critical algorithmic studies. This article provides an in-depth look at how brand risk and brand safety are predicted and measured using one tool, Peg. Through a “walkthrough” of this tool, underpinned by a wider industry ethnography, I demonstrate how value-laden algorithmic judgments map onto well-worn hierarchies of desirability and employability that originate from systemic bias along the lines of class, race, and gender.



Updated: 2021-03-31