
Multi-label crowd consensus via joint matrix factorization

  • Regular Paper
  • Published in Knowledge and Information Systems

Abstract

Crowdsourcing is a useful and economical approach to annotating data. Various computational solutions have been developed to infer high-quality consensus labels. However, available solutions mainly target single-label tasks and neglect correlations among labels. In this paper, we introduce a multi-label crowd consensus (MLCC) model based on joint matrix factorization. Specifically, MLCC selectively and jointly factorizes the sample-label association matrices into products of individual and shared low-rank matrices. As such, it exploits the robustness of low-rank matrix approximation to noisy annotations, and it diminishes the impact of unreliable annotators by assigning small weights to their annotation matrices. To obtain coherent low-rank matrices, MLCC additionally leverages the shared low-rank matrix to model correlations among labels, and the individual low-rank matrices to measure the similarity between annotators. MLCC computes the low-rank matrices and weights via a unified objective function, which it iteratively optimizes with an alternating optimization technique. Finally, MLCC uses the optimized low-rank matrices and weights to compute the consensus labels. Our experimental results demonstrate that MLCC outperforms competitive methods in inferring consensus labels. Besides identifying spammers, MLCC is robust to their incorrect annotations, since it assigns them small or zero weights.
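The factorize-and-reweight scheme described in the abstract can be illustrated with a small sketch. This is a toy construction under assumed notation (each annotator's matrix `Y_m` is approximated as `S_m @ U` with a shared low-rank matrix `U` and individual matrices `S_m`, and annotator weights shrink as reconstruction error grows); it is not the paper's exact objective or update rules:

```python
import numpy as np

def mlcc_sketch(Ys, rank=2, iters=30):
    """Toy joint matrix factorization for multi-label crowd consensus.

    Each annotator's sample-label matrix Y_m (n x q) is approximated as
    S_m @ U, where U (rank x q) is shared across annotators and captures
    label structure, while S_m is individual to annotator m.  Annotator
    weights shrink as reconstruction error grows, so unreliable annotators
    (spammers) contribute little to the consensus.  Illustrative only.
    """
    M = len(Ys)
    # Initialize the shared matrix from the top right singular vectors
    # of the averaged annotations (the majority structure).
    U = np.linalg.svd(np.mean(Ys, axis=0), full_matrices=False)[2][:rank]
    w = np.full(M, 1.0 / M)
    for _ in range(iters):
        # Individual matrices: project each Y_m onto the row space of U.
        Ss = [Y @ np.linalg.pinv(U) for Y in Ys]
        # Shared matrix: weighted least squares over all annotators.
        A = np.vstack([np.sqrt(w[m]) * Ss[m] for m in range(M)])
        B = np.vstack([np.sqrt(w[m]) * Ys[m] for m in range(M)])
        U = np.linalg.lstsq(A, B, rcond=None)[0]
        # Reweight annotators inversely to their reconstruction error.
        errs = np.array([np.linalg.norm(Ys[m] - Ss[m] @ U) for m in range(M)])
        w = 1.0 / (errs + 1e-8)
        w /= w.sum()
    # Consensus labels: weighted reconstruction, thresholded at 0.5.
    consensus = sum(w[m] * (Ss[m] @ U) for m in range(M))
    return (consensus >= 0.5).astype(int), w
```

Feeding this sketch two copies of a ground-truth annotation matrix and one random "spammer" matrix drives the spammer's weight toward zero, mirroring the robustness property the abstract claims for the full model.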


Notes

  1. http://www.amt.com.

  2. http://www.crowdflower.com.

  3. http://test.baidu.com/crowdtest/.

  4. http://mulan.sourceforge.net/datasets-mlc.html.


Acknowledgements

We thank the authors who kindly shared their source code and datasets with us for the experiments, the anonymous reviewers for their comments on improving this paper, and Mr. Jia Bin for maintaining the computing resources. This research is supported by NSFC (61872300, 61741217, 61873214 and 61871020), the Fundamental Research Funds for the Central Universities (XDJK2019B024), and the Natural Science Foundation of CQ CSTC (cstc2018jcyjAX0228, cstc2016jcyjA035).

Author information


Corresponding author

Correspondence to Guoxian Yu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and permissions

About this article


Cite this article

Tu, J., Yu, G., Domeniconi, C. et al. Multi-label crowd consensus via joint matrix factorization. Knowl Inf Syst 62, 1341–1369 (2020). https://doi.org/10.1007/s10115-019-01386-7
