Abstract
The primary focus of this article is a general discussion of the theory and development of artificial intelligence (AI) systems and their expanding implications for human interaction and socioeconomic issues. We provide an overview of the origins of AI and of the ubiquitous forms of AI that now saturate our culture, and we discuss the many ways in which AI is likely to continue to exert a pervasive influence on every aspect of our lives. In a separate but directly related article in this issue, we provide an experimental analysis of a current AI system in the form of a deep neural network and describe how it is able to model and forecast human learning. Throughout this article, we link common elements of the two articles. Here, however, the emphasis is on comparing how cloud-based AI systems affect citizens of the United States and of the European Union in terms of individual rights, security, equitable access, inadvertent machine discrimination, and corporate responsibility. We note that although AI offers researchers across academic disciplines fascinating, previously unimagined, and critically important research possibilities, there are also areas in which AI systems may be unscrupulously misused.
Funding
No funding or other compensation was received for any part of this investigation.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author affirms that there are no conflicts of interest with regard to any aspect of this investigation.
Ethical approval
All training and testing procedures involving human participants, as discussed in and linked to the related article in this issue, were conducted in accordance with the ethical standards of the institutional research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.
Cite this article
Ninness, C., Ninness, S.K. Emergent Virtual Analytics: Artificial Intelligence and Human-Computer Interactions. Behav. Soc. Iss. 29, 100–118 (2020). https://doi.org/10.1007/s42822-020-00031-1