Argument annotation and analysis using deep learning with attention mechanism in Bahasa Indonesia
Journal of Big Data ( IF 8.1 ) Pub Date : 2020-10-19 , DOI: 10.1186/s40537-020-00364-z
Derwin Suhartono , Aryo Pradipta Gema , Suhendro Winton , Theodorus David , Mohamad Ivan Fanany , Aniati Murni Arymurthy

Argumentation mining is a research field that focuses on argumentative sentences. Such sentences are common in daily communication and play an important role in every decision- or conclusion-making process. The objective of this research is to investigate deep learning combined with an attention mechanism for argument annotation and analysis. Argument annotation classifies the argument components of a given discourse into several classes: major claim, claim, premise, and non-argumentative. Argument analysis addresses the characteristics and validity of the arguments arranged around a single topic; one such analysis assesses whether an established argument is sufficient or not. The dataset used for both tasks consists of 402 persuasive essays, translated into Bahasa Indonesia (the mother tongue of Indonesia) to give an overview of how the approach works in a specific language other than English. Several deep learning models, namely CNN (Convolutional Neural Network), LSTM (Long Short-Term Memory), and GRU (Gated Recurrent Unit), are utilized for both argument annotation and analysis, while HAN (Hierarchical Attention Network) is utilized only for argument analysis. The attention mechanism is combined with each model to assign weights over the input for better performance. Across all experiments, the combination of deep learning and attention for argument annotation and analysis achieves better results than previous research.
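The attention mechanism described above acts as a learned weighting over token representations: each token's hidden state (e.g., from an LSTM or GRU encoder) is scored, the scores are normalized with a softmax, and the weighted sum forms a sentence vector that is then classified into one of the four argument classes. The following is a minimal NumPy sketch of that idea, not the authors' implementation; all weights here are random stand-ins for trained parameters, and the hidden states stand in for real encoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, w):
    """Collapse a (seq_len, dim) matrix of token hidden states into a
    single sentence vector, weighting each token by a score vector w."""
    scores = H @ w           # one scalar score per token, shape (seq_len,)
    alpha = softmax(scores)  # attention weights, non-negative, sum to 1
    return alpha @ H, alpha  # weighted sum (dim,), weights (seq_len,)

# Toy setup: 6 tokens with 8-dim hidden states (stand-ins for LSTM/GRU output).
H = rng.normal(size=(6, 8))
w = rng.normal(size=8)
sentence_vec, alpha = attention_pool(H, w)

# Linear classifier over the four argument classes named in the abstract.
classes = ["major claim", "claim", "premise", "non-argumentative"]
W_out = rng.normal(size=(8, 4))
probs = softmax(sentence_vec @ W_out)
predicted = classes[int(np.argmax(probs))]
print(predicted)
```

In a trained model, `w` and `W_out` would be learned jointly with the encoder, and the attention weights `alpha` additionally indicate which tokens the model considered most relevant, which is useful for inspecting annotation decisions.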




Updated: 2020-10-19