Web Intell. | Pub Date: 2021-10-11 | DOI: 10.3233/web-210459
X. Mao, Wei Duan, Lin Li, Jianwei Zhang
A prison term prediction model based on fact descriptions by capturing long historical information
Legal judgments are always based on the description of the case in legal documents. However, retrieving and understanding large numbers of relevant legal documents is a time-consuming task for legal workers. Legal judgment prediction (LJP) focuses on applying artificial intelligence techniques to provide decision support for legal workers. Prison term prediction (PTP) is an important task in LJP that aims to predict the term of penalty using machine learning methods, thus supporting the judgment. Long Short-Term Memory (LSTM) networks are a special type of recurrent neural network (RNN) capable of handling long-term dependencies without suffering from unstable gradients. Mainstream RNN models such as LSTM and GRU can capture long-distance correlations, but their training is time-consuming; traditional CNNs can be trained in parallel but attend mostly to local information. Both fall short for case-description prediction. This paper proposes a prison term prediction model for legal documents. The model adds causal dilated convolutions to a standard TextCNN, so that the model is not limited to the most important keyword segments but also attends to the text near those segments and to the logical relationships within the paragraph, thereby improving prediction accuracy on the dataset. The causal TextCNN can capture causal logical relationships in the text, in particular the relationship between the legal text and the prison term. Since the model is built entirely from convolutions, it can be trained in parallel, unlike sequence models such as GRU and LSTM, which improves training speed while still handling long-term dependencies. Causal convolution thus compensates for the shortcomings of both TextCNN and RNN models. In summary, the causality-based PTP model is a good solution to this problem.
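The paper's implementation is not shown here; as a minimal illustrative sketch (pure Python, not the authors' code), a causal dilated convolution computes each output position only from the current and *past* inputs, with the taps spaced `dilation` steps apart. This is what lets a convolutional model respect the left-to-right order of a fact description while still training in parallel:

```python
def causal_dilated_conv1d(x, weights, dilation):
    """Causal dilated 1D convolution over a sequence x.

    output[t] depends only on x[t], x[t - dilation],
    x[t - 2*dilation], ... -- i.e. on the past, never the future.
    Positions before the start of the sequence are treated as zero
    (implicit left padding).
    """
    k = len(weights)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i in range(k):
            j = t - i * dilation  # tap reaching i*dilation steps into the past
            if j >= 0:            # skip taps that fall before the sequence start
                acc += weights[i] * x[j]
        out.append(acc)
    return out
```

For example, with kernel `[1.0, 1.0]` and dilation 2, each output sums the current value and the value two steps back: `causal_dilated_conv1d([1.0, 2.0, 3.0, 4.0, 5.0], [1.0, 1.0], 2)` yields `[1.0, 2.0, 4.0, 6.0, 8.0]`. In a real model the weights would be learned and the operation applied channel-wise in a deep-learning framework.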
In addition, case descriptions are usually longer than ordinary natural-language sentences, and the key information related to the prison term is not limited to local words. It is therefore crucial to capture substantially longer memory in LJP domains where a long history is required. In this paper, we propose a causality-CNN-based prison term prediction model built on fact descriptions, in which the causal TextCNN method is applied to build long effective history sizes (i.e., the ability of the network to look very far into the past to make a prediction) using a combination of very deep networks (augmented with residual layers) and dilated convolutions. Experimental results on a public dataset show that the proposed model outperforms several CNN- and RNN-based baselines.
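The "long effective history" claim has a simple arithmetic basis: when the dilation doubles at each layer (1, 2, 4, ...), the receptive field grows exponentially with depth rather than linearly. A small sketch of that calculation (an illustration of the standard dilated-stack formula, assuming doubling dilations; the paper's exact dilation schedule is not specified here):

```python
def effective_history(kernel_size, num_layers):
    """Receptive field (in time steps) of a stack of causal
    convolutions with dilations 1, 2, 4, ..., 2**(num_layers - 1).

    Each layer with kernel size k and dilation d extends the
    receptive field by (k - 1) * d; summing the geometric series
    of dilations gives 1 + (k - 1) * (2**num_layers - 1).
    """
    return 1 + (kernel_size - 1) * (2 ** num_layers - 1)
```

For instance, a single layer with kernel size 3 sees 3 steps, but 8 such layers see `1 + 2 * 255 = 511` steps: deep enough that a prediction can depend on facts stated hundreds of tokens earlier in the case description, which is why residual layers are needed to keep such deep stacks trainable.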