Legal Case Multi-label Text Classification Using BERT-CNN Model
Author Names:
Jianhua Guo
Author Affiliation:
Puyang Vocational and Technical College, College of Education Sciences, Puyang City, China
Author Email:
glwtgzyyx1@126.com
Publication Date:
February 26, 2026
Page numbers:
641-655
DOI Number:
https://doi.org/10.1177/14727978251361281
Abstract:
To address problems of label dependency, class imbalance, long-text processing, and multi-label classification in legal contexts, this paper presents the BERT-CNN framework. Developing the model on actual court cases allowed the ideal hyperparameter configuration for the BERT-CNN architecture to be identified. The gathered dataset is divided into three sections for a thorough assessment of the model: the training, validation, and testing sets, with performance evaluated on the testing set. The evaluation shows that the BERT-CNN model achieves precision, recall, F1 score, and AUC-PR of 0.83, 0.82, 0.83, and 0.88, respectively. These empirical results highlight the BERT-CNN framework's applicability to legal text classification, demonstrating strong resilience and precision in multi-label classification tasks involving court cases.
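The precision, recall, and F1 figures reported above are standard multi-label classification metrics. As a minimal sketch (not the paper's evaluation code), the micro-averaged versions can be computed by pooling true positives, false positives, and false negatives across all labels; the toy label matrices below are illustrative, not drawn from the paper's dataset.

```python
# Hedged sketch: micro-averaged precision, recall, and F1 for multi-label
# predictions. Function name and toy data are illustrative assumptions.

def micro_prf(y_true, y_pred):
    """y_true, y_pred: lists of equal-length 0/1 label vectors, one per document."""
    tp = fp = fn = 0
    for t_row, p_row in zip(y_true, y_pred):
        for t, p in zip(t_row, p_row):
            tp += t and p                # label present and predicted
            fp += (not t) and p          # predicted but absent
            fn += t and (not p)          # present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 3 documents, 4 candidate charge labels each.
y_true = [[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]]
y_pred = [[1, 0, 0, 0], [0, 1, 0, 1], [1, 0, 0, 1]]
p, r, f = micro_prf(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.8 0.667 0.727
```

Micro-averaging weights every label decision equally, which is a common choice when label frequencies are imbalanced, as the abstract notes they are in legal corpora.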
Keywords:
multi-label text classification, BERT-CNN model, hyperparameter configuration, legal cases, long text processing