Abstract:
Spiking neural P systems (SNPs) are efficient parallel computing models abstracted from the mechanism of information exchange between biological neurons. LSTM-SNPs combine, for the first time, nonlinear SNPs with long short-term memory (LSTM) networks to form a general deep learning model whose gating mechanisms are interpretable. As the latest variant of the classical sequence analysis model LSTM, LSTM-SNPs have not yet been studied on typical sequence analysis tasks in natural language processing. This paper comprehensively analyzes the performance differences on named entity recognition tasks among LSTM-SNPs, traditional LSTMs, and their variant BiLSTM by adding different deep learning components, providing a reliable reference for applying the LSTM-SNP model to natural language processing tasks. Comparative experiments on the CoNLL-2003 and OntoNotes 5.0 data sets indicate that the LSTM-SNP model achieves entity recognition performance similar to that of the LSTM model. Further experiments show that preprocessing operations can significantly improve the overall model performance. The results demonstrate that the LSTM-SNP model is an effective method for named entity recognition with great application potential.