International Journal of Innovative Research in Computer and Communication Engineering

ISSN-Approved Journal | Impact Factor: 8.771 | Established: 2013 | Follows UGC CARE Journal Norms and Guidelines

TITLE Context-Aware Named Entity Recognition Using Hybrid Bi-LSTM–CRF Architecture
ABSTRACT Named Entity Recognition (NER) plays a crucial role in Natural Language Processing by enabling the extraction of structured information from unstructured text. Accurate identification of named entities requires effective modeling of contextual dependencies and label sequence constraints. In this work, a context-aware Named Entity Recognition framework based on a Hybrid Bi-LSTM–CRF architecture is presented. The proposed model leverages a Bidirectional Long Short-Term Memory (Bi-LSTM) network to capture both forward and backward contextual information, while a Conditional Random Field (CRF) layer is employed to model inter-label dependencies and enforce valid tag transitions. To enhance semantic representation and improve model convergence, pretrained GloVe 300-dimensional word embeddings are integrated into the embedding layer. The effectiveness of the proposed approach is evaluated on the CoNLL-2003 benchmark dataset using standard evaluation metrics, including precision, recall, and F1-score. Experimental results demonstrate that the model achieves a peak validation F1-score of approximately 0.76, indicating a notable improvement over baseline models without pretrained embeddings. The system is implemented using the PyTorch deep learning framework with GPU acceleration, ensuring computational efficiency and scalability. The results confirm that the integration of contextual modeling and sequence-level optimization significantly enhances NER performance. Future work will focus on incorporating contextualized embeddings such as BERT and extending the approach to domain-specific named entity recognition tasks.
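The architecture summarized in the abstract maps directly onto a short PyTorch model. Below is a minimal illustrative sketch, not the authors' implementation: it assumes the third-party pytorch-crf package for the CRF layer, and vocab_size, num_tags, and the pretrained GloVe weight matrix are placeholder inputs supplied by the caller.

    import torch
    import torch.nn as nn
    from torchcrf import CRF  # pip install pytorch-crf

    class BiLSTMCRF(nn.Module):
        def __init__(self, vocab_size, num_tags, embed_dim=300, hidden_dim=256,
                     pretrained=None):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            if pretrained is not None:
                # Initialize with the 300-dimensional GloVe matrix
                # (fine-tuned during training).
                self.embed.weight.data.copy_(pretrained)
            # The Bi-LSTM reads each sentence left-to-right and right-to-left;
            # the two hidden_dim//2 states concatenate to hidden_dim per token.
            self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                                batch_first=True, bidirectional=True)
            self.proj = nn.Linear(hidden_dim, num_tags)  # per-token emission scores
            self.crf = CRF(num_tags, batch_first=True)   # learned tag-transition scores

        def _emissions(self, tokens):
            return self.proj(self.lstm(self.embed(tokens))[0])

        def loss(self, tokens, tags, mask):
            # Sequence-level negative log-likelihood: the CRF scores whole tag
            # sequences, penalizing invalid transitions (e.g. I-PER after B-ORG).
            return -self.crf(self._emissions(tokens), tags, mask=mask)

        def predict(self, tokens, mask):
            # Viterbi decoding returns the highest-scoring tag sequence per sentence.
            return self.crf.decode(self._emissions(tokens), mask=mask)

Training minimizes the sequence-level loss above rather than a per-token cross-entropy, which is what lets the CRF layer enforce valid tag transitions as described in the abstract.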
AUTHOR R. MONICA VENKATA SAI, G. S. N. MALLESWARI, Department of Computer Science and Engineering, St. Mary's Women's Engineering College, Guntur, Andhra Pradesh, India
VOLUME 177
DOI 10.15680/IJIRCCE.2025.1312107
PDF pdf/107_Context-Aware Named Entity Recognition Using Hybrid Bi-LSTM–CRF Architecture.pdf
KEYWORDS Named Entity Recognition; Bi-LSTM; Conditional Random Field; GloVe Embeddings; CoNLL-2003; Sequence Labeling
REFERENCES
[1] E. F. Tjong Kim Sang and F. De Meulder, “Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition,” Proc. CoNLL, pp. 142–147, 2003.
[2] G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, and C. Dyer, “Neural Architectures for Named Entity Recognition,” Proc. NAACL-HLT, pp. 260–270, 2016.
[3] X. Ma and E. Hovy, “End-to-End Sequence Labeling via Bi-Directional LSTM-CNNs-CRF,” Proc. ACL, pp. 1064–1074, 2016.
[4] J. Pennington, R. Socher, and C. Manning, “GloVe: Global Vectors for Word Representation,” Proc. EMNLP, pp. 1532–1543, 2014.
[5] J. P. Chiu and E. Nichols, “Named Entity Recognition with Bidirectional LSTM-CNNs,” Trans. ACL, vol. 4, pp. 357–370, 2016.
[6] M. Peters et al., “Deep Contextualized Word Representations,” Proc. NAACL-HLT, pp. 2227–2237, 2018.
[7] J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” Proc. NAACL-HLT, pp. 4171–4186, 2019.
[8] T. Mikolov et al., “Efficient Estimation of Word Representations in Vector Space,” Proc. ICLR, 2013.
[9] L. Rabiner, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,” Proc. IEEE, vol. 77, no. 2, pp. 257–286, 1989.
[10] J. Lafferty, A. McCallum, and F. Pereira, “Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data,” Proc. ICML, pp. 282–289, 2001.