International Journal of Innovative Research in Computer and Communication Engineering
ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines
| TITLE | Event Detection and Fault Localization in Distributed Acoustic Sensing |
|---|---|
| ABSTRACT | Event detection and fault localization in Distributed Acoustic Sensing (DAS) systems based on phase-sensitive optical time-domain reflectometry (Φ-OTDR) are critical for applications such as pipeline security and perimeter monitoring. However, long multi-channel Φ-OTDR traces exhibit impulsive, non-stationary, and multi-scale temporal characteristics, which pose significant challenges for existing deep-learning methods. Traditional signal-processing and convolution-based approaches struggle to capture long-range dependencies, while recent efficient Transformer models (e.g., Informer, FEDformer, and TimesNet) rely on sparsity or periodicity assumptions that are poorly matched to impulsive DAS events. To overcome these limitations, this paper presents the first interpretable CNN–Swin Transformer framework for Φ-OTDR intrusion monitoring, explicitly designed for long 1-D DAS signals. A custom 1-D adaptation of the Swin Transformer with exposed and extractable attention weights enables efficient hierarchical multi-scale temporal modeling with linear computational complexity, while a lightweight CNN branch captures fine-grained local transient patterns. The fused architecture is trained in a unified multi-task learning framework that jointly performs six-class event classification and meter-level fault localization. In addition, post-training INT8 quantization is incorporated to enable efficient real-time edge deployment. Experiments conducted on a 15,419-trace public Φ-OTDR dataset demonstrate that the proposed method achieves 98.31% classification accuracy, a 0.983 macro F1-score, and a localization mean absolute error of 9.2 m, outperforming both a 1-D CNN baseline and a CNN+vanilla Transformer baseline by 1.05% and 1.91%, respectively, and prior methods by up to 2.91%. Post-training quantization further reduces model size by 75% and inference latency by 68% with only a 0.18% accuracy drop, confirming the proposed framework's effectiveness, interpretability, and deployability for real-world DAS systems. |
| AUTHOR | EZZEGRAOUI CHEROUK, Master Student, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China; CHANGLI LI, Professor, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China |
| VOLUME | 181 |
| DOI | 10.15680/IJIRCCE.2026.1402001 |
| PDF | pdf/1_Event Detection and Fault Localization in Distributed Acoustic Sensing.pdf | |
| KEYWORDS | |
| References | [1] A. Borovykh, A. Soltanolkotabi, and G. G. Hinton. Conditional variational autoencoder for time series. Proceedings of the 34th International Conference on Machine Learning, 70:3831–3839, 2017. [2] H. Chefer, S. Gur, and L. Wolf. Transformer interpretability beyond attention visualization. CVPR, 2021. [3] Y. Cheng, W. Chen, and J. Liu. Model compression for efficient neural networks: A survey. IEEE Transactions on Neural Networks and Learning Systems, 31(9):3123–3138, 2020. [4] J. Dong, Z. Chen, and Z. Liu. Deep learning optimization for time-series data: A survey. IEEE Transactions on Neural Networks and Learning Systems, 31(5):1462–1473, 2020. [5] S. Han, H. Mao, and W. Dally. Deep compression: Compressing deep neural networks. ICLR, 2016. [6] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. [7] B. Jacob, S. Kligys, and B. Chen. Quantization and training of neural networks for efficient integer-arithmetic-only inference. CVPR, 2018. [8] Y. Juan, X. Bao, and L. Chen. Distributed acoustic sensing for infrastructure monitoring: A review. Sensors, 22(6):2193, 2022. [9] J. Kang, P. Han, Y. Chun, et al. TransformDAS: Mapping Φ-OTDR signals to Riemannian manifold for robust classification. arXiv preprint arXiv:2502.02428, 2025. [10] X. Li, Z. Liu, and P. Zhang. Deep learning for distributed acoustic sensing: A survey. Journal of Lightwave Technology, 37(9):2205–2216, 2019. [11] X. Li and Y. Zhou. LSTM-based DAS event recognition under low SNR conditions. IEEE Access, 9:112345–112356, 2021. [12] H. Liu and J. Zhao. Convolutional neural network based event recognition in distributed acoustic sensing. IEEE Sensors Journal, 21(9):10335–10343, 2021. [13] Y. Liu and J. Chen. Hierarchical Swin Transformer for 1-D signal modeling. IEEE Signal Processing Letters, 29:2257–2261, 2022. [14] Y. Liu, Y. Lin, and Z. Zhang. Swin Transformer for time-series data: A comprehensive review. IEEE Transactions on Neural Networks and Learning Systems, 33(10):3210–3221, 2022. [15] Y. Liu, Z. Wang, and H. Zhang. Pipeline intrusion monitoring using phase-sensitive OTDR. Optics Express, 31(4):5632–5648, 2023. [16] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 10012–10022, 2021. [17] J. Park, H. F. Martins, M. A. Soto, and L. Thévenaz. Optical fiber distributed acoustic sensing using phase-sensitive OTDR. Optics Express, 24(12):13415–13425, 2016. [18] J. Serra and D. Arcos. Explaining time series classifiers. Data Mining and Knowledge Discovery, 32:1–29, 2018. [19] X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Proceedings of the 2015 Neural Information Processing Systems (NIPS), 2015. [20] M. Straub, J. Reber, T. Saier, et al. ML approaches for OTDR diagnoses in passive optical networks—event detection and classification: ways for ODN branch assignment. Journal of Optical Communications and Networking, 16(7):C43–C50, 2024. [21] M. Tian, H. Dong, and K. Yu. An open dataset of Φ-OTDR events with two classification models as baselines. Data in Brief, 46:108824, 2023. Dataset available at https://github.com/BJTUSensor/Phi-OTDRdatasetandcodes. [22] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Proceedings of Neural Information Processing Systems (NIPS), 2017. [23] J. Wang, Y. Li, and Q. Sun. Characteristics analysis of vibration events in Φ-OTDR systems. Journal of Lightwave Technology, 40(18):6215–6224, 2022. [24] K. Wang and P. Lu. Multi-channel DAS event classification using deep convolutional networks. Optics and Lasers in Engineering, 152:106982, 2022. [25] Z. Wang and H. Dong. Benchmarking deep learning models for DAS event detection. Sensors, 23(14):6412, 2023. [26] R. Wightman. PyTorch Image Models (timm). https://github.com/huggingface/pytorch-image-models, 2023. Version 0.9.12 used in this work. [27] H. Wu, T. Hu, Y. Liu, H. Zhou, J. Wang, and M. Long. TimesNet: Temporal 2D-variation modeling for general time series analysis. In International Conference on Learning Representations (ICLR), 2023. [28] M. Yan and O. Qiaofeng. OTDR event detection method based on improved 1-D UNet. Instruments and Experimental Techniques, 67:332–342, 2024. [29] H. Zhou, S. Zhang, J. Peng, S. Zhang, J. Li, H. Xiong, and W. Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11106–11115, 2021. [30] T. Zhou, Z. Ma, Q. Wen, X. Wang, L. Sun, and R. Jin. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International Conference on Machine Learning (ICML), pages 27268–27286, 2022. |
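The abstract's central idea, attention restricted to fixed-length 1-D windows with a shifted variant, as in the Swin Transformer [16], can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the function and parameter names are hypothetical, and a real model would add learned projections per head, layer norm, and MLP blocks. The point is that cost is O(L·window) rather than O(L²), i.e. linear in trace length L for fixed window size, and that the attention weights are returned explicitly so they can be inspected, matching the "exposed and extractable attention weights" claim.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention_1d(x, Wq, Wk, Wv, window, shift=False):
    """Self-attention computed only inside non-overlapping 1-D windows.

    x: (L, C) trace embedding, with L divisible by `window`.
    Returns the attended output and the per-window attention maps
    so they can be visualized for interpretability.
    """
    L, C = x.shape
    if shift:  # shifted windows let information cross window boundaries
        x = np.roll(x, -window // 2, axis=0)
    xw = x.reshape(L // window, window, C)           # partition into windows
    q, k, v = xw @ Wq, xw @ Wk, xw @ Wv              # per-window projections
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(C))  # (n_win, w, w)
    out = (attn @ v).reshape(L, C)
    if shift:
        out = np.roll(out, window // 2, axis=0)      # undo the cyclic shift
    return out, attn
```

Alternating `shift=False` and `shift=True` layers, as in the original 2-D Swin design, is what lets local windows build up the hierarchical multi-scale receptive field described in the abstract.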
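The unified multi-task objective, joint six-class event classification plus meter-level localization, typically combines a cross-entropy term with a regression term. The sketch below is an assumption about the general form, not the paper's actual loss: the trade-off weight `alpha` and the choice of L1 for the localization head are illustrative (L1 matches the MAE metric the abstract reports).

```python
import numpy as np

def multitask_loss(class_logits, class_label, loc_pred, loc_true, alpha=1.0):
    """Joint loss: cross-entropy for event classification plus an
    L1 term for fault localization. `alpha` balances the two tasks."""
    # numerically stable log-softmax for the classification head
    z = class_logits - class_logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[class_label]
    mae = np.abs(loc_pred - loc_true)   # localization error in meters
    return ce + alpha * mae
```

Training both heads against a single scalar objective is what lets the shared CNN–Swin backbone learn features useful for both tasks at once.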
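The reported 75% size reduction from post-training INT8 quantization follows directly from storing each weight in 1 byte instead of 4. A minimal per-tensor symmetric scheme can be sketched as follows; real toolchains (e.g. PyTorch's quantization API) also calibrate activation ranges and may use per-channel scales, so this covers weights only and is not the paper's pipeline.

```python
import numpy as np

def quantize_int8(w):
    """Post-training symmetric INT8 quantization of a float32 weight tensor.
    Maps the maximum weight magnitude to 127, so each weight occupies
    1 byte instead of 4 (the ~75% size reduction)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; rounding error is at most scale/2."""
    return q.astype(np.float32) * scale
```

The latency gain comes from running the quantized layers in integer arithmetic on edge hardware; the small accuracy drop (0.18% in the abstract) reflects the bounded rounding error per weight.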