International Journal of Innovative Research in Computer and Communication Engineering

ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines



TITLE Optimizing Reasoning in Large Language Models: A Comprehensive Review
ABSTRACT Large Language Models (LLMs) have advanced rapidly in natural language understanding, but their ability to perform structured, logical reasoning remains limited. This review surveys major reasoning optimization techniques, including chain-of-thought prompting, task decomposition, tool-augmented inference, program-based reasoning, and self-refinement cycles. It also examines open challenges such as hallucination, inconsistency, and high computational cost. Diagrams illustrating the reasoning taxonomy, tool workflows, and refinement loops provide a visual overview of modern techniques. The paper concludes with future research directions for improving reasoning reliability and efficiency.
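As a minimal illustration of one technique the abstract names, the sketch below combines chain-of-thought sampling with a self-consistency majority vote over final answers (the idea of reference [2]). The `toy_sampler` function is a hypothetical stand-in for an LLM call, used here only so the sketch is self-contained and runnable; a real system would sample multiple reasoning chains at nonzero temperature.

```python
from collections import Counter

def self_consistent_answer(sample_chain, question, n_samples=5):
    """Sample several independent reasoning chains and return the
    majority-vote final answer (self-consistency decoding)."""
    answers = [sample_chain(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical stand-in for an LLM call: returns the final answer of one
# sampled chain-of-thought. Deterministic here purely for illustration.
def toy_sampler(question):
    return "4" if question == "What is 2 + 2?" else "unknown"

print(self_consistent_answer(toy_sampler, "What is 2 + 2?"))  # prints 4
```

In practice the vote is taken over answers extracted from full reasoning traces, which empirically reduces the inconsistency the abstract identifies as a key challenge.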
AUTHOR SHIFA SHAIKH, SHRUTI PARDESHI, PROF. OMKAR BARVE
M.Sc. Student, Dept. of I.T., Department of Technology, Savitribai Phule Pune University, Pune, Maharashtra, India
Assistant Professor, Dept. of I.T., Department of Technology, Savitribai Phule Pune University, Pune, Maharashtra, India
VOLUME 184
DOI 10.15680/IJIRCCE.2026.1405043
PDF pdf/43_Optimizing Reasoning in Large Language Models A Comprehensive Review.pdf
KEYWORDS
References [1] J. Wei et al., “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” arXiv preprint arXiv:2201.11903, 2022.
[2] X. Wang, Y. Bai, and D. Zhou, “Self-Consistency Improves Chain-of-Thought Reasoning in Language Models,” Advances in Neural Information Processing Systems (NeurIPS), 2022.
[3] T. Kojima, S. Gu, M. Reid, and Y. Matsuo, “Large Language Models are Zero-Shot Reasoners,” arXiv preprint arXiv:2205.11916, 2022.
[4] S. Chen et al., “Program-Aided Language Models for Reasoning Tasks,” Proceedings of the 40th International Conference on Machine Learning (ICML), 2023.
[5] Y. Gou, L. Li, and Z. Zhang, “Tool-Augmented Language Models for Enhanced Logical Reasoning,” IEEE Transactions on Artificial Intelligence, vol. 4, no. 3, pp. 325–339, 2023.
[6] H. Shen and R. Zhang, “Self-Refine: Iterative Improvement for Large Language Model Reasoning,” arXiv preprint arXiv:2303.17651, 2023.
[7] T. Snell, P. Heather, and A. Goldstein, “Reasoning Efficiency in Large Language Models: A Comprehensive Survey,” ACM Computing Surveys, vol. 56, no. 2, pp. 1–32, 2024.
[8] M. Wang and L. Xu, “Hybrid Neuro-Symbolic Reasoning Techniques for LLMs,” AI Journal (Elsevier), vol. 55, no. 4, pp. 112–129, 2024.
[9] J. Liu and H. Huang, “Multi-Step Reasoning with Decomposition-Based LLM Frameworks,” IEEE Access, vol. 12, pp. 55123–55135, 2024.
[10] R. Patel and S. Singh, “A Review on Reasoning Optimization Approaches in Next-Generation LLMs,” International Journal of Computer Applications, vol. 188, no. 23, pp. 10–18, 2025.
Copyright © IJIRCCE 2020. All rights reserved.