Engineering Journal: Science and Innovation
Electronic Science and Engineering Publication
Mass media registration certificate Эл #ФС77-53688 dated 17 April 2013. ISSN 2308-6033. DOI 10.18698/2308-6033
Article

Development of the automated tools for registering the fuel ignition and flameout moments using the visual analysis methods

Published: 16.10.2025

Authors: Yanuk A.V., Tarasenko A.N., Panov E.N.

Published in issue: #10(166)/2025

DOI: 10.18698/2308-6033-2025-10-2485

Category: Aviation and Rocket-Space Engineering | Chapter: Thermal, Electric Jet Engines, and Power Plants of Aircraft

The paper presents results of testing the ResNet18, ResNet50, DenseNet121, MobileNetV1, MobileNetV2 and MobileNetV3 neural networks and their variations applied to the task of classifying images obtained during the experiment at the stage of putting the engine into operation. It describes modifications made to the standard implementations of the considered neural network architectures and the learning methodology used. Certain data augmentation methods are used to expand and balance the sample. They include alteration of the image spatial orientation by rotations of 90, 180, and 270 degrees, mirroring of the image relative to the vertical axis, and alteration of the image color tone. The accuracy, precision, and recall metrics are computed for each image class; the runtime and the number of parameters are measured for each model. A set of models satisfying the requirements for classification accuracy, computational costs, and time consumption is compiled.
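The geometric part of the augmentation scheme described in the abstract (rotations by 90, 180, and 270 degrees plus mirroring about the vertical axis) can be sketched as follows. This is a minimal, framework-independent illustration on nested lists; the function names are illustrative and not taken from the paper, which does not publish its code.

```python
# Sketch of the geometric augmentations named in the abstract: rotations by
# 90/180/270 degrees and mirroring about the vertical axis. Implemented on
# plain nested lists (one value per pixel) to stay framework-independent;
# all names here are hypothetical, not the authors' implementation.

def rotate90(img):
    """Rotate an H x W image (list of rows) 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def mirror(img):
    """Mirror an image about the vertical axis (flip left-right)."""
    return [row[::-1] for row in img]

def augment(img):
    """Return the image plus its 7 variants: 3 rotations x optional mirror."""
    r90 = rotate90(img)
    r180 = rotate90(r90)
    r270 = rotate90(r180)
    rotated = [img, r90, r180, r270]
    return rotated + [mirror(v) for v in rotated]

frame = [[1, 2],
         [3, 4]]
variants = augment(frame)  # one original frame yields 8 training samples
```

Each captured frame thus contributes eight samples to the training set, which is one common way such transforms expand and balance a small experimental dataset; the color-tone alteration mentioned in the abstract would act on channel values rather than on pixel positions and is omitted here.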

EDN LLAIKF
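The per-class metrics named in the abstract (accuracy, precision, recall) can be computed from predicted versus true labels as sketched below. The class labels here are hypothetical placeholders; the paper does not list its class names in the abstract.

```python
# Illustrative per-class precision/recall and overall accuracy, as named in
# the abstract. Class labels below are hypothetical examples, not the
# paper's actual classes.

def per_class_metrics(y_true, y_pred, label):
    """Precision and recall for one class, treated one-vs-rest."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def accuracy(y_true, y_pred):
    """Fraction of frames classified correctly, over all classes."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical labels for four frames:
y_true = ["ignition", "flameout", "ignition", "no_flame"]
y_pred = ["ignition", "ignition", "ignition", "no_flame"]
prec, rec = per_class_metrics(y_true, y_pred, "ignition")
acc = accuracy(y_true, y_pred)
```

Computing precision and recall per class, rather than only overall accuracy, matters here because ignition and flameout frames are rare relative to steady-state frames, which is also why the sample is balanced by augmentation.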


References
[1] Normy letnoy godnosti vozdushnykh sudov. Chast 33 [Airworthiness Standards for Aircraft. Part 33]. Utv. Prikazom No. 820-17 ot 17 noyabrya 2022 g. Ministerstva transporta Rossiyskoy Federatsii, Federalnogo agentstva vozdushnogo transporta [Approved by Order No. 820-17 of November 17, 2022, of the Ministry of Transport of the Russian Federation and the Federal Air Transport Agency].
[2] Bogdanov V.I., Golubev P.A., Chelyshev V.B. O vozmozhnosti otsenki nekotorykh vysotnykh kharakteristik kamery sgoraniya GTD v nazemnykh usloviyakh [On a possibility of assessing certain altitude characteristics of the gas turbine combustion chamber in the ground conditions]. Vestnik Rybinskoy gosudarstvennoy aviatsionnoy tekhnologicheskoy akademii im. P.A. Solovyeva, 2012, no. 2, pp. 91–94.
[3] Gumerov A.R., Yasoveev V.Kh. Optiko-elektronnaya sistema kontrolya i otsenki effektivnosti protsessov vosplameneniya topliva v kamere sgoraniya gazoturbinnogo dvigatelya [Optoelectronic system for monitoring and evaluating efficiency of fuel ignition processes in the combustion chamber of a gas-turbine engine]. In: Sovremennye problemy nauki i obrazovaniya v tekhnicheskom vuze: sb. statey [Modern problems of science and education in a technical university: collection of articles]. Ufa, Ufimskiy Gosudarstvennyi Aviatsionnyi Tekhnicheskiy Universitet Publ., 2015, pp. 128–132.
[4] Korshak S.A. Neyrosetevaya model klassifikatsii rezhimov raboty gazoturbinnogo dvigatelya po materialam obyektivnogo kontrolya [Neural network model of classification of operating models of a gas turbine engine based on the materials of objective control]. Aviatsionnyi vestnik — The Aviation Herald, 2023, no. 8, pp. 54–60.
[5] Howard A., Sandler M., Chu G., Chen L.C., Chen B., Tan M., et al. Searching for MobileNetV3. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South), 2019, pp. 1314–1324. https://doi.org/10.1109/ICCV.2019.00140
[6] Tan M., Chen B., Pang R., Vasudevan V., Sandler M., Howard A., Le Q.V. MnasNet: Platform-aware neural architecture search for mobile. 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA, 2019, pp. 2820–2828. https://doi.org/10.1109/CVPR.2019.00293
[7] Zhang X., Zhou X., Lin M., Sun J. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, UT, USA, 2018, pp. 6848–6856. https://doi.org/10.1109/CVPR.2018.00716
[8] He K., Zhang X., Ren S., Sun J. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA, 2016, pp. 770–778. https://doi.org/10.1109/CVPR.2016.90
[9] Huang G., Liu Z., Van Der Maaten L., Weinberger K.Q. Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA, 2017, pp. 4700–4708. https://doi.org/10.1109/CVPR.2017.243
[10] Howard A.G., et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. 2017. https://doi.org/10.48550/arXiv.1704.04861
[11] Sandler M., Howard A., Zhu M., Zhmoginov A., Chen L.C. MobileNetV2: Inverted residuals and linear bottlenecks. 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, UT, USA, 2018, pp. 4510–4520. https://doi.org/10.1109/CVPR.2018.00474
[12] Kingma D.P., Ba J. Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations, San Diego, 2015. https://doi.org/10.48550/arXiv.1412.6980
[13] Goodfellow I., Bengio Y., Courville A. Deep learning. Cambridge, MIT Press, 2016, 775 p.
[14] Ioffe S., Szegedy Ch. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015. https://doi.org/10.48550/arXiv.1502.03167
[15] He K., Zhang X., Ren S., Sun J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile, 2015, pp. 1026–1034. https://doi.org/10.1109/ICCV.2015.123