Segmentation-based approach for object detection

Authors

N. Volkova, M. Shvandt

DOI:

https://doi.org/10.15276/opu.1.71.2025.17

Keywords:

approach, image processing, video processing, image segmentation, object tracking, object detection

Abstract

This study proposes a segmentation-based approach for object detection, developed for analyzing the behavior of aquatic organisms in controlled laboratory environments. The research focuses on overcoming detection challenges in long-term video recordings of bullheads housed in enclosed aquariums, where sediment drift, background instability, and partial occlusions often confound traditional tracking techniques. To address these issues, an approach based on an improved SLIC superpixel segmentation method is proposed. The basic SLIC method was modified to incorporate multi-layer contrast features and neighborhood-based pixel uniformity checks. The proposed approach comprises the following stages: preprocessing, segmentation, clustering, and post-processing. The preprocessing stage includes bilateral and median filtering, contrast and brightness normalization, and optional image upscaling to improve clarity. Subsequent background subtraction and context-aware thresholding within segmented regions help eliminate false positives caused by floating debris and occluded contours. At the clustering stage, a refined distance metric is introduced to evaluate pixel coherence in a multilayered feature space, which includes LAB components, background-subtraction results, and histogram-equalized grayscale representations, improving segmentation accuracy. Additionally, at the post-processing stage, fragmented object blobs are merged to enhance spatial continuity. Empirical validation was conducted on a dataset of bullhead video frames recorded under realistic aquatic conditions. The approach based on the improved SLIC superpixel segmentation method demonstrated an increase in object detection accuracy of more than 6% compared to the approach based on the basic SLIC method. The modularity and simplicity of the proposed approach allow it to be easily extended to other biological objects (in particular, to the behavioral analysis of rodents) without relying on deep neural networks or computationally intensive frameworks, making it suitable for tasks in ethology, neuroscience, and precision aquaculture. Further research will be devoted to real-time implementation of the approach and advanced trajectory analysis.
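For orientation, the basic SLIC method cited above (Achanta et al., 2012) clusters pixels using a distance that combines a CIELAB colour distance d_c with a spatial distance d_s, normalized by the grid interval S and weighted by a compactness parameter m:

D = \sqrt{d_c^{2} + \left(\frac{d_s}{S}\right)^{2} m^{2}}

The refined metric described in the abstract extends the colour term to the multilayered feature space (LAB components, the background-subtraction result, and the histogram-equalized grayscale image); a weighted sum of per-layer distances is one plausible form, but the exact formulation is given in the paper itself.

The sketch below, in Python with OpenCV and scikit-image, illustrates how the listed stages could fit together: preprocessing, superpixel segmentation over a stacked multi-layer feature image, context-aware thresholding within superpixels, and morphological merging of fragmented blobs. All parameter values, the per-segment threshold rule, and the use of skimage.segmentation.slic in place of the authors' modified SLIC are assumptions made for illustration, not the published implementation.

```python
# Illustrative sketch of the described pipeline; parameters and the
# feature stack are assumptions, not the authors' exact implementation.
import cv2
import numpy as np
from skimage.segmentation import slic

# Background model shared across the frames of one recording
back_sub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def detect_objects(frame_bgr):
    # Preprocessing: edge-preserving and median smoothing, then
    # histogram equalization for contrast/brightness normalization
    smoothed = cv2.bilateralFilter(frame_bgr, 9, 75, 75)
    smoothed = cv2.medianBlur(smoothed, 5)
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    gray_eq = cv2.equalizeHist(gray)

    # Background subtraction provides a foreground-evidence layer
    fg_mask = back_sub.apply(smoothed)

    # Multi-layer feature stack: LAB channels + subtraction result +
    # equalized grayscale, as listed in the abstract
    lab = cv2.cvtColor(smoothed, cv2.COLOR_BGR2LAB)
    features = np.dstack([lab, fg_mask, gray_eq]).astype(np.float32)

    # Superpixel segmentation over the stacked features
    segments = slic(features, n_segments=400, compactness=10.0,
                    channel_axis=-1, start_label=0)

    # Context-aware thresholding: keep superpixels whose mean foreground
    # response clearly exceeds the frame-wide average (heuristic rule)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    for label in np.unique(segments):
        region = segments == label
        if fg_mask[region].mean() > fg_mask.mean() + 20:
            mask[region] = 255

    # Post-processing: merge fragmented blobs and extract bounding boxes
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```

Running SLIC directly on a stacked feature image is a simple way to approximate a multi-layer distance metric, since the colour-distance term then spans all layers at once; the neighborhood-based pixel uniformity checks of the modified method are not reproduced in this sketch.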

References

Tinbergen, N. (1963). On aims and methods of ethology. Zeitschrift für Tierpsychologie, 20(4), 410-433. DOI: https://doi.org/10.1111/j.1439-0310.1963.tb01161.x

Dawkins, M. (2004). Behavior as a tool in welfare assessment. Applied Animal Behaviour Science, 86(3-4), 227-233. DOI: https://doi.org/10.1016/j.applanim.2004.02.001

Dell, A., Bender, J., Branson, K., Couzin, I., de Polavieja, G., et al. (2014). Automated image-based tracking and its application in ecology. Trends in Ecology & Evolution, 29(7), 417-428. DOI: https://doi.org/10.1016/j.tree.2014.05.004

Anderson, D., & Perona, P. (2014). Toward a science of computational ethology. Neuron, 84(1), 18-31. DOI: https://doi.org/10.1016/j.neuron.2014.09.005

Yin, Z., Xiao, L., Ma, R., Han, Z., Li, Y., et al. (2020). Detecting abnormal animal behaviors using optical flow and background subtraction. Computers and Electronics in Agriculture, 174, 105471. DOI: https://doi.org/10.1016/j.compag.2020.105471

Beyan, C., & Fisher, R. (2018). Animal behavior recognition using spatio-temporal features. Pattern Recognition, 76, 12-22. DOI: https://doi.org/10.1016/j.patcog.2017.10.008

Manteuffel, G., Puppe, B., Schön, P., Bruckmaier, R., Janssen, D., et al. (2009). Sensor-based analysis of animal behavior. Animal, 3(9), 1197-1204. DOI: https://doi.org/10.1017/S1751731109004526

Neethirajan, S. (2017). Recent advances in wearable sensors for animal health management. Sensing and Bio-Sensing Research, 20, 1-11. DOI: https://doi.org/10.1016/j.sbsr.2018.02.004

Spampinato, C., Palazzo, S., Boom, B., Lin, H., Wei, J., et al. (2014). Understanding fish behavior during typhoon events in real-life underwater environments. Multimedia Tools and Applications, 70(1), 199-236. DOI: https://doi.org/10.1007/s11042-012-1101-5

Khan, S. U., Ghazali, K. H., & Khan, S. N. (2025). Advances in fish tracking technologies for aquaculture: Overcoming challenges and shaping future research. International Journal of Innovation and Industrial Revolution, 7(20), 29-61. DOI: https://doi.org/10.35631/IJIREV.720003

Lopez-Marcano, S., Jinks, E. L., Buelow, C. A., Brown, C. J., Wang, D., et al. (2021). Automatic detection of fish and tracking of movement for ecology. Ecology and Evolution, 11(12), 8254-8263. DOI: https://doi.org/10.1002/ece3.7656

Salman, A., Siddiqui, S. A., Shafait, F., Mian, A., Shortis, M. R., et al. (2020). Automatic fish detection in underwater videos by a deep neural network-based hybrid motion learning system. ICES Journal of Marine Science, 77(4), 1295-1307. DOI: https://doi.org/10.1093/icesjms/fsz025

Siddiqui, S. A., Salman, A., Malik, M. I., Shafait, F., Mian, A., et al. (2018). Automatic fish species classification in underwater videos. ICES Journal of Marine Science, 75(2), 374-389. DOI: https://doi.org/10.1093/icesjms/fsx109

Alsmadi, M. K., & Almarashdeh, I. (2022). A survey on fish classification techniques. Journal of King Saud University – Computer and Information Sciences, 34, 1625-1638. DOI: https://doi.org/10.1016/j.jksuci.2020.07.005

Rachman, F., Akbar, M. N. S., & Putera, E. (2023). Fish disease detection of Epizootic Ulcerative Syndrome using deep learning image processing technique. In Proceedings of the 9th International Conference on Fisheries and Aquaculture (Vol. 8, No. 1, pp. 23-34). DOI: https://doi.org/10.17501/23861282.2023.8102

Li, Z., Alraie, H., Solpico, D., Nishida, Y., Ishii, K., et al. (2024). Recognition of fish in aqua cage by machine learning with image enhancement. In 2024 IEEE/SICE International Symposium on System Integration (SII) (pp. 637-643). DOI: https://doi.org/10.1109/SII58957.2024.10417229

Zhou, X., Chen, S., Ren, Y., Zhang, Y., Fu, J., et al. (2022). Atrous Pyramid GAN segmentation network for fish images with high performance. Electronics, 11(6), 911, 1-21. DOI: https://doi.org/10.3390/electronics11060911

Saifullah, S., Suryotomo, A. P., & Yuwono, B. (2021). Fish detection using morphological approach based on k-means segmentation. Universitas Pembangunan Nasional Veteran Yogyakarta. DOI: https://doi.org/10.48550/arXiv.2101.06352

Gao, T., Jin, J., & Xu, X. (2021). Study on detection image processing method of offshore cage. Journal of Physics: Conference Series, 1769, 012070. DOI: https://doi.org/10.1088/1742-6596/1769/1/012070

Li, D., Wang, Q., Li, X., Niu, M., Wang, H., & Liu, C. (2022). Recent advances of machine vision technology in fish classification. ICES Journal of Marine Science, 79(2), 263-284. DOI: https://doi.org/10.1093/icesjms/fsab264

Deep, B. V., & Dash, R. (2019). Underwater fish species recognition using deep learning techniques. In Proceedings of the 2019 6th International Conference on Signal Processing and Integrated Networks (SPIN) (pp. 665-669). DOI: https://doi.org/10.1109/SPIN.2019.8711584

Knausgård, K. M., Wiklund, A., Sørdalen, T. K., Halvorsen, K. T., Kleiven, A. R., Jiao, L., et al. (2021). Temperate fish detection and classification: A deep learning-based approach. Applied Intelligence, 1-14. DOI: https://doi.org/10.1007/s10489-021-02527-7

Volkova, N., & Shvandt, M. (2024). Image preprocessing algorithm for object detection & tracking. Information management systems and technologies (IMST-2024), September 23-25, 194-198.

OpenCV Developers. (n.d.). How to use background subtraction methods. OpenCV Documentation. Retrieved from https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html (Accessed: May 2025)

Paris, S., Kornprobst, P., Tumblin, J., & Durand, F. (2009). Bilateral filtering: Theory and applications. Foundations and Trends® in Computer Graphics and Vision, 4(1), 1-73. DOI: https://doi.org/10.1561/0600000020

Raid, A. M., Khedr, W. M., El-dosuky, M. A., & Aoud, M. (2014). Image restoration based on morphological operations. International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), 4(3), 9-21. DOI: https://doi.org/10.5121/ijcseit.2014.4302

Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., et al. (2012). SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11), 2274-2282. DOI: https://doi.org/10.1109/TPAMI.2012.120

Zhu, A., Mei, J., Qiao, S., Yan, H., Zhu, Y., et al. (2023). Superpixel transformers for efficient semantic segmentation. arXiv Preprint. Retrieved from https://arxiv.org/abs/2309.16889

Hum, Y. C., Lai, K. W., & Salim, M. I. M. (2014). Multiobjectives bihistogram equalization for image contrast enhancement. Complexity, 20(1), 22-36. DOI: https://doi.org/10.1002/cplx.21499

Joshi, P., Escrivá, D. M., & Godoy, V. (2016). OpenCV by example. Packt Publishing Ltd.

Published

2025-06-12

How to Cite

[1]
Volkova, N. and Shvandt, M. 2025. Segmentation-based approach for object detection. Proceedings of Odessa Polytechnic University. 1(71) (Jun. 2025), 145–156. DOI: https://doi.org/10.15276/opu.1.71.2025.17.

Issue

No. 1(71), 2025

Section

Information technology. Automation