T Line and C Line Detection and Ratio Reading of the Ovulation Test Strip Based on Deep Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 13113)

Abstract

The ovulation test strip is a tool for ovulation detection. Many apps have been developed to analyze photos of ovulation test strips and read the T/C ratio, which indicates the level of luteinizing hormone (LH) in human urine. However, when detecting the T and C lines, the background near a line may itself be red, the lines may be fuzzy, and their color may be distributed unevenly. In these cases, such apps struggle to detect the T and C lines accurately and to read their ratio. To address these problems, we propose a method consisting of two steps. The first step uses Mask R-CNN to locate the T and C lines; the second step uses a trained Pseudo-Siamese ratio reading network (PSRRNet) to read the ratio value from the output of the first step. The proposed PSRRNet consists of a Pseudo-Siamese network that simultaneously extracts the features of the T and C lines and a fully connected network that predicts the T/C ratio from the extracted features. The experimental results show that the proposed method is well suited to reading ovulation test strips.
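
The two-step design described above lends itself to a short illustration. Below is a minimal PyTorch sketch of the second step only, assuming small unshared convolutional branches, 32×96 line crops, and a 64-dimensional concatenated feature; these choices, and the names `PSRRNetSketch` and `make_branch`, are illustrative assumptions, not the authors' published PSRRNet configuration. The unshared branch weights are what make the network pseudo-Siamese rather than Siamese.

```python
# Sketch of a Pseudo-Siamese ratio-reading network: two feature extractors
# with SEPARATE weights (one per line crop) feed a fully connected regressor.
# All layer sizes are assumptions for illustration, not the paper's config.
import torch
import torch.nn as nn

def make_branch() -> nn.Sequential:
    """A small CNN feature extractor producing a 32-d vector per crop."""
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1),  # global average pooling
        nn.Flatten(),
    )

class PSRRNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.t_branch = make_branch()  # features for the T-line crop
        self.c_branch = make_branch()  # separate weights for the C-line crop
        self.head = nn.Sequential(     # fully connected ratio regressor
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, t_crop: torch.Tensor, c_crop: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.t_branch(t_crop), self.c_branch(c_crop)], dim=1)
        return self.head(feats).squeeze(1)  # predicted T/C ratio per sample

# Usage: crops of the T and C lines, e.g. as located by Mask R-CNN in step one.
model = PSRRNetSketch()
t = torch.rand(4, 3, 32, 96)  # batch of 4 hypothetical 32x96 T-line crops
c = torch.rand(4, 3, 32, 96)
print(model(t, c).shape)      # -> torch.Size([4])
```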


References

1. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)
2. Liu, W., et al.: SSD: single shot multibox detector. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9905, pp. 21–37. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46448-0_2
3. Lin, T.Y., Goyal, P., Girshick, R., He, K.M., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2980–2988 (2017)
4. Jeong, J., Park, H., Kwak, N.: Enhancement of SSD by concatenating feature maps for object detection. arXiv preprint arXiv:1705.09587 (2017)
5. Zhang, S.F., Wen, L.Y., Bian, X., Lei, Z., Li, S.: Single-shot refinement neural network for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4203–4212 (2018)
6. He, K.M., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969 (2017)
7. Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1440–1448 (2015)
8. Ren, S.Q., He, K.M., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(6), 1137–1149 (2017)
9. Dai, J.F., Li, Y., He, K.M., Sun, J.: R-FCN: object detection via region-based fully convolutional networks. arXiv preprint arXiv:1605.06409 (2016)
10. Cai, Z.W., Vasconcelos, N.: Cascade R-CNN: delving into high quality object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6154–6162 (2018)
11. Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(12), 2481–2495 (2017)
12. Bolya, D., Zhou, C., Xiao, F.Y., Lee, Y.J.: YOLACT: real-time instance segmentation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 9157–9166 (2019)
13. Zeng, N.Y., Li, H., Wang, Z.D., Liu, W.B., Liu, X.H.: Deep-reinforcement-learning-based images segmentation for quantitative analysis of gold immunochromatographic strip. Neurocomputing 425, 173–180 (2021)
14. Zeng, N.Y., Li, H., Li, Y.R., Luo, X.: Quantitative analysis of immunochromatographic strip based on convolutional neural network. IEEE Access 7, 16257–16263 (2019)
15. Gao, J.Y., Xiao, C., Glass, L., Sun, J.M.: COMPOSE: cross-modal pseudo-siamese network for patient trial matching. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 803–812. Association for Computing Machinery, New York (2020)
16. He, K.M., Zhang, X.Y., Ren, S.Q., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
17. Lin, T.Y., Dollár, P., Girshick, R., He, K.M., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125 (2017)
18. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
19. Howard, A.G., et al.: MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
20. Hu, J., Shen, L., Sun, G., Wu, E.H.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018)
21. Lin, M., Chen, Q., Yan, S.C.: Network in network. arXiv preprint arXiv:1312.4400 (2014)
22. Shorten, C., Khoshgoftaar, T.M.: A survey on image data augmentation for deep learning. J. Big Data 6(1), 60 (2019)


Acknowledgment

This work was partially supported by the Innovation Project of GUET Graduate Education (2020YCXS057), Natural Science Foundation of Guangxi District (2018GXNSFDA138006), the National Natural Science Foundation of China (61866007), and Image Intelligent Processing Project of Key Laboratory Fund (GIIP201505).



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, L., He, L., Li, J., Zeng, D., Wen, Y. (2021). T Line and C Line Detection and Ratio Reading of the Ovulation Test Strip Based on Deep Learning. In: Yin, H., et al. Intelligent Data Engineering and Automated Learning – IDEAL 2021. IDEAL 2021. Lecture Notes in Computer Science, vol. 13113. Springer, Cham. https://doi.org/10.1007/978-3-030-91608-4_60


  • DOI: https://doi.org/10.1007/978-3-030-91608-4_60


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-91607-7

  • Online ISBN: 978-3-030-91608-4

  • eBook Packages: Computer Science, Computer Science (R0)
