Abstract
We design a two-stage image segmentation method comprising a neural network that estimates the distance transform, followed by a watershed segmentation. It allows segmentation and tracking of colliding objects without any assumptions on object behavior or global object appearance, since the proposed machine learning step is trained on contour information only. Our method is also capable of segmenting the partially vanishing contact surfaces of visually merged objects. The evaluation is performed on a dataset of collisions of Drosophila melanogaster larvae, manually labeled with pixel accuracy. The proposed pipeline needs no manual parameter tuning and operates at high frame rates. We provide a detailed evaluation of the neural network design, covering 1200 trained networks.
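To make the second, non-learned stage concrete, the following is a minimal sketch assuming the network has already produced a per-pixel distance-transform estimate `dist_pred` (larger values toward object centers). The use of scikit-image, the function name `segment_from_distance`, and the `fg_threshold` and `min_distance` values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_from_distance(dist_pred: np.ndarray, fg_threshold: float = 0.5) -> np.ndarray:
    """Split visually merged objects by watershed on a predicted distance map.

    `fg_threshold` and `min_distance` are hypothetical parameters of this sketch.
    """
    foreground = dist_pred > fg_threshold  # assumed binarization of the prediction
    # One local maximum of the distance map per object serves as a watershed seed.
    peak_coords = peak_local_max(dist_pred, labels=foreground, min_distance=5)
    seeds = np.zeros(dist_pred.shape, dtype=int)
    seeds[tuple(peak_coords.T)] = np.arange(1, len(peak_coords) + 1)
    # Flood the inverted distance map: basins grow outward from the object
    # centers and meet at the (possibly vanishing) contact surfaces.
    return watershed(-dist_pred, markers=seeds, mask=foreground)
```

Because seeds and basins are derived from contour distances alone, no appearance or behavior model enters this stage.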
Notes
1. Please note that reasonably small values (\(1 < \sigma \le 7\)) lead to comparable results.
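As a minimal illustration of where such a \(\sigma\) could enter, the sketch below smooths a Euclidean distance-transform target with a Gaussian of standard deviation \(\sigma\); that this is exactly the paper's use of \(\sigma\) is an assumption, and the note above only states that values in \(1 < \sigma \le 7\) behave comparably.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def smoothed_distance_target(mask: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Hypothetical training target: contour distance smoothed with a Gaussian."""
    dist = distance_transform_edt(mask)        # distance of each foreground pixel to the contour
    return gaussian_filter(dist, sigma=sigma)  # sigma in the note's stated range (1, 7]
```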
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Klemm, S., Jiang, X., Risse, B. (2019). Deep Distance Transform to Segment Visually Indistinguishable Merged Objects. In: Brox, T., Bruhn, A., Fritz, M. (eds) Pattern Recognition. GCPR 2018. Lecture Notes in Computer Science, vol 11269. Springer, Cham. https://doi.org/10.1007/978-3-030-12939-2_29
DOI: https://doi.org/10.1007/978-3-030-12939-2_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-12938-5
Online ISBN: 978-3-030-12939-2
eBook Packages: Computer Science, Computer Science (R0)