Synchronization Is All You Need: Exocentric-to-Egocentric Transfer for Temporal Action Segmentation with Unlabeled Synchronized Video Pairs

  • Conference paper
  • Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

We consider the problem of transferring a temporal action segmentation system initially designed for exocentric (fixed) cameras to an egocentric scenario, where wearable cameras capture video data. The conventional supervised approach requires the collection and labeling of a new set of egocentric videos to adapt the model, which is costly and time-consuming. Instead, we propose a novel methodology that performs the adaptation by leveraging existing labeled exocentric videos and a new set of unlabeled, synchronized exocentric-egocentric video pairs, for which temporal action segmentation annotations do not need to be collected. We implement the proposed methodology with an approach based on knowledge distillation, which we investigate both at the feature level and at the Temporal Action Segmentation model level. Experiments on Assembly101 and EgoExo4D demonstrate the effectiveness of the proposed method against classic unsupervised domain adaptation and temporal alignment approaches. Without bells and whistles, our best model performs on par with supervised approaches trained on labeled egocentric data, without ever seeing a single egocentric label, achieving a \(+15.99\) improvement in the edit score (28.59 vs 12.60) on the Assembly101 dataset compared to a baseline model trained solely on exocentric data. In similar settings, our method also improves the edit score by \(+3.32\) on the challenging EgoExo4D benchmark. Code is available at: https://github.com/fpv-iplab/synchronization-is-all-you-need.
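
To make the transfer recipe concrete, the following is a minimal, hypothetical sketch (in PyTorch, not the authors' released code) of the feature-level distillation idea described above: a frozen teacher encoder trained on labeled exocentric videos provides per-frame targets that an egocentric student encoder learns to match on synchronized, unlabeled exo/ego pairs, without any egocentric labels. All names and dimensions (Encoder, feat_dim, distillation_step) are illustrative assumptions.

```python
# Minimal sketch (PyTorch), NOT the authors' released code: feature-level
# knowledge distillation from an exocentric "teacher" encoder to an egocentric
# "student" encoder using synchronized, unlabeled exo/ego frame pairs.
# Names and dimensions (Encoder, feat_dim, distillation_step, ...) are
# illustrative assumptions only.
import torch
import torch.nn as nn

feat_dim = 512  # assumed dimensionality of the per-frame features


class Encoder(nn.Module):
    """Stand-in frame encoder mapping raw frame features to per-frame embeddings."""

    def __init__(self, in_dim: int = 2048, out_dim: int = feat_dim):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, x):            # x: (batch, frames, in_dim)
        return self.proj(x)          # -> (batch, frames, out_dim)


# Teacher: trained on labeled exocentric videos, frozen during the transfer.
teacher = Encoder()
for p in teacher.parameters():
    p.requires_grad = False

# Student: learns egocentric features aligned with the teacher's exocentric ones.
student = Encoder()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
distill_loss = nn.MSELoss()


def distillation_step(exo_frames, ego_frames):
    """One update on a synchronized exo/ego pair; no action labels are needed."""
    with torch.no_grad():
        target = teacher(exo_frames)   # exocentric features act as targets
    pred = student(ego_frames)         # egocentric features are the predictions
    loss = distill_loss(pred, target)  # frame-by-frame match, enabled by synchronization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy usage with random tensors standing in for pre-extracted frame features.
exo = torch.randn(2, 100, 2048)  # (batch, synchronized frames, raw feature dim)
ego = torch.randn(2, 100, 2048)
print(distillation_step(exo, ego))
```

A model-level variant would analogously have the egocentric Temporal Action Segmentation model match the frame-wise predictions produced by the exocentric model on the synchronized pairs; the repository linked above contains the authors' actual implementation.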

Notes

  1. As we show in our experiments, our method also works when videos are not perfectly synchronized; hence, sophisticated synchronization systems are not needed.

  2. More information on view selection is provided in the supplementary material.

  3. Additional implementation details are provided in the supplementary material.

  4. https://ego-exo4d-data.org/.


Acknowledgements

This research has been supported by the project Future Artificial Intelligence Research (FAIR) - PNRR MUR Cod. PE0000013 - CUP: E63C22001940006.

Author information

Corresponding author

Correspondence to Camillo Quattrocchi.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 11268 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Quattrocchi, C., Furnari, A., Di Mauro, D., Giuffrida, M.V., Farinella, G.M. (2025). Synchronization Is All You Need: Exocentric-to-Egocentric Transfer for Temporal Action Segmentation with Unlabeled Synchronized Video Pairs. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15130. Springer, Cham. https://doi.org/10.1007/978-3-031-73220-1_15

  • DOI: https://doi.org/10.1007/978-3-031-73220-1_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73219-5

  • Online ISBN: 978-3-031-73220-1

  • eBook Packages: Computer Science, Computer Science (R0)
