Abstract
Multitask learning has been a common technique for improving the representations learned by artificial neural networks for decades. However, its actual effects and trade-offs remain largely unexplored, especially in the context of document analysis. We demonstrate a simple and realistic scenario on real-world datasets in which multitask learning produces noticeably worse results than single-task training. We hypothesize that slight shifts in the data manifold and in task semantics are sufficient to cause adversarial competition between tasks inside the network, and we demonstrate this experimentally in two different multitask learning formulations.
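For context, the scenario above contrasts single-task training with the classic hard-parameter-sharing formulation of multitask learning: one shared encoder feeding several task-specific heads, trained on the sum of the per-task losses. The PyTorch sketch below illustrates that general setup only; the architecture, layer sizes, and label counts are illustrative placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class MultitaskNet(nn.Module):
    """Hard parameter sharing: one shared encoder, one head per task."""
    def __init__(self, num_classes_a: int, num_classes_b: int):
        super().__init__()
        # Shared feature extractor (simplified; the paper's backbone may differ).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific classification heads on top of the shared features.
        self.head_a = nn.Linear(32, num_classes_a)
        self.head_b = nn.Linear(32, num_classes_b)

    def forward(self, x):
        z = self.encoder(x)  # shared representation used by both tasks
        return self.head_a(z), self.head_b(z)

model = MultitaskNet(num_classes_a=12, num_classes_b=10)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One joint update on a dummy batch carrying labels for both tasks.
x = torch.randn(8, 1, 64, 64)     # placeholder grayscale document patches
y_a = torch.randint(0, 12, (8,))  # hypothetical labels for task A
y_b = torch.randint(0, 10, (8,))  # hypothetical labels for task B
logits_a, logits_b = model(x)
loss = criterion(logits_a, y_a) + criterion(logits_b, y_b)  # unweighted sum
optimizer.zero_grad()
loss.backward()  # gradients from both tasks flow into the shared encoder
optimizer.step()
```

Because both losses backpropagate through the same encoder weights, any conflict between the task gradients plays out directly in the shared representation; this is where the adversarial competition hypothesized in the abstract can arise.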
Notes
1. We consider tasks 1a and 1b to be independent, as they involve different datasets and labels.
About this paper
Cite this paper
Mattick, A., Mayr, M., Maier, A., Christlein, V.: Is Multitask Learning Always Better? In: Uchida, S., Barney, E., Eglin, V. (eds.) Document Analysis Systems. DAS 2022. Lecture Notes in Computer Science, vol. 13237. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-06555-2_45