The goal of multitask learning (MTL) is to learn multiple related tasks simultaneously in order to improve overall performance compared to training each task independently. For appropriate applications, MTL algorithms need fewer training samples per task to achieve the same classification results as independent learning strategies, and an MTL classifier trained on a sufficient number of related tasks should be able to find good solutions to a novel related task. MTL techniques are especially useful for overcoming the main drawbacks of automatic visual classification, such as the scarcity of training samples and the high variability among elements of the same class.
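The idea of sharing information across related tasks can be sketched with a minimal example. The snippet below (a hypothetical illustration, not the method of the publications listed here) uses hard parameter sharing: two synthetic binary classification tasks, whose labels depend on a common latent direction, are trained jointly through one shared linear representation plus a small logistic head per task. All names and data here are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two related binary tasks: labels depend on the same latent direction,
# plus a small task-specific perturbation (synthetic data for illustration).
d, n = 20, 50
shared_w = rng.normal(size=d)
tasks = []
for _ in range(2):
    w_t = shared_w + 0.1 * rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = (X @ w_t > 0).astype(float)
    tasks.append((X, y))

# Hard parameter sharing: one shared linear map W (d -> k) plus a
# per-task logistic head; all parameters are trained jointly.
k = 5
W = rng.normal(size=(d, k)) * 0.1
heads = [rng.normal(size=k) * 0.1 for _ in tasks]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    gW = np.zeros_like(W)
    for t, (X, y) in enumerate(tasks):
        h = X @ W                    # shared representation
        p = sigmoid(h @ heads[t])    # task-specific prediction
        err = p - y                  # logistic-loss gradient signal
        gW += X.T @ np.outer(err, heads[t]) / n   # gradient w.r.t. shared W
        heads[t] -= lr * (h.T @ err) / n          # update task-specific head
    W -= lr * gW / len(tasks)

for t, (X, y) in enumerate(tasks):
    acc = ((sigmoid(X @ W @ heads[t]) > 0.5) == y).mean()
    print(f"task {t} training accuracy: {acc:.2f}")
```

Because the shared map `W` receives gradient signal from every task, each task effectively benefits from the other tasks' samples, which is the intuition behind MTL needing fewer samples per task than independent training.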


  • Àgata Lapedriza. Multitask Learning Techniques for Automatic Face Classification. PhD thesis, CVC, 2009.
  • Àgata Lapedriza, David Masip, and Jordi Vitrià. On the use of independent tasks for face recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), pages 1-6, Anchorage, Alaska, 23-28 June 2008.
  • David Masip, Àgata Lapedriza, and Jordi Vitrià. Multitask Learning: An Application to Incremental Face Recognition. International Conference on Computer Vision Theory and Applications (VISAPP 2008).
  • Àgata Lapedriza, David Masip, and Jordi Vitrià. Subject Recognition Using a New Approach for Feature Extraction. International Conference on Computer Vision Theory and Applications (VISAPP 2008).
  • David Masip, Àgata Lapedriza, and Jordi Vitrià. Multitask Learning Applied to Face Recognition. 1st Spanish Workshop on Biometrics (SWB 2007), Girona, June 2007.


Related topics: Knowledge Transfer, Domain Adaptation, Multi-label Classification, Multi-class Classification.