TODTLER: Two-Order-Deep Transfer Learning

AAAI 2015

Published by AAAI Press


The traditional way of obtaining models from data, inductive learning, has proven itself both in theory and in many practical applications. However, in domains where data is difficult or expensive to obtain, e.g., medicine, deep transfer learning is a more promising technique. It circumvents the model acquisition difficulties caused by scarce data in a target domain by carrying over structural properties of a model learned in a source domain where training data is ample. Nonetheless, the lack of a principled view of transfer learning has so far limited its adoption. In this paper, we address this issue by regarding transfer learning as a process that biases learning in a target domain in favor of patterns useful in a source domain. Specifically, we consider a first-order logic model of the data as an instantiation of a set of second-order templates. Hence, the usefulness of a model is partly determined by the learner’s prior distribution over these template sets. The main insight of our work is that transferring knowledge amounts to acquiring a posterior over the second-order template sets by learning in the source domain and using this posterior when learning in the target setting. Our experimental evaluation demonstrates that our approach outperforms existing transfer learning techniques in both accuracy and runtime.
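
The core mechanism, a posterior over second-order templates acquired in the source domain and reused as a prior in the target domain, can be sketched roughly as follows. This is a minimal illustration only: the template names, the exponential-count update, and the scoring function are simplifying assumptions, not the paper's actual algorithm.

```python
import math
from collections import Counter

# Hypothetical second-order templates. Each template abstracts over
# predicate symbols (P, Q, R); instantiating it with concrete
# predicates yields a first-order clause. Names are illustrative.
TEMPLATES = ["transitivity", "symmetry", "chaining"]

def template_posterior(source_instantiations, prior, strength=1.0):
    """Update the prior over templates with evidence from the source
    domain: templates instantiated by the learned source model gain
    posterior mass (a softmax-style stand-in for the paper's actual
    Bayesian update)."""
    counts = Counter(source_instantiations)
    scores = {t: prior[t] * math.exp(strength * counts[t]) for t in TEMPLATES}
    z = sum(scores.values())
    return {t: s / z for t, s in scores.items()}

def biased_score(template, data_loglik, posterior):
    """Score a candidate target-domain clause: its data fit plus a
    bias toward templates the source domain found useful."""
    return data_loglik + math.log(posterior[template])

# Uniform prior over templates before seeing any domain.
prior = {t: 1.0 / len(TEMPLATES) for t in TEMPLATES}

# Suppose structure learning in the source domain produced clauses
# instantiating "transitivity" twice and "chaining" once.
posterior = template_posterior(
    ["transitivity", "transitivity", "chaining"], prior)

# In the target domain, two candidate clauses fit the data equally
# well; the transferred posterior breaks the tie toward transitivity.
print(biased_score("transitivity", -10.0, posterior))
print(biased_score("symmetry", -10.0, posterior))
```

Under this reading, transfer never copies source clauses directly; it only reweights which clause shapes the target-domain learner prefers when the data alone cannot decide.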