
Domain adaptation

Domain adaptation is a field of machine learning closely associated with transfer learning. It arises when we aim to learn, from a source data distribution, a model that performs well on a different (but related) target data distribution. For instance, one task in the common spam-filtering problem consists of adapting a model from one user (the source distribution) to a new user who receives significantly different emails (the target distribution). Domain adaptation has also been shown to be beneficial when learning from unrelated sources. When more than one source distribution is available, the problem is referred to as multi-source domain adaptation.

Formally, let X be the input space (or description space) and let Y be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) h : X → Y able to attach a label from Y to an example from X. This model is learned from a sample S = {(x_i, y_i)}_{i=1}^{m} ⊆ X × Y. In standard supervised learning (without domain adaptation), the examples (x_i, y_i) ∈ S are assumed to be drawn i.i.d. from a fixed but unknown distribution D_S with support X × Y. The objective is then to learn h (from S) such that it commits as little error as possible when labelling new examples drawn from D_S.

The main difference between supervised learning and domain adaptation is that in the latter we study two different (but related) distributions D_S and D_T on X × Y. The domain adaptation task then consists of transferring knowledge from the source domain D_S to the target domain D_T: the goal is to learn h (from labelled or unlabelled samples coming from the two domains) such that it commits as little error as possible on the target domain D_T.
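The source/target setup described above can be illustrated with a toy example. The following sketch is illustrative throughout (the synthetic Gaussian data, the nearest-centroid hypothesis, and the mean-alignment step are all assumptions, not a canonical domain-adaptation algorithm): a classifier h trained on a source sample degrades on a covariate-shifted target distribution D_T, then recovers after a simple unsupervised alignment of the two domains.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
std = 0.5

# Source domain D_S: class 0 around (0, 0), class 1 around (2, 2).
Xs0 = rng.normal([0.0, 0.0], std, size=(n, 2))
Xs1 = rng.normal([2.0, 2.0], std, size=(n, 2))

# Target domain D_T: same class structure, but every input is shifted
# by (1.5, 1.5) -- a covariate shift between the two domains.
shift = np.array([1.5, 1.5])
Xt0 = rng.normal([0.0, 0.0], std, size=(n, 2)) + shift
Xt1 = rng.normal([2.0, 2.0], std, size=(n, 2)) + shift
Xt = np.vstack([Xt0, Xt1])
yt = np.array([0] * n + [1] * n)

# Hypothesis h learned from the source sample S: a nearest-centroid classifier.
c0, c1 = Xs0.mean(axis=0), Xs1.mean(axis=0)

def predict(X):
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

# Applying h directly to the target domain: accuracy degrades because
# the target inputs no longer follow D_S.
acc_raw = (predict(Xt) == yt).mean()

# Simple unsupervised adaptation (an illustrative assumption): align the
# target mean with the source mean, using only unlabelled target inputs.
source_mean = np.vstack([Xs0, Xs1]).mean(axis=0)
Xt_aligned = Xt - Xt.mean(axis=0) + source_mean
acc_adapted = (predict(Xt_aligned) == yt).mean()

print(f"target accuracy without adaptation: {acc_raw:.2f}")
print(f"target accuracy after mean alignment: {acc_adapted:.2f}")
```

Here the mean alignment stands in for the general idea behind many adaptation methods (such as those minimizing the maximum mean discrepancy): transform or reweight the data so that the source and target input distributions match before applying the source-trained hypothesis.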
