
Transduction (machine learning)

In logic, statistical inference, and supervised learning, transduction or transductive inference is reasoning from observed, specific (training) cases to specific (test) cases. In contrast, induction is reasoning from observed training cases to general rules, which are then applied to the test cases. The distinction is most interesting in cases where the predictions of the transductive model are not achievable by any inductive model. Note that this is caused by transductive inference on different test sets producing mutually inconsistent predictions.

Transduction was introduced by Vladimir Vapnik in the 1990s, motivated by his view that transduction is preferable to induction since, according to him, induction requires solving a more general problem (inferring a function) before solving a more specific problem (computing outputs for new cases): 'When solving a problem of interest, do not solve a more general problem as an intermediate step. Try to get the answer that you really need but not a more general one.' A similar observation had been made earlier by Bertrand Russell: 'we shall reach the conclusion that Socrates is mortal with a greater approach to certainty if we make our argument purely inductive than if we go by way of "all men are mortal" and then use deduction' (Russell 1912, chap VII).

An example of learning which is not inductive would be binary classification, where the inputs tend to cluster in two groups. A large set of test inputs may help in finding the clusters, thus providing useful information about the classification labels. The same predictions would not be obtainable from a model which induces a function based only on the training cases. Some people may call this an example of the closely related semi-supervised learning, though Vapnik's motivation is quite different. An example of an algorithm in this category is the Transductive Support Vector Machine (TSVM).

A third possible motivation which leads to transduction arises through the need to approximate. If exact inference is computationally prohibitive, one may at least try to make sure that the approximations are good at the test inputs. In this case, the test inputs could come from an arbitrary distribution (not necessarily related to the distribution of the training inputs), which would not be allowed in semi-supervised learning. An example of an algorithm falling in this category is the Bayesian Committee Machine (BCM).
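The clustering intuition above can be sketched in code. The following is a minimal illustration, not Vapnik's TSVM: plain k-means is run on the pooled training and test inputs (the transductive step, since the unlabeled test points help locate the cluster centers), and each test point then inherits the label of the training case that seeded its cluster. All data and function names here are hypothetical, chosen for illustration.

```python
def cluster_means(points, centers, iters=10):
    """Plain k-means on 1-D points, starting from the given centers."""
    for _ in range(iters):
        groups = {i: [] for i in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        # Recompute each center as the mean of its group (keep old center if empty).
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in groups.items()]
    return centers

def transduce(train, test):
    """train: list of (x, label) pairs; test: list of unlabeled x.
    Returns predicted labels for the test inputs."""
    all_x = [x for x, _ in train] + test
    # Seed one center per labeled case, then refine using ALL inputs,
    # including the unlabeled test cases -- this is the transductive step.
    centers = cluster_means(all_x, [x for x, _ in train])
    labels = [lab for _, lab in train]
    return [labels[min(range(len(centers)), key=lambda i: abs(x - centers[i]))]
            for x in test]

train = [(0.0, 'A'), (10.0, 'B')]        # one labeled case per cluster
test = [0.5, 1.0, 1.5, 8.5, 9.0, 9.5]    # unlabeled test inputs
print(transduce(train, test))            # -> ['A', 'A', 'A', 'B', 'B', 'B']
```

An inductive learner seeing only the two training points would have to fix a decision boundary in advance; here the boundary effectively shifts with the test set, which is why predictions on different test sets need not be mutually consistent.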
