There is no abstract available for this article.
This work was supported by the National Key R&D Program of China (Grant No. 2018YFB1004300) and the National Natural Science Foundation of China (Grant No. 61751306).
[1] de Raedt L, Frasconi P, Kersting K, et al. Probabilistic Inductive Logic Programming. Berlin: Springer, 2008
[2] Getoor L, Taskar B. Introduction to Statistical Relational Learning. Cambridge: MIT Press, 2007
[3] Magnani L. Abductive Cognition: The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning. Berlin: Springer, 2009
[4] Mooney R J. Integrating abduction and induction in machine learning. In: Abduction and Induction. Amsterdam, 2000. 181--191
[5] Muggleton S H, Bryant C H. Theory completion using inverse entailment. In: Proceedings of the 10th International Conference on Inductive Logic Programming, London, 2000. 130--146
[6] Dai W-Z, Zhou Z-H. Combining logic abduction and statistical induction: discovering written primitives with human knowledge. In: Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, 2017. 2977--2983
[7] Zhou Z-H. A brief introduction to weakly supervised learning. Natl Sci Rev, 2018, 5: 44--53
[8] Zhou Z-H. Learnware: on the future of machine learning. Front Comput Sci, 2016, 10: 589--590
[9] Dai W-Z, Xu Q-L, Yu Y, et al. Tunneling neural perception and logic reasoning through abductive learning. arXiv preprint, 2018
[10] Garcez A S D, Broda K B, Gabbay D M. Neural-Symbolic Learning Systems: Foundations and Applications. London: Springer, 2009
Figure 1
(a) Conventional supervised learning, where the ground-truth labels of the training data are given, and (b) abductive learning, where a classifier and a knowledge base are given. The given information is highlighted in black; the machine learning and logical reasoning components are shown in blue and green, respectively. In (b), the given classifier generates pseudo-labels, which yield pseudo-groundings; revisions to the pseudo-groundings (shown as a red triangle) are then produced via logical abduction by minimizing their inconsistency with the knowledge base. The abduced labels are used to train the classifier, which then replaces the original classifier in the next iteration.
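The iterative loop described in the caption can be sketched in code. The sketch below is purely illustrative and not the paper's implementation: the "classifier" is a simple threshold, and the "knowledge base" is a stand-in constraint (exactly `k` instances in a batch are positive); `classify`, `abduce`, `retrain`, and `abductive_learning` are all hypothetical names.

```python
def classify(xs, t):
    """Machine learning step: assign pseudo-labels with a threshold classifier."""
    return [1 if x > t else 0 for x in xs]

def abduce(xs, k):
    """Logical abduction step (toy): revise labels to satisfy the stand-in
    knowledge-base constraint that exactly k instances are positive, by
    marking the k largest feature values positive."""
    order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)
    labels = [0] * len(xs)
    for i in order[:k]:
        labels[i] = 1
    return labels

def retrain(xs, labels):
    """Retrain the classifier on the abduced labels: place the threshold
    midway between the largest negative and the smallest positive instance."""
    pos = [x for x, y in zip(xs, labels) if y == 1]
    neg = [x for x, y in zip(xs, labels) if y == 0]
    return (max(neg) + min(pos)) / 2

def abductive_learning(xs, k, t=0.0, iters=5):
    """One loop of Figure 1(b): pseudo-label, check consistency with the
    knowledge base, abduce revised labels if needed, and retrain."""
    for _ in range(iters):
        pseudo = classify(xs, t)
        if sum(pseudo) != k:          # pseudo-groundings inconsistent with KB
            pseudo = abduce(xs, k)    # abduction minimizes the inconsistency
        t = retrain(xs, pseudo)       # abduced labels update the classifier
    return t
```

For instance, starting from a poor threshold `t=0.0` on `xs = [0.1, 0.4, 0.6, 0.9]` with `k=2`, the first pass labels everything positive, abduction corrects the labels to `[0, 0, 1, 1]`, and retraining converges to a consistent threshold.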