
SCIENCE CHINA Information Sciences, Volume 63, Issue 11: 210203(2020) https://doi.org/10.1007/s11432-020-3080-3

Evidential combination of augmented multi-source of information based on domain adaptation

  • Received: Jun 16, 2020
  • Accepted: Oct 6, 2020
  • Published: Oct 22, 2020

Abstract


Acknowledgment

This work was partially supported by National Natural Science Foundation of China (Grant Nos. 61672431, 61790552, 61790554, 61701409), Shaanxi Science Fund for Distinguished Young Scholars (Grant No. 2018JC-006), and Fundamental Research Funds for the Central Universities.


Supplement

Tables A1–A11.


References

[1] Pan S J, Yang Q. A Survey on Transfer Learning. IEEE Trans Knowl Data Eng, 2010, 22: 1345-1359

[2] Dai W, Yang Q, Xue G-R, et al. Boosting for transfer learning. In: Proceedings of the International Conference on Machine Learning, 2007. 193--200

[3] Pan S J, Tsang I W, Kwok J T. Domain Adaptation via Transfer Component Analysis. IEEE Trans Neural Netw, 2011, 22: 199-210

[4] Long M, Wang J, Ding G, et al. Transfer feature learning with joint distribution adaptation. In: Proceedings of the IEEE International Conference on Computer Vision, 2013. 2200--2207

[5] Fernando B, Habrard A, Sebban M, et al. Unsupervised visual domain adaptation using subspace alignment. In: Proceedings of the IEEE International Conference on Computer Vision, 2014. 2960--2967

[6] Gong B, Shi Y, Sha F, et al. Geodesic flow kernel for unsupervised domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2012. 2066--2073

[7] Wang J, Chen Y, Hao S, et al. Balanced distribution adaptation for transfer learning. In: Proceedings of the IEEE International Conference on Data Mining, 2017. 1129--1134

[8] Deng W Y, Lendasse A, Ong Y S. Domain Adaption via Feature Selection on Explicit Feature Map. IEEE Trans Neural Netw Learning Syst, 2019, 30: 1180-1190

[9] Wen L, Gao L, Li X. A New Deep Transfer Learning Based on Sparse Auto-Encoder for Fault Diagnosis. IEEE Trans Syst Man Cybern Syst, 2019, 49: 136-144

[10] Long M, Cao Y, Cao Z. Transferable Representation Learning with Deep Adaptation Networks. IEEE Trans Pattern Anal Mach Intell, 2019, 41: 3071-3085

[11] Tzeng E, Hoffman J, Saenko K, et al. Adversarial discriminative domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 7167--7176

[12] Sun S, Shi H, Wu Y. A survey of multi-source domain adaptation. Inf Fusion, 2015, 24: 84-92

[13] Liu H, Shao M, Fu Y. Structure-preserved multi-source domain adaptation. In: Proceedings of the IEEE International Conference on Data Mining, 2016. 1059--1064

[14] Duan L, Xu D, Tsang I W. Domain Adaptation From Multiple Sources: A Domain-Dependent Regularization Approach. IEEE Trans Neural Netw Learning Syst, 2012, 23: 504-518

[15] Ding Z, Shao M, Fu Y. Incomplete Multisource Transfer Learning. IEEE Trans Neural Netw Learning Syst, 2018, 29: 310-323

[16] Zhu Y, Zhuang F, Zhuang D. Aligning domain-specific distribution and classifier for cross-domain classification from multiple sources. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2019. 5989--5996

[17] Xu R, Chen Z, Zuo W, et al. Deep cocktail network: multi-source unsupervised domain adaptation with category shift. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 3964--3973

[18] Zhao H, Zhang S, Wu G, et al. Multiple source domain adaptation with adversarial learning. In: Proceedings of the International Conference on Learning Representations, 2018. 8559--8570

[19] Pizzi N J, Pedrycz W. Aggregating multiple classification results using fuzzy integration and stochastic feature selection. Int J Approximate Reasoning, 2010, 51: 883-894

[20] Shafer G. A Mathematical Theory of Evidence. Princeton: Princeton University Press, 1976

[21] Liu Z G, Pan Q, Dezert J. Combination of Classifiers With Optimal Weight Based on Evidential Reasoning. IEEE Trans Fuzzy Syst, 2018, 26: 1217-1230

[22] Liu Z G, Huang L Q, Zhou K. Combination of Transferable Classification With Multisource Domain Adaptation Based on Evidential Reasoning. IEEE Trans Neural Netw Learning Syst, 2020: 1-15

[23] Kobetski M, Sullivan J. Apprenticeship learning: transfer of knowledge via dataset augmentation. In: Proceedings of the Scandinavian Conference on Image Analysis, 2013. 432--443

[24] Aftab U, Siddiqui G. Big data augmentation with data warehouse: a survey. In: Proceedings of the IEEE International Conference on Big Data, 2018. 2775--2784

[25] Shao L, Zhu F, Li X. Transfer Learning for Visual Categorization: A Survey. IEEE Trans Neural Netw Learning Syst, 2015, 26: 1019-1034

[26] Chen Y, Song S, Li S. Domain Space Transfer Extreme Learning Machine for Domain Adaptation. IEEE Trans Cybern, 2019, 49: 1909-1922

[27] Raina R, Battle A, Lee H, et al. Self-taught learning: transfer learning from unlabeled data. In: Proceedings of the International Conference on Machine Learning, 2007. 759--766

[28] Xu Y, Pan S J, Xiong H. A Unified Framework for Metric Transfer Learning. IEEE Trans Knowl Data Eng, 2017, 29: 1158-1171

[29] Li S, Song S, Huang G. Cross-Domain Extreme Learning Machines for Domain Adaptation. IEEE Trans Syst Man Cybern Syst, 2019, 49: 1194-1207

[30] Borgwardt K M, Gretton A, Rasch M J. Integrating structured biological data by Kernel Maximum Mean Discrepancy. Bioinformatics, 2006, 22: e49-e57

[31] Ben-David S, Blitzer J, Crammer K, et al. Analysis of representations for domain adaptation. In: Proceedings of the International Conference on Neural Information Processing Systems, 2007. 137--144

[32] Denoeux T. Decision-making with belief functions: A review. Int J Approximate Reasoning, 2019, 109: 87-110

[33] Denoeux T. Logistic regression, neural networks and Dempster-Shafer theory: A new perspective. Knowledge-Based Syst, 2019, 176: 54-67

[34] Denoeux T. A k-nearest neighbor classification rule based on Dempster-Shafer theory. IEEE Trans Syst Man Cybern, 1995, 25: 804-813

[35] Liu Z, Pan Q, Dezert J. Credal c-means clustering method based on belief functions. Knowledge-Based Syst, 2015, 74: 119-132

[36] Zhou Z, Liu T, Hu G. Fault-alarm-threshold optimization method based on interval evidence reasoning. Sci China Inf Sci, 2019, 62: 89202

[37] Si X S, Hu C H, Zhou Z J. Fault prediction model based on evidential reasoning approach. Sci China Inf Sci, 2010, 53: 2032-2046

[38] Han S P. A globally convergent method for nonlinear programming. J Optim Theor Appl, 1977, 22: 297-309

[39] Sun B, Feng J, Saenko K. Return of frustratingly easy domain adaptation. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2016. 2058--2065

[40] Long M, Wang J, Ding G, et al. Transfer joint matching for unsupervised domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014. 1410--1417

[41] Dezert J, Tchamova A. On the Validity of Dempster's Fusion Rule and its Interpretation as a Generalization of Bayesian Fusion Rule. Int J Intell Syst, 2014, 29: 223-252

  • Figure 1

    (Color online) Patterns in different domains. (a) Patterns in $\mathcal{D}_{S_1}$; (b) patterns in $\mathcal{D}_{S_2}$; (c) patterns in $\mathcal{D}_{S_n}$; (d) patterns in $\mathcal{D}_T$.

  •   

    Algorithm 1 ECAMI

    Require: The $n$ source-domain data sets $D_{S_i} = \{(\boldsymbol{x}_p^{S_i}, y_p^{S_i})\}_{p=1}^{N_i}$, $i = 1, \ldots, n$, and one target-domain data set $D_T = \{\boldsymbol{x}_q^{T}\}_{q=1}^{N_T}$.

    Generate additional information sources by merging the source-domain data sets;

    Select the high-quality information sources;

    Obtain the weighting factors by Eq. (8);

    for $q = 1$ to $N_T$ do

    Obtain multiple classification results with the aid of the selected high-quality information sources;

    Discount these sources of evidence using Eq. (7);

    Combine the discounted results by the DS rule in Eq. (6);

    Make the class decision using the plausibility function value $\mathrm{Pl}(\cdot)$;

    end for

    Output: Class decisions.
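    The inner loop of Algorithm 1 applies standard belief-function machinery: each classification result is treated as a mass function, discounted by its source's reliability, combined with the others by Dempster's rule, and decided on via plausibility. Eqs. (6)–(8) of the paper are not reproduced here, so the sketch below illustrates only the classical versions of these operations (Shafer discounting, Dempster's normalized conjunctive rule, maximum-plausibility decision) on a hypothetical three-class frame with illustrative masses and discount factors.

    ```python
    from itertools import product

    # Hypothetical frame of discernment (the set of classes).
    FRAME = frozenset({"c1", "c2", "c3"})

    def discount(m, alpha):
        """Shafer discounting: scale every focal mass by the reliability
        factor alpha and move the remaining 1 - alpha onto the whole
        frame (total ignorance)."""
        md = {A: alpha * v for A, v in m.items()}
        md[FRAME] = md.get(FRAME, 0.0) + (1.0 - alpha)
        return md

    def ds_combine(m1, m2):
        """Dempster's rule: conjunctive combination of two mass functions,
        normalized by 1 - K where K is the mass sent to the empty set
        (the conflict between the two sources)."""
        raw, conflict = {}, 0.0
        for (A, v1), (B, v2) in product(m1.items(), m2.items()):
            C = A & B
            if C:
                raw[C] = raw.get(C, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
        return {A: v / (1.0 - conflict) for A, v in raw.items()}

    def decide(m):
        """Pick the class with maximal plausibility
        Pl({c}) = sum of masses of focal sets intersecting {c}."""
        return max(sorted(FRAME),
                   key=lambda c: sum(v for A, v in m.items() if c in A))

    # Two illustrative classifier outputs expressed as mass functions.
    m1 = {frozenset({"c1"}): 0.7, FRAME: 0.3}
    m2 = {frozenset({"c1"}): 0.5, frozenset({"c2"}): 0.3, FRAME: 0.2}

    fused = ds_combine(discount(m1, 0.9), discount(m2, 0.8))
    print(decide(fused))  # prints "c1"
    ```

    Discounting before combination is what lets unreliable sources be absorbed gracefully: as alpha approaches 0, a source's mass collapses onto the full frame and becomes neutral under Dempster's rule.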

  • Table 1  

    Table 1  Basic information of the selected benchmark datasets

    Dataset Domain Feature Sample Class
    Office+Caltech-10 Amazon (A) 800 958 10
    Office+Caltech-10 Caltech (C) 800 1123 10
    Office+Caltech-10 DSLR (D) 800 157 10
    Office+Caltech-10 Webcam (W) 800 295 10
    VLSC VOC2007 (V) 4096 3376 5
    VLSC LabelMe (L) 4096 2656 5
    VLSC SUN09 (S) 4096 3282 5
    VLSC Caltech101 (C) 4096 1415 5
    Office-31 Amazon (A) 800 2715 31
    Office-31 DSLR (D) 800 482 31
    Office-31 Webcam (W) 800 776 31
  • Table 2  

    Table 2  Time cost (s) of the proposed and related combination methods with TCA on Office+Caltech-10

    Task ECAMI-MV ECAMI-WMV ECAMI-AF ECAMI-WAF ECAMI-DS ECAMI-WDS
    C, D, W$\to$A 0.06 0.06 0.05 0.06 0.36 5.09
    A, D, W$\to$C 0.05 0.05 0.05 0.07 0.22 6.06
    A, C, W$\to$D 0.07 0.05 0.05 0.05 0.30 1.88
    A, C, D$\to$W 0.05 0.07 0.05 0.06 0.19 1.01
  • Table 3  

    Table 3  Time cost (s) of the proposed and related combination methods with TCA on VLSC

    Task ECAMI-MV ECAMI-WMV ECAMI-AF ECAMI-WAF ECAMI-DS ECAMI-WDS
    L, S, C$\to$V 0.06 0.05 0.05 0.05 0.29 2.09
    V, S, C$\to$L 0.05 0.06 0.04 0.05 0.20 2.42
    V, L, C$\to$S 0.06 0.05 0.05 0.05 0.30 2.53
    V, L, S$\to$C 0.05 0.05 0.04 0.05 0.21 1.82
  • Table 4  

    Table 4  Time cost (s) of the proposed and related combination methods with TCA on Office-31

    Task ECAMI-MV ECAMI-WMV ECAMI-AF ECAMI-WAF ECAMI-DS ECAMI-WDS
    D, W$\to$A 0.05 0.07 0.06 0.06 0.34 0.83
    A, W$\to$D 0.05 0.08 0.08 0.05 0.15 0.60
    A, D$\to$W 0.05 0.05 0.06 0.05 0.26 0.44