
SCIENCE CHINA Information Sciences, Volume 64, Issue 11: 212104 (2021). https://doi.org/10.1007/s11432-019-2635-8

Sampling informative context nodes for network embedding

  • Received: May 6, 2019
  • Accepted: Aug 20, 2019
  • Published: Oct 12, 2021

Abstract


Acknowledgment

This work was supported in part by Social Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 2018SJA0455), National Natural Science Foundation of China (Grant No. 61472183), and Social Science Foundation of Jiangsu Province (Grant No. 19TQD002).


References

[1] Sen P, Namata G, Bilgic M, et al. Collective classification in network data. AI Mag, 2008, 29: 93.

[2] Chandola V, Banerjee A, Kumar V. Anomaly detection: a survey. ACM Comput Surv, 2009, 41: 1-58.

[3] Yang J, McAuley J, Leskovec J. Community detection in networks with node attributes. In: Proceedings of the 13th IEEE International Conference on Data Mining, 2013. 1151-1156.

[4] Liben-Nowell D, Kleinberg J. The link-prediction problem for social networks. J Am Soc Inf Sci, 2007, 58: 1019-1031.

[5] Perozzi B, Al-Rfou R, Skiena S. DeepWalk: online learning of social representations. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2014. 701-710.

[6] Grover A, Leskovec J. node2vec: scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016. 855-864.

[7] Tang J, Qu M, Wang M Z, et al. LINE: large-scale information network embedding. In: Proceedings of the 24th International Conference on World Wide Web, 2015. 1067-1077.

[8] Wang D X, Cui P, Zhu W W. Structural deep network embedding. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016. 1225-1234.

[9] Abu-El-Haija S, Perozzi B, Al-Rfou R, et al. Watch your step: learning node embeddings via graph attention. In: Proceedings of the 32nd Conference on Neural Information Processing Systems, 2018. 9198-9208.

[10] Dong Y X, Chawla N V, Swami A. metapath2vec: scalable representation learning for heterogeneous networks. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017. 135-144.

[11] Zhang D K, Yin J, Zhu X Q, et al. MetaGraph2Vec: complex semantic path augmented heterogeneous network embedding. In: Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2018. 196-208.

[12] Liao L, He X, Zhang H. Attributed social network embedding. IEEE Trans Knowl Data Eng, 2018, 30: 2257-2270.

[13] Mikolov T, Chen K, Corrado G, et al. Efficient estimation of word representations in vector space. 2013. ArXiv:1301.3781.

[14] Qiu J Z, Dong Y X, Ma H, et al. Network embedding as matrix factorization: unifying DeepWalk, LINE, PTE, and node2vec. In: Proceedings of the 11th ACM International Conference on Web Search and Data Mining, 2018. 459-467.

[15] Guiaşu S. Weighted entropy. Rep Math Phys, 1971, 2: 165-179.

[16] Kingma D P, Ba J. Adam: a method for stochastic optimization. 2014. ArXiv:1412.6980.

[17] Tang L, Liu H. Relational learning via latent social dimensions. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2009. 817-826.

[18] Cao S S, Lu W, Xu Q K. GraRep: learning graph representations with global structural information. In: Proceedings of the 24th ACM International Conference on Information and Knowledge Management, 2015. 891-900.

[19] Zou K H, O'Malley A J, Mauri L. Receiver-operating characteristic analysis for evaluating diagnostic tests and predictive models. Circulation, 2007, 115: 654-657.

[20] Yang M S, Nataliani Y. A feature-reduction fuzzy clustering algorithm based on feature-weighted entropy. IEEE Trans Fuzzy Syst, 2018, 26: 817-835.

[21] Park E, Ahn J, Yoo S. Weighted-entropy-based quantization for deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

[22] Feng X, Qin B, Liu T. A language-independent neural network for event detection. Sci China Inf Sci, 2018, 61: 092106.

[23] Li X, Zhuang Y, Fu Y. A trust-aware random walk model for return propensity estimation and consumer anomaly scoring in online shopping. Sci China Inf Sci, 2019, 62: 052101.

[24] Perozzi B, Kulkarni V, Skiena S. Walklets: multiscale graph embeddings for interpretable network classification. 2016. ArXiv preprint.

  • Figure 1

    (Color online) Node degree w.r.t. sampled frequency ratio of MWENE/Deepwalk.

  • Figure 2

    (Color online) Weighted entropy w.r.t. the performance of the classification and link prediction tasks; corr and sig denote the correlation coefficient and the significance, respectively.

  • Figure 3

    Block diagram of the MWENE algorithm.

  • Figure 4

    (Color online) Results on node classification task.

  • Figure 5

    (Color online) Parameter sensitivity. (a) Iteration w.r.t. performance; (b) embedding size w.r.t. performance; (c) walk length w.r.t. performance.

  • Figure 6

    (Color online) Properties of MWENE. (a) Iteration w.r.t. weighted entropy; (b) node degree w.r.t. weighted information.

  • Algorithm 1 Maximum weighted entropy sampling for network embedding (MWENE)

    Require: Network $G=(V,E)$, walk length $t$, number of walks per node $n$;
    Output: the sampled context matrix $C \in \mathbb{N}^{|V| \times |V|}$.
    Initialization: context matrix $C$ with every element set to $0$; weighted entropy vector $q \in \mathbb{R}^{|V|}$ with every element set to $1$;
    for $\mathrm{iteration}=1$ to $n$
        for $i=1$ to $|V|$
            $s = i$; // $v_s$ is the current node
            for $j=1$ to $t$
                $\mathrm{neighbors} = \mathrm{get\_neighbors}(G, s)$;
                $\pi_{\rm neigh} = [w_{su} q_u$ for $u$ in $\mathrm{neighbors}]$;
                $\pi_{\rm neigh} = \pi_{\rm neigh} / \mathrm{sum}(\pi_{\rm neigh})$;
                $k = \mathrm{AliasSample}(\mathrm{neighbors}, \pi_{\rm neigh})$; // sample the next node
                $C_{ik} = C_{ik} + 1$;
                $s = k$;
            end for
        end for
        for $i=1$ to $|V|$
            Update $q_i$ according to Eq. (1);
        end for
    end for
    return $C$.
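
    For concreteness, a minimal Python sketch of Algorithm 1 follows. Eq. (1) is not reproduced in this excerpt, so the update of $q$ below substitutes Guiaşu's weighted entropy [15] over each node's empirical context distribution as a labeled assumption; likewise, np.random.choice stands in for the AliasSample step.

        import numpy as np

        def mwene_sample(G, walk_length, num_walks):
            # G is an (optionally weighted) networkx.Graph.
            nodes = list(G.nodes())
            idx = {v: i for i, v in enumerate(nodes)}      # node -> row index of C
            C = np.zeros((len(nodes), len(nodes)), dtype=np.int64)  # context matrix, all zeros
            q = np.ones(len(nodes))                        # weighted entropy vector, all ones
            for _ in range(num_walks):
                for i, v in enumerate(nodes):
                    s = v                                  # v_s is the current node
                    for _ in range(walk_length):
                        neighbors = list(G.neighbors(s))
                        if not neighbors:
                            break
                        # Transition scores: edge weight w_su times the entropy score q_u.
                        pi = np.array([G[s][u].get("weight", 1.0) * q[idx[u]] for u in neighbors])
                        pi /= pi.sum()
                        # np.random.choice stands in for alias sampling.
                        k = neighbors[np.random.choice(len(neighbors), p=pi)]
                        C[i, idx[k]] += 1
                        s = k
                q = update_q(C)                            # "update q_i according to Eq. (1)"
            return C

        def update_q(C, eps=1e-12):
            # Assumed stand-in for Eq. (1): Guiasu's weighted entropy of each node's
            # sampled context distribution, with uniform outcome weights.
            p = C / np.maximum(C.sum(axis=1, keepdims=True), 1)
            return -(p * np.log(p + eps)).sum(axis=1) + eps  # keep q strictly positive

    For an unweighted graph the first iteration reduces to uniform random walks; subsequent iterations bias sampling toward neighbors whose sampled contexts carry higher weighted entropy.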

  • Table 1  

    Table 1. Correlation coefficients between the information measurements and the evaluation indicators. F1 and ROC denote Macro-F1 and AUROC, respectively; $^{**}$ denotes that the significance value $p$ is below 0.01.

    Information indicator   F1 (DBLP)      ROC (DBLP)     F1 (CITESEER)   ROC (CITESEER)
    Entropy                 0.132          $-$0.109       0.397           0.509
    KL-divergence           $-$0.534       0.216          0.163           0.274
    Weighted entropy        0.670$^{**}$   0.660$^{**}$   0.861$^{**}$    0.758$^{**}$
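
    Eq. (1), which defines the weighted entropy indicator, is not reproduced in this excerpt. For orientation, the weighted entropy of Guiaşu [15], on which the indicator builds, has the form

    $$H_w(p) = -\sum_i w_i \, p_i \log p_i,$$

    where $p$ is a probability distribution and $w_i \ge 0$ is the weight attached to outcome $i$; with all $w_i = 1$ it reduces to ordinary Shannon entropy.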
  • Table 2  

    Table 2. Statistics of the datasets.

    Dataset       BlogCatalog   DBLP     CITESEER
    $|V|$         10312         60744    3312
    $|E|$         333983        52914    4675
    Label count   39            4        6
    Label         Interest      Area     Area
  • Table 3  

    Table 3. Results of the link prediction task.

    Method            BlogCatalog   DBLP    CITESEER
    Deepwalk          0.721         0.843   0.796
    Node2vec          0.890         0.872   0.815
    GraRep            0.758         0.858   0.553
    Graph attention   –             0.904   0.831
    MENE              0.838         0.854   0.802
    MWENE             0.902         0.875   0.837
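
    The scores above are AUROC-style values (cf. the receiver-operating characteristic analysis of [19]). The paper's exact evaluation protocol is not part of this excerpt; the sketch below assumes the common setup of scoring held-out edges against sampled non-edges with an embedding dot product.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def link_prediction_auroc(emb, pos_pairs, neg_pairs):
            # emb: (|V| x d) node embedding matrix; pairs are (u, v) index tuples.
            score = lambda pairs: np.array([emb[u] @ emb[v] for u, v in pairs])
            y_true = np.concatenate([np.ones(len(pos_pairs)), np.zeros(len(neg_pairs))])
            y_score = np.concatenate([score(pos_pairs), score(neg_pairs)])
            return roc_auc_score(y_true, y_score)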