Fair Graph Representation Learning with Imbalanced and Biased Data

2022 
Graph-structured data is omnipresent in fields such as biology, chemistry, social media, and transportation. Learning informative graph representations is crucial for effectively completing downstream graph-related tasks such as node/graph classification and link prediction. Graph Neural Networks (GNNs), owing to their flexibility in handling graph-structured data and the data-mining power inherited from deep learning, have achieved significant success in learning graph representations. Nonetheless, most existing GNNs are designed under unrealistic data assumptions, such as balanced and unbiased data distributions, whereas many real-world networks exhibit skewed (i.e., long-tailed) node/graph class distributions and may also encode patterns of previous discriminatory decisions dominated by sensitive attributes. Furthermore, extensive research effort has been invested in developing GNN architectures that improve model utility while largely ignoring whether the obtained node/graph representations conceal discriminatory bias, which could lead to prejudicial decisions as GNN-based machine learning models are increasingly deployed in real-world applications. In light of the prevalence of these two types of unfairness, originating from quantity imbalance and discriminatory bias, my research aims to propose novel node/graph representation learning frameworks that construct innovative GNN architectures and devise novel graph-mining algorithms to learn fair yet expressive node/graph representations with a favorable fairness-utility tradeoff.
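To make the two sources of unfairness concrete, the following is a minimal illustrative sketch, not the dissertation's actual method: a two-layer GCN node classifier in PyTorch Geometric whose loss combines inverse-frequency class weighting (to counter a long-tailed label distribution) with a demographic-parity penalty over a binary sensitive attribute. The names FairGCN, dp_gap, and lambda_fair are hypothetical and introduced only for this example.

# Illustrative sketch only; assumes torch and torch_geometric are installed.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class FairGCN(torch.nn.Module):
    """Two-layer GCN producing per-node class logits."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def dp_gap(logits, sensitive):
    """Demographic-parity gap: difference in mean predicted class-1 probability
    between the two sensitive groups (sensitive is a 0/1 tensor)."""
    p = F.softmax(logits, dim=-1)[:, 1]
    return (p[sensitive == 1].mean() - p[sensitive == 0].mean()).abs()


def train_step(model, optimizer, data, sensitive, class_weights, lambda_fair=0.5):
    """One training step on a standard PyG Data object with a train_mask."""
    model.train()
    optimizer.zero_grad()
    logits = model(data.x, data.edge_index)
    # Inverse-frequency class weights address the quantity imbalance.
    loss = F.cross_entropy(logits[data.train_mask], data.y[data.train_mask],
                           weight=class_weights)
    # The fairness term trades a little utility for a smaller parity gap.
    loss = loss + lambda_fair * dp_gap(logits[data.train_mask],
                                       sensitive[data.train_mask])
    loss.backward()
    optimizer.step()
    return loss.item()

Here lambda_fair is the knob that trades classification utility against the parity gap, which is one simple instance of the fairness-utility tradeoff the abstract refers to.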