Graph Neural Networks (GNNs) have achieved remarkable performance by taking
advantage of graph data. The success of GNN models always depends on rich
features and adjacency relationships. In practice, however, such data are
usually isolated by different data owners (clients) and thus are likely to
be Non-Independent and Identically Distributed (Non-IID). Meanwhile, given
the limited network conditions of data owners, hyper-parameter optimization
for collaborative learning approaches is time-consuming in data-isolation
scenarios. To address these problems, we propose an Automated
Separated-Federated Graph Neural Network (ASFGNN) learning paradigm. ASFGNN
consists of two main components, i.e., the training of the GNN and the
tuning of hyper-parameters. Specifically, to solve the data Non-IID problem,
we first propose a separated-federated GNN learning model, which decouples
the training of the GNN into two parts: the message-passing part, which each
client performs separately on its own subgraph, and the loss-computation
part, which the clients learn federally.
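To make the decoupling concrete, below is a minimal PyTorch sketch, assuming
a one-layer local GNN, a shared linear classification head, and plain FedAvg
in place of the paper's aggregation protocol; all names (LocalGNN,
federated_round, a_hat, and the client tuple layout) are illustrative
assumptions, not the paper's API.

    import copy
    import torch
    import torch.nn as nn

    class LocalGNN(nn.Module):
        # Message-passing part: stays on the client, trained separately.
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, hid_dim)

        def forward(self, a_hat, x):
            # One propagation step H = ReLU(A_hat X W); a_hat is the
            # normalized adjacency matrix of the client's private graph.
            return torch.relu(a_hat @ self.lin(x))

    def federated_round(clients, head, lr=0.1):
        # Loss-computation part: a shared head learned federally (FedAvg).
        # Each client entry is (a_hat, x, y, gnn) -- a hypothetical layout.
        head_states = []
        for a_hat, x, y, gnn in clients:
            local_head = copy.deepcopy(head)           # copy of global head
            opt = torch.optim.SGD(
                list(gnn.parameters()) + list(local_head.parameters()),
                lr=lr)
            loss = nn.functional.cross_entropy(
                local_head(gnn(a_hat, x)), y)          # local forward pass
            opt.zero_grad(); loss.backward(); opt.step()
            head_states.append(local_head.state_dict())  # only head leaves
        avg = {k: torch.stack([s[k] for s in head_states]).mean(0)
               for k in head_states[0]}                # server-side average
        head.load_state_dict(avg)

Note that only the head's parameters ever leave a client; the raw features,
adjacency matrix, and message-passing weights remain local throughout.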
To handle the time-consuming parameter-tuning problem, we leverage Bayesian
optimization to automatically tune the hyper-parameters of all the clients.
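As a rough illustration of such a tuning loop, below is a minimal
Gaussian-process Bayesian optimization sketch over a single shared
hyper-parameter (the learning rate). The evaluate() callback, assumed to run
one full separated-federated training and return validation accuracy, and
the search bounds are assumptions for illustration; the paper's procedure
tunes the hyper-parameters of all clients, not a single scalar.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def bayes_opt(evaluate, bounds=(1e-4, 1e-1), n_init=3, n_iter=10):
        rng = np.random.default_rng(0)
        # Search in log-space, which suits scale parameters like
        # learning rates.
        lo, hi = np.log10(bounds[0]), np.log10(bounds[1])
        X = rng.uniform(lo, hi, size=(n_init, 1))   # initial random probes
        y = np.array([evaluate(10 ** x[0]) for x in X])
        gp = GaussianProcessRegressor(normalize_y=True)
        for _ in range(n_iter):
            gp.fit(X, y)                            # fit surrogate model
            cand = rng.uniform(lo, hi, size=(256, 1))  # candidate pool
            mu, sd = gp.predict(cand, return_std=True)
            # Expected improvement over the best accuracy seen so far.
            imp = mu - y.max()
            z = imp / np.maximum(sd, 1e-9)
            ei = imp * norm.cdf(z) + sd * norm.pdf(z)
            x_next = cand[np.argmax(ei)]            # most promising point
            X = np.vstack([X, x_next])
            y = np.append(y, evaluate(10 ** x_next[0]))
        return 10 ** X[np.argmax(y), 0], y.max()

Because each evaluation is a full collaborative training run over limited
networks, spending a cheap surrogate fit to pick the next trial is far less
costly than grid or random search.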
We conduct experiments on benchmark datasets, and the results demonstrate
that ASFGNN significantly outperforms the naive federated GNN in terms of
both accuracy and parameter-tuning efficiency.