# Machine learning (COMP3670)

## Message passing GNNs

The original graph neural network (GNN) model of [GMS05; Sca+09] was the first formulation of deep learning methods for graph-structured data. It views the supervised graph embedding problem as an information diffusion mechanism, where nodes send information to their neighbors until some stable equilibrium state is reached. More concretely, given randomly initialized node embeddings $\mathbf{Z}^0$, it applies the following recursion:
$$\mathbf{Z}^{t+1}=\operatorname{ENC}\left(\mathbf{X}, \mathbf{W}, \mathbf{Z}^t ; \Theta^E\right),$$
where parameters $\Theta^E$ are reused at every iteration. After convergence $(t=T)$, the node embeddings $\mathbf{Z}^T$ are used to predict the final output such as node or graph labels:
$$\hat{y}^S=\operatorname{DEC}\left(\mathbf{X}, \mathbf{Z}^T ; \Theta^S\right) .$$
This process is repeated several times, and the GNN parameters $\Theta^E$ and $\Theta^S$ are learned with backpropagation via the Almeida-Pineda algorithm [Alm87; Pin88]. By Banach's fixed-point theorem, this process is guaranteed to converge to a unique solution when the recursion is a contraction mapping. In light of this, Scarselli et al. [Sca+09] explore maps that can be expressed using message passing networks:
$$\mathbf{Z}_i^{t+1}=\sum_{j \mid\left(v_i, v_j\right) \in E} f\left(\mathbf{X}_i, \mathbf{X}_j, \mathbf{Z}_j^t ; \Theta^E\right),$$
where $f(\cdot)$ is a multi-layer perceptron (MLP) constrained to be a contraction mapping. The decoder function, however, has no such constraint and can be any MLP.
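To make the fixed-point recursion concrete, here is a minimal NumPy sketch (not the original implementation): $f$ is taken to be a single tanh layer whose weight matrix is kept small in norm, so the update is a contraction and the iteration converges to a unique fixed point regardless of initialization.

```python
import numpy as np

def gnn_fixed_point(X, edges, W, max_iters=200, tol=1e-8):
    """Iterate Z_i^{t+1} = sum_{j:(v_i,v_j) in E} f(X_i, X_j, Z_j^t)
    until (approximate) convergence, as in the original GNN model.
    Here f(a, b, c) = tanh(a + b + c) @ W; keeping ||W|| small makes
    the update a contraction, so Banach's theorem applies."""
    Z = np.zeros_like(X)  # the original model uses random initialization
    for _ in range(max_iters):
        Z_new = np.zeros_like(Z)
        for i, j in edges:
            Z_new[i] += np.tanh(X[i] + X[j] + Z[j]) @ W
        converged = np.linalg.norm(Z_new - Z) < tol
        Z = Z_new
        if converged:
            break
    return Z
```

With a larger weight matrix the map may fail to be a contraction, and the iteration need not converge; this is exactly the constraint the next paragraph's GGSNN model removes.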

Li et al. [Li+15] propose Gated Graph Sequence Neural Networks (GGSNNs), which remove the contraction mapping requirement from GNNs. In GGSNNs, the recursive algorithm in Equation 23.22 is relaxed by applying mapping functions for a fixed number of steps, where each mapping function is a gated recurrent unit [Cho+14a] with parameters shared for every iteration. The GGSNN model outputs predictions at every step, and so is particularly useful for tasks which have sequential structure (such as temporal graphs).
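The gated relaxation can be sketched as follows with a hand-rolled GRU-style cell (the parameter names `Wr`, `Ur`, etc. are illustrative, not the paper's notation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggsnn_step(Z, A, p):
    """One gated update: aggregate neighbor messages with the adjacency
    matrix A, then apply a GRU-style cell; parameters p are shared across steps."""
    M = A @ Z                                        # incoming messages
    r = sigmoid(M @ p["Wr"] + Z @ p["Ur"])           # reset gate
    u = sigmoid(M @ p["Wu"] + Z @ p["Uu"])           # update gate
    h = np.tanh(M @ p["Wh"] + (r * Z) @ p["Uh"])     # candidate state
    return (1.0 - u) * Z + u * h

def ggsnn(Z0, A, p, T=5):
    """Run the gated update for a fixed number of steps T; no contraction
    constraint is needed, and a prediction can be emitted at every step."""
    Z, outputs = Z0, []
    for _ in range(T):
        Z = ggsnn_step(Z, A, p)
        outputs.append(Z)
    return outputs
```

Because the number of steps is fixed rather than run to convergence, the gates are free to be arbitrary (non-contractive) learned functions.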

Gilmer et al. [Gil+17] provide a framework for graph neural networks called message passing neural networks (MPNNs), which encapsulates many recent models. In contrast with the GNN model which runs for an indefinite number of iterations, MPNNs provide an abstraction for modern approaches, which consist of multi-layer neural networks with a fixed number of layers.
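The MPNN abstraction fits in a few lines; `message_fn` and `update_fn` below are placeholders for whatever (learned) functions a particular model uses, and different choices recover many published architectures:

```python
import numpy as np

def mpnn_layer(H, edges, message_fn, update_fn):
    """One MPNN layer: m_i = sum over neighbors j of message(h_i, h_j),
    followed by h_i' = update(h_i, m_i). Modern GNNs stack a fixed
    number of such layers rather than iterating to convergence."""
    M = np.zeros_like(H)
    for i, j in edges:
        M[i] += message_fn(H[i], H[j])
    return np.stack([update_fn(h, m) for h, m in zip(H, M)])
```

A full model simply composes a fixed number of such layers, each with its own parameters.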

## Spectral Graph Convolutions

Spectral methods define graph convolutions using the spectral domain of the graph Laplacian matrix. These methods broadly fall into two categories: spectrum-based methods, which explicitly compute an eigendecomposition of the Laplacian (e.g., spectral CNNs [Bru+14]) and spectrum-free methods, which are motivated by spectral graph theory but do not actually perform a spectral decomposition (e.g., Graph convolutional networks or GCN [KW16a]).
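To illustrate what a spectrum-based method computes: a filter with frequency response $g(\lambda)$ acts on a graph signal $x$ as $y = U\, g(\Lambda)\, U^{\top} x$, where $L = D - A = U \Lambda U^{\top}$ is the graph Laplacian. A minimal sketch (the explicit eigendecomposition here is exactly the expensive step these methods are criticized for):

```python
import numpy as np

def spectral_filter(A, x, g):
    """Apply a spectral filter with frequency response g to a graph signal x:
    y = U g(Lambda) U^T x, where L = D - A = U Lambda U^T is the graph
    Laplacian. The explicit eigendecomposition costs O(n^3), which is the
    main scalability bottleneck of spectrum-based methods."""
    D = np.diag(A.sum(axis=1))
    lam, U = np.linalg.eigh(D - A)       # eigenvalues and eigenvectors of L
    return U @ (g(lam) * (U.T @ x))
```

Setting $g(\lambda) = 1$ recovers the input signal, and $g(\lambda) = \lambda$ applies the Laplacian itself.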

A major disadvantage of spectrum-based methods is that they rely on the spectrum of the graph Laplacian and are therefore domain-dependent (i.e. cannot generalize to new graphs). Moreover, computing the Laplacian’s spectral decomposition is computationally expensive. Spectrum-free methods overcome these limitations by utilizing approximations of these spectral filters. However, spectrum-free methods require using the whole graph $\mathbf{W}$, and so do not scale well.
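The best-known spectrum-free approximation is the GCN propagation rule of [KW16a]; a single layer can be sketched as below (the ReLU nonlinearity is an illustrative choice, and note that the whole adjacency matrix is needed at once):

```python
import numpy as np

def gcn_layer(A, H, W):
    """Spectrum-free GCN propagation:
    H' = relu( D^{-1/2} (A + I) D^{-1/2} H W ), where D is the degree
    matrix of A + I (self-loops added)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU is an illustrative choice
```

The normalized operator is a first-order polynomial of the Laplacian, which is why no eigendecomposition is needed; the price is the full-graph matrix product visible in the last line.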
For more details on spectral approaches, see e.g., [Bro+17b; Cha+21].

Spectrum-based methods have an inherent domain dependency, which limits the application of a model trained on one graph to a new dataset. Additionally, spectrum-free methods (e.g., GCNs) require using the entire graph $\mathbf{W}$, which can quickly become infeasible as the size of the graph grows.
To overcome these limitations, another branch of graph convolutions (spatial methods) borrows ideas from standard CNNs, applying convolutions in the spatial domain as defined by the graph topology. For instance, in computer vision, convolutional filters are spatially localized by using fixed rectangular patches around each pixel. Combined with the natural ordering of pixels in images (top, left, bottom, right), it is possible to reuse filter weights at every location. This process significantly reduces the total number of parameters needed for a model. While such spatial convolutions cannot directly be applied in graph domains, spatial graph convolutions take inspiration from them. The core idea is to use neighborhood sampling and attention mechanisms to create fixed-size graph patches, overcoming the irregularity of graphs.
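A minimal sketch of the fixed-size-patch idea, using GraphSAGE-style uniform neighbor sampling with mean aggregation (the function names here are ours, not a library's):

```python
import numpy as np

def sample_patch(adj_list, node, k, rng):
    """Sample a fixed-size 'patch' of k neighbors; sample with replacement
    only when the node has fewer than k neighbors."""
    nbrs = adj_list[node]
    idx = rng.choice(len(nbrs), size=k, replace=len(nbrs) < k)
    return [nbrs[i] for i in idx]

def mean_aggregate(H, adj_list, k, rng):
    """Mean-aggregate each node's sampled fixed-size neighborhood, giving
    every node a regular 'receptive field' regardless of its degree."""
    return np.stack([H[sample_patch(adj_list, v, k, rng)].mean(axis=0)
                     for v in range(len(adj_list))])
```

Because every patch has exactly $k$ entries, the downstream aggregation and weight matrices have fixed shapes, mirroring the weight sharing of image convolutions; attention mechanisms play an analogous role by weighting a node's neighbors instead of sampling them.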
