# Machine learning | CS446

## Graph auto-encoders

Kipf and Welling [KW16b] use graph convolutions (Section 23.4.2) to learn node embeddings $\mathbf{Z}=\operatorname{GCN}\left(\mathbf{W}, \mathbf{X} ; \Theta^E\right)$. The decoder is an outer product: $\operatorname{DEC}\left(\mathbf{Z} ; \Theta^D\right)=\mathbf{Z} \mathbf{Z}^{\top}$. The graph reconstruction term is the sigmoid cross entropy between the true adjacency and the predicted edge similarity scores:
$$\mathcal{L}_{G, \operatorname{RECON}}(\mathbf{W}, \widehat{\mathbf{W}} ; \Theta)=-\left(\sum_{i, j}\left(1-\mathbf{W}_{i j}\right) \log \left(1-\sigma\left(\widehat{\mathbf{W}}_{i j}\right)\right)+\mathbf{W}_{i j} \log \sigma\left(\widehat{\mathbf{W}}_{i j}\right)\right) .$$
Computing the reconstruction term over all possible node pairs is computationally challenging in practice, so the graph auto-encoder (GAE) model uses negative sampling to overcome this challenge.
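The outer-product decoder and the sampled reconstruction loss above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: the function name `gae_recon_loss` is ours, and the negative sampler simply draws random node pairs (which may occasionally hit true edges; real implementations typically exclude them).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gae_recon_loss(W, Z, num_neg=None, rng=None):
    """Sigmoid cross-entropy between the true adjacency W and the
    decoded scores W_hat = Z Z^T (the outer-product decoder).

    If num_neg is given, the sum over non-edges is approximated by
    negative sampling instead of enumerating all N^2 pairs.
    """
    W_hat = Z @ Z.T                              # outer-product decoder
    pos = np.argwhere(W > 0)                     # observed edges
    loss = -np.sum(np.log(sigmoid(W_hat[pos[:, 0], pos[:, 1]]) + 1e-10))
    if num_neg is None:
        neg = np.argwhere(W == 0)                # all non-edges: O(N^2)
    else:
        rng = rng or np.random.default_rng(0)
        neg = rng.integers(0, W.shape[0], size=(num_neg, 2))  # sampled pairs
    loss += -np.sum(np.log(1.0 - sigmoid(W_hat[neg[:, 0], neg[:, 1]]) + 1e-10))
    return loss
```

With `num_neg` set, the cost of the non-edge term scales with the number of samples rather than with $N^2$, which is the point of the negative-sampling approximation.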

Whereas GAE is a deterministic model, the authors also introduce the variational graph auto-encoder (VGAE), which relies on variational auto-encoders (as in Section 20.3.5) to encode and decode the graph structure. In VGAE, the embedding $\mathbf{Z}$ is modeled as a latent variable with a standard multivariate normal prior $p(\mathbf{Z})=\mathcal{N}(\mathbf{Z} \mid \mathbf{0}, \mathbf{I})$, and a graph convolution is used as the amortized inference network, $q_{\Phi}(\mathbf{Z} \mid \mathbf{W}, \mathbf{X})$. The model is trained by minimizing the corresponding negative evidence lower bound:
$$\begin{aligned} \operatorname{NELBO}(\mathbf{W}, \mathbf{X} ; \Theta) &=-\mathbb{E}_{q_{\Phi}(\mathbf{Z} \mid \mathbf{W}, \mathbf{X})}[\log p(\mathbf{W} \mid \mathbf{Z})]+\operatorname{KL}\left(q_{\Phi}(\mathbf{Z} \mid \mathbf{W}, \mathbf{X}) \,\|\, p(\mathbf{Z})\right) \\ &=\mathcal{L}_{G, \operatorname{RECON}}(\mathbf{W}, \widehat{\mathbf{W}} ; \Theta)+\operatorname{KL}\left(q_{\Phi}(\mathbf{Z} \mid \mathbf{W}, \mathbf{X}) \,\|\, p(\mathbf{Z})\right) \end{aligned}$$
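The two pieces of the NELBO that are specific to the variational model are the reparameterized sample from $q_{\Phi}$ and the KL term against the standard normal prior, which has a closed form for a diagonal Gaussian: $\operatorname{KL} = \frac{1}{2}\sum (\sigma^2 + \mu^2 - 1 - \log \sigma^2)$. A minimal sketch, assuming the two GCN heads producing `mu` and `log_sigma` are given (the function name is ours):

```python
import numpy as np

def vgae_kl_and_sample(mu, log_sigma, rng=None):
    """Reparameterized sample Z ~ q(Z|W,X) and KL(q || N(0, I)).

    mu, log_sigma: (N, L) arrays, assumed to be the outputs of two
    GCN heads of the amortized inference network.
    """
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(mu.shape)
    Z = mu + np.exp(log_sigma) * eps   # reparameterization trick
    # Closed-form KL of a diagonal Gaussian against N(0, I):
    kl = 0.5 * np.sum(np.exp(2 * log_sigma) + mu**2 - 1.0 - 2.0 * log_sigma)
    return Z, kl
```

The sampled `Z` is then fed to the outer-product decoder to compute the reconstruction term, and `kl` is added to give the NELBO.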

## Methods based on contrastive losses

The deep graph infomax (DGI) method of [Vel+19] is a GAN-like method for creating graph-level embeddings. Given one or more real (positive) graphs, each with its adjacency matrix $\mathbf{W} \in \mathbb{R}^{N \times N}$ and node features $\mathbf{X} \in \mathbb{R}^{N \times D}$, this method creates fake (negative) adjacency matrices $\mathbf{W}^{-} \in \mathbb{R}^{N^{-} \times N^{-}}$ and their features $\mathbf{X}^{-} \in \mathbb{R}^{N^{-} \times D}$. It trains (i) an encoder that processes both real and fake samples, respectively giving $\mathbf{Z}=\operatorname{ENC}\left(\mathbf{X}, \mathbf{W} ; \Theta^E\right) \in \mathbb{R}^{N \times L}$ and $\mathbf{Z}^{-}=\operatorname{ENC}\left(\mathbf{X}^{-}, \mathbf{W}^{-} ; \Theta^E\right) \in \mathbb{R}^{N^{-} \times L}$; (ii) a (readout) graph pooling function $\mathcal{R}: \mathbb{R}^{N \times L} \rightarrow \mathbb{R}^L$; and (iii) a discriminator function $\mathcal{D}: \mathbb{R}^L \times \mathbb{R}^L \rightarrow [0,1]$, which is trained to output $\mathcal{D}\left(\mathbf{Z}_i, \mathcal{R}(\mathbf{Z})\right) \approx 1$ for nodes $i \in V$ of a given (real) graph and $\mathcal{D}\left(\mathbf{Z}_j^{-}, \mathcal{R}\left(\mathbf{Z}^{-}\right)\right) \approx 0$ for nodes $j \in V^{-}$ of a fake graph. Specifically, DGI optimizes:
$$\min_{\Theta}\; -\underset{\mathbf{X}, \mathbf{W}}{\mathbb{E}} \sum_{i=1}^N \log \mathcal{D}\left(\mathbf{Z}_i, \mathcal{R}(\mathbf{Z})\right)-\underset{\mathbf{X}^{-}, \mathbf{W}^{-}}{\mathbb{E}} \sum_{j=1}^{N^{-}} \log \left(1-\mathcal{D}\left(\mathbf{Z}_j^{-}, \mathcal{R}\left(\mathbf{Z}^{-}\right)\right)\right), \tag{23.34}$$
where $\Theta$ contains $\Theta^E$ and the parameters of $\mathcal{R}$ and $\mathcal{D}$. In the first expectation, DGI samples from the real (positive) graphs. If only one graph is given, it can sample subgraphs from it (e.g., connected components). The second expectation samples fake (negative) graphs. In DGI, fake samples use the real adjacency, $\mathbf{W}^{-}:=\mathbf{W}$, but the fake features $\mathbf{X}^{-}$ are a row-wise random permutation of the real $\mathbf{X}$. The encoder used in DGI is a graph convolutional network, though any GNN can be used. The readout $\mathcal{R}$ summarizes an entire (variable-size) graph into a single (fixed-dimension) vector. Veličković et al. [Vel+19] define $\mathcal{R}$ as a row-wise mean, though other graph pooling functions could be used, e.g., ones aware of the adjacency.
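The corruption step (permute the rows of $\mathbf{X}$, keep $\mathbf{W}$), the mean readout, and a one-sample version of the objective above can be sketched as follows. This assumes the encoder outputs $\mathbf{Z}$ and $\mathbf{Z}^{-}$ are already computed; the bilinear form of the discriminator and the function names are illustrative choices, not prescribed by the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def corrupt_features(X, rng=None):
    """DGI negative sample: keep the adjacency W, permute the rows of X."""
    rng = rng or np.random.default_rng(0)
    return X[rng.permutation(X.shape[0])]

def dgi_loss(Z, Z_neg, M):
    """One-sample DGI objective given encoder outputs.

    Z:     (N, L) node embeddings of the real graph.
    Z_neg: (N-, L) node embeddings of the corrupted graph.
    M:     (L, L) weights of a bilinear discriminator
           D(z, s) = sigmoid(z^T M s) -- an illustrative choice.
    """
    s = Z.mean(axis=0)                       # readout R: row-wise mean
    s_neg = Z_neg.mean(axis=0)               # readout of the fake graph
    pos = sigmoid(Z @ M @ s)                 # D(Z_i, R(Z)), pushed toward 1
    neg = sigmoid(Z_neg @ M @ s_neg)         # D(Z_j^-, R(Z^-)), pushed toward 0
    return (-np.sum(np.log(pos + 1e-10))
            - np.sum(np.log(1.0 - neg + 1e-10)))
```

Because the corruption only permutes rows, each node's feature vector still occurs exactly once in $\mathbf{X}^{-}$; the negatives differ from the positives only in which node carries which features.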

The optimization of Equation (23.34) is shown by [Vel+19] to maximize a lower bound on the mutual information between the outputs of the encoder and the graph pooling function, i.e., between individual node representations and the graph representation.

Peng et al. [Pen+20] present a variant called graphical mutual information (GMI). Rather than maximizing the MI between node information and an entire graph, GMI maximizes the MI between the representation of a node and its neighbors.

