In Sec. II, we give a brief introduction to generative models and summarize the concepts that are necessary to train RBMs and VAEs. We can think of generative models as neural networks with θ describing the underlying weights and activation-function parameters. Our aim is to better understand the representational characteristics of both generative models.

The energy of an Ising spin configuration s is

E(s) = −J ∑_{⟨i,j⟩} s_i s_j − H ∑_i s_i,

where ⟨i,j⟩ denotes the summation over nearest neighbors and H is an external magnetic field.

An alternative to CD-k methods is persistent contrastive divergence (PCD) Tieleman (2008). In Torlai and Melko (2016), the partition function of RBMs has been computed directly, since the considered system sizes were relatively small. In the following subsections, we describe additional methods that allow us to monitor approximations of the log-likelihood (see Eq.

where x̃ corresponds to the vector x but with a flipped i-th component.

The distance metric depends on the nature of the samples; for the binary data we model, we consider the Jaccard distance. In the context of generative models, this means that we can calculate the KL divergence in terms of the Jaccard distances ρk(xi) and νk(xi).

There are six additional units that we use to encode the temperature of the Ising samples. We train the complete architecture over 10³ epochs using the Adam optimizer Kingma and Ba (2014) and a learning rate of η = 10⁻⁴.

After training both generative neural networks with 20×10⁴ realizations of Ising configurations at different temperatures, we can now study the ability of RBMs and cVAEs to capture physical features of the Ising model. Next, we determine the magnetization, energy, and correlation functions of the generated samples at different temperatures (see Fig.
We show the two-dimensional scatter plots of E(T) and M(T) in Fig.
In Fig. A.1, we show the resulting temperature dependence of magnetization, energy, and spin correlations.
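The energy and magnetization of sampled configurations follow directly from the Hamiltonian above. A minimal sketch, assuming ±1 spins on a periodic square lattice (the function name and lattice handling are illustrative, not taken from the paper):

```python
import numpy as np

def ising_observables(spins, J=1.0, H=0.0):
    """Energy and magnetization per site of a 2D Ising configuration.

    `spins` is an L x L array with entries +1/-1; periodic boundaries.
    """
    # Sum over nearest-neighbor pairs: shifting the lattice once along
    # each axis counts every bond exactly once.
    nn_sum = (spins * np.roll(spins, 1, axis=0)
              + spins * np.roll(spins, 1, axis=1)).sum()
    energy = -J * nn_sum - H * spins.sum()
    magnetization = spins.mean()
    return energy / spins.size, magnetization
```

As a sanity check, the fully aligned configuration gives the ground-state energy of −2J per site, and a checkerboard configuration gives +2J per site with vanishing magnetization.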
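The six extra units that encode the sample temperature can be realized, for instance, as a one-hot vector over the grid of training temperatures. The one-hot scheme and the specific grid below are assumptions made for illustration; the paper only states that six units encode the temperature:

```python
import numpy as np

# Hypothetical grid of six training temperatures (not from the paper).
TEMPERATURES = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])

def encode_temperature(T, grid=TEMPERATURES):
    """One-hot encoding of a temperature over a fixed six-value grid."""
    onehot = np.zeros(grid.size)
    # Activate the unit whose grid temperature is closest to T.
    onehot[int(np.argmin(np.abs(grid - T)))] = 1.0
    return onehot
```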
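A divergence estimate in terms of the nearest-neighbor distances ρk(xi) and νk(xi) can be sketched with a standard k-NN estimator. Note the caveat: combining the ambient dimension d with the non-Euclidean Jaccard metric is a heuristic choice made here for illustration, and the function names are not from the paper:

```python
import numpy as np

def jaccard_dist(a, b):
    """Jaccard distance between two binary vectors."""
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return 1.0 - np.logical_and(a, b).sum() / union

def knn_kl_estimate(X, Y, k=1):
    """k-NN estimate of D(p||q) from samples X ~ p and Y ~ q.

    rho_k(x_i): distance from x_i to its k-th nearest neighbor in
    X \ {x_i}; nu_k(x_i): distance to its k-th nearest neighbor in Y.
    """
    n, d = X.shape
    m = Y.shape[0]
    total = 0.0
    for i in range(n):
        rho = sorted(jaccard_dist(X[i], X[j]) for j in range(n) if j != i)[k - 1]
        nu = sorted(jaccard_dist(X[i], y) for y in Y)[k - 1]
        # Guard against zero distances caused by duplicate samples.
        total += np.log(max(nu, 1e-12) / max(rho, 1e-12))
    return d / n * total + np.log(m / (n - 1))
```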
Usually, the distribution p̂ is unknown and is therefore approximated by a generative-model distribution p(θ), where θ is the corresponding parameter set. In this way, we are able to quantitatively compare the learned distributions to corresponding theoretical predictions of a well-characterized physical system. To study the learning progress of both models, we provide an overview of different monitoring techniques in Sec.
Approximations are important, since it is computationally too expensive to determine the log-likelihood and the KL divergence exactly. We consider a general approximation of an m-dimensional parametric distribution:

p(x) ≈ ∏_{i=1}^{m} p(x_i | x_{−i}),

where x_{−i} is the set of all variables apart from x_i.

Instead of restarting the Gibbs chain, PCD uses the final state of the model from the previous sampling process to initialize the current one. We initialize each Markov chain with binary states distributed uniformly at random, train over 200 epochs, and repeat this procedure 10 times.

In Sec. IV.3, we compute different physical features, including the energy E(T) and the magnetization M(T). In Fig. 5, we observe that RBMs that were trained for single temperatures are able to capture the temperature dependence of magnetization, energy, and spin correlations. Fig. B.1 shows the distribution of E(T) and M(T) in the original data. We also find that convolutional layers in VAEs are important: in the absence of convolutional layers, we found that some physical features are not well captured anymore (see App.
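The difference between CD-k and PCD lies only in where the negative-phase Gibbs chain starts: CD-k restarts it at the data, while PCD carries a persistent "fantasy" state across parameter updates. A minimal single-sample sketch for a binary RBM, with illustrative variable names not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, rng):
    """One Gibbs sweep v -> h -> v' for a binary RBM."""
    h = (rng.random(c.shape) < sigmoid(v @ W + c)).astype(float)
    v_new = (rng.random(b.shape) < sigmoid(h @ W.T + b)).astype(float)
    return v_new

def pcd_update(v_data, v_persist, W, b, c, lr, rng, k=1):
    """One PCD update: the negative phase starts from the persistent
    chain state rather than from the data (as CD-k would)."""
    for _ in range(k):
        v_persist = gibbs_step(v_persist, W, b, c, rng)
    h_data = sigmoid(v_data @ W + c)      # positive-phase statistics
    h_model = sigmoid(v_persist @ W + c)  # negative-phase statistics
    W += lr * (np.outer(v_data, h_data) - np.outer(v_persist, h_model))
    b += lr * (v_data - v_persist)
    c += lr * (h_data - h_model)
    return v_persist  # carried over to the next update
```

The returned `v_persist` is fed back in at the next update, which is exactly what keeps the chain "persistent".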
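The flipped-component vector x̃ supplies the conditionals needed for this product approximation: for binary RBM units, p(x_i | x_{−i}) = σ(F(x̃) − F(x)), with F the RBM free energy. A sketch assuming a standard binary RBM parameterization (not necessarily the paper's exact estimator):

```python
import numpy as np

def free_energy(v, W, b, c):
    """Free energy of a binary RBM with visible bias b, hidden bias c."""
    return -v @ b - np.sum(np.logaddexp(0.0, v @ W + c))

def pseudo_log_likelihood(v, W, b, c):
    """log p(v) approximated by sum_i log p(v_i | v_{-i}).

    For binary units, p(v_i | v_{-i}) = sigmoid(F(v~) - F(v)),
    where v~ equals v with the i-th component flipped.
    """
    total = 0.0
    f_v = free_energy(v, W, b, c)
    for i in range(v.size):
        v_flip = v.copy()
        v_flip[i] = 1.0 - v_flip[i]  # the flipped-component vector v~
        # log sigmoid(x) = -logaddexp(0, -x), numerically stable
        total += -np.logaddexp(0.0, -(free_energy(v_flip, W, b, c) - f_v))
    return total
```

With all parameters zero, every conditional equals 1/2, so the pseudo-log-likelihood reduces to −m·log 2, which is a convenient correctness check.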