This week’s article is “Conditional Neural Processes” by Garnelo et al. To understand this post, you need to have a basic understanding of neural networks and Gaussian processes.
In my own words
A neural process (NP) is a novel framework for regression and classification tasks that combines the strengths of neural networks (NNs) and Gaussian processes (GPs). In particular, similar to GPs, NPs learn distributions over functions and predict their uncertainty about the predicted function values. But in contrast to GPs, NPs scale linearly with the number of data points (GPs typically scale cubically). A well-known special case of an NP is the generative query network (GQN), which was developed to predict 3D scenes from unobserved viewpoints.
Neural processes should come in handy for several parts of my Rubik’s Cube project. Thus, I aim to build a Python package that lets the user implement NPs and all their variations with a minimal amount of code. As a first step, here I reproduce some of Garnelo et al.’s work on conditional neural processes (CNPs), which are the precursors of NPs.
If you just want to know what you can do with CNPs, feel free to skip ahead to the next section, but a little bit of mathematical background can’t hurt 🙂
Consider the following scenario. We want to predict the values \boldsymbol{y}^{(t)} = f(\boldsymbol{x}^{(t)}) of an (unknown) function f at a given set of target coordinates \boldsymbol{x}^{(t)}. We are provided with a set of context points \{\boldsymbol{x}^{(c)}, \boldsymbol{y}^{(c)}\} at which the function values are known, i.e. \boldsymbol{y}^{(c)} = f(\boldsymbol{x}^{(c)}). In addition, we can look at an arbitrarily large set of graphs of other functions that are members of the same class as f, i.e. they have been generated by the same stochastic process. A CNP solves this prediction problem by training on these other functions, thereby parametrizing the stochastic process with an NN.
Specifically, the CNP consists of three components: an encoder, an aggregator, and a decoder. The encoder h is applied to each context point (\boldsymbol{x}_i^{(c)}, \boldsymbol{y}_i^{(c)}) and yields a representation vector \boldsymbol{r}_i of that point. The aggregator is a commutative operation \oplus that takes all the representation vectors \{\boldsymbol{r}_i\} and combines them into a single representation vector \boldsymbol{r} = \boldsymbol{r}_1 \oplus \dots \oplus \boldsymbol{r}_n. In this work, the aggregator simply computes the mean. Finally, the decoder g takes a target coordinate \boldsymbol{x}_i^{(t)} and the representation vector \boldsymbol{r}, and (for regression tasks) predicts the mean and variance for each function value that is to be estimated.
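To make this concrete, here is a minimal sketch of such a model in TensorFlow. The layer sizes, activations, and output parametrization are illustrative choices on my part, not necessarily those used by Garnelo et al.:

```python
import tensorflow as tf

class CNP(tf.keras.Model):
    """Minimal CNP sketch. Shapes: x_context (B, n_c, d_x), y_context (B, n_c, d_y),
    x_target (B, n_t, d_x). Layer sizes are illustrative, not the paper's exact ones."""

    def __init__(self, repr_dim=128, hidden_dim=128):
        super().__init__()
        # Encoder h: one shared MLP applied to every (x_c, y_c) pair.
        self.encoder = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden_dim, activation="relu"),
            tf.keras.layers.Dense(hidden_dim, activation="relu"),
            tf.keras.layers.Dense(repr_dim),
        ])
        # Decoder g: maps (x_t, r) to a mean and a log standard deviation.
        self.decoder = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden_dim, activation="relu"),
            tf.keras.layers.Dense(hidden_dim, activation="relu"),
            tf.keras.layers.Dense(2),  # [mu, log_sigma] per 1-D target value
        ])

    def call(self, x_context, y_context, x_target):
        # Encode every context point individually ...
        r_i = self.encoder(tf.concat([x_context, y_context], axis=-1))
        # ... and aggregate with the (commutative) mean operation.
        r = tf.reduce_mean(r_i, axis=1, keepdims=True)
        r = tf.tile(r, [1, tf.shape(x_target)[1], 1])  # one copy per target point
        # Decode each target coordinate together with the global representation r.
        out = self.decoder(tf.concat([x_target, r], axis=-1))
        mu, log_sigma = tf.split(out, 2, axis=-1)
        return mu, tf.exp(log_sigma)
```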
Here, both h and g are multi-layer perceptrons (MLPs) that learn to parametrize the stochastic process by minimizing the negative conditional log-probability of \boldsymbol{y}^{(t)}, given the context points and \boldsymbol{x}^{(t)}.
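Using the CNP sketch above, a training step that minimizes this negative conditional log-likelihood could look roughly as follows; the optimizer, learning rate, and helper names are my own choices, not the paper’s exact training setup:

```python
import numpy as np
import tensorflow as tf

def negative_log_likelihood(mu, sigma, y_target):
    # Factorized Gaussian negative log-probability, averaged over target points:
    # -log N(y | mu, sigma^2) = 0.5*log(2*pi*sigma^2) + (y - mu)^2 / (2*sigma^2)
    nll = 0.5 * tf.math.log(2.0 * np.pi * sigma**2) + (y_target - mu)**2 / (2.0 * sigma**2)
    return tf.reduce_mean(nll)

optimizer = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(model, x_context, y_context, x_target, y_target):
    # One gradient step on the negative conditional log-likelihood.
    with tf.GradientTape() as tape:
        mu, sigma = model(x_context, y_context, x_target)
        loss = negative_log_likelihood(mu, sigma, y_target)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```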
OK, now on to applications. I reproduced two of the application examples that Garnelo et al. demonstrate. I plan to add more results and a generalization to NPs at a later stage. Please refer to my GitHub repository for updates.
As a first example, we generate functions from a GP with a squared-exponential kernel and train a CNP to predict these functions from a set of context points. After only 10^5 episodes of training, the CNP already performs quite well:
In the plot above, the gray line is the mean function that the CNP predicts, and the blue band is the predicted variance. For this example, the CNP is provided with the context points indicated by red crosses, as well as 100 target points on the interval [-1,1] that constitute the graph.
Notice that the CNP is less certain in regions far away from the given context points (see left panel around x \approx 0.75). When more points are given, the prediction improves and the uncertainty decreases.
In contrast to a GP, however, the CNP does not exactly reproduce the function values at the context points, even though they are given.
Of course, a GP with the same kernel as the one from which the ground-truth function was sampled performs better:
but this is kind of an unfair comparison, since the CNP had to “learn the kernel function” and we did not spend much time on training.
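For completeness, here is a rough sketch of how such training functions can be drawn from a GP with a squared-exponential kernel. The interval [-1, 1] and the 100 points match the setup above; the length scale and jitter are my own choices:

```python
import numpy as np

def sample_gp_curve(n_points=100, length_scale=0.4, jitter=1e-6, rng=None):
    """Sample one function from a GP with a squared-exponential kernel on [-1, 1]."""
    rng = rng or np.random.default_rng()
    x = np.sort(rng.uniform(-1.0, 1.0, size=(n_points, 1)), axis=0)
    # Squared-exponential (RBF) kernel matrix with a small jitter for stability.
    K = np.exp(-0.5 * (x - x.T) ** 2 / length_scale**2) + jitter * np.eye(n_points)
    y = np.linalg.cholesky(K) @ rng.standard_normal((n_points, 1))
    # During training, a random subset of these points serves as the context set,
    # while all points serve as targets.
    return x.astype(np.float32), y.astype(np.float32)
```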
Now comes the really cool thing about CNPs. Since they scale linearly with the number of sample points, and since they can learn to parametrize any stochastic process, we can also conceive of the set of all possible handwritten digit images as samples from a stochastic process and use a CNP to learn this process.
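Concretely, each image is treated as a function from pixel coordinates to pixel intensities, so a context set is simply a random subset of pixels. A minimal sketch of this conversion (the coordinate normalization and data types are my choices):

```python
import numpy as np

def image_to_context(image, n_context, rng=None):
    """Pick n_context random pixels of a grayscale image and return them as
    (normalized coordinate, intensity) pairs for the CNP."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    idx = rng.choice(h * w, size=n_context, replace=False)
    rows, cols = np.unravel_index(idx, (h, w))
    x_context = np.stack([rows / (h - 1), cols / (w - 1)], axis=-1)  # in [0, 1]^2
    y_context = image[rows, cols].reshape(-1, 1)
    return x_context.astype(np.float32), y_context.astype(np.float32)
```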
After just 4.8\times10^5 training episodes, the same CNP that I used for 1-D regression above has learned to predict the shapes of handwritten digits, given a few context pixels:
Garnelo et al.’s results look much nicer than mine, but my representation vector was only half the size of the one they used and they probably also spent more resources on training the CNP.
Opinion, and what I have learned
CNPs and their generalizations hold great promise, as they alleviate the unfavorable cubic scaling of Gaussian processes and have already been shown to be powerful tools in the domain of computer vision.
Garnelo et al. provide enough details about the implementation, so it was straightforward to reproduce their work. I only encountered one minor issue: when training the CNP, I sometimes found that it produced NaN values. This problem disappears if we enforce a positive lower bound on the output variance.
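Concretely, instead of simply exponentiating the raw decoder output (as in the sketch above), one can map it through a softplus and add a small positive floor, so the predicted standard deviation can never collapse to zero. The constant 0.1 is my choice:

```python
import tensorflow as tf

def bounded_sigma(log_sigma, floor=0.1):
    # Softplus keeps the standard deviation positive; the floor keeps it
    # away from zero, which prevents the log-likelihood from blowing up.
    return floor + (1.0 - floor) * tf.nn.softplus(log_sigma)
```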
Implementing CNPs was a good exercise for me to learn more about TensorFlow. Since the results are very rewarding and the implementation is not too difficult, I recommend you try this yourself!
There are quite a few ways in which CNPs can be improved, which leads us to NPs and their extensions. But this is material for a later post.
References
1. Garnelo, M. et al. Conditional Neural Processes. arXiv:1807.01613 [cs, stat] (2018).
2. Rasmussen, C. E. & Williams, C. K. I. Gaussian processes for machine learning. (MIT Press, 2008).
3. Garnelo, M. et al. Neural Processes. arXiv:1807.01622 [cs, stat] (2018).
4. Eslami, S. M. A. et al. Neural scene representation and rendering. Science 360, 1204–1210 (2018).