{"id":1621,"date":"2022-05-05T12:20:44","date_gmt":"2022-05-05T12:20:44","guid":{"rendered":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/?p=1621"},"modified":"2022-05-05T14:56:36","modified_gmt":"2022-05-05T14:56:36","slug":"gaussian-processes-in-regression","status":"publish","type":"post","link":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/2022\/05\/05\/gaussian-processes-in-regression\/","title":{"rendered":"Gaussian Processes in Regression"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"1621\" class=\"elementor elementor-1621\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-a40b97b elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"a40b97b\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-8c85f06\" data-id=\"8c85f06\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-f6a9782 elementor-widget elementor-widget-text-editor\" data-id=\"f6a9782\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Lets assume we are interested in making predictions, based on a set of data points represented in Figure 1. 
Naturally, if we want to make predictions for a specific value <span class=\"wp-katex-eq\" data-display=\"false\"> x_* <\/span> where\u00a0<span style=\"font-family: Poppins;font-size: 13px;font-style: normal;font-weight: 400\"><span class=\"wp-katex-eq\" data-display=\"false\"> x <\/span><\/span><span style=\"font-size: 13px\">\u00a0is a continuous variable, then it is very unlikely that we have already made an observation at this new value\u00a0<\/span><span style=\"font-family: Poppins;font-size: 13px;font-style: normal;font-weight: 400\"><span class=\"wp-katex-eq\" data-display=\"false\"> x_* <\/span><\/span><span style=\"font-size: 13px\">. Thus, having discrete data is very limiting. Ideally, we would like to find a function,\u00a0<\/span><span style=\"font-family: Poppins;font-size: 13px;font-style: normal;font-weight: 400\"><span class=\"wp-katex-eq\" data-display=\"false\"> f <\/span><\/span><span style=\"font-size: 13px\">, which can be used instead of the discrete data to make predictions. This can be done in several different ways, one of which introduces Gaussian Processes in the context of regression. To obtain the desired function\u00a0<\/span><span style=\"font-family: Poppins;font-size: 13px;font-style: normal;font-weight: 400\"><span class=\"wp-katex-eq\" data-display=\"false\"> f <\/span><\/span><span style=\"font-size: 13px\">\u00a0which goes through all of our observed data points, we attribute a prior probability to every possible function, reflecting how likely we believe each one is to represent our data. It is immediately obvious that assigning a prior probability to the infinite number of existing functions is a major limitation, as it would potentially require an infinite amount of time. This problem is solved by Gaussian Processes, which can be used as a prior probability\u00a0<\/span><span style=\"font-size: 13px\">distribution over all the functions. 
Inference in a GP based on any finite subset of the function f, while ignoring the infinitely many remaining points, produces the same solution as if we had accounted for them all <\/span><a style=\"font-size: 13px\" href=\"http:\/\/gaussianprocess.org\/gpml\/chapters\/RW.pdf\" target=\"_blank\" rel=\"noopener\">(Williams &amp; Rasmussen 2006)<\/a><span style=\"font-size: 13px\">.<\/span><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-cee7196 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"cee7196\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-0cab826\" data-id=\"0cab826\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-4daf9f4 elementor-widget elementor-widget-image\" data-id=\"4daf9f4\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"768\" height=\"498\" src=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/dataset-768x498.png\" class=\"attachment-medium_large size-medium_large wp-image-1634\" alt=\"Dataset\" srcset=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/dataset-768x498.png 768w, 
https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/dataset-300x194.png 300w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/dataset-1024x664.png 1024w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/dataset.png 1276w\" sizes=\"(max-width: 768px) 100vw, 768px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Figure 1: Discrete time series data plot containing 13 observations.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-9fc469c elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"9fc469c\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-5226020\" data-id=\"5226020\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-29af143 elementor-widget elementor-widget-heading\" data-id=\"29af143\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Definition (Gaussian Process)<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-6d4bb98 
elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6d4bb98\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-afc1c1d\" data-id=\"afc1c1d\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-c3756ee elementor-widget elementor-widget-text-editor\" data-id=\"c3756ee\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>A Gaussian Process is a collection of random variables (indexed by time or space), any finite number of which have a joint Gaussian distribution (i.e. a multivariate normal distribution).\u00a0<\/p><p>We denote a GP as follows:<\/p><p style=\"text-align: center\"><span class=\"wp-katex-eq\" data-display=\"false\"> f(\\boldsymbol{x}) \\ \\sim \\ \\mathcal{GP}(m(\\boldsymbol{x}), k(\\boldsymbol{x}, \\boldsymbol{x&#039;})),\u00a0 <\/span>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0(1)<\/p><p>where:\u00a0<\/p><ul><li><span class=\"wp-katex-eq\" data-display=\"false\"> \\boldsymbol{x} <\/span> and <span class=\"wp-katex-eq\" data-display=\"false\"> \\boldsymbol{x^&#039;} <\/span> are input vectors of dimension <span class=\"wp-katex-eq\" data-display=\"false\"> D <\/span>,<\/li><li><span class=\"wp-katex-eq\" data-display=\"false\"> m(\\boldsymbol{x}) <\/span> is the mean function,<\/li><li><span class=\"wp-katex-eq\" data-display=\"false\"> k(\\boldsymbol{x,x^&#039;}) <\/span> is the covariance function 
also known as the kernel.\u00a0<\/li><\/ul>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-2d96186 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"2d96186\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-97b6ee4\" data-id=\"97b6ee4\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-aba0297 elementor-widget elementor-widget-text-editor\" data-id=\"aba0297\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Therefore, a GP is completely specified by its mean function and kernel, corresponding to Equations (2) and (3) respectively. However, it is common to set the mean function to be zero and de-mean the data. 
This is done to simplify the model and works well when we are only interested in its local behaviour.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-dbb9618 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"dbb9618\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-9cfb03d\" data-id=\"9cfb03d\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-61b7785 elementor-widget elementor-widget-text-editor\" data-id=\"61b7785\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span class=\"wp-katex-eq\" data-display=\"false\"> m(\\boldsymbol{x}) = \\mathbb{E}[f(\\boldsymbol{x})], <\/span>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0(2)<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-8eb0990 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"8eb0990\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container 
elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-1928f51\" data-id=\"1928f51\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-ed8b0d9 elementor-widget elementor-widget-text-editor\" data-id=\"ed8b0d9\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span class=\"wp-katex-eq\" data-display=\"false\"> k(\\boldsymbol{x}, \\boldsymbol{x&#039;}) = \\mathbb{E}[(f(\\boldsymbol{x})-m(\\boldsymbol{x}))(f(\\boldsymbol{x&#039;})-m(\\boldsymbol{x&#039;}))]. <\/span>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 (3)<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-ede46ea elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"ede46ea\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-0f307ae\" data-id=\"0f307ae\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-ccd734d elementor-widget elementor-widget-text-editor\" data-id=\"ccd734d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Generally, the covariance function 
chosen will contain some free parameters which in the context of GPs are referred to as hyperparameters. Hyperparameters have a very strong influence on the predictions from our GP. Graphically, they define the \u2018shape\u2019 of our GP and control the level of fitting to the data. There exist many different covariance functions, and the methods used to choose the right one are discussed in Section 4. The hyperparameters for the squared-exponential covariance function (Equation (4)) are the signal variance <span class=\"wp-katex-eq\" data-display=\"false\"> \\sigma^2_f <\/span>, the length-scale <span class=\"wp-katex-eq\" data-display=\"false\"> l <\/span> and the noise variance <span class=\"wp-katex-eq\" data-display=\"false\"> \\sigma^2_n <\/span>.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-4aa55c6 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"4aa55c6\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-ffe2c36\" data-id=\"ffe2c36\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-2f5ae3c elementor-widget elementor-widget-text-editor\" data-id=\"2f5ae3c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><span class=\"wp-katex-eq\" data-display=\"false\"> k(x_a,x_b)=\\sigma^2_{f}\\exp\\left(-\\frac{1}{2l^2}(x_a - x_b)^2\\right)+\\sigma^2_{n}\\mathbb{I}. 
<\/span>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 (4)<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-26f0c1f elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"26f0c1f\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-17d1fd3\" data-id=\"17d1fd3\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-17c02c3 elementor-widget elementor-widget-text-editor\" data-id=\"17c02c3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>This is one of the most commonly used covariance functions, as samples from a GP with a squared-exponential covariance function are continuous and infinitely differentiable. 
Continuity produces smooth curves, which makes the results easy to interpret, while the infinite differentiability is useful for incorporating prior knowledge about likely values of the derivatives.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-6f8c802 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6f8c802\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-6473fb0\" data-id=\"6473fb0\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-07f40f1 elementor-widget elementor-widget-text-editor\" data-id=\"07f40f1\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>GPs are interesting because of their marginalisation property. This property means the distribution of the existing data will not be affected by the addition of new data points. For example, if the GP specifies <span class=\"wp-katex-eq\" data-display=\"false\"> (z_1, z_2) \\sim \\mathcal{N}(\\boldsymbol{\\mu}, \\Sigma) <\/span> then <span class=\"wp-katex-eq\" data-display=\"false\"> z_1 \\sim \\mathcal{N}(\\mu_1, \\Sigma_{1,1}) <\/span> where <span class=\"wp-katex-eq\" data-display=\"false\"> \\Sigma_{1,1} <\/span> is the relevant submatrix of <span class=\"wp-katex-eq\" data-display=\"false\"> \\Sigma <\/span>. 
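As a concrete illustration of the squared-exponential covariance in Equation (4), a vectorised version might be written as follows. This is a minimal sketch, not the code from Section 8; the function name `sq_exp_kernel` and the default hyperparameter values are our own assumptions:

```python
import numpy as np

def sq_exp_kernel(xa, xb, sigma_f=1.0, length_scale=0.8, sigma_n=0.0):
    """Squared-exponential covariance between two sets of scalar inputs.

    Returns a len(xa) x len(xb) covariance matrix. The white-noise term
    sigma_n^2 * I only belongs on K(X, X), so it is added when the two
    input sets coincide.
    """
    xa = np.asarray(xa, dtype=float).reshape(-1, 1)
    xb = np.asarray(xb, dtype=float).reshape(1, -1)
    K = sigma_f**2 * np.exp(-0.5 * (xa - xb) ** 2 / length_scale**2)
    if K.shape[0] == K.shape[1] and np.array_equal(xa.ravel(), xb.ravel()):
        K = K + sigma_n**2 * np.eye(K.shape[0])
    return K
```

Note that the kernel is symmetric and largest on the diagonal: covariance decays as the inputs move apart, at a rate governed by the length-scale.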
Given a chosen covariance function, we can now produce a random Gaussian vector, where <span class=\"wp-katex-eq\" data-display=\"false\"> X_\u2217 <\/span> is a <span class=\"wp-katex-eq\" data-display=\"false\"> D \u00d7 n <\/span> matrix of test inputs and <span class=\"wp-katex-eq\" data-display=\"false\"> K <\/span> is the covariance function,<\/p><p style=\"text-align: center\"><span class=\"wp-katex-eq\" data-display=\"false\"> \\textbf{f}_{*} \\ \\sim \\ \\mathcal{N}(\\boldsymbol{0}, K(X_*,X_*)). <\/span>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0(5)<\/p><p>Throughout this paper, the bold lowercase notation <span class=\"wp-katex-eq\" data-display=\"false\"> x <\/span> represents a special case of <span class=\"wp-katex-eq\" data-display=\"false\"> X <\/span> for dimension <span class=\"wp-katex-eq\" data-display=\"false\"> D \u00d7 1 <\/span>; the corresponding covariance functions are written as <span class=\"wp-katex-eq\" data-display=\"false\"> k <\/span> and <span class=\"wp-katex-eq\" data-display=\"false\"> K <\/span> for <span class=\"wp-katex-eq\" data-display=\"false\"> x <\/span> and <span class=\"wp-katex-eq\" data-display=\"false\"> X <\/span>, respectively. Similarly, the subscript \u201c \u2217 \u201d indicates test inputs (i.e. the indices of the unobserved values we are trying to predict). 
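Drawing the random vector in Equation (5) amounts to sampling from a zero-mean multivariate normal: build K(X_*, X_*), factorise it, and multiply a standard normal draw by the factor. The sketch below assumes a squared-exponential kernel with arbitrary hyperparameter values; `sample_gp_prior` is an illustrative name, not the code referred to in Section 8:

```python
import numpy as np

def sample_gp_prior(x_star, n_samples=3, sigma_f=1.0, length_scale=0.8, jitter=1e-8):
    """Draw n_samples functions f_* ~ N(0, K(X_*, X_*)) at the test inputs x_star."""
    x = np.asarray(x_star, dtype=float).reshape(-1, 1)
    K = sigma_f**2 * np.exp(-0.5 * (x - x.T) ** 2 / length_scale**2)
    # A tiny jitter on the diagonal keeps the Cholesky factorisation stable.
    L = np.linalg.cholesky(K + jitter * np.eye(len(x)))
    # If z ~ N(0, I) then L z ~ N(0, L L^T) = N(0, K).
    return L @ np.random.standard_normal((len(x), n_samples))

x_star = np.linspace(0.0, 5.0, 50)
samples = sample_gp_prior(x_star)  # one column per drawn function
```

Plotting each column of `samples` against `x_star` produces curves of the kind shown in Figure 2.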
Figure 2 shows three such samples generated by Equation (5), these are functions drawn randomly from a GP prior.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-077b14a elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"077b14a\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-df91227\" data-id=\"df91227\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-4e27a9a elementor-widget elementor-widget-image\" data-id=\"4e27a9a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"768\" height=\"485\" src=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/prior-768x485.png\" class=\"attachment-medium_large size-medium_large wp-image-1791\" alt=\"Prior\" srcset=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/prior-768x485.png 768w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/prior-300x190.png 300w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/prior-1024x647.png 1024w, 
https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/prior.png 1331w\" sizes=\"(max-width: 768px) 100vw, 768px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Figure 2: Three functions drawn randomly from a GP prior; the blue points indicate sampled response variable values; the other two functions have (less correctly) been drawn as lines by joining sampled points and smoothing the curve. The shaded area represents the pointwise mean plus and minus two times the standard deviation for each input value (corresponding to the 95% confidence region).<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-e90c2dc elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"e90c2dc\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-7876a5b\" data-id=\"7876a5b\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-2880220 elementor-widget elementor-widget-text-editor\" data-id=\"2880220\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>Next, we would also like to account for the information given by the observed data points (e.g. the data in Figure 1). 
This can be done using the joint distribution of the training outputs <span class=\"wp-katex-eq\" data-display=\"false\"> f <\/span> and the test outputs <span class=\"wp-katex-eq\" data-display=\"false\"> f_{*} <\/span>, given by:<\/p><p style=\"text-align: center\"><span class=\"wp-katex-eq\" data-display=\"false\"> \\begin{bmatrix} \\textbf{f} \\\\ \\textbf{f}_{*} \\end{bmatrix} \\ \\sim \\ \\mathcal{N}\\Bigg(\\boldsymbol{0}, \\begin{bmatrix} K(X,X) &amp; K(X,X_*) \\\\ K(X_*,X) &amp; K(X_*,X_*) \\end{bmatrix}\\Bigg). <\/span>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 (6)<\/p><p>However, this assumes that our observed values are noise-free. In practice, it is more realistic to assume that some white noise exists. Writing it as <span class=\"wp-katex-eq\" data-display=\"false\"> y = f(x) + \u03f5 <\/span>, the joint distribution of the observed target values and the function values at the chosen test locations is now:<\/p><p style=\"text-align: center\"><span class=\"wp-katex-eq\" data-display=\"false\"> \\begin{bmatrix} \\textbf{y} \\\\ \\textbf{f}_{*} \\end{bmatrix} \\ \\sim \\ \\mathcal{N}\\Bigg(\\boldsymbol{0}, \\begin{bmatrix} K(X,X) + \\sigma_{n}^{2}\\mathbb{I} &amp; K(X,X_*) \\\\ K(X_*,X) &amp; K(X_*,X_*) \\end{bmatrix}\\Bigg). 
<\/span>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 (7)<\/p><p>Here, <span class=\"wp-katex-eq\" data-display=\"false\"> K(X, X_\u2217) <\/span> is an <span class=\"wp-katex-eq\" data-display=\"false\"> n \u00d7 n_\u2217 <\/span> matrix for the n training points and <span class=\"wp-katex-eq\" data-display=\"false\"> n_\u2217 <\/span> chosen test points.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-88c8833 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"88c8833\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-91f6323\" data-id=\"91f6323\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-4b312f2 elementor-widget elementor-widget-text-editor\" data-id=\"4b312f2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>We can finally obtain the posterior distribution over the functions that pass through the observed data points. Instead of checking functions one by one, we can condition the joint Gaussian prior on the observed data. 
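This conditioning step can be sketched in a few lines. The snippet below is a minimal illustration with assumed hyperparameter values, not the code from Section 8 (`gp_posterior` is an illustrative name); following the surrounding text, it uses a Cholesky factorisation in place of a direct matrix inverse for numerical stability:

```python
import numpy as np

def gp_posterior(X_train, y, X_star, sigma_f=1.0, length_scale=0.8, sigma_n=1e-2):
    """Posterior mean and covariance of f_* given noisy observations y = f(X) + eps."""
    def k(a, b):  # squared-exponential covariance
        a = np.asarray(a, dtype=float).reshape(-1, 1)
        b = np.asarray(b, dtype=float).reshape(1, -1)
        return sigma_f**2 * np.exp(-0.5 * (a - b) ** 2 / length_scale**2)

    K = k(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    K_s = k(X_train, X_star)            # n x n_* cross-covariance
    L = np.linalg.cholesky(K)           # K = L L^T, so we solve instead of inverting
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, np.asarray(y, dtype=float)))
    mean = K_s.T @ alpha                # K(X_*, X) [K + sigma_n^2 I]^{-1} y
    v = np.linalg.solve(L, K_s)
    cov = k(X_star, X_star) - v.T @ v   # posterior covariance of f_*
    return mean, cov
```

At a training input, the posterior mean returned is close to the observation and the posterior variance is small, which is exactly the behaviour visible in Figure 3.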
This is done by sampling the function values <span class=\"wp-katex-eq\" data-display=\"false\"> f_* <\/span> from the joint posterior distribution, evaluating the mean and covariance matrix from:<\/p><p style=\"text-align: center\"><span class=\"wp-katex-eq\" data-display=\"false\"> \\textbf{f}_* | X_{*}, X, \\boldsymbol{y}\u00a0 \\sim\u00a0 \u00a0\\mathcal{N}\\big(K(X_{*},X) [K(X,X)+\\sigma_{n}^{2}\\mathbb{I}]^{-1}\\boldsymbol{y},\u00a0 K(X_{*},X_{*})-K(X_{*},X)[K(X,X)+\\sigma_{n}^{2}\\mathbb{I}]^{-1}K(X,X_{*})\\big). \u00a0<\/span>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0(8)<\/p><p>Generating samples from Equation (8), we can produce Figure 3. Although this is straightforward in theory, in practice an additional step using Cholesky factorisation was included to avoid numerical instabilities from inverting the kernels (see code in Section 8). The coloured functions are directly sampled from the posterior, whereas the grey areas are obtained by calculating the pointwise mean of all the sampled functions plus and minus twice the standard deviation for each input value (this corresponds to a 95% confidence region).<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-18d1924 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"18d1924\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-3b879cb\" data-id=\"3b879cb\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-b37260b elementor-widget elementor-widget-image\" data-id=\"b37260b\" 
data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"1024\" height=\"732\" src=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noisefree-1024x732.png\" class=\"attachment-large size-large wp-image-1892\" alt=\"Noisefree\" srcset=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noisefree-1024x732.png 1024w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noisefree-300x215.png 300w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noisefree-768x549.png 768w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noisefree.png 1078w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Figure 3.A: Three randomly drawn functions from the GP posterior, i.e. the prior conditioned on the 13 observations without noise. A squared-exponential covariance function was used with hyperparameters signal variance = 1, length-scale = 0.8, and noise variance = 0. The shaded area represents the pointwise mean plus and minus two times the standard deviation for each input value (corresponding to the 95% confidence region). 
These were produced in RStudio using Cholesky factorisation to avoid numerical instabilities when inverting the kernels.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-top-column elementor-element elementor-element-9b2f337\" data-id=\"9b2f337\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-0b3e228 elementor-widget elementor-widget-image\" data-id=\"0b3e228\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"726\" src=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noise-1024x726.png\" class=\"attachment-large size-large wp-image-1893\" alt=\"Noise\" srcset=\"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noise-1024x726.png 1024w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noise-300x213.png 300w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noise-768x545.png 768w, https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-content\/uploads\/sites\/37\/2022\/05\/noise.png 1076w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Figure 3.B: Three randomly drawn functions from the GP posterior, i.e. the prior conditioned on the 13 observations with noise. 
A squared exponential covariance function was used with hyperparameters: signal variance = 1, length-scale = 0.8, and noise parameter = 1e\u22122. The shaded area represents the pointwise mean plus and minus two standard deviations at each input value (the 95% confidence region). The samples were drawn in RStudio using Cholesky factorisation, which avoids the numerical instabilities of directly inverting the kernel matrix.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-8f2aacb elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"8f2aacb\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-bf92f8d\" data-id=\"bf92f8d\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-252c7d6 elementor-widget elementor-widget-heading\" data-id=\"252c7d6\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\"> Code availability\n<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-deb5ade elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"deb5ade\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div 
class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-9938ccc\" data-id=\"9938ccc\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-34d8532 elementor-widget elementor-widget-text-editor\" data-id=\"34d8532\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>The code used to produce results in Section 2 is available at the following link:<\/p><p><a href=\"https:\/\/github.com\/NewmanTHP\/Gaussian-Process-regression\/blob\/main\/GitHub%20GP%20code.R\">Gaussian-Process-regression\/GitHub GP code.R at main \u00b7 NewmanTHP\/Gaussian-Process-regression \u00b7 GitHub<\/a><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Lets assume we are interested in making predictions, based on a set of data points represented in Figure 1. Naturally, if we want to make predictions for a specific value where\u00a0\u00a0is a continuous variable, then it is very unlikely to have already made an observation for this new value\u00a0. 
Thus, having discrete data is very [&hellip;]<\/p>\n","protected":false},"author":40,"featured_media":1908,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"ocean_post_layout":"full-width","ocean_both_sidebars_style":"","ocean_both_sidebars_content_width":0,"ocean_both_sidebars_sidebars_width":0,"ocean_sidebar":"0","ocean_second_sidebar":"0","ocean_disable_margins":"enable","ocean_add_body_class":"","ocean_shortcode_before_top_bar":"","ocean_shortcode_after_top_bar":"","ocean_shortcode_before_header":"","ocean_shortcode_after_header":"","ocean_has_shortcode":"","ocean_shortcode_after_title":"","ocean_shortcode_before_footer_widgets":"","ocean_shortcode_after_footer_widgets":"","ocean_shortcode_before_footer_bottom":"","ocean_shortcode_after_footer_bottom":"","ocean_display_top_bar":"default","ocean_display_header":"default","ocean_header_style":"","ocean_center_header_left_menu":"0","ocean_custom_header_template":"0","ocean_custom_logo":0,"ocean_custom_retina_logo":0,"ocean_custom_logo_max_width":0,"ocean_custom_logo_tablet_max_width":0,"ocean_custom_logo_mobile_max_width":0,"ocean_custom_logo_max_height":0,"ocean_custom_logo_tablet_max_height":0,"ocean_custom_logo_mobile_max_height":0,"ocean_header_custom_menu":"0","ocean_menu_typo_font_family":"0","ocean_menu_typo_font_subset":"","ocean_menu_typo_font_size":0,"ocean_menu_typo_font_size_tablet":0,"ocean_menu_typo_font_size_mobile":0,"ocean_menu_typo_font_size_unit":"px","ocean_menu_typo_font_weight":"","ocean_menu_typo_font_weight_tablet":"","ocean_menu_typo_font_weight_mobile":"","ocean_menu_typo_transform":"","ocean_menu_typo_transform_tablet":"","ocean_menu_typo_transform_mobile":"","ocean_menu_typo_line_height":0,"ocean_menu_typo_line_height_tablet":0,"ocean_menu_typo_line_height_mobile":0,"ocean_menu_typo_
line_height_unit":"","ocean_menu_typo_spacing":0,"ocean_menu_typo_spacing_tablet":0,"ocean_menu_typo_spacing_mobile":0,"ocean_menu_typo_spacing_unit":"","ocean_menu_link_color":"","ocean_menu_link_color_hover":"","ocean_menu_link_color_active":"","ocean_menu_link_background":"","ocean_menu_link_hover_background":"","ocean_menu_link_active_background":"","ocean_menu_social_links_bg":"","ocean_menu_social_hover_links_bg":"","ocean_menu_social_links_color":"","ocean_menu_social_hover_links_color":"","ocean_disable_title":"default","ocean_disable_heading":"default","ocean_post_title":"","ocean_post_subheading":"","ocean_post_title_style":"","ocean_post_title_background_color":"","ocean_post_title_background":0,"ocean_post_title_bg_image_position":"","ocean_post_title_bg_image_attachment":"","ocean_post_title_bg_image_repeat":"","ocean_post_title_bg_image_size":"","ocean_post_title_height":0,"ocean_post_title_bg_overlay":0.5,"ocean_post_title_bg_overlay_color":"","ocean_disable_breadcrumbs":"default","ocean_breadcrumbs_color":"","ocean_breadcrumbs_separator_color":"","ocean_breadcrumbs_links_color":"","ocean_breadcrumbs_links_hover_color":"","ocean_display_footer_widgets":"default","ocean_display_footer_bottom":"default","ocean_custom_footer_template":"0","slim_seo":{"title":"Gaussian Processes in Regression - Thomas Newman","description":"Lets assume we are interested in making predictions, based on a set of data points represented in Figure 1. 
Naturally, if we want to make predictions for a spec"},"ocean_post_oembed":"","ocean_post_self_hosted_media":"","ocean_post_video_embed":"","ocean_link_format":"","ocean_link_format_target":"self","ocean_quote_format":"","ocean_quote_format_link":"post","ocean_gallery_link_images":"off","ocean_gallery_id":[],"footnotes":""},"categories":[11],"tags":[],"class_list":["post-1621","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-statistics","entry","has-media"],"_links":{"self":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/posts\/1621","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/users\/40"}],"replies":[{"embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/comments?post=1621"}],"version-history":[{"count":266,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/posts\/1621\/revisions"}],"predecessor-version":[{"id":1911,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/posts\/1621\/revisions\/1911"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/media\/1908"}],"wp:attachment":[{"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/media?parent=1621"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/categories?post=1621"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www
.lancaster.ac.uk\/stor-i-student-sites\/thomas-newman\/wp-json\/wp\/v2\/tags?post=1621"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
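The figure captions describe drawing posterior samples from a GP with a squared exponential kernel, using Cholesky factorisation rather than direct inversion of the kernel matrix. The post's own implementation is in R (see the GP code.R link above); the sketch below is a minimal equivalent in Python/NumPy, with all function and variable names illustrative rather than taken from the linked script.

```python
import numpy as np

def sq_exp_kernel(x1, x2, signal_var=1.0, length_scale=0.8):
    """Squared exponential covariance, with the hyperparameters of Figures 3.A/3.B."""
    d = x1[:, None] - x2[None, :]
    return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_samples(x_train, y_train, x_star, noise_var=1e-2,
                         n_samples=3, seed=0, jitter=1e-8):
    """Posterior mean, pointwise standard deviation, and random function draws,
    computed via Cholesky factorisation instead of inverting the kernel matrix."""
    K = sq_exp_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = sq_exp_kernel(x_train, x_star)
    K_ss = sq_exp_kernel(x_star, x_star)

    L = np.linalg.cholesky(K)                          # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                               # posterior mean at x_star
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                               # posterior covariance
    sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))    # for the +/- 2 sd shaded band

    # Draw sample functions: mean + L_post z, with z standard normal.
    rng = np.random.default_rng(seed)
    L_post = np.linalg.cholesky(cov + jitter * np.eye(len(x_star)))
    samples = mean[:, None] + L_post @ rng.standard_normal((len(x_star), n_samples))
    return mean, sd, samples
```

Setting `noise_var=0` (plus the small jitter) reproduces the noise-free conditioning of Figure 3.A, where the posterior interpolates the observations exactly; `noise_var=1e-2` gives the noisy case of Figure 3.B.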