# Markov Random Fields for Super-Resolution

A Summary of

**Markov Networks for Super-Resolution**

by

W. T. Freeman and E. C. Pasztor

## Introduction

There are applications in computer vision in which the task is to infer an unobserved image, called the “scene”, from the observed “image”. Typically, estimating the entire “scene” at once is too complex and infeasible, so a common approach is to process image regions locally and then generalize the interpretations across space. Images can be interpreted by modeling the relationship between local regions of “images” and “scenes”, and between neighboring local “scene” regions. The former allows us to form an initial guess for the “scene”, and the latter propagates the estimate across space. These problems are so-called low-level vision problems. In this paper the authors train on “image”/“scene” pairs and apply Bayesian inference in graphical models. The method is called VISTA, Vision by Image/Scene TrAining. The authors have shown the advantages of the proposed model in several practical applications. Here we focus on the super-resolution application, where the problem is to estimate high-resolution detail from a low-resolution image.

## Markov Networks for low-level vision problems

A common graphical model for low-level vision problems is the Markov network. For a given "image" [math]y[/math], the underlying "scene" [math]x[/math] should be estimated. The posterior probability [math]P(x|y) = cP(x,y)[/math] is used, noting that the parameter [math]c = 1/P(y)[/math] is constant over [math]x[/math]. The best scene estimate [math]\hat{x}[/math] is the minimum mean squared error (MMSE) or the maximum a posteriori (MAP) estimate. Without any approximation, [math]\hat{x}[/math] is difficult to compute. Therefore the "image" and "scene" are divided into patches, and one node of the Markov network is assigned to each patch. Figure 1 depicts the undirected graphical model for this problem, where nodes connected by lines indicate statistical dependencies. Each "scene" node is connected to its corresponding "image" node as well as to its neighboring "scene" nodes. To make use of Markov networks, the unknown parameters are learned from training data in the learning phase, and then in the inference phase the "scene" estimate is made. For a Markov random field, the joint probability over the "scene" [math]x[/math] and the "image" [math]y[/math] is given by:

[math]P(x, y) = \prod_{(i,j)} \psi(x_i, x_j) \prod_k \phi(x_k, y_k) \qquad (1)[/math]

where [math]\psi[/math] and [math]\phi[/math] are potential functions learned from training data; in this paper the authors prefer to call them compatibility functions. One can then write the MAP and MMSE estimates for [math]\hat{x}_j[/math] by taking the maximum over, or marginalizing out, all other variables in the posterior probability, respectively. For discrete variables the expressions are:

[math]x_{j_{MAP}} = \arg\max_{x_j} \max_{\text{all } x_i,\, i \neq j} P(x|y) \qquad (2)[/math]

[math]x_{j_{MMSE}} = \sum_{x_j} x_j \sum_{\text{all } x_i,\, i \neq j} P(x|y) \qquad (3)[/math]

For large networks, Eq. (2) and Eq. (3) are infeasible to evaluate directly; however, the task is easier for networks which are trees or chains.
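As a toy illustration (with made-up compatibility values, not the paper's), the MAP and MMSE estimates of Eq. (2) and Eq. (3) can be evaluated by brute force on a tiny three-node chain; the loop over all [math]S^N[/math] joint states is exactly what makes direct evaluation infeasible for large networks:

```python
import itertools

S = 2          # states per scene node
N = 3          # scene nodes in a chain x1 - x2 - x3
y = [0, 1, 1]  # observed image states, one per node

def phi(x, yk):                 # scene/image compatibility (toy numbers)
    return 0.9 if x == yk else 0.1

def psi(xa, xb):                # neighbor compatibility (toy numbers)
    return 0.8 if xa == xb else 0.2

def joint(xs):                  # unnormalized P(x, y), Eq. (1)
    p = 1.0
    for k in range(N):
        p *= phi(xs[k], y[k])
    for k in range(N - 1):
        p *= psi(xs[k], xs[k + 1])
    return p

# MAP, Eq. (2): the single most probable joint configuration.
x_map = max(itertools.product(range(S), repeat=N), key=joint)

# MMSE, Eq. (3): the posterior mean of x_j, marginalizing the rest.
Z = sum(joint(xs) for xs in itertools.product(range(S), repeat=N))
x_mmse = [sum(xs[j] * joint(xs) for xs in itertools.product(range(S), repeat=N)) / Z
          for j in range(N)]
```

With these values the evidence at nodes 2 and 3 pulls them to state 1, while node 1 stays at its own observation, so the MAP configuration is (0, 1, 1).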

### Inference in Networks without loops

For networks with no loops, inference reduces to a simple “message-passing” rule that computes the MAP and MMSE estimates exactly. For example, for the three-node chain network in Fig. 2 the MAP estimate for node 1 is determined by:

[math]x_{1_{MAP}} = \arg\max_{x_1} \phi(x_1, y_1) \max_{x_2} \psi(x_1, x_2) \phi(x_2, y_2) \max_{x_3} \psi(x_2, x_3) \phi(x_3, y_3) \qquad (4)[/math]

Similar expressions hold for [math]x_{2_{MAP}}[/math] and [math]x_{3_{MAP}}[/math]. Equations (2) and (3) can be computed by iterating the following steps. The MAP estimate at node [math]j[/math] is

[math]x_{j_{MAP}} = \arg\max_{x_j} \phi(x_j, y_j) \prod_k M^k_j \qquad (5)[/math]

where [math]k[/math] runs over all “scene” node neighbors of node [math]j[/math], and [math]M^k_j[/math] is the message from node [math]k[/math] to node [math]j[/math]. The message [math]M^k_j[/math] is calculated by:

[math]M^k_j = \max_{x_k} \psi(x_j, x_k) \phi(x_k, y_k) \prod_{l \neq j} \hat{M}^l_k \qquad (6)[/math]

where [math]\hat{M}^l_k[/math] is [math]M^l_k[/math] from the previous iteration. The initial [math]\hat{M}^k_j[/math]'s are set to column vectors of 1’s, with the same dimension as [math]x_j[/math]. Because the initial messages are all 1’s, at the first iteration all the messages in the network are:

[math]M^k_j = \max_{x_k} \psi(x_j, x_k) \phi(x_k, y_k) \qquad (7)[/math]

The second iteration uses the messages above as the [math]\hat{M}[/math] variables in Eq. (6). For node 1 of the chain:

[math]M^2_1 = \max_{x_2} \psi(x_1, x_2) \phi(x_2, y_2) \hat{M}^3_2[/math]

And thus:

[math]M^2_1 = \max_{x_2} \psi(x_1, x_2) \phi(x_2, y_2) \max_{x_3} \psi(x_2, x_3) \phi(x_3, y_3)[/math]

Eventually, the MAP estimate for [math]x_1[/math] becomes:

[math]x_{1_{MAP}} = \arg\max_{x_1} \phi(x_1, y_1) \max_{x_2} \psi(x_1, x_2) \phi(x_2, y_2) \max_{x_3} \psi(x_2, x_3) \phi(x_3, y_3) \qquad (8)[/math]

The MMSE estimate, Eq. (3), has an analogous formulation, with the [math]\max_{x_k}[/math] of Eq. (8) replaced by [math]\sum_{x_k}[/math] and [math]\arg\max_{x_j}[/math] replaced by [math]\sum_{x_j} x_j[/math].
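The iteration above can be sketched directly. This is a hedged toy example (the compatibility numbers are made up, not the paper's): max-product message passing of Eqs. (5)-(6) on a three-node chain, with messages initialized to all 1's and updated in parallel sweeps:

```python
from math import prod

S, N = 2, 3                    # states per node, chain x1 - x2 - x3
y = [0, 1, 1]                  # observed image states

def phi(x, yk):                # scene/image compatibility (toy numbers)
    return 0.9 if x == yk else 0.1

def psi(xa, xb):               # neighbor compatibility (toy numbers)
    return 0.8 if xa == xb else 0.2

nbrs = {0: [1], 1: [0, 2], 2: [1]}
# M[k][j] is the message vector M^k_j, initialised to all ones
M = {k: {j: [1.0] * S for j in nbrs[k]} for k in nbrs}
for _ in range(N):             # a few parallel sweeps suffice on a chain
    M = {k: {j: [max(psi(xk, xj) * phi(xk, y[k]) *
                     prod(M[l][k][xk] for l in nbrs[k] if l != j)
                     for xk in range(S))
                 for xj in range(S)]
             for j in nbrs[k]}
         for k in nbrs}

# MAP readout at each node: argmax of phi times incoming messages, Eq. (5)
x_map_bp = [max(range(S),
                key=lambda xj, j=j: phi(xj, y[j]) *
                prod(M[k][j][xj] for k in nbrs[j]))
            for j in range(N)]
```

On a chain the messages reach their fixed point after a number of sweeps equal to the chain length, and the per-node readout reproduces the brute-force MAP configuration (0, 1, 1) for these numbers.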

### Networks with loops

Although the belief propagation algorithm is derived for networks without loops, several works (Weiss, 1998; Weiss and Freeman, 1999; Yedidia et al., 2000) demonstrate that applying the same propagation rules works well even in networks with loops (Fig. 3).

## General view of the paper

Basically, this paper aims to develop a new super-resolution scheme utilizing Markov random fields, by which a given low-resolution image is resized to the required high-resolution size while reproducing high-frequency details. Typical interpolation techniques such as bilinear, nearest-neighbor, and bicubic expand the size of the low-resolution image, yet the result suffers from blurriness and, in some cases, blocking artifacts.
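For reference, two of the interpolation baselines mentioned can be sketched in 1-D (an illustrative sketch only; the paper's experiments use bicubic interpolation on 2-D images):

```python
def upsample_nearest(sig, factor):
    """Repeat each sample; preserves hard edges but looks blocky."""
    return [sig[i // factor] for i in range(len(sig) * factor)]

def upsample_linear(sig, factor):
    """Linear interpolation; smooth, but blurs sharp transitions."""
    out = []
    for i in range((len(sig) - 1) * factor + 1):
        t = i / factor
        k = min(int(t), len(sig) - 2)   # left sample index
        a = t - k                       # fractional position
        out.append((1 - a) * sig[k] + a * sig[k + 1])
    return out

edge = [0, 0, 0, 1, 1, 1]               # a step edge
nn = upsample_nearest(edge, 2)           # stays a hard 0/1 step
lin = upsample_linear(edge, 2)           # introduces in-between values
```

The linear result contains intermediate values across the step, which is exactly the blurring of edges that the learned super-resolution approach tries to avoid.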

According to Freeman, in this method many pairs of high-resolution images and their corresponding low-resolution versions are collected as the training set. Given a low-resolution image [math]y[/math], a typical bicubic interpolation algorithm is first employed to create a high-resolution-sized image, which is interpreted as the “image” in the Markov network. Of course, this image does not look sharp, and the goal is to estimate the original high-resolution image, which we may call the “scene” image based on the definitions provided above. To do so, the images in the training set, as well as the interpolated input, are divided into patches so that each patch represents a Markov network node (Figure 1). The [math]y_i[/math]’s in the figure are observed, and are therefore shaded. Subsequently, for each patch in [math]y[/math], the 10 or 20 nearest patches from the training database are selected using Euclidean distance. The remaining task is to find the best patch from this candidate set for each patch in [math]y[/math] using the MAP estimate. In other words, the estimated scene at each patch is always some example from the training set.
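The candidate-selection step can be sketched as follows. This is a hedged sketch with made-up training data and sizes (patch dimension, candidate count, and training-set size are illustrative, not the paper's):

```python
import random

random.seed(0)
PATCH = 9            # e.g. a flattened 3x3 patch (assumed size)
K = 10               # candidates kept per node (the paper keeps 10-20)

# Hypothetical training set: (low-res image patch, high-res scene patch) pairs.
train = [([random.random() for _ in range(PATCH)],
          [random.random() for _ in range(PATCH)]) for _ in range(200)]

def sq_dist(a, b):
    """Squared Euclidean distance between two flattened patches."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def candidates(y_patch, k=K):
    """Return the k training scene patches whose associated image
    patches are closest to y_patch in Euclidean distance."""
    ranked = sorted(train, key=lambda pair: sq_dist(pair[0], y_patch))
    return [scene for image, scene in ranked[:k]]

query = [random.random() for _ in range(PATCH)]
cands = candidates(query)
```

The MAP inference then only ever chooses among these stored scene examples, which is why the estimated scene at each patch is always drawn from the training set.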

## Implementation of low-level vision problems using Markov networks

The “image” and “scene” are arrays of pixel values, so the complete representation is cumbersome. In this research, principal component analysis (PCA) is applied to each patch to find a set of lower-dimensional basis functions. Moreover, the potential functions [math]\psi[/math] and [math]\phi[/math] must be determined. A nice idea would be to define Gaussian mixtures over the joint spaces [math]x_i \times x_j[/math] and [math]x_j \times x_k[/math]; however, this is very difficult. The authors instead prefer a discrete representation, where the most straightforward approach is to evenly sample all possible states of each image and scene variable at each patch.
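The dimensionality reduction can be sketched with a single principal component found by power iteration (a minimal pure-Python sketch; the paper's setup keeps several components per patch, and the patch data here is random for illustration):

```python
import random

random.seed(1)

def pca_first_component(patches, iters=200):
    """Mean and first principal direction of a set of flattened patches,
    via power iteration on the covariance matrix."""
    d, n = len(patches[0]), len(patches)
    mean = [sum(p[i] for p in patches) / n for i in range(d)]
    centered = [[p[i] - mean[i] for i in range(d)] for p in patches]
    # covariance matrix C = X^T X / n
    C = [[sum(row[i] * row[j] for row in centered) / n for j in range(d)]
         for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                       # power iteration
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def project(patch, mean, v):
    """Coefficient of a patch along the principal direction."""
    return sum((patch[i] - mean[i]) * v[i] for i in range(len(v)))

patches = [[random.random() for _ in range(9)] for _ in range(50)]
mean, v = pca_first_component(patches)
coeffs = [project(p, mean, v) for p in patches]
```

Each patch is then represented by a few such coefficients instead of its raw pixels, which keeps the discrete candidate representation manageable.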

For each patch in the "image" [math]y[/math], a set of 10 or 20 "scene" candidates from the training set is chosen. Figure 4 illustrates an example patch in [math]y[/math] and the associated "scene" candidates.

### Learning the Potential (Compatibility) Functions

The potential functions could be defined arbitrarily, but they should be chosen wisely. In this paper, a simple approach is used. The “scene” patches are assumed to overlap, as shown in figure …. Therefore, the scene patches themselves may be used to define the potential functions [math]\psi[/math] between the scene nodes. Recall that for a node [math]x_j[/math] and its neighbor [math]x_k[/math] there are two sets of candidate patches. Assume the [math]l[/math]th candidate at node [math]j[/math] and the [math]m[/math]th candidate at node [math]k[/math] have some overlap. Let [math]d^l_{jk}[/math] denote the pixels of the [math]l[/math]th candidate at node [math]j[/math] that lie in the overlap region, and [math]d^m_{kj}[/math] the corresponding pixels of the [math]m[/math]th candidate at node [math]k[/math]. Treating these as noisy variations of each other, the potential function between node [math]x_j[/math] and node [math]x_k[/math] is given by:

[math]\psi(x^l_j, x^m_k) = \exp\left(-\frac{|d^l_{jk} - d^m_{kj}|^2}{2\sigma_s^2}\right)[/math]

where [math]\sigma_s[/math] has to be determined. The authors assume that the image and scene training samples differ from the "ideal" original high-resolution image by Gaussian noise with covariances [math]\sigma_i[/math] and [math]\sigma_s[/math], respectively. Therefore, the [math]\psi[/math] function between nodes [math]j[/math] and [math]k[/math] can be represented by a matrix whose [math]l[/math]th row and [math]m[/math]th column is [math]\psi(x^l_j, x^m_k)[/math]. The potential function [math]\phi[/math], defined between a "scene" node [math]x[/math] and an "image" node [math]y[/math], is determined from another intuitive assumption. We say that a "scene" candidate [math]x^l_k[/math] is compatible with an observed image patch [math]y_0[/math] if the image patch [math]y^l_k[/math], associated with the scene candidate [math]x^l_k[/math] in the training database, matches [math]y_0[/math]. Of course, it will not match exactly, but we may suppose that the training data is a "noisy" version of the original image, resulting in:

[math]\phi(x^l_k, y_0) = \exp\left(-\frac{|y^l_k - y_0|^2}{2\sigma_i^2}\right)[/math]
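The two compatibility functions can be sketched directly from these definitions. This is a hedged sketch: the sigma values are assumed for illustration, not taken from the paper, and patches are flattened pixel lists:

```python
import math

SIGMA_S = 0.5   # scene noise parameter (assumed value)
SIGMA_I = 0.5   # image noise parameter (assumed value)

def psi(overlap_j, overlap_k, sigma=SIGMA_S):
    """Compatibility of two neighboring scene candidates, from the
    pixels each places in their shared overlap region."""
    d2 = sum((a - b) ** 2 for a, b in zip(overlap_j, overlap_k))
    return math.exp(-d2 / (2 * sigma ** 2))

def phi(image_patch_l, observed_patch, sigma=SIGMA_I):
    """Compatibility of a scene candidate with the observed image patch,
    via the image patch stored with the candidate in the training set."""
    d2 = sum((a - b) ** 2 for a, b in zip(image_patch_l, observed_patch))
    return math.exp(-d2 / (2 * sigma ** 2))

perfect = psi([1.0, 2.0], [1.0, 2.0])   # identical overlaps -> 1.0
close = psi([0.0, 0.0], [0.1, 0.1])     # slight disagreement
far = psi([0.0, 0.0], [1.0, 1.0])       # strong disagreement
```

Identical overlap pixels give the maximal compatibility of 1, and the value decays smoothly as the candidates disagree, which is what lets belief propagation trade off evidence against neighbor agreement.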

## Super-Resolution

For the super-resolution problem, the input is a low-resolution image, and thus the “scene” to be estimated is its high-resolution version. At first glance the task may seem impossible, since the high-resolution data is missing. However, the human eye can identify edges and sharp details in a low-resolution image, and we know this structural information should persist at higher resolutions. The authors attack the problem with the aforementioned Markov model and name the method VISTA. Some preprocessing steps are applied to increase the efficiency of the training set. First, consider a three-scale Laplacian pyramid decomposition. The first sub-band, H, represents the high-frequency detail, while the second and third sub-bands contain the middle-frequency, M, and low-frequency, L, components. The assumption is that the high-frequency band, H, is conditionally independent of the lower-frequency bands, given the middle-frequency band, M, yielding:

[math]P(H | M, L) = P(H | M)[/math]
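The H / M / L band split can be sketched in 1-D. This is a minimal illustrative decomposition, not the paper's pyramid: a box blur stands in for the Laplacian pyramid's lowpass filters, and the signal values are made up:

```python
def box_blur(sig, radius):
    """Simple moving-average lowpass filter with edge clamping."""
    out = []
    for i in range(len(sig)):
        lo, hi = max(0, i - radius), min(len(sig), i + radius + 1)
        out.append(sum(sig[lo:hi]) / (hi - lo))
    return out

def three_bands(sig):
    """Split a signal into high-, mid-, and low-frequency bands."""
    b1 = box_blur(sig, 1)                   # removes the finest detail
    b2 = box_blur(sig, 3)                   # removes mid-scale detail too
    H = [s - a for s, a in zip(sig, b1)]    # high-frequency band
    M = [a - b for a, b in zip(b1, b2)]     # mid-frequency band
    L = b2                                  # low-frequency residual
    return H, M, L

sig = [0, 0, 1, 5, 9, 9, 9, 5, 1, 0, 0]
H, M, L = three_bands(sig)
```

By construction the three bands sum back to the original signal, and the conditional-independence assumption says that predicting H only requires M, not L.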

Hence, to predict the high-frequency components we only need the middle-frequency details, M, not the low-frequency band, L. This hypothesis greatly reduces the computational cost. Second, the researchers assume that the statistical relationships between image bands are independent of image contrast. They therefore take the absolute value of the mid-frequency band and pass it through a lowpass filter, resulting in a contrast-normalized mid-frequency band; the same procedure is applied to the high-frequency band. Figures .. and … show the original “scene”, the input “image”, and the contrast-normalized versions of the high-frequency components. The “image” and “scene” images are divided into local patches. Choosing the patch size is difficult, since a small size gives very little information for estimating the underlying “scene” patches, while large patches make the learning process of the [math]\phi[/math]’s very complex. The authors use a 7×7 patch size for the low-frequency band and 3×3 for the high-frequency components.
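The contrast-normalization step can be sketched as follows. This is a hedged 1-D sketch: the filter radius and the small stabilizing constant are assumed values, not the paper's, and the band values are made up:

```python
def box_blur(sig, radius):
    """Simple moving-average lowpass filter with edge clamping."""
    out = []
    for i in range(len(sig)):
        lo, hi = max(0, i - radius), min(len(sig), i + radius + 1)
        out.append(sum(sig[lo:hi]) / (hi - lo))
    return out

def contrast_normalize(high, mid, radius=2, eps=0.01):
    """Divide both bands by a local contrast estimate: the lowpass-
    filtered absolute value of the mid band (eps avoids divide-by-zero)."""
    local_contrast = box_blur([abs(m) for m in mid], radius)
    norm = [c + eps for c in local_contrast]
    return ([h / n for h, n in zip(high, norm)],
            [m / n for m, n in zip(mid, norm)])

H = [0.0, 0.5, -0.5, 0.0, 0.1]   # high-frequency band (toy values)
M = [0.2, 1.0, -1.0, 0.2, 0.1]   # mid-frequency band (toy values)
Hn, Mn = contrast_normalize(H, M)
```

Dividing by a strictly positive local contrast preserves the sign of every coefficient while equalizing strong- and weak-contrast regions, so training examples from differently exposed images become comparable.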