Abstract: |
The visual cortex of the human brain is able to extract different features from the visual retinal input. In particular, the stimulus function on the first layer of the visual cortex (called V1) encodes positions and orientations of the image contours. Cortical cells act on the image received through the eyes by differentiating the stimulus, behaving as operators that change from point to point [1]. An important problem of contemporary neuroscience is to understand whether the perceived image can still be reconstructed starting from the partial information carried by the feedforward action of cells in V1. To do this, we model the cortical Receptive Profiles (RPs) as Gaussian derivatives with heterogeneous metrics and derivation orders [2]; the reconstructed image is then a solution of the associated inverse problem, a Poisson-type equation whose differential operator changes from point to point. We can write this as
L u = m,
where u is the function encoding the reconstructed image, L = L_{x,y,\theta} is a differential operator that varies with position and orientation, and m = L I is the transform of the visual stimulus, which is often obtained via a convolution process.
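For concreteness, the following minimal sketch shows how a feedforward transform of the form m = L I could be computed numerically; the choice of first order Gaussian derivatives as receptive profiles, the toy image, and the toy orientation map are assumptions made here for illustration only.

    # Sketch: feedforward output m = L I as position-dependent Gaussian-derivative filtering.
    # Assumptions (not from the abstract): a toy image, a smooth orientation map theta(x, y),
    # and first-order directional Gaussian derivatives as receptive profiles.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def feedforward_output(image, theta, sigma=2.0):
        """Apply, at every pixel, the Gaussian derivative along the local orientation theta."""
        Ix = gaussian_filter(image, sigma, order=(0, 1))  # Gaussian derivative along x
        Iy = gaussian_filter(image, sigma, order=(1, 0))  # Gaussian derivative along y
        # Directional derivative along theta(x, y): cos(theta) d/dx + sin(theta) d/dy.
        return np.cos(theta) * Ix + np.sin(theta) * Iy

    rng = np.random.default_rng(0)
    I = rng.random((64, 64))                      # toy "retinal" image
    yy, xx = np.mgrid[0:64, 0:64]
    theta = np.arctan2(yy - 32, xx - 32)          # toy orientation map varying in space
    m = feedforward_output(I, theta)
    print(m.shape)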
In order to solve this, we consider discretized second order operators on regular grids and their convergence results. In particular, if \partial_z^\varepsilon is the discrete differential along z \in \mathbb{Z}^d, then for any finite \Lambda \subset \mathbb{Z}^d symmetric with respect to 0, and any matrix-valued function A: \mathbb{R}^d \to \mathbb{R}^{\Lambda \times \Lambda}, we can define a second order operator, denoted by A with a slight abuse of notation, such that for any function u: \mathbb{R}^d \to \mathbb{R},
A u(x) := \sum_{z, z' \in \Lambda} \partial_{-z}^{\varepsilon} \left( a_{zz'}(x) \, \partial_{z'}^{\varepsilon} u(x) \right).
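A minimal numerical sketch of this operator on a two-dimensional periodic grid is given below; the specific difference quotient \partial_z^\varepsilon u(x) = (u(x + \varepsilon z) - u(x)) / \varepsilon and the periodic boundary conditions are assumptions made here for concreteness, not part of the construction above.

    # Minimal sketch (assumptions: d = 2, difference quotient
    # d_z^eps u(x) = (u(x + eps z) - u(x)) / eps, periodic boundary conditions).
    import numpy as np

    def discrete_diff(u, z, eps):
        """d_z^eps u(x) = (u(x + eps*z) - u(x)) / eps on a periodic grid, z in Z^2."""
        return (np.roll(u, shift=(-z[0], -z[1]), axis=(0, 1)) - u) / eps

    def apply_A(u, a, Lambda, eps):
        """A u(x) = sum over z, z' in Lambda of d_{-z}^eps( a_{zz'}(x) d_{z'}^eps u(x) ).

        a[(z, zp)] is a grid-sized array holding the coefficient a_{zz'}(x) at every x.
        """
        out = np.zeros_like(u)
        for z in Lambda:
            for zp in Lambda:
                flux = a[(z, zp)] * discrete_diff(u, zp, eps)
                out += discrete_diff(flux, (-z[0], -z[1]), eps)
        return out

    # With Lambda = {e1, e2} and a_{zz'} = delta_{zz'}, A reduces to minus the standard
    # five-point discrete Laplacian, which is elliptic in the usual sense.
    n, eps = 32, 1.0 / 32
    Lambda = [(1, 0), (0, 1)]
    a = {(z, zp): np.ones((n, n)) if z == zp else np.zeros((n, n))
         for z in Lambda for zp in Lambda}
    u = np.sin(2 * np.pi * np.arange(n) / n)[:, None] * np.ones((1, n))
    print(apply_A(u, a, Lambda, eps)[:3, :3])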
The same construction generalizes to matrices defined over regular grids, A^\varepsilon: \varepsilon \mathbb{Z}^d \to \mathbb{R}^{\Lambda \times \Lambda}, and we can introduce a notion of ellipticity for these operators that is compatible with the usual one. The second order discrete elliptic operators A^\varepsilon that we consider are stochastically defined, that is, A^\varepsilon = A^\varepsilon(x)(\omega) with \omega lying in a probability space. With a suitable definition of H-convergence [3], we obtain that if the A^\varepsilon satisfy an ergodicity-type condition, then they converge to a classical elliptic operator A^0 which
is non-stochastic, meaning that it does not depend on \omega. This framework applies to various discrete distributions analogous to the distribution of cells in the V1 cortex.
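The effect can be illustrated with a standard one-dimensional toy computation (an illustration under simplifying assumptions, not the construction used here): a random, rapidly oscillating coefficient field homogenizes to a deterministic constant coefficient, which in dimension one is the harmonic mean of the coefficients.

    # Toy 1D illustration of stochastic homogenization (not the construction in the abstract):
    # solve -d/dx( a(x/eps, omega) du/dx ) = 1 on (0, 1) with u(0) = u(1) = 0 and i.i.d.
    # random coefficients on a grid of step eps. As eps -> 0 the solutions approach the one
    # with the deterministic homogenized coefficient, which in 1D is the harmonic mean.
    import numpy as np

    def solve_dirichlet(a_edges, eps):
        """Finite-difference solve of -(a u')' = 1; a_edges[i] lives on the edge (i, i+1)."""
        n = len(a_edges) - 1                     # number of interior nodes
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = a_edges[i] + a_edges[i + 1]
            if i > 0:
                A[i, i - 1] = -a_edges[i]
            if i < n - 1:
                A[i, i + 1] = -a_edges[i + 1]
        rhs = np.full(n, eps**2)                 # right-hand side f = 1, scaled by eps^2
        return np.linalg.solve(A, rhs)

    rng = np.random.default_rng(1)
    for N in (64, 256, 1024):
        a = rng.choice([1.0, 4.0], size=N)       # random two-valued coefficient field
        print(N, solve_dirichlet(a, 1.0 / N).max())
    a_hom = 1.0 / np.mean([1.0 / 1.0, 1.0 / 4.0])   # harmonic mean = homogenized coefficient
    print("homogenized:", solve_dirichlet(np.full(1024, a_hom), 1.0 / 1024).max())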
Finally, we perform a numerical implementation of different distributions of second and fourth order differential operators, evaluating the reconstruction of the perceived image. In particular, we focus on the perceptual phenomena of lightness and color constancy, that is, the ability to reconstruct constant lightness and color perceptions under different illuminations.
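As an example of the kind of reconstruction involved, the sketch below follows a generic Poisson/Retinex-style scheme in the spirit of [2] (the thresholding rule and the FFT-based solver are assumptions of this sketch, not the specific operators implemented in this work): gradients of the log-image below a threshold are attributed to slowly varying illumination and discarded, and the lightness is recovered by solving a Poisson equation.

    # Hedged sketch of a Poisson-type lightness reconstruction (assumed details):
    # keep only large log-image gradients (reflectance edges), discard small ones
    # (smooth illumination), then recover u from Delta u = div(g) via an approximate
    # FFT solver with periodic boundary conditions.
    import numpy as np

    def poisson_reconstruct(log_image, threshold=0.05):
        gy, gx = np.gradient(log_image)
        gx = np.where(np.abs(gx) > threshold, gx, 0.0)   # suppress illumination gradients
        gy = np.where(np.abs(gy) > threshold, gy, 0.0)
        div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)   # divergence of clipped field
        n, m = log_image.shape
        ky = 2 * np.pi * np.fft.fftfreq(n)[:, None]
        kx = 2 * np.pi * np.fft.fftfreq(m)[None, :]
        denom = -(kx**2 + ky**2)
        denom[0, 0] = 1.0                                 # avoid division by zero at k = 0
        u_hat = np.fft.fft2(div) / denom
        u_hat[0, 0] = 0.0                                 # fix the free additive constant
        return np.real(np.fft.ifft2(u_hat))

On a grayscale image I, one would apply this to np.log(I + c) for a small constant c and exponentiate the result to obtain the reconstructed reflectance.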
References
[1] Richard A. Young. The Gaussian derivative model for spatial vision: I. Retinal mechanisms. Spatial Vision, 2(4):273–293, 1987.
[2] Ron Kimmel, Michael Elad, Doron Shaked, Renato Keshet, and Irwin Sobel. A variational framework for Retinex. International Journal of Computer Vision, 52(1):7–23, 2003.
[3] Ennio De Giorgi. G-operators and Γ-convergence. In Proceedings of the International Congress of Mathematicians, volume 1, 1984. |