Laurent DELSOL

PhD in Applied Mathematics, Maître de Conférences

Welcome to my personal webpage!





   Contact information:

   e-mail:
   laurent.delsol@univ-orleans.fr

   Office: E17 (1st floor)

   Phone numbers:
   Mobile: 0033 (0)6 65 00 65 05
   Office: 0033 (0)2 38 49 26 96

   Postal address:
   Laboratoire MAPMO
   Université d'Orléans
   B.P. 6759
   45067 Orléans cedex 2






Research topics:


This page gives an overview of my current research fields and activities. You will find more details in my Curriculum Vitae (in French) or on the page devoted to my publications.
 
These works led me to engage in fruitful collaborations with Christophe Crambes, Frédéric Ferraty, Ali Laksaci, Cécile Louchet, Adela Martínez Calvo, Catherine Timmermans, Ingrid Van Keilegom, Philippe Vieu and Rainer von Sachs.

Finally, my PhD manuscript is available on my publications page or directly from this link.

A. Functional statistics
An important part of my research activity concerns the design, the study and the use of semi- and nonparametric methods from functional statistics (suitable for the study of samples of curves). This field of statistics is nowadays attracting increasing interest, owing both to the great variety of methods that remain to be developed and to the large scope of potential applications (see for instance Ramsay and Silverman, 2002, 2005, Bosq, 2000, Ferraty and Vieu, 2006, Dabo-Niang and Ferraty, 2008, Ramsay et al., 2009, Ferraty, 2011, Ferraty and Romain, 2011, Kokoszka and Horvath, 2012, Zhang, 2013, Bongiorno et al., 2014, Hsing and Eubank, 2015).

A.1. Regression on functional variable
My research particularly focuses on models in which a real variable of interest Y depends on a curve (or, more generally, a functional explanatory variable) X. In recent years, I have studied this link through the following regression (on functional variable) model
Y = m(X) + R
in which Y is a real-valued random variable, X is a random variable taking values in a pseudo-metric space (E, d), m is the unknown operator we want to study, and R is a residual term satisfying E[R | X] = 0.

I first obtained results on the asymptotic normality and Lp convergence (with explicit expression of the moments) of the Nadaraya-Watson estimator extended to a functional covariate (see Ferraty and Vieu, 2006) in the case of an alpha-mixing sample. With my advisors F. Ferraty and P. Vieu, we then introduced a general framework to construct a great variety of structural tests on m (no-effect, linearity, ...) together with a bootstrap method to compute the threshold in practice. The specific case of no-effect tests has been studied in detail and experimented (in various ways) on spectrometric data. A more recent joint work with C. Timmermans and R. von Sachs provides first theoretical results justifying the use of cross-validation to choose the pseudo-metric used to define the extended kernel estimator.
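
To fix ideas, here is a minimal Python sketch of a Nadaraya-Watson-type estimator with a functional covariate, assuming curves discretized on a common grid, an L2-type pseudo-metric and an asymmetric quadratic kernel; the function names, the kernel and the bandwidth value are illustrative choices only (not the exact estimator studied in the works above), and in practice the bandwidth and the pseudo-metric would be selected by cross-validation as discussed above.

import numpy as np

def l2_pseudo_metric(x1, x2):
    # L2-type pseudo-metric between two curves discretized on the same grid
    return np.sqrt(np.mean((x1 - x2) ** 2))

def functional_nw(x_new, X_sample, Y_sample, h, metric=l2_pseudo_metric):
    # Kernel (Nadaraya-Watson-type) estimate of m(x_new) = E[Y | X = x_new].
    # X_sample: (n, p) array of discretized curves, Y_sample: (n,) responses,
    # h: bandwidth > 0 (in practice chosen by cross-validation).
    dists = np.array([metric(x_new, xi) for xi in X_sample])
    u = dists / h
    weights = np.where(u <= 1.0, 1.0 - u ** 2, 0.0)  # asymmetric quadratic kernel on [0, 1]
    if weights.sum() == 0.0:
        return np.nan  # no curve falls within bandwidth h of x_new
    return np.sum(weights * Y_sample) / np.sum(weights)

# Toy usage: the response depends on the mean level of each (shifted sinusoidal) curve
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 100)
X = rng.normal(size=(200, 1)) + np.sin(2 * np.pi * grid)   # 200 curves on the grid
Y = X.mean(axis=1) + 0.1 * rng.normal(size=200)
m_hat = functional_nw(X[0], X[1:], Y[1:], h=0.5)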

In a joint work with C. Crambes and A. Laksaci, I also focused on the asymptotic study of robust estimators in this kind of model. We obtained results similar to those established for the kernel estimator of m, in terms of Lp convergence (with explicit expression of the moments), in the case of an alpha-mixing sample.

A.2. Segmentation of hyperspectral images
Splitting a picture into a set of homogeneous regions (namely, groups of pixels with similar characteristics) is a common problem in image analysis, called segmentation. Detecting such regions is usually a relevant way to identify specific parts of the scene (which usually have a concrete meaning). Various methods have been proposed to segment gray-level or multispectral images. Several of them are based on Bayesian frameworks (see for instance Besag, 1989, Deng and Clausi, 2004, Orbanz and Buhmann, 2008, Chen et al., 2010, Pereyra et al., 2013, and the references therein). The maximum a posteriori (M.A.P.) approach is a commonly used region-based segmentation method. It consists in looking for the most likely segmented image x conditionally on the original image y, which (via Bayes' formula) is equivalent to solving
x_MAP = argmax_x P(X = x | Y = y) = argmax_x f_{Y|X=x}(y) P(X = x)
A Potts random field is used as a prior for X to model spatial regularity (between pixels) within the segmented image, while f_{Y|X=x}(y) may be estimated (under some hypotheses) from densities (usually assumed to be Gaussian) estimated on each region defined by x.

I work with C. Louchet on the extension of these segmentation methods to hyperspectral images (or, more generally, images in which a curve - discretized into a large number of points - is associated with each pixel). Combining recent advances in nonparametric functional statistics on density estimation (namely Dabo-Niang, 2004) with the M.A.P. approach described in the previous paragraph, we propose an innovative procedure. An Iterated Conditional Modes algorithm is used to search for the maximum a posteriori. First experiments with the proposed method, on both simulated and real images, are promising.
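
As an illustration only, the following Python sketch implements a generic Iterated Conditional Modes update for M.A.P. segmentation under a Potts prior, assuming the per-class log-likelihoods log f(y_pixel | class) are already available as an array; it is not the procedure developed for hyperspectral images (which relies on nonparametric functional density estimates), and the names, the 4-neighbourhood and the regularization parameter beta are illustrative assumptions.

import numpy as np

def icm_segmentation(log_lik, beta=1.0, n_iter=10):
    # Iterated Conditional Modes for M.A.P. segmentation with a Potts prior.
    # log_lik: (H, W, K) array of log f(y_pixel | class k); beta: Potts smoothness.
    H, W, K = log_lik.shape
    labels = log_lik.argmax(axis=2)  # initialise with the pixelwise maximum likelihood
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # labels of the 4 neighbours of pixel (i, j)
                neigh = []
                if i > 0:
                    neigh.append(labels[i - 1, j])
                if i < H - 1:
                    neigh.append(labels[i + 1, j])
                if j > 0:
                    neigh.append(labels[i, j - 1])
                if j < W - 1:
                    neigh.append(labels[i, j + 1])
                neigh = np.array(neigh)
                # local conditional log-posterior: data term + Potts agreement term
                potts = np.array([beta * np.sum(neigh == k) for k in range(K)])
                labels[i, j] = np.argmax(log_lik[i, j] + potts)
    return labels

# Toy usage: two classes, noisy log-likelihoods favouring a left/right split
rng = np.random.default_rng(0)
H, W, K = 20, 20, 2
ll = 0.5 * rng.normal(size=(H, W, K))
ll[:, :10, 0] += 1.0   # left half favours class 0
ll[:, 10:, 1] += 1.0   # right half favours class 1
seg = icm_segmentation(ll, beta=1.0)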

B. Semiparametric M-estimation with non-smooth criterion function
I have also worked with I. Van Keilegom as part of the ERC project "M- and Z-estimation in semiparametric statistics: applications in various fields". More precisely, we were interested in the theoretical study of semiparametric M-estimation methods, whose aim is to estimate a parameter of interest µ0 that maximizes a semiparametric criterion
M(µ; h0) = E[m(X1; µ; h0)]
in which h0 is an unknown nuisance parameter and the function m is not differentiable with respect to µ.

A natural estimator of µ0 is hence the value that maximizes the following empirical criterion, in which ĥ denotes an estimator of the nuisance parameter h0:
Mn(µ; ĥ) := Σi=1,...,n m(Xi; µ; ĥ) / n

Our work aims to combine previous works dealing with non-smooth criterion functions: Chen et al. (2003) in the case of Z-estimation and van der Vaart and Wellner (1996) in the case of M-estimation without nuisance parameter. Entropy assumptions on the set containing the nuisance parameter and empirical process tools are used to obtain, under fairly general assumptions, the consistency, the convergence rate and the asymptotic distribution of these estimators.
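
As a toy illustration of this framework (not of the results themselves), the Python sketch below maximizes, over a grid, an empirical criterion Mn(µ; ĥ) built from a deliberately non-smooth function m (an absolute-value criterion, so that µ0 is a median-type parameter) with a plug-in value for the nuisance parameter; all names, the specific criterion and the data-generating process are illustrative assumptions.

import numpy as np

def empirical_criterion(mu, data, h_hat):
    # Mn(mu; h_hat) = (1/n) * sum_i m(X_i; mu; h_hat) with the toy choice
    # m(x; mu; h) = -|x - h - mu|, which is not differentiable in mu;
    # its maximiser estimates the median of X - h_hat.
    return np.mean(-np.abs(data - h_hat - mu))

# Toy data: X = h0 + mu0 + error, where h0 plays the role of the nuisance parameter
rng = np.random.default_rng(1)
h0, mu0 = 2.0, 0.5
X = h0 + mu0 + rng.standard_t(df=3, size=500)  # heavy-tailed, median-zero errors

h_hat = 2.0  # plug-in estimate of the nuisance parameter (assumed available here)

# Maximize the non-smooth empirical criterion over a grid of candidate values for mu
grid = np.linspace(-2.0, 3.0, 1001)
mu_hat = grid[np.argmax([empirical_criterion(mu, X, h_hat) for mu in grid])]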