Here is a table summarizing the different projects I worked on during my studies:

IT projects

| Field | IT language | Topics |
| --- | --- | --- |
| Programming | ADA | Red donkey game |
| Programming | Assembler | Processor programming |
| Programming | C | Memory management; Operating system |
| Programming | C++ | Merging pictures |
| Programming | JAVA | Dictionary; Database management of a clinic |
| Programming | SQL | Database management of a clinic |
| Mathematics | Matlab | Fourier transform |
| Mathematics | Scilab | Smoothing and polynomial interpolation |
| Mathematics | R | Inference / Tests; Multidimensional statistical analysis |

(*): there were also smaller projects in other courses.

Please find below a comprehensive description of these projects with links to associated files.

We developed an option pricer and realized the limits of the model (the Gaussian assumption on log-returns). The applications were coded in VBA, with a special focus on generating random numbers. We then compared different methods to reduce the variance of the estimates. We studied:

- vanilla options,
- Asian options,
- lookback options.
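The core of such a pricer can be sketched as follows (a minimal Python illustration, not the original VBA code; parameter values in the usage below are arbitrary). It prices a vanilla European call by Monte Carlo under the Gaussian log-return assumption and shows one of the variance-reduction methods we compared, antithetic variates:

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, n_paths, antithetic=True, seed=42):
    """Monte Carlo price of a vanilla European call under Black-Scholes
    dynamics (Gaussian log-returns). Antithetic variates reuse each
    standard normal draw z as -z, which reduces the estimator variance."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    disc = math.exp(-r * t)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        payoff = max(s0 * math.exp(drift + vol * z) - k, 0.0)
        if antithetic:
            payoff = 0.5 * (payoff + max(s0 * math.exp(drift - vol * z) - k, 0.0))
        total += payoff
    return disc * total / n_paths
```

With `s0 = k = 100`, `r = 0.05`, `sigma = 0.2` and `t = 1`, the estimate converges to the Black-Scholes closed-form value of about 10.45.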

> See our report here.

> Back to the top

The aim of this project was to show that using the put formula to compute the cost of hedging this kind of product leads to errors. We were mainly asked to study:

- the optimal hedging,
- the problem of deaths, which cannot be pooled,
- the assessment of these errors.

> See our report here.

> Back to the top

In the insurance framework, computing the aggregate loss over a large portfolio is one of the most difficult tasks. It usually requires numerical methods, since there is no closed-form expression for the global loss distribution; this is why practitioners developed the collective model. However, we often have to consider risks (policyholders) individually: stochastic orders then make it possible to obtain qualitative results on the riskiness of the whole portfolio, as an alternative to quantitative algorithms such as Panjer's. The three stochastic orders studied in this project are:

- integral stochastic orders,
- the harmonic mean residual life order,
- Lorenz order.
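As a rough illustration of the Lorenz order (this sketch is not part of the original project), the empirical Lorenz curve of a portfolio can be computed as follows; comparing two portfolios' curves gives the kind of qualitative riskiness comparison mentioned above:

```python
def lorenz_curve(losses):
    """Empirical Lorenz curve: for each fraction p of the risks (sorted
    from smallest to largest), the share of the total loss they account
    for. A portfolio whose curve lies everywhere below another's is the
    riskier (more unequal) one in the Lorenz order."""
    xs = sorted(losses)
    total = sum(xs)
    cum, points = 0.0, [(0.0, 0.0)]
    for i, x in enumerate(xs, start=1):
        cum += x
        points.append((i / len(xs), cum / total))
    return points
```

A perfectly homogeneous portfolio gives the diagonal; concentration of losses bends the curve downward.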

> See the report here.

> Back to the top

Risk measures are defined to assess a portfolio's exposure to a given risk. The most famous one is the Value-at-Risk (VaR), which is simply a quantile of the loss distribution. On financial markets, the pricing of assets is often based on the Gaussian assumption on log-returns (the asset price is usually modelled as a geometric Brownian motion). However, in reality the price sometimes jumps, making the classical assumption wrong. This project was therefore the opportunity to learn new jump-diffusion models such as:

- Merton's jump-diffusion model,
- the double-exponential jump model,
- a one-sided jump model with diffusion.
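As a minimal Python sketch (illustrative only; the parameter values in the test are arbitrary assumptions), here is how a terminal price can be drawn under Merton's jump-diffusion model: a geometric Brownian motion plus a compound Poisson sum of Gaussian log-jumps:

```python
import math
import random

def merton_terminal_price(s0, r, sigma, lam, mu_j, sigma_j, t, rng):
    """One draw of the terminal price under Merton's jump-diffusion model.
    Jumps arrive at Poisson rate lam; each log-jump is N(mu_j, sigma_j^2).
    kappa compensates the drift so the discounted price stays a martingale."""
    kappa = math.exp(mu_j + 0.5 * sigma_j ** 2) - 1.0
    drift = (r - 0.5 * sigma ** 2 - lam * kappa) * t
    diffusion = sigma * math.sqrt(t) * rng.gauss(0.0, 1.0)
    # number of jumps over [0, t] is Poisson(lam * t); sample by inversion
    n_jumps, p, u = 0, math.exp(-lam * t), rng.random()
    cdf = p
    while u > cdf:
        n_jumps += 1
        p *= lam * t / n_jumps
        cdf += p
    jumps = sum(rng.gauss(mu_j, sigma_j) for _ in range(n_jumps))
    return s0 * math.exp(drift + diffusion + jumps)
```

Averaging discounted draws recovers the initial price, which is a quick sanity check of the drift compensation.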

> See the report here.

> Back to the top

To cover huge and unexpected losses due to natural catastrophes, insurers commonly transfer their risk to the financial markets. To do so, they create a special purpose vehicle (SPV) in which the investors' cash is essentially invested risk-free. The different steps of this project were the following:

- understand the risk-transfer mechanism,
- price the cat bond,
- use the Wang transform as a risk measure,
- discuss CAT risk management.
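The Wang transform itself is simple to state: g(p) = Φ(Φ⁻¹(p) + λ). A minimal Python sketch (function names and the shape of the payment schedule are illustrative assumptions, not the project's code):

```python
from statistics import NormalDist

def wang_transform(p, lam):
    """Wang transform of a survival probability p: Phi(Phi^-1(p) + lam).
    With lam > 0 the probabilities are distorted, producing the
    risk-loaded distribution used to price cat bonds."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(p) + lam)

def wang_price(survival_probs, payments, discounts, lam):
    """Risk-loaded expected present value of a payment schedule: each
    payment is weighted by the transformed survival probability."""
    return sum(d * c * wang_transform(q, lam)
               for q, c, d in zip(survival_probs, payments, discounts))
```

With lam = 0 the transform is the identity and the price reduces to the plain expected present value.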

> See the report here.

> Back to the top

CDS are very popular financial products on the interbank market (a CDS is somewhat like an insurance contract covering the potential default of a reference entity). We had to price CDS using the method proposed by Dominic O'Kane and Stuart Turnbull in their published paper. The goal was to estimate default probabilities and to build the curve of risk-neutral default probabilities. To perform this study, the following topics were addressed:

- implement the CDS pricing model,
- fit the default probabilities,
- study the sensitivity of the default probability curves to market downturns,
- study the sensitivity to spread curve movements.
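A heavily simplified Python sketch of the flat-hazard ("credit triangle") shortcut that underlies such models; the full O'Kane-Turnbull bootstrap handles term structures and premium accruals, which are omitted here:

```python
import math

def implied_hazard(spread, recovery):
    """Credit-triangle approximation: a flat CDS spread s and recovery
    rate R imply a constant hazard rate lambda = s / (1 - R)."""
    return spread / (1.0 - recovery)

def survival_curve(spread, recovery, maturities):
    """Risk-neutral survival probabilities Q(t) = exp(-lambda * t) at the
    given maturities, under the flat-hazard approximation."""
    lam = implied_hazard(spread, recovery)
    return [math.exp(-lam * t) for t in maturities]
```

For example, a 100 bp spread with 40% recovery implies a hazard rate of about 1.67% per year, and the risk-neutral default probability curve is one minus the survival curve.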

> See the report here.

> Back to the top

Applying classical estimation techniques to non-standard distributions was the core of this project. Typical probability laws for insurance claims were studied: the Poisson (and zero-inflated Poisson) and Negative Binomial (and its zero-inflated version) laws for frequency; the Exponential, Gamma, Lognormal, Pareto and Burr distributions for severity modelling. Questions were about:

- graphical analysis: qq-plot, empirical mean excess function, empirical cumulative distribution function,
- moment and maximum likelihood estimation,
- the study of the excess-of-loss function,
- goodness-of-fit tests,
- model selection.
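For instance, the empirical mean excess function used in the graphical analysis can be computed as follows (a minimal Python sketch, not the project's code):

```python
def mean_excess(losses, u):
    """Empirical mean excess function e(u): the average amount by which
    losses exceed the threshold u, given that they exceed it. Heavy-tailed
    laws (Pareto, Burr) show an increasing e(u), while the Exponential
    gives a flat one, which is what the qq-plot style diagnostics exploit."""
    exceed = [x - u for x in losses if x > u]
    return sum(exceed) / len(exceed) if exceed else 0.0
```

Plotting e(u) against a grid of thresholds is the usual way to eyeball tail heaviness before fitting.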

Another part of the project was to improve our knowledge of modelling correlation (with copulas). The report of this project is here.

> Back to the top

The aim of this project was to link numerical estimations of parametric distributions to ruin theory. We studied:

- modelling the correlation by copulas,
- aggregated loss distribution process (with or without limit),
- surplus process of an insurance company,
- adjustment coefficient and ruin probability,
- exponential bounds.
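The adjustment coefficient mentioned above is the positive root of Lundberg's equation λ + cr = λM_X(r). A minimal Python sketch (illustrative only; it assumes a positive safety loading and takes a generic moment generating function as input):

```python
def adjustment_coefficient(lam, c, mgf, r_max, tol=1e-10):
    """Adjustment coefficient R of the Cramer-Lundberg surplus process:
    the positive root of lam + c*r = lam * M_X(r), found by bisection on
    (0, r_max). The Lundberg exponential bound is then
    ruin_probability(u) <= exp(-R * u)."""
    f = lambda r: lam + c * r - lam * mgf(r)
    lo, hi = tol, r_max - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid   # premium income still dominates: root is to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For exponential claims with mean 1 and loading θ, the closed form is R = θ/(1+θ), which gives a handy check: with λ = 1, c = 1.25 (so θ = 0.25) the root is 0.2.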

> See the report here.

> Back to the top

Greeks are useful indicators of the sensitivity of an option price to financial markets. In this project, we compute the Greeks of a call option whose main characteristics are known. Monte Carlo simulations enable us to study the sensitivity depending on the return, the volatility, the maturity, and so on. This allows us to quantify the impact of market downturns on the call option price. This project was performed in C++ and we had to:

- simulate the geometric Brownian motions,
- compute Greeks by simulation techniques: pathwise and likelihood-ratio methods,
- reduce the variance of the estimates,
- use Malliavin calculus on Asian options.
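The original work was in C++; purely as an illustration (with arbitrary parameters), here is a Python sketch of the pathwise method for the delta of a European call:

```python
import math
import random

def pathwise_delta(s0, k, r, sigma, t, n_paths, seed=1):
    """Pathwise estimator of a European call's delta: differentiating the
    payoff along each simulated path gives
    d max(S_T - K, 0) / dS0 = 1{S_T > K} * S_T / S0,
    which is discounted and averaged over the paths."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    disc = math.exp(-r * t)
    total = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        if st > k:
            total += st / s0
    return disc * total / n_paths
```

For an at-the-money call (s0 = k = 100, r = 0.05, sigma = 0.2, t = 1) the estimate converges to the Black-Scholes delta N(d1) ≈ 0.637.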

> See the report here.

> Back to the top

- Actuar: development of credibility regression models including temporal trends (Hachemeister model). Academic supervisor: Professor Vincent Goulet (Université Laval, 2008).

> See the thesis here.

> Back to the top

We had to estimate the parameters of the Gamma distribution using different techniques: the method of moments, maximum likelihood, and a pivotal function to build confidence intervals and tests. The different topics addressed were:

- properties of different estimators (moment versus maximum likelihood),
- simulations,
- empirical validation of the maximum likelihood estimator (MLE) properties (asymptotic normality),
- computation of the mean squared error (MSE) for the MLE,
- same study with the moment estimator,
- comparison between both,
- confidence intervals.
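For instance, the moment estimators of a Gamma sample follow directly from matching the first two moments (a minimal Python sketch; the MLE, by contrast, requires solving a digamma equation numerically):

```python
def gamma_moment_estimates(sample):
    """Method-of-moments estimators for a Gamma(shape a, scale s) sample:
    matching mean = a * s and variance = a * s**2 gives
    a_hat = mean**2 / var and s_hat = var / mean."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    return mean ** 2 / var, var / mean
```

Comparing these against the MLE on simulated samples is exactly the kind of mean-squared-error study listed above.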

> See the report here.

> Back to the top

The data are the accidents recorded in Isère (France) in 2003. The goal was to determine the main driver characteristics leading to abnormal accident frequency or severity. To explain these response variables, we investigated the following models:

- linear regression (simple and multiple),
- analysis of variance (ANOVA),
- analysis of variance with two risk factors (ANOVA 2),
- principal component analysis (PCA),
- correspondence analysis (CA).
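The project used R; purely as an illustration, PCA can be sketched in a few lines of Python via the SVD of the centred data matrix:

```python
import numpy as np

def pca(data, n_components):
    """Principal component analysis via SVD of the centred data matrix:
    returns the scores (projections onto the principal axes) and the
    fraction of total variance explained by each retained component."""
    x = np.asarray(data, dtype=float)
    x = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    scores = x @ vt[:n_components].T
    explained = (s ** 2) / (s ** 2).sum()
    return scores, explained[:n_components]
```

On collinear data the first component captures essentially all the variance, which is an easy sanity check.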

> Have a look at the report here.

> Back to the top

The goal was to code the Red donkey game; this was done in three distinct steps:

- programming the playing area,
- programming the execution of the players' directives => here,
- looking for the optimal solution to win => here.

> Back to the top

Project in assembler: programming low-level instructions for a processor.

> Back to the top

In computing, memory management is a key issue that must be examined rigorously, and it differs from one operating system to another. The aim was to understand the main differences between the existing approaches and to see whether one of them is best in every situation. This project was done in the C language.

> Have a look at this work here.

> Back to the top

In this project we learnt how to manage instructions, memory blocks and threads.

> Back to the top

We studied various algorithms in C++ to find the shortest path to a given endpoint, implementing the famous Dijkstra and Bellman-Ford algorithms. Issues arising from the merging region were the core of the project.
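The project itself was in C++; for illustration only, Dijkstra's algorithm can be sketched in Python with a binary heap:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest-path algorithm on a graph given as
    {node: [(neighbour, weight), ...]} with non-negative weights.
    Returns the distance from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist
```

Bellman-Ford differs in relaxing all edges repeatedly, which also handles negative weights at a higher cost.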

> Back to the top

This project, in Java, was the opportunity to discover our first object-oriented language at ENSIMAG. We had to implement the tree structure of a dictionary, together with some classical commands: add a new word, delete one, and so on. Finally, we became familiar with the important notions of objects, attributes and methods.
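The project was in Java; as an illustration of the tree structure, here is a minimal Python trie supporting the add/contains/delete commands mentioned above (the lazy delete is a simplifying assumption, not the project's design):

```python
class Trie:
    """Minimal dictionary tree: each node maps a letter to a child node,
    and a flag marks the nodes that end a stored word."""
    def __init__(self):
        self.children = {}
        self.is_word = False

    def add(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def contains(self, word):
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

    def delete(self, word):
        """Unmark the word; child nodes are kept (a simple lazy delete)."""
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return
        node.is_word = False
```

Lookups cost O(word length) regardless of how many words are stored, which is the point of the structure.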

> See the report of the project here.

> Back to the top

Nowadays, more and more data can be collected (thanks to the high performance of computers). A good database management system is thus essential for efficiency, and practitioners are constantly improving them. Here the purpose was to code the database management system of a clinic, both in C and Java. This required four different steps:

- assess the functional dependencies,
- design an entity/relationship schema,
- derive a relational version of it,
- manage requests from the customers and staff of the clinic.
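The original work was in C and Java with SQL; purely as an illustration (table and column names are invented, not the project's schema), a toy version of such a relational schema and a typical request can be sketched with Python's sqlite3:

```python
import sqlite3

# In-memory toy clinic database; the schema below is illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE doctor  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE visit   (patient_id INTEGER REFERENCES patient(id),
                          doctor_id  INTEGER REFERENCES doctor(id),
                          day TEXT);
""")
conn.execute("INSERT INTO doctor VALUES (1, 'Dr. Smith')")
conn.executemany("INSERT INTO patient VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])
conn.executemany("INSERT INTO visit VALUES (?, ?, ?)",
                 [(1, 1, "2024-01-10"), (2, 1, "2024-01-11")])

# A typical request: the patients seen by a given doctor, in visit order
rows = conn.execute("""
    SELECT patient.name FROM visit
    JOIN patient ON patient.id = visit.patient_id
    WHERE visit.doctor_id = 1
    ORDER BY visit.day
""").fetchall()
```

The foreign keys in `visit` are the relational translation of the entity/relationship associations.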

> See our report here.

> Back to the top

A Matlab code was designed to perform the Fourier transform so as to highlight well-known problems of this technique:

- the Gibbs phenomenon and infinite Fourier sums,
- the computation of Fourier coefficients by the Fast Fourier Transform (FFT),
- solving a differential equation with the FFT,
- filtering noisy data.
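The project was in Matlab; as an illustration, the noise-filtering step can be sketched in Python with NumPy's FFT:

```python
import numpy as np

def lowpass_fft(signal, keep):
    """Noise filtering with the FFT: zero every Fourier coefficient whose
    frequency index exceeds `keep`, then transform back. Truncating the
    Fourier sum this way is also what produces the Gibbs phenomenon near
    discontinuities."""
    coeffs = np.fft.fft(signal)
    freqs = np.fft.fftfreq(len(signal), d=1.0 / len(signal))
    coeffs[np.abs(freqs) > keep] = 0.0
    return np.real(np.fft.ifft(coeffs))
```

A low-frequency signal passes through unchanged, while any high-frequency noise component above the cutoff is removed exactly.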

> See the report here.

> Back to the top

The aim was to get used to the Scilab language by programming different polynomial interpolations:

- Vandermonde matrix => here,
- Newton's method => here,
- piecewise linear interpolation => here,
- spline interpolation and spline curves => here,
- drawing a Chinese character with all these methods => here.
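The project was in Scilab; as an illustration only, the Vandermonde approach can be sketched in Python (NumPy assumed):

```python
import numpy as np

def vandermonde_interpolation(xs, ys):
    """Polynomial interpolation by solving the Vandermonde system V c = y,
    where V[i, j] = xs[i] ** j. Returns the coefficients c, lowest degree
    first. Classic caveat: V becomes ill-conditioned as points are added,
    which is why Newton's method and splines are preferred in practice."""
    v = np.vander(xs, increasing=True)
    return np.linalg.solve(v, ys)

def eval_poly(coeffs, x):
    """Evaluate sum(c_j * x**j) with Horner's scheme."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result
```

Interpolating three points of y = x² recovers the coefficients (0, 0, 1) up to rounding, so the fitted polynomial extrapolates the parabola exactly.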

> Back to the top