Post-Doctoral Research Visit F/M Optimization and Statistics

September 30, 2022
Offered Salary: Negotiable
Working address: N/A
Contract Type: Other
Working Time: Negotiable
Working type: N/A
Job Ref.: N/A

2022-05334 - Post-Doctoral Research Visit F/M Optimization and Statistics

Contract type : Fixed-term contract

Renewable contract : Yes

Level of qualifications required : PhD or equivalent

Function : Post-Doctoral Research Visit


The position is in the SIERRA team, which is based in the Laboratoire d'Informatique de l'École Normale Supérieure (CNRS/ENS/Inria UMR 8548) and is a joint research team between Inria Rocquencourt, the École Normale Supérieure de Paris and the Centre National de la Recherche Scientifique.

One of the main research fields of SIERRA is machine learning, a recent scientific domain positioned between applied mathematics, statistics and computer science. Its goals are the optimization, control, and modelling of complex systems from examples. It applies to data from numerous engineering and scientific fields (e.g., vision, bioinformatics, neuroscience, audio processing, text processing, economics, finance), the ultimate goal being to derive general theories and algorithms that allow advances in each of these domains. Machine learning is characterized by the high quality and quantity of the exchanges between theory, algorithms and applications: interesting theoretical problems almost always emerge from applications, while theoretical analysis makes it possible to understand why and when popular or successful algorithms do or do not work, and leads to significant improvements.

Our academic positioning is exactly at the intersection between these three aspects---algorithms, theory and applications---and our main research goal is to make the link between theory and algorithms, and between algorithms and high-impact applications in various engineering and scientific fields, in particular computer vision, bioinformatics, audio processing, text processing and neuro-imaging.

Machine learning is now a vast field of research and the team focuses on the following aspects: supervised learning (kernel methods, calibration), unsupervised learning (matrix factorization, statistical tests), parsimony (structured sparsity, theory and algorithms), and optimization (convex optimization, bandit learning). These four research axes are strongly interdependent, and the interplay between them is key to successful practical applications.


Nowadays machine learning applications deal with extremely large-scale datasets (~10^12 examples, ~10^9 dimensions). A prototypical example is the data produced by the Large Hadron Collider at CERN, which is on the order of petabytes per day. While deep learning may be able to scale to such dimensions, it cannot be used in scientific analysis or safety-critical engineering applications because of the lack of guarantees on its predictions. Classical non-parametric learning algorithms, on the other hand, are computationally demanding and do not scale to modern large-scale datasets. However, they achieve state-of-the-art results on medium-scale datasets and are optimal from a statistical viewpoint (they achieve the best possible accuracy) [1].
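For intuition on the computational bottleneck, here is a minimal sketch of exact kernel ridge regression with a Gaussian kernel (function names and the kernel choice are illustrative, not taken from the posting):

```python
import numpy as np

def krr_fit_predict(X, y, X_new, gamma=1.0, lam=1e-6):
    """Exact kernel ridge regression with a Gaussian kernel.

    Statistically optimal under standard assumptions, but it forms and
    factorises the full n x n kernel matrix: O(n^2) memory and O(n^3)
    time, which is what prevents scaling to very large n.
    """
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    n = X.shape[0]
    K = k(X, X)                                          # n x n kernel matrix
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)  # O(n^3) solve
    return k(X_new, X) @ alpha
```

At n ~ 10^6 the kernel matrix alone would occupy terabytes of memory, which is why the approximations discussed below are needed.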

The question thus arises whether it is possible to obtain both reduced computational complexity and optimal statistical guarantees.

Main activities

In the last few years, part of the community has explored algorithms (such as deep learning approaches) that have reduced computational complexity but no theoretical guarantees. A recent line of work [2,3,4] investigates how to build fast approximations of non-parametric algorithms that achieve optimal statistical guarantees with low computational requirements in the context of regression. Recently, in [6], the techniques from [4] were combined with advances in multi-GPU computation to obtain the first algorithm guaranteed to scale up to 10^9 examples (on a single workstation).
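As a rough illustration of the Nyström-style approximations analysed in [3], here is a minimal sketch restricting the estimator to m random centers (names and the Gaussian kernel are illustrative; this is not the FALKON implementation of [4,6], which additionally uses preconditioned conjugate gradients):

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_krr_fit(X, y, m=100, gamma=1.0, lam=1e-6, seed=0):
    """Nystrom-approximated kernel ridge regression.

    Restricts the estimator to m randomly sampled centers and solves
    (K_nm^T K_nm + n * lam * K_mm) alpha = K_nm^T y,
    costing O(n m^2) time instead of the O(n^3) of the exact method.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = X[rng.choice(n, size=min(m, n), replace=False)]
    K_nm = gaussian_kernel(X, centers, gamma)        # n x m
    K_mm = gaussian_kernel(centers, centers, gamma)  # m x m
    A = K_nm.T @ K_nm + n * lam * K_mm
    b = K_nm.T @ y
    alpha = np.linalg.lstsq(A, b, rcond=None)[0]     # least squares for stability
    return centers, alpha

def nystrom_krr_predict(X_new, centers, alpha, gamma=1.0):
    return gaussian_kernel(X_new, centers, gamma) @ alpha
```

With m on the order of sqrt(n) (up to logarithmic factors), estimators of this form retain optimal statistical rates for regression [3].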

The goal of this position is to extend the results of [4,6] to the context of classification. The work is at the intersection of algorithms, statistics and optimization, and may focus primarily on any of these three aspects depending on the candidate.

Candidates interested in this position should contact Alessandro Rudi to discuss it further.

[1] Shalev-Shwartz, S. and Ben-David, S., 2014. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.

[2] Bach, F., 2013. Sharp analysis of low-rank kernel matrix approximations. In Conference on Learning Theory (pp. 185-209).

[3] Rudi, A., Camoriano, R. and Rosasco, L., 2015. Less is more: Nyström computational regularization. In Advances in Neural Information Processing Systems 28 (pp. 1657-1665).

[4] Rudi, A., Carratino, L. and Rosasco, L., 2017. FALKON: An optimal large scale kernel method. In Advances in Neural Information Processing Systems 30 (pp. 3888-3898).

[6] Meanti, G., Carratino, L., Rosasco, L. and Rudi, A., 2020. Kernel methods through the roof: Handling billions of points efficiently. In Advances in Neural Information Processing Systems 33.


Technical skills and level required: Proficiency in linear algebra, probability and statistics as required for machine learning. Excellent understanding of machine learning techniques and algorithms. In-depth knowledge of advanced statistical modeling techniques, including predictive modeling and classification.

Languages: English (oral and written)

Relational skills: Excellent communication skills (oral and written)

Software experience: Python, PyTorch, bash; kernel methods and basic machine learning algorithms

Benefits package
  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports events and activities
  • Access to vocational training
General Information
  • Theme/Domain : Optimization, machine learning and statistical methods / Scientific computing (BAP E)

  • Town/city : Paris

  • Inria Center : CRI de Paris
  • Starting date : 2022-11-01
  • Duration of contract : 12 months
  • Deadline to apply : 2022-09-30
Contacts
  • Inria Team : SIERRA
  • Recruiter : Alessandro Rudi
About Inria

    Inria is the French national research institute dedicated to digital science and technology. It employs 2,600 people. Its 200 agile project teams, generally run jointly with academic partners, include more than 3,500 scientists and engineers working to meet the challenges of digital technology, often at the interface with other disciplines. The Institute also employs numerous talents in over forty different professions. 900 research support staff contribute to the preparation and development of scientific and entrepreneurial projects that have a worldwide impact.

    Instructions to apply

    Defence Security : This position is likely to be situated in a restricted area (ZRR), as defined in Decree No. 2011-1425 relating to the protection of national scientific and technical potential (PPST). Authorisation to enter such an area is granted by the director of the unit, following a favourable Ministerial decision, as defined in the decree of 3 July 2012 relating to the PPST. An unfavourable Ministerial decision in respect of a position situated in a ZRR would result in the cancellation of the appointment.

    Recruitment Policy : As part of its diversity policy, all Inria positions are accessible to people with disabilities.

    Warning : you must enter your e-mail address in order to save your application to Inria. Applications must be submitted online on the Inria website. Processing of applications sent from other channels is not guaranteed.
