PhD Position F/M Balancing Performance and Sustainability for FaaS in the Fog

Inria
April 08, 2023

2023-05768 - PhD Position F/M Balancing Performance and Sustainability for FaaS in the Fog

Contract type : Fixed-term contract

Level of qualifications required : Graduate degree or equivalent

Other valued qualifications : Master's degree

Function : PhD Position

About the research centre or Inria department

The Inria Rennes - Bretagne Atlantique Centre is one of Inria's eight centres and has more than thirty research teams. The Inria Centre is a major and recognized player in the field of digital sciences. It is at the heart of a rich R&D and innovation ecosystem: highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher education players, laboratories of excellence, technological research institutes, etc.

Context

The PhD student will work in the MYRIADS team, part of IRISA and INRIA RENNES – BRETAGNE ATLANTIQUE in Rennes. The team focuses on building next-generation utility computing platforms for highly distributed cloud and fog infrastructures (https://www.irisa.fr/myriads). Rennes is the capital city of Brittany, in the western part of France. It is easy to reach thanks to the high-speed train line to Paris. Rennes is a lively city and a major center for higher education and research.

The work will be carried out in collaboration with the University of Guadalajara in Mexico. The student will be co-supervised by Professor Hector Duran-Limon at that institution.

Assignment

Fog computing is an extension of the traditional cloud computing model in which compute, storage, and network capabilities are distributed closer to users [1]. Fog computing is motivated by the need to support Internet of Things (IoT) applications, such as smart cities and AI-enabled surveillance systems, that have strict demands for bandwidth and low-latency computation. A compelling programming model for developing such applications is the Function-as-a-Service (FaaS) model [2], the core element of serverless computing. FaaS supports easy movement of functions along the cloud-to-thing continuum, allowing optimization for diverse factors such as latency and energy efficiency. Moreover, FaaS supports fine-grained, short-lived resource allocations, enabling increased infrastructure utilization.
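As a simple illustration of the FaaS programming model, the sketch below shows the kind of small, short-lived function that such a platform could host. It loosely follows the single-handler convention used by Python FaaS templates (for example, OpenFaaS's python3 template); the event fields and threshold are purely illustrative assumptions.

    import json

    # Hypothetical handler for an air-quality alert function: the FaaS platform
    # invokes handle() with the request body and scales instances on demand.
    def handle(req):
        event = json.loads(req)                # e.g. {"sensor": "pm25-12", "pm25": 41.0}
        alert = event.get("pm25", 0.0) > 35.0  # illustrative threshold
        return json.dumps({"sensor": event.get("sensor"), "alert": alert})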

Managing FaaS applications in fog environments presents significant challenges [3]. First, fog resources are geo-distributed and heterogeneous (e.g., sensors, mobile devices, micro data centers), with diverse power sources (e.g., battery-powered, grid-powered), and subject to unpredictable changes (e.g., fog nodes joining, failing), making it difficult to make effective management decisions. Second, FaaS workloads are highly dynamic and have a high deployment density due to the short duration and small size of individual functions. This exacerbates interference between workloads [4], making it difficult to predict the performance and energy impact of management actions. Third, FaaS platforms must carefully balance meeting the Quality of Service (QoS) requirements of applications with reducing the energy consumption and carbon footprint of the fog infrastructure, which is particularly pressing given growing concerns over sustainability [5]. These objectives often conflict with each other, as in the case where promptly releasing unused functions saves energy but leads to cold-start latencies, reducing QoS [3].

Main activities

This thesis will explore QoS-driven, energy-aware management of FaaS applications in fog environments. Specifically, the goal is to develop an automated management solution that can ensure QoS requirements for FaaS applications while also reducing energy and carbon usage. This solution will be able to evaluate the potential costs and benefits of different management actions [6] by predicting their impact on performance and energy consumption [7]. To achieve this, the solution will use performance interference analysis techniques that were initially developed for High Performance Computing (HPC) applications [8, 9] and adapt them to the specific characteristics of FaaS workloads [4]. Energy and carbon will be treated as first-class resources, and management will be guided by QoS requirements as well as energy consumption requirements [10], formalized in Service Level Agreements (SLAs).
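To make the intended decision logic concrete, here is a minimal, hypothetical sketch of how candidate management actions might be scored against SLA targets covering both QoS and energy; the class names, fields, weights, and the predict callback are illustrative assumptions, not components of an existing platform.

    from dataclasses import dataclass

    @dataclass
    class SLA:
        p95_latency_ms: float        # QoS target taken from the SLA
        energy_budget_joules: float  # energy consumption target taken from the SLA

    @dataclass
    class Prediction:
        p95_latency_ms: float  # predicted performance impact of an action
        energy_joules: float   # predicted energy impact of an action

    def penalty(pred: Prediction, sla: SLA, w_qos: float = 1.0, w_energy: float = 0.5) -> float:
        # Weighted penalty for exceeding the QoS and energy targets (illustrative weights).
        qos_violation = max(0.0, pred.p95_latency_ms - sla.p95_latency_ms)
        energy_violation = max(0.0, pred.energy_joules - sla.energy_budget_joules)
        return w_qos * qos_violation + w_energy * energy_violation

    def choose_action(actions, predict, sla: SLA):
        # Pick the candidate action (e.g., scale out, migrate, release) with the lowest predicted penalty.
        return min(actions, key=lambda action: penalty(predict(action), sla))

In the envisaged solution, the predict step would be supplied by the interference- and energy-aware models developed in the thesis rather than by a placeholder.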

The developed techniques and management tools will be deployed and tested in an environmental monitoring project on the Beaulieu campus, in partnership with researchers from the Observatoire des Sciences de l'Univers de Rennes (OSUR). The project uses a fog infrastructure made up of sensors, actuators, resource-constrained edge nodes, and cloud nodes. This infrastructure supports hosting various FaaS workloads, such as applications for monitoring wildlife and water and air quality. The developed tools will build on the Kubernetes resource orchestration system and an existing open-source FaaS platform, such as OpenFaaS.
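As a hedged starting point for such tooling, the sketch below uses the official Kubernetes Python client to inspect the Deployments that OpenFaaS typically creates for functions (by default in the openfaas-fn namespace); the namespace and cluster-access details may differ on the actual testbed.

    from kubernetes import client, config

    # Assumes kubectl-style access to the cluster; OpenFaaS usually places one
    # Deployment per function in the "openfaas-fn" namespace.
    config.load_kube_config()
    apps = client.AppsV1Api()

    for dep in apps.list_namespaced_deployment(namespace="openfaas-fn").items:
        name = dep.metadata.name
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        print(f"{name}: {ready}/{desired} replicas ready")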

References

[1] R. Mahmud, R. Kotagiri, and R. Buyya, "Fog Computing: A Taxonomy, Survey and Future Directions", in Internet of Everything: Algorithms, Methodologies, Technologies and Perspectives, ed. by B. Di Martino, K.-C. Li, L. T. Yang, and A. Esposito, Springer Singapore, Singapore, 2018, pp. 103–130, doi: 10.1007/978-981-10-5861-5_5

[2] J. Schleier-Smith, V. Sreekanti, A. Khandelwal, J. Carreira, N. J. Yadwadkar, R. A. Popa, J. E. Gonzalez, I. Stoica, and D. A. Patterson, "What serverless computing is and should become: The next phase of cloud computing", Commun. ACM, vol. 64, no. 5, pp. 76–84, Apr. 2021, doi: 10.1145/3406011

[3] M. S. Aslanpour, A. N. Toosi, C. Cicconetti, B. Javadi, P. Sbarski, D. Taibi, M. Assuncao, S. S. Gill, R. Gaire, and S. Dustdar, "Serverless Edge Computing: Vision and Challenges", in 2021 Australasian Computer Science Week Multiconference (ACSW '21), ACM, New York, NY, USA, Article 10, pp. 1–10, doi: 10.1145/3437378.3444367

[4] L. Zhao, Y. Yang, Y. Li, X. Zhou, and K. Li, "Understanding, predicting and scheduling serverless workloads under partial interference", in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '21), ACM, New York, NY, USA, 2021, Article 22, pp. 1–15, doi: 10.1145/3458817.3476215

[5] P. Patros, J. Spillner, A. V. Papadopoulos, B. Varghese, O. Rana, and S. Dustdar, "Toward Sustainable Serverless Computing", IEEE Internet Computing, vol. 25, no. 6, pp. 42–50, Nov.–Dec. 2021, doi: 10.1109/MIC.2021.3093105

[6] N. Parlavantzas, L. M. Pham, A. Sinha, and C. Morin, "Cost-Effective Reconfiguration for Multi-Cloud Applications", in 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), Cambridge, UK, 2018, pp. 521–528, doi: 10.1109/PDP2018.2018.00088

[7] J. Flores-Contreras, H. A. Duran-Limon, A. Chavoya, et al., "Performance prediction of parallel applications: a systematic literature review", J. Supercomput., vol. 77, pp. 4014–4055, 2021, doi: 10.1007/s11227-020-03417-5

[8] J. Weinberg and A. Snavely, "User-guided symbiotic space-sharing of real workloads", in Proceedings of the 20th Annual International Conference on Supercomputing (ICS '06), ACM, New York, NY, USA, 2006, pp. 345–352, doi: 10.1145/1183401.1183450

[9] D. Yokoyama, B. Schulze, H. Kloh, M. Bandini, and V. Rebello, "Affinity aware scheduling model of cluster nodes in private clouds", J. Netw. Comput. Appl., vol. 95, pp. 94–104, Oct. 2017, doi: 10.1016/j.jnca.2017.08.001

[10] T. E. Anderson, A. Belay, M. Chowdhury, A. Cidon, and I. Zhang, "Treehouse: A Case For Carbon-Aware Datacenter Software", arXiv preprint arXiv:2201.02120, 2022

Skills
  • Excellent communication and writing skills
  • Strong programming and scripting skills in Linux environments
  • Knowledge and experience in one or more of the following areas: distributed systems, cloud, fog, IoT, performance and energy modeling, adaptive systems, HPC

Benefits package

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Possibility of teleworking (90 days per year) and flexible organization of working hours
  • Partial payment of insurance costs

Remuneration

Monthly gross salary amounting to :

  • 2051 euros for the first and second years
  • 2158 euros for the third year

General Information

  • Theme/Domain : Distributed Systems and Middleware; System & Networks (BAP E)
  • Town/city : Rennes
  • Inria Center : Centre Inria de l'Université de Rennes
  • Starting date : 2023-10-02
  • Duration of contract : 3 years
  • Deadline to apply : 2023-04-08

Contacts

  • Inria Team : MYRIADS
  • PhD Supervisor : Parlavantzas Nikolaos / [email protected]

About Inria

    Inria is the French national research institute dedicated to digital science and technology. It employs 2,600 people. Its 200 agile project teams, generally run jointly with academic partners, include more than 3,500 scientists and engineers working to meet the challenges of digital technology, often at the interface with other disciplines. The Institute also employs numerous talents in over forty different professions. 900 research support staff contribute to the preparation and development of scientific and entrepreneurial projects that have a worldwide impact.

    Instruction to apply

    Please submit online : your resume, cover letter and, if applicable, letters of recommendation

    For more information, please contact [email protected]

    Defence Security : This position is likely to be situated in a restricted area (ZRR), as defined in Decree No. 2011-1425 relating to the protection of national scientific and technical potential (PPST). Authorisation to enter an area is granted by the director of the unit, following a favourable Ministerial decision, as defined in the decree of 3 July 2012 relating to the PPST. An unfavourable Ministerial decision in respect of a position situated in a ZRR would result in the cancellation of the appointment.

    Recruitment Policy : As part of its diversity policy, all Inria positions are accessible to people with disabilities.

    Warning : you must enter your e-mail address in order to save your application to Inria. Applications must be submitted online on the Inria website. Processing of applications sent from other channels is not guaranteed.
