Scientists have created a ‘digital mask’ that will allow facial images to be stored in medical records while preventing potentially sensitive personal biometric information from being extracted and shared.
In research published today in Nature Medicine, a team led by scientists from Cambridge and China used three-dimensional (3D) reconstruction and deep learning algorithms to erase identifiable features from facial images while retaining disease-relevant features needed for diagnosis.
Facial images can be useful for identifying signs of disease. For example, features such as deep forehead wrinkles and wrinkles around the eyes are significantly associated with coronary heart disease, while abnormal changes in eye movement can indicate poor visual function and visual cognitive developmental problems. However, facial images also inevitably record other biometric information about the patient, including their race, sex, age and mood.
With the increasing digitalisation of medical records comes the risk of data breaches. While most patient data can be anonymised, facial data is harder to anonymise while retaining essential information. Common methods, such as blurring or cropping identifiable areas, can discard important disease-relevant information, yet still fail to fully evade face recognition systems.
Due to privacy concerns, people often hesitate to share their medical data for public medical research or electronic health records, hindering the development of digital medical care.
“During the COVID-19 pandemic, we had to turn to consultations over the phone or by video link rather than in person. Remote healthcare for eye diseases requires patients to share a large amount of digital facial information. Patients want to know that their potentially sensitive information is secure and that their privacy is protected.”
Professor Haotian Lin, Sun Yat-sen University, Guangzhou, China
Professor Lin and colleagues developed a ‘digital mask’ that takes an original video of a patient’s face and, using a deep learning algorithm and 3D reconstruction, outputs a video that retains disease-relevant features while discarding as much of the patient’s personal biometric information as possible – and from which it is not possible to identify the individual.
Deep learning extracts features from different facial parts, while 3D reconstruction automatically digitises the shapes and movement of 3D faces, eyelids, and eyeballs based on the extracted facial features. Converting the digital mask videos back to the original videos is extremely difficult because most of the necessary information is no longer retained in the mask.
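The two-stage pipeline described above can be sketched in simplified form: a feature-extraction stage reduces each frame to a small set of disease-relevant parameters, and a reconstruction stage renders a synthetic mask from those parameters alone. The sketch below is purely illustrative – the function names and toy "measurements" are assumptions, not the authors' actual model – but it shows why inversion is hard: the mask is generated only from the retained parameters, never from the original pixels.

```python
import numpy as np

def extract_features(frame: np.ndarray) -> dict:
    """Stand-in for the deep-learning stage: reduce a face frame to a few
    disease-relevant parameters (e.g. eyelid aperture, gaze asymmetry),
    discarding pixel-level identity information. A real model would regress
    these from learned facial landmarks; here they are toy statistics."""
    h, w = frame.shape
    eyelid_aperture = float(frame[: h // 2].mean())
    gaze_asymmetry = float(frame[:, : w // 2].mean() - frame[:, w // 2:].mean())
    return {"eyelid_aperture": eyelid_aperture, "gaze_asymmetry": gaze_asymmetry}

def reconstruct_mask(params: dict, size: int = 64) -> np.ndarray:
    """Stand-in for the 3D-reconstruction stage: render a schematic face
    from the retained parameters only. Because the original pixels are
    never consulted, the output cannot be inverted back to the patient."""
    canvas = np.zeros((size, size))
    # Clamp the aperture parameter to [0, 1] and scale it to a band height.
    aperture = int(min(max(params["eyelid_aperture"], 0.0), 1.0) * size // 4)
    centre = size // 2
    canvas[centre - aperture: centre + aperture, :] = 1.0  # schematic eye opening
    return canvas

# Simulate one greyscale video frame and run it through both stages.
frame = np.random.default_rng(0).random((64, 64))
params = extract_features(frame)
mask = reconstruct_mask(params)
```

Only the handful of numbers in `params` flow from input to output, which mirrors the paper's claim that most of the information needed to recover the original video is simply no longer present in the mask.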