Soon, we won’t need to use the Help function. The computer will sense that we have a problem and come to the rescue by itself. This is one of the possible implications of new research at the University of Copenhagen and the University of Helsinki.
“We can make a computer edit images entirely based on thoughts generated by human subjects. The computer has absolutely no prior information about which features it is supposed to edit or how. Nobody has ever done this before,” says Associate Professor Tuukka Ruotsalo, Department of Computer Science, University of Copenhagen.
The results are presented in an article accepted for publication at CVPR 2022 (the Conference on Computer Vision and Pattern Recognition), the most prestigious conference in the field.
Brain activity as the sole input
In the underlying study, 30 participants were fitted with caps containing electrodes that record electrical brain signals (electroencephalography, EEG). All participants were shown the same set of 200 facial images. They were also given a series of tasks, such as looking for female faces, looking for older people, or looking for blonde hair.
The participants did not perform any actions; they simply looked briefly at the images – 0.5 seconds for each image. Based on their brain activity, the machine first maps the given preference and then edits the images accordingly. So, if the task was to look for older people, the computer would modify the portraits of the younger people, making them look older. And if the task was to look for a given hair color, everybody would get that color.
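The two-step process described above – decode which images the brain responded to as relevant, then derive an edit from that preference – can be sketched in a few lines. The sketch below is purely illustrative and not taken from the paper: it assumes each image corresponds to a vector in a generative model's latent space, and it stands in for EEG decoding with a precomputed relevance array; all names, shapes, and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 8                                  # assumed latent vector size
latents = rng.normal(size=(200, latent_dim))    # one latent vector per face image

# Stand-in for EEG decoding: True where the brain response marked the
# image as matching the task (e.g. "older person").
relevant = rng.random(200) < 0.3

# Estimate a semantic editing direction as the difference between the
# mean latent of relevant images and the mean latent of the rest.
direction = latents[relevant].mean(axis=0) - latents[~relevant].mean(axis=0)
direction /= np.linalg.norm(direction)

# "Edit" a non-matching image by moving its latent along that direction;
# a generator network would then render the edited latent as an image.
edited = latents[0] + 2.0 * direction
```

The key point the sketch captures is that the preference itself is never hand-labeled: the direction of the edit is inferred entirely from which images evoked a matching brain response.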
“Notably, the computer had no knowledge of face recognition and would have no idea about gender, hair color, or any other relevant features. Still, it only edited the feature in question, leaving other facial features unchanged,” comments PhD Student Keith Davis, University of Helsinki.
Some may argue that plenty of software capable of manipulating facial…