Computer vision experts from Queen Mary University of London have developed software that’s already better than humans at recognising what hand-drawn sketches are supposed to represent.
The program, known as Sketch-a-Net, is a ‘deep neural network’, a layered architecture loosely modelled on how the human brain processes visual input. It uses both the shape of the object and the order in which the strokes were drawn to work out what the sketch depicts.
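One way stroke order can be fed to an image-based network is to encode it as extra channels, with each channel showing the drawing up to a progressively later point in time. The snippet below is a minimal illustrative sketch of that idea using NumPy, not Sketch-a-Net's actual preprocessing; the grid size, point-based rasteriser, and three-channel split are all assumptions made for the example.

```python
import numpy as np

def rasterize(strokes, size=32):
    """Mark each stroke's (x, y) points on a binary grid."""
    img = np.zeros((size, size), dtype=np.float32)
    for stroke in strokes:
        for x, y in stroke:
            img[y, x] = 1.0
    return img

def order_channels(strokes, size=32, n_channels=3):
    """Encode stroke order: channel k contains roughly the first
    (k+1)/n_channels of the strokes, so later channels accumulate
    towards the complete drawing."""
    channels = []
    for k in range(1, n_channels + 1):
        cut = int(np.ceil(len(strokes) * k / n_channels))
        channels.append(rasterize(strokes[:cut], size))
    return np.stack(channels)  # shape: (n_channels, size, size)

# Toy sketch: three short strokes drawn one after another.
strokes = [
    [(2, 2), (3, 2), (4, 2)],   # first stroke
    [(2, 5), (3, 5), (4, 5)],   # second stroke
    [(2, 8), (3, 8), (4, 8)],   # third stroke
]
x = order_channels(strokes)
```

The resulting three-channel array could then be passed to a convolutional network in place of an ordinary single-channel image, letting the same filters see both the final shape and how it was built up.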
In early testing, it managed a success rate of 74.9%, compared with an average of just 73.1% for humans on the same task. It’s particularly good at distinguishing between similar objects – telling different types of bird apart with 42.5% accuracy, against 24.8% for humans.
It’s hoped the technology could be applied in mobile phones and tablets so touchscreens can understand what you’re drawing. Its inventors suggest it could also be used to search image libraries by quickly sketching what you’re looking for.
“It’s exciting that our computer program can solve the task even better than humans can,” said Timothy Hospedales, co-author of the study. “Sketches are an interesting area to study because they have been used since prehistoric times for communication and now, with the increase in use of touchscreens, they are becoming a much more common communication tool again.”
He added: “This could really have a huge impact for areas such as police forensics, touchscreen use and image retrieval, and ultimately will help us get to the bottom of visual understanding.”
Image credit: Karin Dirkx // CC BY-NC-ND 2.0