David Frohlich is an author and Professor of Interaction Design at the University of Surrey in the UK. He has contributed numerous studies and patents to the fields of digital photography, augmented paper, and memorabilia during his fourteen years as a Senior Research Scientist at HP Labs and his tenure as Director of the Digital World Research Centre at the University of Surrey. I had the chance to sit down with David to discuss his twenty years of research in the area of “audiophotography”.
Can you please explain the term “audiophotography”?
It’s a simple definition I use: [audiophotographs] are just ‘photographs with sound’. I pitch it as a new media form between photos and video. So if you think about slowing down the frame rate of video you get to the design space I’ve been playing in, which is a sequence of images with sound of various types. It’s sonically richer than video, in the sense that it can incorporate the ambient sounds that video does, as well as voiceover, storytelling, and associating photos with music. This turns out to be very nostalgic in its own right and also atmospheric in communicating a mood. In that sense, it’s kind of fundamental to what we do already with photos. One of the tricks has been [to ask] ‘what do consumers already want?’ because you can do very complex things with media, but not everyone wants to be a multimedia editor. I came at it from HP, thinking about how to make the process very simple. Since then, I’ve moved on to working in India and South Africa with developing communities where the needs are very different, but also similar in making [the technology] very easy. We’ve thought of it as an audio-visual communication tool for people with low levels of literacy because you don’t need text.
Does audiophotography work with printed photographs? If so, how?
This has been one of my big interests, especially coming out of HP, which is a printing company as well as a computing company. You can, of course, play back audiophotos just like video on screen, so they are compatible with screen-based technology. But one of the intriguing things is that you can also play back sound from print. When I got into this, I asked one of the engineers at HP, “How can you do this? Can you encode sound into paper?” There are about six different technologies that you can do this with. Printing the sound file as a pattern of speckled dots is one method. Our very first demo was an audio scanner, which scanned the printed sound file to turn it back into a .wav file. Embedding chips in the paper is another way of doing it; however, we didn’t pursue this method. Now with advances in printed electronics, you can begin to print circuits and embed chips in paper and books. More common (and you’re seeing this come into production now) is augmented reality with optical recognition. QR codes are an example, as are [apps like] Blippar or Layar that actually recognize the image itself. That’s not embedding sound into the paper but tagging the paper so you can then fetch the sound from somewhere else. It’s surprising how many technologies make this possible. It then becomes an issue of ‘do people want that?’
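The dot-pattern method David mentions can be illustrated with a toy round trip: raw audio bytes become a grid of black-and-white dots that could be printed, and reading that grid back (a "scan") recovers the bytes. This is only a minimal sketch of the general idea, not HP's actual encoding; a real system would add calibration marks and error correction, and the grid width and placeholder payload below are arbitrary.

```python
# Toy illustration of "printing" a sound file as a pattern of dots:
# each bit of the audio data becomes one black (1) or white (0) dot.

def bytes_to_dot_grid(data, width=64):
    """Turn raw bytes into rows of 0/1 'dots' (1 = black = bit set)."""
    bits = []
    for byte in data:
        for i in range(7, -1, -1):          # most-significant bit first
            bits.append((byte >> i) & 1)
    while len(bits) % width:                # pad the last row with white dots
        bits.append(0)
    return [bits[r:r + width] for r in range(0, len(bits), width)]

def dot_grid_to_bytes(grid, length):
    """Reassemble the original bytes from a scanned dot grid."""
    bits = [b for row in grid for b in row]
    out = bytearray()
    for i in range(length):
        byte = 0
        for b in bits[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# Round trip: a few bytes standing in for a .wav payload.
payload = b"RIFF....WAVEfmt "
grid = bytes_to_dot_grid(payload)
assert dot_grid_to_bytes(grid, len(payload)) == payload
```

In practice the grid would be rendered at a known dot pitch and recovered by thresholding the scanned image back into bits, which is where the error correction earns its keep.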
Do you think people want it?
I think people don’t know they want it… If they see it, they like it. One of my latest demos is a photobook, which has photosensitive tags on each page wired to a chip in the back. The chip sends the data from the tags off the paper, which allows a soundscape to play for each page. So, if you imagine a beautiful photobook memory keepsake that you’ve made of a special trip or to commemorate a birthday, you could have a whole range of sounds linked to that album and the sounds could, technically, just play in the room when you open the book. You can’t do that very easily with commercial technology today; [instead] you’d have to hold a mobile phone over the print to link to the sounds, and this doesn’t provide the same experience. I think people will like it if they see it, and some of the strong applications are educational books for kids, as well as language learning, where you need to hear the text and the sound of language. Another application is anything where sound can compensate for some kind of disability in another domain. So if people are visually impaired, some parts of it could be read aloud to them. Cross-language pieces such as leaflets would also benefit from this technology, where adjunct information or additional languages could be read aloud.
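The photobook behaviour David describes could be sketched as a simple control loop: the chip reads the light level on each page's photosensitive tag, and the exposed tag selects which soundscape to play. Everything below (the readings, the threshold, the page-to-sound mapping) is a hypothetical illustration of that logic, not the demo's actual firmware.

```python
# Hypothetical page-sensing logic for the audiophoto book demo:
# an uncovered (well-lit) tag means that page is open to the reader.

PAGE_SOUNDS = {1: "beach_ambience.wav", 2: "street_market.wav", 3: "birthday_song.wav"}
LIGHT_THRESHOLD = 0.5   # assumed normalised sensor reading; above = tag exposed

def page_open(readings):
    """Given {page: light level}, return the open page, or None if the book is shut.
    When several tags are lit, assume the highest-numbered page is the one in view."""
    open_pages = [page for page, level in readings.items() if level > LIGHT_THRESHOLD]
    return max(open_pages) if open_pages else None

def soundscape_for(readings):
    """Map the current sensor readings to the sound file that should play."""
    page = page_open(readings)
    return PAGE_SOUNDS.get(page) if page is not None else None

# Pages 1 and 2 are both lit (page 2 was just turned to), page 3 is covered.
assert soundscape_for({1: 0.9, 2: 0.8, 3: 0.1}) == "street_market.wav"
```

A real device would poll the tags continuously and debounce page turns before switching sounds, but the mapping from sensor state to soundscape is the core of the idea.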
You are obviously very passionate about this topic. What has kept you interested in studying it for twenty years?
I think it’s the belief that it’s a new form of photography that has kept me interested. I once thought of it like early movies and “talkies” because it seemed to be a similar value proposition. If you added sound to silent movies, why not add sound to silent photographs? In fact, the very phrase “silent photographs” only arises because of the possibility of talking photographs. So I still think there’s something amazing about reinventing the photograph by adding this other layer. Many people feel that’s already been done with video, but as you’ve heard from this interview, there are more creative and interesting things you can do with audiophotos sonically, and also, you can’t really print video. I didn’t really mean to work on it for so long but I kept coming back to it. A scientist friend of mine once said, “You only really have two or three good ideas in your life and you tend to keep coming back to them”. For me, audiophotography has been one of mine.
Audio represents one of our five senses. Have you ever thought about incorporating, or layering, other sensory experiences?
I have thought about this and I actually have one of the few granted patents on a smell camera. Smell is very evocative for memory and we’d thought about how you’d combine smell technically, but it is very difficult to miniaturize smell synthesis technology. It’s all technically possible, which is one of the great things about being a scientist or inventor. It keeps you motivated to think, “Wow, I wonder how you could make that actually happen?” Although we patented it, we didn’t actually make [the smell camera]. I’m not even sure it could be made now fifteen years later. I heard that Kodak had been making ‘scratch and sniff’ type smell photos, but they had a problem where the photos went off and the smell kept getting worse and worse over time, which is probably not what you want for a consumer product.
Are there any examples of audiophotography in today’s world of smartphone cameras and apps?
Yes! In my book [Fast design, slow innovation: Audiophotography ten years on] I list about twenty audiophoto apps and twenty audiophoto narrative apps. Audiophoto apps are mainly for annotating photos with sound, usually with a single sound clip and they’re highly suitable for sending to other people. Audiophoto narratives have more to do with digital storytelling. Different companies have promoted it in different contexts but it’s essentially like making an audiophoto album, where you can browse through or watch through a sequence. On a recent trip to Niagara Falls, I took audiophoto frames and at the end of the day I had a whole story to which I could later add voiceover audio.
What’s next on your research agenda? Can you tell us about your latest work at the Digital World Research Centre?
I’m about to focus on the art of audiophotography a little bit harder than I have before. I have tended to come at these technologies from a platform and system perspective thinking about what system is needed to support this behaviour and then looking at the practice of the behaviour and how it might change. I also have a bid in called “Next Generation Paper”, thinking about exactly what you were asking about before [including] the uses for augmented paper, if it was better designed than simply holding a phone over a piece of paper. We’re thinking of developing both optical and printed electronic technologies for that.
If you would like to see examples of David’s work or you would like to contact him directly, please follow the links below: