Basic principles

Several key technology trends are converging. Modern hardware captures higher-resolution photos and videos and, in the case of photos, more than a single frame (burst mode).
The challenge is to transfer and view this content on today's wide variety of devices, from smartphones and tablets to very large displays with 5K resolution and more. Wireless connections, once the exception, are now the rule. All these factors contribute to several issues in today's handling of images and videos.


The potential is in the difference

Cameras capture a large number of pixels per image, even on mobile devices, while the number of pixels a device can display to cover its entire screen is far smaller. A typical mobile phone camera captures 8 million pixels, but the phone's screen can only display about 2 million. Today, images are nevertheless transferred in their entirety from cloud storage to the device for viewing.
Because the images are too large for the screen, the device then has to shrink them algorithmically. In other words, too much data is transferred first, causing long waiting times and higher costs, only for the device to spend tedious computation making the image fit the screen.
Our technology eliminates any oversized transfer and instead retrieves a minimal amount of data according to the user's viewing behaviour. The larger the difference between the number of pixels captured and the number of pixels needed to fill the screen, the more data, time and money our approach saves.
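As a rough illustration of the potential, one can compare a full-resolution transfer with a transfer matched to the screen. The sketch below uses the 8-megapixel camera and 2-megapixel screen from the example above; the bytes-per-pixel figure is a simplifying assumption (uncompressed RGB), not a statement about any particular codec.

```python
# Illustrative estimate of the data saved by transferring only the
# pixels needed to fill the screen, instead of the full capture.
# Pixel counts follow the mobile-phone example in the text; the
# bytes-per-pixel value is an assumed uncompressed 8-bit RGB figure.

CAPTURED_PIXELS = 8_000_000   # typical mobile-phone camera
DISPLAY_PIXELS = 2_000_000    # typical mobile-phone screen
BYTES_PER_PIXEL = 3           # assumption: uncompressed 8-bit RGB

def transfer_savings(captured, displayed, bytes_per_pixel):
    """Return (bytes saved, fraction saved) when only the displayed
    pixels are transferred rather than the full capture."""
    full = captured * bytes_per_pixel
    needed = displayed * bytes_per_pixel
    return full - needed, 1 - needed / full

saved, fraction = transfer_savings(CAPTURED_PIXELS, DISPLAY_PIXELS,
                                   BYTES_PER_PIXEL)
print(f"Saved {saved / 1e6:.0f} MB per image ({fraction:.0%})")
# → Saved 18 MB per image (75%)
```

Compression changes the absolute numbers, but the proportion saved tracks the pixel-count ratio: the wider the gap between capture and display resolution, the larger the saving.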


Attempts to bypass the issues

There are some "brute-force" ways to view large images and videos on lower-resolution devices:

  • Create lower resolution versions of the images and videos
  • Compress heavily and accept loss of quality
  • Discard image data

All these approaches have significant disadvantages for both the provider and the user. Heavy compression or outright omission of image data degrades the quality of the viewing experience and, in some cases, the quality of interpretation (think of medical or security applications). Creating lower-quality versions of the images or videos requires a large number of files in most situations. Netflix, for example, stores about 120 versions of the same video to serve its customers' various devices; Facebook uses five formats. This is extremely inefficient and requires a tremendous amount of infrastructure to manage.


An image becomes a database

Would you ever consider copying an entire database to your device just to obtain one piece of information? You would not; you rely on flexible ways to query only the information you actually need. Although the gap between what you need and what is available is not as large as in the database example, the same gap exists when you need some visual information from an image or video. Until now, however, such flexible ways to query an image or video did not exist.


Our technology closes this gap by providing all the means required to query visual information (e.g. zoom level, color space, quality level, specific region) for the transfer of imagery over different types of networks.
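To make the idea concrete, such a query can be sketched as a small request structure. The class, field names and serialization below are purely illustrative, not an actual API; they only show the kind of parameters mentioned above (zoom level, region, color space, quality level).

```python
# Hypothetical sketch of querying an image like a database: instead
# of fetching the whole file, the client describes exactly the view
# it needs. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ImageQuery:
    image_id: str
    zoom_level: int            # e.g. 0 = full image, higher = closer
    region: tuple              # (x, y, width, height) at that zoom
    color_space: str = "sRGB"
    quality: int = 80          # 1-100: trade quality vs. transfer size

    def to_params(self):
        """Serialize the query into request parameters, e.g. for a
        URL query string sent to an image server."""
        x, y, w, h = self.region
        return {
            "id": self.image_id,
            "zoom": self.zoom_level,
            "region": f"{x},{y},{w},{h}",
            "cs": self.color_space,
            "q": self.quality,
        }

query = ImageQuery(image_id="vacation-042", zoom_level=2,
                   region=(1024, 512, 640, 480))
print(query.to_params())
```

The server would answer such a request with only the pixels covering the requested region at the requested zoom and quality, so the transferred data matches what is actually viewed.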

The principles behind this are described in more detail in "our view of the image world" and "The DaVinci project".