You will never hear someone describe underwater photography as a conventional hobby. 

It’s a hard and expensive pursuit that requires a plethora of special tools and techniques just to take a picture. Even with all of that gear, you often end up with a blue-washed, oddly distorted image that looks as if it had been shot through a thick layer of glass. Water not only washes every color into a blue cast but also bends and scatters the light, so you don’t get the sharp image you were aiming for. And this is not just a problem of aesthetics: losing the true colors makes it harder to apply computer vision to underwater images for scientific research.

As in many other fields, AI has something to say here. The newly developed Sea-thru system uses a physics-based computer vision algorithm to effectively drain the water out of the picture, producing the image you set out to capture. Contrary to what some might assume, Sea-thru is not a simple color filter: it does not just boost the red and yellow channels to make a photo look nicer, the way a Photoshop filter would. Using AI techniques, it corrects both the colors and the physical properties of the photograph back to their true values.
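For contrast, here is a minimal Python sketch (purely illustrative, not from the paper) of the kind of global channel-boost correction that Sea-thru goes beyond: a gray-world white balance applies a single gain per color channel across the whole frame and has no notion of how far each pixel is from the camera.

```python
import numpy as np

def naive_gray_world(img: np.ndarray) -> np.ndarray:
    """Global gray-world white balance: scale each channel so its mean
    matches the overall mean. This is the one-size-fits-all color boost
    that Sea-thru is *not* -- it ignores scene depth entirely."""
    # img: float RGB in [0, 1], shape (H, W, 3)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-8)
    return np.clip(img * gains, 0.0, 1.0)
```

Because water attenuates red light more with every extra meter of range, a single global gain can never be right for the foreground and the background at the same time.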

The algorithm takes RAW RGBD images shot under natural lighting as input, removing the need for special underwater lights. That matters because semi-transparent seawater combined with artificial lighting makes objects far from the light source appear faded and unclear in the photo. Sea-thru makes this pain point go away by analyzing the depth map in combination with the RAW image. The authors use a revised image formation model for undersea scenes to compute the range-dependent attenuation coefficient.
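To make the idea concrete, here is a rough sketch of how such a model can be inverted once its parameters are known; it is not the authors’ implementation. The revised underwater image formation model is commonly written as I_c = J_c · e^(−βD_c·z) + B∞_c · (1 − e^(−βB_c·z)), where z is the range from the depth map, βD is the attenuation of the direct signal (which Sea-thru treats as range-dependent), and βB governs the backscatter. Given estimates of those coefficients, the unattenuated scene J can be recovered per pixel:

```python
import numpy as np

def recover_scene(img, depth, beta_D, B_inf, beta_B):
    """Invert an image formation model of the form
        I_c = J_c * exp(-beta_D_c(z) * z) + B_inf_c * (1 - exp(-beta_B_c * z))
    to recover the unattenuated scene J.

    img    : (H, W, 3) linear RGB image in [0, 1]
    depth  : (H, W)    range map in meters
    beta_D : callable z -> per-pixel coefficients, or a fixed (3,) array
    B_inf  : (3,)      backscatter color at infinite range (veiling light)
    beta_B : (3,)      backscatter attenuation coefficient

    All coefficient values are placeholders; Sea-thru estimates them from
    the image itself rather than taking them as given.
    """
    z = depth[..., None]                                    # (H, W, 1)
    backscatter = B_inf * (1.0 - np.exp(-np.asarray(beta_B) * z))
    direct = img - backscatter                              # strip the veiling light
    bD = beta_D(depth) if callable(beta_D) else np.asarray(beta_D)
    J = direct * np.exp(bD * z)                             # undo range attenuation
    return np.clip(J, 0.0, 1.0)
```

The hard part, and the core of the paper, is estimating βD, βB, and B∞ from a single RGBD capture; the inversion itself is the easy step.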

Another obstacle to high-quality underwater photography is the haze caused by light reflecting off particles floating in the seawater, known as backscatter. The Sea-thru method estimates backscatter using the darkest pixels in the image together with their known range information. Then, it uses an estimate of the spatially varying illuminant to obtain the range-dependent attenuation coefficient.
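Here is a hedged sketch of the dark-pixel idea, not the paper’s exact procedure: bin the pixels by range, take the darkest few in each bin as backscatter samples, and fit a saturating exponential to them. The function name, bin count, and fitting choices below are assumptions made for illustration, and the published method also fits a small residual direct-signal term that is omitted here.

```python
import numpy as np
from scipy.optimize import curve_fit

def estimate_backscatter(channel, depth, num_bins=10, frac=0.01):
    """Estimate backscatter parameters for one color channel.

    channel : (H, W) linear intensities in [0, 1]
    depth   : (H, W) range map in meters
    Returns (B_inf, beta_B) for the model B(z) = B_inf * (1 - exp(-beta_B * z)).
    """
    z_edges = np.linspace(depth.min(), depth.max(), num_bins + 1)
    zs, bs = [], []
    for lo, hi in zip(z_edges[:-1], z_edges[1:]):
        mask = (depth >= lo) & (depth < hi)
        if mask.sum() < 10:
            continue
        vals = channel[mask]
        k = max(1, int(frac * vals.size))
        darkest = np.argsort(vals)[:k]        # darkest pixels in this range bin
        zs.append(depth[mask][darkest])
        bs.append(vals[darkest])
    zs, bs = np.concatenate(zs), np.concatenate(bs)

    model = lambda z, B_inf, beta_B: B_inf * (1.0 - np.exp(-beta_B * z))
    (B_inf, beta_B), _ = curve_fit(model, zs, bs,
                                   p0=[bs.max(), 0.1],
                                   bounds=([0.0, 0.0], [1.0, 5.0]))
    return B_inf, beta_B
```

Run once per color channel, this yields the backscatter term that gets subtracted before the attenuation is undone.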

The research dataset includes RAW images and corresponding depth maps, divided into five subsets that differ in camera, lens, shooting angle, depth, scene type, and backscatter coefficient. To compute each depth map, the researchers photographed the same scene from several different angles and applied structure-from-motion. This valuable collection of more than 1,100 images from two optically different water bodies has been made freely available for non-profit research purposes. The researchers presented their work at the CVPR conference, showing that their method, built on the revised image formation model, outperforms approaches that rely on the atmospheric model.
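If you want to experiment with RAW-plus-depth pairs like these, a minimal loading sketch could look like the following. The directory layout and file names are hypothetical, and rawpy and tifffile are simply one common choice for decoding camera RAW files and TIFF depth maps; the dataset’s actual formats may differ.

```python
import numpy as np
import rawpy      # decodes camera RAW files (NEF, ARW, ...)
import tifffile   # depth maps are often shipped as floating-point TIFFs

# Hypothetical paths -- adjust to the dataset's real layout.
raw_path = "D3/raw/image_0001.NEF"
depth_path = "D3/depth/image_0001.tif"

with rawpy.imread(raw_path) as raw:
    # Linear output: no gamma curve, no auto-brightening, 16-bit --
    # a physics-based model needs linear intensities, not display-ready ones.
    rgb = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)
rgb = rgb.astype(np.float32) / 65535.0

depth = tifffile.imread(depth_path).astype(np.float32)  # range in meters
assert depth.shape == rgb.shape[:2], "depth map must align with the image"
```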

[Dataset description table]

Consistent removal of water from undersea images will open up large underwater datasets to a wide range of computer vision and machine learning algorithms, and it clears colorful, exciting paths for underwater exploration and conservation at a time when our seas and oceans are steadily getting more polluted and warmer.

You can read the full paper here.