The work done by these artists is great, but I think they are taking the wrong approach. Here's how I would solve the problem of accurately reconstructing the color of old photographs.
The approach I'd take eliminates much of the painstaking work, but it does involve some hard work up front. The main objective is to replace color guessing and reliance on external information, i.e. information outside of the photograph itself, with colors chosen from evidence in the photograph. However, the approach I propose does require work for each combination of camera and film, and I suspect there are many such combinations. It also relies on experts identifying which camera and film were used in order to get the best results.
The first thing I would do is replicate the period camera and film as accurately as possible, with one alteration: I would insert a mostly transparent glass plate that reflects something like 20% of the light. This quasi-mirror would sit at an angle to the film-lens axis. The majority of the light would pass through to the film and create a picture like those taken in the period. The reflected light would go to a modern digital camera that takes a color digital photograph. Let's call this device a dicamera because it takes two images of a single input scene. Let's call the first image the Grayscale Analog Picture, or gap for short, and the second image the Digital Reference Image, or dri. Finally, we scan the gap to digitize it, producing the Analog-to-Digital Grayscale Image, or adgi.
I would take lots of pictures with this reconstructed dicamera. I would try to take pictures similar to the ones being analyzed, but I would also take many dissimilar ones. More to the point, I would take pictures covering the entire color spectrum and lighting gamut.
Now here's the part where art and engineering diverge. Using the pairs of pictures created with the dicamera, i.e. the adgi and dri for each click, I would build a map from grayscale values to colors by comparing the grayscale value at each pixel of the adgi with the corresponding color value in the dri. I would do this for every pixel in every picture pair. The result is a probability map from grayscale values (0..255) to rgb values (0..255, 0..255, 0..255).
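The mapping step above can be sketched in a few lines. This is a minimal illustration, not a production implementation; the function name and data layout (the map as a nested dict of probabilities) are my own choices for clarity.

```python
import numpy as np
from collections import defaultdict

def build_cpm(pairs):
    """Accumulate a color probability map from (adgi, dri) image pairs.

    adgi: HxW uint8 grayscale array; dri: HxWx3 uint8 RGB array.
    Returns a dict: grayscale value -> {(r, g, b): probability}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for adgi, dri in pairs:
        # Walk both images pixel by pixel, pairing each grayscale
        # value with the color the digital reference recorded there.
        for g, rgb in zip(adgi.ravel(), dri.reshape(-1, 3)):
            counts[int(g)][tuple(int(c) for c in rgb)] += 1
    # Normalize the counts into probabilities per grayscale value.
    cpm = {}
    for g, rgb_counts in counts.items():
        total = sum(rgb_counts.values())
        cpm[g] = {rgb: n / total for rgb, n in rgb_counts.items()}
    return cpm
```

With enough dicamera pairs, each grayscale value accumulates a distribution of plausible colors rather than a single answer, which is exactly what the later artist-facing steps need.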
Here's where it all comes together. Now we take the digitized and cleaned images we want to colorize. Using the color probability map, or cpm, we just created, we present the artist with the original image and a colorized image that uses the most probable colors. The most probable colors could simply be the highest-probability entries in the cpm, or the choice could take facial and object recognition into account.
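The simplest version of that first-pass colorization, taking just the highest-probability entry per grayscale value, might look like the sketch below. It assumes a cpm in the shape produced above; the per-pixel argmax stands in for the smarter object-aware selection mentioned.

```python
import numpy as np

def auto_colorize(gray, cpm, fallback=(128, 128, 128)):
    """Colorize a grayscale image by picking, for each grayscale value,
    the most probable RGB color in the cpm. A naive per-pixel argmax;
    object recognition could refine the choice.

    gray: HxW uint8 array; cpm: dict of gray -> {(r, g, b): prob}.
    """
    # Precompute a 256-entry lookup table of most-probable colors,
    # falling back to neutral gray for values never seen in training.
    lut = np.array(
        [max(cpm[g], key=cpm[g].get) if g in cpm else fallback
         for g in range(256)],
        dtype=np.uint8,
    )
    return lut[gray]  # fancy indexing maps every pixel in one step
```

Because the lookup table has only 256 entries, the expensive part is done once per cpm, and colorizing even a large scan is a single array-indexing operation.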
In any case, the artist is presented with the original grayscale image, an automatically colored image, and a third image in which he picks other colors from the cpm. He is also shown the probability mapping of colors when he selects or hovers over an area of the picture. The cpm can be illustrated as a 2D graph where the x-axis is hue and the y-axis is probability.
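That hue-versus-probability graph can be derived from the cpm directly. Here is one hedged sketch: it aggregates the cpm entries for the grayscale values found in the artist's selected region into hue bins, converting each candidate RGB color to a hue with the standard library's `colorsys`. The binning granularity is an arbitrary choice.

```python
import colorsys
from collections import defaultdict

def hue_histogram(cpm, gray_values, bins=36):
    """Aggregate cpm entries for the grayscale values in a selected
    region into a hue -> probability histogram (x-axis: hue bin,
    y-axis: summed probability), for display alongside the image.
    """
    hist = defaultdict(float)
    for g in gray_values:
        for (r, gn, b), p in cpm.get(g, {}).items():
            # Convert the candidate color to HSV and bucket by hue.
            h, _, _ = colorsys.rgb_to_hsv(r / 255, gn / 255, b / 255)
            hist[int(h * bins) % bins] += p
    # Renormalize so the displayed bars sum to 1 for the region.
    total = sum(hist.values()) or 1.0
    return {bin_: p / total for bin_, p in hist.items()}
```

A UI would call this with the grayscale values under the current selection and plot the resulting dict as the 2D hue/probability graph described above.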
Using edge detection or area selection by your favorite method, the artist can pick different hues or colors from the cpm to colorize objects or parts of objects in the third image.
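Whatever selection tool produces the region, applying the artist's chosen color reduces to a masked assignment. A minimal sketch, assuming the selection arrives as a boolean mask (the mask could come from any edge-detection or flood-fill tool):

```python
import numpy as np

def recolor_region(colorized, mask, rgb):
    """Apply an artist-chosen cpm color to a selected region.

    colorized: HxWx3 uint8 image; mask: HxW boolean array from any
    edge-detection or area-selection tool; rgb: the chosen color.
    """
    out = colorized.copy()  # leave the working image untouched
    out[mask] = rgb         # flat fill; blending could preserve shading
    return out
```

A real tool would likely modulate the chosen hue by the original grayscale luminance instead of flat-filling, but the masked write is the core operation.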
This would be a more precise way of guessing colors, drawing on information in the original photograph itself. Deep learning that compares the artist's finished images against the automatic coloring could make the automated coloring more accurate over time. This approach may also reduce the tedium of hand-coloring the images.
#scitech