Is Probabilistic Programming the Future of Computer Graphics?

MIT computer scientists revisit a once-impossible computer vision technique.

Apr 19 2015, 8:25pm

Image: MIT/Kulkarni

Code done right should be small. A good programmer never repeats code that could be made modular and reused (as a function, an object, or, eventually, a complete library), nor do they allow a machine to do unnecessary or redundant work. Still, even a lightweight program might wind up being many thousands of lines of code.

Such is software engineering, or maybe not. Computer scientists are currently hard at work on so-called probabilistic programming languages, in which machine learning techniques are applied to the process of coding itself (graphics, specifically). Next month at the Computer Vision and Pattern Recognition conference, MIT researchers will present probabilistic models that shrink computer-vision programs that formerly required thousands of lines of code down to fewer than 50, while achieving the same results.

The MIT tool, called Picture, is described in a paper released last week. It is, in the words of the MIT team, "a probabilistic programming language for scene understanding that allows researchers to express complex generative vision models, while automatically solving them using fast general-purpose inference machinery. We use Picture to write programs for 3D face analysis, 3D human pose estimation, and 3D object reconstruction—each competitive with specially engineered baselines."

Video: MIT/Kulkarni

The technique at work actually dates from the earliest days of artificial intelligence research and is known as inverse graphics. As Tejas D. Kulkarni, the MIT PhD student leading the Picture research, notes in a separate blog post, this is the real achievement of the work—reviving a once-impossible, or at least prohibitively difficult, computer vision technique—and not so much the reduced line count, which is more incidental.

The idea is that rather than recreate a scene as we imagine it to be, pixel by painstaking pixel, we can start from a model of how scene properties like lighting and reflectivity produce an image, and then infer which scene most likely produced the image we actually see. With help from a training set of IRL photographs, the algorithm learns how different things should look given some set of conditions as inputs. Rather than painting a picture, it's as if a picture paints itself given some parameters—a much easier computation.
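To make the guess-render-compare loop concrete, here is a minimal sketch of that inference idea in plain Python. This is not Picture's actual machinery; the "scene" is just the position of a bright blob on a 10-pixel strip, the `render` function stands in for a real graphics renderer, and the inference is simple likelihood-weighted sampling, all of which are illustrative assumptions.

```python
import math
import random

# Toy "renderer": maps one scene parameter (blob position on a
# 10-pixel strip) to pixel intensities. A system like Picture would
# use a full 3D renderer here; this is a minimal stand-in.
def render(position, width=10):
    return [math.exp(-0.5 * (i - position) ** 2) for i in range(width)]

def log_likelihood(observed, rendered, noise=0.1):
    # Gaussian pixel-noise model scoring how well a rendered guess
    # matches the observed image (higher is better).
    return sum(-0.5 * ((o - r) / noise) ** 2
               for o, r in zip(observed, rendered))

def infer_position(observed, samples=5000, seed=0):
    # Inverse graphics by sampling: propose scene parameters from a
    # prior, render each proposal, and keep the one whose rendering
    # best explains the observed image.
    rng = random.Random(seed)
    best, best_ll = None, float("-inf")
    for _ in range(samples):
        guess = rng.uniform(0.0, 9.0)  # prior over blob positions
        ll = log_likelihood(observed, render(guess))
        if ll > best_ll:
            best, best_ll = guess, ll
    return best

observed = render(6.0)            # a "photograph" of a known scene
estimate = infer_position(observed)
```

Here the forward direction (scene to pixels) is easy to write down, and the hard inverse direction (pixels to scene) falls out of generic sampling machinery, which is the division of labor the paragraph above describes.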

"Using learning to improve inference will be task-specific, but probabilistic programming may alleviate re-writing code across different problems," Kulkarni offers in a separate statement. "The code can be generic if the learning machinery is powerful enough to learn different strategies for different tasks."