Models of Light Reflection for Computer Synthesized Pictures
- Jim Blinn
Siggraph 1977
Published by the Association for Computing Machinery, Inc.
When I was a grad student at Utah I spent a lot of time poking around the engineering library for interesting information. There I happened to stumble upon the archived editions of the Transactions of the Illuminating Engineering Society, which seemed like it would have useful info. Sure enough, in the 1910 volume there was an article by F. H. Gilpin, who had actually measured the reflection off some surfaces and had actual graphs of the reflection amount as a function of various angles. I originally attempted to digitize these graphs to make a sort of table-look-up function driven by the physical measurements. At the time there were no such things as scanners or digitizers; the most sophisticated tool I had was a Xerox machine that could make enlargements on paper. I did this and started to measure points on the curves with a ruler, entering the measurements manually. This got a bit boring, so I decided to go back to the library to troll for a more recent result.

I ultimately found a whole lot of articles on the subject, but most of them were models that were neither understandable nor really useful for computer graphics. I was particularly amused, though, by one with a title something like "The bidirectional reflectance characteristics of snow," which came out of the University of Minnesota (home of much snow). This led me to the Torrance-Sparrow paper, which ultimately made the most sense to me. They modeled specular reflection in terms of the vector H, halfway between the light and the eye. If the surface was a perfect mirror and the surface normal, N, aligned with H, then the light and eye were in the right position for perfect mirror reflection. If the surface was rougher, then the specular amount was a function of what percentage of the surface was pointed in the direction of H. Again this led to a function of the dot product (N.H). The particular function they derived was in terms of spherical geometry, giving a very complicated expression involving many trig functions.
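The halfway-vector construction described above is simple to state in code. A minimal sketch (function and variable names are my own, not from the paper), assuming N, L, and E are unit vectors pointing away from the surface toward the normal, light, and eye respectively:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    # Dot product of two 3-vectors.
    return sum(x * y for x, y in zip(a, b))

def halfway_cosine(N, L, E):
    """Return (N.H), where H is the unit vector halfway between the
    light direction L and the eye direction E. It equals 1 exactly
    when the surface is oriented for perfect mirror reflection."""
    H = normalize(tuple(l + e for l, e in zip(L, E)))
    return dot(N, H)
```

For example, with N = (0, 0, 1) and the light and eye placed symmetrically about the normal, `halfway_cosine` returns 1, the mirror-reflection condition.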
I spent weeks trying to simplify this formulation, filling up dozens of pages with trig identities. One of the hardest things was that the function was a choice among three different sub-functions that applied to different regions of (N,L,E) space, and whose boundaries in spherical geometry had to be calculated in order to figure out which sub-function to use. Finally I gave up, went back to their original geometric ideas, and rederived the whole thing using dot and cross products. And I realized that the boundaries were simply the regions where two of the three functions were equal. That led to the considerably simpler formulation that I wrote up for Siggraph as a practical solution you could evaluate relatively quickly in rendering software.

When I wrote up the result I was focused on the final answer as a function of (N.H), and so I introduced the paper with a simpler model reminiscent of Phong's model. He raised (E.R) to a power and I raised (N.H) to a power. I realized that these two functions were different, but they were similar: both had a maximum of 1 when E=R (or equivalently N=H) and fell off to smaller values as these vectors moved apart. In the second section of the paper I then proceeded to show my derivation of the more complex Torrance-Sparrow model as a function of (N.H).

One thing has always bothered me a bit. The TS model gives the fall-off function in terms of a physically meaningful quantity (the statistical distribution of micro-facet directions, a property of the surface), while Phong's calculation (cosine to a power) was just picked as a convenient way to sculpt the function into a desired shape. But raising a number to an arbitrary power (one that is not a power of 2) is actually more computationally expensive than the TS model calculations. So I have always wondered why people still speak of "cosine power" as a property of a surface even today. I understand that there is a menu item in programs such as Maya called "Blinn Shading" (as an alternative…
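The two specular terms contrasted above differ only in which cosine gets raised to the power. A small illustrative sketch of both (my own code, not from the paper; the exponent is arbitrary and the vectors are assumed to be unit length):

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(L, N):
    # Mirror reflection of L about the unit normal N: R = 2(N.L)N - L.
    d = dot(N, L)
    return tuple(2.0 * d * n - l for n, l in zip(N, L))

def phong_specular(N, L, E, power):
    # Phong: (E.R)^power, with R the mirror direction of the light.
    return max(0.0, dot(E, reflect(L, N))) ** power

def blinn_specular(N, L, E, power):
    # Blinn: (N.H)^power, with H halfway between light and eye.
    H = normalize(tuple(l + e for l, e in zip(L, E)))
    return max(0.0, dot(N, H)) ** power
```

Both terms peak at 1 when the eye sits exactly in the mirror direction; away from the peak, (N.H) falls off more slowly than (E.R) for the same exponent, since the angle between N and H is roughly half the angle between E and R.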
In the production of computer generated pictures of three-dimensional objects, one stage of the calculation is the determination of the intensity of a given object once its visibility has been established. This is typically done by modeling the surface as a perfect diffuser, sometimes with a specular component added for the simulation of highlights. This paper presents a more accurate function for the generation of highlights that is based on some experimental measurements of how light reflects from real surfaces. It differs from previous models in that the intensity of the highlight changes with the direction of the light source. Also, the position and shape of the highlights are somewhat different from those generated by simpler models. Finally, the highlight function generates different results when simulating metallic vs. nonmetallic surfaces. Many of the effects so generated are somewhat subtle and are apparent only during movie sequences. Some representative still frames from such movies are included.