The Visual Angle
The visual angle is the angle subtended by an object at the eye of an observer. Visual angles are generally measured in degrees, minutes, and seconds (a minute is 1/60 of a degree; a second is 1/60 of a minute). As a rule of thumb, a thumbnail held at arm's length subtends about 1 degree of visual angle. Another useful fact is that a 1-cm object viewed from 57 cm subtends approximately 1 degree.
computer graphics & related stuff
cg, animation, virtual reality, non-photorealistic rendering, shading, & rendering
Monday, December 06, 2004
Two-stage model of human visual information processing
Stage 1 Processing includes:
- Rapid parallel processing
- Extraction of features, orientation, color, texture and movement patterns
- Transitory nature of information, which is briefly held in an iconic store
- Bottom-up, data driven model of processing
Stage 2 Processing includes:
- Slow serial processing
- Involvement of both working memory and long-term memory
- More emphasis on arbitrary aspects of symbols
- Top-down processing
- Different pathways for object recognition and visually guided motion
Psychophysics and Cognitive Psychology
Psychophysics is the set of techniques that are based on applying the methods of physics to measurements of human sensation.
In cognitive psychology, the brain is treated as a set of interlinking processing modules.
Sensory and Arbitrary Representations
- A sensory representation is one whose meaning is perceived without additional training.
- An arbitrary representation is one whose meaning must be learned; it has no perceptual basis.
Monday, March 22, 2004
glCullFace() - Indicates which polygons should be discarded (culled) before they're converted to screen coords.
glHint() - This command can be used to trade off between speed and image quality (not all implementations will take the hint).
- GL_PERSPECTIVE_CORRECTION_HINT - This refers to how the color values and texture coordinates are interpolated across a primitive (linearly or perspective-correct manner).
glColorMaterial() - Technique for minimizing performance costs associated with changing material properties.
glCopyTexSubImage2D - specify a two-dimensional texture subimage with pixels from the color buffer
glColorMask() - Enables and disables writing of frame buffer color components.
Specifies whether red, green, blue, and alpha can be written into the frame buffer. The default values are all GL_TRUE, indicating that all color components can be written.
Tuesday, February 24, 2004
Friday, June 27, 2003
Friday:
meeting with the team. was intense but worth it. tim mcgraw was good, this place is throwing surprises every second.
to do :
apply to dartmouth
meet mueller and give him the books
read the 'fast algos' gaussian paper
Thursday, June 26, 2003
Thursday:
talked to christophe.. got some input and have to get that leFohn paper working.. that will do it.
got 2 textures in with 2 tex coords .. so :)
let's go for matmul now.. c'mon.
Wednesday, June 25, 2003
Wednesday morn:
plan to integrate the whole idea. find more about nv_float_buffer.. go beyond
man, just got the sobel, smooth and gaussian smooth to work. it works like magic. God is truly great!! just PRAY man.
Tuesday, June 24, 2003
just got the TEXTURE_RECTANGLE_NV program working with multitexturing, so we can have a power-of-two (POT) texture and a non-power-of-two (NPOT) texture and both work jussst fine :)
NV_float_buffer limitations
There are several significant limitations on the use of floating-point
color buffers. First, floating-point color buffers do not support frame
buffer blending. Second, floating-point texture maps do not support
mipmapping or any texture filtering other than NEAREST. Third,
floating-point texture maps must be 2D, and must use the
NV_texture_rectangle extension.
NV_float_buffer
This extension has many uses. Some possible uses include:
(1) Multi-pass algorithms with arbitrary intermediate results that
don't have to be artificially forced into the range [0,1]. In
addition, intermediate results can be written without having to
worry about out-of-range values.
(2) Deferred shading algorithms where an expensive fragment program is
executed only after depth testing is fully complete. Instead, a
simple program is executed, which stores the parameters necessary
to produce a final result. After the entire scene is rendered, a
second pass is executed over the entire frame buffer to execute
the complex fragment program using the results written to the
floating-point color buffer in the first pass. This will save the
cost of applying complex fragment programs to fragments that will
not appear in the final image.
(3) Use floating-point texture maps to evaluate functions with
arbitrary ranges. Arbitrary functions with a finite domain can be
approximated using a texture map holding sample results and
piecewise linear approximation.
The NV_fragment_program extension provides a general computational model that supports floating-point numbers constrained only by the precision of the underlying data types.
To complement the extended range and precision available through fragment programs, the NV_float_buffer extension provides floating-point RGBA color buffers that can be used in place of conventional fixed-point RGBA color buffers.
A floating-point RGBA color buffer consists of one to four floating-point components stored in the 16- or 32-bit floating-point formats (fp16 or fp32) defined in the NV_half_float and NV_fragment_program extensions.
A floating-point color buffer can also be used as a texture map, either by reading back the contents and then using conventional TexImage calls, or by using the buffer directly via the ARB_render_texture extension.
Tuesday:
- reading 'NV30 OpenGL Extensions' - Mark Kilgard
- 'Using P-Buffers for Off-Screen Rendering' - Chris Wynn
- pbuffer program to rotate a cube