Friday, April 29, 2016

Colors in Computer Graphics



___
___Color
___
    URL:  http://alfonse.bitbucket.org/oldtut/Illumination/Tutorial%2009.html
In the real world, our eyes see by detecting light that hits them. The structure of our eyes (the iris and lens) focuses that light onto a number of photoreceptors (light-sensitive cells), which resolve a pair of images. The light we see can have one of two sources. A light-emitting object like the sun or a lamp can emit light that is directly captured by our eyes. Or a surface can reflect light from another source that is captured by our eyes. Light-emitting objects are called light sources.
The interaction between a light and a surface is the most important part of a lighting model. It is also the most difficult to get right. The way light interacts with atoms on a surface alone involves complicated quantum mechanical principles that are difficult to understand. And even that does not get into the fact that surfaces are not perfectly smooth or perfectly opaque.
This is made more complicated by the fact that light itself is not one thing. There is no such thing as “white light.” Virtually all light is made up of a number of different wavelengths. Each wavelength (in the visible spectrum) represents a color. White light is made of many wavelengths (colors) of light. Colored light simply has fewer wavelengths in it than pure white light.
Surfaces interact with light of different wavelengths in different ways. As a simplification of this complex interaction, we will assume that a surface can do one of two things: absorb that wavelength of light or reflect it. 
A surface looks blue under white light because the surface absorbs all non-blue parts of the light and only reflects the blue parts. If one were to shine a red light on the surface, the surface would appear very dark, as the surface absorbs non-blue light, and the red light does not have much blue light in it.
http://alfonse.bitbucket.org/


____
_____________________________________________________
____

      URL:  http://www.learnopengl.com/#!Lighting/Colors
The colors we see in real life are not the colors the objects actually have, but the colors reflected from the object; the colors that are not absorbed (rejected) by the objects are the colors we perceive. For example, the light of the sun is perceived as white light that is the combined sum of many different colors (as you can see in the image). So if we shine that white light on a blue toy, it absorbs all of the white light's sub-colors except the blue one. Since the toy does not absorb the blue value, it is reflected, and this reflected light enters our eye, making it look like the toy has a blue color. The following image shows this for a coral colored toy where it reflects several colors with varying intensity:

www.learnopengl.com


You can see that the white sunlight is actually a collection of all the visible colors and the object absorbs a large portion of those colors. It only reflects those colors that represent the object's color and the combination of those is what we perceive (in this case a coral color). 
These rules of color reflection apply directly in graphics-land. When we define a light source in OpenGL we want to give this light source a color. In the previous paragraph we had a white color, so we'll give the light source a white color as well. If we then multiply the light source's color with an object's color value, the resulting color is the reflected color of the object (and thus its perceived color).
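A quick sketch of that multiply rule using GLM on the CPU side (the coral value (1.0, 0.5, 0.31) is the toy color from the tutorial; in a shader it would be the same componentwise multiply):

#include <glm/glm.hpp>
#include <cstdio>

int main()
{
    // White light times the coral toy color: the toy keeps its own color.
    glm::vec3 lightColor(1.0f, 1.0f, 1.0f);
    glm::vec3 toyColor(1.0f, 0.5f, 0.31f);
    glm::vec3 result = lightColor * toyColor;      // componentwise -> (1.0, 0.5, 0.31)

    // The same toy under a pure green light can only reflect its green part.
    glm::vec3 greenLight(0.0f, 1.0f, 0.0f);
    glm::vec3 underGreen = greenLight * toyColor;  // -> (0.0, 0.5, 0.0), a dark green

    std::printf("white light: %.2f %.2f %.2f\n", result.r, result.g, result.b);
    std::printf("green light: %.2f %.2f %.2f\n", underGreen.r, underGreen.g, underGreen.b);
    return 0;
}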



Windows and Coordinates




___
___Screen Window ( TO BE )
___

___
___Application Window (app window / window)
___

     :.../gl_FragCoord (built-in variable in the Fragment Shader)

gl_FragCoord is a built-in variable that is only available in a fragment shader. It is a vec4, so it has X, Y, Z, and W components. The X and Y values are in window coordinates, so the absolute value of these numbers will change based on the window's resolution. Recall that window coordinates put the origin at the bottom-left corner. So fragments along the bottom of the triangle would have a lower Y value than those at the top.
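A tiny fragment shader sketch that makes this visible (the output variable and the windowSize uniform are assumed names, not from the tutorial); the string would be handed to glShaderSource() like any other shader source:

const char* kFragCoordDemoFS = R"GLSL(
#version 330 core
out vec4 outColor;
uniform vec2 windowSize;   // assumed uniform: the window resolution in pixels
void main()
{
    // gl_FragCoord.xy is in window coordinates, origin at the bottom-left corner
    float t = gl_FragCoord.y / windowSize.y;   // 0 at the bottom edge, 1 at the top edge
    outColor = vec4(t, t, t, 1.0);             // bottom fragments dark, top fragments bright
}
)GLSL";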


___
___Texture Coordinate ( TO BE )
___

___
___OpenGL Coordinate ( TO BE )
___


Wednesday, April 27, 2016

Texture



___To Be Continued...


[State]

[ Conception ]
...
    URL:http://alfonse.bitbucket.org/oldtut/Texturing/Tutorial%2014.html
A texture is an object that contains one or more arrays of data, with all of the arrays having some dimensionality. The storage for a texture is owned by OpenGL and the GPU, much like they own the storage for buffer objects. Textures can be accessed in a shader, which fetches data from the texture at a specific location within the texture's arrays. 
The arrays within a texture are called images; this is a legacy term, but it is what they are called. Textures have a texture type; this defines characteristics of the texture as a whole, like the number of dimensions of the images and a few other special things.

...
___
___glTexImage*D()
___
Tutorial's URL: http://alfonse.bitbucket.org/oldtut/Texturing/Tutorial%2014.html (source)

Parameters Explanation: What and Why

  • Texture Objects
  • Pixel Transfer and Formats
  • Textures in Shaders
  • Texture Sampling
  • Texture Binding
  • Sampler Objects
  • Texture Resolution

    ***  Tutorial
All we need to do is send the texture data to the server/GPU, and then 
tell OpenGL in which format we stored it.  
The function for sending data to the GPU is glTexImage2D(). 
    Its parameters, in order, are:

  • target - in our case it is GL_TEXTURE_2D 
  • texture LOD - Level Of Detail - we set this to zero - this parameter is used for defining mipmaps. The base level (full resolution) is 0. All subsequent levels (1/4 of the texture size, 1/16 of the texture size...) are higher, i.e. 1, 2 and so on. But we don't have to do it manually (even though we can, and we don't even have to define ALL mipmap levels if we don't want to, OpenGL doesn't require that), there is luckily a function for mipmap generation (soon we'll get into that). 
  • internal format - the specification says it's the number of components per pixel, but it doesn't accept numbers; it accepts constants like GL_RGB and so on (see spec). And even though we use BGR as the format, we put GL_RGB here anyway, because this parameter doesn't accept GL_BGR; it really only informs about the number of components per texel. I don't find this very intuitive, but it's probably because of some backwards compatibility. 
  • width - Texture width 
  • height - Texture height 
  • border - width of the border - in older OpenGL specifications you could create a border around the texture (it's really useless); in the 3.3 specification (and also in later specifications, like 4.2 at the time of writing this tutorial), this parameter MUST be zero 
  • format - Format in which we specify data, GL_BGR in this case 
  • type - data type of single value, we use unsigned bytes, and thus GL_UNSIGNED_BYTE as data type 
  • data - finally a pointer to the data
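A sketch of the call with those parameters filled in (width, height and pData are assumptions here; in the tutorial they come from the image loader):

#include <GL/glew.h>   // any header that exposes the OpenGL 3.3 entry points will do

void uploadTexture2D(GLsizei width, GLsizei height, const unsigned char* pData)
{
    // the texture object must already be bound to GL_TEXTURE_2D at this point
    glTexImage2D(
        GL_TEXTURE_2D,      // target
        0,                  // texture LOD: the base (full-resolution) level
        GL_RGB,             // internal format: components per texel (GL_BGR is not accepted here)
        width,              // texture width
        height,             // texture height
        0,                  // border: must be 0 in OpenGL 3.3
        GL_BGR,             // format of the data we pass in (FreeImage gives us BGR on Windows)
        GL_UNSIGNED_BYTE,   // data type of a single value
        pData);             // pointer to the data
    glGenerateMipmap(GL_TEXTURE_2D);   // the mipmap-generation function mentioned above
}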
___
___ Texture ID
___

Textures in OpenGL are used similarly to other OpenGL objects - first we must tell OpenGL to generate textures, and then it provides us a texture name (ID), with which we can address the texture.
A very important thing about textures is that, traditionally, their dimensions had to be powers of 2 (see the NPOT note below; modern OpenGL also accepts non-power-of-two sizes).
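A minimal sketch of the generate/bind pattern (the function name is mine, not from the tutorial): OpenGL hands back a texture name (ID), and every later call that targets GL_TEXTURE_2D affects whatever texture is currently bound there:

#include <GL/glew.h>

GLuint createTextureObject()
{
    GLuint textureID = 0;
    glGenTextures(1, &textureID);              // ask OpenGL to generate one texture name
    glBindTexture(GL_TEXTURE_2D, textureID);   // address the texture through its ID from now on
    // ... glTexImage2D / filter setup go here, as sketched above ...
    return textureID;
}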

/**
 **
 ***********
...:comment
          NPOT Texture
An NPOT Texture is a texture whose dimensions are not powers of 2 (Non-Power-Of-Two). In earlier hardware, there was a requirement that the dimensions of a texture were a power of two in size. NPOT textures are textures that are not restricted to powers of two.

...:
[URL]
     https://www.talisman.org/opengl-1.1/Reference/glTexImage2D.html
     http://pyopengl.sourceforge.net/documentation/manual-3.0/glTexImage2D.html
   
     [ internalFormat ]
  •           Specifies the number of color components in the texture.
    [ format ]  
  •           Specifies the format of the pixel data.
 ************
 **
 **/
FreeImage doesn't store our images in RGB format; on Windows it's actually BGR, and this should be platform-dependent as far as I know. But this is no problem: when sending data to the GPU, we'll just tell it that the data is in BGR format. And now we really are ready to upload data to the GPU... or are we? Yes, but a little word about texture filters should be said first.

After a brief explanation of texture filters, we can proceed with its creation by calling glTexImage2D with the parameters listed above.
 [ so what's the difference between internal format and format ;( answer: internal format tells OpenGL how to store the texels on its side, while format (together with type) describes the layout of the data we are passing in ]
___
___
___

Tuesday, April 26, 2016

Image Format


___TO BE CONTINUED...
___
___
URL: https://www.opengl.org/wiki/Image_Format

         Image Format

An Image Format describes the way that the images in Textures and renderbuffers store their data. They define the meaning of the image's data.
There are three basic kinds of image formats: color, depth, and depth/stencil. Unless otherwise specified, all formats can be used for textures and renderbuffers equally. Also, unless otherwise specified, all formats can be multisampled equally.

Color formats

Colors in OpenGL are stored in RGBA format. That is, each color has a Red, Green, Blue, and Alpha component. The Alpha value does not have an intrinsic meaning; it only does what the shader that uses it wants to. Usually, Alpha is used as a translucency value, but do not make the mistake of confining your thinking to just that. Alpha means whatever you want it to.
Note: Technically, any of the 4 color values can take on whatever meaning you give them in a shader. Shaders are arbitrary programs; they can consider a color value to represent a texture coordinate, a Fresnel index, a normal, or anything else they so desire. They're just numbers; it's how you use them that defines their meaning.
Color formats can be stored in one of 3 ways: normalized integers, floating-point, or integral. Both normalized integer and floating-point formats will resolve, in the shader, to a vector of floating-point values, whereas integral formats will resolve to a vector of integers.
Normalized integer formats themselves are broken down into 2 kinds: unsigned normalized and signed normalized. Unsigned normalized integers store floating-point values on the range [0, 1], while signed normalized integers store values on the range [-1, 1].
Integral formats are also divided into signed and unsigned integers. Signed integers are 2's complement integer values.
Image formats do not have to store each component. When the shader samples such a texture, it will still resolve to a 4-value RGBA vector. The components not stored by the image format are filled in automatically. Zeros are used if R, G, or B is missing, while a missing Alpha always resolves to 1.

___
___ How To Get the Image Format with Third Library FreeImage.h
___

FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
FIBITMAP* dib(0);
// check the file signature and deduce its format..
// (forward slashes so the path isn't mangled by escape sequences like \t)
const char * is_Path = "data/textures/golddiag.jpg";
fif = FreeImage_GetFileType(is_Path, 0);
if (FIF_UNKNOWN == fif) {
    // try to guess the file format from the file extension..
    fif = FreeImage_GetFIFFromFilename(is_Path);
}
if (FIF_UNKNOWN == fif) {
    return;
}
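
A sketch of how the loading would continue once the format is known (still FreeImage; fif, dib and is_Path are the variables from the snippet above):

dib = FreeImage_Load(fif, is_Path);                  // decode the file into a bitmap
if (!dib) {
    return;
}
BYTE*        bits   = FreeImage_GetBits(dib);        // raw pixel data (BGR on Windows)
unsigned int width  = FreeImage_GetWidth(dib);
unsigned int height = FreeImage_GetHeight(dib);
// ... hand bits/width/height to glTexImage2D as shown in the Texture post, then:
FreeImage_Unload(dib);                               // FreeImage owns the bitmap memory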

/** I/O image format identifiers. --From FreeImage.h
*/
FI_ENUM(FREE_IMAGE_FORMAT) {
FIF_UNKNOWN = -1,
FIF_BMP = 0,
FIF_ICO = 1,
FIF_JPEG = 2,
FIF_JNG = 3,
FIF_KOALA = 4,
FIF_LBM = 5,
FIF_IFF = FIF_LBM,
FIF_MNG = 6,
FIF_PBM = 7,
FIF_PBMRAW = 8,
FIF_PCD = 9,
FIF_PCX = 10,
FIF_PGM = 11,
FIF_PGMRAW = 12,
FIF_PNG = 13,
FIF_PPM = 14,
FIF_PPMRAW = 15,
FIF_RAS = 16,
FIF_TARGA = 17,
FIF_TIFF = 18,
FIF_WBMP = 19,
FIF_PSD = 20,
FIF_CUT = 21,
FIF_XBM = 22,
FIF_XPM = 23,
FIF_DDS = 24,
FIF_GIF = 25,
FIF_HDR = 26,
FIF_FAXG3 = 27,
FIF_SGI = 28,
FIF_EXR = 29,
FIF_J2K = 30,
FIF_JP2 = 31,
FIF_PFM = 32,
FIF_PICT = 33,
FIF_RAW = 34,
FIF_WEBP = 35,
FIF_JXR = 36
};


Pixel Transfer



___TO BE CONTINUED...
___
___PIXEL TRANSFER
___
URL: https://www.opengl.org/wiki/Pixel_Transfer

Pixel Transfer operation is the act of taking pixel data from an unformatted memory buffer and copying it into OpenGL-owned storage governed by an image format. Or vice-versa: copying pixel data from image format-based storage to unformatted memory. There are a number of functions that affect how the pixel transfer operation is handled; many of these relate to how the information in the memory buffer is to be interpreted.

Terminology

Pixel transfers can either go from user memory to OpenGL memory, or from OpenGL memory to user memory (the user memory can be client memory or buffer objects). Pixel data in user memory is said to be packed. Therefore, transfers to OpenGL memory are called unpack operations, and transfers from OpenGL memory are called pack operations.
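
A small sketch of that vocabulary in code (the 1x1 pixel values are just example assumptions): an unpack call pushes user memory into OpenGL storage, a pack call pulls OpenGL storage back into user memory.

#include <GL/glew.h>

void pixelTransferExamples()
{
    unsigned char texel[4] = { 255, 0, 0, 255 };
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                        // how rows are read from user memory
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, texel);            // unpack: user memory -> OpenGL

    unsigned char pixel[4] = { 0, 0, 0, 0 };
    glPixelStorei(GL_PACK_ALIGNMENT, 1);                          // how rows are written to user memory
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);   // pack: OpenGL -> user memory
}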


Pixel transfer initiation

There are a number of OpenGL functions that initiate a pixel transfer operation. These functions are:
Transfers from OpenGL to the user (e.g. glReadPixels, glGetTexImage):
Transfers from the user to OpenGL (e.g. glTexImage2D, glTexSubImage2D):
There are also special pixel transfer commands for compressed image formats. These are not technically pixel transfer operations, as they do nothing more than copy memory to/from compressed textures. But they are listed here because they can use pixel buffers for reading and writing.
The discussion below will ignore the compressed texture functions, since none of what is discussed pertains to them.

Camera Setting In OpenGL



___To Be Continued.......


___
___Camera Settings 
___                           Position, LookTarget, UpVector...
___                           View Matrix
___
URL: ATTENTION-HERE
URL: http://www.learnopengl.com/#!Getting-started/Camera

When we're talking about camera/view space we're talking about all the vertex coordinates as seen from the camera's perspective as the origin of the scene: the view matrix transforms all the world coordinates into view coordinates that are relative to the camera's position and direction. To define a camera we need its position in world space, the direction it's looking at, a vector pointing to the right and a vector pointing upwards from the camera. A careful reader might notice that we're actually going to create a coordinate system with 3 perpendicular unit axes with the camera's position as the origin.
...: glm::vec3 CameraPosition;//in World-Space
...: glm::vec3 CameraLookAt;//in World-Space
...: glm::vec3 CameraRight;//in World-Space
...: glm::vec3 CameraUP;//in World-Space

          ..........................................
A great thing about matrices is that if you define a coordinate space using 3 perpendicular (or at least linearly independent) axes you can create a matrix with those 3 axes plus a translation vector, and you can transform any vector to that coordinate space by multiplying it with this matrix.
 ... 
The LookAt matrix then does exactly what it says: it creates a view matrix that looks at a given target.
Luckily for us, GLM already does all this work for us. We only have to specify a camera position, a target position and a vector that represents the up vector in world space (the up vector we used for calculating the right vector). GLM then creates the LookAt matrix that we can use as our view matrix:
 ....: ViewMatrix BY glm::lookAt()
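
A minimal sketch of that call with the camera vectors named above (the actual values are just example assumptions):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 CameraPosition(0.0f, 0.0f, 3.0f);   // camera placed 3 units back along +Z
glm::vec3 CameraLookAt(0.0f, 0.0f, 0.0f);     // target: the world origin
glm::vec3 CameraUP(0.0f, 1.0f, 0.0f);         // world-space up vector
glm::mat4 ViewMatrix = glm::lookAt(CameraPosition, CameraLookAt, CameraUP);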


        ...............................................................

___
___Keyboard Input
___

After fiddling around with this basic camera system you probably noticed that you can't move in two directions at the same time (diagonal movement), and when you hold down one of the keys, it first bumps a little and only after a short break starts moving continuously. This happens because most event-input systems can handle only one keypress at a time and their functions are only called whenever we activate a key. While this works for most GUI systems, it is not very practical for smooth camera movement. We can solve the issue with a little trick.
The trick is to only keep track of what keys are pressed/released in the callback function. In the game loop we then read these values to check what keys are active and react accordingly. So we're basically storing state information about what keys are pressed/released and react upon that state in the game loop.
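
A sketch of that trick with GLFW (the callback only records state; the game loop reads it; keys[] matches the array used later in this post):

#include <GLFW/glfw3.h>

bool keys[1024];   // pressed/released state, indexed by GLFW key code

void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    if (key >= 0 && key < 1024)
    {
        if (action == GLFW_PRESS)
            keys[key] = true;
        else if (action == GLFW_RELEASE)
            keys[key] = false;
    }
}
// registered once after window creation:  glfwSetKeyCallback(window, key_callback);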

___
___ Camera Speed
___
URL: http://www.glprogramming.com/red/chapter03.html (Official Document)
URL: http://learnopengl.com/#!Getting-started/Camera (Tutorial with source VSC++)

Currently we use a constant value for movement speed when walking around. In theory this seems fine, but in practice people have different processing power, and the result is that some people are able to draw many more frames per second than others. Whenever a user draws more frames than another user, he also calls do_movement more often. The result is that some people move really fast and some really slow depending on their setup. When shipping your application you want to make sure it runs the same on all kinds of hardware.

Graphics applications and games usually keep track of a deltaTime variable that stores the time it took to render the last frame. We then multiply all velocities with this deltaTime value. The result is that when we have a large deltaTime in a frame, meaning that the last frame took longer than average, the velocity for that frame will also be a bit higher to balance it all out. When using this approach it does not matter if you have a very fast or slow PC; the velocity of the camera will be balanced out accordingly, so each user will have the same experience.
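
A sketch of that bookkeeping with the GLFW timer (updateDeltaTime is an assumed helper name; it would be called once per iteration of the game loop):

#include <GLFW/glfw3.h>

GLfloat deltaTime = 0.0f;   // time it took to render the last frame
GLfloat lastFrame = 0.0f;   // timestamp of the previous frame

void updateDeltaTime()
{
    GLfloat currentFrame = (GLfloat)glfwGetTime();   // seconds since GLFW was initialized
    deltaTime = currentFrame - lastFrame;
    lastFrame = currentFrame;
}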

___
___Control Based On Keyboard Keys
___

    [0] global variable for keeping key status obtained with KeyCallbackFunction
        // Bool Array For key status store;
        // Key status obtained with key_callback function;
        bool keys[1024];

    [1] do something with the position offset according to different key status.
        if (keys[GLFW_KEY_W])// CamPos Move Forward, while object Backward
        if (keys[GLFW_KEY_S])// CamPos Move Backward, while object Forward
        if (keys[GLFW_KEY_A])// CamPos Move Leftward, while object Right
        if (keys[GLFW_KEY_D])// CamPos Move Rightward, while object Left
        ....
    [2] calculate the position offset.  (a sketch follows under "Source Demo" below)
        -direction of the camera position offset
            glm::cross(_camUP, _camFront)
            the cross product above yields a Leftward Vector;
            normalize the resulting vector.
        -scalar multiplier
            GLfloat cameraSpeed = (kConstant * _DeltaOfPreviousFrameLast);
        -offset
            CameraPosition +=/-= cameraSpeed * direction

___
___Source Demo..
___
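
A hedged sketch (not the tutorial's exact source) of how the pieces above fit together: keys[] from the key callback, deltaTime from the game loop, and the camera vectors from these notes. The 5.0f speed constant is an assumption.

#include <GLFW/glfw3.h>
#include <glm/glm.hpp>

extern bool  keys[1024];      // filled in by key_callback
extern float deltaTime;       // updated once per frame
glm::vec3 CameraPosition(0.0f, 0.0f,  3.0f);
glm::vec3 _camFront(0.0f, 0.0f, -1.0f);
glm::vec3 _camUP(0.0f, 1.0f,  0.0f);

void do_movement()
{
    float cameraSpeed = 5.0f * deltaTime;                               // kConstant * frame delta
    if (keys[GLFW_KEY_W]) CameraPosition += cameraSpeed * _camFront;    // forward
    if (keys[GLFW_KEY_S]) CameraPosition -= cameraSpeed * _camFront;    // backward
    if (keys[GLFW_KEY_A])                                               // strafe left
        CameraPosition -= glm::normalize(glm::cross(_camFront, _camUP)) * cameraSpeed;
    if (keys[GLFW_KEY_D])                                               // strafe right
        CameraPosition += glm::normalize(glm::cross(_camFront, _camUP)) * cameraSpeed;
}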


___
___Euler Angles..
___
URL: http://learnopengl.com/#!Getting-started/Camera (Tutorial with source VSC++)
www.learnopengl.com

The PITCH is the angle that depicts how much we're looking up or down.
The YAW represents the magnitude we're looking to the left or to the right.
The ROLL represents how much we roll.

Each of the Euler angles is represented by a single value, and with the combination of all 3 of them we can calculate any rotation vector in 3D.
For our camera system we only care about the yaw and pitch values so we won't discuss the roll value here. Given a pitch and a yaw value we can convert them into a 3D vector that represents a new direction vector.

:../Mouse Input

The yaw and pitch values are obtained from mouse (or controller/joystick) movement where horizontal mouse-movement affects the yaw and vertical mouse-movement affects the pitch. The idea is to store the last frame's mouse positions and in the current frame we calculate how much the mouse values changed in comparison with last frame's value. The higher the horizontal/vertical difference, the more we update the pitch or yaw value and thus the more the camera should move.

When handling mouse input for an FPS style camera there are several steps we have to take before eventually retrieving the direction vector:
  • Calculate the mouse's offset since the last frame.
  • Add the offset values to the camera's yaw and pitch values.
  • Add some constraints to the maximum/minimum yaw/pitch values.
  • Calculate the direction vector (the fourth and last step is to calculate the actual direction vector from the resulting yaw and pitch values), as sketched below.
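
A sketch of that last step, turning yaw and pitch (in degrees, as in the tutorial) into a normalized direction vector:

#include <cmath>
#include <glm/glm.hpp>

glm::vec3 directionFromYawPitch(float yawDegrees, float pitchDegrees)
{
    float yaw   = glm::radians(yawDegrees);
    float pitch = glm::radians(pitchDegrees);
    glm::vec3 front;
    front.x = std::cos(pitch) * std::cos(yaw);
    front.y = std::sin(pitch);
    front.z = std::cos(pitch) * std::sin(yaw);
    return glm::normalize(front);   // the camera's new front/direction vector
}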

:../

URL: https://en.wikipedia.org/wiki/Flight_dynamics
URL: http://tuttlem.github.io/2013/12/30/a-camera-implementation-in-c.html




The pitch describes the orientation around the X-axis,
    such as moving the head Up and Down... (rotate around X)

The yaw describes the orientation around the Y-axis,
    such as moving the head Left <-> Right... (rotate around Y)


The roll describes the orientation around the Z-axis,
    such as tilting the head toward either shoulder... (rotate around Z)

    With all of this information on board, the requirements of our camera should become a little clearer. We need to keep track of the following about the camera:
          Position
          Up orientation (yaw axis)
          Right direction (pitch axis)
          Forward (or view) direction (roll axis)
    We’ll also keep track of how far we’ve gone around the yaw, pitch and roll axes.
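
Kept together, that state could look like this (a sketch; the names are mine, not from the linked article):

#include <glm/glm.hpp>

struct Camera
{
    glm::vec3 position;     // where the camera sits in world space
    glm::vec3 up;           // up orientation         (yaw axis)
    glm::vec3 right;        // right direction        (pitch axis)
    glm::vec3 forward;      // forward/view direction (roll axis)
    float     yaw, pitch, roll;   // how far we have rotated about each axis
};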



*****************************************************************************
*
*                                 IMPORTANT   CLUE
*
*****************************************************************************
******
******

[ Clip Volume – The Default Camera ]

...
To understand how to make a "camera" in 3D, we must first understand the concept of the clip volume.
The clip volume is a cube. Whatever is inside the clip volume appears on the screen, and anything outside the clip volume is not visible. It has the exact same size as the cube we made above. It ranges from -1 to +1 on the X, Y and Z axes. -X is left, +X is right, -Y is bottom, +Y is top, +Z is away from the camera, and -Z is toward the camera.
Because our cube is the exact same size as the clip volume, all we can see is the front side of the cube.
This also explains why our cube looks wider than it is tall. The window displays whatever is in the clip volume. The left and right edges of the window are -1 and +1 on the X axis, the bottom and top edges of the window are -1 and +1 on the Y axis. The clip volume gets stretched to fit the size of the viewport in the window, so our cube doesn't look square anymore.
...

[ Moving The World While The Camera Stays Still ]

...
We want to make a camera that can move around, look in different directions, and maybe zoom in and out. 
However, the clip volume can not be changed. It is always the same size, and in the same position. 
So, instead of moving the camera, we must move the entire 3D scene so that it fits inside the clip volume cube correctly. 
For example, if we want to rotate the camera to the right, we actually rotate the whole world to the left. If we want to move the camera closer to the player, we actually move the player closer to the camera. 
This is how "cameras" work in 3D: they transform the entire world so that it fits into the clip volume and looks correct.
When you walk somewhere, it feels like the world is standing still, and you are moving. But you can also imagine that you are not moving at all, and the whole world is rotating underneath your feet, like you are on a treadmill. This is the difference between "moving the camera" and "moving the world." Either way, it looks exactly the same to the viewer.
So how do we transform the 3D scene to fit into the clip volume? This is where we need to use matrices.
...
[]
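
A tiny sketch of "moving the world instead of the camera" (the position value is just an example assumption): the view matrix applies the opposite of the camera's own movement to every vertex in the scene.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 cameraPos(0.0f, 0.0f, 3.0f);                          // where we imagine the camera to be
glm::mat4 view = glm::translate(glm::mat4(1.0f), -cameraPos);   // move the whole world the other way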


              ______________________________________________

Monday, April 25, 2016

Texture Displaying



______________________________________________________________


___
___API  glutBitmapCharacter
___
URL: https://www.opengl.org/documentation/specs/glut/spec3/node76.html
______________________________________________________________


___
___ VARIOUS WAYS WITH SOURCE DEMO
___
URL: https://www.opengl.org/archives/resources/features/fontsurvey/
______________________________________________________________



___
___ ONLY LETTERS
___
URL: https://mycodelog.com/tag/glutbitmapcharacter/
This can be done by using C’s va_list type, which is C’s approach to defining functions with a variable number of arguments.
  • In the function prototype, place an ellipsis (…) as the last argument
  • Define a variable argument list: va_list args;
  • Call va_start on the args list and the first real argument in the function prototype preceding the ellipsis: va_start(args, format);
  • Use _vscprintf to get the number of characters that would be generated if the string pointed to by the list of arguments was printed using the specified format
  • Allocate memory for a string with the specified number of characters
  • Call vsprintf_s to build the string we want from the list of arguments
  • Call va_end to end the use of the variable argument list
  • Draw our beautified string
  • Free the allocated memory
Function implementation with full comments shown below:
#include <stdio.h>    //  Standard Input\Output C Library
#include <stdarg.h>   //  To use functions with variable arguments
#include <stdlib.h>   //  for malloc
#include <GL/glut.h>  //  Include GLUT, OpenGL, and GLU libraries

//  Bitmap font used by glutBitmapCharacter (the original demo defines this global
//  elsewhere; any GLUT bitmap font works)
void *font_style = GLUT_BITMAP_TIMES_ROMAN_10;

//-------------------------------------------------------------------------
//  Draws a string at the specified coordinates.
//-------------------------------------------------------------------------
void printw (float x, float y, float z, char* format, ...)
{
    va_list args;   //  Variable argument list
    int len;        //  String length
    int i;          //  Iterator
    char * text;    //  Text

    //  Initialize a variable argument list
    va_start(args, format);

    //  Return the number of characters in the string referenced by the list of arguments.
    //  _vscprintf doesn't count the terminating '\0' (that's why +1)
    len = _vscprintf(format, args) + 1;

    //  Allocate memory for a string of the specified size
    text = (char *)malloc(len * sizeof(char));

    //  Write formatted output using a pointer to the list of arguments
    vsprintf_s(text, len, format, args);

    //  End using variable argument list
    va_end(args);

    //  Specify the raster position for pixel operations.
    glRasterPos3f (x, y, z);

    //  Draw the characters one by one
    for (i = 0; text[i] != '\0'; i++) {
        glutBitmapCharacter(font_style, text[i]);
    }

    //  Free the allocated memory for the string
    free(text);
}