Friday, August 26, 2016

Define: The "Bind Pose" Explanation ********

The "Bind Pose" Explanation


The "bind pose" is often the most confusing aspect of learning to program the skinnig API yet the absolute most important concept to understand. The "bind pose" is the pose of the mesh object, the skeleton and their relative offsets at the moment the skeleton is bound to the mesh object and before any deformations begin to occur. This pose will often look like that shown in the figure above with the arms stretched out and level with the shoulders, and with the skeleton aligned with the limbs of the mesh object:

At this very moment, when the skeleton bones are bound to the mesh (via the bone/vertex assignments with corresponding weight values), a "snapshot" matrix called the "(Worldspace) Bind Pose Matrix" is taken for every bone/joint and for the mesh itself (not to be confused with the local transformation matrix). These matrices are central to skinning. The Bind Pose Matrices are stored in the instance definitions defining the mesh object and the skeleton bones/joints (bones/joints in the NuGraf toolkit are just NULL nodes, or empty instances as they are often called). The Bind Pose Matrices define the original world-space location of the mesh and bones/joints at the time of binding.

How are Bind Pose Matrices used during skinning deformation? This is the key point to comprehend: the matrices allow a raw vertex of the mesh (in local coordinates) to be transformed into world-space and then to each bone's local coordinate space, after which each bone's animation can be applied to the vertex in question (under the influence of the weighting value). 

The mesh's bind pose takes the vertex from local space to world-space, and then each bone's inverse bind pose takes the mesh vertex from world-space to the local space of that bone. Once in the bone's local space, the bone's current animated transformation matrix is used to transform (deform) the vertex's location. After all the bone influences have been taken into account, the vertex ends up in world-space in its final deformed location. In other words, these bind pose matrices relate the location of a vertex in its local mesh space to the same location relative to each bone's local coordinate system. Once this relation is known, via the bind pose matrices, it is easy to deform the mesh vertices by animating the bones.

All you need to keep in mind is that the mesh's Bind Pose Matrix takes its vertices into world-space, to their location at the time of binding, and each bone's Bind Pose Matrix takes that bone from local space to world-space at the time of binding.
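To make the transform chain concrete, here is a minimal C++ sketch of the pipeline just described. The Vec3/Mat4 types and all function names here are illustrative stand-ins, not from any particular toolkit:

#include <cstddef>
#include <vector>

// Minimal stand-in types; a real engine would use its own math library.
struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };          // row-major, vectors treated as columns

// Transform a point by a 4x4 matrix (w assumed to be 1).
static Vec3 Transform(const Mat4& M, const Vec3& v)
{
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3],
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3],
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3] };
}

// Deform one vertex using the bind pose matrices described above.
Vec3 SkinVertex(const Vec3& vLocal,                      // raw vertex, mesh local space
                const Mat4& meshBindPose,                // mesh local -> world at bind time
                const std::vector<Mat4>& invBonePose,    // world -> bone local at bind time
                const std::vector<Mat4>& boneCurrent,    // bone local -> world, animated
                const std::vector<int>& bones,           // bones influencing this vertex
                const std::vector<float>& weights)       // weights, summing to 1.0
{
    // The mesh's bind pose takes the vertex into world space as it was at binding.
    Vec3 vBindWorld = Transform(meshBindPose, vLocal);

    Vec3 out = { 0.0f, 0.0f, 0.0f };
    for (std::size_t i = 0; i < bones.size(); ++i)
    {
        // Inverse bind pose: world -> this bone's local space,
        // then the bone's current animated transform back to world.
        Vec3 vBoneLocal = Transform(invBonePose[bones[i]], vBindWorld);
        Vec3 vWorld     = Transform(boneCurrent[bones[i]], vBoneLocal);
        out.x += vWorld.x * weights[i];
        out.y += vWorld.y * weights[i];
        out.z += vWorld.z * weights[i];
    }
    return out; // final deformed world-space position
}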





Define: Skinning



....

Skinning: Mesh Deformation via Smooth/Rigid Binding of Bones and Skeletons


"Skinning" is the process of binding a skeleton to a single mesh object, and skinning deformation is the process of deforming the mesh as the skeleton is animated or moved. As the skeleton of bones is moved/animated, a matrix associated with the vertices of the mesh causes them to deform in a weighted manner.
....










....

Thursday, August 25, 2016

fgets() .. sscanf()


fgets()

char * fgets(char * str, int num, FILE * stream);
Get string from stream
Reads characters from stream and stores them as a C-string into str until (num-1) characters have been read or a newline or the end-of-file is reached, whichever happens first...

A newline character makes fgets() stop reading, but it is considered a valid character by the function and included in the string copied to str...

A terminating null character is automatically appended after the characters copied to str...

NOTICE that fgets() is quite different from gets(): not only does fgets() accept a stream argument, it also lets you specify the maximum size of str, and it includes any ending newline character in the string..

Parameters
str      Pointer to an array of chars where the string read is copied....

num      Maximum number of characters to be copied into str (including the terminating null-character)..

stream   Pointer to a FILE object that identifies an input stream.. stdin can be used as an argument to read from the standard input..

Return Values:
On success, the function returns str.
If the end-of-file is encountered while attempting to read a character, the eof indicator is set (feof). If this happens before any characters could be read, the pointer returned is a null pointer (and the contents of str remain unchanged).
If a read error occurs, the error indicator (ferror) is set and a null pointer is also returned (but the contents pointed to by str may have changed).


//...
const char * sFile = "shader.vert";
FILE * streamFile = fopen(sFile, "rt");
if (NULL == streamFile)
    return;

std::vector<std::string> sLines;
char sLine[256] = {0};
while (fgets(sLine, 256, streamFile))   // reads one line per call, newline included
    sLines.push_back(sLine);

fclose(streamFile);
streamFile = NULL;
//...



sscanf()

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
   int day, year;
   char weekday[20], month[20], dtm[100];

   strcpy( dtm, "Saturday March 25 1989" );
   sscanf( dtm, "%19s %19s %d %d", weekday, month, &day, &year ); // width limits guard the 20-char buffers

   printf("%s %d, %d = %s\n", month, day, year, weekday );
    
   return(0);
}

Define: Animation [class] for the basic bone structure of a skeleton

Define:


VOXELS_3DAnimation
  mName    /std::string //animation name
  mLength   /float    // animation duration (time)
  mTracks[]   /vector<VOXELS_3DTrack> //keyframe storage for each bone/joint
...
VOXELS_3DTrack
  mBone    /std::string //bone-joint name
  mKeyFrames[]  /vector<VOXELS_3DKeyFrame> // keyframe storage for the "mBone"

...
VOXELS_3DKeyFrame
  mTime     /float  //the time at which this VOXELS_3DKeyFrame occurs
  mTrans    /vec3f  //translation vector3f
  mRotate  /float  //angle of rotation of the bone at the current keyframe
  mAxis     /vec3f  //axis about which to rotate
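The shorthand above maps directly onto plain C++; a minimal sketch (the vec3f stand-in and the use of std::vector are assumptions, matching the comments above):

#include <string>
#include <vector>

struct vec3f { float x, y, z; };   // minimal stand-in for the vec3f above

struct VOXELS_3DKeyFrame {
    float mTime;    // time at which this keyframe occurs
    vec3f mTrans;   // translation
    float mRotate;  // rotation angle at this keyframe
    vec3f mAxis;    // axis about which to rotate
};

struct VOXELS_3DTrack {
    std::string mBone;                         // bone/joint name
    std::vector<VOXELS_3DKeyFrame> mKeyFrames; // keyframes for mBone
};

struct VOXELS_3DAnimation {
    std::string mName;                   // animation name
    float mLength;                       // animation duration
    std::vector<VOXELS_3DTrack> mTracks; // one track per bone/joint
};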

.......................................

 Skeletal Animation #1    Feb 01, 2006 


With skeletal animation each character is a hierarchy of bones (i.e. a skeleton), and the bones control the deformation of the character mesh. What follows is a quick overview of one way to handle skeletal animation, based on a much simplified version of how I do it in my own program. As usual, my advice for anyone just starting is to get something simple working and build from there, and make sure your program can display enough visuals & values that you can tell exactly what's going on if you need to debug.

When I initially set up skeletons in my program I had joints (which controlled the rotations between bones) and bones as separate types, each able to have multiple children of the other type. While there's not really anything wrong with this, I found there wasn't any need for it either. Here, what I called a joint is included in my definition of bone.
Structure
Off the top of my head, the basic bone structure for a skeleton might look like this:
struct Bone {
  quat qRotate;
  CVec3 vOffset;
  float fHingeAngle, fMin, fMax; 
  float fLength;
  int nJointType;
  int Child[ MAX_BONE_CHILDREN ];
  int Parent;
  char sName[ 32 ];
};


qRotate 
    - a quaternion representing the rotation of the bone in relation to its parent

vOffset
     - a 3D vector for how to offset the bone from the end of its parent ( defaults to 0,0,0 )

fHingeAngle, fMin, fMax
     - Hinge angle with a minimum and maximum. These values are only needed for Inverse Kinematics. I will ignore them in this article. 

fLength
     - The length of the bone 

nJointType 
    - The joint type ( normal, hinge, fixed... ) 

Child
     - The numbers in the skeleton of any child bones. (initialized to invalid) 

Parent
     - The number of the parent bone. Not strictly necessary, but you'll probably end up wanting it. 

sName
     - The name of this bone. In some situations not needed, but definitely is for my editor.

Animation

You'll want a separate structure for Bone keyframes. At their simplest, a keyframe need only include a quaternion for rotation and its frame number. I also include location and velocity (for IK & offset control), as well as a few other elements.

For the actual animating, we start by setting the proper values for the current frame by interpolating between the left and right keyframes. Spherical Linear Interpolation (Slerp) can be used for the quaternions, and a linear average or hermite curves for the location vector. Once we have set the values of the current keyframe, we use this keyframe to calculate a matrix for each bone. This matrix could be added to the Bone structure above, but I can have multiple objects with separate animations that are instances of a single skeleton, so I prefer to keep it in an array of size numBones that is part of each skeleton object. The bone matrices are computed by starting with the main bone and recursively processing each child bone in the same manner. For each bone we send in the current matrix, add the bone's rotation to it, store the result as the BoneMatrix for that bone, then translate by the bone's length.

Rendering

Once we have a matrix for each bone, we can draw some visual representation of the skeleton structure in its current position. On the simple end you can loop through each bone, load the bone matrix, and draw a box (perhaps of size bone[nBone].fLength, 1, 1) representing the bone.

On the more complex end, you can use the skeleton to deform a mesh. The details of that will have to wait for a later article. Here's an example of how you'd do it if each vert is only attached to one bone. The InvBindMatrix of a bone is the inverse of that bone's matrix in the position the skeleton was in when it was bound to the mesh. Therefore if a bone's current matrix is the same as its bind matrix, meaning it hasn't moved at all, InvBindMatrix * BoneMatrix will cancel out as we want.
CMatrix MBone[ MAX_BONES ];

for (int nBone = 0; nBone < nTotalBones; nBone++)
  {
  MBone[nBone] = InvBindMatrices[ nBone ] * BoneMatrices[ nBone ];
  }

// Loop through verts using just a single attached bone
// (for verts attached to multiple bones we would compute the location
//  given by each bone for the vert and use the weighted average.)
for (int i = 0; i < numVerts; i++)
  {
  pRenderVertices[ i ] = MBone[ pnBoneAttached[ i ] ] * pInVerts[ i ];
  }

.........................................................

 Skeletal Animation #2    Apr 05, 2006 


This article builds on my first skeletal animation article. I recommend reading that one first if you haven't already. In this article I'll discuss a couple of techniques used to improve the results of deforming a mesh with skeletal animation.

Weighted Vertices

Multiple attachments are a standard technique to improve the look of the deformed mesh, for things like reducing bunching at joints or adding subtle control bones, like putting a bone in a character's stomach to allow visible breathing. Somewhere along the way someone decided that 4 was a good number of different bones to allow a vertex to be attached to, and allowing up to 4 attachments has been sufficient for me for many years. So for each vertex we store each bone it is attached to (pnBoneAttached), and the weight for that attachment (pfWeights). If you add up all the weights, they should add up to 1.0. The code for computing the vertex locations deformed by a weighted skeleton follows:
for (i = 0; i < numVerts; i++)
   {
   pRenderVertices[ i ].SetZero();
   for (int x = 0; x < 4; x++)
      {
      int nBone = pnBoneAttached[ 4*i+x ];
      if ( nBone == NO_BONE ) break;

      pRenderVertices[ i ] += (MBones[ nBone ] * pVerts[ i ]) * pfWeights[ 4*i+x ];
      }
   }


With multiple attachments, not just the vertices need to be blended; the vertex normals do too, or in the case of object-space normal mapping, the light vectors. This can be done using pretty much the same method used for blending the verts. The main difference is you'll just want to rotate the normal without any translating. What I do is use the line pNormal[i].RotByMatrix( MBones[ nBone ] ), where RotByMatrix uses three dot products to rotate pNormal by the matrix without translating.
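As a sketch, a RotByMatrix-style helper might look like the following; the CVec3/CMatrix layouts shown here are assumed stand-ins (row-major, rotation in the upper-left 3x3), not the article's actual classes:

struct CVec3   { float x, y, z; };
struct CMatrix { float m[4][4]; };   // rotation assumed in the upper-left 3x3

// Rotate a direction by the matrix's 3x3 part: three dot products,
// deliberately ignoring the translation so normals are only rotated.
CVec3 RotByMatrix(const CVec3& v, const CMatrix& M)
{
    return { v.x*M.m[0][0] + v.y*M.m[0][1] + v.z*M.m[0][2],
             v.x*M.m[1][0] + v.y*M.m[1][1] + v.z*M.m[1][2],
             v.x*M.m[2][0] + v.y*M.m[2][1] + v.z*M.m[2][2] };
}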

Blending Multiple Mesh Chunks

Another way to animate meshes is to blend between multiple chunks of a mesh to create the final mesh. This is used most often for facial animation, like blinking or smiling, although it can be used in plenty of other ways. (It is also possible to use skeletal animation for facial animation.) You might have one head mesh chunk with the eyes open, and one with the eyes closed, and blend between them to blink. You can allow stacking of multiple blend shapes, and sliders to adjust their influence.

The actual blending between the chunks can be simple, although what I do is more complicated than what I'll present here. For instance you might want to define vertices to be blended instead of an entire chunk, and there are specific implementation issues that vary between programs, so they are not covered in this article. Let's say you have 1 chunk for the default position, and 2 additional blend chunks, each with a weight between 0 and 1. I'm assuming in this example that the blend chunks aren't offset from the default chunk; if they are, you'll need to translate them first. The weights are scaled so that they all add up to one. If the weights of all the blend chunks total less than 1, the weight of the default chunk is assigned so that they'll total 1.


int i;
float fTotal = 0.0f;
pfWeights[0] = 0.0f; // Chunk 0 is the default chunk

// Set Weights
for ( i = 1; i < nChunks; i++ ) {
   fTotal += pfWeights[i];
   }
if ( fTotal < 1.0f ) {
   pfWeights[0] = 1.0f - fTotal;
   fTotal = 1.0f;
   }
for ( i = 1; i < nChunks; i++ ) {
   pfWeights[i] /= fTotal;
   }

// Just blend vertex position linearly between all chunks
for (i = 0; i < nVertsInChunk; i++)
   {
   pOutVerts[i].SetZero();
   for ( int x = 0; x < nChunks; x++ ) {
      pOutVerts[i] += pInVerts[x][i] * pfWeights[x];
      }
   }


To use this with skeletal animation, you first run a pass to do the blending, and use the output as the input (pVerts) when deforming the vertices by the attached bones.










.....................................................

 Skeletal Animation #3    Jun 23, 2006 


In my first skeletal animation article, I briefly mentioned setting joint rotations and how to compute the matrices for each bone from the joint quaternions. Since I just glossed over a possibly complicated subject in a couple lines, I decided to concentrate on it in this article.

Computing the Bone Matrices

In my skeletal animation system, each skeleton is made up of joint/bone pairs. See the skeleton screenshot here. Each joint has a quaternion that controls its rotation. Now this quaternion may be controlled in another way, like by IK, Euler angles, or a physics system, but it's ultimately what's used to define the joint rotations. The skeleton in the screenshot was rendered by loading a bone matrix for each bone, then drawing a non-uniformly scaled cube for the bone. These bone matrices are also used for mesh skinning, as described in the previous skeletal animation articles. We'll start at the centerpoint position. Then we can recursively add in each bone and save the matrix, as shown by this code (I just wrote this without testing, hopefully it's bug free):

// InMatrix will probably be the ObjToWorld matrix for the centerpoint (joint 0)
void CSkel::SetMatrixFromJoint( int nJoint, CMatrix InMatrix, CMatrix* pBoneMatrices )
{
    // Set the bone matrix for nJoint by converting and adding the rotation quaternion
    pBoneMatrices[ nJoint ] = CMatrix( m_pJoints[ nJoint ].qRotate ) * InMatrix;

    // Process any further joints recursively
    for (int i = 0; i < MAX_JOINT_CHILDREN; i++)
    {
        int nJointChild = m_pJoints[ nJoint ].nChild[i];
        if (nJointChild != NO_CHILD)
        {
            // Translate by joint offset and parent bone length
            CVec3 vTranslate = m_pJoints[ nJointChild ].vOffset;
            vTranslate.x += m_pJoints[ nJoint ].fBoneLength;

            // Use the parent matrix, then translate
            CMatrix TempM = pBoneMatrices[ nJoint ];
            TempM.Translate( -vTranslate );

            SetMatrixFromJoint( nJointChild, TempM, pBoneMatrices );
        }
    }
}

You can get the global position of a joint by taking the translation vector of its bone matrix. You could also get the global rotation of a joint from the bone matrix, or you could compute and store this rotation separately in a quat.

Rag-doll

If you're using a rigid-body physics system for your rag-doll, you'll probably set up physics shapes for certain bones and joint them together. From the physics system you'll get back the rotations of the shapes in world space. (Another way to do rag-doll type effects is to just use points for joint locations, and then calculate rotation, choosing twist separately, but I won't discuss that here.)

What I do is use these world-space rotations to set the rotation values of each individual joint. Now you might wonder if we really need to convert back to joint rotations. There is a reason I did it this way: I want to be able to convert bones between rag-doll and animation whenever I want, including proper tweening. For instance, a character may use rag-doll for a fall, then get back up. (Getting up is an ambitious example; I've only done simpler cases so far.) I find it's easiest if everything sets the joint quats.

We can send the global target rotations from the physics system into our SetMatrixFromJoint function. There are also other reasons to use a target rotation, such as a character pulling herself onto a ledge, where you want her hands to stay aligned to the ledge. We use the targets to set the joint rotations where needed, by putting the code below at the top of the function:


if ( pbUseTarget[ nJoint ] )
    m_pJoints[ nJoint ].qRotate = pqTargetRot[ nJoint ] * qParentGlobalRot.Invert();



.....................................................

Wednesday, August 24, 2016

Usage: std::shared_ptr

std::shared_ptr

#include <memory>
template<typename T> class shared_ptr;  // (since C++11)

..
TODO:
..
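Until the TODO above is filled in, here is a minimal usage sketch (the Texture type is just an illustration):

#include <iostream>
#include <memory>

struct Texture {
    ~Texture() { std::cout << "Texture released\n"; }
};

int main()
{
    // Two owners share the same Texture; the control block counts them.
    std::shared_ptr<Texture> a = std::make_shared<Texture>();
    std::shared_ptr<Texture> b = a;      // use_count() == 2
    std::cout << a.use_count() << "\n";  // prints 2

    b.reset();                           // use_count() == 1
    a.reset();                           // count hits 0 -> destructor runs
    return 0;
}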

error LNK2019: unresolved external symbol _main referenced in function ___tmainCRTStartup


...

Common

Project -> Properties -> Configuration Properties -> Linker -> System -> SubSystem -> Console.


...

Specially

We also had this problem. My colleague found a solution. It turned out to be a redefinition of "main" in a third-party library header:
#define main    SDL_main
So the solution was to add:
#undef main
before our main function.
This is clearly stupid!
Oh, in SDL_main.h, there's a comment about "Redefine main() on some platforms so that it is called by SDL." Then: #ifdef WIN32 #define SDL_MAIN_AVAILABLE #endif ... #if defined(SDL_MAIN_NEEDED) || defined(SDL_MAIN_AVAILABLE) #define main SDL_main #endif. As @Csq said, it looks like there's a better way to initialize SDL2. – Nick Desaulniers

Sunday, August 21, 2016

Define: Complex Numbers


Define:Quaternions


Quaternions are rooted in the concept of the complex number system.

In addition to the well-known number sets (Natural, Integer, Real, and Rational), the Complex Number system introduces a new set of numbers called imaginary numbers.



Adding and Subtracting Complex Numbers
Multiplying a Complex Number by a Scalar
Product of Complex Numbers
Square of Complex Numbers
Complex Conjugate
Absolute Value of a Complex Number
Quotient of Two Complex Numbers
Powers of i
Rotors


Quaternions

With this knowledge of the complex number system and the complex plane, we can extend this to 3-dimensional space by adding two imaginary numbers to our number system in addition to i.

...Hamilton also recognized that the i, j, and k imaginary numbers could be used to represent three Cartesian unit vectors i, j, and k with the same properties as imaginary numbers, such that i^2 = j^2 = k^2 = -1..


  • Quaternions as an Ordered Pair
  • Adding and Subtracting Quaternions
  • Quaternion Products
  • A Real Quaternion
  • Multiplying a Quaternion by a Scalar
  • Pure Quaternions
  • Additive Form of a Quaternion
  • Unit Quaternion
Given an arbitrary vector v, we can express this vector in terms of both its scalar magnitude and its direction. And we can also describe a unit quaternion that has a zero scalar and a unit vector.
  • Binary Form of a Quaternion
By combining the definitions of the unit quaternion and the additive form of a quaternion, we can create a representation of quaternions that is very similar to the notation used to describe complex numbers..
  • Quaternion Conjugate
The quaternion conjugate can be computed by negating the vector part of the quaternion..
  • Quaternion Norm
  • Quaternion Normalization
With the definition of a quaternion norm, we can use it to normalize a quaternion. A quaternion is normalized by dividing it by its norm.
  • Quaternion Inverse
...To compute the inverse of a quaternion, we take the conjugate of the quaternion and divide it by the square of the norm..

Inverse(q) = Conjugate(q) / Square(Magnitude(q))

To show this, we can take the fact that by definition of the inverse:

(q) Inverse(q) = [1, Vector(0)] = 1

Multiplying both sides by the conjugate of the quaternion gives:

Conjugate(q) (q) Inverse(q) = Conjugate(q)

And by substitution we get:

Square(Magnitude(q)) Inverse(q) = Conjugate(q)

Inverse(q) = Conjugate(q) / Square(Magnitude(q))

And for unit-norm quaternions whose norm is 1, we can write:

Inverse(q) = Conjugate(q)

  • Quaternion Dot Product
Similar to the vector dot product, we can compute the dot product between two quaternions by multiplying the corresponding components and summing the results (a code sketch of these operations follows this list)..
Quat(q1) = [S1, X1i + Y1j + Z1k]
Quat(q2) = [S2, X2i + Y2j + Z2k]
Quat(q1) Dot-Product Quat(q2) = S1S2 + X1X2 + Y1Y2 + Z1Z2

We can also use the quaternion dot product to compute the angular difference between two quaternions.

And for unit-norm quaternions, we can simplify the equation:
Cosine(theta) = S1S2 + X1X2 + Y1Y2 + Z1Z2
  • Rotations
  • Quaternion Interpolation
  • SLERP(Spherical Linear Interpolation)
  • SQUAD(Spherical and Quadrangle)


..........
quotient [ˈkwoʊʃənt]
1.the number that is the result of dividing one number by another



..........

Define: Quaternion :-)

Definition of Quaternion

Quaternions find uses in both theoretical and applied mathematics, in particular for calculations involving three-dimensional rotations, such as in three-dimensional computer graphics, computer vision, and crystallographic texture analysis.[5] In practical applications, they can be used alongside other methods, such as Euler angles and rotation matrices, or as an alternative to them, depending on the application.



...

...
CrossProduct
 Given two 3D vectors U(U1,U2,U3) and V(V1,V2,V3), we can define the cross product CP of U and V as the vector (a code sketch follows the properties list below):
  
CP = [ (U2V3 - U3V2),  (U3V1 - U1V3),  (U1V2 - U2V1)]

or, written as a column vector:

     [ U2V3 - U3V2 ]
CP = [ U3V1 - U1V3 ]
     [ U1V2 - U2V1 ]

..
Properties:
    The cross product is orthogonal to both U and V..
    The vectors U, V, UxV align with the right hand rule..
    The length of the cross product is equal to the area of the parallelogram defined by U, V.
    
    U x V = -(V x U)     ---skew-symmetry
    U x ( V + W) = U x V + U x W
    (tU) x V = t (U x V)
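
A minimal C++ sketch of the definition above (the Vec3 type is an illustrative stand-in):

struct Vec3 { float x, y, z; };

// Cross product exactly as defined above: CP = U x V.
Vec3 Cross(const Vec3& u, const Vec3& v)
{
    return { u.y * v.z - u.z * v.y,   // U2V3 - U3V2
             u.z * v.x - u.x * v.z,   // U3V1 - U1V3
             u.x * v.y - u.y * v.x }; // U1V2 - U2V1
}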

...
parallelogram[.perə'lelə.ɡræm]
1. a two-dimensional geometric figure formed of four sides in which both pairs of opposite sides are parallel and of equal length, and the opposite angles are equal

Thursday, August 18, 2016

Define: COLLADA, 3D Animation, Skeletal Hierarchy, Joint, Bone, Skin, Geometry, Model :-)

Source Title   [to be continued....]

Step by Step Skeletal Animation in C++ and OpenGL, Using COLLADA

...

Reading Geometry Data from COLLADA document

<library_geometries>

<geometry>

<mesh>

<source>
    <float_array>
    <NAME_array>
    <technique_common>
        <accessor>
            <param>

_____ 
...


....


....

Wednesday, August 17, 2016

Define: Frame versus Sample :-)

Source


Unfortunately, the term frame has more than one common meaning in the game industry. This can lead to a great deal of confusion. Sometimes a frame is taken to be a period of time that is 1/30 or 1/60 of a second in duration. But in other contexts, the term frame is applied to a single point in time (e.g., we might speak of the pose of the character “at frame 42”).



• If a clip is non-looping, an N-frame animation will have N + 1 unique samples. 
• If a clip is looping, then the last sample is redundant (it matches the first), so an N-frame animation will have N unique samples.


Define: Poses :-)

Source

[Abstract]
Joint Space, Bind Pose, 

Bind Pose
This is the pose of the 3D mesh prior to being bound to the skeleton (hence the name). In other words, it is the pose that the mesh would assume if it were rendered as a regular, unskinned triangle mesh, without any skeleton at all. The bind pose is also called the T-pose because the character is usually standing with his feet slightly apart and his arms outstretched in the shape of the letter T.


Local Poses
Every joint in a skeletal hierarchy defines a set of local coordinate space axes, known as joint space...

Global Poses
A global pose can be calculated by walking the hierarchy from the joint in question towards the root and model space origin, concatenating the child-to-parent (local) transforms of each joint as we go.
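
A minimal sketch of that walk in C++ (the Mat4/Joint types, the parent-index representation, and the function names are all assumptions for illustration):

#include <vector>

struct Mat4 { float m[4][4]; };   // minimal row-major stand-in

static Mat4 Mul(const Mat4& A, const Mat4& B)   // A * B
{
    Mat4 R = {};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            for (int k = 0; k < 4; ++k)
                R.m[r][c] += A.m[r][k] * B.m[k][c];
    return R;
}

// Each joint stores its child-to-parent (local) transform and a parent index.
struct Joint {
    Mat4 localPose;   // joint space -> parent joint space
    int  parent;      // index of parent joint, -1 for the root
};

// Walk from the joint toward the root, concatenating local transforms.
Mat4 GlobalPose(const std::vector<Joint>& skeleton, int joint)
{
    Mat4 global = skeleton[joint].localPose;
    for (int p = skeleton[joint].parent; p != -1; p = skeleton[p].parent)
        global = Mul(skeleton[p].localPose, global);   // parent on the left
    return global;   // joint space -> model space
}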


....
.....
concatenate:
v.  1. to put two or more computer files or pieces of computer information together in order to form a single unit
adj.  1. used for describing two or more computer files or pieces of computer information that have been put together to form a single unit
...

Define: Skeletons :-)



Abstract:

Joint, Bone, Skeletal Hierarchy;




Skeletal Animation


[0th]
Here is a description of one way to do it:
1. Your object will be in a default pose.
2. All the vertices are in their respective places.
3. You have an array of bones.
4. You have an array of offsets.
5. What's a bone? It is a rotation matrix. You only need a 3x3 matrix.
6. What's an offset? It is an XYZ value you use to translate your vertex.
7. A bone and its offset together make a 4x4 matrix (see the sketch after this list).
8. Each vertex can have a certain number of bones that influence it. 0, 1, 2, ... you name it!
9. For each bone that influences your vertex, you need to decide how much it influences it. This is called a weight or blend factor. 1 weight per bone/offset matrix.
10. In order to animate your object, you need to change the bone and offset matrices. Normally, you would not change the weights.
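As referenced in step 7, here is a minimal sketch of packing a 3x3 rotation ("bone") and an XYZ offset into a single 4x4 matrix. The row-major float-array layout, with vectors treated as columns, is an assumption for illustration:

// Pack a 3x3 rotation and an offset into one 4x4 transform.
void MakeBoneMatrix(const float rot3x3[3][3], const float offset[3], float out4x4[4][4])
{
    for (int r = 0; r < 3; ++r)
    {
        for (int c = 0; c < 3; ++c)
            out4x4[r][c] = rot3x3[r][c]; // rotation in the upper-left 3x3
        out4x4[r][3] = offset[r];        // offset in the last column
    }
    out4x4[3][0] = out4x4[3][1] = out4x4[3][2] = 0.0f;
    out4x4[3][3] = 1.0f;                 // homogeneous row
}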
[1st]

[2nd]

[3rd]

[4th]



.....

A skeleton is comprised of a hierarchy of rigid pieces known as joints. In the game industry, we often use the terms "joint" and "bone" interchangeably, but the term bone is actually a misnomer. Technically speaking, the joints are the objects that are directly manipulated by the animator, while the bones are simply the empty spaces between the joints. 



The Skeletal Hierarchy




[Figure: an example skeletal hierarchy. Joints include Head and Spine_1 through Spine_5; Left/Right elbow, wrist, and hand; and Pelvis with Left/Right knee, ankle, and toe.]

.......
.......

Joint


A joint allows relative movement within the skeleton. Joints are essentially 4x4 matrix transformations. Joints can be rotational, translational, or some non-realistic types as well..


Bone

Bone is really just a synonym for joint for the most part. For example, one might refer to the shoulder joint or upper arm bone (humerus) and mean the same thing...





Define: Types of Character Animation :-)

Source

Cel Animation

Rigid Hierarchical Animation

Per-Vertex Animation and Morph Targets

Skinned Animation




....
To be continued
......

Define: 3D Modeling Terminology :-)

Key 3D Modeling Terminology to Master:


Polygon geometry

Polygons are the most commonly used geometry type in 3D. While polygons are used for all types of objects, creating very smooth surfaces with polygons means you'd need to add a lot more geometry than you would with either NURBS or subdivision surfaces.

NURBS surfaces

NURBS stands for non-uniform rational b-spline. NURBS are commonly used for very smooth objects because they don't require as many points to create the same look as polygon geometry would. A NURBS surface always has four sides that are defined by control points.


Subdivision surfaces

Subdivision surfaces, which are sometimes referred to as NURMS (non-uniform rational mesh smooth), are closely related to polygonal geometry. Subdivision surfaces use an algorithm to take polygon geometry and smooth it automatically. For example, picture a polygon cage (a cube shape) around the smoothed subdivision surface it generates (a spherical shape inside the cube). You can think of a subdivision surface as a mix of polygonal and NURBS geometry.



Faces

A face is the most basic part of a 3D polygon. When three or more edges are connected together, the face is what fills in the empty space between the edges and makes up what is visible on a polygon mesh. Faces are the areas of your model that get shading materials applied to them.


Vertex

A vertex is the smallest component of a polygon model. It is simply a point in 3D space. By connecting multiple vertices (the plural of vertex) together you can create a polygon model. These points can be manipulated to create the desired shape.

Edges

An edge is another component of a polygon. Edges help define the shape of the models, but they can also be used to transform them. An edge is defined by two vertices at their end points. Together, vertices, edges and faces are the components that all help to define the shape of a polygonal object.


Topology
Whatever type of geometry you use, it will be built either from NURBS or from points, edges, and faces. The way these components are connected together, and the way they flow around the 3D object, is the topology. You can think of topology as the type of polygon faces, the type of vertices, and the flow of the edges.


Triangle
A triangle is the simplest polygon, made up of three sides or edges connected by three vertices to form a three-sided face. When modeling, triangles are a polygon type that is typically avoided. In complex meshes, triangles tend to pose problems when subdividing geometry to increase resolution, and when a mesh will be deformed or animated.


Normals
Surface normals are used by your 3D application to determine the direction that light will bounce off of geometry. This is very helpful for controlling how light reacts to certain materials on your 3D objects.





























