
For an explanation of why to use tangent space, read this tidbit of text.
Converting to Tangent (or texture) space
Normals stored in the texture depend on the surface orientation and are stored in what's called Tangent Space. But all the other lighting components, such as the view direction, are supplied in world space. Since we can't compare vectors living in different spaces, why not convert every lighting component we need to compare the normal with into this tangent space? Why not compare apples to apples?
Changing coordinate systems requires a transformation. I'll skip the hardcore math, but what I do want to explain here is that we need a matrix to transform from world space to tangent space. Just like we need a matrix to get world space from object space, we need a matrix to convert to tangent space. Remember this:
  • We need the surface orientation, because that's what the texture normals depend on.
  • We know everything about our surface (a triangle).
  • Any lighting component we need in the PS (lightdir, viewdir, surfacedir) needs to be multiplied by the resulting matrix.
/* We need 3 triangle corner positions (pos1..pos3), 3 triangle texture coordinates
   (tex1..tex3) and the surface normal. Tangent and bitangent are the variables
   we're constructing */
D3DXVECTOR3 tangent, bitangent, bitangenttest;

// Determine the surface orientation by calculating the triangle's edges
D3DXVECTOR3 edge1 = pos2 - pos1;
D3DXVECTOR3 edge2 = pos3 - pos1;
D3DXVec3Normalize(&edge1, &edge1);
D3DXVec3Normalize(&edge2, &edge2);
// Do the same in texture space
D3DXVECTOR2 texEdge1 = tex2 - tex1;
D3DXVECTOR2 texEdge2 = tex3 - tex1;
D3DXVec2Normalize(&texEdge1, &texEdge1);
D3DXVec2Normalize(&texEdge2, &texEdge2);
// The determinant gives the orientation (and handedness) of the texture mapping
float det = (texEdge1.x * texEdge2.y) - (texEdge1.y * texEdge2.x);
// Account for imprecision
if(fabsf(det) < 1e-6f) {
    // A (nearly) zero determinant means the texture mapping is degenerate,
    // so fall back to default axes
    tangent = D3DXVECTOR3(1.0f, 0.0f, 0.0f);
    bitangenttest = D3DXVECTOR3(0.0f, 1.0f, 0.0f);
} else {
    float invdet = 1.0f / det;
    tangent.x = (texEdge2.y * edge1.x - texEdge1.y * edge2.x) * invdet;
    tangent.y = (texEdge2.y * edge1.y - texEdge1.y * edge2.y) * invdet;
    tangent.z = (texEdge2.y * edge1.z - texEdge1.y * edge2.z) * invdet;


    bitangenttest.x = (-texEdge2.x * edge1.x + texEdge1.x * edge2.x) * invdet;
    bitangenttest.y = (-texEdge2.x * edge1.y + texEdge1.x * edge2.y) * invdet;
    bitangenttest.z = (-texEdge2.x * edge1.z + texEdge1.x * edge2.z) * invdet;
}
D3DXVec3Normalize(&tangent, &tangent);
D3DXVec3Normalize(&bitangenttest, &bitangenttest);

// As the bitangent equals the cross product between the normal and the tangent
// running along the surface, calculate it
D3DXVec3Cross(&bitangent, &normal, &tangent);

// Since we don't know whether we must negate it, compare it with the one computed above
float crossinv = (D3DXVec3Dot(&bitangent, &bitangenttest) < 0.0f) ? -1.0f : 1.0f;
bitangent *= crossinv;
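The code above produces the tangent frame of a single triangle. In practice you accumulate these per vertex (summing the results of every triangle touching the vertex) and re-orthogonalise at the end. Here is a minimal sketch of that final pass, assuming hypothetical accumTangent/accumBitangent arrays holding those sums and a Vertex struct with normal, tangent and binormal members:
// A minimal sketch; vertices, numvertices, accumTangent and accumBitangent are assumed names
for(DWORD i = 0; i < numvertices; ++i) {
    D3DXVECTOR3 n = vertices[i].normal;
    D3DXVECTOR3 t = accumTangent[i];

    // Gram-Schmidt: make the averaged tangent perpendicular to the normal again
    t = t - n * D3DXVec3Dot(&n, &t);
    D3DXVec3Normalize(&t, &t);

    // Rebuild the binormal from the orthogonalised pair, keeping the original handedness
    D3DXVECTOR3 b;
    D3DXVec3Cross(&b, &n, &t);
    if(D3DXVec3Dot(&b, &accumBitangent[i]) < 0.0f)
        b = -b;

    vertices[i].tangent = t;
    vertices[i].binormal = b;
}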


We need to create a 3x3 matrix to be able to convert the vectors we need into surface-relative (tangent-space) ones. This matrix is built by putting the three vectors together in a matrix and then transposing it in the Vertex Shader:
// tangentin, binormalin and normalin are 3D vectors supplied by the CPU
float3x3 tbnmatrix = transpose(float3x3(tangentin,binormalin,normalin));
// Then multiply any vector we need in tangent space (the ones to be compared to
// the normal in the texture) by this matrix. For example, the light direction:
float3 lightdirtangent = mul(lightdir, tbnmatrix);
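To show where those vectors could come from, here is a minimal vertex shader sketch. The constant names (wvpmatrix, worldmatrix, lightpos, camerapos) and the struct layout are assumptions for illustration, not part of the snippets above; it also assumes the world matrix contains no non-uniform scaling.
// A minimal sketch; constant and struct member names are assumed
float4x4 wvpmatrix;     // world * view * projection
float4x4 worldmatrix;   // world transform only
float3   lightpos;      // light position in world space
float3   camerapos;     // camera position in world space

struct VS_IN
{
    float3 pos        : POSITION;
    float3 normalin   : NORMAL;
    float3 tangentin  : TANGENT;
    float3 binormalin : BINORMAL;
    float2 tex        : TEXCOORD0;
};

struct VS_OUT
{
    float4 pos             : POSITION;
    float2 tex             : TEXCOORD0;
    float3 lightdirtangent : TEXCOORD1;
    float3 toeyetangent    : TEXCOORD2;
};

VS_OUT vs_main(VS_IN input)
{
    VS_OUT output;
    output.pos = mul(float4(input.pos, 1.0f), wvpmatrix);
    output.tex = input.tex;

    // Bring the basis vectors into world space and build the (transposed) TBN matrix
    float3 t = mul(input.tangentin,  (float3x3)worldmatrix);
    float3 b = mul(input.binormalin, (float3x3)worldmatrix);
    float3 n = mul(input.normalin,   (float3x3)worldmatrix);
    float3x3 tbnmatrix = transpose(float3x3(t, b, n));

    // Convert the world-space vectors we need into tangent space
    float3 worldpos = mul(float4(input.pos, 1.0f), worldmatrix).xyz;
    output.lightdirtangent = mul(lightpos - worldpos, tbnmatrix);
    output.toeyetangent    = mul(camerapos - worldpos, tbnmatrix);
    return output;
}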

Then we're almost done. The only thing we need to do now is pass all the converted vectors to the Pixel Shader. Inside the Pixel Shader, retrieve the normal from the texture. You should now have, for example, the light direction in tangent space. Then do your lighting calculations as you always would, the only exception being the source of the normal:
// we're inside a Pixel Shader now
// texture coordinates are equal to the ones used for the diffuse color map
float3 normal = tex2D(normalmapsampler, coordin).rgb;


// color is stored in the [0,1] range (0 - 255), but we want our normals to be
// in the range of [-1,1].
// solution: multiply by 2 (yields [0,2]) and subtract one (yields [-1,1]).
normal = 2.0f * normal - 1.0f;


// now that we've got our normal to work with, obtain (for example) lightdir
// for Phong shading
// lightdirtangentin is the same vector as lightdir in the VS around
// 20 lines above
float3 lightdir = normalize(lightdirtangentin);


/* use the variables as you would always do with your favourite lighting model */
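To make that concrete, here is a minimal sketch of a diffuse-plus-specular (Phong-style) calculation using the tangent-space vectors from above. diffusesampler, toeyetangentin and the shininess value of 32 are assumed names/values, not something defined in the snippets above:
// A minimal sketch; diffusesampler and toeyetangentin are assumed names
float3 n = normalize(normal);            // the unpacked normal-map normal from above
float3 v = normalize(toeyetangentin);    // tangent-space view direction from the VS

float  diffuse  = saturate(dot(n, lightdir));
float3 r        = reflect(-lightdir, n);
float  specular = pow(saturate(dot(r, v)), 32.0f);   // 32 is an assumed shininess

float3 basecolor = tex2D(diffusesampler, coordin).rgb;
return float4(basecolor * diffuse + specular.xxx, 1.0f);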
If you don't have a clue what Tangent Space is about, read this.
This time, Parallax Mapping will be discussed.
Theory
Well, what is a parallax supposed to be anyway? It's quite a common phenomenon. Actually, it's so common most people wouldn't even notice it as anything out of the ordinary. Let's take a speedometer as a common example for people not sitting behind the steering wheel.
Let's suppose dad's driving at 100 km/h. His speedometer shows more or less that amount. But mom, sitting next to him, will see him driving a tad slower. Why, you might ask? Well, it's because dad is viewing the speedometer from the front, so the needle sits right on top of '100'. From mom's point of view, it'll be hovering above, let's say, 95 km/h. This is because she is viewing it at an angle and there's a depth difference between the needle and the dial.
Moving from A to B, passing a static object, will make the background appear to be moving.
Let's agree that a parallax effect occurs when viewing nearer (foreground) objects at a changing angle. The background that an object is in front of will change depending on the viewing angle. This leads us to think the background is moving with us, because we're seeing different portions of the background next to the same object.
Luckily, this effect will be automatically implemented and hardware accelerated in 3D space for us.
But what about textures? They too are 3D worlds, but flattened to 2D by our limited camera sensors, lacking parallax and thus looking fake: objects that were positioned far away from the camera will move at the same speed as nearer ones for the viewer, because of the lack of depth.
Programming
Looks like we want to bring parallax and thus 3D back into our textures. Remember the variables needed for it? Yep, depth and viewing angle. More depth means more parallax, more angle means more parallax too.
We can get depth from a regular heightmap, that's no problem at all. As the heightmap uses the same texture coordinates, we can sample from it just as we would from a regular color texture:
// a snippet from inside a Pixel Shader.
// coordin is the interpolated texture coordinate passed by the Vertex Shader
// heightmapsampler is a standard wrap sampler, which samples from a generic height map
float height = tex2D(heightmapsampler, coordin).r;

// what we also need in the Pixel Shader is the viewing direction.
// As we're doing calculations relative to our surface (Tangent Space)..
// we need to transform it to texture space. If you don't know what tbnMatrix
// is, read the tutorial over here.


/* VERTEX SHADER
outVS.toeyetangent = mul((camerapos - worldpos),tbnMatrix);
*/


// PIXEL SHADER
float3 toeyetangent = normalize(toeyetangentin);


// The only thing we're doing here is skewing textures. We're only moving
// textures around. The higher a specific texel is, the more we move it.
// We'll be skewing in the direction of the view vector too.


// This is a texture coordinate offset. As I said, it increases when the height
// increases. Also required and worth mentioning is that we're moving along with
// the viewing direction, so multiply the offset by it.
// We also need to specify an effect multiplier. This normally needs to be about 0.04.
float2 offset = toeyetangent.xy * height * 0.04f;
coordin += offset;
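For reference, the pieces above combined into one small pixel shader could look roughly like this. heightmapsampler and diffusesampler are assumed sampler names declared elsewhere in the effect, and 0.04 is the same assumed scale:
// A minimal sketch of the whole parallax-mapping pixel shader
float4 ps_main(float2 coordin : TEXCOORD0,
               float3 toeyetangentin : TEXCOORD1) : COLOR
{
    // Tangent-space view direction, interpolated from the vertex shader
    float3 toeyetangent = normalize(toeyetangentin);

    // Sample the height and turn it into a texture coordinate offset
    float  height = tex2D(heightmapsampler, coordin).r;
    float2 offset = toeyetangent.xy * height * 0.04f;

    // Sample the color map at the skewed coordinate
    float3 color = tex2D(diffusesampler, coordin + offset).rgb;
    return float4(color, 1.0f);
}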


In its most basic form, this is all you need to get Parallax Mapping working. Let's sum things up, shall we?
  • Textures lack depth. Depth is necessary to bring back a 3D feel to it.
  • An important part of the depth illusion is Parallax. We want to bring it back into our textures.
  • To do that, we need to obtain texel depth and the viewing direction. The viewing direction needs to be supplied in Tangent Space to the pixel shader.
  • To do that we need to supply normals, binormals and tangents to the vertex shader (see the vertex declaration sketch after this list). Combining these into a matrix and transposing it gives us the opportunity to transform from world to texture space.
  • Then supply texture coordinates and the tangent space view vector to the pixel shader.
  • Then sample depth from a regular heightmap. Multiply it by the view vector.
  • And tada, you've got your texture coordinate offset. You're then supposed to use this texture coordinate for further sampling.
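As referenced in the list above, supplying tangents and binormals to the vertex shader could look like this with a Direct3D 9 vertex declaration. The struct layout, offsets and the device pointer are assumptions; match them to your own vertex format:
// A minimal sketch of a D3D9 vertex declaration carrying the tangent frame.
// Offsets assume a vertex of position, normal, tangent, binormal, texcoord (float3/float3/float3/float3/float2).
D3DVERTEXELEMENT9 decl[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TANGENT,  0 },
    { 0, 36, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_BINORMAL, 0 },
    { 0, 48, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* vertexdecl = NULL;
device->CreateVertexDeclaration(decl, &vertexdecl);   // device is an assumed IDirect3DDevice9*
device->SetVertexDeclaration(vertexdecl);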
Let's post a couple of screenshots then. First the one without any parallax mapping.


Now one that does include a Parallax Map. It uses a multiplier of 0.04 and a single sample.


Yes, a single sample is all you need. No need to do PCF averaging or anything; just a single tex instruction per pixel. But as you can see in the latter picture, there are some minor artifacts, especially at steeper viewing angles. To partially fix this, you need to include an offset constant, like this:
float2 offset = toeyetangent.xy * (height * 0.04f - 0.01f);



With this result:



Well, that's pretty much all there is to it. Have fun with it!