Friday, 13 July 2012

Crepuscular (God) Rays and Web UI Sample

OK, this is two samples rolled into one. The first part covers the post-processing effect of crepuscular rays I have created in XNA, based on the GPU Gems 3 article “Volumetric Light Scattering as a Post-Process”. The second part covers the web UI used in the sample. I initially intended to give a short talk on it at the September 2011 XNA-UK meeting, but we had a great talk from the guys over at IndieCity.com and so I never got around to it.
Crepuscular Rays Effect Overview
So, after reading the GPU Gems article I thought it should be easy to get the effect working in my post-processing framework which, if you missed it, I posted the source for a while ago; I have made a few changes in my latest version, but that framework should still fly. I also thought I could use my existing sun post process (again, if you missed that, you can find it on stg conker) and incorporate it into the effect. So, the steps used to create this effect are:

1. Render the sun to a render target.
2. Black out any occluded pixels in that image by testing them against the depth buffer (created when you rendered the scene).
3. Pass this image to the GPU Gems god-ray pixel shader.
4. Use a bright-pass pixel shader (taken from my Bloom shader) to brighten the rays.
5. Render this final texture and blend it back with the scene image.
All in all it’s a five-pass effect; this could be reduced by folding the occlusion pass into the rendering of the original sun/light source pass and dropping the bright pass. So, let’s get into the shaders. I probably won’t post any C# here, as you have the March 2011 talk and code I gave to fall back on; one change you will notice is that I have moved away from using SpriteBatch to render the RTs, as this restricted the post-processing framework to shader model 2.
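That said, here is a minimal sketch of how the five passes might be chained. Everything in it is my own placeholder code rather than the framework's actual API: the RenderPass helper, the field names, and the use of SpriteBatch for the quads (kept only to make the sketch short; as noted above, the real framework draws its own quads precisely to escape SpriteBatch's shader model 2 limit). The halfPixel, BloomThreshold, flare and matInvVP parameters are assumed to be set elsewhere.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Hypothetical five-pass god-ray chain; a sketch, not the framework's real API.
public class GodRaysSketch
{
    GraphicsDevice device;
    SpriteBatch spriteBatch;
    Effect lightSourceMask, lightSceneMask, lightRays, brightPass, sceneBlend;
    RenderTarget2D rt0, rt1;

    // Draws 'source' into 'target' through 'effect' as a full-screen pass.
    Texture2D RenderPass(Effect effect, Texture2D source, RenderTarget2D target)
    {
        device.SetRenderTarget(target);
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
            SamplerState.PointClamp, DepthStencilState.None,
            RasterizerState.CullNone, effect);
        spriteBatch.Draw(source, target.Bounds, Color.White);
        spriteBatch.End();
        device.SetRenderTarget(null);
        return target;
    }

    public Texture2D Render(Texture2D scene, Texture2D depth,
        Vector3 lightPos, Matrix viewProjection)
    {
        // 1. Render the sun/light source into its own target.
        lightSourceMask.Parameters["lightPosition"].SetValue(lightPos);
        lightSourceMask.Parameters["matVP"].SetValue(viewProjection);
        Texture2D sun = RenderPass(lightSourceMask, scene, rt0);

        // 2. Black out pixels occluded by scene geometry, using the depth buffer.
        lightSceneMask.Parameters["lightPosition"].SetValue(lightPos);
        lightSceneMask.Parameters["matVP"].SetValue(viewProjection);
        lightSceneMask.Parameters["depthMap"].SetValue(depth);
        Texture2D masked = RenderPass(lightSceneMask, sun, rt1);

        // 3. Ray-march from each pixel towards the light (the GPU Gems shader).
        lightRays.Parameters["lightPosition"].SetValue(lightPos);
        lightRays.Parameters["matVP"].SetValue(viewProjection);
        Texture2D rays = RenderPass(lightRays, masked, rt0);

        // 4. Brighten the rays with the bloom-style bright pass.
        Texture2D bright = RenderPass(brightPass, rays, rt1);

        // 5. Blend the rays back over the original scene.
        sceneBlend.CurrentTechnique = sceneBlend.Techniques["Additive"];
        sceneBlend.Parameters["OrgScene"].SetValue(scene);
        return RenderPass(sceneBlend, bright, rt0);
    }
}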
LightSourceMask.fx (or the old Sun shader tidied up a bit)
#include "PPVertexShader.fxh"
float3 lightPosition;
float4x4 matVP;
float2 halfPixel;
float SunSize = 1500;
texture flare;
sampler Flare = sampler_state
{
    Texture = (flare);
    AddressU = CLAMP;
    AddressV = CLAMP;
};
float4 LightSourceMaskPS(float2 texCoord : TEXCOORD0 ) : COLOR0
{
    texCoord -= halfPixel;
    // Start with black
    float4 col = 0;
    // Find the sun's position in the world and map it to screen space
    // (promote to a float4 with w = 1 so the matrix translation is applied).
    float4 ScreenPosition = mul(float4(lightPosition, 1), matVP);
    float scale = ScreenPosition.z;
    ScreenPosition.xyz /= ScreenPosition.w;
    ScreenPosition.x = ScreenPosition.x/2.0f+0.5f;
    ScreenPosition.y = (-ScreenPosition.y/2.0f+0.5f);
    // Are we looking in the direction of the sun?
    if(ScreenPosition.w > 0)
    {      
        float2 coord;
        float size = SunSize / scale;
        float2 center = ScreenPosition.xy;
        // Map the area around the sun's screen position into the flare texture.
        coord = .5 - (texCoord - center) / size * .5;
        // Square to sharpen the flare, then brighten it.
        col += pow(tex2D(Flare, coord), 2) * 2;
    }
    return col;  
}
technique LightSourceMask
{
    pass p0
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 LightSourceMaskPS();
    }
}
You can see at the top there is a reference to PPVertexShader.fxh; this is just a header file containing the vertex shader. I do this so I don’t have to repeat the vertex shader in effects that share it.
So, like in the sun shader, we find the point in world space of the light source and render the texture.
So we end up with an image like this:


LightSceneMask.fx
#include "PPVertexShader.fxh"
float3 lightPosition;
float4x4 matVP;
float4x4 matInvVP;
float2 halfPixel;
sampler2D Scene: register(s0){
    AddressU = Mirror;
    AddressV = Mirror;
};
texture depthMap;
sampler2D DepthMap = sampler_state
{
    Texture = <depthMap>;
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = None;
};
float4 LightSourceSceneMaskPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float depthVal = 1 - (tex2D(DepthMap, texCoord).r);
    float4 scene = tex2D(Scene,texCoord);
    // Reconstruct the pixel's world position from the depth buffer.
    // (Not actually used by the occlusion test below; left in from an earlier version.)
    float4 position;
    position.x = texCoord.x * 2.0f - 1.0f;
    position.y = -(texCoord.y * 2.0f - 1.0f);
    position.z = depthVal;
    position.w = 1.0f;
    float4 worldPos = mul(position, matInvVP);
    worldPos /= worldPos.w;
    // Find light pixel position
    float4 ScreenPosition = mul(float4(lightPosition, 1), matVP);
    ScreenPosition.xyz /= ScreenPosition.w;
    ScreenPosition.x = ScreenPosition.x/2.0f+0.5f;
    ScreenPosition.y = (-ScreenPosition.y/2.0f+0.5f);
    // If the pixel is in front of the light source, blank it out.
    if(depthVal < ScreenPosition.z - .00025)
        scene = 0;
    return scene;
}
technique LightSourceSceneMask
{
    pass p0
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 LightSourceSceneMaskPS();
    }
}
In this shader we take the rendered light scene and black out the pixels that are occluded by objects in the scene, which gives us an image like this:


LightRays.fx
#include "PPVertexShader.fxh"
#define NUM_SAMPLES 128
float3 lightPosition;
float4x4 matVP;
float2 halfPixel;
float Density = .5f;
float Decay = .95f;
float Weight = 1.0f;
float Exposure = .15f;
sampler2D Scene: register(s0){
    AddressU = Clamp;
    AddressV = Clamp;
};
float4 lightRayPS( float2 texCoord : TEXCOORD0 ) : COLOR0
{
    // Find light pixel position
    float4 ScreenPosition = mul(float4(lightPosition, 1), matVP);
    ScreenPosition.xyz /= ScreenPosition.w;
    ScreenPosition.x = ScreenPosition.x/2.0f+0.5f;
    ScreenPosition.y = (-ScreenPosition.y/2.0f+0.5f);
    float2 TexCoord = texCoord - halfPixel;
    float2 DeltaTexCoord = (TexCoord - ScreenPosition.xy);
    DeltaTexCoord *= (1.0f / NUM_SAMPLES * Density);
    DeltaTexCoord = DeltaTexCoord * clamp(ScreenPosition.w * ScreenPosition.z,0,.5f);
    float3 col = tex2D(Scene,TexCoord);
    float IlluminationDecay = 1.0;
    float3 Sample;
    for( int i = 0; i < NUM_SAMPLES; ++i )
    {
        TexCoord -= DeltaTexCoord;
        Sample = tex2D(Scene, TexCoord);
        Sample *= IlluminationDecay * Weight;
        col += Sample;
        IlluminationDecay *= Decay;          
    }
    return float4(col * Exposure, 1);
}
technique LightRayFX
{
    pass p0
    {
        VertexShader = compile vs_3_0 VertexShaderFunction();
        PixelShader = compile ps_3_0 lightRayPS();
    }
}
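For reference, the loop above is evaluating the summation from the GPU Gems chapter. Writing s for the masked light texture, uv for the (half-pixel corrected) texture coordinate, delta for the per-sample step towards the light's screen position, and N for NUM_SAMPLES, the shader computes (my transcription of what the code does):

    col(uv) = Exposure * ( s(uv) + sum[i = 1..N] Decay^(i-1) * Weight * s(uv - i * delta) )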
As you can see, this is pretty much the same shader as in the GPU Gems article, except that we calculate the on-screen light source position in the shader itself. This gives an image like this:


Pretty eh :D
BrightPass.fx
#include "PPVertexShader.fxh"
uniform extern float BloomThreshold;
float2 halfPixel;
sampler TextureSampler : register(s0);
float4 BrightPassPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    texCoord -= halfPixel;
    // Look up the original image color.
    float4 c = tex2D(TextureSampler, texCoord);
    // Adjust it to keep only values brighter than the specified threshold.
    return saturate((c - BloomThreshold) / (1 - BloomThreshold));
}
technique BloomExtract
{
    pass P0
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 BrightPassPS();
    }
}
This shader just takes the scene and, based on a threshold, keeps and brightens only the brightest parts, like this:


SceneBlend.fx
#include "PPVertexShader.fxh"
float2 halfPixel;
sampler2D Scene: register(s0){
    AddressU = Mirror;
    AddressV = Mirror;
};
texture OrgScene;
sampler2D orgScene = sampler_state
{
    Texture = <OrgScene>;
    AddressU = CLAMP;
    AddressV = CLAMP;
};
float4 BlendPS(float2 texCoord : TEXCOORD0 ) : COLOR0
{
    texCoord -= halfPixel;
    float4 col = tex2D(orgScene,texCoord) * tex2D(Scene,texCoord);
    return col;
}
float4 AdditivePS(float2 texCoord : TEXCOORD0 ) : COLOR0
{
    texCoord -= halfPixel;
    float4 col = tex2D(orgScene,texCoord) + tex2D(Scene,texCoord);
    return col;
}
technique Blend
{
    pass p0
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 BlendPS();
    }
}
technique Additive
{
    pass p0
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 AdditivePS();
    }
}
And finally we blend this with the original scene; the Additive technique is used in this sample, giving a final image like this:


So there you have the god ray post process.
Web UI
OK, so now onto the UI. I am using a third-party library called Awesomium, and it is indeed awesome, well I think so. It is basically a web renderer: you give it a URL, it renders it and spits out a texture, and we can then render that texture. Now, if it just did that it would not be much use; thankfully we can wire up callbacks to it and pass mouse and keyboard events to it. This means we can interact with the web page from our game, so you can create all your UIs in HTML using great stuff like jQuery and any other web tech you can pile into your game. This sample keeps all its web UI local, but you could serve the entire game UI from your site.
I first came across this tool while working on ST: Excalibur, where we use it to drive the UI, and was really impressed with it, so I thought I would do a version for XNA. In order to use this tool you need to download the Awesomium source and compile the AwesomiumSharp project; once you have that, there are a number of assemblies from that build you will need to add to your project, and all the details on how to do this can be found in the ppt that comes with this sample. Once you have all that in place you can create a DrawableGameComponent like this one to handle your web UI:
    using System.IO;
    using System.Runtime.InteropServices;
    using AwesomiumSharp;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;

    public class AwesomiumUIManager : DrawableGameComponent
    {
        public int thisWidth;
        public int thisHeight;
        protected Effect webEffect;
        public WebView webView;
        public Texture2D webRender;
        protected int[] webData;
        public bool TransparentBackground = true;
        protected SpriteBatch spriteBatch
        {
            get { return (SpriteBatch)Game.Services.GetService(typeof(SpriteBatch)); }
        }
        public string URL;
        public AwesomiumUIManager(Game game, string baseUrl)
            : base(game)
        {
            URL = baseUrl;
            DrawOrder = int.MaxValue;
        }
        protected override void LoadContent()
        {
            WebCore.Config config = new WebCore.Config();
            config.enableJavascript = true;
            config.enablePlugins = true;
            WebCore.Initialize(config);
            thisWidth = Game.GraphicsDevice.PresentationParameters.BackBufferWidth;
            thisHeight = Game.GraphicsDevice.PresentationParameters.BackBufferHeight;
            webView = WebCore.CreateWebview(thisWidth, thisHeight);
            webRender = new Texture2D(GraphicsDevice, thisWidth, thisHeight, false, SurfaceFormat.Color);
            webData = new int[thisWidth * thisHeight];
            webEffect = Game.Content.Load<Effect>("Shaders/webEffect");
            ReLoad();
        }
        public virtual void LoadFile(string file)
        {
            LoadURL(string.Format("file:///{0}\\{1}", Directory.GetCurrentDirectory(), file).Replace("\\", "/"));
        }
        public virtual void LoadURL(string url)
        {
            URL = url;
            webView.LoadURL(url);
            webView.SetTransparent(TransparentBackground);
            webView.Focus();
        }
        public virtual void ReLoad()
        {
            if (URL.Contains("http://") || URL.Contains("file:///"))
                LoadURL(URL);
            else
                LoadFile(URL);
        }
        public virtual void CreateObject(string name)
        {
            webView.CreateObject(name);
        }
        public virtual void CreateObject(string name, string method, WebView.JSCallback callback)
        {
            CreateObject(name);
            webView.SetObjectCallback(name, method, callback);
        }
        public virtual void PushData(string name, string method, params JSValue[] args)
        {
            webView.CallJavascriptFunction(name, method, args);
        }
        public void LeftButtonDown()
        {
            webView.InjectMouseDown(MouseButton.Left);
        }
        public void LeftButtonUp()
        {
            webView.InjectMouseUp(MouseButton.Left);
        }
        public void MouseMoved(int X, int Y)
        {
            webView.InjectMouseMove(X, Y);
        }
        public void ScrollWheel(int delta)
        {
            webView.InjectMouseWheel(delta);
        }
        public void KeyPressed(Keys key)
        {
            WebKeyboardEvent keyEvent = new WebKeyboardEvent();
            keyEvent.type = WebKeyType.Char;
            keyEvent.text = new ushort[] { (ushort)key, 0, 0, 0 };
            webView.InjectKeyboardEvent(keyEvent);
        }
        public override void Update(GameTime gameTime)
        {
            WebCore.Update();
            // Only copy the page's pixels over when it has actually changed.
            if (webView.IsDirty())
            {
                Marshal.Copy(webView.Render().GetBuffer(), webData, 0, webData.Length);
                webRender.SetData(webData);
            }
            base.Update(gameTime);
        }
        public override void Draw(GameTime gameTime)
        {
            if (webRender != null)
            {
                spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.PointClamp, DepthStencilState.Default, RasterizerState.CullCounterClockwise);
                webEffect.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(webRender, new Rectangle(0, 0, Game.GraphicsDevice.Viewport.Width, Game.GraphicsDevice.Viewport.Height), Color.White);
                spriteBatch.End();
                Game.GraphicsDevice.Textures[0] = null; // unbind so SetData can write to it next frame
            }
        }
        protected void SaveTarget()
        {
            FileStream s = new FileStream("UI.jpg", FileMode.Create);
            webRender.SaveAsJpeg(s, webRender.Width, webRender.Height);
            s.Close();
        }
    }
So, this class enables you to use the Awesomium WebView object: you can create objects that reside in the UI and call back into our C# code, as well as call functions in the UI from our C# code. Again, the setup of these elements is described in the ppt and accompanying solution.
Effectively, I have created an HTML page in the Content project and made sure all its elements are not compiled and are copied if newer; I can then tell my AwesomiumUIManager to go and get that HTML page in the constructor like this
            HUD = new AwesomiumUIManager(this, "Content\\UI\\MyUI.html");
In this sample I am not adding it to the Components list as I don’t want it included in the post processing, so I have to initialize, update and draw it myself in the respective Game methods, something like the sketch below.
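A minimal sketch of that wiring (my own illustration; only the HUD calls come from the sample):

protected override void Initialize()
{
    HUD.Initialize(); // DrawableGameComponent.Initialize calls LoadContent for us
    base.Initialize();
}

protected override void Update(GameTime gameTime)
{
    HUD.Update(gameTime);
    base.Update(gameTime);
}

protected override void Draw(GameTime gameTime)
{
    // Draw the scene and run the post processing first...
    HUD.Draw(gameTime); // ...then the UI goes on top
    base.Draw(gameTime);
}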
In the Game.LoadContent method I set up two script callbacks on the HUD; these can then be called from the HTML to pass data back to my C# code, one for click events and one for the slider events.
            HUD.CreateObject("UIEventmanager", "click", webEventManager);
            HUD.CreateObject("UIEventmanager", "slide", webEventManager);
In my Game.Update I can push data back to the UI; the push method calls the named JavaScript functions, passing the params to them
            HUD.PushData("", "ShowSunPosition", new JSValue(sunPosition.X), new JSValue(sunPosition.Y), new JSValue(sunPosition.Z));
            HUD.PushData("", "SetVars", new JSValue(GodRays.BrightThreshold), new JSValue(GodRays.Decay), new JSValue(GodRays.Density), new JSValue(GodRays.Exposure), new JSValue(GodRays.Weight));
Also in the Game.Update method I have to ensure the WebView is getting the mouse and keyboard events
            // Manage the mouse and keyboard for the UI
            if (thisMouseState.LeftButton == ButtonState.Pressed)
                HUD.LeftButtonDown();
            if (thisMouseState.LeftButton == ButtonState.Released && lastMouseState.LeftButton == ButtonState.Pressed)
                HUD.LeftButtonUp();
            HUD.MouseMoved(thisMouseState.X, thisMouseState.Y);
            HUD.ScrollWheel(thisMouseState.ScrollWheelValue - lastMouseState.ScrollWheelValue);
            if (thisKBState.GetPressedKeys().Length > 0)
                HUD.KeyPressed(thisKBState.GetPressedKeys()[0]);
You may also notice that in the Draw call of the AwesomiumUIManager I am using a shader; this is because the texture returned from the WebView is in BGRA format, so I have to switch the channels around in a shader like this
uniform extern texture sceneMap;
sampler screen = sampler_state
{
    texture = <sceneMap>;  
};
struct PS_INPUT
{
    float2 TexCoord    : TEXCOORD0;
};
float4 Render(PS_INPUT Input) : COLOR0
{
    float4 col = tex2D(screen, Input.TexCoord).bgra;  
    return col;
}
technique PostInvert
{
    pass P0
    {
        PixelShader = compile ps_2_0 Render();
    }
}
So, that’s the end of this mammoth post; I hope you find the content useful and, as ever, let me know if you have any questions or issues. Oh, and before anyone else points it out, I know my web skills ain’t all that :P
The code samples for both this post, and the talk I was to give can be found here.

Crepuscular Rays Post Process

So, I have been meaning to do god rays for ages, and while working on the ST Excalibur post-processing framework in SlimDX I thought I would give them a go.
This clip is of my XNA engine, I still have an issue or two with the render in SlimDX, but I am sure, once I have time, I'll write up an XNA blog post about how I did this and the references I used.
Sorry the clip is a bit messy, and forgive my jerky camera work, hope you like how it looks.

September 2011 Talk Preview

OK, so the next XNA UK meeting will be on the 7th of September, and I will be giving a short talk on using web UIs in XNA. I thought I would put up a quick post with a picture to show the sort of thing you will be able to do.


See you in September :D

XNA UK UG April 2011 Talk

I have finally got the source and ppt up for the talk I gave on the 6th of April.

It was just covering the UI I created for the talk the month before; I have expanded on it slightly so that you can skin it.

You can find the download here.

And now for something completely different….

Now, some of you, especially the XNAUK-UG admins, will have noticed I have been a bit quieter than normal here. Well, that’s because, as well as working on Ghostscape for the WP7 and giving the odd talk or two for the user group meetings, I am now involved with this truly awesome project, ST Excalibur.

I helped out on the project last year, but now I have become a member of the dev team, hoping to help out as best I can. Now, this doesn’t mean that I have stopped working with XNA, quite the contrary; I will be taking some of the samples I have created and hope to use them in the project. I’ll also still be doing my best to attend the monthly user group meetings and give talks, and I intend to keep my blog going and to help you guys out as and when I can.

So, in the mean time, take a look at a few clips the team have already put together, I am sure you will agree it is a truly stunning achievement so far and I hope my contributions only improve an already stunning project.

So, I am going to be busy to say the least, but I am not going away, I will always be here ready to bore you with shaders :D

Bit More Terrain

Well, I say geo clip map, but it's my take on the GPU Gems 2 articles on terrain. Since the last clip I have added some lighting to the shader; I did a GPU terrain render before, so I just took the code from that to generate the normals and lighting from the height map.

Still have issues I want to iron out and a load of performance changes to make, but I think it's getting there.

As ever comments welcome :)

Terrain: Geo Clip Mapping

So I decided to have a play with geo clip mapping. I always wanted a decent terrain render and I think this could be a good start. Not much of a render, I know, but that will come; I need to get the mechanics working first.


As you can see I still have a few issues that need ironing out, but it's a good start I think for a couple of evenings..


OOPS! Forgot to credit the music, track is a bg track Psionic did over at PsionicGames.com, thanks Psi :D

Blender 2.57b & XNA 4.0 Part III

<< Prev | Next >>
OK, so still not much in the way of XNA. I guess I should have just called this set of posts Blender 2.57b, but all this stuff I am using for my own XNA projects, so if you are using XNA and want to use Blender, this shows what can be done, well, if nothing else, what I am doing…
In this post I am going to look at creating normal maps for our cube mesh. We will UV map it as before, and this time I am going to create a six-sided dice. Before we paint our texture onto the dice, we are going to create a high-poly (really detailed) version of our cube and add the number markers to it. We can then use this high-poly mesh to generate a normal map that we can render against our low-poly model to give the impression that there are dips in the surface of the dice.
So, we are going to start off with a UV mapped cube, like we had in the previous post, like this


We have a 3D View window and a UV/Image Editor window in our Blender session. We now need to create a high-poly version of our mesh. To do this, go into Object Mode in the 3D View and hit Shift-D to duplicate the mesh; the mesh’s edges will turn white (this is our copy of the mesh), so hit Enter to keep the copy where it is. Now we want to move this duplicated mesh to another layer. Layers in Blender are handy for separating your scene; here we are going to use one as a holding bay for our high-poly mesh. With the mesh selected in Object Mode, hit the M key; this tells Blender we want to move the selected item to another layer, and the layer selection box will pop up like this


You can see the box at the top left is dark grey; this is the layer we are in now. Click the box next to it (it does not have to be that one, you can pick any layer you like) and the new mesh will be moved to that layer. You can see which layer you are in, whether a layer has anything in it, and whether it holds a selected object, by looking at the layers icons at the bottom of the 3D View window


As can be seen in the image above, we now have two layers with objects in them, and the second layer has our active object in it. You can quickly move from layer to layer by simply clicking the layer icon you want to switch to, or, while the mouse is over the 3D View in Object Mode, by keying the layer's number (not from the number pad though).
So we want to be in the second layer with the newly duplicated mesh selected. We are now going to make our high-poly mesh: go into Edit Mode, as in the previous posts, and on the panel to the left of the 3D View find the Add panel


In here is an option called Subdivide; this will take all the selected elements and, well, divide them in two. So, with all our vertices selected, click the Subdivide button about five or six times. This will give us a fair few faces to play with, and you should have something like this


Now that we have a high-poly version of our cube, we can start to add a bit of detail. To do this I am going to use Sculpt Mode, so select Sculpt Mode in the 3D View


We now need to set up our sculpt brush on the left of the 3D View window; I am going to set my radius to 40, my strength to 1, and the brush to Subtract, like this


I am now going to put the number dimples on the mesh. Remember the opposite sides of a six-sided dice add up to seven; I'll start with four dots on the first face I do, then, using the * key on the number pad, rotate over the top to the other side to mark another three dots. Now, as you do this, check out (and if you like, use) the Symmetry tools on the brush panel; they will save you doing ALL the dots yourself :D Oh, and be aware of the axis you are in when using them :P


So now we have a high-poly mesh with lots of detail in it, how do we create a normal map from it? The first thing we need to do is create a texture we can write the normal data to. In the UV/Image Editor create a new image by clicking the +New option at the bottom


Now name the texture NormalMap, keep the defaults and click OK. Next, in Object Mode, select the high-poly mesh, hit 1 to go to our low-poly layer and, with Shift held down, select our low-poly mesh; we now have both meshes selected. With Shift still held down, select the high-poly mesh's layer from the layer icons; we now have both our meshes and both our layers selected, and your screen should look something like this (I have circled in red what the layer icons should look like at this point)


We now have our meshes and layers selected and a texture set up ready to accept our normal data. To write the high-poly normal data to the texture we will need the Properties window

At the very bottom of this window is the Bake panel, and this is what we will use to generate the normal data in the NormalMap texture. Configure your Bake panel so that it looks like this


So you have the Bake Mode set to Normals and the Selected to Active option ticked. Now hit the Bake button, the one at the top of the Bake panel with the camera on it. After a moment or two you should have a screen that looks like this


Now that we have a normal map, we can save the texture exactly as we did in the last post with our color map.
How can I see what my normal map will look like on the mesh?
In the Properties panel select the texture option as we did in the previous post, name the first texture NormalMap, set its Type to Image or Movie, then in the Preview panel below select Both, and below that, in the Image panel, click the image icon next to +New and choose the NormalMap texture we just generated and saved.


We now need to tell Blender to use this texture as a normal map. In the Image Sampling panel tick the Normal Map option, and under the Mapping panel set the Coordinates to UV so it uses the UV map we have for the mesh. In the Influence panel below, deselect the Color option and select the Normal option under Geometry. Now hit F12 to render the cube in Blender and you will see the cube rendered with the normal map we have just created.


I am now going to texture my cube just as we did in the last post. So here is my dice rendered in Blender (I know, not the best dice ever, but you can see the effect; also, on the face with two dots, I think I have a modeling error :P)


To do this we need to let Blender know to use our color map in the render. Just as we did with the normal map, we have to set the texture up in the Properties window: create a new texture, call it ColorMap, under the Image panel select your color map image, under the Mapping panel set the Coordinates to UV, then hit F12 and see it render :D
Here is the mesh, with its color map and normal map, rendered in my deferred lighting engine.


As ever, comments are welcome :)
<< Prev | Next >>

Blender 2.57b & XNA 4.0 Part II

<< Prev | Next >>
In the previous post I covered how and where to get Blender, briefly described the windows we start with, covered a little bit about the 3D View window, and showed how we can export the cube mesh and add it to our XNA project.
In this post I intend to show you how we can take that cube and create a UV map for it so that we can then apply a texture to the cube in a render.
What is a UV Map?
Well, as I am sure you know, a mesh can be a complicated object, with lots of detail, lots of vertices and faces, yet we are able to wrap a flat texture around it. We can only do that IF we have a UV map. The UV map describes what part of the texture appears on what part of the mesh, and it is stored in the vertex data of the mesh.
How does it work?
OK, the texture that gets applied to the mesh has its own coordinate system, sometimes called texture space. This coordinate system ranges from 0,0 (the top left hand corner of the texture) to 1,1 (the bottom right hand corner of the texture). Each vertex in the mesh has a UV value; this tells the shader what part of the texture belongs to that vertex.
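To put that in XNA terms, here is a quick illustration of UVs sitting in the vertex data (my own example, not from the post); XNA's built-in VertexPositionTexture carries the UV in its TextureCoordinate field.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// One quad face: each corner's second argument is its UV, i.e. where in
// texture space (0,0 top left .. 1,1 bottom right) this vertex samples from.
VertexPositionTexture[] face =
{
    new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)), // texture's top left
    new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)), // top right
    new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)), // bottom left
    new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)), // bottom right
};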
How do we create a UV Map?
Before we create our UV map, let's have a mooch around Blender and discover some of the tools we are going to use. At the bottom of our 3D View

we have a load of buttons and options


As you can see in the above image, we are in Object Mode to start with; this is the default mode, and we can use it to look around our mesh as we did in the last post, but what we want to use is Edit Mode. So, where you can see the option “Object Mode” at the bottom of the 3D View


Click on it and change to “Edit Mode”, alternatively you can hit the Tab key.


Your view should look something like this:


You can see our cube is nice and orange; this means that the whole cube is selected. To toggle select-all, hit the A key: the first time you hit A you deselect the cube, the second time you re-select all of it. But all of what? In Blender you can select by vertex, by edge or by face. At this point you will be in vertex select mode; the selection mode is again in the bottom panel of the 3D View and looks like this


In the image above you can see we are in vertex select mode. While in this mode, hit A to deselect all the vertices. As we look at the cube, move your mouse near the bottom right corner nearest to us and right-click, and you will see that just this vertex is selected; right-click near another vertex and the selection will change to that vertex. Now, just being able to select one vertex or all vertices is not much use; how can I select the four verts that are nearest me? Well, right-click them as you did before, but hold down the Shift key for each click (not really needed for the first) and you will see you can select as many verts as you like. You can also unselect a selected vertex with the Shift key held down. You may end up with a screen a little like this


What if I want to grab a bunch of vertices in one go? Hit A to deselect all the verts, then hit the B key and you will get a large crosshair that tracks your mouse pointer. Move the crosshair above and to the left of the mesh, hold down the left mouse button and drag it below and to the right of the mesh; you will see a box is drawn. Let go of the mouse button and the vertices you just surrounded are now selected


Now, it looks like you have selected all the vertices, but you haven't; rotate round to the back of the mesh (as described in the last post, with the mouse or the Num Pad keys 4 or 6) and you will see the vertices you have missed.


Once you are back round at the front of the mesh, clear all the selected vertices (A), take another look at the bottom of the 3D View screen and find this button next to the selection modes


This button allows us to see through the mesh so we can select elements not visible from this side of it; if you click it you should get a mesh that looks like this


See how you can now see the other edges and vertices behind the mesh; doing a box select (B) as before in this mode will select all the vertices on the mesh. You can also do a circle select with the C key, using the mouse wheel or the + and – keys to scale the circle. With the circle you can click, or click and drag, over vertices to select them; once done, hit the Escape key to come out of circle select.
Try all of the above with the other selection modes, Edge and Face; especially play with Edge, as that’s what we are going to use next.
Creating the UV Map
So how do we go about creating the UV map? Well, we need to look at our mesh, in this case a cube, so pretty simple geometry, and decide how we would cut it up and lay it out flat on the ground, or how we would “unwrap” it. A good description I read compared unwrapping a model of a person to the way a tailor cuts clothes, so look at your own clothes and see where the seams are to help you judge where to cut your mesh; indeed, the method of cutting the mesh is called marking the seams. As I say, we have a cube, a pretty simple shape: we could mark every edge and have six separate squares make up our cube, or we could cut it like a cardboard box and end up with a cross shape, and that’s what I am going to do.
First of all, and I am sure you have noticed this in all the images of the 3D View above, Blender has the Z coordinate as the up axis. We can quickly remedy this by hitting the Num Pad 7 key, which should give you a 3D View something like this


Now, you don’t have to do this; as we know from the last post we can rotate the mesh when we export it, and even if we forget we can do it in the content properties for the mesh in our XNA Content project, but I prefer to model in the same space I will be rendering in, so I always set my orientation this way. Now that we are in the correct perspective, make sure you are in Edit Mode with edge selection active and that the model is not in transparent mode. Select the far right edge as well as the top and bottom edges,


move round to the left hand side of the mesh and select the top and bottom edges,


move to the right hand side and select the top and bottom edges


Now that we have our seam selected, we need to mark it. On the left hand side of the 3D View window there is a panel, and on this panel is a section called UV Mapping


Click the Mark Seam option and you will see that the seam has been marked in red. Alternatively you can hit Ctrl-E (the Ctrl key and E at the same time), which will give you a pop-up menu; select Mark Seam from this menu.
Moving back to the Top view (Num Pad 7), hit A to deselect all and you should have a screen that looks like this


So we have our seam created; we now need to unwrap the mesh to give us the UV map. First we need to create a new window so we can see our model and our UV map at the same time. In the top right hand corner of every Blender window is a set of diagonal lines. Left-click and hold on them and drag the mouse to the left; this will open up a new window next to the 3D View, so you should now have two 3D View windows like this


But we don’t want two 3D View windows, one is enough; we want a UV/Image Editor window. To turn the second window into a UV/Image Editor window, click on the 3D View icon and change it to the UV/Image Editor icon like this


And your second window should now look like this (note, you can zoom and move about the texture space in almost the same way as in the 3D View, mouse wheel to zoom etc.)


Not much in there, yet. So, move your mouse over the first 3D View window (this gives it focus for your keyboard commands), hit A to select all, then on the panel under UV Mapping again click the Unwrap option; you will get a pop-up menu, so select Unwrap. Alternatively you can hit the U key and get the same pop-up menu.


And Kabooooom! You have a UV map visible in the UV/Image Editor.


Before we do anything else, I want to point you at a few tools in the UV/Image Editor. Click the UV option at the bottom of the UV/Image Editor window and you get a pop-up menu; there are three options in there I want to quickly cover: Pack Islands, Average Islands Scale and Minimize Stretch. Now, these options will make little difference here with this simple mesh, but as your meshes get a bit more complicated you will want to use them. Pack Islands ensures that the UV “islands”, that is, each area covered by vertices in the UV map, are fully packed into the map; so if you have an island hanging over the edge of the map, this option will make sure ALL the islands fit within it. Average Islands Scale, probably best applied before Pack Islands, will average out the size of your islands; so if you have lots of small islands and some big ones, this option will even them out to a similar size. Minimize Stretch is a great option: sometimes when UV mapping you can end up with a UV map that causes the texture rendered over the mesh to appear stretched, and this option will minimize that. Once selected, the option will run and run until you click the UV/Image Editor window; the longer you leave it, the better the results.
So, we can now export this mesh as before, but now we can apply a texture to it and it will be wrapped around the cube correctly. So using this texture (my, we are a pretty bunch)


The cube will render like this


Now, that’s all well and good, but how do I know what to draw where on my texture, other than by referring to the UV/Image Editor window in Blender? Well, we can export the UV layout: in the UV/Image Editor window, in the bottom panel, select the UV option to get a pop-up menu, and at the top select the Export UV Layout option


And you should then have a screen that looks like this


Change the name of the file to whatever you like, I have called mine Cube1UVLayout.png. You can change the size of the exported image, the default is 1024x1024; I set the Fill Opacity to 1 so I can see the area clearly and, just in case, selected All UVs. Then click Export UV Layout and you get a png file like this


So, now you know the coverage of your UV map. At this point it might be a good idea to save your Blender project: go up to the Info window

, select File and from the pop up select Save


On the next screen choose a location and name your project and save it.
Can I create a texture in blender that I can then use on my model in XNA?
Yes you can, it is called a UV/Image Editor window after all :D This takes a bit of setting up. First of all we need to create the texture we are going to paint on; to do this, in your UV/Image Editor window click the New image option, which looks like this


Enter a name for your texture, I am going to call mine Cube1ColorMap, select the size you want it to be and the color, then hit OK; I tend to stick to the defaults other than the names. Whenever I create a texture, for some reason Blender zooms me right into it, so use your mouse wheel or the – key to zoom out a bit until you can see the whole texture in the window; you should have something like this


So, now we have to set Blender up so that we can paint on the mesh, have the paint written to our texture map, and see it rendered in the 3D View. The first thing we are going to do is change the mode in the 3D View, so select the mode option and choose Texture Paint


Your windows should look a bit like this now


As you can see, we have a suite of tools on the left hand panel of the 3D View window; you can alter the color, the strength and the brush type, and once you have selected the color, brush or texture you want to paint with, you can paint directly onto the mesh. You can also paint directly onto the texture map, but to do this you have to enable image painting mode: go to the bottom of the UV/Image Editor and select the Image Painting mode icon

and you can now click and drag your mouse over the texture and it will appear on the mesh.
To paint with a texture we first need to set one up to use. First we load the texture in the UV/Image Editor, and then we need to make some changes in the Properties window, the right-most window

 select the texture option from the list of property options after the icon

and the property panel should look like this


We now need to set the texture type: change the Type from None to Image or Movie


Now we can select the texture we want to use. Move down the Properties window and find the Image panel, then select the Open option and choose the image you want; I have chosen a brick texture for my cube and I have named the texture Bricks


Now, on the left panel in the 3D View, find the Texture panel and click the red and white checked image to see the images you have available


Now, for some reason Blender does not load the snapshot of the image at this point, but it is usable, so either click the texture you want to use or type its name. Before I use this texture I am going to ramp up the strength of my brush: simply move up to the brush panel in the 3D View and slide the value up to one, or just click the center and type the value you want. Another tip: under Project Paint in the brush panel, set the Bleed to 0, or you can end up with artifacts as paint bleeds from one face to the next; the default is 2.
So I have done all my painting in blender, how do I get it out?


Well, quite simply, in the UV/Image Editor click the Image option, select Save As Image and, well, save it :)
 

And then you can render it in XNA as you would any image on a mesh


Now keep in mind that the texture is separate from the mesh, and the mesh we exported earlier did not have any textures associated with it; if you export now, this texture will be associated with the mesh and will have to live alongside it in your project, as the content pipeline will pick it up and compile it into the mesh's effect. If you recall, in the last post you could set the Path Mode in the export; this specifies how the texture path is stored in the FBX file.
Keep in mind that I have just lightly skimmed the surface here, and I hope you have as much fun mining the features of Blender as I do :)
Well, that turned out to be a huge post and took me all evening to write, much more than I intended to post today. I hope there is stuff in there that helps you out, and remember, I am no Blender expert, this is just how I am going about it; if you have any hints or tips on the content of this post, then please feel free to comment below :)
<< Prev | Next >>