Initializing the Direct3D API

Direct3D is a rendering API that allows you to write your game without having to worry about which graphics card or driver the user may have. By separating this concern through the Component Object Model (COM) system, you can write your code once and have it run on hardware from NVIDIA and AMD alike.

Now let's take a look at Direct3DBase.cpp, which defines the class that we inherited from earlier. This is where DirectX is set up and prepared for use. There are a few objects that we need to create here to ensure we have everything required to start drawing.

Graphics device

The first object is the graphics device, represented by an ID3D11Device object. The device represents the physical graphics card and the link to a single adapter. It is primarily used to create resources such as textures and shaders, and it owns the device context and swap chain.

Direct3D 11.1 also supports feature levels, which let you target older graphics cards that may only support Direct3D 9.0 or Direct3D 10.0 features. When you create the device, you specify a list of feature levels that your game will support, and DirectX handles all of the checks to ensure you get the highest feature level that both you request and the graphics card supports.

You can find the code that creates the graphics device in the Direct3DBase::CreateDeviceResources() method. Here we allow all possible feature levels, which lets our game run on older and weaker devices. The key thing to remember is that if you want to use any graphics feature introduced after Direct3D 9.0, you will need to either remove the older feature levels from the list or manually check which level you have received and avoid using that feature.

Once we have a list of feature levels, we just need a simple call to the D3D11CreateDevice() function, which provides us with the device and the immediate device context.

Note: nullptr is a C++11 keyword that gives us a strongly typed null pointer. Previously, NULL was just an alias for zero, which prevented the compiler from supporting us with extra error checking.

    D3D11CreateDevice(
        nullptr,
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr,
        D3D11_CREATE_DEVICE_BGRA_SUPPORT,
        featureLevels,
        ARRAYSIZE(featureLevels),
        D3D11_SDK_VERSION,
        &device,
        &m_featureLevel,
        &context
    );

Most of this is straightforward: we request a hardware device with BGRA format layout support (see the Swap chain section for more details on texture formats) and provide the list of feature levels that we can support. The magic of COM and Direct3D provides us with an ID3D11Device and an ID3D11DeviceContext that we can use for rendering.
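The feature-level selection behavior is easy to picture: the API walks your list in order and returns the first level the hardware can satisfy. Here is a minimal standard-C++ sketch of that selection logic; the enum values and function name are illustrative stand-ins, not the real D3D_FEATURE_LEVEL constants from d3dcommon.h.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative stand-ins for D3D_FEATURE_LEVEL values; like the real
// enum, higher feature levels have larger numeric values.
enum FeatureLevel : std::uint16_t {
    Level_9_1  = 0x9100,
    Level_10_0 = 0xA000,
    Level_11_0 = 0xB000,
    Level_11_1 = 0xB100,
};

// Mimics how D3D11CreateDevice picks a level: walk the caller's list in
// order and return the first entry the hardware can satisfy.
bool SelectFeatureLevel(const std::vector<FeatureLevel>& requested,
                        FeatureLevel hardwareMax,
                        FeatureLevel* chosen) {
    for (FeatureLevel level : requested) {
        if (level <= hardwareMax) {
            *chosen = level;
            return true;
        }
    }
    return false;  // the real API fails device creation in this case
}
```

This is why the ordering of your featureLevels array matters: list the levels from highest to lowest, or you may receive a weaker level than the hardware supports.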

Device context

The device context is probably the most useful object you will create. This is where you issue all draw calls and state changes to the graphics hardware. Working together with the graphics device, the device context provides 99 percent of the commands you need to use Direct3D. By default, we get an immediate context along with our graphics device.

One of the main benefits of the context system is the ability to issue commands from worker threads using deferred contexts. The command lists recorded on these deferred contexts can then be executed on the immediate context, so that all commands are submitted from a single thread; this allows multithreading with an API that is not otherwise thread-safe. To create the immediate ID3D11DeviceContext, just pass a pointer to the same D3D11CreateDevice() call that we used to create the device. Deferred contexts are generally considered an advanced technique and are outside the scope of this book; however, if you're looking to take full advantage of modern hardware, you will want to explore this topic so that you can work with the GPU without limiting yourself to a single CPU core.

If you're trying to remember which object to use when you're rendering, remember that the device creates resources throughout the lifetime of the application, while the device context does the work of applying those resources and producing the images that are displayed to the user.
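The immediate/deferred split can be pictured with a toy record-and-replay model in standard C++. The type and method names below are invented for illustration; the real API records work into ID3D11CommandList objects via FinishCommandList() and replays them with ExecuteCommandList().

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy model: a deferred context records commands instead of executing them.
struct CommandList {
    std::vector<std::function<void()>> commands;
};

struct DeferredContext {
    CommandList list;
    // Record work on a worker thread; nothing reaches the GPU yet.
    void Record(std::function<void()> cmd) {
        list.commands.push_back(std::move(cmd));
    }
    // Like FinishCommandList(): close the recording and hand it off.
    CommandList Finish() { return std::move(list); }
};

struct ImmediateContext {
    std::vector<std::string> gpuLog;  // stands in for work submitted to the GPU
    // Like ExecuteCommandList(): replay recorded work on the one
    // thread that actually talks to the hardware.
    void Execute(const CommandList& cl) {
        for (const auto& cmd : cl.commands) cmd();
    }
};
```

The point of the pattern is that recording is cheap and thread-local, while submission stays serialized on the immediate context.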

Swap chain

Working with Direct3D exposes you to a number of asynchronous devices, all operating at different rates, independent of each other. If you drew to the same texture buffer that the monitor uses to display to the screen, the monitor would refresh while you were still drawing and display a half-drawn image. This is commonly known as screen tearing.

To get around this, the concept of a swap chain was created. A swap chain is a series of textures that the monitor can iterate through, giving you time to draw each frame before the monitor needs to display it. Often this is accomplished with just two texture buffers, known as the front buffer and the back buffer. The front buffer is what the monitor displays while you draw to the back buffer. When you're finished rendering, the buffers are swapped so that the monitor can display the new frame and Direct3D can begin drawing the next one; this is known as double buffering.

Sometimes two buffers are not enough; for example, when Direct3D is ready to swap the buffers but the monitor is still displaying the previous frame. In that case the API must either wait for the monitor to finish displaying the previous frame before it can swap the buffers and let your game continue, or discard the content that was just rendered, allowing the game to continue but wasting a frame and causing the front buffer to repeat if the monitor refreshes while the game is drawing. This is where a third buffer can come in handy, allowing the game to keep working and render ahead.

The swap chain is also directly tied to the resolution, and if the display area is resized, the swap chain needs to be recreated. You may think that all games are full screen now and should never need to be resized! Remember, though, that the snap functionality in Windows 8 resizes the screen, requiring your game to adapt to the available space.
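The buffer rotation described in this section can be sketched as a ring of buffer indices in standard C++. This is a simplified model of a flip-style swap chain (the class and method names are ours, and vsync timing is ignored): the front buffer is whatever the monitor scans out, and presenting rotates the ring so the freshly drawn back buffer becomes the new front buffer.

```cpp
#include <cassert>
#include <cstddef>

// Minimal model of a flip-style swap chain with N buffers in a ring.
class SwapChainModel {
public:
    explicit SwapChainModel(std::size_t bufferCount)
        : count_(bufferCount), front_(0) {}

    // The buffer the monitor is currently displaying.
    std::size_t FrontBuffer() const { return front_; }

    // The buffer the game is currently drawing into.
    std::size_t BackBuffer() const { return (front_ + 1) % count_; }

    // Present(): rotate the ring so the drawn buffer becomes visible.
    void Present() { front_ = (front_ + 1) % count_; }

private:
    std::size_t count_;
    std::size_t front_;
};
```

With two buffers the roles simply ping-pong; with three, the game gains a spare buffer to render into while a finished frame waits for the monitor.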
Alongside the creation code, we need a way to respond to window resize events so that we can resize the swap chain and gracefully handle the changes. All of this happens inside Direct3DBase::CreateWindowSizeDependentResources. Here we check whether a resize has occurred, along with some orientation-handling checks so that the game can handle rotation if enabled. We want to avoid work that we don't need to do, and one of the benefits of Direct3D 11.1 is the ability to simply resize the existing buffers in the swap chain.

However, the really important code here executes when we do not already have a swap chain. Many parts of Direct3D rely on filling out a description structure that contains the information required to create an object, and then passing that structure to a Create() method that handles the rest. In this case, we use a DXGI_SWAP_CHAIN_DESC1 structure to describe the swap chain. The following code snippet shows what our structure will look like:

    swapChainDesc.Width = static_cast<UINT>(m_renderTargetSize.Width);
    swapChainDesc.Height = static_cast<UINT>(m_renderTargetSize.Height);
    swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    swapChainDesc.Stereo = false;
    swapChainDesc.SampleDesc.Count = 1;
    swapChainDesc.SampleDesc.Quality = 0;
    swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swapChainDesc.BufferCount = 2;
    swapChainDesc.Scaling = DXGI_SCALING_NONE;
    swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;

There are a lot of new concepts here, so let's work through the options one by one. The Width and Height properties are self-explanatory; these come directly from the CoreWindow instance so that we can render at native resolution. If you wanted to force a different resolution, this is where you would specify it.

The Format defines the layout of the pixels in the texture. Textures are represented as an array of colors, which can be packed in many different ways.
The most common way is to lay out the color channels in a B8G8R8A8 format. This means that each pixel has a single byte for each channel: blue, green, red, and alpha, in that order. The UNORM suffix tells the system to store each channel as an unsigned normalized integer. Put together, this forms DXGI_FORMAT_B8G8R8A8_UNORM. The R8G8B8A8 layout is also common; both are well supported, and you can choose either.

The next flag, Stereo, tells the API whether you want to take advantage of the stereoscopic rendering support in Direct3D 11.1. This is an advanced topic that we won't cover, so leave it as false for now.

The SampleDesc substructure describes our multisampling settings. Multisampling, commonly known as MSAA (multisample antialiasing), is a technique used to reduce the sharp, jagged edges that appear on polygons when a line is mapped to pixels and crosses through the middle of a pixel. MSAA resolves this by taking multiple samples within each pixel and filtering those values to produce an average that represents detail smaller than a pixel. With antialiasing you get nice smooth edges, at the cost of extra rendering and filtering. For our purposes, we specify a sample count of 1 and a quality of 0, which tells the API to disable MSAA.

The BufferUsage enumeration tells the API how we plan to use the swap chain, which lets it make performance optimizations. This option is most relevant when creating normal textures, and should be left alone for now.

The Scaling parameter defines how the back buffer will be scaled if the texture resolution does not match the resolution that the operating system is providing to the monitor. You only have two options here: DXGI_SCALING_STRETCH and DXGI_SCALING_NONE.

The SwapEffect describes what happens when a swap between the front buffer and the back buffer(s) occurs.
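The B8G8R8A8_UNORM layout can be demonstrated by packing one pixel into four bytes in standard C++. The helper name below is ours for illustration; the byte order it produces is the in-memory channel order the format name describes.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Pack one pixel in B8G8R8A8_UNORM channel order: byte 0 = blue,
// byte 1 = green, byte 2 = red, byte 3 = alpha. Each channel is an
// unsigned byte interpreted as a normalized value in [0, 1].
std::array<std::uint8_t, 4> PackBGRA8(float r, float g, float b, float a) {
    auto toUnorm = [](float v) {
        if (v < 0.0f) v = 0.0f;
        if (v > 1.0f) v = 1.0f;
        // Scale [0, 1] to [0, 255] with rounding.
        return static_cast<std::uint8_t>(v * 255.0f + 0.5f);
    };
    return { toUnorm(b), toUnorm(g), toUnorm(r), toUnorm(a) };
}
```

An R8G8B8A8 format would store the same channels with the red and blue bytes swapped, which is why code that touches raw texture memory must know which format the texture was created with.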
We're building a Windows Store application, so we only have one option here: DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL. If we were building a desktop application, we would have a larger selection, and our final choice would depend on our performance and hardware requirements.

Now you may be wondering what DXGI is, and why we have been using it during swap chain creation. Beginning with Windows Vista and Direct3D 10.0, the DirectX Graphics Infrastructure (DXGI) acts as an intermediary between the Direct3D API and the graphics driver. It manages the adapters and common graphics resources, and works with the Desktop Window Manager, which composites multiple Direct3D applications together so that multiple windows can share the same screen. DXGI manages the screen, and therefore manages the swap chain as well. That is why we have an ID3D11Device and an IDXGISwapChain object.

Once we're done, we need to use this structure to create the swap chain. As you may remember, the graphics device creates resources, and that includes the swap chain. The swap chain, however, is a DXGI resource rather than a Direct3D resource, so we first need to extract the DXGI device from the Direct3D device before we can continue. Thankfully, the Direct3D device is layered on top of the DXGI device, so we just need to convert the ID3D11Device1 to an IDXGIDevice1 with the following piece of code:

    ComPtr<IDXGIDevice1> dxgiDevice;
    DX::ThrowIfFailed(m_d3dDevice.As(&dxgiDevice));

Then we can get the adapter that the device is linked to, and the factory that serves the adapter, with the following code snippet:

    ComPtr<IDXGIAdapter> dxgiAdapter;
    DX::ThrowIfFailed(dxgiDevice->GetAdapter(&dxgiAdapter));

    ComPtr<IDXGIFactory2> dxgiFactory;
    DX::ThrowIfFailed(dxgiAdapter->GetParent(
        __uuidof(IDXGIFactory2),
        &dxgiFactory));

Using the IDXGIFactory2, we can create the IDXGISwapChain that is tied to the adapter.
Note: The 1 and 2 at the end of IDXGIDevice1 and IDXGIFactory2 differentiate between the versions of Direct3D and DXGI that exist. Direct3D 11.1 is an add-on to Direct3D 11.0, so we need a way to distinguish the versions. The same goes for DXGI, which has gone through multiple revisions since Vista.

    dxgiFactory->CreateSwapChainForCoreWindow(
        m_d3dDevice.Get(),
        reinterpret_cast<IUnknown*>(window),
        &swapChainDesc,
        nullptr,
        &m_swapChain
    );

When we create the swap chain, we need to use a method specific to Windows Store applications, which takes as a parameter the CoreWindow instance that we received when we created the application. This is where you would pass an HWND window handle if you were using the old Win32 API. These handles let Direct3D connect the resource to the correct window and ensure that it is positioned properly when composited with the other windows on the screen.

Now we have a swap chain, almost ready for rendering. While we still have the DXGI device, we can also let it know that we want to enable a power-saving mode that ensures only one frame is queued up for display at a time:

    dxgiDevice->SetMaximumFrameLatency(1);

This is especially important in Windows Store applications, as your game may be running on a mobile device, and your players wouldn't want to drain their battery rendering frames that they do not need.
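The effect of the maximum frame latency setting can be modeled as a bounded queue of pending frames. This is a simplified standard-C++ sketch with invented names; the real mechanism blocks inside Present when the queue is full rather than returning a failure.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Toy model of DXGI's frame queue: the CPU may only run ahead of the
// display by maxLatency queued frames. Further presents are refused
// (the real API blocks instead) until the display consumes a frame.
class FrameQueueModel {
public:
    explicit FrameQueueModel(std::size_t maxLatency) : max_(maxLatency) {}

    // The game finished a frame and wants to queue it for display.
    bool TryPresent(int frameId) {
        if (pending_.size() >= max_) return false;  // CPU must wait
        pending_.push_back(frameId);
        return true;
    }

    // The monitor scanned out the oldest queued frame.
    bool DisplayConsumesFrame() {
        if (pending_.empty()) return false;
        pending_.pop_front();
        return true;
    }

    std::size_t Pending() const { return pending_.size(); }

private:
    std::size_t max_;
    std::deque<int> pending_;
};
```

With a latency of 1, the game can never render more than one frame ahead of what the player sees, which reduces both input lag and wasted battery.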