Showing results for tags 'foveated rendering'.

Found 5 results

  1. In this post, I am going to show you how to integrate Variable Rate Shading (VRS) with your Unreal Engine project in order to enable Foveated Rendering using the HTC Vive Pro Eye headset. This article focuses on Unreal Engine; if you are using Unity instead, you can use the Vive Foveated Rendering plugin from the Unity Asset Store or its GitHub page. It is assumed that you are somewhat familiar with Unreal Engine, C++ and Blueprints.

Requirements:
• HTC Vive Pro Eye headset
• VR Ready Quadro GPU (desktop: Quadro 4000 and higher; notebook: Quadro 3000 and higher) or VR Ready GeForce Turing-based GPU (GeForce™ RTX 20 series or GTX 1660 / GTX 1660 Ti)
• NVIDIA driver version 430.86 or later
• OS: Microsoft Windows

This post is divided into 4 main parts:
• Part 1 – Foveated Rendering & NVIDIA Variable Rate Shading (VRS)
• Part 2 – Setting up the Unreal Engine source code from the Vive GitHub repository
• Part 3 – Setting up the SRanipal plugin
• Part 4 – Combining everything for Foveated Rendering results

So, let's get started!

Part 1 – Foveated Rendering & NVIDIA Variable Rate Shading (VRS)

Allow me to begin by stressing the importance of Foveated Rendering and how beneficial it can be for VR applications. Creative directors and technical artists keep raising the bar of visual fidelity in immersive VR applications, but engineering teams often hit a bottleneck: achieving the required minimum of 90 FPS as new headsets ship with ever higher display resolutions. From time to time, new smart rendering techniques are invented that let us optimize this process without having to drop quality. This is where foveated rendering comes into play.

If you're not already familiar with the term, Foveated Rendering is a rendering technique that can be used together with VR headsets that support eye tracking, such as the HTC Vive Pro Eye, to reduce the rendering workload, allowing developers to improve the performance of their VR applications or push the visual quality of their content. The term "foveated" derives from the fovea, the part of the retina that gives us sharp central vision (foveal vision) and lets us focus on the visual details of what is important in a narrow region right in front of us. The foveal and parafoveal areas cover a field of vision of only about 5 degrees. Anything outside the fovea region belongs to the peripheral vision, which, although still useful, mainly detects fast-moving content changes and color, hence why it feels comparatively less detailed and blurry. Foveated Rendering exploits this anatomy of the human eye: computers can drop the quality of graphics in the peripheral vision by lowering the resolution there, while focusing the available processing power on the smaller foveated area of the image.

[Video: VRS_01_v008_white_logo.mp4]

Dynamic (eye-tracked) Foveated Rendering should not be confused with a variant called Fixed Foveated Rendering. The latter assumes a forward viewing direction and limits the rendering cost of display areas that will not be clearly visible in the headset, mainly around the lens distortion region. Although the concept of Foveated Rendering has been around for a while, the new NVIDIA Turing™ GPU architecture enables it in practice through a rendering technique called Variable Rate Shading (VRS).
This technique offers the ability to vary the shading rate within a frame and can be combined with eye-tracking data to perform Foveated Rendering in VR experiences, hence why you need a GPU based on the Turing™ architecture. Graphics cards have a component called the pixel shader (or fragment shader) that determines the visual characteristics of a single pixel in a virtual environment, such as its color, brightness and contrast. Instead of executing the pixel shader once per pixel, with VRS the GPU can not only apply a single pixel shader operation to a whole 16 x 16-pixel region, but also dynamically change the shading rate during actual pixel shading in one of 2 ways:

• Coarse Shading: execution of the pixel shader once per multiple raster pixels, copying the same shade to those pixels. Configurations: 1×1, 1×2, 2×1, 2×2, 2×4, 4×2, 4×4.
• Supersampling: execution of the pixel shader more than once per single raster pixel. Configurations: 2x, 4x, 8x.

For example, with 2×2 coarse shading, a single shade evaluated by one pixel shader invocation is replicated across a region of 2×2 pixels. So instead of sampling 4 times, once for each pixel in the 2×2 area, we sample once and broadcast the result to all 4 pixels. Supersampling, on the other hand, enables increased sharpness and better anti-aliasing on high-frequency textures beyond what simple MSAA can achieve: the pixel shader runs at every sample location within a pixel instead of just once per pixel. With VRS, a VR application doesn't need to supersample the entire render target; a specific region can be selected instead, minimizing the performance impact. A shading rate such as 4x supersampling indicates that the pixel shader will be invoked up to 4 times, evaluating up to 4 unique shades for the samples within 1 pixel. This is really useful and provides a smoother immersive VR experience, especially when users need, for example, to read text in VR. This fine level of control lets developers combine varied shading rates with gaze-tracking capabilities for foveated rendering.

Besides Foveated Rendering, NVIDIA VRS supports the following 2 techniques:

• Content Adaptive Shading, which takes into account factors like spatial and temporal color coherence and object variation every frame to lower the shading rate in successive frames, e.g. for areas where detail remains unchanged from frame to frame, such as skyboxes and walls.
• Lens-Optimised VRS, which can be used to render efficiently to a surface that closely approximates the lens-corrected image, eliminating the need to render pixels that would otherwise be discarded before output to the headset, exploiting the fact that lenses are designed to be sharper at the center and blurrier around the periphery.

It is worth mentioning that all 3 techniques can be used in unison for ultimate customization, and there is no limit on how much supersampling and coarse shading developers can apply per frame.
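To make the cost model behind coarse shading concrete, here is a minimal back-of-the-envelope sketch counting pixel shader invocations at different rates; the eye-buffer resolution is purely illustrative, not a Vive Pro Eye specification:

```cpp
// Illustrative only: pixel-shader invocation counts for one eye buffer at
// different coarse shading rates. A 2x2 rate broadcasts one shade to 4
// pixels, a 4x4 rate to 16 pixels.
#include <cstdio>

int main()
{
    const long long Width = 1440, Height = 1600;  // hypothetical eye buffer
    const long long Rate1x1 = Width * Height;     // 1x1: one invocation per pixel
    const long long Rate2x2 = Rate1x1 / (2 * 2);  // 2x2: one invocation per 4 pixels
    const long long Rate4x4 = Rate1x1 / (4 * 4);  // 4x4: one invocation per 16 pixels

    std::printf("1x1: %lld  2x2: %lld  4x4: %lld invocations\n",
                Rate1x1, Rate2x2, Rate4x4);
    return 0;
}
```

In a foveated frame, only the periphery runs at the coarse rates, so the real saving depends on how much of the render target each region covers.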
Part 2 – Setting up the Unreal Engine Source Code from the Vive GitHub Repository

Usually, engineering teams that use Unreal Engine for VR development have to make a choice depending on the project they will work on and the flexibility they need. They can use:

• A binary version of Unreal Engine, provided by Epic Games via the Epic Games Launcher. In that case the engine can't be modified, and only binary plugins verified to work with that version of Unreal Engine can be used. This is usually the case if you want to quickly build a demo using technologies/features that are already supported out of the box.
• The source code of a specific version or branch of Unreal Engine. The engineering team can introduce entirely new features into their custom copy of the engine source to reach their goals for a project. For example, if Unreal doesn't support a feature, the engineers can modify the engine source code to introduce it. Or, if they find a bug in Unreal Engine that has not been fixed yet, they can fix it themselves without waiting for Epic Games.

Most engineering teams that work on projects with new technologies prefer the second option, as it gives them more control over their pipeline. Currently, if you want to integrate VRS and Foveated Rendering in your Unreal Engine project, you need to download and use a custom, modified version of the Unreal Engine source code from the HTC Vive GitHub repository. HTC Vive currently supports 3 different versions of Unreal Engine for Foveated Rendering: there is a branch for UE 4.21.0, another for UE 4.23.1 and one for UE 4.24.2. Don't forget that in order to access the Unreal Engine source code repository, you must link your Epic Games account to your GitHub account and get authorized by Epic Games.

This modified version of Unreal Engine contains:
• Changes to the Unreal Engine rendering pipeline source code to support VRS for certain render passes.
• The VRS plugin, which comes with the NVIDIA API libraries.

We realize that you may want to integrate Foveated Rendering into a project that uses a binary/installed version of Unreal Engine or a different custom version of Unreal Engine. HTC Vive is working closely with NVIDIA and Epic Games to make this process easier in the future by integrating the required engine changes into the official Unreal Engine branch, so that only the VRS plugin will be needed, without any further modifications. For now, if you or your team already use a custom Unreal Engine source build of version 4.23 or 4.21 and you don't want to switch to a different engine, you can copy the changes we made in the Unreal Engine rendering pipeline source code and, of course, add the extra VRSPlugin. It's only a few lines of code!

More specifically, for Unreal version 4.23 we had to make changes to the following files:
• Engine/Source/Runtime/Renderer/Private/ScenePrivate.h
• Engine/Source/Runtime/Renderer/Private/PostProcess/PostProcessing.cpp
• Engine/Source/Runtime/Engine/Public/SceneViewExtension.h

For UE 4.24, on the other hand, we modified the following files, as we had to call the VRS functions from the RHI thread rather than the rendering thread:
• Engine/Source/Runtime/Renderer/Private/ScreenSpaceRayTracing.cpp
• Engine/Source/Runtime/Renderer/Private/ScenePrivate.h
• Engine/Source/Runtime/Renderer/Private/PostProcess/PostProcessUpscale.cpp
• Engine/Source/Runtime/Renderer/Private/PostProcess/PostProcessMaterial.cpp
• Engine/Source/Runtime/Renderer/Private/DeferredShadingRenderer.cpp
• Engine/Source/Runtime/Engine/Public/SceneViewExtension.h

You can choose to download the source code branch as a .zip file or use GitHub Desktop to clone the repository locally. We recommend cloning with Git, since that way you can quickly fetch the latest updates and bug fixes and update your local copy every time the HTC Vive engineers push a change.
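Before moving on to building the engine: to give a feel for the kind of hook those rendering-pipeline changes revolve around (SceneViewExtension.h appears in both file lists above), here is a rough, hypothetical sketch of a UE 4.23-style ISceneViewExtension. ApplyFoveatedShadingRate is a placeholder name, not the actual plugin code, and the real HTC Vive changes differ in detail:

```cpp
// Hypothetical sketch of the view-extension shape a VRS plugin can hook into.
// See the files listed above for the actual modifications.
#include "SceneViewExtension.h"

class FVRSViewExtension : public ISceneViewExtension
{
public:
    virtual void SetupViewFamily(FSceneViewFamily& InViewFamily) override {}
    virtual void SetupView(FSceneViewFamily& InViewFamily, FSceneView& InView) override {}
    virtual void BeginRenderViewFamily(FSceneViewFamily& InViewFamily) override {}
    virtual void PreRenderViewFamily_RenderThread(FRHICommandListImmediate& RHICmdList, FSceneViewFamily& InViewFamily) override {}

    virtual void PreRenderView_RenderThread(FRHICommandListImmediate& RHICmdList, FSceneView& InView) override
    {
        // Placeholder: bind the per-view shading-rate pattern here so the
        // render passes that follow shade each region at its configured rate.
        // ApplyFoveatedShadingRate(RHICmdList, InView);
    }
};
```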
Here is the list of steps required, after you download the custom Unreal Engine source code, to get the engine up and running:

1. If you downloaded a zip archive of the code, extract the archive contents to the directory where you would like to build the engine, e.g. C:\Unreal\4.23-vive. Make sure you use a short path, otherwise the next step may fail with errors about files exceeding the Windows path length limit. Alternatively, you can map your install directory as a Windows drive to reduce the path length.
2. Navigate to the engine directory and run Setup.bat. You may need to Run as Administrator. The Setup.bat script downloads the required third-party dependencies for Unreal Engine and your system. This may take some time; check the window regularly for any warnings or issues.
3. Next, run GenerateProjectFiles.bat to generate the Visual Studio solution. By default, GenerateProjectFiles.bat creates project files for Visual Studio 2017.
4. Launch UE4.sln to open the project solution in Visual Studio.
5. In the menu bar, select Build > Configuration Manager. Verify that Active solution configuration is set to Development Editor and that Active solution platform is set to Win64.
6. In the Solution Explorer, right-click UE4 under Engine and select Set as Startup Project. This makes sure we build the engine itself and not one of the other programs that come with Unreal.
7. Right-click UE4 under Engine again and, in the context menu, select Debug > Start New Instance to launch the engine.
8. Once Unreal Engine is launched, you can select the project you would like to open or specify a new project. If you are creating a new project, don't forget to specify a project name. **There is currently an issue where, if you select a Blank Blueprint or Basic Code C++ project, everything renders black unless the view mode is set to "Unlit", so select any of the other template projects instead.

If you already have an Unreal project built with a different version of Unreal Engine and you just want to swap engine versions, you can open the project using the Unreal Engine version that supports Foveated Rendering: find the Unreal Engine project file, right-click it and switch the Unreal Engine version used for that project to the newly installed engine (browse to the installed path). That's all; you should now be able to use the custom Unreal Engine provided by HTC Vive with your project.

Part 3 – Setting up the SRanipal Plugin

In order to optimize the rendering quality to match the user's gaze, we need to feed eye-tracking gaze data to VRS. To get access to the data provided by the eye-tracking capabilities of the HTC Vive Pro Eye, you need to use the Vive SRanipal SDK. The SRanipal SDK allows developers to track and integrate users' eye and lip movements, empowering them to read intention, model facial expressions, create new interactions and experiences, and improve existing ones by exploiting the precise tracking of eye movement, attention and focus. The SRanipal SDK includes:
• The required runtime (SR_Runtime), which runs in the notification tray and shows the current eye-tracking status for the HTC Vive Pro Eye.
• Plugins for Unreal/Unity, along with sample code for native C development.

Follow the steps below to integrate the SRanipal plugin in your project:
1. Visit the Vive SRanipal SDK page on the Vive Developer website, read the guidelines and start the download procedure. If you don't have an HTC developer account, you will be asked to register and create one to be able to download the SRanipal SDK.
2. Download the Vive_SRanipalInstaller, which contains the required runtime, and follow the instructions to install SR_Runtime.
3. Once installed, ensure that your Vive Pro Eye headset is connected to your PC and launch SR_Runtime.exe as Administrator to start the runtime. Wait until the SRanipal status icon, which looks like a robot, appears in the Windows notification tray. The icon's eyes turn orange when a headset with eye-tracking capabilities is detected but idle, and green when a program is retrieving eye data from it.
4. Start SteamVR (if it is not running already).
5. Put on your HMD.
6. Go through the eye-calibration process. This is required for each new user, and the calibration results are saved to the PC. To start eye calibration, press your VIVE controller's system button; the calibration program will show an overlay window in your HMD. Select the Vive Pro button at the bottom grid of the SteamVR menu. Note that for the highest level of precision, it is recommended to recalibrate for each user, as eye positions and pupillary distances differ between individuals.
7. Read and accept the user agreement before turning on the Vive eye-tracking capabilities.
8. Press Calibrate to start. Calibration begins by adjusting your HMD position. *If you have any issues with the calibration process and get an "Initialization Failed" error message, please follow the steps described in this troubleshooting guide.
9. Next, adjust your IPD value, as shown below.
10. After that, you will be guided through gaze calibration. Look at the blue dot shown sequentially at the center, right, left, top and bottom of the panel until calibration succeeds, moving only your eyes and not your head.
11. Calibration is done; you are ready to develop eye-aware applications. You can now try a mini-demo that lights up dots with your eye gaze, or press the Vive controller's system button to quit eye calibration.
12. Download the Vive SRanipal SDK .zip file. It contains the required plugins for each platform, as well as documents with detailed instructions.
13. Install the SRanipal Unreal Engine plugin in your Unreal project. To do that, copy the folder located at SRanipal SDK\v1.1.0.1\SRanipal_SDK_1.1.0.1\03_Unreal\Vive-SRanipal-Unreal-Plugin\SRanipal into the Plugins folder of your project. If your project structure doesn't have a Plugins folder, create a new one and name it "Plugins". Then generate the Visual Studio project files for your project again. If you get a compilation issue about the Dartboard.h file not being found, please make the following change in the SRanipal.Build.cs file:
14. Open the Unreal project, go to Edit -> Plugins and you will see the SRanipal plugin under the project plugins. Make sure it is enabled. Now you can use the SRanipal SDK features for eye tracking with your HTC Vive Pro Eye.

Part 4 – Combining everything for Foveated Rendering results

At this point, the last thing we need to do is tie everything together. We will combine the SRanipal plugin and the VRS plugin to enable Foveated Rendering by feeding eye gaze data to the VRS algorithm each frame.
In Part 2 above, we used the custom Unreal Engine source code that comes with the VRS plugin. You can locate the VRS plugin folder at UnrealEngine\Engine\Plugins\Runtime\ViveSoftware\VRSPlugin. You will notice that it contains the required NVIDIA VRS APIs, as well as code to easily modify VRS settings in your application. Also, if you open your Unreal Engine project and go to Edit -> Plugins, you will find the Variable Rate Shading plugin under the Rendering category of the Unreal Engine plugins. Again, make sure it is enabled.

[UPDATE 9 Feb 2021]: If you are using the Unreal Engine branches for UE 4.24.2 or 4.23.1, you don't need to write any extra code and can ignore the following steps. You won't need to add GetEyeGazeDirectionsNormalized() or call UpdateStereoGazeDataToFoveatedRendering(); just install the SRanipal runtime, import the SRanipal plugin into your game project and enable Eye Tracking in VRSSettings. The VRSPlugin will then get gaze data from the engine automatically and you only have to configure the settings.

In order to feed gaze data to VRS and enable Foveated Rendering, developers need to go through the following steps:

1. Create a new function that returns the normalized eye gaze directions using SRanipal's GetVerboseData() method, and expose it to Blueprints. One way to do this is to add a new function to the list of methods in the header file SRanipal_FunctionLibrary_Eye.h, located in Plugins\SRanipal\Source\SRanipal\Public\Eye. Of course, you can always create a new static method anywhere else in your project, as long as it uses the SRanipal module dependency. Go ahead and type in the following declaration for the new function GetEyeGazeDirectionsNormalized, which returns the normalized eye gaze direction for each eye:

```cpp
// Returns the normalized gaze direction of each eye; the return value
// indicates whether valid data was retrieved.
UFUNCTION(BlueprintCallable, Category = "SRanipal|Eye")
static bool GetEyeGazeDirectionsNormalized(FVector& LeftEyeGazeDirectionNormalized, FVector& RightEyeGazeDirectionNormalized);
```

For the implementation of the function, type the following code into the source file SRanipal_FunctionLibrary_Eye.cpp:

```cpp
bool USRanipal_FunctionLibrary_Eye::GetEyeGazeDirectionsNormalized(FVector& LeftEyeGazeDirectionNormalized, FVector& RightEyeGazeDirectionNormalized)
{
    // Zero the outputs in case the query fails.
    LeftEyeGazeDirectionNormalized = FVector();
    RightEyeGazeDirectionNormalized = FVector();

    // Ask the SRanipal eye module for the full per-eye data set.
    ViveSR::anipal::Eye::VerboseData VerboseDataOut;
    bool Result = SRanipal_Eye::Instance()->GetVerboseData(VerboseDataOut);

    LeftEyeGazeDirectionNormalized = VerboseDataOut.left.gaze_direction_normalized;
    RightEyeGazeDirectionNormalized = VerboseDataOut.right.gaze_direction_normalized;
    return Result;
}
```

2. Feed the normalized eye directions to the VRS plugin's UpdateStereoGazeDataToFoveatedRendering function. This way, the center of the foveated region is recalculated automatically by the VRS plugin every frame. You need to connect the function GetEyeGazeDirectionsNormalized from the previous step to the function UpdateStereoGazeDataToFoveatedRendering from the VRS plugin during Event Tick. You can do this by creating a new Blueprint Actor and adding the corresponding Blueprint nodes in its Event Graph. Don't forget to place the actor in your scene so that it can tick. Of course, you don't necessarily need an Actor for this; you could use the same Blueprint nodes in your GameMode or your GameInstance. A C++ sketch of the same wiring is shown below.
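If you prefer C++ over Blueprints, the equivalent wiring could look roughly like this. This is an assumption-laden sketch: AFoveationDriver is a hypothetical helper actor, UVRSFunctionLibrary is a placeholder owning class, and the exact signature of UpdateStereoGazeDataToFoveatedRendering should be checked against the VRS plugin source.

```cpp
// Hypothetical sketch: an actor that forwards SRanipal gaze data to the VRS
// plugin every frame. Place one instance in the scene so that it ticks.
#include "GameFramework/Actor.h"
#include "Eye/SRanipal_FunctionLibrary_Eye.h"
#include "FoveationDriver.generated.h"

UCLASS()
class AFoveationDriver : public AActor
{
    GENERATED_BODY()

public:
    AFoveationDriver()
    {
        PrimaryActorTick.bCanEverTick = true;
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        FVector LeftGaze, RightGaze;
        if (USRanipal_FunctionLibrary_Eye::GetEyeGazeDirectionsNormalized(LeftGaze, RightGaze))
        {
            // Hand this frame's gaze directions to the VRS plugin so it can
            // recenter the foveated region. (Placeholder class name; the
            // plugin exposes this as a Blueprint-callable function.)
            UVRSFunctionLibrary::UpdateStereoGazeDataToFoveatedRendering(LeftGaze, RightGaze);
        }
    }
};
```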
3. Finally, drag and drop an Actor of type SRanipal_Eye_Framework into your scene and enable Eye Tracking (tick box) on it. Only then can you get valid eye gaze directions.

4. If the VRS plugin was successfully enabled, VRS Settings will appear under the Project Settings -> Plugins category. Below is a list describing what each setting is used for; for more information, you should read this detailed article from NVIDIA.

• Enable VRS: whether to enable Variable Rate Shading or not.
• Enable Eye Tracking: if an eye-tracking device is properly set up and its eye-tracker plugin is enabled, checking this will automatically fetch eye-tracking data and move the center of the foveated region accordingly.
• ScreenPercentage: in Unreal Engine, the VRSPlugin requires r.ScreenPercentage to be larger than 100 in order to trigger the UpScale render pass that VRS relies on, which is why it is set to 101 by default.
• Foveation Pattern Preset: adjusts the region size of foveated rendering. There are several predefined region sizes; the smaller, the more aggressive. If you pick one of the predefined options, the Foveation Pattern Preset Detail section will auto-fill the values for you. If you pick the Custom option, you can tune the internal parameters to perform foveated rendering exactly the way you want. More specifically, the region size (horizontal and vertical radius) of each of the available regions (Innermost, Middle, Peripheral) can be customized with a value from 0 to 10. When the radius is set to 1.0 for both width and height, the region spans the width and height of the window, forming a circle that fits in the render target (this also depends on the aspect ratio of the render target). If you pick a value greater than 1, the region extends outside the render target area. The available options are: Narrow, Balanced, Wide, Custom.
• Shading Rate Preset: adjusts the shading rate of each region automatically according to a predefined rate, without requiring any numerical parameters. The available options are Highest Performance / High Performance / Balanced / High Quality / Highest Quality. There is also a "Custom" option that lets you define the shading rate of each region manually:
  – Culled: nothing is rendered (black).
  – Normal Shading: each pixel is sampled once.
  – Supersampling (2x, 4x, 8x, 16x): each pixel is sampled more than once, which results in less aliasing than normal shading but takes more computing power.
  – Coarse Shading (sample once per 2×1, 1×2, 2×2, 4×2, 2×4 or 4×4 pixels): a group of pixels is sampled only once, which yields a performance gain but renders fewer details.

All parameters in VRSSettings in Project Settings can be changed dynamically during a VR session or a PIE session. This means developers can also change the presets dynamically according to different game-dependent factors. Please note that the VRS configuration modified in Project Settings is not carried over to the packaged game. For that reason, the VRS plugin provides a set of Blueprint functions to alter the settings at runtime; developers should call these functions during the initialization phase. The boolean return value indicates whether the setter function executed successfully. In general, the VRSPlugin falls back to Fixed Foveated Rendering if there is any problem with eye tracking (e.g. invalid gaze data, device not connected, tracker module not loaded/implemented, etc.).
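As one small example of runtime control, the ScreenPercentage requirement described above can also be satisfied from C++ through Unreal's standard console variable API. This is a minimal sketch of that one knob, not a substitute for the plugin's own Blueprint setters:

```cpp
// Minimal sketch: keep r.ScreenPercentage above 100 so the UpScale pass runs
// and VRS can take effect (mirrors the plugin's default of 101).
#include "HAL/IConsoleManager.h"

void EnsureUpscalePassForVRS()
{
    if (IConsoleVariable* ScreenPercentage =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.ScreenPercentage")))
    {
        ScreenPercentage->Set(101.0f);  // must stay > 100 or VRS has no effect
    }
}
```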
You should now be able to apply foveated rendering to your VR application and adjust the shading rate and region size for either better performance or better quality, according to your project requirements. Currently there isn't a way to visualize each region individually, e.g. with a different color, but if you want to verify that Foveated Rendering is indeed working, you can set the Peripheral and Middle regions to "Culled" so that only the Innermost region is rendered.

Performance Gains & Known Issues

Using the Balanced option for both the Foveation Pattern and Shading Rate presets should give you the best trade-off between visual quality and performance. It performs 4x supersampling in the foveal region, 1x in the middle region and 2×2 coarse shading at the periphery, while keeping the foveal region just large enough that the periphery stays outside the instantaneous field of view for most users. In general, the performance gain from VRS ranges from 20% to 38% for static scenes; VRS gains less when the scene is full of dynamic objects such as skeletal meshes, particle emitters and post-processing effects. We tested the performance improvement using some of the scenes included in the Unreal Engine 4 sample scene packs.

SunTemple (Link)
This scene was designed to showcase mobile features. The single level contains detailed PBR materials useful for a variety of graphics features and techniques, and comes with many static and opaque actors as well as many post-process effects.
• Frame time improved from 11.7 ms to 9 ms (~23%) with the Narrow/Highest Performance presets.
• Frame time improved from 11.7 ms to 9.8 ms (~16.2%) with the Balanced/Balanced presets.
[Screenshots: Default Scene View (no VRS), VRS Enabled (Balanced/Balanced), VRS Enabled (Narrow/Highest Performance)]

RealisticRoom (Link)
This scene shows off the realistic rendering possibilities of Unreal Engine 4 for architectural rendering. It utilizes physically based materials, pre-calculated bounce light via Lightmass, stationary lights using IES profiles (photometric lights), post processing and reflections.
• Frame time improved from 10.5 ms to 7.8 ms (~25%) with the Narrow/Highest Performance presets.
• Frame time improved from 10.5 ms to 8.8 ms (~19%) with the Balanced/Balanced presets.
[Screenshots: No VRS, VRS Enabled (Balanced/Balanced), VRS Enabled (Narrow/Highest Performance)]

Virtual Studio (Link)
The Virtual Studio scene showcases Unreal Engine's ability to integrate with professional-quality video cards and contains high-quality screen-space reflections, which benefit greatly from VRS.
• Frame time improved from 38.2 ms to 20.8 ms (~45.5%) with the Narrow/Highest Performance presets.
• Frame time improved from 38.2 ms to 26.6 ms (~30.3%) with the Balanced/Balanced presets.
[Screenshots: No VRS, VRS Enabled (Balanced/Balanced), VRS Enabled (Narrow/Highest Performance)]

Of course, VRS foveated rendering is not without its own shortcomings. One of the most noticeable artifacts is magnified aliasing in the peripheral region, which is more obvious for thin or glossy objects in the scene. To ease this kind of artifact, we recommend the following:
• Use temporal anti-aliasing. You can choose the AA method under Project Settings > Rendering > Default Settings > Anti-Aliasing Method.
• Tweak the region parameters (size and resolution) based on the content. For scenes with a lot of text or glossy materials, use a less aggressive setting.
• (UE 4.24 version) Some objects may flicker when VRS is enabled. Unchecking "Occlusion Culling" in Project Settings > Rendering can fix this.
Other known issues so far:
• Bloom will not take effect unless you use the UE 4.24 version.
• RenderDoc may conflict with the VRS plugin; if you use it, disable the RenderDoc plugin and try again.
• (UE 4.24 version) Typing a console command may crash the editor.
• (UE 4.24 version) The editor viewport is affected by VRS.

We support Unreal Engine 4.21.0, 4.23.1 and 4.24.2. We injected some VRS function calls into the UE rendering pipeline so that VRS applies only to certain render passes. Since the UpScale pass that VRS depends on was introduced in UE 4.19, the minimum version that could be supported is UE 4.19. If you want to learn more about the VRS plugin code base, you should read this article from NVIDIA that describes the VRS Wrapper APIs.

References
• What Is Variable Rate Shading? A Basic Definition of VRS
• NVIDIA Says New Foveated Rendering Technique is More Efficient, Virtually Unnoticeable
• VRWorks Variable Rate Shading (VRS)
• Turing™ Variable Rate Shading in VRWorks
• Easy VRS Integration with Eye Tracking
• Microsoft – Variable Rate Shading
• Realistic Virtual Vision with Dynamic Foveated Rendering – Tobii
  2. I have imported the plugin into my project and I can see the different resolutions in the different areas. However, I cannot see any boost in FPS or any reduction in rendering time. How does this plugin reduce the rendering workload? Can anyone help?
  3. I'm working with Unity 2019.4 and the Vive Pro Eye, on the Universal Render Pipeline (URP). Is there any way to enable foveated rendering with this configuration? The published asset only works on the old (built-in) pipeline. Thanks! @Corvus @Tony PH Lin
  4. We realise how important it is for the development community to have easy-to-use tools that improve content performance. We recently released the new Wave SDK 3.1.94 update [Early Access] with several experimental features for developers who create content for standalone VR headsets of the Wave ecosystem, such as the Vive Focus Plus and Vive Focus. In this update, we introduced changes to the Adaptive Quality feature, which can automatically adjust the rendering quality of your VR application according to the system workload in order to achieve better performance and improve battery life by up to 15%.

This blog post explains what Adaptive Quality is, why it is important and how to apply it to your own projects. We will provide an overview of the solution and its design/implementation, and share a few tips on how developers can get started using it. We also describe how it works in synergy with other Wave SDK features, such as Dynamic Fixed Foveated Rendering and Dynamic Resolution, for better results.

* Please note that Wave SDK 3.1.94 is an Early Access version that includes new features for developers to experiment with and provide feedback or suggest changes. These features are available only with a specific developer ROM update (and Adaptive Quality requires ROM v3.07.623.336 for the Focus Plus), but content developed with this developer ROM can't be published to Viveport until we release a public ROM update (coming soon!). Please refer to this article for more information on how to get access to the developer ROM and test Wave SDK 3.1.94.

Introduction

Standalone VR headsets may have all the components required to provide VR experiences, but unlike PC-based VR headsets, utilising the full power of their hardware requires an intricate balance for VR apps to run smoothly and with consistent performance. Heat generated by a headset working extra hard to render VR content can result in throttling: the hardware detects the high temperatures and, when a predefined limit is crossed, lowers the clock speed of the CPU/GPU to prevent the system from overheating. When temperatures return to normal, the system raises the CPU/GPU clock speeds again and performance bounces back. Unfortunately, this process may repeat and lead to poor battery life and inconsistent performance, as the system cannot quickly get rid of the generated heat. Developers can mitigate this issue by making sure their games perform at their best at all times, but that is not always possible.

Adaptive Quality

Adaptive Quality gives developers a way to balance the performance and power consumption of their VR applications in real time, offering automatic adjustment of CPU/GPU performance according to the workload of the system. Furthermore, it allows defining a set of strategies for how the application should respond to system changes to improve FPS when rendering performance is insufficient. Adaptive Quality can be combined with the Dynamic Resolution feature to adjust the image quality of the application according to system changes, and with Fixed Foveated Rendering to dynamically change the quality of the peripheral region when improving performance is essential. This results in better battery consumption management and smooth frame rates, as developers have more control and can create and customise their own policies to dynamically handle hardware changes.
Especially for GPU fragment-bound apps, this leads to less throttling and a better experience for end users. Adaptive Quality v1.0 was first introduced in Wave SDK 3.1.1, supporting automatic CPU/GPU adjustment and system events based on workload. Starting from Wave SDK 3.1.94, we introduced new features and changes (v2.0), adding Dynamic Resolution and Fixed Foveated Rendering to the mix.

Auto CPU/GPU Adjustment

Standalone VR headsets are battery-powered, so making sure that power is not drained too quickly is important. With Adaptive Quality, we've made the management of CPU and GPU clock rates much simpler by making it almost entirely automatic. If Adaptive Quality is enabled, the system dynamically changes the CPU and GPU performance levels to maintain performance based on the system load. When the performance of your application is insufficient, CPU/GPU clock speeds increase to improve the FPS. Likewise, if the application already runs at a high FPS and the complexity of the scene is low, Adaptive Quality can scale the clock rates down to save battery power and prolong headset usage. Although we don't provide direct access to the maximum/minimum allowed clock speeds, Adaptive Quality configures those properties based on its knowledge of the current system load, deciding whether the levels should be lowered or raised. Of course, when Adaptive Quality is disabled, developers can still manually increase/decrease the CPU/GPU performance levels based on practical demands, according to this API.

System Events & Custom Policy

Although changing the clock rates can reduce power consumption or improve performance, that may not always be enough to achieve better results. Sometimes more things need to change within the VR application software itself to get stable performance. Adaptive Quality can be configured to broadcast events whenever the system workload changes. This mechanism is really useful, since developers can create and customise their own policy to reduce GPU load and ensure constant frame rates over a longer period of time. Depending on the situation, the VR application can react differently to those events; developers can create their own policies and choose whether and how to handle those changes and adjust scene complexity themselves. Developers can subscribe to performance events that recommend lowering or raising the rendering quality of the application, and respond by modifying the rendering settings accordingly, for example by enabling or disabling MSAA or other settings that help boost performance.

Dynamic Resolution

Dynamic Resolution is a feature that works together with Adaptive Quality and adjusts the image quality of the application by changing the eye buffer size according to the broadcast events mentioned in the previous section. More specifically, the resolution scale is increased automatically when the received event indicates that the quality can be higher; similarly, when the received event recommends lowering the quality of the running application, the resolution scale is decreased. To ensure that the resolution scale never drops to a point where the application becomes unusable, Dynamic Resolution comes with built-in functionality that determines the lower bound of the resolution scale for different VR devices so as to maintain text readability.
What's great about this feature is that the change in resolution scale introduces no extra latency.

Dynamic Fixed Foveated Rendering

Foveated Rendering is a technique that exploits the anatomy of the human eye: applications can drop the quality of graphics in the peripheral vision by lowering the resolution in that region, while focusing the available processing power on a smaller area of the image (the foveated region). The term "foveated" derives from the word "fovea", the part of the retina that gives us sharp central vision (foveal vision) and lets us focus on the visual details of what is important in a narrow region right in front of us. Anything outside the fovea region belongs to the peripheral vision, which, although still useful, mainly detects fast-moving content changes and color, hence why it feels comparatively less detailed and blurry. It's worth noting that there are 2 types of Foveated Rendering, and the terms are sometimes confused:

• Fixed Foveated Rendering assumes that the foveated region is always at the center of the user's field of view, and that lower-resolution pixels can be rendered in the distortion region around the lens, where things are already not clearly visible.
• Eye-Tracked or Dynamic Foveated Rendering can be used with headsets that support eye-tracking modules (check out the Vive Pro Eye) to accurately position the foveated region based on gaze direction. It's called dynamic because as the human eye moves, the foveated and peripheral regions keep changing dynamically.

Adaptive Quality can be combined with Fixed Foveated Rendering to help increase the performance of VR applications according to the system workload (dynamically): Fixed Foveated Rendering can be enabled automatically by Adaptive Quality to further reduce GPU load and improve performance whenever required.

Results

The benefits of Adaptive Quality become clearer through some examples. The graph below illustrates how Adaptive Quality helps deliver smooth, high frame rates with a native fragment-bound application. In green, you can see the frame rate fluctuating when Adaptive Quality is disabled, with FPS dropping as the GPU workload increases. In pink, you can see the stable results after enabling Adaptive Quality, running at 75 FPS. The x-axis indicates the time since the start of the application; the y-axis shows the FPS on the left and the GPU level on the right. The yellow labels show the fragment-loading values, which increase every 5 seconds. Notice how the GPU clock rates go down when the FPS is high, to save power (after 11 seconds and 61 seconds).

Another graph below shows the results of a test using a Unity application that is GPU fragment-bound. When Adaptive Quality and Dynamic Resolution are both enabled, there is an increase of 13 FPS on average. In green you can see the case where Adaptive Quality is disabled, while in yellow you can see how Dynamic Resolution decreases the resolution scale to improve FPS and, when the FPS is high enough, increases the resolution scale back up again.

We also tested Adaptive Quality with other applications, such as Viveport's Sureshot Game and SPGM. As the results show, energy consumption improved by 10% and 15% respectively, indicating how useful this feature can be for extending the battery life of the VR headset.
How to use Adaptive Quality

Wave SDK 3.1.0 to 3.1.6: Adaptive Quality is not enabled by default. The Wave SDK provides an API function called WVR_EnableAdaptiveQuailty that needs to be called manually so that the CPU/GPU performance levels can be adjusted automatically. The system events WVR_EventType_RecommendedQuality_Lower and WVR_EventType_RecommendedQuality_Higher are broadcast based on the system workload and can be monitored to trigger actions, using the Wave system events. Check the System Event documentation to learn how to listen for these events.

Wave SDK 3.1.94 or later: Adaptive Quality is enabled by default, and system events are broadcast to change the rendering quality when needed. One or multiple strategies can be used to adjust display quality and improve the FPS when performance is insufficient (WVR_QualityStrategy_Default, WVR_QualityStrategy_SendQualityEvent, WVR_QualityStrategy_AutoFoveation). A minimal native event-handling sketch is shown below.
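For the native path, handling these events could look roughly like the following sketch. It assumes the Wave Native SDK's WVR_PollEventQueue polling function and header layout, and SetMsaaEnabled is a placeholder for whatever quality knob your own renderer exposes:

```cpp
// Sketch: poll the Wave event queue and react to Adaptive Quality
// recommendations. SetMsaaEnabled stands in for whatever rendering setting
// your app chooses to trade for performance.
#include <wvr/wvr_events.h>

void SetMsaaEnabled(bool bEnabled);  // placeholder: your renderer's quality knob

void PumpAdaptiveQualityEvents()
{
    WVR_Event_t Event;
    while (WVR_PollEventQueue(&Event))
    {
        switch (Event.common.type)
        {
        case WVR_EventType_RecommendedQuality_Lower:
            SetMsaaEnabled(false);   // system under load: shed GPU work
            break;
        case WVR_EventType_RecommendedQuality_Higher:
            SetMsaaEnabled(true);    // headroom available: restore quality
            break;
        default:
            break;
        }
    }
}
```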
Unity

The Wave SDK package for Unity supports all the features of Adaptive Quality, including the new features introduced with Adaptive Quality 2.0 (Dynamic Resolution and Dynamic Fixed Foveated Rendering), and it's really easy to use. The WaveVR_AdaptiveQuality script can be used to enable the Adaptive Quality feature. Starting from WaveVR 3.1.94, however, this component comes pre-attached to the WaveVRAdaptiveQuality GameObject of the WaveVR prefab, so WaveVR_AdaptiveQuality is enabled by default. As you can see in the screenshot below, the WaveVRAdaptiveQuality GameObject is part of the WaveVR GameObject that is required to build VR applications for the Wave ecosystem. There are 2 scripts attached to it:

• WaveVR_AdaptiveQuality: when this script is enabled, automatic CPU/GPU clock adjustment takes place, and developers can tick the boxes under the Rendering Performance Improve Strategy section to define which strategies should be used (e.g. if I want system events to be broadcast and Fixed Foveated Rendering to be enabled, I need to tick both boxes).
• WaveVR_Dynamic_Resolution: this script is responsible for the Dynamic Resolution feature, which adjusts the resolution scale of the VR application according to workload. A list of resolution scale values can be defined, and these are used to adjust the resolution scale whenever Adaptive Quality triggers an event. The Text Size slider can also be used to define the smallest text size used in the application, to prevent Dynamic Resolution from making text unreadable.

Unreal Engine

Adaptive Quality can also be used with the Wave plugin in Unreal Engine. Two functions are provided: one to enable the Adaptive Quality feature, and one to query whether it is enabled. Although automatic CPU/GPU adjustment is supported as soon as Adaptive Quality is enabled, Dynamic Resolution and Dynamic Fixed Foveated Rendering can't be utilised with this plugin at the moment. There will be more updates on this soon; Wave 3.2 is expected to be released shortly, adding support for system events in Unreal Engine.

Summary Table

Best Practices & Tips

Now that you know more about Adaptive Quality, here are some tips and advice:

• Rendering improvements from WaveVR Adaptive Quality have limits. It is still highly recommended to optimise your app as much as possible first; check our mobile VR performance optimisation tips.
• Enabling WaveVR Adaptive Quality can help lightweight VR apps, such as photo or video playback apps, be used for longer.
• Enabling WaveVR Adaptive Quality with WVR_QualityStrategy_SendQualityEvent and WVR_QualityStrategy_AutoFoveation can improve the rendering quality by up to 15% if the rendering bottleneck is GPU fragment processing.
• Always build optimized versions of the application for distribution. Even if a debug build performs well, it will draw more power and heat up the device more than a release build. Power management is a crucial consideration for Android VR development.
• The Wave SDK forcibly disables Adaptive Quality during map loading to increase performance, and restores the Adaptive Quality status after the map is loaded.

Useful Links

To implement Adaptive Quality in your own application, check the documentation pages below:
• Wave Native SDK
• Unity Integration
• Unreal Engine Integration

Also see:
• System Event & how to listen to events
• Foveated Rendering
• Dynamic Resolution

We gave a talk about Adaptive Quality and the new features introduced in the latest Wave SDK at the virtual Vive Ecosystem Conference back in March 2020. You can watch the presentation below.

What do you think? Feel free to try this feature and share feedback from your tests in our forums. You can also find the complete list of new features for each Wave SDK update in our release notes.
  5. To all: here is the patch if you have encountered the issue where Foveated Rendering doesn't work in IL2CPP builds (the patch file is attached). It modifies 2 files, Assets/WaveVR/Platform/WVR_Android.cs and Assets/WaveVR/Platform/wvr.cs, changing the marshalling attributes of the WVR_PreRenderEye array parameters from [Out] to [In, Out] (with [Out] alone, the marshaller is not required to copy the managed array contents to the native side, which IL2CPP appears to enforce more strictly than Mono):

```diff
--- a/plugin/Assets/WaveVR/Platform/WVR_Android.cs
+++ b/plugin/Assets/WaveVR/Platform/WVR_Android.cs
@@ -520,8 +520,8 @@ public class WVR_Android : wvr.Interop.WVR_Base
 	}
 
 	[DllImportAttribute("wvr_api", EntryPoint = "WVR_PreRenderEye", CallingConvention = CallingConvention.Cdecl)]
-	public static extern void WVR_PreRenderEye_Android(WVR_Eye eye, [Out] WVR_TextureParams_t[] param, [Out] WVR_RenderFoveationParams[] foveationParams);
-	public override void PreRenderEye(WVR_Eye eye, [Out] WVR_TextureParams_t[] param, [Out] WVR_RenderFoveationParams[] foveationParams)
+	public static extern void WVR_PreRenderEye_Android(WVR_Eye eye, [In, Out] WVR_TextureParams_t[] param, [In, Out] WVR_RenderFoveationParams[] foveationParams);
+	public override void PreRenderEye(WVR_Eye eye, [In, Out] WVR_TextureParams_t[] param, [In, Out] WVR_RenderFoveationParams[] foveationParams)
 	{
 		WVR_PreRenderEye_Android(eye, param, foveationParams);
 	}
diff --git a/plugin/Assets/WaveVR/Platform/wvr.cs b/plugin/Assets/WaveVR/Platform/wvr.cs
index 0c4a779..9529879 100644
--- a/plugin/Assets/WaveVR/Platform/wvr.cs
+++ b/plugin/Assets/WaveVR/Platform/wvr.cs
@@ -1054,7 +1054,7 @@ namespace wvr
 			return WVR_Base.Instance.SubmitFrame(eye, param, pose, extendMethod);
 		}
 
-		public static void WVR_PreRenderEye(WVR_Eye eye, [Out] WVR_TextureParams_t[] param, [Out] WVR_RenderFoveationParams[] foveationParams)
+		public static void WVR_PreRenderEye(WVR_Eye eye, [In, Out] WVR_TextureParams_t[] param, [In, Out] WVR_RenderFoveationParams[] foveationParams)
 		{
 			WVR_Base.Instance.PreRenderEye(eye, param, foveationParams);
 		}
```

forveation.zip