// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 70 of 85

[ 2020 ]

20 entries
1381|blog.unity.com

The road to 240 million virtual kilometers: BMW’s autonomous driving journey with Unity

Before BMW's autonomous driving (AD) technology is mass production-ready, it will need to drive 240 million virtual kilometers. Learn how Unity is helping BMW put more on its odometer every day.

In our first post, we covered how a team at BMW's Fully Autonomous Driving & Driver Assistance Systems division has used Unity to develop custom tools for simulation visualization and scenario creation. With these tools, the BMW Group is well equipped to tackle the most daunting daily challenges in AD development. Let's walk through a few areas where Unity is helping out in a major way.

By combining simulation with key performance indicators (e.g., continually maintaining a safe distance from traffic vehicles), BMW can assess how complete its features actually are (a minimal sketch of such a KPI check follows the list below). As individual components of its AD function master basic scenarios, BMW can conduct mass validation of its entire AD system in more complex situations. These tests come in multiple forms:

- Small-scale feature tests – The most common type of testing, these enable BMW to rapidly evaluate portions of its AD system, such as vehicle trajectory planning. In a typical day, BMW's team will log tens of thousands of virtual kilometers; the majority of these are short-distance tests (from hundreds of meters to 1 kilometer) in increments of less than a minute.
- Large-scale system tests – Instead of a series of mini-tests for a specific feature, this type of simulation is designed to test the broader AD system. It plays out as an extended scenario that can run for hours and strives to replicate real-world conditions; for example, an autobahn trip between the German cities of Munich and Stuttgart. These simulations are more complex, often involving a virtual world complete with moving vehicles, pedestrians, and variable weather conditions, as well as map data, sensor models as inputs to perception algorithms, vehicle trajectory planning, vehicle dynamics, and much more.
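The post does not show BMW's actual KPI framework; purely as an illustration of how a safe-distance KPI might be scored during a simulated run in Unity, here is a hypothetical sketch (all names and thresholds are assumptions):

using UnityEngine;

// Hypothetical illustration: scores a "keep a safe following distance" KPI
// over a simulated run. Not BMW's actual implementation.
public class SafeDistanceKpi : MonoBehaviour
{
    public Transform hostVehicle;
    public Transform leadVehicle;
    public float minSafeDistance = 20f; // meters; assumed threshold

    int sampleCount;
    int violationCount;

    void FixedUpdate()
    {
        sampleCount++;
        float distance = Vector3.Distance(hostVehicle.position, leadVehicle.position);
        if (distance < minSafeDistance)
        {
            violationCount++;
            Debug.Log($"KPI violation at t={Time.time:F2}s: distance {distance:F1} m");
        }
    }

    // Fraction of simulation steps in which the KPI was satisfied.
    public float Score => sampleCount == 0 ? 1f : 1f - (float)violationCount / sampleCount;
}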
Because driving situations can be repeated as often as required in simulation, BMW runs tests throughout each day, including "night drives." After using the Unity-based scenario editor to set up tests, developers can review the results the next morning, leverage analytics to pinpoint exactly when functions failed, and bring up the exact point of failure in a frame rendered from Unity. The team can automatically extract those problematic situations as small-scale scenarios, and then visualize them to better understand why the situation was problematic.

For instance, in this scenario, a traffic vehicle cuts in suddenly, yet the host vehicle does not decelerate immediately, resulting in a near accident. The scenario can be replayed after each incremental code update until the AD function reacts properly. After an initial failure in this scenario, improvements ensure that the vehicle brakes properly in response to a traffic vehicle merging into its lane aggressively.

To reach the high automation level for their vehicles, BMW's developers need to identify the limitations of their AD functions in as many situations as possible. Yet scenarios like the ones simulated in the video below are too costly, difficult, or dangerous to replicate in the real world. Using the Unity-based scenario editor, the developers can devise edge-case scenarios, such as a traffic vehicle running a stop sign. Uncovering these corner cases in the confines of a virtual world is much more cost-effective and safe, while allowing for the opportunity to test reproducibly.

BMW uses simulation to test scenarios that are too unusual to occur or too risky to create in a real-world driving environment. Three edge cases are shown here:

1) A pedestrian unexpectedly appearing in the host vehicle's lane in a high-speed, highway setting.
2) A traffic vehicle cutting in suddenly.
3) A stopped vehicle in the host vehicle's lane.

Unity is used on a daily basis to help the 1,800 AD developers in the BMW Group continuously improve the code for which they are individually responsible. As they make changes to the codebase to add a new function or improve an existing one, they run integration tests before merging into the master branch. For instance, a developer focused on perception can use the Unity-based scenario editor to design multiple scenarios in which the vehicle passes a speed limit sign. These small-scale tests are simulated on the developer's PC and can be visualized with Unity live as they are being run. The developer can visually validate their results, as well as use evaluation metrics to identify improvements or confirm that the feature is ready to be merged (i.e., the vehicle adjusts to the posted limit every time). Developers can simultaneously test and visualize the results of their incremental code updates.

Post-merge, they can run acceptance tests to identify failures in other functions that arise as a result of their commit, or vice versa. For instance, a merge from a peer could introduce a bug that affects the perception of speed limit signage. The developers can use Unity for visual debugging and easily inspect what is happening so they can fix things faster. BMW's system is set up in such a way that developers can set breakpoints both within the driving function and within the simulation code. The AD function and simulation are always in sync with one another, so the team can step through the code line by line and swap between the two worlds as they debug. The synchronicity is also mirrored by the visualization, allowing simultaneous inspection of the code and the simulated world. Because developers can still move around and inspect values in the Unity-based application, they can reduce the number of tools that need to be open at the same time, while keeping the data as transparent as possible. All these elements ensure that the production code that will ultimately power BMW's autonomous vehicles meets its standards for safety and reliability.

---

Check out Unity Industrial Collection or learn more about the ways that Unity is used for AD simulation in our whitepaper: Top 5 Ways Real-Time 3D Is Revolutionizing the Automotive Product Lifecycle.

>access_file_
1383|blog.unity.com

Visualizing BMW’s self-driving future

BMW employs Unity across its automotive lifecycle for a variety of use cases, from transforming production processes with AR and VR to marketing its vehicles in groundbreaking ways. Let's explore one of BMW's most innovative applications of real-time 3D technology – making it easier to navigate the complexity of autonomous driving (AD) and challenge its AD function across millions of simulated scenarios.

The BMW Group – home to the BMW, MINI, Rolls-Royce, and BMW Motorrad brands – has been working on highly automated driving since 2006. In the upcoming years, the company hopes to offer drivers a groundbreaking opportunity – to buy a vehicle they will almost never need to drive themselves. The BMW Group aims to sell cars with Level 3-enabled automation for driver assistance systems, highway driving, and parking in the upcoming years. (SAE Level 3 is defined as conditional driving automation, with some human intervention required.)

Just 5% of all BMW's test miles will be driven by actual vehicles (video credit: BMW).

Around the world, a fleet of test vehicles from the BMW Group will pressure-test this technology. Because this fleet cannot gather all of the data needed for AD development, nearly 95% of all BMW's test miles are driven by virtual vehicles in virtual worlds. These simulations take place at BMW's Autonomous Driving Campus in Unterschleissheim, Germany, just north of Munich. Nicholas Dunning, a graphical simulation developer at the BMW Group, is part of the core 12-person development team that has built custom tools made with Unity to help the 1,800 AD developers at BMW's campus visualize and advance their work.

"At BMW, we believe simulation is key for developing autonomous driving," says Dunning. "Unity plays a pivotal role in helping our team create, visualize, and evaluate the millions of virtual road trips needed to help us achieve our AD ambitions."

With the overwhelming majority of its testing taking place in BMW's bespoke datacenter for AD development, BMW needed to give its AD developers an easy way to:

1. Visualize the raw data from simulations in an immediately understandable, true-to-life way, beyond graphs and charts.
2. Evaluate the current state of their AD functions across countless simulated scenarios.

Taking advantage of Unity's extensibility, Dunning's team developed a custom Unity-based solution to address these needs. Let's dive into the unique way they are using Unity to help the BMW Group bring a safe, reliable AD system to the street on schedule.

BMW used Unity to develop a graphical scenario editor that vastly simplifies the process of testing and validating features in development. The interface makes it easy for AD developers to visualize and set up thousands of simulated scenarios that increase feature maturity and readiness. Here's a sampling of the elements they can parameterize in the scenario editor to battle-test features in simulation (a rough data-model sketch follows the list):

- Quantity and type of traffic vehicles (car, bus, etc.)
- Pedestrians
- Traffic signs (ground or mounted)
- Lanes (straight, curved, etc.)
- Lane boundaries (none, single-solid, double-solid, dashed, etc.)
- Environmental conditions (time of day, fog density, precipitation level)
- Vehicle trajectory planning
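BMW's actual scenario format is not public; conceptually, though, a parameterized scenario could be captured in a plain C# description along these lines (all names are hypothetical):

using System.Collections.Generic;

// Hypothetical sketch of a parameterized scenario description,
// mirroring the editor parameters listed above.
public enum LaneBoundary { None, SingleSolid, DoubleSolid, Dashed }

public class ScenarioDescription
{
    public List<string> trafficVehicles = new List<string>(); // e.g., "car", "bus"
    public int pedestrianCount;
    public List<string> trafficSigns = new List<string>();    // ground or mounted
    public float laneCurvature;        // 0 = straight
    public LaneBoundary laneBoundary = LaneBoundary.Dashed;
    public float timeOfDayHours = 12f;
    public float fogDensity;           // 0..1
    public float precipitationLevel;   // 0..1
}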
In addition to scenarios generated manually by BMW's developers, scenarios are also extracted from traffic scenes recorded by the test fleet. This data is post-processed and automatically converted into simulation scenarios. A further analytic step identifies scenarios that are interesting to develop further and varies them.

The video below shows a real-world scenario of a vehicle cut-in on a highway in Germany, as well as the converted scenario in the simulation. Because this was identified as an interesting scenario, it undergoes variations. In this case, these variations test the vehicle's ability to maintain a safe distance from the cut-in car in various weather conditions, including rain, low sun position, and fog.

A simulated scenario converted from fleet testing is varied across weather conditions.

Using Unity as a visualization front end for simulated testing is highly beneficial to BMW's AD developers. With real-time 3D, they have full control over how they interact with this immersive digital reality. As shown in the video below, they can experience a real-time, connected shift in point of view as they alter their perspective of the vehicle or any other object within the virtual scene. They can zoom in for a closer inspection or move back to get a sense of scale, making it easy to get a holistic understanding of everything happening in the simulated scenario.

Unity lets BMW's AD developers explore the simulated scenario from any vantage point. This scenario shows a vehicle surrounded by unknown objects (visualized as purple blocks) to help evaluate the AD function's ability to operate with a mixture of known and unknown data.

Initially, BMW built highly detailed, realistic environments, but over time found that switching to a more abstract visualization style and rendering only key components (e.g., road, vehicles) helped to eliminate data noise and allowed AD developers to better concentrate on the results of each simulation. BMW's AD developers can not only quickly create scenarios for testing, but also get immediate, visual feedback on the readiness of their AD function. They can literally see how the vehicle performed during the test in real-time 3D, rather than having to parse through data in 2D charts and graphs.

The visualization and evaluation data (lower-right corner) are displayed and synced in real time, making it easy for developers to analyze results in context.

As BMW continues to progress toward its AD ambitions, Dunning and his team hope to eventually extend their Unity-based solution beyond its core audience of AD developers. The team sees tremendous potential in collaborating with their colleagues responsible for in-car testing to ensure that the pre-production Level 3 vehicles perform as promised before they go into full production.

---

Read part two, where we share how BMW is using Unity to overcome the daily challenges of AD development. Check out Unity Industrial Collection or learn more about how Unity is used for AD simulation in our whitepaper: Top 5 Ways Real-Time 3D Is Revolutionizing the Automotive Product Lifecycle.

>access_file_
1384|blog.unity.com

What the hell is ARPDAU and how can you boost it?

For an app developer wanting to understand more about the health of their app, there's a whole load of metrics to choose from. Some give insight into the day-to-day, others take a longer view. ARPDAU is a staple day-to-day metric developers should be using - but what exactly does it tell you about your app, and how do you go about measuring it?

What is ARPDAU?

ARPDAU, which stands for Average Revenue Per Daily Active User, is a metric that helps you understand how well your monetization is working, whether it's monetization from ads, from IAPs, or both. ARPDAU also tells you how any in-app changes you have made are affecting the success of your monetization.

ARPDAU calculation and formula

To calculate ARPDAU, take the revenue from IAPs, ads, or both on any given day, and divide it by the number of unique active users on that day:

ARPDAU = daily revenue (ads + IAPs) / daily active users
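In code, the formula is a one-liner; a minimal sketch with a worked example:

// Minimal sketch of the ARPDAU formula above.
public static double Arpdau(double adRevenue, double iapRevenue, int dailyActiveUsers)
{
    if (dailyActiveUsers == 0) return 0.0;
    return (adRevenue + iapRevenue) / dailyActiveUsers;
}

// Example: $500 in ads + $250 in IAPs across 10,000 DAU
// => Arpdau(500, 250, 10000) == 0.075, i.e., $0.075 per daily active user.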
ARPDAU for mobile games

If you have a free-to-play game, you will need to constantly monitor and adjust your monetization strategy to maximize engagement for all the different kinds of players. Looking at your ARPDAU over time, you can easily see how changes or events affect the revenue you make that day. Changed the IAP prices? Mixed up your ad placements, or added rewarded video ads to your app for the first time? This metric will show you the impact of those changes, and then you'll be able to optimize for getting the maximum amount of revenue while keeping users happy. Because it's a metric that applies to each individual daily active user, ARPDAU also normalizes for fluctuations in your user base, giving you a clear picture of how much money you made in a given day from the users who were in your app.

How to increase ARPDAU

ARPDAU depends on many factors in your app. First, in-app features - traffic drivers or prompts that invite users to watch a rewarded video or make an IAP, as well as the prices and variety of in-app purchases. Second, the ads in your app - where they are placed, how frequently they appear, what kind of ads they are, and how relevant they are for your users. Third, external factors such as days of the week and special holiday times. Let's take a look at 5 ways to increase your ARPDAU in more detail:

1. Integrate rewarded video ads into your app

Rewarded video ads can increase app revenue significantly. How? Click-through rates for rewarded videos are about 4 to 5 times higher than typical display ads because users have a reason to engage. The ad unit is more enjoyable for users to interact with, and they also know they will get something concrete - like coins, swords or extra lives - in return. And since app developers get paid for every opt-in ad engagement, rewarded videos equal higher ARPDAU for you.

2. Nudge non-paying users towards IAPs

Rewarded video ads are also a great way to encourage users who haven't ever paid for an in-app purchase to do so. If the reward is similar to an IAP, they get a taste of the premium features they're missing out on, and then they are more likely to purchase in future. One study found that users who watch rewarded video ads are more likely to make in-app purchases - 6 times more in some cases.

3. A/B test ad placements in your app

Ads that interrupt user experience are unlikely to convert well, and worse - they will annoy your users. If you're going to have ads in your game, timing is everything - if your game has levels, show an ad at the end of the level, rather than at the beginning or in the middle. If you have a turn-based game, show an ad at the completion of every single game (not after each turn). A/B testing can help make sure you're optimizing ad placement. Make sure to also cap the frequency at which users see ads in your app - an optimal rate is around 3 to 4 ads per session for a gaming app. You can use ironSource's ad monetization A/B testing tool to test different ad monetization strategies and learn with certainty which one maximizes ARPDAU.

4. Segment your users

From whales to minnows, not all users are equal. Making sure to treat them differently is going to help give your ARPDAU a boost. Try modifying your IAP prices according to different user behaviors or geos, for example offering Halloween special deals on coins, or double rewards on a rewarded video for a user who hasn't been in the app for a while. This is going to increase ad engagement, and you'll see your ARPDAU go up as a result. Learn 4 ways to segment your users and boost monetization here.

5. Use in-app bidding

Developers everywhere are turning to in-app bidding as their sole setup for monetizing their apps. In contrast to the traditional waterfall solution, which requires developers to manually optimize and rearrange ad networks according to CPM, in-app bidding completely automates the process - holding an auction for each ad impression and serving the impression to the highest-paying ad network. Because the impression gets filled by the highest-paying network each time, developers don't risk losing out on revenue. Since adopting in-app bidding, developers have seen massive boosts in revenue - from Budfarm by East Side Games, which boosted ARPDAU by 60% with ironSource's in-app bidding solution LevelPlay, to Metamoki, which increased ARPDAU by 50%.

>access_file_
1385|blog.unity.com

10 ways to speed up your programming workflows in Unity with Visual Studio 2019

Visual Studio 2019 offers world-class debugging, and lots of new tools and customization options so that you can set up your coding environment exactly the way you want it. It also comes with a range of powerful features for working with C# in Unity that helps you write and refactor code quicker than before. In this blog post, we will take a look at 10 tips on some of these features which may speed up your workflow too.Our Unity evangelist Arturo Nereu and Abdullah Hamed, program manager for Visual Studio Tools for Unity at Microsoft, recently hosted a Unite Now session sharing tips and tricks on how to get the most out of Visual Studio 2019 when developing in Unity.This post shows a quick overview of some of these tips. We also added links directly to those sections from the talk as well as other related content, if you want to dig deeper.Using Console.Log is an easy and quick way to help debug your project utilizing Unity’s console view. However, Visual Studio offers a more efficient way which becomes increasingly valuable as your project gets more complex. In Visual Studio, simply click the Attach to Unity button in the Visual Studio menu. This creates a connection between the two applications so that you can insert breakpoints and step through your code, while being in Play mode in Unity. You can also click Attach to Unity and play to start the execution without leaving your IDE. The beauty here is that it allows you to inspect the state of your code flow and values of the properties etc. at runtime. While this may seem trivial, being able to pause the execution at any time during gameplay and step through to inspect the specific state of the game and values of your properties in that exact frame, is a very powerful tool when debugging.Another handy option when working with breakpoints is that you can insert your own conditions with related actions such as a conditional expression which has to evaluate as true before applying in your debug flow.Visual Studio 2019 introduces Unity Analyzers. An analyzer works by detecting a code pattern and can offer to replace it with a more recommended pattern. Unity Analyzers are a collection of Unity-specific code diagnostics and code fixes that are open source and available on GitHub. Analyzers can provide you with a better understanding of Unity-specific diagnostics or simply help your project by removing general C# diagnostics that don’t apply to Unity projects. An example could be a simple conditional statement where you need to check if the GameObject has a specific tag to apply a certain behavior to it.if(collision.gameObject.tag == "enemy") { // Logic being applied to enemy GO }The analyzer would be able to analyze your code, will detect the pattern and offer to use the more optimized method instead. In this case, the analyzer would suggest the CompareTag method which is more efficient.if(collision.gameObject.CompareTag("enemy")) { // Logic being applied to enemy GO } While the above example represents a minor optimization tweak with no significant impact in a single script attached to a single GameObject, this may be different for a large scale project with 1000s of GameObjects with the script attached. 
It's the sum of all parts when looking into performance optimization, and Analyzers make it easy to identify and improve your performance simply by reducing unneeded overhead through optimized code syntax. You can also find a list of the analyzers here, and if you are interested in learning more, visit this blog post or jump directly to this part of the Unite Now talk.

A common challenge when creating your scripts is the need to come back at a later point and revisit the code. That might be a result of implementing code snippets that will eventually need refactoring for better performance, but that serve the current needs while you are, for example, testing out game mechanics. Visual Studio has a handy feature to keep track of this called the Task List, which allows you to track code comments that use tokens such as TODO and HACK, or even custom tokens. You can also manage shortcuts that take you directly to a predefined location in code. To create a task for later, simply add the token in your code:

// TODO: Change the collision detection once new colliders are ready

The Task List window, which you can access under View in the menu, gives you an overview of all the tasks you tagged and links you to those specific parts of the code in just one click. As the list of action items in your project grows, you can even configure your own custom tokens in the Task List and assign priorities, organizing your refactoring process effectively. To customize your Task List tokens, go to Tools > Options. See the full example in the Unite Now session here.

Code snippets are small blocks of reusable code that can be inserted in a code file using a right-click menu (context menu) command or a combination of hotkeys. They typically contain commonly used code blocks such as try-finally or if-else blocks, but you can also use them to insert entire classes or methods. In short, they offer a handy way to save a lot of time by creating the boilerplate code for you. To surround your code with a snippet such as a namespace or region, press CTRL + K + S. That allows you to apply the snippet as demonstrated in the video below. You can find a step-by-step walkthrough of creating your own code snippets in Microsoft's documentation (Visual Studio, Visual Studio for Mac).

A common workflow when you are refactoring your code is renaming your variables to more descriptive and accurate names. Changing the name in one place obviously means you also have to fix all references to that variable. However, Visual Studio offers an easy shortcut to do this in one operation. Simply highlight the name of the variable you want to rename, right-click (or use the keyboard shortcut CTRL + R), and then rename the variable. Select preview changes to review the implications of the change before applying it. You can use the same tip for renaming the classes in your scripts, but remember that you have to rename the C# file accordingly to avoid compilation errors. Learn more about the class renaming flow in this part of the Unite Now session.

Commenting or uncommenting blocks of code is another common workflow when refactoring or debugging your code. It can be a time-consuming task when you do it one line at a time. Visual Studio, however, allows you to comment out entire blocks of code using a simple shortcut command: Ctrl+K+C, and Ctrl+K+U for uncommenting again.
If you are on Mac, simply use CMD+K+C to comment out and CMD+K+U to remove the comments again. Being able to comment out entire blocks quickly can be an efficient way to suppress specific game logic during your testing workflows.

While Unity Collaborate makes it easy to save, share, and sync your project with others directly from Unity with a user-friendly visual interface, some teams and developers prefer source control solutions such as GitHub. Working with GitHub for source control is now much easier with the GitHub for Unity plugin. The extension is completely open source, and it allows you to view your project history, experiment in branches, craft a commit from your changes, and push your code to GitHub without leaving Unity. GitHub authentication is embedded in Unity, including 2FA, and with the click of a button you can quickly initialize your game's repository without having to use command-line instructions. It allows you to create a Unity-specific .gitignore file so you don't have to set one up manually. Visual Studio 2019 also comes with a new interface that makes it even easier to work with GitHub directly in the IDE. To activate the new interface in Visual Studio, go to Tools > Options > Environment > Preview features > New Git user experience. You can also follow along with the video instructions from the Unite Now session, which show a more in-depth walkthrough of getting started.

Live Share enables you to share your instance of Visual Studio directly with a teammate using just a link, allowing them to edit your code and collaborate on your project in real time. You don't have to clone a repo or set up the environment first in order to share. You both just need to have Visual Studio installed, and then it's as easy as clicking a button to create the Live Share session. To get started, simply select Live Share to generate a link to the parts of your code that you want to share with anyone who has Visual Studio or Visual Studio Code installed. A sharing session is created between you and your collaborators, allowing them to see your code without having to install anything except the editor. It works almost instantly. You can learn more about Live Share from our Unite session here, visit the Visual Studio product page, or jump directly to the Quickstart guide here.

Remembering the signature of every MonoBehaviour method is tricky, and while the Unity documentation has you covered, Visual Studio provides a neat feature that allows you to look them up directly in the IDE. Simply press CTRL + Shift + M, search for the function you would like to implement, and filter through the search results to find the method. Select the checkbox and click OK to insert the boilerplate code for the method directly in your code, ready for you to use.

Several of the above tips are available as handy shortcuts, and at the end of the day, knowing those shortcuts may be the biggest timesaver of them all. So let's wrap up the list with a summary of the keyboard shortcuts for these tips, plus a few more as a bonus:

What | Windows | Mac
Search your entire project for anything | CTRL + T | CMD + .
Implement Unity Messages | CTRL + Shift + M | CMD + Shift + M
Comment out code blocks | CTRL + K, CTRL + C | CMD + /
Uncomment blocks of code | CTRL + K, CTRL + U | CMD + /
Copy from clipboard history | CTRL + Shift + V | -
View Task List | CTRL + T | No default keybinding, but you can bind it
Insert a surrounding snippet such as namespace | CTRL + K + S | No default keybinding, but you can bind it
Renaming a variable while updating all references | CTRL + R | CMD + R
Compile the code | CTRL + Shift + B | CMD + Shift + B

Visual Studio 2019 is packed with features, and there are so many customization options that can enhance your productivity working with Unity, depending on your specific workflows, that there are too many to cover them all here. We hope that the few tips we shared will inspire you to dive in, and that you find the format useful. Let us know if you have any tips we didn't cover, and feel free to share them with the community in the comments. We'd also love to hear whether you would like more tips and tricks on how to improve your workflows in Unity, and if there are any topics in particular that you would like to see covered in a future blog post.

We are constantly working on improving these workflows, and our teams are working closely with Microsoft to give you the best IDE experience. We would love to hear from you if you have any ideas or feedback. Feel free to ping John Miller (@jmillerdev), Senior Program Manager, Visual Studio Tools for Unity at Microsoft, or share your feedback with us in our Scripting forum.

>access_file_
1386|blog.unity.com

Making of The Heretic: Digital Human tech package

Creating a realistic human is a complex technical challenge, as you need a huge amount of data to achieve a high level of visual fidelity. When working on The Heretic, the Demo Team developed tools to overcome many problems related to facial animation; attaching hair to skin; and eye, teeth, and skin rendering in Unity. Those tools are now available on GitHub. Read on for a full technical breakdown of the process behind these solutions.

My name is Lasse Jon Fuglsang Pedersen, and I am a Senior Software Engineer on the Unity Demo Team. During the production of The Heretic, one of the things I worked on is the set of technical solutions that drive the face of Gawain, the digital human in the film. This work was recently released as a standalone package on GitHub. In this blog post I will discuss some of the features of the package and share some insights into the development process behind those features.

One of the goals for the digital human in The Heretic was to attempt to avoid the uncanny valley in terms of facial animation, while still taking a realistic approach to the character as a whole. To match the actor's performance as closely as possible, we decided to try using 4D capture data (a 3D scan per frame of animation) for the animation of the face mesh, which would then at least have the potential to be geometry-accurate to the actor's facial performance (where not obstructed). Using 4D capture data brought many interesting new challenges, as Krasimir Nechevski, our animation director, explains in more detail in this previous blog post. A lot of effort went into figuring out how to process and tweak the captured data, and then actually doing that, to get it into a state that we were happy with for the film.

As an example, one of the issues we had was related to the geometry of the eyelids. Because eyelashes partially obstructed the eyelids during capture, the captured data also contained some influence from the eyelashes, which manifested itself as noise in those regions. As a result, the geometry of the eyelids was inaccurate and jittery, and this meant that we had to find a way to reconstruct the geometry in those regions. The issue with the eyelid geometry was apparent quite early in the process, so as part of working on the importer for getting the data into Unity, we also experimented with region-specific noise reduction and reconstruction, using differential mesh processing techniques. Specifically, we would perform noise reduction by smoothing the regional change in curvature over time, and we would perform reconstruction by transplanting the curvature of a (clean) base mesh onto each damaged region of each frame of the captured sequence.
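The shipped tools implement that differential-curvature approach; purely to illustrate the simpler underlying idea of smoothing change over time in a masked region, here is a naive, hypothetical sketch (not the package's implementation, which operates on curvature rather than raw positions):

using UnityEngine;

// Illustrative only: a naive temporal low-pass filter applied to vertices
// inside a damaged region (e.g., the eyelids). The real tools use
// differential mesh processing, which is more involved.
public static class TemporalSmoothing
{
    // frames[f][v] = position of vertex v at frame f; mask marks vertices to smooth.
    public static void Smooth(Vector3[][] frames, bool[] mask, float alpha = 0.5f)
    {
        for (int f = 1; f < frames.Length; f++)
            for (int v = 0; v < frames[f].Length; v++)
                if (mask[v])
                    frames[f][v] = Vector3.Lerp(frames[f - 1][v], frames[f][v], alpha);
    }
}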
While the results were fairly robust, we felt they were unfortunately also a bit too synthetic when compared to the original data: the eyelids, while more stable, lost some of the original motion that effectively made them feel human. It became clear that we needed a middle ground, which might have required more research than we realistically had time for. So when an external vendor offered to tackle the reconstruction work, that was an easy choice. The GitHub package, however, includes the internal tools originally written for denoising and region transplant, as they might be useful as a learning resource.

Another issue we had was that of fine surface details, or rather the lack of fine surface details, due to the resolution of the target face mesh: the face mesh of Gawain has ~28,000 vertices, and this was not sufficient for geometrically representing the fine wrinkles of the actor's performance, much less the stretching of pores in the skin. Even if the raw 4D data had some of those details, we were not able to keep them after processing the data to fit the vertex budget of the mesh we were deforming and rendering. We considered baking a normal map per frame, but that would have required quite some space on disk, which we wanted to conserve.

To handle the fine surface details, we decided to try to couple the geometry of the imported sequence with the pose-driven feature maps from a blend shape-based facial rig from Snappers Systems. The pose-driven feature maps from the facial rig contained the type of surface detail that we were missing in the imported sequence, like wrinkles and the stretching of pores. So the basic idea was this: if we could figure out which combination of blend shapes would best approximate each frame of 4D, then we should also be able to use those weights to drive just the pose-driven feature maps (excluding the deformation from the blend shapes), for added surface detail during 4D playback.

Finding a good way to fit the blend shapes to the 4D was a two-step process. The first step was a least squares approach, for which we put the problem in matrix form. If we write up all the blend shapes (which are deltas to the base mesh) as one large matrix A, where each column holds the delta of a single blend shape, then the composite delta is given by Ax = b, where x represents the weights of the individual blend shapes. Solving for x directly is often not possible, due to A often not being invertible (in our case it is not invertible, simply because it is not square). It is, however, often possible to arrive at an approximate solution x*, by formulating the problem slightly differently: using the so-called normal equation AᵀAx* = Aᵀb, we can write the least squares solution as x* = (AᵀA)⁻¹Aᵀb, which then only requires that A has linearly independent columns. Working with blend shapes, we need to filter the included shapes to ensure that they are linearly independent, and then we can work towards an approximate solution: we precompute (AᵀA)⁻¹Aᵀ for the filtered blend shapes of the rig, and then we plug in the delta b for each frame of 4D, to compute x* (the fitted weights) for each frame.
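To make the matrix form concrete, here is a small illustrative sketch that assembles the normal equations AᵀAx = Aᵀb from blend shape deltas (names are hypothetical; the actual solve, including the non-negative variant discussed next, would be delegated to a linear algebra library):

// Sketch: build the normal equations for a blend shape fit.
// shapes[j][i] is the i-th component of blend shape j's delta (flattened xyz),
// target[i] is the i-th component of the 4D frame's delta to the base mesh.
public static void BuildNormalEquations(
    float[][] shapes, float[] target, out float[,] ata, out float[] atb)
{
    int n = shapes.Length; // number of (linearly independent) shapes
    ata = new float[n, n];
    atb = new float[n];
    for (int j = 0; j < n; j++)
    {
        for (int k = j; k < n; k++)
        {
            float dot = 0f;
            for (int i = 0; i < target.Length; i++)
                dot += shapes[j][i] * shapes[k][i];
            ata[j, k] = ata[k, j] = dot; // AᵀA is symmetric
        }
        float rhs = 0f;
        for (int i = 0; i < target.Length; i++)
            rhs += shapes[j][i] * target[i];
        atb[j] = rhs;
    }
}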
While the unconstrained least squares approach outlined above was nice for building a basic understanding of the problem, it did not work well for us in practice. The solution would sometimes contain negative weights, to get closer overall to the given 4D frame. But the facial rig expected blend shapes to be only added, not subtracted, so the fitted weights effectively exceeded the constraints of the rig, and therefore it was not always possible to translate them into meaningful wrinkles. In other words, we needed a non-negative solution to get the wrinkles that we wanted. To compute the non-negative solution, we used a subset of a third-party library called Accord.NET, which contains an iterative solver specifically for the non-negative least squares problem. After having dissected the problem and tested the unconstrained solution, we already had the filtered blend shape matrix A and the desired delta b, and it was straightforward to plug those into the iterative solver to obtain a non-negative set of fitted weights for each frame of 4D.

As a side note, we also experimented with computing the fitted weights based on mesh edge lengths, as well as based on edge curvatures, rather than base mesh position deltas. If we had not been able to remove the head motion from the 4D data, we would have needed to use one of these paths to make the fit independent of the head motion. For Gawain, we ended up fitting the position deltas, but the other two options are still available in the package.

Before getting the 4D data into Unity, it is important to note that we first rely on external tools to ensure that the 4D capture data is turned into a sequence of meshes (delivered as .obj) with frame-to-frame matching topology. The topology also needs to match that of the target mesh for which the data is imported. (See Krasimir Nechevski's blog post for more details.) Then, to get the preprocessed 4D data into Unity and turn it into a runtime-ready clip, the package provides a custom type of asset that we call the SkinDeformationClip. Once created, a SkinDeformationClip exposes the tools for importing (and optionally processing) a segment of 4D data, which can be specified either as a path to .obj files anywhere on disk (removing the need to include intermediate assets in the project) or as a path to Mesh assets already within the project.

After configuring the SkinDeformationClip asset, click the Import button in the Inspector to start importing and processing the frame data. Note that if either mesh processing or frame fitting is enabled on the asset, this can take a while. After the import finishes, the clip asset stores the imported frame intervals, fitted weights, etc., but not the final frame data. The frame data is stored in a separate binary next to the asset, so that we can stream the data efficiently from disk during playback.

Once imported, you can play back the asset by dragging it onto a custom type of track for Unity Timeline, called the SkinDeformationTimeline. This type of track specifically targets a SkinDeformationRenderer component, which then acts as an output for the clip data on the track. The video below illustrates the process of sequencing and playing back the imported 4D data on the Timeline. Through the custom track and the SkinDeformationRenderer, it is also possible to blend multiple 4D clips, which allows artists to get creative with the data. For example, for the first part of The Heretic we used only a very short segment of 4D data, which contained just a test phrase and the three-second performance for the initial close-up. And yet, through careful reuse (cutting, scaling, and blending), it was possible to use this same single clip for the remaining facial animation in the entire first part of the film.

Since we chose to use the 4D data directly for the facial animation, we could not rely on bone-weighted skinning or blend shapes to resolve the positions of important secondary features, such as eyelashes, eyebrows, or stubble. Basically, we needed a way to resolve these features as a function of the animated face mesh itself. Technically, we could have loaded the processed 4D data into an external tool, modeled and attached the secondary features there, and baked out additional data for all of them.
However, streaming in tens of thousands of extra vertices per frame was not viable in terms of storage, and the result also would not have been very dynamic. We knew that we needed to iterate on the 4D data throughout the production, so our solution would have to react to these iterations without a tedious baking step. To solve this problem, the Digital Human package has a feature that we called the skin attachment system. This system basically allows attaching arbitrary meshes and transforms to a given target mesh at editor time, and then resolves them at runtime to conform to the target mesh, independent of how the target mesh is animated.

For the digital human in The Heretic, we used the skin attachment system primarily to drive the eyebrows, eyelashes, stubble, and logical markers in relation to the skin. We also used the system to attach the fur mesh to the jacket, as Plamen Tamnev, Senior 3D Artist on the team, has described in more detail. To illustrate how to use the system, here are the steps to attach, for example, the transform of a GameObject to the face of Gawain:

1. Add a SkinAttachment component.
2. In the Inspector, set the type of attachment to Transform.
3. In the Inspector, point the target field to the SkinAttachmentTarget on the face.
4. Move the transform to the desired relative location.
5. Click the Attach button in the Inspector.

Under the hood, when you click the Attach button to attach a transform, the system uses the position of the transform to query a k-d tree for the closest vertex on the face mesh. The closest vertex is then used to identify all of its incident triangles, and for each of those triangles, the system generates a local pose given the current position of the transform, resulting in a set of local poses for the transform. Each local pose is a projection of the attached point onto the plane of a given triangle, and it contains the vertex indices of the triangle, the normal distance from the attached point to the triangle, and the barycentric coordinates of the projected point. The reason that we generate multiple local poses for each attached point, rather than just one for a single triangle, is to support points that do not belong to any particular triangle in the mesh. This is the case for some of our hair cards, for example, which float slightly above the mesh. To resolve the attached point based on multiple local poses, we first unproject the local pose for each triangle separately, and then average the results, weighting by triangle area.
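Conceptually (and with hypothetical names; the shipped system is batched and Burst-compiled, so this is illustrative only), a local pose and its resolve step could look like this:

using UnityEngine;

// Simplified sketch of a skin attachment local pose and its resolve step.
public struct LocalPose
{
    public int v0, v1, v2;   // vertex indices of one incident triangle
    public float u, v, w;    // barycentric coords of the projected point
    public float height;     // signed distance along the triangle normal
}

public static class SkinAttachmentResolve
{
    // Unproject each pose against the current (deformed) vertices and
    // average the candidates, weighting by current triangle area.
    public static Vector3 Resolve(LocalPose[] poses, Vector3[] vertices)
    {
        Vector3 sum = Vector3.zero;
        float weightSum = 0f;
        foreach (var p in poses)
        {
            Vector3 a = vertices[p.v0], b = vertices[p.v1], c = vertices[p.v2];
            Vector3 cross = Vector3.Cross(b - a, c - a);
            float area = 0.5f * cross.magnitude;
            Vector3 normal = cross.normalized;
            Vector3 onPlane = p.u * a + p.v * b + p.w * c;
            sum += (onPlane + normal * p.height) * area;
            weightSum += area;
        }
        return weightSum > 0f ? sum / weightSum : sum;
    }
}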
Once generated, the local poses are stored in a large continuous array, along with all the other local poses for all the other attachments to the face. Each attachment keeps a reference into this data, along with a checksum, as a safety measure in case the underlying data is modified by other means. The process of attaching a mesh is very similar to that of a transform, just many times over: when attaching a mesh, the system simply generates a set of local poses for each vertex in the mesh, rather than for the single position of a transform.

For meshes, there is also a secondary attachment mode called MeshRoots. With this mode, the system first groups the mesh into islands based on mesh connectivity, and then finds the "roots" of each island in relation to the face mesh. Finally, it attaches every vertex within each island in relation to the closest root within the island. The MeshRoots mode is necessary for some use cases, because it ensures that the individual islands stay rigid. For example, the eyelashes are attached in this way, while the eyebrows are not. This is because the hair cards for the eyebrows are mostly flush with the skin and expected to deform, while the hair cards for the eyelashes are expected to maintain shape.

At runtime, the system takes care that the positions and vertices of the attachments (transforms as well as meshes) are continuously updated to conform to the face mesh. Every frame, the final output state of the face mesh is calculated and used in combination with the known local poses to resolve all positions and vertices in relation to the skin. The image below illustrates the density of the attachments we used for Gawain. The runtime resolve is accelerated by the C# Job System and the Burst Compiler, which enables it to handle a relatively large amount of data. As an example, for the face of Gawain, the resolve jobs were collectively responsible for evaluating hundreds of thousands of local poses every frame, to resolve the secondary features of the face.

When work started on the Digital Human package as a standalone release, one of the primary goals was to transition everything rendering-related to a strictly unmodified version of the High Definition Render Pipeline (HDRP), to ensure better upgradeability through the new HDRP features for extensibility. For context: at the time when we started prototyping the visuals for The Heretic, we were still missing some rather general features in HDRP for extensibility. We did not yet have a sensible way of writing custom upgradeable shaders, and we did not yet have a way to inject custom commands during a frame, e.g., for a custom rendering pass. As a result, the custom shaders for the digital human (and several other effects in the film) were initially prototyped as direct forks of existing materials in HDRP, which at the time was still in Preview and still undergoing major structural changes. Many of the custom shaders also required core modifications to HDRP, which often made upgrades difficult. Thus, we were generally on the lookout for more extensibility features in HDRP, so that we would be able to reduce the number of customizations.

Creating the Digital Human package therefore involved transitioning those then-necessary customizations to the extensibility features now provided by HDRP. This meant porting the digital human custom shaders to Shader Graph, using the HDRP-specific master nodes, and using the CustomPass API for executing the necessary custom rendering passes. Also, thanks to Unity's Lead Graphics Programmer Sebastien Lagarde and a team at Unity Hackweek 2019, HDRP gained the Eye Master node, which was feature-compatible with the custom work previously done for The Heretic and therefore a great help in porting the eyes. In the following sections I will go over the resulting shader graphs for skin, eyes, and teeth, which can all be found in the package. There is also a shader graph for hair in the package, but it is mostly a pass-through setup for the Hair master node.

In general, the skin shader relies heavily on the built-in subsurface scattering feature of HDRP, which readily enables artists to author and assign different diffusion profiles to emulate various materials, including different types of skin.
The skin graph itself uses the StackLit master node, in order to provide artists with two specular lobes (a setup commonly used for skin and not supported by the Lit master node); for this reason, the skin shader is forward-only. For the two specular lobes, the primary smoothness value is provided via a mask map, similar to the regular Lit shader, while the secondary smoothness value is exposed as a tweakable constant in the material inspector. As with the regular Lit shader, the mask map also allows artists to control an ambient occlusion factor, as well as the influence of two detail maps: one for detail normals and one for detail smoothness, where the detail smoothness affects both the primary and the secondary smoothness values.

In addition to the regular mask map, the skin shader also accepts a cavity map (a single-channel texture, with lower values in cavities), which can be used to control a specular occlusion factor and/or reduce smoothness in small cavities, such as pores in the skin. The influence of the cavity map can also optionally be blended out at grazing angles, to emulate the effect of small cavities being hidden from view at grazing angles.

The skin shader also contains support for pose-driven features (e.g., wrinkles) from the specific Snappers facial rig that we used for Gawain. In the skin graph, this functionality is encapsulated in a custom function node, which has some hidden inputs that are not visible in the graph itself. These hidden inputs are driven by the SnappersHeadRenderer component in the package, which needs to be placed on the same GameObject as the SkinnedMeshRenderer that uses the shader. Another curious node in the skin graph is related to the tearline setup, which I will explain a bit later, following the eyes section. Basically, to allow the tearline setup to modify the normals of the skin, we have to compute and store the normals during the depth prepass, and then specifically sample them again in the forward pass (instead of recomputing them, which would discard the intermediate processing).

The custom eye shader for The Heretic was a collaboration with Senior Software Engineer Nicholas Brancaccio, who contributed some of the initial work, including the two-layer split lighting model and the implementation of the evaluation function for the occlusion near the eyelids. For the Digital Human package, some of that previously custom functionality has moved to the Eye Master node in HDRP, which the eye graph uses as an output.

The eye shader effectively models the eye as a two-layer material, where the first layer describes the cornea and the fluids at the surface, and the second layer describes the sclera and the iris, visible through the first layer. Lighting is split between the two layers: specular lighting is evaluated exclusively for the top layer (representing the cornea and surface fluids, which are more glossy), while diffuse lighting is evaluated only for the bottom layer (iris and sclera).

Refraction in the cornea is handled internally, and the effect depends on both the input geometry and a couple of user-specified parameters. The eye input geometry needs to be a single mesh that describes only the surface of the eye, including the bulge of the cornea. Then, given a user-specified cross-section that (roughly) describes where the surface of the cornea begins, we can determine during rendering whether a given fragment is part of the cornea.
If the fragment is part of the cornea, then we refract the view ray and intersect the refracted ray with a virtual plane that represents the iris. The iris plane is adjustable via an offset from the cornea cross-section, enabling artists to adjust the amount of visual parallax in the eye. To evaluate the diffuse lighting in the iris, the eye shader also provides an option for refracting the incident direction of light towards the iris, based on the currently rasterized fragment of the surface (cornea). While this does not give us proper caustics (we only accumulate the contribution from a single fragment at the refracting surface), artists can at least rely on the iris not appearing unnaturally in shadow when the eye is lit, e.g., from the side. The refracted lighting feature is now part of the Eye Master node, and it can be enabled through the Eye Cinematic mode.

We model the occlusion near the eyelids using an anisotropic spherical Gaussian. The distribution is driven by four markers (transforms) that track the eyelids using the skin attachment system. Specifically, we use two markers to track the corners of the eye to form a closing axis, and then another two markers to track the upper and lower eyelids, which allows us to infer a closing angle. The closing axis and the closing angle are then used to generate the necessary basis vectors for evaluating the anisotropic spherical Gaussian at the surface of the eye. We use the result of this evaluation directly as an input to the ambient and specular occlusion factors on the Eye Master node, as well as to (optionally) modulate the albedo to artificially darken the occluded region.
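The exact evaluation lives in the package's shader code; one plausible construction of the axis and angle from the four markers might look like this hypothetical sketch (not the package's actual math):

using UnityEngine;

// Hypothetical sketch: derive a closing axis and closing angle for the
// eyelid occlusion model from four skin-attached markers.
public static class EyelidOcclusionBasis
{
    public static void Build(
        Vector3 innerCorner, Vector3 outerCorner,
        Vector3 upperLid, Vector3 lowerLid, Vector3 eyeCenter,
        out Vector3 closingAxis, out float closingAngle)
    {
        // Axis between the two corners of the eye.
        closingAxis = (outerCorner - innerCorner).normalized;

        // Angle subtended by the upper and lower lids, seen from the eye
        // center, as a proxy for how far open the lids are.
        Vector3 toUpper = (upperLid - eyeCenter).normalized;
        Vector3 toLower = (lowerLid - eyeCenter).normalized;
        closingAngle = Vector3.Angle(toUpper, toLower) * Mathf.Deg2Rad;
    }
}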
In the eye graph, most of the described features, including the refraction in the cornea and the occlusion near the eyelids, are facilitated by a single custom function node labelled EyeSetup, which provides a number of readable outputs to the graph itself. Much like the custom function node for the facial rig in the skin graph, the custom function node in the eye graph uses hidden parameters that are not controlled through the material inspector, but through script code, due to the complexity and per-frame nature of those parameters. For the eye graph specifically, the hidden parameters are driven by the EyeRenderer component in the package, which needs to be placed on the same GameObject as the renderer that uses the shader, in order for the shader to produce a meaningful result.

The EyeRenderer component, in addition to computing and passing values to the shader, also provides some useful gizmos and handles that are meant to assist in setting up the eyes. For example, the gizmos allow artists to visualize and tweak the offset to the cross-section that defines the cornea region, or to inspect and slightly adjust the forward axis for the planar texture projection, in case the provided eye geometry is not exactly facing down the z-axis. Lastly, as in the skin graph, the eye graph also has a node that handles the integration with the tearline setup: normals and smoothness are written during the depth prepass, and then sampled again during the forward pass.

To reconstruct the tearline (the wetness between the eyes and the skin), we rely on the HDRP CustomPass API, which allows applications to inject custom rendering work at certain stages during a frame. Using a custom pass that operates on the contents of the HDRP normal buffer (which holds both normal and smoothness data), we blur the normals and smoothness values in specific screen-space regions on the face (e.g., where the eyes meet the skin). Since the skin and the eyes are forward-only materials, we also had to insert specific nodes in those graphs to ensure that they sample the result during the forward pass. Creating a smooth transition in the normal buffer helps visually bridge the two surfaces. In combination with a high smoothness value in the region, this setup will often result in a specular highlight appearing in between the two materials, which effectively makes the tearline region appear wet.

To mark the regions where the blurring should occur, we use a simple setup of masking decals that are placed in a specific user layer and never render any color to the screen (except for debugging purposes). By placing the decals in a specific user layer, we can easily filter and render them exclusively in a custom pass, which mostly just sets one of the HDRP user stencil bits. Once all the masking decals have been drawn into the stencil, we effectively have a screen-space mask for where to perform the blur. We also use this mask to avoid blurring past the edges of the desired blur regions, by dynamically shrinking the width of the blur kernel to the edge of the mask. For the tearline of Gawain, a masking decal was authored specifically for each eye, to visually overlap both the eyelid and the eyeball in the neutral face pose, and then attached to the skin using the attachment system. To support a small gap between the eyeball and the eyelid (as was evident in some of our 4D data), we also slightly exaggerated the geometry, so that it would have more inward-facing overlap with the eyeball.

The teeth shader relies on many of the features of the Lit master node, including the subsurface scattering and the mask for clear coat. Apart from using the existing features of Lit, the shader also adds a custom type of attenuation that we use to smoothly darken the interior of the mouth, based on the current size of the mouth opening. To approximate the current size of the mouth opening, we place six markers near the opening of the mouth, forming a polygon that roughly approximates the interior contour of the lips. For Gawain, we used the skin attachment system to drive these markers, so that they follow the lips regardless of how the face mesh is animated. During rendering, we start by passing this polygon to the shader, and then in the shader we project the polygon onto the unit hemisphere around the current fragment to obtain a spherical polygon. Intuitively, this spherical polygon tells us something about how much of the exterior is visible through the mouth opening, from the point of view of the current fragment. To darken the interior of the mouth, we then simply use the area of the spherical polygon, in relation to the area of the unit hemisphere, as a non-physical attenuation term (ignoring the cosine). Specifically, we attenuate the existing ambient and specular occlusion factors, the coat mask, and the albedo, before passing these to the Lit master node in the graph.
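For intuition, the covered fraction of the hemisphere can be approximated by fanning the polygon into triangles and summing their solid angles; the sketch below uses the Van Oosterom-Strackee formula, which is an assumption on my part, since the production shader's exact formulation is not shown in the post:

using UnityEngine;

// Sketch: approximate the solid angle of the mouth-opening polygon as seen
// from a fragment, via a triangle fan. Illustrative, not the shipped shader.
public static class MouthAttenuation
{
    static float TriangleSolidAngle(Vector3 a, Vector3 b, Vector3 c)
    {
        // a, b, c are unit vectors from the fragment to the polygon corners.
        float numer = Mathf.Abs(Vector3.Dot(a, Vector3.Cross(b, c)));
        float denom = 1f + Vector3.Dot(a, b) + Vector3.Dot(a, c) + Vector3.Dot(b, c);
        return 2f * Mathf.Atan2(numer, denom);
    }

    // Returns the fraction of the unit hemisphere (solid angle 2*pi) covered
    // by the polygon, usable directly as a non-physical attenuation term.
    public static float OpeningFactor(Vector3 fragment, Vector3[] polygon)
    {
        var dirs = new Vector3[polygon.Length];
        for (int i = 0; i < polygon.Length; i++)
            dirs[i] = (polygon[i] - fragment).normalized;

        float solidAngle = 0f;
        for (int i = 1; i < dirs.Length - 1; i++) // fan triangulation
            solidAngle += TriangleSolidAngle(dirs[0], dirs[i], dirs[i + 1]);

        return Mathf.Clamp01(solidAngle / (2f * Mathf.PI));
    }
}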
Similar to the skin and eye graphs, the teeth graph also contains a custom function node whose inputs are not visible in the graph. For the teeth graph, the hidden inputs are provided by the TeethRenderer component in the package, which must be added to the same GameObject as the renderer that uses the shader.

I hope this blog post has helped illustrate some of the challenges and work that went into creating the set of technical solutions for the face of Gawain. If you want to explore or build on the tools that we are sharing, you can download the library from GitHub and use the technology in your own projects, including commercial productions. We would love to see what you make with it! We're sharing more learnings from the creation process behind this project on The Heretic landing page.

>access_file_
1388|blog.unity.com

Achieve better Scene workflow with ScriptableObjects

Managing multiple Scenes in Unity can be a challenge, and improving this workflow is crucial for both the performance of your game and the productivity of your team. Here, we share some tips for setting up your Scene workflows in ways that scale for bigger projects.

Most games involve multiple levels, and levels often contain more than one Scene. In games where Scenes are relatively small, you can break them into different sections using Prefabs. However, to enable or instantiate them during the game you need to reference all these Prefabs. That means that as your game gets bigger and those references take up more space in memory, it becomes more efficient to use Scenes.

You can break down your levels into one or multiple Unity Scenes, and finding the optimal way to manage them all becomes key. You can open multiple Scenes in the Editor and at runtime using Multi-Scene editing. Splitting levels into multiple Scenes also has the advantage of making teamwork easier, as it avoids merge conflicts in collaboration tools such as Git, SVN, Unity Collaborate and the like.

In the video below, we show how to load a level more efficiently by breaking the game logic and the different parts of the level into several distinct Unity Scenes. Then, using Additive Scene-loading mode when loading these Scenes, we load and unload the needed parts alongside the game logic, which is persistent. We use Prefabs to act as “anchors” for the Scenes, which also offers a lot of flexibility when working in a team, as every Scene represents a part of the level and can be edited separately. You can still load these Scenes while in Edit Mode and press Play at any time, so that you can visualize them all together when creating the level design.

We show two different methods to load those Scenes. The first one is distance-based, which is well suited for non-interior levels like an open world. This technique is also useful for some visual effects (like fog, for instance) to hide the loading and unloading process. The second technique uses a Trigger to check which Scenes to load, which is more efficient when working with interiors.

Now that everything is managed inside the level, you can add a layer on top of it to better manage the levels. We want to keep track of the different Scenes for each level, as well as all the levels, during the entire duration of the gameplay. One possible way of doing this is to use static variables and the singleton pattern in your MonoBehaviour scripts, but there are a few problems with this solution. The singleton pattern creates rigid connections between your systems, so they are not truly modular: the systems can’t exist separately and will always depend on each other.

Another issue involves the use of static variables. Since you can’t see them in the Inspector, you need to change the code to set them, making it harder for artists or level designers to test the game easily. And when you need data to be shared between the different Scenes, you have to combine static variables with DontDestroyOnLoad, which should be avoided whenever possible.

To store information about the different Scenes, you can use ScriptableObject, a serializable class mainly used to store data. Unlike MonoBehaviour scripts, which are used as components attached to GameObjects, ScriptableObjects are not attached to any GameObject and can therefore be shared between the different Scenes of the whole project. You want to be able to use this structure for levels but also for menu Scenes in your game.
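The original post illustrated the classes described next with code screenshots; as a stand-in, here is a minimal, hedged sketch of what they might look like (the property names are illustrative, not the post’s exact code):

using UnityEngine;

// Base class shared by levels and menus. Inherits from ScriptableObject,
// not MonoBehaviour, so assets of this type live in the project, not a Scene.
public class GameScene : ScriptableObject
{
    [Header("Information")]
    public string sceneName;          // Must match the Scene asset's name.
    public string shortDescription;

    [Header("Sounds")]
    public AudioClip music;
    [Range(0f, 1f)] public float musicVolume = 1f;
}

// CreateAssetMenu lets you create these from the Assets menu in Unity.
[CreateAssetMenu(fileName = "NewLevel", menuName = "Scene Data/Level")]
public class Level : GameScene
{
    // Level-specific properties go here (par time, enemy count, etc.).
}

public enum MenuType { MainMenu, PauseMenu, CreditsMenu }

[CreateAssetMenu(fileName = "NewMenu", menuName = "Scene Data/Menu")]
public class Menu : GameScene
{
    public MenuType type;             // Lets you pick the menu type in the Inspector.
}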
The GameScene class contains the different common properties between levels and menus. Notice that the class inherits from ScriptableObject and not MonoBehaviour. You can add as many properties as you need for your game. After this step, you can create Level and Menu classes that both inherit from the GameScene class that was just created – so they are also ScriptableObjects. Adding the CreateAssetMenu attribute at the top lets you create a new level from the Assets menu in Unity. You can do the same for the Menu class. You can also include an enum to be able to choose the menu type from the Inspector.

Now that you can create levels and menus, let’s add a database that lists the levels and menus for easy reference (a hedged sketch of this database appears just after this section). You can also add an index to track the current level of the player. Then, you can add methods to load a new game (in this case the first level will be loaded), to replay the current level, and to go to the next level. Note that only the index changes between these three methods, so you can create a single method that loads the level at a given index and reuse it. There are also methods for the menus, and you can use the enum type that you created before to load the specific menu you want – just make sure that the order in the enum and the order in the list of menus are the same.

Now you can finally create a level, menu, or database ScriptableObject from the Assets menu by right-clicking in the Project window. From there, just keep adding the levels and menus you need, adjusting the settings, and then adding them to the Scenes database. The example below shows you what Level1, MainMenu and Scenes Data look like.

It’s time to call those methods. In this example, the Next Level button on the user interface (UI) that appears when a player reaches the end of the level calls the NextLevel method. To attach the method to the button, click the plus button of the On Click event of the Button component to add a new event, then drag and drop the Scenes Data ScriptableObject into the object field and choose the NextLevel method from ScenesData, as shown below. Now you can go through the same process for the other buttons – to replay the level or go to the main menu, and so on. You can also reference the ScriptableObject from any other script to access the different properties, like the AudioClip for the background music or the post-processing profile, and use them in the level.

Minimizing loading/unloading
In the ScenePartLoader script shown in the video, you can see that a player can keep entering and leaving the collider multiple times, triggering the repeated loading and unloading of a Scene. To avoid this, you can add a coroutine before calling the loading and unloading methods of the Scene in the script, and stop the coroutine if the player leaves the trigger.

Naming conventions
Another general tip is to use solid naming conventions in the project. The team should agree beforehand on how to name the different types of assets – from scripts and Scenes to materials and other things in the project. This will make it easier not only for you but also for your teammates to work on the project and to maintain it. This is always a good idea, but it’s crucial for Scene management with ScriptableObjects in this particular case. Our example used a straightforward approach based on the Scene name, but there are many solutions that rely less on the Scene name.
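Before getting to those, here is the promised sketch of the Scenes database – a hedged reconstruction, since the post showed it as images (NextLevel matches the post’s description, while the other member names are ours):

using UnityEngine;
using UnityEngine.SceneManagement;

[CreateAssetMenu(fileName = "ScenesData", menuName = "Scene Data/Database")]
public class ScenesData : ScriptableObject
{
    public Level[] levels;
    public Menu[] menus;              // Keep this list in the same order as MenuType.
    public int currentLevelIndex;

    // All three public level methods funnel into this helper; only the index differs.
    void LoadLevelAtIndex(int index)
    {
        currentLevelIndex = index;
        SceneManager.LoadSceneAsync(levels[index].sceneName);
    }

    public void NewGame()     => LoadLevelAtIndex(0);
    public void ReplayLevel() => LoadLevelAtIndex(currentLevelIndex);
    public void NextLevel()   => LoadLevelAtIndex(currentLevelIndex + 1);

    // Loads a menu by type, relying on the enum order matching the list order.
    public void LoadMenu(MenuType type)
    {
        SceneManager.LoadSceneAsync(menus[(int)type].sceneName);
    }
}

Note that LoadLevelAtIndex ultimately loads a Scene by its name string – which is exactly the fragility discussed next.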
You should avoid the string-based approach because if you rename a Unity Scene, any part of the game that still refers to the old name will fail to load that Scene.

Custom tooling
One way to avoid the name dependency game-wide is to set up your script to reference Scenes as the Object type. This allows you to drag and drop a Scene asset into an Inspector and then safely get its name in a script. However, the underlying SceneAsset is an Editor-only class, and you don’t have access to the AssetDatabase at runtime, so you need to combine both pieces of data for a solution that works in the Editor, prevents human error, and still works at runtime. You can refer to the ISerializationCallbackReceiver interface for an example of how to implement an object which, upon serialization, extracts the string path from the Scene asset and stores it to be used at runtime (a hedged sketch of such a wrapper follows at the end of this post).

In addition, you might also create a custom Inspector to make it easier to quickly add Scenes to the Build Settings using buttons, instead of having to add them manually through that menu and having to keep them in sync. As an example of this type of tool, check out this great open source implementation by developer JohannesMP (this is not an official Unity resource).

This post shows just one way that ScriptableObjects can enhance your workflow when working with multiple Scenes combined with Prefabs. Different games have vastly different ways of managing Scenes – no single solution works for all game structures. It makes a lot of sense to implement your own custom tooling to fit the organization of your project. We hope this information can help you in your project or maybe inspire you to create your own Scene management tools.

Let us know in the comments if you have any questions. We would love to hear what methods you use to manage the Scenes in your game. And feel free to suggest other use cases you would like us to cover in future blog posts.
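As promised above, here is a hedged sketch of that scene-reference wrapper (the class and field names are ours; JohannesMP’s implementation linked above is more complete):

using UnityEngine;
#if UNITY_EDITOR
using UnityEditor;
#endif

// In the Editor this holds a SceneAsset reference; on serialization it bakes
// the Scene's path into a plain string that survives into builds.
[System.Serializable]
public class SceneReference : ISerializationCallbackReceiver
{
#if UNITY_EDITOR
    [SerializeField] SceneAsset sceneAsset;   // Drag a Scene asset here in the Inspector.
#endif
    [SerializeField] string scenePath;        // Usable at runtime with SceneManager.

    public string ScenePath => scenePath;

    public void OnBeforeSerialize()
    {
#if UNITY_EDITOR
        // Refresh the stored path from the asset, so renames are picked up
        // whenever the object is re-serialized in the Editor.
        if (sceneAsset != null)
            scenePath = AssetDatabase.GetAssetPath(sceneAsset);
#endif
    }

    public void OnAfterDeserialize() { }      // Nothing to do on load.
}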

>access_file_
1389|blog.unity.com

Making of The Heretic: The VFX-driven character Morgan

Creating a character with the help of the Visual Effect Graph was an interesting challenge for the Unity Demo team. As someone who has spent a lot of his career waiting for renders to finish, Demo team Technical Artist Adrian Lazar has a real appreciation for the creative options made possible by real-time authoring. Read the post below for his detailed breakdown of the process behind the character Morgan, as well as useful tips for anyone doing VFX in Unity.

My name is Adrian Lazar and I’ve been working in the computer-generated graphics industry for the last 18 years or so, starting with post-production in advertising and transitioning to real-time graphics with game development in 2009. I have a generalist background, and in the last few years I’ve been taking on more technical art tasks - this helped me ship my own indie title together with a small but talented team.

When I joined Unity’s Demo Team in early 2019 as a technical artist, we were getting ready to release the first part of The Heretic, so I helped with some finishing effects. Soon after that we started talking about Morgan, the god-like, VFX-driven character introduced in the second part of the short film.

On the storytelling side, Vess (Veselin Efremov, writer, director and creative director of The Heretic) had some clear requirements: Morgan needed to morph between multiple states - calm and angry, female and male, or a combination of these - grow in height multiple times over, crumble, catch fire, and more. Regarding the appearance, on the other hand, Vess intentionally left it quite open for exploration and experimentation. We had some early concepts created by our former colleague Georgi Simeonov, but those didn’t get into the VFX and shape-shifting aspects of the character, both fundamental to the final look - this meant that I had a pretty blank slate to start from, which was challenging but fun!

I started my initial tests in Houdini, a tool I was comfortable with from before and one that gave me a good opportunity to explore and bounce some initial ideas off Vess. Of course, for the production of the real-time short film, we wanted the effects that build Morgan to be developed inside Unity, so that it would be easy to iterate on the character and make sure it reacts correctly to everything else that happens around it. Therefore I had to look for a different solution and move away from pre-simulating the effect in other software.

One thing that gives me joy as part of Unity’s Demo Team is having the chance to stress test, improve, and sometimes develop tools and processes that our many users can make use of daily. In Morgan’s case, the opportunity was twofold: create a complex VFX-driven character that runs in real time and, equally important, take a first step into real-time authoring of complex effects. This was very exciting for me. Just think about it: being able to develop and iterate on the character look in the final environment, from the desired camera angle and with the final lighting, post-processing, and other VFX!
This is a dream that would have been unthinkable only a few years back. It was not a smooth ride, but with a good team effort, we achieved both.

And so, with a visual look open to experimentation and with real-time playback and real-time authoring as the two technical goals, I turned to a Unity tool still in its infancy back then: the Visual Effect Graph developed by Julien Fryer and the Paris team. The VFX Graph was still quite new at the time, and had a long road ahead until you could really use it for true, deep real-time authoring. However, the benefits it promised were huge. I was excited about not having to wait for the effects to be simulated in a DCC, exported and evaluated in Unity, then taken back to the DCC for tweaks. As a team, we knew that we wanted to be able to make changes until the very end, sometimes hours before the final deadline - and why wouldn’t we? This is one of the promises of real-time graphics.

It was a back-and-forth familiar to those working on both art and tech: first, you need to have an idea of what you want to achieve, then you need to build the tech for it. But, as with any other creative process, things are rarely straightforward. Ideas are changed and adapted in the process, sometimes due to creative direction, other times due to tech restrictions. To complicate things even more, we were in uncharted waters with a tool that hadn’t been used at this scale before, working towards real-time authoring of a complex VFX-driven character.

So the tech and the look-dev closely followed each other, and when I had both it was great: fast iterations, fast experimentation, and just overall lots of fun. This creative freedom was addictive, with the visuals changing direction in a manner that was closer to working on concept art than on a production. One added benefit of fast experimentation is that you can quickly change directions and repurpose a test, as was the case when I was working on some details across the face. It didn’t turn into anything that was useful for that purpose, but it inspired a new direction for the calm version of the hair.

But there were a few times when I got stuck because one part couldn’t advance without the other. If I didn’t know where to take the character creatively, or I couldn’t find a way to do what I wanted with the tech I had, things couldn’t move forward. Luckily my colleagues were there to help, so here’s a shout-out to them:

Vlad Neykov (Lead Graphics Test Engineer) was a fantastic resource, patiently answering all my questions and contributing ideas about how to solve different technical problems.

Julien Fryer (VFX Graph Lead Programmer) helped optimize Morgan for The Heretic. Stress testing our tools on internal productions like The Heretic allowed us to discover new opportunities for improvement.
Morgan’s performance requirements led to the development of an LOD system by Julien, which will soon be available to all VFX Graph users.

Paco (Plamen Tamnev, 3D Artist in the Demo Team) provided a much-needed visual pass over Morgan - working with him was a great experience.

Krasi (Krasimir Nechevski, our Animation Director), who led and supervised the rig and animation of the character, and even performed the motion capture, had invaluable feedback for how the effects in the meteorite and the impalement shots should behave.

After three major versions and countless smaller experiments, the final version of Morgan was finally emerging, just as we were getting close to the final deadline.

VFX Morphing: The effects covering Morgan can be morphed between a brighter version and a darker one. This is not (only) a color change; there are two separate sets of systems, each with its own geometry, density, color, effects, etc. The idea was to make the effect similar to a skin layer being peeled or shed off, but without making it look gross or graphic. During the morph, particles are expelled from the body, and the face also has an emissive edge outlining the change. There are multiple options for customizing this effect in real time, including changing the colors and meshes for the body, how many particles are being expelled, how those look, and more.

Body Morphing: Independent of the VFX morph, the body itself can be morphed using blend shapes, changing from female to male, with the effects adapting to the new shape.

Fire: While it looks more like embers than proper fire, this highly customizable effect can be used to add an extra layer of intensity when required. It was the last effect built, just a few weeks before the final deadline. The contribution amount can be set for each body part (helmet, head, eyes, body & cloth), the direction and turbulence can be modified in real time, and you can also change the color gradient of the fire between the two states of the morph. You can also choose between three output modes: lit mesh (used in The Heretic), unlit (used in the Morgan standalone package), and screen-space blur/distort - an experimental mode that didn’t make it into the final video but is still available in the Morgan Asset Store package, under fire options.

Crumble: As the name suggests, Morgan can crumble into pieces, with several options for how this can happen. The effect uses a starting point that the user can set (for The Heretic, it was the tip of the right index finger) and uses that origin position to create a radial gradient across the entire body. That gradient is modulated by two noise masks, and the final result is used to drive two sliders, Crumble A and Crumble B. The idea of having two sliders was to make the crumble effect a bit more controllable during animation, as Crumble A runs only across one of the noise patterns and doesn’t destroy the body completely. Crumble B can be animated with a small delay to complete the destruction. After the crumbling effect is triggered, you need to use either the Reset or the Recompile VFX Graph option to bring the character back to its previous state. This is required because the crumbling activates a few physical forces under Update.
That means that the previous state cannot be played back, only reset. For The Heretic short film, we wanted a more physical destruction, so we combined this crumble effect with a simulation exported from Houdini.

Debug: Being able to debug properties early on was crucial for experimenting with different versions of this character; for example, you can use the debug option to see the noise masks described above. Debug options were built alongside each main feature.

Realtime Customization: Each of the features mentioned above has several customization options exposed in Morgan’s Inspector. In total, there are over 300 parameters that can be animated over time as you desire.

Other Related Effects: While most of the time was spent on Morgan’s character, there are also two other big effects: the meteorite, when part of the arm moves towards Gawain, and the spikes impaling him.

Morgan is made of 17 visual effect graphs, each covering a different part; we did this so that it would be easier to manage them. First, we needed the particles to spawn on the skinned mesh and follow it during character animation. As skinned meshes aren’t yet supported by default in the VFX Graph, we had to find a way around it. The positions, normals, and tangents for the base meshes are rendered in UV space and then set as texture parameters in the VFX graphs - this allows us to position and orient the particles correctly on the character. The vertex color and the albedo texture are also rendered in UV space; these textures are used to manipulate certain properties like size, scale, angle, and pivot. For the Morgan package, the process of generating the textures was greatly improved by my colleagues Robert Cupisz (tech and rendering lead for The Heretic) and Torbjorn Laedre (principal engineer in the Unity Demo Team).

A custom editor centralizes all the graphs making up Morgan - this makes it easy to update shared properties fast. There are about 300 parameters exposed, but there’s no real limit to how many can be added; however, having too many parameters in the interface can make it less practical to work with.

Using animation curve samplers is a quick way to add different kinds of ease-in and ease-out to animations, as well as delays. They are very versatile and I have used them a lot, usually to drive a lerp between two values, be they floats, vectors, colors, etc. Tying multiple properties and actions to a single parameter is really powerful. For example, the fire intensity slider drives multiple properties related to the fire (a hedged sketch of this kind of hookup follows at the end of this section). Again, it’s a matter of balancing convenience and speed versus granularity of what gets exposed in the end.

Subgraphs are great for reusing the same nodes across multiple graphs, but be careful about the point in development at which you add them. The way they’re currently implemented, each time you update and compile a subgraph, all the graphs using it get recompiled. Depending on how many and how heavy the main graphs are, it can take a long time for them to recompile. My suggestion is to add subgraphs once you have tested the logic in one graph; and if it happens that you need to make frequent updates to the subgraphs, it might be faster to copy-paste the nodes into one of the main graphs, make the changes, and then copy-paste them back in. Not that convenient, but I had to do it on several occasions because the compilation time across all the graphs for each tweak was too long.
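As promised above, here is a hedged C# sketch of a master parameter fanning out to several exposed VFX Graph properties (the property names and value ranges below are invented, not the actual parameters of the Morgan graphs):

using UnityEngine;
using UnityEngine.VFX;

// One master slider driving several exposed properties on a VisualEffect.
[ExecuteAlways]
public class FireIntensityDriver : MonoBehaviour
{
    [SerializeField] VisualEffect fireGraph;
    [Range(0f, 1f)] [SerializeField] float fireIntensity;

    void Update()
    {
        if (fireGraph == null) return;

        // A single parameter drives spawn rate, particle size, and ember color.
        fireGraph.SetFloat("SpawnRate", Mathf.Lerp(0f, 2000f, fireIntensity));
        fireGraph.SetFloat("ParticleSize", Mathf.Lerp(0.01f, 0.05f, fireIntensity));
        fireGraph.SetVector4("EmberColor",
            Color.Lerp(Color.black, new Color(1f, 0.35f, 0.05f), fireIntensity));
    }
}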
For most instances, I find sliders for exposed parameters to be more user-friendly than regular numerical inputs. They feel more fluid and natural during tweaking, and you can use the range to keep the user within values that are safe and won’t break the effect. Every parameter of a graph can be exposed so that the user can change it from the Inspector, but that doesn’t mean it should be. Be careful not to drown yourself - or the person who will use the effect - in too many options. With Morgan, I went through many changes over what was exposed, searching for a good balance between accessibility and power.

Although not a technical tip, something that I underestimated was that solving both the technical and the artistic challenge at once was more difficult than the two taken separately, because of how interlocked they were in this case.

The best way to learn more about Morgan is to download the standalone package and play with it. You can find more blog posts and videos about the creation process behind this project on The Heretic landing page. For someone who has spent a long time waiting for simulations and renders to finish, this real-time revolution that is reshaping so many industries is a dream come true. I’m looking forward to the next challenge we’ll pick up.

>access_file_
1390|blog.unity.com

Blend virtual content and the real world with Unity’s AR Foundation, now supporting the ARCore Depth API

Unity’s AR Foundation 4.1 supports Google’s new ARCore Depth API. With the addition of this capability, AR Foundation developers can now deliver experiences that blend digital content with the physical world more realistically than ever before.

With its extensive feature set and vast reach, Google’s ARCore is one of the most popular and powerful SDKs available to developers of augmented reality (AR) experiences. We have been working closely with Google to ensure Unity users have swift access to newly released ARCore features. The release of the ARCore Depth API is a significant milestone, as it enables enhanced understanding of physical surroundings as well as more realistic visuals in AR Foundation-based experiences.

ARCore can take advantage of multiple types of sensors to generate depth images. On phones with only RGB cameras, ARCore employs depth-from-motion algorithms that compare successive camera images as the phone moves to estimate the distance to every pixel. This method makes depth data available on hundreds of millions of Android phones. And on devices that include a time-of-flight camera, the depth data is even more precise.

AR Foundation now includes the following new features:
- Automatic occlusion
- Access to depth images

The most obvious effect of ARCore’s depth information is the ability to realistically blend digital content and real-world objects. We’ve expanded AR Foundation’s existing support for pass-through video to include per-pixel depth information provided by ARCore, so that occlusion “just works” on supported devices. By simply adding the AR Occlusion Manager to the same GameObject that holds the AR Camera and AR Background Renderer components, the depth data is automatically evaluated by the shader to create this blending effect. When occlusion is combined with AR Foundation’s existing support for ARCore’s Lighting Estimation capabilities, augmented reality apps can achieve almost seamless visual quality.

AR Foundation provides developers with convenient access to the same per-pixel depth data it uses for automatic occlusion. Depth data is a powerful tool that allows developers to add rich interactions with the user’s surroundings. For example, the depth data could be used to build a representation of real-world objects that can be fed to Unity’s physics system. This creates the opportunity for digital content to appear to respond to and interact with the physical surroundings. This capability opens the door to novel AR game experiences such as The SKATRIX by Reality Crisis. This upcoming title leverages the ARCore Depth API to generate meshes that transform the physical surroundings into an AR skatepark. Having access to the raw depth data gives developers the tools to create unique interactive AR experiences that weren’t previously possible.

The 4.1 versions of the AR Foundation and ARCore XR Plugin packages contain everything you need to get started and are compatible with Unity 2019 LTS and later. Samples demonstrating how to set up automatic occlusion and depth data are located in AR Foundation Samples on GitHub.

We’re excited to see the enhanced visuals and rich experiences made possible by the ARCore Depth API, and we look forward to continuing our close collaboration with Google to bring more awesome AR functionality to AR Foundation developers. For more information, please check out Google’s ARCore Depth API announcement and the Depth Lab app to see examples of this tech made in Unity.
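To give a flavor of the setup described above, here is a hedged C# sketch (AR Foundation 4.1-era API; check the official samples for the canonical setup). For automatic occlusion alone, adding the AROcclusionManager next to the AR Camera is enough and no code is required; this script is only for custom effects that want the raw depth. The material property name is ours:

using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Reads the environment depth texture that the AROcclusionManager produces
// and forwards it to a material for custom use.
[RequireComponent(typeof(AROcclusionManager))]
public class DepthTextureForwarder : MonoBehaviour
{
    [SerializeField] Material targetMaterial;   // Material with a _EnvironmentDepth slot (our name).
    AROcclusionManager occlusionManager;

    void Awake()
    {
        occlusionManager = GetComponent<AROcclusionManager>();
    }

    void Update()
    {
        // Null on devices or frames where depth is not (yet) available.
        Texture2D depth = occlusionManager.environmentDepthTexture;
        if (depth != null && targetMaterial != null)
            targetMaterial.SetTexture("_EnvironmentDepth", depth);
    }
}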
Finally, join us on the Unity Handheld AR forums as you try out this latest version of AR Foundation. We’d love to hear about what you’ve created using the new features, and we welcome your feedback.

>access_file_
1394|blog.unity.com

What is gametech? An overview of the ecosystem

Just five years ago, gaming was a $91 billion-a-year industry; fast forward to 2020, and it is set to generate an astonishing $160 billion in revenue. To put this figure into perspective, that makes the gaming market more than double the size of the global recorded music industry (around $19 billion) and the worldwide film box office (around $43 billion) combined.

While there have always been technology companies supporting the video game industry, the explosive growth it has experienced in the last decade, spearheaded by mobile gaming, has fed and fueled an extensive ecosystem of supporting companies, which are further accelerating this growth. As these companies have consolidated their positions within the different layers of the industry, and increased their contribution to its continuous growth, leading market research firm Newzoo released an infographic to map out the ecosystem, which has become known as “gametech”. Below, we look at what defines this ecosystem and dive into the main categories and companies.

What makes a gametech company
The evolution and rising importance of this sector led Newzoo to mark a separation between companies with relevant products and those with dedicated solutions that answer the specific needs of the people in the field, building the games. Whereas some companies build products to support the widest variety of customers with one-size-fits-all solutions, the defining characteristic of a gametech company is having a product optimized towards serving the gaming industry, from console to mobile. It is this focus that has catalyzed the explosive growth of the market, and as competition increases at every layer, new innovations are born that propel its growth further.

A case in point is the rise, in the last few years, of innovative new tools that empower game developers to better measure, optimize, and automate their ad monetization. This is a response to the growing significance of ads as a meaningful revenue stream for game developers. The explosive arrival of hyper-casual mobile games, whose entire business model is based on ad revenue, has added impetus to this, and it is reflected in the map’s categories. Let’s dive into Newzoo’s map of the gametech ecosystem.

The gametech landscape
Newzoo’s map divides the gametech ecosystem into four categories, each with multiple subcategories. We’ll go through all four, highlighting some notable companies from each.

Development
Development includes every aspect needed to actually build a game. It is broken down into engines, like Unreal (owned by Epic Games and a large factor behind their $15 billion valuation) and Unity (valued at $6 billion); art tools like Blender (which received a $1.2M grant from Epic Games last year) and Adobe’s Substance; middleware like Easy Anti-Cheat and BattlEye; in addition to audio tools, software management tools, and localization tools. The latter have become particularly important in recent years, as mobile devices have become more affordable across the globe, leading to the proliferation of the global mobile gaming audience. At the same time, the technology to target these users at scale through UA campaigns has become increasingly advanced. The combination of these factors has provided fertile ground for the growth of localization tools.

Operations
In the gametech ecosystem, the operations category is focused largely on mobile games.
Newzoo breaks it down into game performance monitoring, with leading companies including GameBench and Bitbar; ad mediation companies like ironSource and AdMob (owned by Google), which optimize ad monetization operations; ad monetization tools, such as ironSource and Facebook Audience Network; and customer engagement tools such as OneSignal and Airship. In addition, game analytics is an important category that helps developers measure and analyze the health of their games; notable companies in this space include devtodev and the appropriately named GameAnalytics.

Another category within operations is platforms, which is dominated by a handful of players: Steam and Epic for PC games, PlayStation and Xbox for console games, and Google Play and the App Store dominating the mobile space. Epic’s store, launched by the makers of Fortnite in December 2018, has proven that it’s possible for newcomers to penetrate this market, but it’s still relatively early days. The final subcategory in Newzoo’s breakdown of operations is infrastructure and cloud services, like Google Cloud, Firebase (purchased by Google in 2014), and AWS: Game Tech.

Growth
The marketing ecosystem within gametech is equally multi-layered. User acquisition is a significant part, with the pie including companies like ironSource, Unity Ads, and Facebook. A crucial component of UA, ad creatives, has become a subcategory of growth itself, with companies like Luna Labs and AppOnboard carving a niche for themselves. App Store Optimization (ASO) has also emerged as a subcategory of its own in recent years, with companies like SplitMetrics and Storemaven helping UA managers improve their games’ app store pages to better attract and convert users. In the context of streaming’s explosive growth, influencers, another growth subcategory, have become increasingly important, particularly for PC and console titles, which dominate the streaming space. Influencers are also of growing use to mobile games, in the context of increased automation within user acquisition and competition for users’ attention. Another important part of the UA process is ad attribution and UA analytics, which allow UA managers to track where their acquired traffic of new gamers has come from, in addition to providing various fraud solutions. Key players in this space are AppsFlyer, valued at over $1 billion, and Adjust, which received $255 million in investment last year.

Market Analysis
Analyzing all this growth are companies such as Newzoo, GameRefinery, App Annie, and GameAnalytics, which was bought by Chinese ad-tech firm Mobvista in 2016. Research firm Quantic Foundry also provides valuable insight into gamer motivations.

Looking ahead
The growth of the gametech ecosystem shows no signs of slowing down, as games become fixtures in the way people spend their free time. There are numerous reasons why it is superseding other entertainment categories such as film and television. Perhaps its most valuable advantage is its unlimited ability to leverage content (theoretically, video games never need to “end”: new levels and features can frequently be added and optimized), which in turn removes any kind of limit on the potential revenue generated per user, whether from ads or in-game purchases. This contrasts starkly with streaming platforms like Netflix, where content has an expiry date (be it the end of a movie or the end of a series) and the maximum revenue a user can generate is capped at the subscription fee, regardless of whether they watch 5 or 500 episodes a month.
So, what does the future have in store for the gametech ecosystem? As has been the case until now, we can expect increased competition in all layers to drive further innovation, whether that be core development tools to create brand new gaming experiences, or new products for UA managers to drive and analyze marketing campaigns. Because gaming is more reliant on technology than other entertainment mediums, the impact of technological change is stronger, and thus further innovations will lead to a continuous cycle of growth, in which developers, consumers and tech companies enjoy the rewards.

>access_file_
1395|blog.unity.com

Making of The Heretic: Digital Human Character Gawain

Gawain is the main character of The Heretic, the real-time short film made in Unity, written and directed by Veselin Efremov. This article covers the creation of the character and gives some insight into the different aspects of his production.

We worked with a casting agency to choose the actor who would perform the role. This was the first digital role for actor Jake Fairbrother, whom you can normally see in theatrical plays in London. The performance was captured on several separate occasions. We started with a body scan at 4D Max, together with a 3D scan of the face and a first batch of 4D performance capture at Infinite Realities’ studio outside of London. We continued by capturing body performance at our mocap studio in Sofia, and later returned to Infinite Realities for additional 4D performance once we knew how much screen time it was viable for. Voice performance was captured at SideUK studio in London.

The project started with some early concept explorations by Georgi Simeonov. He tried different styles based on his initial discussions with Director Veselin Efremov, with some elements that were essential to the story - like the briefcase, for example - present in almost all of the versions. In the second stage, some of the ideas from the initial exploration were developed further and became more focused after Georgi and Veselin discussed what was working in the previous sketches. One interesting thing to note here is the subtle implementation of the medieval knight theme in the design of Gawain’s costume. The final version of the concept sketch for Gawain: some things changed as we moved along, but we tried to stay as close as possible to the original design.

Paco: After we received the initial scan and the cleaned neutral pose of the face from Infinite Realities, we had a meeting with our animation director Krasimir Nechevski to figure out some of the technical details that we needed to clear up before continuing with the outfit and animations - things like the UV layouts for the face, the different texture sets and how and where we split those, and also where to split the head from the body. This last one was especially important, as the director Veselin made it clear from the beginning that he wanted to see as much of the neck and the area around it as possible in the closeups that he was planning with the 4D capture of the actor’s performance. We had to be careful with the distribution of the texture sets also because they had different resolutions; for example, the body and legs had a much lower resolution compared to the face, mostly because we don’t see them pretty much anywhere - but we chose to have them just in case. After all of that was decided, we transferred and tweaked the scanned data onto the new model and made some adjustments as we moved forward.

The eyes went through a lot of polish and tweaks to get to where we needed them to be. A lot of that creative guidance and drive came from the director Vess, who served as a reality check on what could be improved with them. The tech for the eyes was made by Lasse Pedersen with some help from Nicolas Brancaccio. The eyes used a single mesh for the cornea, iris and sclera, and the shader controlled many eye-related features directly inside of Unity.
We also had a mesh around the eyelids controlling the smoothing of the normals between the eyeball and the eyelids to give us a softer transition; it also served as a tearline mesh. One example of the controls that the shader gave us is the AO of the eyes, which saved us from having to use a separate shadow mesh with a baked texture.

The technology stack used to bring Gawain to life - the shaders and all the tools mentioned in this blog post - can be found in the Digital Human package we released recently. If you want to learn more about the tech aspects of Gawain, stay tuned to this blog: we’re working on another article that goes more in-depth on the skin attachment system, shaders, and other technical details. Now Krasimir Nechevski, our animation director, will explain in more depth the process behind the facial performance for the character of Gawain.

Krasimir: Building a digital human pipeline was one of the main goals of The Heretic and a major accomplishment for the team. We had been avoiding it in the past by making robots or nightmarish creatures, but it was time for us to give it a go. There are multiple challenges in achieving this - skin, hair, teeth and eye shading each come with a very different and difficult set of problems - but the hardest part of making a digital double, in my opinion, is reproducing the facial movement with all its subtleties. It is a well-known problem, and falling short usually leads to an awkward feeling in the viewer, a.k.a. the uncanny valley.

There are many ways of animating the face of a character - blendshape rigs, 4D (volumetric video), machine learning, simulation - all with varying pros and cons. We chose a somewhat unorthodox method, so here I will try to chronologically explain our reasoning and process. To sum it up, we decided to use 4D directly and add only the fine-detail wrinkle maps from a rig.

It is worth noting that lately, machine learning approaches to processing 2D video have been very successful at achieving convincing results, and there are some examples that manage to produce incredible results by synthesizing facial performance in 3D. Based on this, it is safe to assume that ML will solve facial performance in the future. But an important aspect of ML is data - a lot of data. Acquiring clean 4D sample data is essential, so we can view 4D as a milestone on the way to fully synthesized facial performance with machine learning.

First we needed a proof of concept, so we decided to make a very short segment of facial animation, with the condition that if it failed we should be able to finish the movie without it. We started by doing the first capture session at a vendor [Infinite Realities] which has been developing a 4D capture system and achieving amazing results. Even though the system produced some of the best results at the time, there are challenges that come with using 4D. Firstly, it uses photogrammetry, and there are some imperfections to the method that limit the quality.
The major obstacles are usually occlusion of the skin’s surface (by hairs, or by areas the cameras simply cannot see), a certain amount of micro noise, glitches caused by reflective surfaces, the need to stabilize the head, and, lastly, the lack of temporal coherence between the meshes of consecutive frames. Above you can see what the raw, decimated data looks like, and how every frame of the volumetric video is made up of random triangles that are unique to it.

Luckily there is a solution for that - Wrap3D, a piece of software developed by [Russian3DScanner]. This tool is usually used for creating coherent meshes for blendshape-based rigs. For most of our initial research we tried cleaning the data ourselves with Wrap3D. It works by utilizing a set of small dots on the actor’s face, using those as markers to wrap the same mesh over all of the frames and thus achieve consistency between them. You start by wrapping the first frame, and then, with the help of the markers visible in the texture, you wrap the first frame onto the second frame, and so on. The markers on their own are not enough, though, since there is quite a lot of error when placing them manually. To fix that, there is a feature in Wrap3D that uses optical flow: by analyzing the texture, it makes the match between consecutive frames pixel perfect. After projecting the textures for each frame, the result is a stream of meshes with the same topology. With that out of the way, we had to deal with the remaining imperfections, like noise, and replace damaged sections by transplanting them from healthy meshes. The lead programmer involved in the 4D processing - Lasse Pedersen - developed a set of tools for importing and working with the data inside Unity.

Even though the result was great, it still lacked micro details, because the processing and noise removal somewhat smooth the surface and lose the pore-level details. We knew it could be pushed even further by adding fine details that are animated. To achieve this we used a FACS-based rig developed by SnappersTech, which had the same topology as our 4D. Lasse developed a solver that managed to give accurate activations of the wrinkle maps from the rig, adding this level of detail back. Here is an example of a later stage of our research.

Later, all of the mesh cleanup was done in a DCC, but the tools Lasse developed have great potential and other uses. If you want to know more about that, Lasse is currently writing another blog post where he describes all of his work in depth. The tools are also included in the Digital Human package we released recently.

By a lucky coincidence, not long before our deadline for the first part of the project, I met the people behind Wrap3D at a conference and they agreed to collaborate. It was hugely successful: they delivered the cleaned 4D for our initial test extremely fast and with excellent quality. After seeing the final result, we were more confident than ever that this was a path worth exploring further. It was still far from perfect, but it did not feel uncanny. After the test was done and our pipeline proven, we decided to add many more closeup shots with facial performance in the second part of the project, relying completely on our partners at Infinite Realities and Russian3DScanner for the 4D processing. They also continued improving their tools and equipment, delivering even better results.

To achieve our final result by adding wrinkle maps, we needed a really good facial rig.
We planned on using it for facial performance that was further away from the camera. FACS-based rigs are a mainstream approach to solving facial performance. They are inspired by FACS (the Facial Action Coding System), developed in 1978 by Paul Ekman, which is a common standard to systematically categorize the physical expression of emotions, and it has proven useful to psychologists and animators alike. A FACS-based rig mixes hundreds of blendshapes of extreme poses, one for each AU (action unit) - roughly every muscle of the face. Often, adding some of these shapes together produces incorrect results, which are fixed with so-called corrective and intermediate blendshapes. The result is a very complex system, which is then usually controlled by capturing the performance of an actor with an HMC (head-mounted camera) and solving for which blendshapes need to be activated. To animate the eyes, Christian Kardach developed a tool that used a computer vision approach to track the irises from a render in Maya.

Another issue with 4D worth mentioning is combining facial performance with body performance. The system for capturing high-quality facial performance is very big and has a narrow useful volume; the actor needs to perform sitting down, with a very limited range of motion for the head. Later, when we shot the motion capture, I had to create convincing movements for the body that fit as well as possible. It would have been best if there were a way to capture such high-fidelity 4D with a head-mounted device, but such technology is still not available.

Paco: We used the body scan of the actor as a base for building the outfit for the character of Gawain in Marvelous Designer. We prepared a proxy version of the body that was easy to work with, especially when it was time to simulate the jacket with the many animations that Gawain had; it was only necessary to have the main shapes that the jacket would interact with, like the bag on his hip and the shirt’s overall silhouette.

Krasimir: Gawain’s body rig is composed of a few layers on top of each other. The main tool for animation and motion capture cleanup was MotionBuilder. At the base of the rig there is a skeleton compatible with both MotionBuilder and Maya. The Maya version of the rig had an additional deforming rig layer, which added twist and fan joints, a double-knee setup, and other details. The Snappers rig was referenced in the Maya scene, which allowed it to be iterated on safely without affecting the main file. For the first part of The Heretic we did the motion capture of the actor at our internal studio in Sofia. For the second part we used the help of a motion capture vendor, TakeOne.

Paco: I used Marvelous Designer for pretty much all of Gawain’s outfit, except for the shoes. All of the pieces except the jacket were built with the traditional pipeline of making the base for the high-poly mesh inside Marvelous and then polishing and texturing it as a low-poly asset that was skinned to the character. We initially tried to simulate the jacket in real time with Caronte, but after many attempts it never felt quite right, and it wasn’t what the director initially hoped for. I began making a few tests with simulating the jacket inside Marvelous, and at this point Vess had to make the tough decision of scrapping the work that we had done with Caronte so far; it was obvious that the trade-off in quality was too big compared to the output that we got directly from Marvelous Designer. The final sewing pattern for the jacket in Marvelous Designer.
This was the mesh with the final resolution that was used for the low-poly simulation. I exported it as a triangulated mesh to 3ds Max, where I did a custom UV layout, and then textured it in Substance Painter. At this point I had a textured model with the custom UVs and the original model in Marvelous, both triangulated and matching vertex for vertex. I used the original untextured version in Marvelous for the different simulations, then used a skin wrap in 3ds Max with the exported simulation to drive the textured custom mesh I had made before. I ended up going with this approach because it seemed like a relatively safe and non-destructive workflow, since I had relative freedom to make adjustments and get consistent results. For the topology of the jacket I used the triangulated topology from Marvelous, in order for the jacket to deform in exactly the same way as it would there.

The first iteration of the jacket removal that I did in Marvelous used the simplified simulation proxy of Gawain. There were still a few small kinks to work out at this stage, but it worked as a proof of concept. We had some back and forth with Krasimir on how to go about it, and after he took off his own jacket a bunch of times, he came up with the animation that we ended up using for the simulation.

The very first iteration of the outfit that we had - a lot changed, especially for the jacket. This was the version that we tried to simulate with Caronte, so we left it a bit bulkier and with no prebaked wrinkles and deformations, as those should have come naturally from the simulation itself. Another thing that changed a lot was the collar of the jacket; what we ended up with was less bloated, and it worked a lot better in all of the shots.

For Veselin it was important that the character have a more open design for the shirt’s neckline, especially for the closeup shots, where unnecessary details in the way could take away from the actor’s performance. In the above shot is the first iteration of the shirt that I did based on Georgi’s designs. We tried a less typical design for the shirt’s neckline, but failed to realize that it would turn out to be a bit of a problem in some shots, especially the ones with lower camera angles, so we had to make a quick adjustment to it after seeing it in context. Other than the neckline, the design remained mostly the same, with some small balancing tweaks to accommodate the new silhouette of the border.

It’s a similar story with the additional equipment on top of the shirt: after seeing more and more shots with it, Vess realized that less is more in this case as well, and the cleaner look helped a lot with some of the last shots of the film, where we had the character without the jacket. For those shots at the end of the film it was crucial to have a clean design that is readable and helps drive the focus towards the face. Also, having a red shirt where the heart of the white golem would be at the very end is a visual that Vess was very keen on having from the very beginning.

Paco: The pants and the leg pouch went through some additional tweaks later on, but overall they remained pretty close to the initial setup. The knee pads on the pants went through some revisions as well; the idea that Georgi had for those in the concept
was for them to vaguely resemble a medieval knight’s armour, something that we wanted hinted at in some other elements as well - things like the elbow pad on the left arm and the shoulder design of the jacket.

Initially we didn’t have fur on the collar of the jacket; that came later, from one of the talks with the director Vess, who suggested that having finer details on this part of the jacket would give us a lot of visual fidelity in the closeups where we see the face. I decided to use XGen and Maya to scatter the cards for the fur, and did a quick grooming pass on them to add a bit of clumping and length variance before I began to map them onto a texture atlas. The final thing that helped ground the fur a bit more in the shots - and counter some of the repetition of the texture, since there were too many hair cards for them to have unique UVs - was to add vertex paint that acted as an AO and color offset. After that we used Lasse’s attachment tool to attach the fur to the collar, the same as for the facial hair and eyelashes.

I made a textured model for the gauntlet device on Gawain’s left arm. The lower part along the knuckles is intended to be seen directly, while the upper part serves as a foundation that would be covered by the same type of animated wires that tech lead Robert Cupisz was creating for the Boston character. The idea that Vess had for this device was that it gives Gawain tactile feedback without him having to look at it - something that we see in one of the shots at the beginning of the film. It’s how Boston communicates with Gawain. The model was broken into different texture sets at 2K resolution; some of the main objects were exported at 4K, while the rest remained at 2K for the final export. All of the textures were made using the generators and tools in Substance Painter.

Paco: Gawain’s briefcase began with a concept blockout from Georgi Simeonov, and from there I took over, refined the model, and textured it in Substance Painter. One of the more interesting features of the briefcase is the self-retracting strap that is seen when we first meet Gawain. There is also a small fan that suggests the cooling functions of the case, to go along with the temperature display on the side.

An interesting fact about the temperature display is that I actually made it red in the beginning, as it was in the original design by Georgi, and we had it like that for almost the entire production - until one day, when Vess was working on one of the very last shots, where Gawain drops the briefcase, he realized that it was a bit too reminiscent of a bomb about to be detonated. This was a nice catch, as that was definitely not the original intention. Also, a lot of the design was based on cooling - vents, fans - so if anything the case should have read as cold. He suggested a colder, cyan-colored display instead, with a temperature that is more extreme and interesting at the same time: close to absolute zero, but not quite.

For the belt strap of the briefcase I initially made a regular opaque rubber that had a slight reddish tint to it, but when Vess began working on the finals and lighting, he experimented with the transparency feature of the Lit shader, and we were pleasantly surprised by how good it looked, so we ended up using this transparent, silicone-like type of material.
Vess also made a texture that controlled the roughness of the transparency and the tint of the material, so that it would be properly grounded and weathered; otherwise it would have looked too artificial and clean.

For the coin that Gawain uses to open the portal, Vess needed something based on a realistic medieval design - something that would hint at the deeper lore behind the character’s past adventures and would make sense as an artefact.

Working on The Heretic was a great learning experience. We tackled many challenges during the production that we had never faced before, and hopefully we will do even better the next time around. We would like to thank all of the people who were involved with this production; it was a great experience working with all of you. We really hope that people will find some of the information here helpful for their own work. See our page for The Heretic for additional blog posts and webinars.

If you have additional questions about the Digital Human Character package, sign up for our next live Unite Now session, "Meet the Devs: Deep Dive into The Heretic assets", on June 17 at 9 am PDT.

>access_file_
1396|blog.unity.com

Introducing Unity Mars – a first-of-its-kind solution for intelligent AR

Unity Mars provides augmented reality (AR) creators everywhere with specialized tools and a streamlined workflow to deliver responsive, location-aware AR experiences into users’ hands. Unity Mars is the world’s first authoring solution that brings real-world environment and sensor data into the creative workflow. That means you can quickly build mixed and augmented reality experiences that are context-aware and responsive to physical space - and they will work in any location, with any type of data. From the beginning, we designed Unity Mars to solve the most common pain points across the entire AR development cycle: defining variables, testing scenarios, and delivering AR experiences that intelligently interact with the real world.

AR apps are made to be used in real-world conditions, but it’s notoriously difficult - if not impossible - to manually define all the potential variables your user might encounter when using your app. What physical objects will be in their environment, and where will they be placed? How will the user hold their phone? Will they be sitting or standing? And even if you know the exact physical site where they’ll be using the app, rooms can be rearranged, and there’s still a multitude of human factors to consider. Unity Mars is unique as an AR authoring tool because it enables you to take all these variables into consideration, while also providing you with a visual workflow that lets you move through the prototyping phase quickly, and with very little coding.

To build your app, you begin with proxies that represent real-world objects. With your framework in place, you set conditions and actions on your proxies to tell the app how to respond to them. With visual aids for “fuzzy” authoring, you define minimum and maximum measurements for real-world objects rather than coding precise values. With the WYSIWYG Simulation View, you visualize your app exactly as it will run in the real world. Instead of coding, you simply drag your content directly into the view, and Unity Mars creates the appropriate proxies and conditions for you. To help you get started, we’ve provided Starter Templates, which cover popular AR use cases, including a training tutorial application that works with all of our indoor and outdoor environment templates. And we’ll be adding more soon.

If you’ve built an AR app before, you know how difficult it is to test on a wide range of devices and in a multitude of locations. Even if you have a specific place in mind, like an event space, you may not be able to test it thoroughly beforehand, given variables like crowds and weather. In short, it’s impossible to test an AR app for every possible user reality. Since we’re not able to bend the laws of time and space, we went for the next best thing: the ability to fully test your AR experience without leaving Unity Mars. The Simulation View provides you with environment templates that simulate data, so you can test your AR experiences against a variety of indoor and outdoor rooms and spaces. That means you don’t need to have real-world data on hand, or have to physically test the experience wherever you want it to work. You can also model your own simulation environments or use photogrammetry scan data.

Once you’ve built and tested your AR experience in the Unity Mars authoring environment, you need to make sure it will react intelligently whenever and wherever the end user interacts with it. Unity Mars enables that.
Its runtime logic adapts responsively to the real world, which is especially important for training and remote-guidance apps that must “understand” where physical objects are located. You can use any type of real-world data in your app, including surfaces, images, body tracking (coming soon), and more. The always-on query system gives your app contextually relevant behavior based on the user’s surroundings.

By addressing the toughest challenges in each phase of AR app development, Unity Mars gives creators the ability to finally deliver AR experiences that live up to the end user’s expectations: digital content that seems to live in and react to the real world.

While developing Unity Mars, we engaged with a number of innovative studios eager to literally get their hands on this new technology. One of them is Sugar Creative, a leading studio based in the UK recognized for their cutting-edge AR and VR experiences. In partnership with Dr. Seuss Enterprises, Sugar Creative used Unity Mars to create Dr. Seuss’s ABC AR, an app that enhances how children learn to read by bringing the Dr. Seuss characters to life. Will Humphrey, their lead creative and studio manager, has this to say: “Unity Mars has been the toolkit that has allowed us to realize a new horizon, a shift in the potential of immersive experiences by enabling them to become truly dynamic. Put simply, Unity Mars is adding intelligence to AR.”

Other developers working with early versions of Unity Mars have created a variety of augmented reality applications, such as sales and marketing experiences for an auto showroom and a training application for factory workers. Regardless of the use case, they all concur that Unity Mars is ushering in the next generation of AR content by giving them more creative freedom and flexibility.

In addition to the features and benefits explained above, Unity Mars leverages our Auggie-award-winning AR Foundation framework, which enables you to build an experience once in Unity and deploy it across multiple mobile and wearable AR devices. This authoring workflow not only changes how you create AR experiences, but also elevates the quality of the experiences you deliver.

You can try Unity Mars for free for 45 days. Visit our Unity Mars page to find more information about features, pricing, and tutorials. And make sure to watch our Getting Started with Unity Mars webinar to learn everything you need to get up and running with Unity Mars.
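This post doesn’t show the Mars scripting API, so the sketch below is a hypothetical, engine-agnostic illustration of the “fuzzy” condition idea described above: author-set minimum and maximum bounds scored against detected data rather than exact values. The class and field names are invented for illustration, not the shipping Unity Mars API.

    using UnityEngine;

    // Hypothetical sketch (not the actual Unity Mars API): a "fuzzy" size
    // condition that rates a detected plane against author-set bounds
    // instead of requiring exact dimensions.
    public class FuzzyPlaneSizeCondition : MonoBehaviour
    {
        // Author-facing min/max bounds in meters (assumed values for a tabletop).
        public Vector2 minSize = new Vector2(0.5f, 0.5f);
        public Vector2 maxSize = new Vector2(2.0f, 2.0f);

        // Returns 0 when the candidate is out of bounds, otherwise a 0..1
        // score so a query system could pick the best-matching surface.
        public float RateCandidate(Vector2 planeSize)
        {
            if (planeSize.x < minSize.x || planeSize.y < minSize.y ||
                planeSize.x > maxSize.x || planeSize.y > maxSize.y)
                return 0f;

            // Prefer candidates near the middle of the allowed range.
            float tx = Mathf.InverseLerp(minSize.x, maxSize.x, planeSize.x);
            float ty = Mathf.InverseLerp(minSize.y, maxSize.y, planeSize.y);
            return 1f - (Mathf.Abs(tx - 0.5f) + Mathf.Abs(ty - 0.5f));
        }
    }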

>access_file_
1398|blog.unity.com

Giving game developers a head start

Catsoft Studios builds essential game creation tools that shave time off production so you can focus your energy where it counts.

Every game begins with an idea – a world to build, a compelling game mechanic, a feature that players are bound to fall in love with – but it takes a lot of work to bring that idea to fruition. Catsoft Studios creates tools to help make the journey from idea to playable game a lot smoother. Built around the studio’s core product, Game Creator, this hard-working Unity Asset Store publisher has produced a slew of tools, templates, and systems designed to help you bring your ideas closer to reality.

“Game Creator acts as a bridge between game programming and design,” says Marti Nogue Coll, the main force behind Barcelona-based Catsoft Studios, describing the ethos that drives the publisher’s Asset Store offerings.

“I like to think of the game development cycle as a set of layers. When you create your game from scratch, you start from the lowest level and have to make your way up to the top. Unity gives you a tremendous head start,” he explains. “Game Creator aims to push this even further. When you want a character to move to a certain position, you don’t want to deal with direction vectors, acceleration formulas, lerping between animations or obstacle avoidance. You want to move a character from A to B. That’s what Game Creator is all about: making game development more human-friendly.” (See the sketch at the end of this entry for a flavor of what that abstraction looks like in code.)

Game Creator is the base: a core package of common, genre-agnostic systems, including cameras, characters, variables, and a high-level visual scripting solution. Modules add and extend these features with gameplay elements, from managing inventory to defining melee fight systems to crafting quests. The Stats module helps you create intricate RPG attribute systems, while the Dialogue add-on is a system for managing complex branching conversations between characters. Each integrates closely with Game Creator to boost developers’ freedom and productivity, and the visual scripting system can be extended with free custom nodes shared on the Game Creator Hub. The Game Creator ecosystem includes features with appeal for game designers as well as developers – really, Marti claims, his tools are for “anyone with a game idea.”

As a computer science student based in Barcelona, Marti discovered that he had a flair for creating tools. “I worked on a project where we had to develop an RPG mobile game. We spent almost nine months developing the tools and assets, and only two to flesh out the game,” he says. “The fact that those first nine months were more satisfying than the stressful latter ones was a hint for me that I may enjoy creating tools more than developing fully fledged games.”

Marti took advantage of a two-week break between semesters to dig deeper into UDK, RPG Maker, and Cocos 2D, and it was around that time that he discovered Unity. “When I opened Unity 2.6, I fell in love with its simplicity,” he recalls. “A big scene view with an island for me to play with, scripts that get automatically compiled and a clear interface. It simply clicked.”

From there, the shift toward developing creator tools for the Asset Store felt natural. Marti has observed that many programmers who work on games are constrained by tight deadlines that don’t leave them enough time to build great tools for their own workflow – instead, they’re often forced to create things that just barely get the job done.
“Focusing on Game Creator makes this work the other way around,” he explains, “putting all the effort into the tools, and, from time to time, testing them by joining a game jam.”

Catsoft Studios currently has eight packages on the Asset Store, but Marti says that he uses assets in his own dev process as well. “The Asset Store is a place full of hidden gems and well-known top-notch products,” he says, citing the UMotion Pro animation editor and the wide-ranging suite of Synty Studios art asset packages among his go-to resources.

He’s fueled by the collaborative energy and collegial spirit of the Asset Store community, which helps him to refine his tools to better serve the game developers who use Game Creator and its modules. “So far, it’s like a dream job,” he says. “Game developers are very passionate about making games. This means that when someone sends you an email, it’s because they genuinely want to know something, not because their boss told them to ask.”

Looking forward, Catsoft Studios is working on a new module called Traversal, which Marti plans to follow up with a “research phase” exploring how best to create assets that harness the latest Unity features, like polymorphic serialization, DOTS, and the UI Toolkit, among others.

For Marti, publishing on the Asset Store creates a virtuous cycle of creativity: developers use the tools he creates to fuel their projects, while engaging with customers encourages Marti to keep pushing the envelope on what he creates for them.

“Most users have a very clear idea of the game they want to make and are excited about developing it,” says Marti. “Talking and discussing ideas is a blast of good energy and excitement, which boosts our motivation to continue further developing better tools.”
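As promised above, here is a minimal hypothetical sketch (not Game Creator’s actual API; the class and field names are invented) of the kind of high-level “move a character from A to B” action a visual-scripting node wraps, hiding direction vectors, acceleration, and obstacle avoidance behind one call:

    using UnityEngine;
    using UnityEngine.AI;

    // Hypothetical sketch, not Game Creator's API: the designer-facing
    // "move from A to B" action that a visual node might wrap.
    public class MoveToAction : MonoBehaviour
    {
        public NavMeshAgent agent;    // Unity's navigation handles pathing and avoidance
        public Transform destination; // the "B" a designer picks in the Inspector

        public void Execute()
        {
            // One human-friendly call; the agent deals with the math.
            agent.SetDestination(destination.position);
        }
    }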

>access_file_
1399|blog.unity.com

Use articulation bodies to easily prototype industrial designs with realistic motion and behavior

In Unity 2020.1, now in beta, we are introducing a new Physics component: ArticulationBody. Articulations make it easier than ever to simulate robotic arms and kinematic chains with realistic physics and movement. Along with other improvements from PhysX 4.1, they make Unity more capable than ever of simulation for industrial applications.

With Unity 2019.3 we upgraded our physics library from PhysX 3.4 to PhysX 4.0. Now, the Unity 2020.1 beta takes users a step forward with an upgrade to PhysX 4.1. While previous builds of that library delivered excellent performance for a wide variety of game types, modeling reality for non-gaming applications was more difficult to accomplish. Modeling kinematic chains, like the type you’d see in a rag doll, robotic arm, or mechanism with several concurrent hinges, would result in stuttery and unrealistic motion. Not only would these joints look peculiar, they would also be impossible to use for simulating a real device, impeding efforts to model or prototype industrial designs.

One of the major culprits of these real-world shortcomings was the selection of joint components that connected rigid bodies together. These joints, coupled with a physics solver optimized for game performance over fidelity, resulted in kinematics that failed to simulate realistically. Learn more about the overall improvements brought by PhysX 4.1 here.

Certain applications require constraining the motion of some rigid bodies relative to each other. To visualize this, imagine connecting the skeleton bones of a rag doll, linking the segments of a multi-jointed robotic arm, or having a door rotate on its hinge. This has traditionally been made possible by means of joint components, such as the FixedJoint or the ConfigurableJoint, that connect two Rigidbody objects together.

Behind the scenes, each joint is decomposed into a few primitive constraints, such as a linear constraint to keep the bodies at some specific distance, or an angular constraint to keep the bodies oriented in a specific way around a particular axis. Similar constraints are used to keep bodies from overlapping each other. All of these constraints are then fed into an iterative solver, which tries to converge on a set of impulses that, applied to each connected pair of objects in world space, positions them so that all of the constraints are satisfied, if possible.

The first problem is the sheer number of conflicting factors that can create convergence issues for the solver. The iteration count, the relative masses of connected bodies, and the total complexity of the set of constraints in a scene can create an unsolvable scenario. In cases like this, the partial solution is used, so certain constraints are left unsatisfied.

The second problem is that the magnitude of the impulse applied depends on the joint error, a value that shows how badly a constraint is violated at a given time. Because of this error-compensation behavior, there will always be at least some springy effect, exactly as if the bodies were connected by a set of damped springs, especially when joints are chained together.

Our solution to the above kinematics problem is the new concept of articulation: a set of bodies organized in a logical tree, where a parent-child relation expresses the idea of mutually constrained motion. There is always a single root body, and there can be no loops.
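Before looking at how articulations are built, here is a toy sketch of the error-proportional impulse iteration just described, to make the springiness concrete. It is illustrative C#, not PhysX source, and the masses, correction factor, and iteration budget are assumed values.

    using System;

    // Toy sketch (not PhysX code): resolve one 1D distance constraint by
    // repeatedly applying an impulse proportional to the joint error. With
    // a capped iteration budget, some error always remains, which shows up
    // in-engine as the springy look of chained joints.
    class ToySolver
    {
        static void Main()
        {
            double invM1 = 1.0, invM2 = 0.1; // inverse masses (a 10:1 mass ratio)
            double x1 = 0.0, x2 = 1.5;       // positions; the constraint wants a gap of 1.0
            const double beta = 0.2;         // fraction of the error corrected per iteration

            for (int i = 0; i < 8; i++)      // a game-style iteration budget
            {
                double error = (x2 - x1) - 1.0;                  // joint error
                double lambda = -beta * error / (invM1 + invM2); // impulse magnitude
                x1 -= lambda * invM1;        // push both bodies toward
                x2 += lambda * invM2;        // satisfying the constraint
            }

            // The residual is nonzero: the constraint is only approximately met.
            Console.WriteLine($"residual gap error: {(x2 - x1) - 1.0:F4}");
        }
    }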
We use Unity’s transform hierarchy to express articulations. Now, users can easily model an existing robot, like the Universal Robots UR3e below, and simulate a task in a way that more accurately reflects what that movement would look like in the real world. This allows roboticists to visualize a particular movement sequence, test new code, or even validate new designs in a synthetic environment.

Articulations help roboticists and other industrial developers in two major ways: they move in a fashion more similar to their real-world counterparts, and they can be constructed faster than the old Rigidbody-plus-joint setup, saving development time.

We anticipate that one of the main use cases for articulations will be in the field of robotics. Robotic arms often have six or more joints linked serially in a row, which means that small errors in each relative joint pose can have a potentially large effect on the pose of the end effector; errors are propagated up the kinematic chain, which creates unrealistic movements and an end effector position that has drifted off target.

Simulation can help roboticists accelerate their development time by modeling many deployment scenarios and unit tests virtually, and then executing them at scale, rather than trying to run those same suites of tests on a real robot in real time. We hope that the ArticulationBody component, as well as the improvements to PhysX, can help roboticists use Unity for their simulation efforts.

The number of degrees of freedom in a given parent-child relationship depends on the actual joint type used. Currently, we support:

Fixed – has zero degrees of freedom and is used to lock the bodies relative to each other.
Prismatic – has one degree of freedom: a linear offset along a particular axis relative to the parent.
Revolute – has one degree of freedom: a rotational analog of the Prismatic.
Spherical – has up to three degrees of freedom: a ball-in-socket joint that allows only relative rotations, with no linear motion.

To further increase the realism of these joints, a new solver based on Featherstone’s algorithm is used. This technique computes the effects of forces applied to a structure of joints, links, and solid bodies using reduced coordinates, that is, a space where each body has as many coordinates relative to its parent as there are degrees of freedom. Previously, we relied on maximal coordinates, which were more performant for general use cases but sacrificed accuracy and precision to gain that performance.

Forward dynamics and articulations in a reduced coordinate space help satisfy the high requirements for precision and accuracy that robotic arms need. Along with other improvements provided by PhysX 4.1, such as the TGS solver, articulations make reliable robotic arm simulations possible in Unity for the first time. Modeling a robot using the iterative joint solver required precise tuning and shortcuts, and would still fail to match the movement of a real robot.

The forward dynamics algorithm we use to simulate articulations in the reduced coordinate space scales linearly with the total number of degrees of freedom in an articulation, and can be faster than the traditional iterative solver, which scales with the number of constraints instead, while computing a more accurate result.

To support articulations in Unity, we added one new component: the ArticulationBody.
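As a minimal sketch of what building an articulation looks like with the 2020.1 API, here is a one-joint arm assembled at runtime; the object name, gains, and target angle are illustrative values, not figures from this post:

    using UnityEngine;

    // Minimal sketch: assemble a one-joint arm as an articulation at runtime.
    public class MiniArticulationArm : MonoBehaviour
    {
        void Start()
        {
            // The root body is the only one without a joint to a parent.
            var baseBody = gameObject.AddComponent<ArticulationBody>();
            baseBody.immovable = true; // anchor the base, like a robot bolted down

            // Child bodies are expressed through the transform hierarchy.
            var link = new GameObject("Link1");
            link.transform.SetParent(transform, false);
            link.transform.localPosition = new Vector3(0f, 0.5f, 0f);
            link.AddComponent<CapsuleCollider>(); // shape comes from regular colliders

            var linkBody = link.AddComponent<ArticulationBody>();
            linkBody.jointType = ArticulationJointType.RevoluteJoint; // one rotational DOF

            // Each degree of freedom has a drive; set a target angle (in degrees)
            // and let the stiffness and damping gains pull the joint toward it.
            var drive = linkBody.xDrive;
            drive.stiffness = 1000f;
            drive.damping = 100f;
            drive.forceLimit = 500f;
            drive.target = 45f;
            linkBody.xDrive = drive;
        }
    }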
Drawing an analogy with regular physics, an ArticulationBody is like a Rigidbody and a ConfigurableJoint in one component. In an articulation, all bodies except the root one have a joint connecting them to their parent, which is why bodies and joints are not separated into individual components. The shape of the bodies is described using the regular Collider components, just like with Rigidbodies.

Once created, the bodies in an articulation cannot be moved by means of the Transform component, because doing so could break the limits set by the reduced-coordinate space. The only exception is the root body, which can be moved using the ArticulationBody.TeleportRoot function. ArticulationBody won’t respond to changes in the Transform component, by design.

That said, there are several possible ways of interacting with an articulation. Firstly, forces and torques can be applied to each body in an articulation. Secondly, each joint has a drive for each degree of freedom, which can be controlled by setting linear and angular targets. Finally, it’s possible to alter the poses of bodies in the reduced coordinate space directly.

One particular advantage of articulations is that the quality of simulation doesn’t directly depend on the mass ratio of connected bodies. With rigid bodies and fixed joints, simulation started to look unrealistic with mass ratios higher than 10:1 between connected bodies. In the following example, however, you can see that a grid of articulation bodies connected with fixed joints can still be simulated precisely, even with the red spheres being 1,000 times heavier than the black ones.

To see how a serial-link robot arm can be constructed with articulation joints, check out our robotics demo project. Download Unity 2020.1, now in beta, to try your hand at using ArticulationBody.
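The interaction routes above might look roughly like this in code; a hedged sketch that assumes the bodies from the previous example are assigned in the Inspector, with illustrative field names and values:

    using UnityEngine;

    // Sketch of the interaction routes described above.
    public class ArmController : MonoBehaviour
    {
        public ArticulationBody root; // the root of the articulation
        public ArticulationBody link; // a revolute child link

        void FixedUpdate()
        {
            // 1. Apply forces and torques per body, as with a Rigidbody.
            link.AddForce(Vector3.up * 5f);

            // 2. Drive a joint by setting a target for its degree of freedom.
            var drive = link.xDrive;
            drive.target = Mathf.PingPong(Time.time * 30f, 90f); // degrees
            link.xDrive = drive;
        }

        // 3. Alter joint poses directly in the reduced coordinate space
        //    (here, the revolute joint's single coordinate).
        public void SnapJoint()
        {
            link.jointPosition = new ArticulationReducedSpace(0.5f);
        }

        // Repositioning the whole chain goes through TeleportRoot, not Transform.
        public void ResetArm()
        {
            root.TeleportRoot(Vector3.zero, Quaternion.identity);
        }
    }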

>access_file_
1400|blog.unity.com

3 tips to scale your offerwall campaigns and increase profit

Running offerwall ad campaigns is a surefire way to acquire high-quality users cost-effectively.

Offerwalls function as mini in-game stores that list tasks for users to complete, such as installing another game and reaching level 5, in exchange for in-game currency. With the right offerwall user acquisition strategy, advertisers can acquire users that go on to play for months, if not years, and generate extremely high ARPUs. Within RPG, for example, some advertisers see offerwall campaigns generate ARPUs above $100.

The key, of course, is optimizing the campaigns toward ROAS, setting the right event, and capitalizing on high traffic, such as during special promotions, also known as currency sales. Read on for best practices for maximizing profit from your offerwall campaigns.

1. Use data to choose the right CPE event

From an advertiser’s perspective, choosing the right CPE event for an offerwall campaign is key to maximizing long-term profit and engagement, and data should play a big part in that choice. In other words, it’s not enough to simply see that the competition is using “Level 50” as their offerwall event and do the same. Instead, you need to choose the event that best suits your game flow and user behavior, taking into account when users are dropping off and when they’re paying. We recommend relying on data and nothing else: not your competitors, and not your gut instinct.

Essentially, you want to choose an event that occurs just after the “tipping point,” which is the point in the game at which users do something valuable that is likely to bring about payout, scale, and engagement. For example, if you see that on D7 users generally make an in-app purchase, it’s best to set the event as something that may occur on D8. Otherwise, if you set it too early, the user will complete the event and have no incentive to continue on to make that IAP, leaving you ROI negative.

To help choose the right event, we also recommend running at least 3 offerwall campaigns at any given time, each with a different type of in-game event depending on how many days it takes a user to complete it: one shallow in-game event, one medium, and one long. This will enable you to more easily test which events deliver that perfect balance between scale and ARPU. In addition, running with multiple events ensures that you’re widening your net and appealing to as many users as possible.

During testing, don’t just differentiate events in terms of depth and level. Also test the text on the offerwall event, the instructions in the pop-up that follows, and the icon that goes near it. You’ll be surprised how much control you have as an advertiser over what you can show users, and anything can be tweaked. Is a short explanation that’s simple and to the point best? Or does a longer one that offers more details on how to complete the offer improve performance?

By testing multiple events and playing with the descriptions, Huuuge Games quickly reached their D7 ROAS KPI with good scale. In fact, their D7 ROAS for offerwall was 9x higher than rewarded video, and D7 ARPU was 5x higher. Read the full case study.

2. Increase bids ahead of special offerwall promotions

Often, networks like ironSource will announce special offerwall promotions ahead of time, to take place on major holiday weekends like Christmas, Thanksgiving, and New Year’s Day.
The already high traffic these holidays bring, combined with the promotion’s promise to raise rewards for a limited time only, guarantees increased offerwall engagement for publishers. To capitalize on this traffic, we recommend advertisers raise their bids in time for the special promotion. Doing so boosts the chances that your offer tops publishers’ offerwalls, making it one of the first ones users see.

Lilith Games, for example, was running an offerwall ad campaign for their game Rise of Kingdoms within a publisher’s double credit promotion over Thanksgiving weekend. By raising their bids, Lilith topped most publishers’ offerwalls and, as a result, was able to increase offerwall volume 4x. On the monetization side, their ARPU went up by 3x and purchase rate by 20%, delivering a 2x increase in their D7 ROAS. Read the full case study here.

3. Adjust ROAS goals for each event

It’s always best practice to optimize a campaign’s ROAS goal according to the most accurate, granular LTV curve available. However, we often see advertisers run the same ROAS goal for all their offerwall campaigns. But LTV curves vary per offerwall event, which means advertisers should be setting different ROAS goals for different events. For example, you may see that a shallow event like “install and complete level 1,” which takes 1 day to complete, has a D90 ARPU of $10, while a deep event by the same advertiser, like “install and complete level 11,” which takes 7 days to complete, generates a D90 ARPU of $60. Though there are fewer users completing the deeper event, the ones who do generate an extremely high ARPU and retain better.

In the graph below, we can see that the very successful deep event is already recouping ROAS within 20 days, while the shallow event is taking 90 days to recoup. It’s likely that the advertiser is optimizing their offerwall campaigns toward some number in the middle. However, by adjusting the ROAS goal and increasing the bids for the deeper event, we’d likely see a surge in traffic and profitability.

We see below that the deep campaign reaches 200% ROAS in 90 days. If the advertiser shifts the ROAS curve to hit 100% within 60 days, they can increase their bids to $45 for the cost-per-engagement campaign, which lowers the ROAS goal from 51% to 32%. The bid increase lets the advertiser scale higher and faster, and maximize profits.

On the other hand, the bids for the shallow campaign are probably too high: by lowering bids and increasing the margins, the advertiser can save money and increase profit. As for the advertiser’s shallow event, which reaches 100% ROAS after 90 days, we can lower the breakeven day to 60 days and change the ROAS goal accordingly.
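To make the ROAS arithmetic concrete, here is a small hedged sketch; the LTV checkpoints and bid values are illustrative assumptions, not figures from the case studies above:

    using System;

    // Illustrative ROAS math for a CPE campaign: ROAS at day D is simply the
    // cumulative ARPU at day D divided by the cost-per-engagement bid.
    class RoasGoals
    {
        static void Main()
        {
            // Assumed cumulative ARPU (LTV) checkpoints for a deep CPE event.
            var arpuByDay = new (int day, double arpu)[]
            {
                (7, 12.0), (30, 35.0), (60, 48.0), (90, 60.0)
            };

            double bid = 30.0; // assumed cost per engagement, in dollars
            foreach (var (day, arpu) in arpuByDay)
            {
                double roas = arpu / bid; // e.g., 60 / 30 = 2.00 -> 200% at D90
                Console.WriteLine($"D{day} ROAS at ${bid} bid: {roas:P0}");
            }

            // Raising the bid trades margin for scale: at a $45 bid the same
            // D90 ARPU of $60 yields 60 / 45 = 133% ROAS, so the goal the
            // campaign optimizes toward must be lowered accordingly.
            Console.WriteLine($"D90 ROAS at $45 bid: {60.0 / 45.0:P0}");
        }
    }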

>access_file_