// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1695 transmissions indexed — page 52 of 85

[ 2023 ]

20 entries
1026|blog.unity.com

Improving job system performance scaling in 2022.2 – part 2: Overhead

The 2022.2 and 2021.3.14f1 releases have improved the scheduling cost and performance scaling of the Unity job system. In part one of this two-part article on what’s new with job systems, I offered some background information on parallel programming and why you might use a job system. For part two, let’s dive deeper into what job system overhead is and Unity’s approach to mitigating it.Overhead means any time the CPU spends not running your job, from the moment you begin to schedule it until the moment it finishes, unblocking any waiting jobs. Broadly, there are two areas where time is spent:1. The C# Job API layer2. The native job scheduler (which manages and runs all scheduled C# and, internally, C++ jobs)The C# Job API’s purpose is to provide a safe means to access the native job system. While this is a binding layer for the C# to C++ transition, it’s also a layer that allows you to prevent accidental scheduling of C# jobs that will run into race conditions or deadlocks when accessing NativeContainers from within a job.In addition, this separation provides a richer way of creating jobs themselves. At the C++ layer, jobs are just a pointer to some data and a function pointer. But with the C# API on top, you can customize the types of jobs you schedule, allowing for better control over how job data should be split up and parallelized to fit user-specific use cases.When scheduling a job, the C# job binding layer copies the job struct into an unmanaged memory allocation. This allows the lifetime of the C# job struct to be disconnected from the job lifetime in the job system, since this is affected by the job’s dependencies and overall load on the platform. The job system then conditionally performs safety checks in Editor playmode builds to ensure a job is safe to run.These steps are important, but they are not free and contribute to job system overhead. Since job size can vary, as well as the number of NativeContainers and dependencies a job might have, the cost to copy jobs and validate their safety is not fixed. Because of this, it’s important Unity keeps costs small and constrained to linear computational complexity.In the 2021.2 Tech Stream, the engineering team made significant improvements to the job safety system by caching the safety check result for individual job handles. This is particularly important, since the safety system needs to understand entire chains of job dependencies and each native memory reference all jobs contain to understand which may be missing dependency information and to which job a dependency should be added to. This can result in a non-linear amount of items to iterate over when scheduling (i.e., for each job and its dependencies, check the read/write access for each NativeContainer the job refers to and any job referring to the NativeContainers).However, Unity can take advantage of the fact that C# jobs are only scheduled one at a time, and check safety during this scheduling. Instead of rescanning all jobs each schedule, we can quickly determine if revalidating job dependency chains is necessary or not, allowing large amounts of work to be skipped. For even small job dependency chains, this dramatically reduces the cost of job safety checks. Ideally there should be no reason to turn job safety checks off when developing (job safety checks are not on in player/shipping builds).Whenever a C# or C++ job is scheduled for execution, it goes through the job scheduler. 
The scheduler’s role is to:Track jobs via job handlesManage job dependencies, ensuring jobs only start executing once all dependencies have completedManage “worker threads,” which are the threads that will execute jobsEnsure jobs are executed as quickly as possible – usually meaning they should run in parallel when dependencies allowAdditionally, while the C# Job API only allows jobs to be scheduled from the main thread, the job scheduler needs to support multiple threads scheduling jobs at the same time. This is because the underlying Unity engine uses many threads which schedule jobs and can even schedule jobs from within jobs. This functionality has pros and cons, but requires much more scrutiny for correctness and adds the requirement that the job scheduler must be thread safe.In the 2017.3 release, the basic look of the job scheduler was:Queue for jobsStack for jobsSemaphoreArray of worker threadsThe typical usage follows this pattern: As jobs are scheduled, they are enqueued into a global, lock-free, multiple-producer, multiple-consumer queue, which represents jobs that are ready for handling by a worker thread. The main thread then signals using a semaphore to wake up worker threads.The number of workers told to wake up depends on the job type being scheduled – single jobs such as IJob only wake a single worker, since that job type doesn’t spread work across multiple worker threads. IJobParallelFor jobs, however, represent multiple pieces of work that can be run in parallel. While one job is scheduled, there might be many pieces for some or all workers to help with at the same time. As such, the scheduler figures out how many workers can potentially help and wakes that number up.Once awake, worker threads are where the actual job work happens. In 2017.3, they were responsible for dequeuing a job from the job queue, ensuring all relevant job dependencies were complete. If they weren’t complete yet, the job and incomplete dependencies would be added to a lock-free stack as a way to jump to the front of the queue to try and run again. Worker threads do this in a loop until either the engine signals that it wants to shut down, or there are no more jobs in the stack and queue. At which point, the worker threads go to sleep by waiting on a signal from the main thread semaphore.The job scheduler creates as many worker threads as there are virtual cores on the CPU, minus one by default. The intention here is for each worker thread to run on its own CPU core, while leaving one CPU core free for the main thread to continue running. In practice, on platforms where a core isn’t reserved for non-game processes, it can be better to reduce the amount of worker threads so computation done by the operating system or driver threads doesn’t compete with the game’s main or job worker threads.Since the main thread is the primary place where jobs are scheduled from, it’s very important to not delay the main thread. Doing so directly affects how many jobs enter the job system and thus how much parallelism can occur within a frame.With the main thread theoretically scheduling lots of jobs and the rest of the CPU cores executing those jobs, we should be able to maximize how much parallel work can be done on the CPU and allow performance to scale as the hardware changes. If we had more worker threads than cores, the operating system could context switch the main thread, and switch to a worker thread. 
Having an additional worker thread running might help empty your job queue faster, but it would certainly prevent new work from entering the queue, which ultimately has a larger negative effect on performance.There are a couple of potential problems with the above job scheduler approach that can lead to job system overhead. Let’s look at some examples.Main thread schedules an IJob (non-parallel job) with no dependencies:A job is added to the queue, and a worker thread is signaled to wake upA worker thread wakes upThe worker executes the jobThe worker checks for any more jobs to executeThe worker goes to sleep since there are no more jobsOnce the main thread signals using the job scheduler’s semaphore, one of the sleeping worker threads (not necessarily worker 0) will wake up. Waking up and context switching takes some time on the worker core. This is because, while the worker thread is asleep, the CPU core that the worker thread will end up running on was likely doing something – maybe running another thread spawned by the game or some other process on the machine that was using the thread.To enable threads to be paused and resumed later, a thread’s register state needs to be saved, instruction pipelines need to be flushed, and the switched-to thread’s state needs to be restored. Even signaling the thread takes time on the main thread’s core, since notifying which thread to wake up is handled by the operating system. Ultimately, this all means that work is being done on the main thread core and the worker thread core that is not our job, and thus is overhead we want to reduce.How quickly workers can be notified and how much time an individual job takes to run can also have an impact on the system. For instance, if you take the above use case but schedule two jobs instead of one:A job is added to the queue, and a worker thread is signaled to wake upThe second job is added to the queue, and a worker thread is signaled to wake upIn some order, but twice: A worker thread wakes upA worker executes the jobThe worker checks for any more jobs to executeThe worker goes to sleep since there are no more jobsIf the timing works out, you have two workers working in parallel on the job.However, if one of the jobs is too small and/or it takes too long to signal and wake up both workers, one worker might steal all the work in the queue, and as a result we’ve signaled a worker for no reason.This type of job starvation and wake sleep cycle can end up being quite expensive and limit the amount of parallelism the job system offers.You might be thinking, “Isn’t overhead from signaling threads and context switching a cost of doing business when dealing with threads in the first place?” You certainly aren’t wrong. But, while you don’t have direct control over how expensive signaling or waking up threads is, you can control how often those operations occur.One solution to avoid waking up workers for no reason is to only wake them when you suspect there are lots of work items in the queue for workers to take justifying the wake-up cost. This can be done by batching: Instead of signaling workers as soon as you schedule a job, add the job to a list and, at specific times, flush that batch of jobs into the job system, waking up an appropriate amount of workers at the same time.There is still a risk that the actual wake-up takes too long, the batched jobs are very small, or the number of jobs in a batch is just not very high. 
In general, the more jobs you include in the batch, the more likely it is to avoid overhead from waking up threads for no reason. Unity maintains a global batch which is flushed whenever a call to JobHandle.Complete() is called. So if you need to explicitly wait for a job to complete, try to do so as late and infrequently as possible, and generally prefer scheduling jobs with job dependencies to best control safe access to data.You might also be asking yourself, “If signaling threads and waiting for them to wake up/go to sleep is purely overhead, why don’t we keep our threads awake all the time looking for work?” When there are plenty of jobs in the queue, this can actually occur naturally. Unless the operating system deems the worker thread to be lower priority than some other work (or is explicitly time sliced and should be swapped to give other threads their fair share of CPU time – it depends on your platform), worker threads will happily keep working.However, as with the PartialUpdateA and PartialUpdateB functions we saw in part one, not all jobs are parallelizable and free of data dependencies. As such, you usually need to wait for some subset of jobs to complete before you can run others. As a result, we see bottlenecks in a job graph’s parallelism when there becomes fewer runnable jobs (jobs with no outstanding dependencies) than there are worker threads, resulting in some workers having nothing productive left to do.If you don’t everlet worker threads sleep, you can run into a handful of issues. When worker threads constantly check for new jobs and fail to find any, this is considered “busy waiting,” or work that’s wasteful and doesn’t progress the program. Keeping all cores running with maximum parallelism, but without progressing the game, is a drain on battery life. Not only that, if a core doesn’t have idle time, without sufficient cooling the CPU’s temperature will rise, leading to downclocking – running slower to avoid damage from overheating. In fact, on mobile platforms, it’s not uncommon for entire CPU cores to become temporarily disabled if they get too hot. For a job system, being able to use cores efficiently is very important, so there is a balance between putting workers asleep, and having them constantly loop looking for new jobs, hoping they get lucky.Another area that can generate overhead in the design above is the lock-free queue and stack. We won’t go into all the nuance of implementing these data structures, but one common trait of lock-free implementations is the use of a compare-and-swap (CAS) loop. Lock-free algorithms don’t use locking synchronization primitives to provide safe access to shared state, but instead use atomic instructions to carefully create higher-order atomic operations such as inserting an item into a queue in a thread-safe manner. However, perhaps unintuitively, lock-free algorithms can still prevent one thread from progressing until another is complete. They can also have secondary effects on the CPU instruction and memory pipelines, hurting performance scaling. 
(“wait-free” algorithms would allow all threads to always progress, but that doesn’t always provide the best overall performance in practice.)Here is a contrived example of adding a number to a member variable, m_Sum, with a CAS loop:CAS loops rely on the compare-and-swap instruction (here we use the C# Interlocked library abstracting platform specifics away), which “compares two values for equality and, if they are equal, replaces the first value.” Since we want Add() function users to not worry about this function potentially failing, a loop is used to retry if it fails because some other thread beat us to updating m_Sum.This retry loop is, in essence, a “busy-wait” loop. This has a nasty implication for performance scaling: If multiple threads enter the CAS loop at the same time, only one will ever leave at a time, serializing the operations each thread is performing. Fortunately, CAS loops generally do an intentionally small amount of work, but it can still have large negative impacts on performance. As more cores execute the loop in parallel, it will take each thread longer to complete the loop while the threads are in contention.Further, because CAS loops rely on atomic read-and-writes to shared memory, each thread generally requires its cache lines to be invalidated on each iteration, causing additional overhead. This overhead can be very expensive in comparison to the cost of redoing the calculations inside the CAS loop (in the case above, redoing the work of adding two numbers together). So, how high the cost is can be non-obvious at first glance.Under the 2017.3 job scheduler, when worker threads were not running jobs, they were looking for work in either a shared, lock-free stack or queue. Both of these data structures used at least one CAS loop to remove work from the data structure. So, as more cores became available, the cost of taking work from the stack or queue increased when the data structures had contention. In particular, when jobs were small, worker threads proportionally spent more time looking for work in the queue or stack.In a small project, I’ve generated deterministic job graphs that a typical game may have for its frame update. The graph below is composed of single jobs and parallel jobs (each parallelizing into 1–100 parallel jobs), where each job may have 0–10 job dependencies and the main thread has occasional explicit sync points where it must wait for certain jobs to finish before scheduling more. If I generate 500 jobs in the job graph, and make each take a fixed amount of time to execute (each portion of a parallel job takes this time as well), you can see that, as more cores are used, overhead in the job system increases.For jobs that take 0.5μs, once there are 20 workers, the frame updates as fast as not using the job system at all, and runs nearly twice as slow when using all cores on my machine. By default, all cores are used in Unity, so with 1μs jobs, there is almost no improvement in performance despite using 31 worker threads. This is a direct result of high contention on the lock-free queue and stack. Luckily, user jobs tend to be larger in size and can hide this overhead. However, the scaling issue is there, and small jobs are still common enough (especially for parallel jobs). 
Even when using larger jobs, your scheduling patterns and worker timing can cause large amounts of overhead due to contention with the global, lock-free stack and queue in the job scheduler.By now, you can see that there are a few areas our team needed to address to reduce overhead in the job system, both on Unity’s side and on the game creator’s side:Avoiding stalls on the main thread: Signaling to wake worker threads is expensive – keep this to a minimum.Modifying state on the main thread shared with worker threads is likely to lead to cache invalidations and potential busy-waiting.The main thread should schedule jobs frequently – avoid explicitly waiting on jobs to .Complete(). Prefer submitting jobs with dependencies instead.Avoiding stalls on worker threads:Worker thread efficiency directly impacts parallelism. Avoid contending on shared resources where possible.Busy-waits on worker threads will drain battery life and can result in downclocking due to increases in temperature.While Unity can’t change how many jobs users submit in their games, there are a decent number of issues that our engineers can tackle with a different job scheduler approach. In the 2022.2 release, the job scheduler, at a high level, breaks down into a few basic components:Array of worker threadsArray of queues for jobsArray of semaphoresThis is very similar to the previous job scheduler. However, the main difference is the removal of the shared state between the main thread and worker threads. Instead, we make the queues and semaphores (or futex on platforms that support it) local to each worker thread. Now, when the main thread schedules a job, it’s enqueued into the main thread’s queue rather than a global queue.Similarly, if a worker thread needs to schedule a job (e.g., a job schedules a job in its Execute), that job is scheduled in the worker’s own queue rather than in the main thread queue. This reduces memory traffic, since workers reduce the frequency of invalidating cache lines when they write to a queue. As such, workers don’t read/write to all the different queues at the same frequency.The worker loop has also changed, now that there are more queues to work with:Workers look in their own queue for work and only look at other worker queues when theirs is empty. Since workers prefer their own queues for dequeuing and enqueuing work, the amount of contention on any one queue is reduced.Another difference is how threads are signaled to wake up. Worker threads are now responsible for waking up other worker threads, and the main thread is responsible for ensuring that at least one worker thread is awake when it schedules a job.This change in responsibility allows the main thread to remove excessive overhead since it no longer needs to be solely responsible for waking threads when parallel jobs are submitted. Instead, the job system performs tracking to know if it needs to wake any workers at all. The main thread can ensure a worker is always awake to make progress on jobs and when workers wake and find a job in its own queue or another’s, workers can signal other workers to wake up and help empty the queue if needed.The queue separation for workers also provides some interesting leeway for configuration and optimizations, which our team is continuing to add to and improve on. In 2022.2, users should see reduced cost on the main thread to wake up worker threads and improved throughput of jobs on worker threads, regardless of how many cores their platform has. 
Additionally, while Unity has not backported the queue separation to 2021.3 LTS, we have brought back the design change to make worker threads responsible for signaling each other rather than the main thread solely. High job system overhead on the main thread due to signaling the global semaphore should no longer be an issue as of 2021.3.14f1.If you have questions or want to learn more, visit us in the C# Job System forum. You can also connect with me directly through the Unity Discord at username @Antifreeze#2763. Be sure to watch for new technical blogs from other Unity developers as part of the ongoing Tech from the Trenches series.

>access_file_
1027|blog.unity.com

Eyes, hands, simulation, and samples: What’s new in Unity XR Interaction Toolkit 2.3

The XR Interaction Toolkit (XRI) is a high-level, component-based interaction system for creating VR and AR experiences. It provides a common framework for interactions and streamlines cross-platform creation. This update adds three key features: eye gaze and hand tracking for more natural interactions, audiovisual affordances to bring interactions to life, and an improved device simulator to test in-Editor. To help you get started, let’s explore each addition in more detail.For a more in-depth breakdown of the update, check out what’s new in XRI 2.3, or explore the sample project.XR developer and founder of LearnXR.io, Dilmer Valecillos, has put together an awesome video tutorial on XRI 2.3:Along with XRI 2.3, we’re shipping the Unity XR Hands package in prerelease. XR Hands is a new XR subsystem which adds APIs to enable hand tracking in Unity. It includes built-in support at release for OpenXR, with support for Meta platforms soon to follow. In addition, external hardware providers can pipe in hand-tracking data from their existing XR SDK by following the provided API documentation.This release of XRI includes the Hands Interaction Demo, a sample package showcasing a hand interaction setup where you can switch between hands and controllers without changing anything in your scene on-device. Using this functionality, your content may start with a standard controller setup, but transition seamlessly to hands for specific tasks or natural interactions in gameplay.XRI 2.3 also supports natural poking interactions through the XR Poke Interactor. This allows you to poke using hands or controllers on 3D UI or XRI-enabled UGUI Canvas elements.New headsets like the HoloLens 2, Meta Quest Pro, and PlayStation® VR2 include sensors to track where users are looking. Gaze-based interactions can help you build XR apps that feel more natural and provide an additional way to engage with content. To support this type of interaction, we have introduced the XR Gaze Interactor, driven by eye-gaze or head-gaze poses. You can use this interactor for direct manipulation, like hovering or selecting by dwelling on interactables.Since we generally don’t recommend that apps be controlled entirely with eyes, we have introduced an additional form of controller and hand-based interaction assistance to help users select specific objects: the XR Interactable Snap Volume. This component complements the gaze interactor, as it allows for snapping interactions to a nearby interactable when aiming at a defined area around an object. Snap volumes can also be used without the gaze interactor to enable easier object selection for users.Tobii, a global leader in eye-tracking technology, assisted with the concepts and research. If you’re interested in learning more, you can browse their knowledge base of eye-tracking concepts.Using hands for interaction is different from using controllers in that there’s no haptic or tactile feedback to confirm when an interaction takes place. The affordance system, a set of performant components that animates objects or triggers sound effects in reaction to an object’s interaction state, helps mitigate this feedback gap. This system is built to work with any combination of interactor and interactable in both new and existing projects.The new XR General Grab Transformer reduces the complexity of the hierarchy and allows one general-purpose transformer to support both single and two-handed interactions on an interactable, rather than multiple grab transformers. 
It also enables two-handed scaling, letting you scale objects up and down by moving your hands apart or together, similar to zooming in and out on a mobile phone.We’ve also added an Interaction Group component. This behavior allows a developer to group interactors together and sort them by priority, which allows only a single interactor per group to interact at a given time. For example, when a Poke, Direct, and Ray Interactor are grouped together, poking a button will temporarily block the other interactors from interacting with the scene. This can keep you from accidentally grabbing something nearby when you’re working in the distance, and prevents rays from shooting into the scene while you’re grabbing or poking an object up close.Testing XR apps on a headset is important, but testing in-Editor helps reduce iteration time. In this release, the XR Device Simulator received a major usability update with a new onscreen UI widget that makes it easier to see what inputs drive the simulator, and which ones are currently active.New simulation modes have also been added so you can toggle between commonly used control modes. At startup, the device simulator activates the new first-person shooter (FPS) mode, which manipulates the headset and controllers as if the whole player was turning their torso. You can then cycle through the other modes to manipulate individual devices: the headset, the left controller, and the right controller. To use the XR Device Simulator, import the sample from the Package Manager.It’s been a long time coming, and our updated sample project is finally here. It showcases the array of XR experience building blocks you can use in XRI 2.3. The project is divided into stations that help you understand how each major feature of XRI works, and includes both simple and advanced examples for each. You can access the sample project on GitHub and use it to kick-start your next XR app.Though it’s still early days for eyes and hands in the XR Interaction Toolkit, we’re always working to make building expressive XR experiences easier. As we head towards XRI 2.4 and beyond, we would appreciate your feedback. We’d also love to see what you build with these tools, so feel free to include the hashtag #unityXRI when posting on social media.

>access_file_
1028|blog.unity.com

The water technology behind Avatar: The Way of Water

Wētā Digital – now part of Unity – developed many of the tools and solutions used to bring the world of Avatar: The Way of Water to life. Here, we take a look at the CGI technology behind the water. If you’re interested in being among the first to access some of the tools used in the film, you can register for the Unity Wētā Tools beta through our website.James Cameron is no stranger to working with water. Titanic aside, in 2012 he made a record-breaking solo dive, piloting a submarine to the bottom of the Mariana Trench in the Pacific Ocean: Earth’s lowest point at nearly 11 kilometers deep. As he said in the resulting 2014 documentary, Deepsea Challenge, “Down here you feel the power of nature’s imagination, which is so much greater than our own.”It must have been truly remarkable, then, seeing as the world of Pandora and its stunning visuals ultimately came from Cameron’s own imagination.Translating Cameron’s vision, which for the sequel included the new reef village of the aquatic Metkayina clan, required extensive use of visual effects – especially for the dominant water setting.The tools and solutions used to create the film’s VFX – including the award-winning water effects – were developed by Wētā Digital, now a division of Unity.To ensure that the interactions between the characters and water elements were as realistic as possible, a team of experts, including Unity and Wētā’s water simulation VFX specialists Alexey Stomakhin, Steve Lesser, Joel Wretborn, and Sean Flynn, were brought together to form the “Water Taskforce”. This team’s water toolset was recently recognized with a win at the Visual Effects Society (VES) Awards, with the Emerging Technology Award.Extreme attention to detail saw the taskforce conduct extensive research and experimentation in collaboration with New Zealand’s National Institute of Water and Atmospheric Research (NIWA) to find the best approach to creating CGI water. This included taking into account the effects of tides, wind, and the sea floor on aquatic environments.Avatar: The Way of Water required water effects for 2,225 shots, some taking up to eight days of simulation to achieve the high resolution needed.There were also numerous scenes where water interacted with over 50 creatures in a single shot. This presented the challenge of needing simulations to be accurate at scale, from large domains for bigger creatures, to submillimeter resolution for thin film on skin.As it was not computationally feasible to create a single-representation water system, the toolset was developed with a number of distinct solvers to keep compute times to a minimum.“The Loki water state machine was crucial for delivering the sheer volume of large-scale water shots in this movie. In a typical VFX-heavy movie, water shots of this complexity are few and far between and require many iterations and passes from very experienced artists. In contrast, our state machine approach was able to deliver great results after just a single pass, even by artists who had just entered the industry.” – Sean Flynn, simulation lead, Unity x Wētā DigitalA majority of the water tools developed by the team sit within Wētā’s proprietary simulation framework, Loki. This piece of tech includes solvers for multiple water states, including procedural water waves, bulk water, spray, mist, hero bubbles, diffuse bubbles, foam, capillary surface waves, thin film, and residual wetness.State machineMany of these solvers sit within the Loki state machine – an airborne spray system. 
The water states are coupled with the surrounding air, with transitions between states handled in a mass- and momentum-conserving way.Rather than a one-size-fits-all approach, the Loki state machine allows multiple solvers to run in tandem. Each solver is optimized for the level of detail required by its respective state, such as bulk water, spray, and mist. This helps keep large-scale water simulations efficient while still capturing the very fine droplet interactions required by spray and mist.All of the states including the surrounding air are completed in a single simulation pass. As all solvers are computed with proper physical interactions between them, this is what helped to create such natural and realistic water interaction throughout the film.During SIGGRAPH 2019, a practical approach for modeling close-up water interaction with characters was presented, with a focus on high-fidelity surface tension and adhesion effects as water moves over and drips from skin. Using a scene from Alita: Battle Angel (a screenplay also written by Cameron), the team showed how this method allowed for a resolution of effects that was performant enough – on the scale of a fraction of a millimeter – to cover a whole character with a layer of water.The approach was to adapt an existing particle-in-cell (FLIP/APIC) solver to capture small-scale water-solid interaction dynamics. This technique was then advanced during the production of Avatar: The Way of Water to handle any sequence that involved characters emerging from water.“This was not a cheap solution, as we had to simulate water dynamics at sub-millimeter scales. The results would often take days to compute. We had to ensure our solver was scalable, robust and reliable enough to produce physically plausible visuals out of the box, with minimal tuning required from artists.” – Alexey Stomakhin, principal research engineer, Unity x Wētā DigitalTo achieve believable dynamics in underwater scenarios – for example, when characters breathe underwater in Avatar: The Way of Water – the approach to underwater bubbles was to simulate them together with a narrow band of water around the region of interest. The bubbles themselves would be represented in two parts: a hero and diffuse counterpart.The hero counterpart captures bigger bubbles with more explosive and turbulent behaviors. It utilizes an incompressible two-phase Navier-Stokes solve on a Eulerian grid, with the air phase represented by FLIP/APIC particles to facilitate volume conservation and accurate interface tracking.The diffuse counterpart captures the motion of smaller bubbles below the resolution of the Eulerian grid. The team has developed a novel scheme for coupling diffuse bubble particles with bulk fluid that could also be applied to other submerged, porous objects such as sand, hair, and cloth.To enhance the visual detail of a water surface simulation, the team from Wētā Digital and IST Austria developed a method of post-processing that took a simulation as an input, and increased its apparent resolution by simulating detailed Lagrangian water waves on top of it.Linear water wave theory was extended to work in non-planar domains with Lagrangian wave packets attached to spline curves that would evolve over the bulk fluid surface. 
This method produces high-frequency ripples with dispersive wave-live behaviors, customized to the underlying fluid simulation.A technique was developed for the realistic movement of underwater bubbles – created by movement in the water – reaching the water surface and converting into foam. This was important for nearly all of the water scenes in Avatar: The Way of Water.Grid-based Navier-Stokes simulators – usually reserved for capturing large-scale motion such as bulk fluid – are inherently limited by their grid resolution, making this method impractical for small-scale phenomena like spray and mist from breaking waves. These whitewater effects are usually simulated as independent Lagrangian particles.“One key aspect of our whitewater method is the interaction of two solvers: a grid-based fluid solver coupled with bubbles, and a SPH solver for foam constrained to the water surface. The declarative solver framework in Loki is what makes building and supporting these complex systems possible in production without having to develop new solvers from scratch.” – Joel Wretborn, senior research engineer, Unity x Wētā DigitalThe key aspect most of the existing solvers neglect are the collective effects: groups of bubbles rise faster than single bubbles due to their combined buoyancy, and the collection of many bubbles can have a significant impact on the motion of the water.The new technique addresses this limitation by simulating bubbles two-way coupled with the surrounding fluid. This effectively captures collective bubble effects, and creates a more connected look between bubbles and the motion of the fluid. As bubbles reach the surface they transition into "wet" foam particles constrained to the water surface, discretized with smoothed particle hydrodynamics (SPH). In the end this created believable whitewater dynamics in both close-up and large ocean shots.The simulation technology used by the Water Taskforce was created by present and former colleagues at Wētā Digital, as well as friends from Wētā FX and academic institutions, including: Alexey Stomakhin, Joel Wretborn, Kevin Blom, Gilles Daviet, Steve Lesser, John Edholm, Noh-Hoon Lee, Eston Schweickart, Xiao Zhai, Sean Flynn, Andrew Moffat, Gary Boyle, Tomas Skrivan, Andreas Soderstron, John Johansson, Christoph Sprenger, Ken Museth, and Chris Wojtan. Learn more about Unity Wētā Tools beta.

>access_file_
1029|blog.unity.com

AAA vets share advice on setting up a scalable DevOps toolchain

Monster Closet Games is a small studio with big ambitions – and the experience to match. Most of the core team has been in the industry for 20-plus years, and they’ve worked on a number of gaming’s biggest franchises, from Assassin’s Creed and Prince of Persia to Far Cry and Halo. They’re currently developing an online multiplayer title codenamed Project Shrine, with plans to launch on PC and current-gen consoles.“High-level, it’s a third-person co-op dungeon raider,” says Monster Closet CEO Graeme Jennings. “You and your group get together, build synergies between your characters, and raid dungeons for treasure. It’s about working together as a team.”Teamwork is at the heart of Monster Closet’s approach to game development, and the studio plans to stay tight-knit and focused. “I’d rather have 40–50 developers who love working together, who love the way we work, and who build great games because of that,” says Jennings. “I have a genuine belief that great teams, with the right tools, can build great games.”Monster Closet’s artists and developers are used to working with a powerful tech stack of proprietary solutions. Starting over from scratch meant this wasn’t an option, so in the first few months, the team carefully curated a toolchain that would scale with their ambitions.For Project Shrine, Monster Closet is accelerating production with Unity’s engine-agnostic DevOps solutions and automations, including Unity Version Control for source control and Backtrace for error tracking. We interviewed Monster Closet’s lead online programmer, Patrice Beauvais, and CTO, Thomas Félix, to learn about what they’ve been creating and how they built a tech stack designed to scale.What did you consider when you started building your tech stack?Thomas Félix: A few of us have had different experiences with live games, and we wanted to have a solid DevOps foundation that could support that in the long term. Even though GAAS [games as a service] is not at the core of our game, we wanted to make sure we had a powerful tech stack that would help us release and iterate quickly.We all had experience with Perforce, but we’re not necessarily big fans – it works well enough, but we had been doing things that weren’t really meant to be done with it. We were also looking at Git as a solution, but then we found Unity Version Control.On paper, Unity Version Control mixed the great approach you get with something like Git, but also something much more powerful like Perforce to manage data. We were seduced by Unity Version Control’s task branch workflows; after about six months of evaluations, we decided to give it a try. At that time, we were around six or seven people. Because the team was growing slowly, it was a nice, smooth ramp-up. We’re now at around 43 team members, and so far, so good.What was your process for testing this version control system?Patrice Beauvais: For us on the tech side, we didn’t want to have half our project on Git and half on Perforce. 
We know how much of a burden it is to have to use and maintain two different source code management tools – it’s common with live service games where the online systems and game data aren’t necessarily fully integrated.And a team of your size just couldn’t support that type of approach – the fact that Unity Version Control can facilitate both of those workflows is helpful?Patrice Beauvais: Totally.What kinds of problems have you encountered previously using two different version control systems?Thomas Félix: In game development, you always need to build customizations for the tools you use, no matter how good they are. Unity Version Control is a great example – even though it works well, we still found ways we wanted to tailor it to our workflows. If you have to support two source control systems, you double that work, which is always painful. Someone who’s good with Git might not be as familiar with Perforce, and vice versa. Training people takes twice as long.Patrice Beauvais: For me, the worst part is integrating data between two different version control systems. If you’re using Perforce but your data library is stored in Git, that data will need to go back into Perforce, so the two need a way to interface. Even though there are many solutions, these interactions aren’t really meant to happen, and sometimes you lose the project history. A bigger team can make it work, but I’m not going to spend six months building a solution to migrate data from Git to Perforce.It sounds like your team has used many different version control systems over the years. What are the benefits and challenges of some of the solutions you’ve explored?Thomas Félix: Let’s start with Perforce – it’s super resilient, it manages data very well, and it’s not that complex for nontechnical team members. You don’t find that anywhere else, really, except with Unity Version Control. On the other hand, the big monorepos you see in Perforce aren’t really suitable for game development – fast integration, multiple branches, that kind of thing. You can manage with Perforce, but it’s far from ideal, especially if you want to build a robust CI/CD pipeline.Patrice Beauvais: Git’s UI is great for programmers, but I probably wouldn’t ask an artist to work with it. It’s not ideal for managing large files and data, and it doesn’t support locks natively very well, yet.Thomas Félix: Unity Version Control is a better solution in many regards – the UI is tailored for content creators, so it’s great for usability. We see Unity Version Control as the perfect marriage of Git and Perforce.Programmers usually want to be in Git, and you can get pretty much the same workflows in Unity Version Control. For nontechnical content creators, it’s easy to submit their data, which solves one of the biggest problems teams run into with source control.Data loss is the worst thing that can happen to us. Code is easy to handle in every source control solution, but data is always tricky. We cannot afford to lose work, and each mistake made on the data side means paying for it a thousand times later on. We try to be very, very careful with that. With Unity Version Control, it’s a win for both our programmers and content creators.Do you have any best practices you can recommend for maintaining build integrity?Thomas Félix: For a small company like ours starting out, we knew we couldn’t afford to have broken builds because we submitted bad data or code to the main branch.With Unity Version Control, we never work in the main branch. 
We’re always in control of it, it’s always stable, and the mergebot actually does most of that work for us. That really resonated with us when we were trying it out, and it’s one of the first things we put in place, even when we were just five people working on the main build. It’s worked really well: The main branch is almost never down, and it’s been like that for almost two years now.How does Unity Version Control handle speed when working with large files and switching between branches and workspaces?Thomas Félix: In terms of task branches and switching back and forth, that works well, too. It takes a bit of time for people to get used to this workflow – task branches are a new concept to many people, and it’s maybe not as fluent immediately for artists as it is for programmers.That being said, every week – not every day – we do catch small problems through mergebots and our CI/CD processes, but they never enter the main branch or break the build. It takes a bit of time to get used to, but working in one branch will always be quicker than working across two – not by much, but if you step back and look at your pipeline as a whole, you start to realize it’s a much, much better way of working. At least for us, as a small-to-medium-sized company, it’s perfect.So there’s a culture and learning change you have to make to move to continuous deployment, but it seems like you’re saying you’ve already caught a lot of bugs or other potential issues before they even hit main.Thomas Félix: Totally. I would never go back to one-branch development. A team like us just can’t afford to spend days debugging or fixing problems that hit main.When we interviewed Apocalypse Studios, they discussed the “culture shift” that task branch workflows can require. They were using Perforce before Unity Version Control and talked a lot about branches versus streams. What’s your take on that?Thomas Félix: Branches and streams are quite different to me. If Unity Version Control didn’t exist, we could probably build something around streams and try to get the same thing going, but it would be complex and error-prone. In Unity Version Control, it’s much easier and much safer, because branching is what it’s built for.In Perforce, streams are the equivalent of tasks. If you’re super technical, you can make it work, but I would never put that in the hands of artists. With Unity Version Control, currently we have more than 1,000 branches – most of them are archived, and we have about 10–15 open at any given time. I’m not sure I’d like to have 1,000 project branches in Perforce.What challenges do you anticipate as you move further along in development? What challenges have you faced already?Patrice Beauvais: As we mentioned, people aren’t immediately used to the task branch way of working. For artists, it’s really new to them, so we’re careful to explain how it works and why we’re doing it.Thomas Félix: That’s true. People weren’t resistant to it or anything, but it’s definitely a cultural switch. Anyone looking to switch to Unity Version Control, like we did, needs to take that into account. It’s a better way of working, but you have to be willing to think outside the box. We started fresh, from pretty much nothing – no office, no infrastructure, and a very small team – so it was a little easier for us than it might be for other studios. Building your infrastructure in the cloud sounds cool, but it comes with challenges in terms of iteration time, costs, setup, security…. 
In the end it’s a win, but it took us some time to get a reliable workflow up and running.You’re also using another of Unity’s engine-agnostic solutions: our verified solutions partner, Backtrace. Can you tell us what your error tracking pipeline looks like?Thomas Félix: We use Backtrace to track every single bug in most of our applications – the first ones, obviously, being the game and the Editor. We mentioned before that we built some tools around Unity Version Control – Backtrace is integrated there, too.It didn’t take long to set it up, and it gave us access to some top-class tools, dashboards, and workflows. We were able to get a lot of the things we had in place at previous companies up and running pretty easily. After being operational for around six months, we already had visibility on all the crashes in the game, the Editor, and our tools. It wasn’t something I expected to get so early when starting a new studio, to be honest.Patrice Beauvais: It’s a super good tool. At Ubisoft, I worked on a proprietary solution like Backtrace for two or three years. Backtrace is really feature-forward – it’s even faster than what I was working on, and was easy to implement. Again, we did add our own customizations for custom data, and worked to integrate it with our server, which is on Linux.Thomas Félix: We were quite impressed by the time it took to set up Backtrace. Two or three days and we were already receiving crashes, so we decided to move forward.What did you do to ensure the process of implementing Unity Version Control went smoothly?We’ve shipped a lot of big games, and we try to use that experience to think about how we can apply it in new contexts. That’s why we ended up going with Unity Version Control, and with Backtrace as well. The tricky part is making sure we don’t invent problems we don’t have – we’re not a 1,000-person studio anymore!We’re always trying to find a balance between how we leverage our experience while reminding ourselves that we’re not trying to build the next big AAA game. We still want to make something great, and to do that, we need the best workflows – and Unity Version Control fits perfectly.What was your process for testing this version control system?Thomas Félix: The tricky part was making sure we could put it in the hands of artists, both in terms of the UX and data integrity. We worked with several artists on the team to make sure they understood how to use it. It was really important to us to nail data management for our project. The more people we added to the team, the better the feedback we got – people were happy, and we knew we were onto something.How are you using Unity Version Control’s Gluon workflow for artists?Thomas Félix: We do use Gluon, but for something else, what we call the raw data – data that’s not tied to the engine. Let’s say you’re an artist and you’re modeling a mesh: You’re using the raw data, the source file, in something like Blender. This doesn’t have to reach the engine; only the data you export from it does. This data is managed in a task branch, but we manage the source files in Gluon.These files can be really heavy – character artists using tools like Zbrush can generate files that are 2, 3, 4 GB per asset, if not more. You don’t want programmers having to sync 1 TB worth of original character meshes, so we manage those in Gluon using partial workspaces. Character artists only synchronize character files, modelers will only synchronize model files, and it’s the same for audio, textures, and so on. 
It’s all stored in a separate repository, away from the task branch workflow.So, to recap, you’re using Gluon for scenarios where you’re working with huge files so someone doesn’t have to download the whole repo – they can just use a partial workspace.Thomas Félix: Exactly. It’s an archived version of the original data, so we don’t use task branches for that. We don’t need to have a task branch for those materials, as long as creators submit their latest work every once in a while.What advice do you have for smaller studios looking to scale up and tackle ambitious projects, like you’re doing?Thomas Félix: That’s a good question. From day one, you need to know where you want to go. For us, we started with a small team, and we knew we wanted to grow, but we didn’t want to scale to 1,000 employees – even 200 isn’t our goal. We made many decisions – decisions that we’re really happy with! – that we might not have made if we had different ambitions.Building your infra in the cloud does make it easy to scale – just be careful, because it can cost you an arm and a leg. Always try to be in control of your workflow. If something doesn’t work, do the work to understand why. Make sure you have strong foundations, basically.Looking to optimize your game development pipeline? Get started with Unity DevOps, built to work with any engine.

>access_file_
1030|blog.unity.com

Mobile gaming at all hours: 7 facts about how users engage with their phones

People spend more time on their phones than ever before, 5 hours per day according to data.ai. The more interesting question for advertisers is, where and how are those 5 hours spent? We ran a survey using our proprietary market research solution to discover how users spend time on their mobile phones in 2023.It’s clear that users spend a lot of their time on their phones in games. In fact, 70% of users enjoy playing games the most compared to other mobile activities, 73% of users are playing games while watching TV, and the most amount of users (30%) check their games last before going to bed.Here are insights into how users are mobile gaming and more key findings about how users spend time on their phones: 1. Out of all the activities they can do on their phones, the majority of users (70%) enjoy playing games the most70% of users like playing games on their phones, 55% like scrolling through social media, 46% like texting and calling friends and family, 28% like reading the news, and 25% like answering emails. Despite the nearly endless list of activities today, the majority of users still place mobile gaming at the top every time - it's a timeless activity.2. Users spend the most phone time on the weekends (56%), with Gen Z the most likely to do so (64%)Naturally, users are likely to spend more time on their phones when they have free time, such as the weekends. That said, 44% of users say they spend more time on their phones on the weekday, which isn’t that much less. Schools often don’t allow phone usage, which is a key reason why Gen Z is likely to be more active on the weekends.3. Users spend the most time on their phones in the evening (37%)The evening is often a time for users to wind down from the work day and catch up with what’s been happening on their phones - checking the news, responding to texts, scrolling through feeds. Users spend the least amount of time on their phones in the morning, with only 14% indicating such.While this trend is consistent across Millennials, Gen X and females, Gen Z and males are the most active on their phones during the afternoon (35% and 34%) and evening (35% and 35%). For Gen Z, this is likely because they turn to their devices as soon as school gets out.4. Most users spend time on their phones while watching TV (71%)The average attention span of a human is 8.25 seconds, .75 seconds lower than a gold fish’s, which is why users are beginning to engage with two devices at once. This presents a strong opportunity for performance based CTV advertising - when users see an ad on their TV, they can immediately download your app or look up your product from their phones, without having to remember to do it later.5. Users are most likely to be gaming while watching TV (73%)Across all demographics, users who spend time on their phones while watching TV are most likely playing games (73%), followed by scrolling through social apps (59%). Users are gaming from their phones at all times, even when they’re engaging with other devices, further reaffirming the opportunity for advertising on these channels.6. The first apps users check in the morning are text messaging apps (32%)In total, users check their texts, followed by social apps (26%) and email (22%). Gen Z and Millennials follow this trend closely. Gen X, however, checks email first (31%), followed by texts (25%), social (21%) and games (21%). Priorities in the morning often differ based on responsibilities for the day and how much time users have to get ready.7. 
Games are the last apps users check before going to bed (30%)30% of users check their games last, 28% of users check social media last and 22% check their texts last. This means that users go to sleep with mobile gaming on their mind above anything else, indicating the value mobile gaming holds to those who play.We use our phones a lot, but, until now, did we really understand how? The above stats give you a clearer picture of how and where our time on our phones is spent.

>access_file_
1031|blog.unity.com

3 reasons why Unity at GDC is all about you

Wherever you are in the game development lifecycle, Unity is here to help you do more. This month at GDC 2023, we’ll be demonstrating all the ways our solutions work together to make Unity the leading real-time platform for creating, running, and growing games.Connect with us and find what you need at any stage – from making and publishing your game to expanding your player base and building a successful business.Read on to learn why our presence at GDC 2023 is all about helping you make truly great games.Get the scoop on the latest tech and discover how Unity creators are succeeding. These are just a few of the sessions you won’t want to miss.Seasoned gamedev Will Armstrong is ready to share how you can boost your productivity developing games in Unity. Join him on Tuesday, March 21 to learn about best practices for coding standards, profiling performance, debugging and testing, applying design patterns, and more.This session gives you the chance to ask Unity your burning questions about game creation, tools, services, and functionality. Senior program managers, engineers, and other staff host this Ask Me Anything as a cap to the week’s Unity Developer Summit. No question is too big or too small.See what’s new in Unity’s 2022 LTS and 2023 Tech Stream, including the latest on graphics, multiplayer, and the Entity Component System (ECS).Visit the Unity booth on the show floor (S327) to meet fellow developers at our creator stations and check out hands-on demos of their projects (see who you can expect to find there below). You’ll also be able to chat with Unity specialists to learn more about Unity tools and services. Whether you’re looking for answers, feedback, or want to connect with the team one-to-one, we can’t wait to see you there.Breachers is an upcoming 5v5 tactical VR FPS by Triangle Factory. Climb, shoot, and strategize your way to victory.From Steel City Interactive, Undisputed is an authentic boxing game that features true-to-life visuals, bone-jarring action, and licensed boxers.Death Carnival is a fast-paced arcade shooter from Furyion Games with adrenaline-fueled combat in single-player, online co-op, or PvP.A handful of Unity’s GDC 2023 sessions will lift the hood on popular Made with Unity releases to show you how they came together. Here are just a few highlights from the schedule.Join us for a guided panel discussion with developers from Intercept Games, who will talk through the process and challenges of creating fully spherical planets in Kerbal Space Program 2.Key members of the Second Dinner team discuss how they started out as a two-person studio and grew to launch MARVEL SNAP, one of the biggest mobile games of 2022. In this hands-on session, Nifty Games’ vice president of engineering will share learnings from shipping NFL Clash and NBA Clash to global audiences using Cloud Build, Remote Config, and Cloud Content Delivery. You’ll also hear insights on sustaining and growing a mobile player base.Bookmark this page to keep up with the latest on Unity at GDC 2023, and stay tuned for updates about the event. If you haven’t yet bought your pass, use code “UNITY10” for 10% off GDC All Access, Core, or Summits registrations.

>access_file_
1032|blog.unity.com

Made with Unity Monthly: February 2023 roundup

Curious how others are creating with Unity? Check out this roundup of the latest Made with Unity news and discover what the Unity community has been up to.

#MadeWithUnity games reached some exciting milestones in February. To start, Intercept Games’ Kerbal Space Program 2 has been released in Early Access on Steam, if you’re looking for something out of this world. Ready to get your puzzle on? NAHUAL by Thirdworld Productions may be the quirky brainteaser you’ve been waiting for.

Congratulations to all of the games that won an Academy of Interactive Arts & Sciences’ D.I.C.E. Award! As a follow-up to last month’s finalists list, you can find the full list of winners in this IGN article. Esteemed Made with Unity winners include:
TUNIC, Outstanding Achievement for an Independent Game
MARVEL SNAP, Mobile Game of the Year
OlliOlli World, Sports Game of the Year

We share new releases or milestone spotlights every Monday on the @UnityGames Twitter account. Be sure to give us a follow and support your fellow creators.

Tuesdays are dedicated to #UnityTips on Twitter. Here are a couple we found particularly helpful in February: @SunnyVStudio dropped some major grout for your tiles if you start to see screen tearing, and @jamesebrill is on a roll with getting stones rolling – learn how to breathe life into 2D rolling stones. Keep tagging us using the #UnityTips hashtag.

We’re stunned by what you all create every week, and you certainly kept the amazing projects coming in February. If we missed something from you, be sure to use the #MadeWithUnity hashtag next time you share. Twitter’s @cptnsigh gave us all virtual whiplash with a strikingly smooth fast-paced FPS, and @canopy_studio healed our whiplash with a relaxing waterfall. Then, we snuck around with a bow and arrow and completed puzzles in @CatthiaGames’ Cynthia: Hidden in the Moonshadow. Finally, @PhillipWitz’s adorable frog showed off its new skating skills. On Instagram, @papetura warmed our cups with a dose of fiery cuteness, and now we’re certainly ready for spring thanks to @Studio_unjenesaisquoi, @umanimation1, and @focus_entmt. @Cornf_blue went super speed with some insane parkour and @JfvmYt was busy expanding their castle! We’re so excited for the #MadeWithUnity year ahead (and GDC 2023 later this month), so keep adding the hashtag to your posts to show us what you’ve been up to.

Following the wrap of Global Game Jam 2023, we took to Twitch for a Let’s Play stream that saw members of our team play a selection of 10 games created during the event and solicited from the community. Later in the month, we sat down with Whales and Games to discuss Townseek in an all-new Creator Spotlight stream. Then, we released another Creator Spotlight clip on the use of Timeline in As Dusk Falls. And finally, as a callback to the 2022 Let’s Dev 101 session on animation, we posted the full stream to YouTube. Don’t forget to follow us on Twitch and hit the notification bell so you never miss a stream.

On February 23, we hosted our second Dev Blitz Day of the year, focusing on scripting. The event was held in both the forums and on the Discord server. Throughout the day, we saw more than 100 threads, and we’d like to thank everyone who participated. Keep an eye on our forum announcements and Discord for updates about future Dev Blitz Days.

Just because you’re a “one-person team” doesn’t mean you can’t call in extra help! Check out how Thomas Sala, creator of The Falconeer, was able to fill in skill gaps using assets from the Unity Asset Store.
Assets were able to take the game to another level – from adding realism to gameplay to localizing in 13 different languages. Similarly, here are three more stories from studios that were able to save time and money by using assets.

Taking to social media, here’s a roundup of some of our favorite creator showcases from Twitter in February:
Clay Outdoors Pack | Unicorn One
Volcano | NatureManufacture
Megabook 2 | Chris West
Love/Hate | Pixelcrushers

Don’t forget to tag the @AssetStore Twitter account and use the #AssetStore hashtag when posting your latest creations.

Last but not least, here’s a non-exhaustive list of Made with Unity titles released in February. Do you see any on the list that have already become favorites, or notice that something is missing? Tell us about it in the forums.
Birth, Madison Karrh (February 17)
The end is nahual: If I may say so, Thirdworld Productions (February 17)
PlayStation® VR2 (PS VR2) releases (February 22): Cities: VR – Enhanced Edition, Fast Travel Games; Cosmonious High, Owlchemy Labs; Demeo, Resolution Games; The Last Clockwinder, Pontoco; The Last Worker, Oiffy, Wolf & Wood Interactive Ltd; The Light Brigade, Funktronic Labs; Synth Riders, Kluge Interactive; The Tale of Onogoro, Amata K.K.; Tentacular, Firepunchd Games UG; WHAT THE BAT?, Triband; Zenith: The Last City, Ramen VR
Sons Of The Forest, Endnight Games Ltd (February 23)
Clive ‘N’ Wrench, Dinosaur Bytes Studio (February 23)
Kerbal Space Program 2, Intercept Games (February 24)
Phantom Brigade, Brace Yourself Games (February 28)
Rytmos, Floppy Club (February 28)

That’s a wrap for February! Want more community news as it happens? Don’t forget to follow us on social media: Twitter, Facebook, LinkedIn, Instagram, YouTube, or Twitch.

>access_file_
1033|blog.unity.com

Inspecting memory with the new Memory Profiler package

In this blog, we will cover five key workflows in the new Memory Profiler package that you can use to diagnose and examine memory-related issues in your game. These are:
1. Monitoring your application’s memory pressure
2. Seeing the distribution of Unity Objects
3. Detecting poorly configured assets
4. Locating unintentional duplicate objects
5. Comparing memory captures to validate optimizations
For an introduction to the Memory Profiler, please see the recent blog, Everything you need to know about Memory Profiler 1.0.0.

This first workflow monitors how demanding your application is on a device’s memory resources. This process is critical to determining whether your application is at risk of performance problems, or even of being evicted and terminated by the operating system, due to consuming too much memory.

To begin, we have a build of an example game running on the target device. Naturally, it is essential that we take a memory capture of the game, running on the actual hardware, to see how it uses the device’s available memory resources. Furthermore, memory does not behave in the same way in the Unity Editor as it does in the Unity runtime, so taking a memory capture of the Editor in Play Mode is not a good representation of how a game’s memory will look on a device. (Taking a memory capture of the Editor is appropriate when developing tools for the Editor, such as custom Editor windows.)

After navigating to the stage in our game where we want to analyze the memory usage, we attach the Memory Profiler to our device using the dropdown in the Memory Profiler window. We can then take a memory capture, as shown below. After opening this capture, the Memory Profiler displays our application’s memory footprint at the top of the Summary page as “Memory Usage On Device”. Here we can see that our application’s memory footprint is 492.5 MB, out of an available 3.50 GB. We need to use our best judgment next as to whether we believe that is a sensible proportion of the device’s physical memory (RAM) to be using at the time of capture. Remember that a device’s physical memory is shared by all running processes.

You’ll notice that this visual indicator is showing you total resident memory. Total resident memory refers to how much of your application’s memory resides in the device’s physical memory hardware (RAM). This is the clearest indicator of how demanding your application’s current memory usage is on the target device, for two reasons. First, as your application’s total resident memory usage increases, so does the likelihood of incurring frequent page faults, where the operating system has to page virtual memory in and out of the device’s physical memory. Frequent page faults will cause significant performance degradation in your application. Second, many operating systems use your application’s resident memory usage to determine its current memory footprint.
If your application’s memory footprint gets too high, the operating system will evict your application and terminate it, causing a crash for your players. Therefore, you can use the Memory Usage On Device visual indicator in the Memory Profiler to infer if an application might be at risk of performance issues or being terminated by the operating system, due to an overuse of memory at the time of capture.

This contrasts with Allocated Memory, sometimes referred to as Committed Memory, which you might notice is displayed in various graphics below this indicator, and is currently the default option shown by all other views, such as Unity Objects. Allocated Memory refers to all memory that your application currently has allocated, regardless of whether it has been made resident in physical memory or not, and therefore it matches your application’s view of memory more closely. As such, this can be useful for exploring all of your application’s currently allocated memory, whilst resident memory usage is key to understanding the memory pressure your application is placing on the hardware at any moment in time.

The Memory Profiler’s Unity Objects tab provides you with an overview of your application’s memory from the perspective of Unity Objects; that’s your application’s textures, shaders, meshes, materials, and so on. This is a great place to begin exploration in the Memory Profiler because Unity Objects will be inherently familiar to so many Unity users, as it is what the majority of us work with directly in the Unity Editor. Not only does this provide a familiar entry point to understanding our application’s memory, but it can also help to diagnose and fix a range of potential issues by providing this Unity-specific context.

To see the Unity Objects view, simply select the Unity Objects tab at the top of the Memory Profiler after opening a memory capture, as shown above. You can see how the Unity Objects view quickly gives us an understanding of the distribution of Unity Object types in our application. This allows us both to gain a high-level understanding of what types were consuming the most memory at the time of capture and to reason about this, such as whether it is expected that a particular scene is heavy on AudioClip objects, for example. Expanding each type also enables us to view every Unity Object that is currently allocated, individually, as shown below.

It’s important to remember that Unity Objects make up a proportion of our application’s total allocated memory. You can see exactly how much in the indicator above the table, highlighted below. Here, you can see that our total allocated memory size, “Total Memory In Snapshot”, is 4.64 GB and that our Unity Objects account for 2.37 GB of that. Furthermore, if we filter the table – for example, by using the search feature – you’ll notice that this bar updates to reflect our search results. In other words, it displays the size of all the memory currently shown in the table. This helps you to maintain perspective of exactly how much memory you are inspecting as a proportion of the whole capture and can help to inform where to invest optimization efforts.

In version 1.0 of Memory Profiler, the Unity Objects table shows you Allocated Memory, or, put another way, it shows you all Unity Objects that are alive in your application.
We are exploring adding Resident Memory visibility to these views in an upcoming release, which would enable you to see exactly which of your Unity Objects are currently resident in physical memory, and therefore see exactly which are directly contributing to your application’s current memory footprint. You can use the All Of Memory tab to inspect the remainder of your application’s memory at the time of capture, which will include memory outside of Unity Objects, such as various Unity subsystems, managed-only (C#) memory, and DLLs and executables.

The Unity Objects view can help us to diagnose a range of potential issues. One such issue is detecting assets that have been badly configured, causing them to consume more memory than is necessary. In the capture below, you can see that a substantial portion of our Unity Objects are textures. The capture is from a project with high graphical fidelity that uses the High Definition Render Pipeline and makes heavy use of visual effects. So, with this context in mind, we expect to see heavy use of textures, which we do. However, upon expanding our second large category, Texture2D, we notice that two textures appear much bigger than the others. Using our understanding of our project, we are surprised that these textures are bigger than comparable textures, like HoloTable_Normal or HoloTable_Mask, as we expected them to be similar in size. So, we select one of these textures in the table to learn more details about it, and to begin investigating what might be the cause for this. Here, in the Details view, we find our explanation – our texture is writable, or “Read/Write Enabled.”

This is a common problem that we see across many user projects: accidentally making a texture writable when it’s not needed by checking the “Read/Write” setting on the texture’s import settings. When a texture has this flag enabled, it will double its size in memory. This is because a second copy of the texture data is required so that it can be accessed on the CPU. A tell-tale sign of this is that the Total Size of a texture is twice the size of what you expected, or twice the size of similar textures. After disabling the “Read/Write” flag on both of these textures and taking a second capture, we can see both of these textures have halved in size. We are exploring adding a column for graphics (GPU) memory to the Unity Objects table in a future release, to make it easier to locate cases where a Unity Object has allocated graphics memory, such as in this example.

A common mistake that we see in Unity projects is unintentionally creating duplicate Unity Objects. For example, it is very easy to accidentally create a duplicate Material by accessing a MeshRenderer’s material property. Not only does this add up quickly in this case – if, for example, it is done on every instance of a particular MeshRenderer – but, furthermore, these dynamically created materials must be explicitly destroyed. To help with locating this type of issue, the Unity Objects table provides a quick filter to show you potential duplicate Unity Objects only. This view will filter the table to show only Unity Objects that have multiple instances with both an identical name and size. It is important to note that many potential duplicates will be expected and not a cause for concern at all. For example, multiple instances of a prefab might have identically named and sized Transform components, and these would be expected duplicates.
We are only interested in discovering unintentional duplicates, as we will illustrate in the following example. The capture below was taken in a simple scene with two instances of a Door prefab, and we have enabled the Show Potential Duplicates Only filter located underneath the Unity Objects table. This has filtered the table to show us only Unity Objects that have multiple instances with the same name and size. Because we have two instances of a Door prefab in our scene, we also have, as expected, two instances of all the relevant objects: MeshRenderer, Transform, GameObject, and so on. However, we also have two instances of the “Door” Material in our capture above. These Door instances look the same in our scene, so it is expected that they would share a Material. This is, therefore, an unintentional duplicate, and in this particular example was caused by accessing the MeshRenderer’s material property in the prefab (see the sketch at the end of this entry). Removing this property access and taking a second capture shows the duplicate material is no longer present in the Unity Objects table.

It’s important to remember that this filter is simply showing you all Unity Objects that have multiple instances with the same name and size. It requires your knowledge of your project to interpret whether the potential duplicates you see are expected, or are, in fact, unintentional and cause for investigation. We recommend paying attention to the Total Memory In Table bar at the top, which gives you a visual indication of what proportion of your application’s allocated memory you are seeing in the table. This can help you to maintain perspective of where to invest your optimization efforts.

The Memory Profiler also provides functionality to compare two memory captures. This allows us to make changes to our project, for example to address an issue we might have found, and subsequently test if our changes have indeed had the desired outcome. It is important to always test that your hypothesis is correct and your changes have had the desired outcome on the actual hardware. Here, let’s explore an example of this comparison workflow. Below is a capture of our mobile game taken during the first level. We can see that the biggest category of Unity Objects is Texture2D. After opening this category to check what our biggest textures are, we can see there are a few UI textures that are quite large in relation to the rest of our game – megabytes each. This raises a suspicion for us: Why are these textures so much larger than the others, and do they need to be? To discover why, we can first locate the source texture asset in our project by selecting the texture in the Memory Profiler and using the “Select In Editor” button, which will highlight the source texture asset in our Project window. Using the Inspector window, we can see that all of our offending large UI textures are not being compressed due to their dimensions not being a power-of-two, as shown by the “NPOT” (non-power-of-two) text. This explains these large texture sizes. We can now use our knowledge of our project to reduce this memory usage. We know that three of these textures (the help controls) are always displayed together in the UI, as are the other three textures (the creatures).
Therefore, we can hypothesize with high confidence that creating two Sprite Atlases, one for each set of three textures, will reduce our allocated memory usage, because it will enable them to be compressed without increasing the number of textures in memory.

To compare two snapshots, begin by opening the first snapshot. This is the “base” against which we want to compare. Now, above the open snapshot, select the “Compare Snapshots” tab and choose the second snapshot. The Memory Profiler will now present a summary comparing the two snapshots, as shown below. To see the effect of our change and verify that it did, in fact, reduce the size of our application’s allocated memory for the Texture2D category, we can select the Unity Objects tab. Here, we are presented with a comparison table that shows the Unity Object types that have changed, as well as how they have changed between the captures (shown below). We can see our Texture2D type as a whole has reduced in size by 3.6 MB and has four fewer textures than before. Expanding this category, we can see the removal of our individual, uncompressed Sprite textures, and the addition of our two Sprite Atlas textures, resulting in a net reduction of 3.6 MB and four Texture2D objects. So this was a success – we have confirmed that our hypothesis was correct using the comparison functionality, and we have reduced the size of these textures in allocated memory.

From reading this blog, you should now have a better understanding of five key workflows in the new Memory Profiler package. These workflows are designed for diagnosing and examining memory-related issues in your game. We hope the Memory Profiler package released in Unity 2022.2 helps you to better monitor, examine, and understand your game’s memory footprint. Please feel free to reach out to the team to share your feedback on how we can improve performance profiling tools via our forum page, or share your suggestions through our roadmap page, where you can also see some of the features that are being worked on. If you’re interested in more details on this topic, we will be publishing another blog in the coming weeks that will dive deeper into how an application’s memory footprint is calculated, covering topics such as resident and allocated memory in more detail.
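To make the duplicate Material pitfall from this entry concrete, here is a minimal sketch of both the mistake and the fix. The MeshRenderer.material and sharedMaterial behavior is standard Unity scripting API; the DoorSetup class name, the color tweak, and the cleanup pattern are hypothetical details added for illustration.

```csharp
using UnityEngine;

// Sketch of the duplicate-Material pitfall described above. Reading
// MeshRenderer.material clones the shared Material for this renderer the
// first time it is accessed; sharedMaterial does not.
[RequireComponent(typeof(MeshRenderer))]
public class DoorSetup : MonoBehaviour
{
    Material m_InstanceMaterial;

    void Start()
    {
        var meshRenderer = GetComponent<MeshRenderer>();

        // Creates a per-renderer copy (typically named "Door (Instance)").
        // Done on every Door in the scene, this is exactly the kind of
        // identically named, identically sized pair that the Show Potential
        // Duplicates Only filter surfaces.
        m_InstanceMaterial = meshRenderer.material;
        m_InstanceMaterial.color = Color.red;

        // If every Door should look identical, edit the shared asset instead
        // and no duplicate is created:
        // meshRenderer.sharedMaterial.color = Color.red;
    }

    void OnDestroy()
    {
        // Materials created via .material are not destroyed with the
        // GameObject; destroy them explicitly or they linger until
        // Resources.UnloadUnusedAssets or a scene unload cleans them up.
        if (m_InstanceMaterial != null)
            Destroy(m_InstanceMaterial);
    }
}
```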

>access_file_
1034|blog.unity.com

How to run effective UA campaigns for your subscription app by measuring long term goals

According to Databox, around 80% of marketers prefer looking at short term goals, like eCPI, because it’s easy to measure, all the competitors are doing it, and it saves money. However, as a subscription app, planning for the long term is critical, especially for campaigns on ad networks. Let’s say you pay $0.60 for an install, a relatively low cost in the US. If the user who installed your app churns on day 1 - 25% of users will likely do so according to Business of Apps - that $0.60 is a sunk cost. Elina Dakhis, Senior Strategic Partnership Manager at ironSource, with a focus on Apps Beyond Games, shares her insights on why LTV is important for your subscription-based app and tips to master it.

Why you shouldn’t spend all of your resources optimizing towards short term goals
Before diving into how to measure success in the long term, let’s first dive into why you shouldn’t devote all of your attention to short term goals and achieving low CPIs.

Higher bids bring in revenue-generating users
It’s possible to have great install rates but flat revenue - there isn’t always a clear correlation. Often, the problem lies in optimizing towards driving cheap installs, or low quality installs that drive very little value. Meanwhile, more expensive installs lead to users that will spend time in your app, engaging with your premium content, generating more revenue, and eventually converting to subscribers.

Diversity of bids means diversity of users
Location, device platform, and network all have an impact on the cost per install. For example, CPI differs by country depending on how big the audience is, how much they spend inside apps, etc. To reach a diverse set of users across different geos, devices, and networks, it’s important to remain open to a range of costs. Just because a bid is low doesn’t mean those users aren’t valuable, and vice versa. So, you shouldn’t be zeroing in on achieving low CPIs - users acquired at high CPIs can actually be quite valuable. That said, to determine what’s best for your strategy, it’s crucial to look at long term goals. We suggest calculating your LTV.

Longer term goals help you determine user acquisition costs
Blindly paying for low CPIs without looking at long term metrics, such as LTV, means you could be missing out on an opportunity to spend more to acquire high-quality users and increase profit. If you know how your users behave in your app in the long term, you can predict how much revenue you’ll generate from your users, and you can make more calculated decisions for your UA budget.

How to build the LTV model for campaigns on networks
To get a clearer picture of the effectiveness of your campaigns, it’s important to look at user behavior after they install the app and into the long term. Note that you should build dedicated LTV models for the different channels you’re running with - social, ad network, etc. Here’s how to measure LTV for your ad network campaigns, taking into account multiple revenue streams:

1. Plot the ARPU curve taking into account all revenue generators
ARPU, or the average revenue per user, is determined by calculating the accumulated revenue generated by a segment of users on a specific day after install. To determine ARPU, first sum all of the revenue generators - the amount subscribers pay, the revenue from in-app purchases, and the revenue from ads. Then, divide that by the number of installs. For example, if a segment of 1,000 users generates $6,000 over 6 months, the Month 6 ARPU would be $6.
If those 1,000 users generate $12,000 over 12 months, the Month 12 ARPU is $12. When building the ARPU curve for subscription apps, it’s important to take into account all of your revenue generators - subscriptions, in-app purchases, and ads. For some apps, you can stop at choosing a relevant ARPU goal, 12 months for example, to determine the value of your users. For most, however, you’ll need to construct an LTV model from the right trendline.

2. Choose the right trendline for each revenue generator to build your LTV model
Place a trendline over the average revenue per user (ARPU) curve to build the LTV model. Doing so automatically fills in the revenue predictions from the last day of calculated data to the end of the users’ lifetime in your app. When building the LTV curve for a hybrid model with subscriptions, ads, and/or in-app purchases, keep the behavior of these components in mind. A logarithmic trendline usually works better for the LTV curve of apps that don’t monetize with subscriptions, while we’ve found that a power curve fits the ARPU curve most accurately for subscription apps. This is because subscription apps tend to offer some kind of utility that stands the test of time. Once you’ve built the ARPU curve for each revenue stream, stack them on top of each other to get a more accurate prediction (see the sketch at the end of this entry for one way to fit a power trendline). Below is a more detailed example.

The graph above is the LTV model for the first 180 days of a Social Utility App - their monetization model is based on subscriptions and ads. As you can see, we plotted the ARPU curves (solid lines) based on data we already had for subscriptions and ads separately. From there, we placed power curves (dotted lines) to predict the future revenue - keep in mind that the end of the LTV curve does not indicate a user’s last day in the app. Based on the graph, we can assume that the LTV for the average user will be $0.80 for weekly subscribers, $0.25 for monthly subscribers, and $0.15 for ads. Now’s the time to start measuring the granular metrics to optimize the precision of your LTV model. There’s more to creating a winning LTV model than just choosing the right trendline.

3. Enrich the model with more data
There’s a lot of uncertainty behind building an accurate revenue prediction, and it’s important to be comfortable with this. Typically, apps have many more non-subscribers than subscribers, and subscription rates are constantly changing. IAPs offer a glimpse into the level of user engagement, but often don’t paint the whole picture of how users behave in your app. It’s important to look at other engagement events - beyond just how much a user is paying each week, month, or year, or their engagement with IAPs and ads - when building the LTV model. In fact, you should be tracking as many metrics as possible, as early as possible. You can include any type of in-app engagement, such as opening the app a certain number of times, editing a few photos, etc. This granular understanding of your app’s overall performance will help you determine exactly where you stand, allowing you to streamline your strategy towards investing in the right users. If you start including other metrics into your LTV model and you see different behaviors for different user groups, you should consider building different models to reflect different revenue streams - subscription, IAP, ads - rather than combining them into one.
4. Build a different model for each subscription time frame
Many apps offer weekly, monthly, and annual subscriptions, and these users are going to behave differently and bring in revenue at different rates - it’s not one size fits all. Rather than converting annual subscriptions to the monthly equivalent, it’s best to build an LTV model for weekly vs. monthly vs. yearly subscriptions. From there, if you’re including an engagement metric outside of revenue, you can apply a different rate to each model (since, for example, churn will be different for monthly users compared to weekly users). This way you’ll improve the accuracy of your LTV model and have a better idea of how specific users are interacting with your app according to different subscription models.

What now?
Once your LTV model is ready, the next step is adjusting your KPIs based on the information to ensure you’re making the best decisions for your UA strategy. Choose a reasonable margin you’d like to maintain and determine the shortest KPI possible where you can still accurately predict long-term user behavior in your app. Often, it’s the average time it takes a user to subscribe. Your work doesn’t end here - continue to adjust the data so the LTV model remains as updated as possible and takes into account fluctuations in user behavior, such as during holiday seasons, unexpected pandemics, political unrest, etc. Measuring short term goals is important, but long term goals are just as, if not more, important to calculating overall success and the effectiveness of your campaigns. Start measuring your LTV model using the above steps, and be sure to take into account multiple revenue streams.
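To make the ARPU arithmetic and the power trendline concrete, here is a small C# sketch for a hypothetical cohort of 1,000 installs. It computes day-N ARPU as cumulative revenue divided by installs, then fits LTV(t) = a * t^b by least squares in log-log space - a standard way to fit a power curve, not necessarily the exact method used here. All numbers and names are illustrative.

```csharp
using System;
using System.Linq;

// Sketch: fit a power-curve LTV model, LTV(t) = a * t^b, to observed ARPU.
// A power curve is linear in log-log space: ln(y) = ln(a) + b * ln(t),
// so ordinary least squares on (ln t, ln y) recovers a and b.
static class LtvSketch
{
    // ARPU on day t = cumulative cohort revenue / cohort installs.
    static double Arpu(double cumulativeRevenue, int installs) =>
        cumulativeRevenue / installs;

    static (double a, double b) FitPowerCurve(double[] days, double[] arpu)
    {
        double[] x = days.Select(d => Math.Log(d)).ToArray();
        double[] y = arpu.Select(v => Math.Log(v)).ToArray();
        double mx = x.Average(), my = y.Average();
        double b = x.Zip(y, (xi, yi) => (xi - mx) * (yi - my)).Sum()
                 / x.Select(xi => (xi - mx) * (xi - mx)).Sum();
        double a = Math.Exp(my - b * mx);
        return (a, b);
    }

    static void Main()
    {
        // Hypothetical cohort: 1,000 installs; cumulative revenue by day,
        // summed across subscriptions, IAPs, and ads.
        double[] days    = { 7, 30, 60, 90, 180 };
        double[] revenue = { 900, 2400, 3900, 5100, 7400 };
        double[] arpu = revenue.Select(r => Arpu(r, 1000)).ToArray();

        var (a, b) = FitPowerCurve(days, arpu);

        // Extrapolate the trendline to predict day-365 ARPU (an LTV proxy).
        double day365 = a * Math.Pow(365, b);
        Console.WriteLine($"LTV(t) ~= {a:F3} * t^{b:F3}; day-365 ~= ${day365:F2}");
    }
}
```

In practice you would fit one such curve per revenue stream (subscriptions, IAPs, ads) and stack the predictions, as the entry above recommends.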

>access_file_
1036|blog.unity.com

Improving job system performance scaling in 2022.2 – part 1: Background and API

In 2022.2 and 2021.3.14f1, we’ve improved the scheduling cost and performance scaling of the Unity job system. In this two-part article, I’ll offer a brief recap of parallel programming and job systems, discuss job system overhead, and share Unity’s approach to mitigating it. In part one, we cover background information on parallel programming and the job system API. If you’re already familiar with parallelism, feel free to skim and skip to part two.

In the 2017.3 release, a public C# API was added for the internal C++ Unity job system, allowing users to write small functions called “jobs” which are executed asynchronously. The intention behind using jobs instead of plain old functions is to provide an API that makes it easy, safe, and efficient to allow code that would otherwise run on the main thread to instead run on job “worker” threads, ideally in parallel. This helps to reduce the overall amount of wall time the main thread needs to complete a game’s simulation. Using the job system for your CPU work can provide significant performance improvements and allow your game’s performance to scale naturally as the hardware your game runs on improves.

If you think of computation as a finite resource, a single CPU core can only do so much computational “work” in a given period of time. For example, if a single-threaded game needs its simulation Update() to take no more than 16ms, but it currently takes 24ms, then the CPU has too much work to do – more time is needed. In order to hit a target of 16ms, there are only two options: make the CPU go faster (e.g., raise the minimum specs for your game – normally not a great option), or do less work. Ultimately, you need to eliminate 8ms of computational work. That typically means improving algorithms, spreading subsystem work across multiple frames, removing redundant work that can accumulate during development, etc. If this still doesn’t get you to your performance target, you may need to reduce game simulation complexity by cutting content and gameplay, for example, by reducing the number of enemies allowed to be spawned at once – which is certainly not ideal.

What if, instead of eliminating work, we give the work to another CPU core to run on? Nowadays, most CPUs are multi-core, which means the available single-threaded computational power can be multiplied by the number of cores the CPU has. If we could magically and safely divide all the work currently in the Update() function between two CPU cores, the 24ms Update() work could be run in two simultaneous 12ms chunks. This would get us well below the target of 16ms. Further, if we could divide the work into four parallel chunks and run them on four cores, then the Update() would take only 6ms! This type of work division and running on all available cores is known as performance scaling. If you add more cores, you can ideally run more work in parallel, reducing the wall time of the Update() without code changes.

Alas, this is fantasy. Nothing is going to divide the Update() function into pieces and run them on separate cores without some help. Even if we switched to a CPU with 128 cores, the 24ms Update() above will still take 24ms, provided both CPUs have the same clock rate. What a waste of potential! How, then, can we write applications to take advantage of all available CPU cores and increase parallelism? One approach is multithreading. That is, your program creates threads to run a function which the operating system will schedule to run for you.
If your CPU has multiple cores, then multiple threads can run at the same time, each on their own core. If there are more threads than available cores, the operating system is responsible for determining which thread gets to run on a core – and for how long – before it switches to another thread, a process called context switching.

Multithreaded programming comes with a bunch of complications, however. In the magical scenario above, the Update() function was evenly divided into four partial updates. But in reality, you likely wouldn’t be able to do something so simple. Since the threads will run simultaneously, you need to be careful when they read and write to the same data at the same time, in order to keep them from corrupting each other’s calculations. This usually involves using locking synchronization primitives, like a mutex or semaphore, to control access to shared state between threads. These primitives usually limit how much parallelism specific sections of code can have (usually opting for none at all) by “locking” other threads, preventing them from running the section until the lock holder is done and “unlocks” the section for any waiting threads. This reduces how much performance you get by using multiple threads, since you aren’t running in parallel all the time, but it does ensure programs remain correct.

It also likely doesn’t make sense to run some parts of your update in parallel due to data dependencies. For example, almost all games need to read input from a controller, store that input in an input buffer, and then read the input buffer and react based on the values. It wouldn’t make sense to have code reading the input buffer to decide if a character should jump executing at the same time as the code writing to the input buffer for that frame’s update. Even if you used a mutex to make sure reading and writing to m_InputBuffer was safe, you always want m_InputBuffer to be written to first and then the m_InputBuffer reading code to run second, so you know whether the jump button was pressed for the current frame (and not one in the past). Such data dependencies are common and normal, but will decrease the amount of parallelism possible.

There are many approaches to writing a multithreaded program. You can use platform-specific APIs for creating and managing threads directly, or use various APIs that provide an abstraction to help manage some of the complications of multithreaded programming. A job system is one such abstraction. It provides the means to break up parts of your single-threaded code into logical blocks, isolate what data is needed by that code, control who accesses that data simultaneously, and run as many blocks of code in parallel as possible to try and utilize all computational power available on the CPU as needed. Today, we cannot divide arbitrary functions into pieces automatically, so Unity provides a job API that enables users to convert functions into small logical blocks. From there, the job system takes care of making those pieces run in parallel.

The job system is made up of a few core components:
Jobs
Job handles
The job scheduler
As mentioned before, a job is just a function and some data, but this encapsulation is useful, as it reduces the scope of which specific data the job will read from or write to. Once a job instance is created, it needs to be scheduled with the job system. This is done with the .Schedule() method added to all job types via C#’s extension mechanism (a sketch of the full pattern follows below).
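As a concrete sketch of the pieces described here - a job struct, the .Schedule() extension, and the JobHandle-based ordering that the next paragraphs cover - consider the input-buffer example expressed as two jobs. IJob, Schedule(), and JobHandle are the real C# Job System APIs; the job and field names are our own illustration.

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// Sketch of the input-buffer ordering described above, expressed as two
// jobs. The job and field names are illustrative, not from the article.
struct WriteInputJob : IJob
{
    public NativeArray<float> InputBuffer; // this frame's input values

    public void Execute()
    {
        for (int i = 0; i < InputBuffer.Length; i++)
            InputBuffer[i] = 1.0f; // pretend we sampled the controller here
    }
}

struct ReadInputJob : IJob
{
    [ReadOnly] public NativeArray<float> InputBuffer;
    public NativeArray<int> ShouldJump;

    public void Execute()
    {
        // Safe to read: the scheduler guarantees WriteInputJob ran first.
        ShouldJump[0] = InputBuffer[0] > 0.5f ? 1 : 0;
    }
}

public class InputJobExample : MonoBehaviour
{
    void Update()
    {
        var input = new NativeArray<float>(8, Allocator.TempJob);
        var jump = new NativeArray<int>(1, Allocator.TempJob);

        // Schedule the writer, then pass its JobHandle as a dependency so the
        // reader never observes a half-written buffer - no mutex required.
        JobHandle write = new WriteInputJob { InputBuffer = input }.Schedule();
        JobHandle read = new ReadInputJob { InputBuffer = input, ShouldJump = jump }
            .Schedule(write);

        // In real code you would schedule early and complete as late in the
        // frame as possible; completing immediately is just for the demo.
        read.Complete();

        Debug.Log($"Jump pressed this frame: {jump[0] == 1}");
        input.Dispose();
        jump.Dispose();
    }
}
```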
To identify and keep track of the scheduled job, a JobHandle is provided. Since job handles identify scheduled jobs, they can be used to set up job dependencies. Job dependencies guarantee that a scheduled job won’t start executing until its dependencies have completed. As a direct result, they also tell us when different jobs are allowed to run in parallel by creating a directed acyclic job graph.

Finally, as jobs are scheduled, the job scheduler is responsible for keeping track of scheduled jobs (mapping JobHandles to the job instances scheduled) and ensuring jobs start running as quickly as possible. How this is done is important, as the design and usage patterns of the job system can potentially conflict in non-obvious ways, leading to overhead costs that eat into the performance gains of multithreaded programming. As users started adopting the C# job system, we began to see scenarios where job system overhead was higher than we’d like, which led to the improvements to Unity’s internal job system implementation in the 2022.2 Tech Stream.

Stay tuned for part two, which will explore where overhead in the C# job system comes from and how it has been reduced in Unity 2022.2. If you have questions or want to learn more, visit us in the C# Job System forum. You can also connect with me directly through the Unity Discord at username @Antifreeze#2763. Be sure to watch for new technical blogs from other Unity developers as part of the ongoing Tech from the Trenches series.

>access_file_
1037|blog.unity.com

The next generation of VR gaming on PS5

Sony Interactive Entertainment’s next-gen VR headset, the PlayStation® VR2 (PS VR2), launches today, and we’re excited to share the latest tools you can use to build for this innovative platform. In this post, we’ll cover two aspects of developing for PS VR2: graphics and inputs.

PS VR2 leverages PS5’s next-gen computing and graphics power to help you create stunning, high-performing VR games. You can target a 4K-equivalent resolution, running titles at 60Hz, 90Hz, or 120Hz, which should be achievable on the PS5 using some of the techniques discussed below.

First, let’s start off with render pipelines. We recommend the Universal Render Pipeline (URP) for most VR developers because URP will be our first render pipeline to support some of PS VR2’s unique features, such as foveated rendering and gaze tracking. You can also use the Built-in Render Pipeline or the High Definition Render Pipeline (HDRP), but be aware that some features, like foveated rendering, are only available on URP for the time being. Overall, URP is a great match for VR games. It’s flexible, straightforward to use, and customizable. It also works well if you’re building for multiple platforms, including all-in-one VR devices. You can also make your own custom render pipeline using the Scriptable Render Pipeline (SRP) framework, and we provide an extensive C# API that allows you to implement any renderer your games require.

The PS5 also provides advances on the GPU side through its new NGGC Graphics API. With NGGC, we’ve taken advantage of the optimization technologies available on PS5 in ways that were previously unavailable. We’ve also rearchitected our rendering backend to allow efficient utilization across multiple cores while improving GPU state transitions. This adds up to more efficient rendering in terms of CPU, GPU, and memory usage while offering the same visual results, without any reauthoring of assets or game code.

PS VR2 will be able to use a technique known as foveated rendering. This technique helps you make VR games with better visual fidelity by decreasing the GPU rendering workload required for a given scene. Foveated rendering is used to improve GPU performance by reducing image quality in peripheral vision. PS VR2’s hardware can go a step further by using eye tracking to optimize GPU rendering. By projecting the eye-gaze information into screen space, you’re able to render at high quality in the precise screen area where a player is looking. The intent of foveated rendering and eye tracking is to keep the image quality high in those parts of the image deemed important, while smoothly fading to lower resolutions in areas of the image considered to be less important. This means that you can reduce the size of some render targets while still keeping quality where you want it. We’ve removed all of the complexity of setting this up in Unity, while allowing you enough control over the degree of foveation to balance image quality versus GPU performance for your specific requirements.

With this capability, foveated rendering on PS VR2 can be up to 2.5x faster, without any perceptual loss compared to equivalent image quality through standard stereo rendering. We’ve also seen gains up to 3.6x faster when foveated rendering is combined with eye tracking.
(Note that these tests represent ideal increases in performance, tested on a Unity demo, and numbers will vary based on your game.) Foveated rendering on PS VR2 can bring a massive reduction in GPU usage while producing the same perceptual quality – and, combined with eye tracking technology, the performance gains are even better.

Outside of graphics performance, eye tracking also unlocks a new input method. You can use eye tracking to allow users to select items from menus, start interactions with NPCs, use in-world tools, and more. Eye tracking could even be a focal point of the gameplay mechanics. Leveraging eye tracking works in much the same way as other XR input devices. You have access to the components for eye gaze, which is a combination of position and rotation across both eyes that defines a place in the virtual world. You can use this to tell where the user is currently looking. In addition to basic pose information, you will also have access to pupil diameter and blinking states for both of the player’s eyes. Combining these with the pose, you can start to form your own ideas around gameplay and interaction to more deeply engage with players.

Consider a simple gaze-based reticle (a sketch appears at the end of this entry): a gaze tracker object drives the movement of the reticle, using the same TrackedPoseDriver that we might use to track any other XR controller in Unity. This one just happens to be tied to the eyeGazePosition and eyeGazeRotation set up in our Input System Action Map. There is also a TrackedPoseProvider specifically made to handle eye tracking if you are planning to use the more traditional Unity input methods.

The new Sense Controllers are available for Unity developers and include interesting unique features only found on PS VR2. Finger touch detection uses capacitive touch to detect when a player’s fingers are resting on the buttons without actually pressing them. These controls are available on all the primary buttons and thumbsticks, so you can use them to drive more natural gestures with players’ hands during gameplay. You could also, in a more basic approach, drive a hand model to enable players to “see” where their fingers are when they look at the controller. This can really help players stay focused and immersed in an experience without having to lift the headset up or feel around for a specific button.

PS VR2 uses inside-out tracking technology for the new system, giving you six-degrees-of-freedom tracking for both the headset and controllers. You can now use most, if not all, of the standard Unity XR stack, making it easier to develop your games for broader platform reach. To set up the controllers themselves, we have exposed these input controls through both the traditional Unity Input Manager and the newer Input System package.

In addition to eye tracking and controller input, PS VR2’s SDK also allows full control over PS VR2 Sense technology haptics. This includes audio-based haptic feedback to provide a deeper experience for players, as well as more traditional vibration support. The new controllers also include the same Adaptive Triggers available with PS5 DualSense controllers, meaning you can program the triggers with different styles of feedback based on game context. In addition to controller-based haptics, PS VR2 has added headset feedback, allowing you to give players adjustable vibration in the headset.
This could be used to alert players of an event, or combined with audio to add more realistic sensation to experiences.

We have worked hard to give you flexibility when it comes to integrating PS VR2 input and haptics into your games. With a combination of tracking improvements and a standard Unity XR SDK for PS VR2, you can leverage the full Unity XR stack, including things like the XR Interaction Toolkit, other XR SDK-dependent assets, Unity Asset Store packages, and other packages available through the Unity Package Manager. These features allow you to explore new forms of gameplay and worldbuilding.

PS VR2 is available today, and you can build for it with Unity 2021 LTS and later. Foveated rendering requires Unity 2022.2 and later. We’re excited to see what this headset means for the VR industry as it unlocks a new level of performance and input controls for you to build even more immersive and exciting experiences. You’ll need an active Unity Pro subscription (or a Preferred Platform license key provided by the respective platform holder) to access these specific build modules via developer platform forums. Register here to become a PlayStation developer.

Share your PS VR2 and PS5 games with us using the #MadewithUnity hashtag on social media. Have questions? Registered PlayStation developers can connect with us in the Unity forum on the PlayStation developer site.
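Here is a minimal reconstruction of the gaze-based reticle idea mentioned above. It reads the eyeGazePosition and eyeGazeRotation actions named in this entry directly via the Input System (the entry drives its reticle with a TrackedPoseDriver instead); the raycast, reticle transform, and distance parameter are illustrative assumptions.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch of a gaze-based reticle: read the eye-gaze pose from Input System
// actions (bound to eyeGazePosition/eyeGazeRotation in the Action Map) and
// project a reticle onto whatever the player is looking at.
public class GazeReticle : MonoBehaviour
{
    [SerializeField] InputActionProperty gazePosition; // Vector3 control
    [SerializeField] InputActionProperty gazeRotation; // Quaternion control
    [SerializeField] Transform reticle;                // reticle quad/sprite
    [SerializeField] float maxGazeDistance = 10f;      // illustrative default

    void OnEnable()
    {
        gazePosition.action.Enable();
        gazeRotation.action.Enable();
    }

    void Update()
    {
        Vector3 origin = gazePosition.action.ReadValue<Vector3>();
        Quaternion rotation = gazeRotation.action.ReadValue<Quaternion>();
        Vector3 direction = rotation * Vector3.forward;

        // Place the reticle on the first surface the gaze ray hits,
        // oriented along the surface normal; otherwise park it at max range.
        if (Physics.Raycast(origin, direction, out RaycastHit hit, maxGazeDistance))
        {
            reticle.position = hit.point;
            reticle.rotation = Quaternion.LookRotation(hit.normal);
        }
        else
        {
            reticle.position = origin + direction * maxGazeDistance;
            reticle.rotation = Quaternion.LookRotation(-direction);
        }
    }
}
```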

>access_file_
1038|blog.unity.com

How to overcome the in-app purchase revenue gap

Now that players’ budgets are tightening, it’s becoming more important for midcore and hardcore games to diversify their monetization strategy beyond in-app purchases. According to data.ai, consumer mobile gaming spend is on a downward trend - it dropped 5% in 2022, and will drop another 3% in 2023 to $107 billion. How do developers fill that gap in monetization? They can do it in a way that won’t require players to spend a penny: carefully placed ads. In fact, based on Unity LevelPlay data, ad revenue for in-app purchase based genres (RPG, casino, puzzle, simulation, lucky rewards) has been steadily on the rise since 2020. As consumer spending continues to decelerate, we’ll likely see a major uptick in 2023. As you explore an ad-based monetization model for your midcore or hardcore game, it’s worth determining which ad units get you the most bang for your buck. Based on our data, the answer is clear: rewarded videos and offerwalls. Here are four best practices to get started.

Make your offerwall stand out
It’s likely easiest to start with the ad unit most similar to the in-app purchase model: the offerwall. Like in-app purchases, this mini-store offers valuable hard currency to users. But unlike in-app purchases, no cash is required - just the user’s time. The more accessible you make the offerwall to users, the bigger your revenue potential - so make sure your traffic drivers get users’ attention by putting them in the right spots. Here are the most effective traffic driver locations that maximize your offerwall engagement:
Home screen: The best traffic drivers are the most visible ones - so your game’s home screen is an ideal spot to promote your offerwall, and increase impressions as a result.
Store: The store is filled with users already looking to increase their hard currency, making it a great place to tell users they can get this currency for free.
Breaks in gameplay: Placing traffic drivers during natural breaks in the gameplay (out of currency, end of level) can improve the user experience by giving players currency when they need it most.

Start small with new placements
The other critical ad unit to include in your ad monetization strategy is rewarded video - it’s completely optional, and offers players an enticing reward for a short amount of their time. Rewarded videos work well with every type of game - including traditionally in-app purchase-based games - because their soft currency doesn’t interfere with paid hard currency. And the best part is, because offerwalls and rewarded videos work with different currencies, you can implement them both at the same time. During implementation, we’ve seen one common mistake: adding too many ads right away. If you build a complex placement system and add it to a game, it will be extremely hard to analyze its performance. This is especially true in in-app purchase-based games - since the deeper a game economy is, the harder it will be to attribute certain successes to certain placements. That’s why we recommend starting with one or two placements. As you set your ad placement strategy, make sure your ads are checking all the boxes:
Optimizing exposure (e.g., placed in busy areas like the main screen)
Helping players reach their goals (e.g., offering extra soft currency to upgrade a weapon)
Once you see your placement is performing well, you can double down on your strategy and start adding more rewarded videos.

Balance your game economy
As you’re adding new soft currency placements to your game, like rewarded videos, there’s a chance they can impact your in-game economy - so you need to adjust it accordingly. Let’s say you’ve added a new rewarded video placement - even though it’s optional for users, you should always assume that every player is watching it, so you can balance your economy accordingly. The chart below, for example, shows how you can incorporate ads into your economy without impacting the total amount a player receives per level (a small sketch of this split appears at the end of this entry). For each amount of currency your ad is offering, you can simply reduce that amount from your end-level winnings - keeping everything balanced. For example, in Level 4, users can win 20 coins through in-app purchases. If you incorporate ads, you’ll need to split that 20 into 10 for in-app purchases and 10 for ads. It’s important to remember that your new placements should only offer soft currency, not hard currency. This way, you can avoid cannibalizing your in-app purchases and the hard currency they offer.

A/B test your placements as you go
Once you’ve added a new placement (rewarded video and/or offerwall) and adjusted your economy accordingly, it’s time to test how your new ad system is working and make adjustments as needed. Let’s say you added a new offerwall traffic driver and rewarded video placement to the home screen. Now you optimize. How’s your engagement rate? For the rewarded video placement, is the amount of currency affecting people’s interest in making in-app purchases? By understanding how players are engaging with your ads, you can adjust accordingly - and eventually apply your insights to any new placements you add. Once you’ve found a baseline placement (rewarded video and/or offerwall) that meets your KPI goals and you’re ready to A/B test, start with the basics:
The ad’s location in the game (e.g., home screen vs. store)
The placement’s design and messaging
And, for rewarded video placements, go even deeper by testing:
Currency type and amount
Capping and pacing of the ad’s frequency
To accurately measure A/B test results, make sure to only perform one test at a time on any given placement. Your best measuring tools are retention and ad revenue per user (ARPU). If you’re making changes that boost ARPU while keeping retention high, you’re on the right path.

Ultimately, as the industry adapts and changes, developers - particularly those who make traditionally IAP-based games - can adapt accordingly. Offerwalls and rewarded videos are key for increasing monetization, and with the right implementation strategy and A/B tests to optimize performance, there’s only room to improve.
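The reward-splitting arithmetic from the Level 4 example is easy to encode. Below is a small sketch that keeps a level’s total payout constant while carving a share out for a rewarded ad, under this entry’s assumption that every player watches the ad; the class name, method names, and the 50/50 split are illustrative.

```csharp
using System;

// Sketch: keep the total soft currency awarded per level constant while
// moving a share of it behind a rewarded ad. Assumes (as recommended above)
// that every player is treated as if they watch the ad.
static class EconomyBalance
{
    // Returns (endOfLevelReward, rewardedAdReward) for a level's payout.
    static (int endOfLevel, int rewardedAd) SplitReward(int totalPerLevel,
                                                        double adShare)
    {
        if (adShare < 0 || adShare > 1)
            throw new ArgumentOutOfRangeException(nameof(adShare));

        int adReward = (int)Math.Round(totalPerLevel * adShare);
        return (totalPerLevel - adReward, adReward);
    }

    static void Main()
    {
        // The Level 4 example: 20 coins total, half moved to the ad.
        var (endOfLevel, ad) = SplitReward(totalPerLevel: 20, adShare: 0.5);
        Console.WriteLine($"End-of-level: {endOfLevel} coins, rewarded ad: {ad} coins");
        // Prints 10 and 10 - the total a player can earn per level is unchanged.
    }
}
```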

>access_file_
1039|blog.unity.com

Inclusion starts here: Why a year spent supporting representation was the best of my career (so far)

In 2021, Unity funded a multi-year, $300,000 grant partnership with Spelman College to build a gaming degree program and support Spelman’s Innovation Lab. Then, in 2022, with support from Unity, I became the first Expert in Residence (EIR) in Atlanta to launch a pilot program aimed at underrepresented creators. The goal of the newly established program is to continue to build on Unity’s partnerships with Historically Black Colleges and Universities (HBCUs) and Minority Serving Institutions (MSIs).

During my residency, I formed an internal council of stakeholders across the company who invested time in helping to create new opportunities for participants. I also had the chance to meet with faculty from various HBCU and MSI departments, demonstrate tools, provide guidance and resources to students, and give workshops on topics ranging from real-time 3D (RT3D) use cases for research to career paths for students. Through my work with HBCUs, I continue to be blown away by student and faculty dedication to making the world better and creating more opportunities and pathways to careers that will directly impact how diversity is approached globally. It is one of the biggest highlights of my career to be able to create a program that expands inclusion in gaming and immersive experiences. As I reflect on the past year – especially during Black History Month – here are just a few things I’ve learned that were critical to the successful execution of this pilot program.

This work requires investment from leaders within the community, as well as from the industry, so that the two can come together at one table to meet the needs of students and faculty. During the program, I built relationships and established partnerships with nonprofit organizations, government officials, and entrepreneurs, all of whom were dedicated to using emerging technology for economic development. This was helpful in developing success metrics focused on career pathways for students. As a result of this work, I was honored as an inaugural Champion of the Metaverse by the US Black Engineer (USBE) Information Technology magazine. The Black Engineer of the Year Awards honor leaders in various engineering disciplines for their technical contributions and advocacy for STEM education in the community.

Only 2% of game professionals are Black, and it was critical to understand the systemic issues that contribute to the underrepresentation of people of color in the games industry. Part of that research included looking at the history of STEM education and where opportunities to break into the industry lie. Understanding game engine technology and its use cases is also essential to the future of work, particularly in entertainment. Sharing that message with the technology community at large is important, and the AFROTECH Conference was a perfect place to continue highlighting the intersection of real-time technology and cultural creativity. It was exciting to have Black technologists from around the world attend our Real-Time 3D is the Future panel at AFROTECH. During the panel, we discussed the power of computer graphics and creative tools like SpeedTree and Ziva. We also talked about the significance of technology history by acknowledging the contributions of Black scientists and inventors in the video game industry who have helped make technologies like game engines possible.
Being able to connect the dots between STEM education – particularly in art and engineering – and the software that we develop helps demystify the myriad ways you can build a career in the games industry.

Creating a strategy for work like this has to include multi-threaded efforts across teams internally at Unity, including Education, Employee Resource Groups (ERGs), Social Impact, Inclusion and Diversity, Marketing, Recruiting, and Product Advocacy. Through building this village, we can establish programs that leverage Unity products for future creator success and open the door to new opportunities for our creative communities around the world.

I can’t stress enough how important it is to check in often with both your industry and community collaborators. There is no one-size-fits-all approach to this kind of work. By listening to your users and actively sharing their feedback, you can create a custom program that meets the needs of students and faculty while helping shape inclusive product design and software development.

Although the pilot program is complete, many colleagues within Unity and the broader games industry have reached out to learn how to keep the momentum going. We hope that this EIR model can be replicated with future initiatives; however, whether you’re a Unity employee or not, you can keep up with everything Spelman is doing with our grant by following the Spelman Innovation Lab online. An exciting event coming up soon is Spelman’s HBCU Game Jam, which is being held in Atlanta from February 24–26. If you’re in the Atlanta area and would like to volunteer in person for the Spelman College event this month, email us to learn about opportunities. To keep up with Unity’s own inclusion and diversity efforts, visit our site.

>access_file_