// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1690 transmissions indexed — page 67 of 85

[ 2021 ]

20 entries
1321|blog.unity.com

Join Unity Pulse, the community at the heart of building a better Unity

Are you interested in providing feedback directly to Unity teams? Sign up to become a member of Unity Pulse, our new product feedback and research community. We created it because we believe your experiences and insight into product features are vital to Unity's evolution.

Unity Pulse is a new online feedback community created by the market research team at Unity, where you may have the opportunity to connect directly with Unity product teams, gain access to product concepts before they reach the public, and give feedback on Beta products to help us make the best products and Unity experiences for you.

Shape the future of Unity with us:

- Engage with us to give feedback on early features and prototypes
- Connect in closed groups with Unity product teams
- Polls, surveys, discussion and idea-sharing boards, virtual roundtables and more
- Get points for participating in certain activities and redeem points for rewards

We promise not to spam you with notifications. We'll reach out only when we have content that's relevant to you, as we will tailor your Unity Pulse experience according to your profile. Sign up now.

The goal of Unity Pulse is to have a centralized source of solicited user feedback. We will be sunsetting the Advisory Panel, so if you would like to continue to provide feedback as part of our research projects, please sign up for Unity Pulse. We will be redirecting some of the feedback opportunities for the Beta Program to Unity Pulse as well. And don't worry, there will still be opportunities for you to provide your insights and experiences through other parts of the Unity site, including forums and Product Board.

We want to learn and connect with users working in any industry, using any product or service. Even if you are new to Unity, your previous experience and knowledge is valuable to us. We know that your time is important, and we appreciate you spending some of it with us and the Unity Pulse community.

We believe the world is a better place with more creators in it, and we want to provide you with the best products and experiences to make it as easy as possible for you to bring your creative vision to life. Join us on our journey and help us build a better Unity.

Have questions? Email us at unitypulse@unity3d.com.

>access_file_
1322|blog.unity.com

Bursting into 2021 with Burst 1.5

Our High Performance C# (HPC#) compiler technology, Burst, has gone from strength to strength. In the latest version, Burst 1.5, we've made some major improvements. In this post we'll take you through the headline features and show you how to make the most of Burst in your projects.

In collaboration with our partners at Arm, we've added Arm Neon hardware intrinsics to Burst 1.5. These let you target the specific hardware instructions available on Arm platforms, including the amazing vector technology Neon in all its glory. Arm Neon intrinsics were first introduced as an experimental feature in Burst 1.4, and we're happy to announce that in Burst 1.5 Neon intrinsics are now fully supported. Burst currently supports all Armv8-A intrinsics. Armv8.1-RDMA, Armv8.2-DotProd and Armv8.2-Crypto intrinsics are an experimental feature in Burst 1.5; full support will be added in the next Burst version.

Arm Neon intrinsics make use of the v128 type, familiar from Intel intrinsics, and also the v64 type. These types comprise bags of 128 or 64 bits, respectively. It's up to you to make sure you are correctly treating vector element type and count; after all, in the CPU it is represented as a SIMD register. A simple usage pattern is to branch on IsNeonSupported and fall back to plain managed code otherwise. Keep in mind that the IsNeonSupported value is evaluated at compile time based on your target CPU, so it doesn't affect runtime performance. If you want to provide multiple intrinsics implementations for Arm and Intel target CPUs, you'd want to have more of the IsXXXSupported blocks in your code.

An important thing to consider is that Neon intrinsics are supported only on Armv8-A hardware (64-bit). On Armv7-A (32-bit), IsNeonSupported will always be false. If you are still targeting older 32-bit Arm devices, you can still rely on Burst to optimize your managed code automatically, without using Neon intrinsics directly. We'll be following up on Arm intrinsics in a subsequent blog, sharing more details on Neon intrinsics.

Hardware intrinsics are targeted at advanced users who want to get the absolute maximum performance out of the compiler and want to fine-tune their code to squeeze out a few more CPU cycles. If you accept this challenge, we are happy to hear your feedback!

A prominent new feature in Burst 1.5 is what we refer to as Direct Call. With Burst, we began focusing on jobs to accelerate tasks that run on Unity's job system with our HPC# compiler. We then added function pointers, so you can manage and call into bits of Burst code from just about anywhere. With Direct Call, a Burst-compiled static method called from managed code runs through Burst (a minimal sketch appears at the end of this entry). Note that Direct Call methods only work this way when called from the main thread.

In Burst 1.5, we've added ample new and interesting functionalities to give you some extra optimization superpowers.

Hint.Likely, Hint.Unlikely and Hint.Assume

One key request that has continued to come up focuses on the use of intrinsics to inform the compiler whether something is likely or unlikely to happen. In Burst 1.5, we've added two new intrinsics to Unity.Burst.CompilerServices – Likely and Unlikely. These intrinsics enable you to tell the compiler whether some boolean condition (like the condition of an "if" branch) is either likely or unlikely to be hit. This allows the compiler to optimize the resulting code. We've also added an Assume intrinsic, which informs the compiler of conditions that will always hold. For instance, you can use Assume to tell the compiler that a pointer is never null, an index is never negative, a value is never NaN, and so forth.
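To make the hints concrete, here is a minimal sketch of a Burst job that uses Hint.Unlikely and Hint.Assume from Unity.Burst.CompilerServices; the job name, fields and scaling logic are illustrative, not taken from the original post.

```csharp
using Unity.Burst;
using Unity.Burst.CompilerServices;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

[BurstCompile]
struct NormalizeJob : IJob
{
    public NativeArray<float> Values;
    public float Scale;

    public void Execute()
    {
        // Promise the compiler the divisor is always positive (never zero or NaN).
        Hint.Assume(Scale > 0.0f);

        for (int i = 0; i < Values.Length; i++)
        {
            // Mark the error path as unlikely so the hot loop stays tight.
            if (Hint.Unlikely(math.isnan(Values[i])))
            {
                Values[i] = 0.0f;
                continue;
            }
            Values[i] /= Scale;
        }
    }
}
```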
Be careful though – the compiler won't check whether your Assume is actually valid, so please ensure that your assumptions are actually true.

IsConstantExpression

We've also added an intrinsic to query whether an expression evaluates to a constant expression at compile time. This query can be used to check that some value really is constant, or to take faster paths in algorithms when, for example, something is definitely not NaN or null.

[SkipLocalsInit]

In C#, all local variables are zero-initialized by default. Sometimes developers want to skip the cost of doing this zero initialization, so we added an attribute, [SkipLocalsInit], to do just that. Simply apply this attribute to any function where you don't want the zero initialization to happen. This mirrors .NET 5's SkipLocalsInitAttribute functionality, but brings it to Burst sooner.

Check out these smaller but equally awesome improvements in 1.5, in no particular order:

- Burst now supports ValueTuple structures (int, float) within Bursted code – so long as types don't stray across entry-point boundaries. For example, you can't store them in a job struct or return them from a function pointer.
- We added Bmi1 and Bmi2 x86 intrinsics to Burst 1.5 – gating them on AVX2 support. Any CPU that has AVX2 support can now make use of these incredible bit manipulation instructions directly in their code.
- In Unity 2020.2 or newer versions, you can now call new ProfilerMarker("MarkerName") from Bursted code.
- We also re-enabled the Burst warning BC1370, exclusively in player builds. This warning tells you where throws appear in a function unguarded by [Conditional("ENABLE_UNITY_COLLECTIONS_CHECKS")] – which isn't supported in player builds.
- Finally, there is a whole slew of performance improvements surrounding the use of LLVM 11 as our default code generator, along with optimizations for stackalloc hoisting, dead loop removal, compile time improvements and much more.

Burst 1.5 is the last version to support Unity 2018.4. Our next version will have a minimum requirement of Unity 2019.4.

Burst is a core part of our technology stack that you can start using today. It is a stable and verified package, already employed in thousands of projects, and counting. While our DOTS technology stack leverages Burst to provide highly optimized code, Burst also serves as a stand-alone package outside of DOTS. It supports all the major desktop, console and mobile platforms, and works with Unity 2018.4 or newer.

If you have any thoughts, questions, or would just like to let us know what you are doing with Burst, then please feel free to leave us a message on the Burst forum.
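As referenced above, here is a minimal sketch of the Direct Call pattern, assuming the standard [BurstCompile] attribute usage on both the containing type and the static method; the class and method names are illustrative, not from the original post.

```csharp
using Unity.Burst;

[BurstCompile]
public static class FastMath
{
    // Because the containing class and the method are both marked
    // [BurstCompile], calling this static method from the main thread
    // dispatches directly to the Burst-compiled code (Direct Call).
    [BurstCompile]
    public static int SumOfSquares(int n)
    {
        int total = 0;
        for (int i = 1; i <= n; i++)
            total += i * i;
        return total;
    }
}
```

From anywhere on the main thread, a plain call such as FastMath.SumOfSquares(1000) would then run through Burst rather than Mono or IL2CPP.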

>access_file_
1323|blog.unity.com

How to design and optimize creatives for mobile apps

Engaging ad creatives are key to succeeding on an ad network, and are often the difference between acquiring users at scale and struggling to get liftoff. But what does it take for apps, from categories ranging from education and travel to entertainment and finance, to create killer ad creatives that convey the right message, engage users, and drive installs? By the end of this article, you'll know just how to achieve this. Read on for top tips from Giacomo Maragliulo, Art Director at ironSource, and Shay Elkoby, Creative Operations Lead.

1. Be authentic

Authenticity is key for apps to drive installs from the right users; the kinds who will stick around long term and build a strong connection with your product. Here are a couple of pointers to make sure your ad is authentic.

First, understand your USP and what motivates your audience. This will enable you to conceptualize your key messaging in your ad creatives, and what you need to highlight using various audio or visual effects. Jiggy, for example, knew that their target audience especially enjoys creating amusing dance videos for family members. Even more specifically, their motivation is to use the videos as light-hearted pranks. This shines through in their ad creative, which focuses on using "grandma" in the various dance animations and uses copy like "prank" and "hilarious". The developers also know that their users are strongly motivated by viral memes and gifs, so they make sure to showcase the many options that are available within the app.

Second, be sure to show both the core app experience and the end product. The core product might be photo editing, but if the end goal is for users to share the photos on social media, make sure to show that too in the ad with strong visual cues - like screenshots of the user interface.

2. Educate your audience

Building on the authenticity tip, you want to make sure the creative is educational - by the end of the ad, the user should know exactly what to expect from your app and why they should download it.

There are two methods for doing this - you should run A/B tests to determine which works best. First, try showing your app using a degree of creative license and a relatable storyline; for a delivery app, for example, this could involve using human actors in a home setting, debating what to do for dinner, before taking out their phone and ordering dinner via the app. The user gets to see the app in use and understands its value, but in a less direct way. Alternatively, you can test a different creative strategy - one that tells users straight away, explicitly, what the app is and why they need it. This could mean using creative elements like animations, testimonials, or in-app screenshots or videos in the first few seconds, and focusing on demonstrating the app's core features for the duration of the ad.

Whatever approach you go for, ensuring the creative is educational will mean only the most relevant users install, meaning they'll stick around longer and increase your ROAS and LTV. Remember that it won't be possible to convey all your messages and selling points in a short ad - prioritize what's most important, and leave some room for curiosity among users that will lead them to head to the store and install the app.

3. Leverage audio effects

Audio, especially when paired with strong visuals, can be very effective in improving the user experience and in turn boosting IPMs for app ad campaigns.
Get inspired by what others are doing - look out for the use of voiceover narration, crescendos, and the different musical styles and sound effects used in the background. TeasEar used both music and audio effects in its video ad: the music has a fast rhythm to complement the vibrant and dynamic visuals and generate excitement, while the audio effects when the different stickers are used highlight the ASMR aspect of the app. In this case, the audio effects could be described as educational, by helping reinforce that this app is in the ASMR category.

Note that while audio elements can be combined, like music and sound effects, forcing too many audio elements into one creative doesn't necessarily increase its IPM - it might even do the opposite. Make sure to A/B test different versions and combinations of audio elements to find what's right for your specific ad campaign.

4. Research ad creative trends

Competitive intelligence is a key part of building ad creatives that achieve excellent IPMs. If you can emulate the key features and components of your most successful competitors' ads - while adapting them and making them your "own" - you're already on the way to making a high-impact creative. Scour App Annie, Sensor Tower, YouTube, Facebook, and Instagram to see what's hot. Here are a few features to put on your checklist:

Ad types

See what ad types other apps in your category are using to guide your own strategy. Are they using static ads or videos? What about interactive end cards? Either way, you should experiment with several different types of creatives, but it's always useful to see what your competition is doing.

Length

How long are your competitors' video ads? 10 seconds or 30 seconds? We've found that for apps, short is sweet - around 15 seconds is generally the optimal length. Shorter than this and you risk failing to educate users about your app and conveying your key message; longer than this and you risk users losing engagement and dropping off before they finish watching your video. Having said this, be sure to run plenty of A/B tests to determine the optimal length for your specific campaign - 15 seconds is a good place to start, but users might prefer even shorter video ads depending on the product.

On-screen effects

How are your competitors using prompts or other visual effects to help convey their message? For instance, they could be using human hands to point at something and emphasize it; they could use a human protagonist to tell the story, or on-screen copy; they might use live actors.

>access_file_
1324|blog.unity.com

Enter the Boss Room: our new multiplayer sample game

Explore Unity's new, experimental netcode library and the underlying patterns of a multiplayer game in our small-scale cooperative RPG, Boss Room.

Creating multiplayer games is not easy, and it's common to feel overwhelmed when exploring the development of a multiplayer game – even with SDK docs provided. You need advice on what to do with the provided SDKs and patterns you can use as building blocks for your own games. It's our goal as the Unity Multiplayer Networking team to equip and support developers (like you) with the tools needed to build great multiplayer gaming experiences for your players. Part of that promise involves providing not only the foundational networking technology but also the documentation and educational templates needed to understand its application. That's where our multiplayer samples come in – with Boss Room being the first educational content of its kind.

As Boss Room is being developed, tutorials on the different aspects of networking the sample will be written for developers on our new documentation site. These tutorials will cover many critical pieces of networking a small cooperative game, such as how to choose between RPCs vs NetworkVariables (a minimal sketch of that choice appears at the end of this entry), or how to design your game to be responsive with lag compensation techniques.

Now let's dive into our new, early access co-op sample: Boss Room.

Enter the Boss Room

Welcome to the Boss Room, an official sample project built on 2020 LTS that showcases Unity's native suite of tools, graphics, and experimental networking technology – available now as Early Access through GitHub. Boss Room leverages the new experimental netcode package to bring up to eight players together to defeat imps and a boss in this adorable vertical slice of a cooperative RPG dungeon. Eager for more Boss Room? Don't sweat – it will continue to evolve alongside the community as our multiplayer solution evolves too. Get started here, or read on to see a quick tour of what to expect in Boss Room today.

The premise

Players begin Boss Room by hosting or joining a game server, which is hosted on one of the players' devices. Once connected, players join a lobby and select a Hero from one of the eight available while waiting for all party members to be ready to play. When all players are ready, a short timer shows, and then all Heroes are transported into the Boss Room environment. Once in the Boss Room, players must work together to get past enemy minions and defeat the boss.

Networking

With the main goal of Boss Room being to teach developers the underlying concepts and patterns behind a multiplayer game, the networking aspects of the sample are crucial. Part of the main value designed into Boss Room is the multiplayer patterns it shows: action animation anticipation, lobby, state vs RPC, and more. The goal is to provide users with not only the implementation of such patterns, but also documentation to help them understand it. Be sure to keep an eye on our documentation site to stay up-to-date with these articles as they're published over the coming months. The networking model used in Boss Room is a client-hosted server, and players can connect to each other through an integrated Photon Relay and IP direct connection.

Characters and classes

For Player Characters, we have four 3D character classes (Mage, Warrior, Rogue, and Archer), three races (Elf, Human, and Dwarf), and two genders represented today. These characters have two primary abilities and a few emote animations included for each.
For AI / Enemies, we showcase imps and a dungeon boss – each with their own unique abilities.

The characters in Boss Room are all based on a common character model we lovingly call the "U" – designed to be reused and scaled into new shapes and sizes. Since they all share a common structure, they're designed for reuse. Boss Room uses the UCL license, which means you'll be able to reuse all our assets and build on top of them in your own Unity project, so by all means… mod away!

As previously mentioned, Boss Room demonstrates a variety of action gameplay techniques that are commonly found in co-op RPGs, along with the coding patterns and techniques that are useful when implementing said actions. This includes server-driven pathfinding and movement with client-side interpolation, as well as having the Action System be a generalized mechanism for characters to "do stuff" in a networked way. Actions include everything from your basic character attack to a fancy AOE skill like the Archer's Volley Shot. Below is a list of action archetypes implemented using this system:

- Melee attack with a physics-based hitbox check.
- Area-of-effect attack. The attack is centered on a point provided by the client to show client-side area selection with a server-driven effect.
- Ranged projectile attack which spawns a server-driven projectile. Also includes a variant of this action with an ability to "charge" the shot by holding down the action button.
- Stunned action that prevents AI-driven agents from doing anything.
- Stealth action that toggles stealth mode for the Rogue.
- A buffing action with an included ability to charge it by holding down the action button. Produces an extra effect at the maximum level of charge.
- Emote actions that play silly character animations – these show player communication.
- Chase action that makes your character follow the chosen target.
- Target selection that is used for actions that can aim at a target, if one is chosen.
- Trample action that the Boss executes to push unwary heroes aside – this shows patterns for networked physics.
- Revive action that allows heroes to bring each other back to life.

Dungeon and gameplay

The Boss Room environment is designed to let players have a few moments to test their character's skills before they face the boss. The dungeon features an antechamber and a boss room, with a simple co-op switch puzzle to get into the Boss's area. The goal is simple: defeat imps, solve the puzzle, take down the boss – and get to the treasure at the end!

A note from our team

The goal of Boss Room is greater than just API education. Multiplayer is one of the areas of game making that will influence your game design, not just your implementation. Knowing good practices like "when is it ok to be client driven vs server driven if I have a server driven, physics impacting NPC like the boss" or "tricks where you can use ramp up animations to hide latency" will help you make a better multiplayer game. Boss Room's whole feature set consists of examples of these practices and patterns.

Our team's goal is to give you a reference for the whole engineering process around multiplayer game development, so you're able to build a multiplayer game from A to Z for specific types of games. This will involve design, implementation, testing and how to handle a live environment. We're starting with Boss Room as the reference – our docs around it are currently being written.

Boss Room's secondary goal is dogfooding the new experimental GameObject netcode package (an evolution of MLAPI).
The samples team and the SDK team work hand in hand to improve MLAPI, give UX feedback, and raise issues. As MLAPI adds new features and updates the SDK, we'll keep using these features and make sure they make sense in a project development context.

What's up next?

As our networking solution continues to grow and improve, so will this sample. Here are ways to stay involved:

- Follow this project on GitHub, and explore its capabilities with this guide
- Follow the core netcode progress on GitHub and add requests for updates or features to our public roadmap
- Chat with us on Discord or in the Unity Multiplayer forum and share your experiences or ask for help
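As referenced above, here is a minimal sketch of the RPC vs NetworkVariable choice, written against the experimental MLAPI-style API (NetworkBehaviour, NetworkVariableInt, [ServerRpc]); the class, field and method names are illustrative, and exact namespaces may differ between MLAPI versions and later Netcode for GameObjects releases.

```csharp
using MLAPI;
using MLAPI.Messaging;
using MLAPI.NetworkVariable;

// NetworkVariables replicate state: every client (including late joiners)
// sees the current hit points. RPCs carry one-off events: a client asks
// the server to apply damage once, and the server mutates the state.
public class HeroHealth : NetworkBehaviour
{
    // Replicated state, written on the server and synchronized to clients.
    public NetworkVariableInt HitPoints = new NetworkVariableInt(100);

    // One-shot request sent from a client to the server.
    [ServerRpc]
    public void TakeDamageServerRpc(int amount)
    {
        HitPoints.Value -= amount;
    }
}
```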

>access_file_
1325|blog.unity.com

Tales from the optimization trenches: Saving memory with Addressables

Efficient streaming of assets in and out of memory is a key element of any quality game. As a consultant on our Professional Services team, I've been striving to improve the performance of many customer projects. That's why I'd like to share some tips on how to leverage the Unity Addressable Asset System to enhance your content loading strategy.

Memory is a scarce resource that you must manage carefully, especially when porting a project to a new platform. Using Addressables can improve runtime memory by introducing weak references to prevent unnecessary assets from being loaded. Weak references mean that you have control over when the referenced asset is loaded into and out of memory; the Addressable System will find all of the necessary dependencies and load them, too. This blog will cover a number of scenarios and issues you can run into when setting up your project to use the Unity Addressable Asset System – and explain how to recognize them and promptly fix them.

For this series of recommendations, we will work with a simple example that's set up in the following way:

- We have an InventoryManager script in the scene with references to our three inventory assets: the Sword, Boss Sword, and Shield prefabs.
- These assets are not needed at all times during gameplay.

You can download the project files for this example on my GitHub. We're using the preview package Memory Profiler to view memory at runtime. In Unity 2020 LTS, you must first enable preview packages in Project Settings before installing this package from the Package Manager. If you're using Unity 2021.1, select the Add package by name option from the additional menu (+) in the Package Manager window. Use the name "com.unity.memoryprofiler".

Let's start with the most basic implementation and then work our way toward the best approach for setting up our Addressables content. We will simply apply hard references (direct assignment in the Inspector, tracked by GUID) to our prefabs in a MonoBehaviour that exists in our scene. When the scene is loaded, all objects in the scene are also loaded into memory along with their dependencies. This means that every prefab listed in our InventorySystem will reside in memory, along with all the dependencies of those prefabs (textures, meshes, audio, etc.). As we create a build and take a snapshot with the Memory Profiler, we can see that the textures for our assets are already stored in memory even though none of them are instantiated.

Problem: There are assets in memory that we do not currently need. In a project with a large number of inventory items, this would result in considerable runtime memory pressure.

To avoid loading unwanted assets, we will change our inventory system to use Addressables. Using Asset References instead of direct references prevents these objects from being loaded along with our scene. Let's move our inventory prefabs to an Addressables Group and change InventorySystem to instantiate and release objects using the Addressables API (see the sketch below). Build the Player and take a snapshot. Notice that none of the assets are in memory yet, which is great because they have not been instantiated. Instantiate all the items to see them appear correctly with their assets in memory.

Problem: If we instantiate all of our items and despawn the boss sword, we will still see the boss sword's texture "BossSword_E" in memory, even though it isn't in use. The reason for this is that, while you can partially load asset bundles, it's impossible to automatically partially unload them.
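Before digging into that problem, here is a minimal sketch of the instantiate-and-release pattern referenced above, using the Addressables API (AssetReferenceGameObject, InstantiateAsync, ReleaseInstance); the class and field names are illustrative, and the real InventorySystem in the sample project may differ.

```csharp
using UnityEngine;
using UnityEngine.AddressableAssets;

// Inventory items are weak references: nothing is loaded until we ask for it.
public class InventorySystem : MonoBehaviour
{
    // Assigned in the Inspector; the scene only stores a GUID, not the prefab.
    public AssetReferenceGameObject swordReference;

    private GameObject swordInstance;

    public void SpawnSword()
    {
        // Loads the prefab's bundle and its dependencies on demand.
        swordReference.InstantiateAsync().Completed += handle =>
        {
            swordInstance = handle.Result;
        };
    }

    public void DespawnSword()
    {
        // Releasing the instance lets Addressables unload the bundle
        // once nothing else still needs it.
        if (swordInstance != null)
            Addressables.ReleaseInstance(swordInstance);
    }
}
```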
This behavior can become particularly problematic for bundles with many assets in them, such as a single AssetBundle that comprises all of our inventory prefabs. None of the assets in the bundle will unload until the entire AssetBundle is no longer needed, or until we call the costly CPU operation Resources.UnloadUnusedAssets().

To fix this problem, we must change the way that we organize our AssetBundles. While we currently have a single Addressable Group that packs all of its assets into one AssetBundle, we can instead create an AssetBundle for each prefab. These more granular AssetBundles alleviate the problem of large bundles retaining assets in memory that we no longer need. Making this change is easy: select an Addressable Group and, in the Inspector under Content Packaging & Loading > Advanced Options > Bundle Mode, change the Bundle Mode from Pack Together to Pack Separately. By using Pack Separately to build this Addressable Group, you create an AssetBundle for each asset in the Addressable Group.

Now, returning to our original test: spawning our three items and then despawning the boss sword no longer leaves unnecessary assets in memory. The boss sword textures are now unloaded because the entire bundle is no longer needed.

Problem: If we spawn all three of our items and take a memory capture, duplicate assets will appear in memory. More specifically, this will lead to multiple copies of the textures "Sword_N" and "Sword_D". How could this happen if we only changed the number of bundles?

To answer this question, let's consider everything that goes into the three bundles we created. While we only placed three prefab assets into bundles, there are additional assets implicitly pulled into those bundles as dependencies of the prefabs. For example, the sword prefab asset also has mesh, material, and texture assets that need to be included. If these dependencies are not explicitly included elsewhere in Addressables, then they are automatically added to each bundle that needs them.

Addressables includes an analysis window to help diagnose bundle layout. Open Window > Asset Management > Addressables > Analyze and run the rule Bundle Layout Preview. Here, we see that the sword bundle explicitly includes the sword.prefab, but there are many implicit dependencies also pulled into this bundle. In the same window, run Check Duplicate Bundle Dependencies. This rule highlights the assets included in multiple asset bundles based on our current Addressables layout.

We can prevent the duplication of these assets in two ways:

1. Place the Sword, BossSword and Shield prefabs in the same bundle so that they share dependencies, or
2. Explicitly include the duplicated assets somewhere in Addressables.

We want to avoid putting multiple inventory prefabs in the same bundle to stop unwanted assets from persisting in memory. As such, we will add the duplicated assets to their own bundles (Bundle 4 and Bundle 5). In addition to analyzing our bundles, the Analyze Rules can automatically fix the offending assets via Fix Selected Rules. Press this button to create a new Addressable Group named "Duplicate Asset Isolation," which has the four duplicated assets in it. Set this group's Bundle Mode to Pack Separately to prevent assets that are no longer needed from persisting in memory.

Using this AssetBundle strategy can result in problems at scale. For each AssetBundle loaded at a given time, there is memory overhead for AssetBundle metadata.
This metadata is likely to consume an unacceptable amount of memory if we scale this current strategy up to hundreds or thousands of inventory items. Read more about AssetBundle metadata in the Addressables docs.

View the current AssetBundle metadata memory cost in the Unity Profiler: go to the memory module, take a memory snapshot, and look in the category Other > SerializedFile. There is a SerializedFile entry in memory for each loaded AssetBundle. This memory is AssetBundle metadata rather than the actual assets in the bundles. This metadata includes:

- Two file read buffers
- A type tree listing every unique type included in the bundle
- A table of contents pointing to the assets

Of these three items, file read buffers occupy the most space. These buffers are 64 KB each on PS4, Switch, and Windows RT, and 7 KB on all other platforms. For example, with 1,819 loaded bundles, 1,819 bundles * 64 KB * 2 buffers = 227 MB just for buffers.

Seeing as the number of buffers scales linearly with the number of AssetBundles, the simple solution to reduce memory is to have fewer bundles loaded at runtime. However, we've previously avoided loading large bundles to prevent unwanted assets from persisting in memory. So, how do we reduce the number of bundles while maintaining granularity?

A solid first step would be to group assets together based on their use in the application. If you can make intelligent assumptions based on your application, then you can group assets that you know will always be loaded and unloaded together, such as grouping assets by the gameplay level they are used in. On the other hand, you might be in a situation where you cannot make safe assumptions about when your assets are needed or not needed. If you are creating an open-world game, for example, then you cannot simply group everything from the forest biome into a single asset bundle, because your players might grab an item from the forest and carry it between biomes. The entire forest bundle remains in memory because the player still needs one asset from the forest.

Fortunately, there is a way to reduce the number of bundles while maintaining a desired level of granularity. Let's be smarter about how we deduplicate our bundles. The built-in deduplication analyze rule that we ran detects all assets that are in multiple bundles and efficiently moves them into a single Addressable Group. By setting that group to Pack Separately, we end up with one asset per bundle. However, there are some duplicated assets we can safely pack together without introducing memory problems. We know that the textures "Sword_N" and "Sword_D" are dependencies of the same bundles (Bundle 1 and Bundle 2). Because these textures have the same parents, we can safely pack them together without causing memory problems: both sword textures must always be loaded or unloaded together, and there is never a case where we specifically use one texture and not the other, so neither can persist in memory unnecessarily.

We can implement this improved deduplication logic in our own Addressables Analyze Rule, working from the existing CheckForDupeDependencies.cs rule. You can see the full implementation code in the Inventory System example. In this simple project, we merely reduced the total number of bundles from seven to five. But imagine a scenario where your application has hundreds, thousands, or even more duplicate assets in Addressables.
While we were working with Unknown Worlds Entertainment on a Professional Services engagement for their game Subnautica, the project initially had a total of 8,718 bundles after using the built-in deduplication analyze rule. We reduced this to 5,199 bundles after applying the custom rule to group deduplicated assets based on their bundle parents. You can learn more about our work with the team in this case story. That is a 40% reduction in the number of bundles, while still having the same content in them and maintaining the same level of granularity. This 40% reduction in the number of bundles similarly reduced the size of SerializedFile at runtime by 40% (from 311 MB to 184 MB).

Using Addressables can significantly reduce memory consumption. You can get further memory reduction by organizing your AssetBundles to suit your use case. After all, the built-in analyze rules are conservative in order to fit all applications. Writing your own analyze rules can automate bundle layout and optimize it for your application. To catch memory problems, continue to profile often and check the Analyze window to see what assets are explicitly and implicitly included in your bundles. Check out the Addressable Asset System documentation for more best practices, a guide to help you get started, and expanded API documentation.

If you'd like to get more hands-on help to learn how to improve your content management with the Addressable Asset System, contact Sales about a professional training course.

>access_file_
1326|blog.unity.com

Build stunning mobile games that run smoothly with Adaptive Performance

Learn how to use Adaptive Performance to tune your mobile game – balancing frame rates and graphics. Get the latest details on Adaptive Performance's updates for Device Simulator, samples and Scalers.

Developers must pay close attention to their game's performance on players' devices, especially when building more complex mobile games. After all, performance issues can affect gameplay and drain the device's battery. More specifically, an excessive amount of heat generated by a mobile phone can cause thermal throttling, which leads to dropped frame rates – an issue that is tough to recover from.

So why does thermal throttling affect your mobile game's performance? Well, as your game attempts to do more work, such as rendering or processing game logic, CPUs and GPUs use more power. This increase in power means that more heat is produced, which slows down device performance in an attempt to reduce its temperature.

With Unity and Samsung's Adaptive Performance, you can now monitor the device's thermal and power state to ensure that you are ready to react appropriately. During an extended play session, for instance, you can reduce your level of detail or LOD bias dynamically to ensure that your game continues to run smoothly. Adaptive Performance allows developers to maintain performance in a controlled way by selectively reducing graphics fidelity.

Adaptive Performance works for all Samsung Galaxy devices; in other words, only Samsung devices can benefit directly from an Adaptive Performance implementation. Samsung is the leading Android device manufacturer, with more than a third of the global market share according to AppBrain. This means that adding Adaptive Performance to your game is a sure way to improve performance on hundreds of millions of devices.

While you can use the Adaptive Performance APIs to fine-tune your application, Adaptive Performance also offers automatic modes. In these modes, Adaptive Performance determines the game settings to tweak based on several key metrics, including:

- Desired frame rate based on previous frames
- Device temperature level
- Device proximity to a thermal event
- Whether the device is bound by CPU or GPU

These four metrics dictate the state of the device, and Adaptive Performance tweaks the adjusted settings to reduce the bottleneck. This is done by providing an integer value, known as the Indexer, to describe the state of the device. The Indexer is a system that keeps track of your device's thermal and performance state and provides a quantified quality index. Scalers represent individual features in your game, which can include, but are not limited to, graphics and physics settings. Scalers adjust themselves based on the Indexer's value. You can view which Scalers are available in Device Simulator's Adaptive Performance extensions.

Simulating bottlenecks can be difficult, but thanks to Adaptive Performance's integration with Device Simulator, you can test various scenarios directly in the Editor instead of waiting for the device to heat up before benchmarking. With the Thermal settings in Device Simulator, you can set the device to Throttle, or to send out a warning when throttling is imminent. You can also adjust levels and trends to positive, which indicates that the device is generating heat. The Performance settings, meanwhile, allow you to set the current bottleneck to CPU, GPU or Target Frame Rate.
Similarly, you can set CPU and GPU levels to simulate the frequency at which they perform. Both Thermal and Performance settings affect how Adaptive Performance alters your game's performance via the Indexer and Scalers. With Device Simulator, you can enable different Scalers to see how Adaptive Performance accommodates your device when it is throttling. For example, you can allow Adaptive Performance to tweak the Shadow settings when the GPU is set as your bottleneck and the warning level is set to Throttling, with an increase in both thermal trends and levels. You can also override a Scaler with the slider to test individual settings.

Adaptive Performance empowers the creation of custom Scalers to enhance and expand on the ways that game settings are controlled, including settings that are not automatically provided. To implement a custom Scaler, you subclass the AdaptivePerformanceScaler class (a minimal sketch appears at the end of this entry). Setting QualitySettings.masterTextureLimit, for instance, lets you control texture quality and size per level. You can override the OnLevel virtual function and implement your scaling logic based on the CurrentLevel that Adaptive Performance reports back: setting QualitySettings.masterTextureLimit to a higher value makes Unity use a lower-resolution mipmap for all textures. When dealing with texture sizes, you can see that your custom Scaler impacts visuals and specifically targets the GPU. Defining a maximum level and boundary also ensures that your game's visuals are not entirely lost, as each successive mipmap level halves the dimensions of the one below it.

Adaptive Performance provides out-of-the-box features that allow your game to react appropriately to the current state of the device. To learn more about Adaptive Performance, you can view the samples we've provided in the Package Manager by selecting Package Manager > Adaptive Performance > Samples. Each sample interacts with a specific Scaler, so you can see how each Scaler impacts your game. We also highly recommend viewing the End User Documentation to learn more about Adaptive Performance configurations and how you can interact directly with the API. The documentation, along with other relevant links, can be found below. Watch the video to learn more about Adaptive Performance. You can also find out how to implement Adaptive Performance with our documentation.
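As referenced above, here is a minimal sketch of a custom Scaler along those lines. It assumes the AdaptivePerformanceScaler base class exposes an overridable OnLevel() method and a CurrentLevel property as described in the post; treat the member signatures as illustrative and check the package documentation for the exact API in your version.

```csharp
using UnityEngine;
using UnityEngine.AdaptivePerformance;

// A custom Scaler that trades texture resolution for GPU headroom.
public class TextureSizeScaler : AdaptivePerformanceScaler
{
    protected override void OnLevel()
    {
        // A higher CurrentLevel means the Indexer is asking for more savings.
        // masterTextureLimit = 1 skips the top mipmap (half-size textures),
        // 2 skips two levels (quarter-size), and so on.
        switch (CurrentLevel)
        {
            case 0:
                QualitySettings.masterTextureLimit = 0; // full-size textures
                break;
            case 1:
                QualitySettings.masterTextureLimit = 1; // half-size
                break;
            default:
                QualitySettings.masterTextureLimit = 2; // quarter-size
                break;
        }
    }
}
```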

>access_file_
1327|blog.unity.com

How much do media buyers know about mobile gamers?

According to App Annie's 2021 State of Mobile Report, mobile gaming grew 20% year over year in both users and total consumer spend. But is that jump reflected in mobile marketers' media plans for 2021? ironSource surveyed 211 advertising industry professionals across agencies and programmatic partners to find out, and to gauge how much of their media strategies focus on mobile gaming.

Advertisers are ready to spend more in-game in 2021

- Our survey found that ad spend in mobile games is set to increase this year, with 49% of those who have previously run in-game campaigns, and 32% of those who haven't, planning on expanding in-game budgets in 2021. Meanwhile, 48% of those who have historically run in-game campaigns plan to continue to do so at existing spend levels.
- 73% of survey respondents currently buy in-app ads
- 51% of survey respondents currently buy in-game ads; 49% of these plan on increasing their spend from 2020, and 48% plan on keeping it flat
- 35% of those who did not buy in-game in 2020 plan on starting to buy in 2021

Buyers who do not purchase in-game underestimate the size of the market

Despite the increase in spend and adoption, many media professionals continue to hold onto biases regarding who plays games, what kind of games are popular, and what ad formats work best in mobile games. This is particularly true of those who have never run campaigns on in-game inventory, as most found it difficult to identify the scale of the global gaming audience, their demographic makeup, or the revenue generated by the mobile gaming market. Out of buyers who do not purchase in-app:

- 58% underestimate the number of gamers worldwide, which Newzoo reported in 2020 as over 2.6 billion
- 60% understate the average age of a gamer, which is 36
- 86% assume that the people who play most frequently earn salaries between $50K-$149K, when in actuality it's $250K+
- 75% underestimate the revenue generated by the gaming market. Mobile games made up $76 billion of revenue in 2020, according to Newzoo. As a comparison, global revenue for other entertainment channels like streaming remains lower than gaming - e.g., streaming audio ($26 billion) and streaming video ($42 billion).

Buyers of all kinds continue to hold misconceptions about the gaming audience today

- 73% of buyers - regardless of whether or not they buy in-game ads - underestimate how much of the app audience would engage with rewarded video to unlock content. According to eMarketer, 74% of users would watch an ad in exchange for in-app rewards or currency, whereas most media buyers assumed the number was 65% - and those who do not buy in-game ads were likely to estimate it at 51%.
- Across 15 questions about mobile audiences and gaming, the average score was 45% correct among the media professionals surveyed, even for those who had previously run campaigns on in-game inventory.
- 83% of buyers know casual games are the most downloaded genre of mobile games. However, 67% incorrectly identify puzzles as the most popular sub-genre of casual games, versus the 15% who correctly identified it as arcade.
- 63% of buyers assumed casual games were the top genre for time spent, while only 15% correctly identified RPG, strategy and action games.

What does this mean for ad buyers - and developers?

It's reassuring to see an increased investment in gaming as more advertisers understand the reach and impact of the mobile gaming audience.
The findings indicate that there remains an opportunity to educate media buyers about the power and potential of in-app and in-game advertising, and about the benefits of interactive formats such as rewarded video.

>access_file_
1329|blog.unity.com

Zutari uses Unity to design renewable energy sites for a more sustainable future

See how Zutari, a South African engineering consultancy, is using Unity's real-time 3D development platform to automate large-scale solar photovoltaics (PV) projects, reducing the time required to develop design-level insights and decreasing costs.

Zutari's mission is to co-create innovative engineering solutions that deliver real impact and enable environments, communities, and economies across Africa to thrive. Among its core areas of expertise, Zutari works to deliver sustainable energy solutions – like hydro, solar, hybrid, storage, and wind power – that fit unique local needs, terrain, and constraints. To accomplish that, Zutari and its visualization team embrace emerging technology throughout a project's lifecycle.

"By bringing storytelling and creative technology together, we create immersive and interactive project experiences that better communicate the vision of major infrastructure and built environment projects," says Murray Walker, Expertise Leader in Interactive Visualization at Zutari.

At the heart of the company's emerging technology is Unity's real-time 3D development platform. Zutari is using Unity to change the way large-scale solar projects in South Africa are designed, created, and operated.

Sun tracking and shading

Taking on a solar project is a huge endeavor. Some sites can be as large as 112 square miles – about 2.5 times bigger than the city of San Francisco. To get an entire site into Unity, 3D models are spawned after projects are exported and coordinates are added for each of the components. Autodesk Revit and AutoCAD models are brought into Unity to create an immersive, interactive virtual environment.

Solar panels need to be placed in the correct position relative to the earth and project site in order to optimize each panel to convert as much sunlight into energy as possible. Every site is going to be different and come with its own set of challenges, whether it's in Brazil, Malawi, or Canada. In Unity, Zutari can do sun tracking and shading for each solar panel. For example, it enables them to see if panels are too close together and shading each other at a specific time of day or year, which is going to be different in the winter than it is in the summer. The terrain can also be a factor, complicating sun shading even more: the geographic location might not allow a panel to be placed as planned. By tracking sun and shading, Zutari is able to optimize its solar sites and install as many panels as possible to increase the energy output.

While these issues are typically accounted for using more traditional engineering and design tools, this highly immersive visualization allows Zutari's engineers to quickly review the impact of design decisions under any possible condition throughout a year of operation.

Shedding new light on construction monitoring

Construction monitoring helps provide scheduling guidance for when and where components should be installed. Unity gives developers early insight into what is happening on-site throughout various stages of the project lifecycle, with a virtual model that multiple stakeholders can access simultaneously. Zutari tracks progress by leveraging drone footage captured at various stages of construction. The drone footage is then used to place the objects accordingly within the virtual environment.
Updated visualizations are sent to the client every week. This provides them with an interactive progress report, the ability to "tour" the site virtually throughout the process, and validation that the build is consistent with what's being paid for. Zutari is always looking for ways to improve the process even further.

"Our goal is to eventually use Unity to train machine learning algorithms that can help us determine the completion percentage of a site, where the drones can automatically discern a bush from a pole we've constructed in the ground and perform an accurate calculation of the site's completeness," says Walker.

An internal solar PV design solution

To improve and accelerate design even further, Zutari is using Unity alongside AutoPV, a computational design solution also developed by Zutari with its top PV design engineers to automate the design process of large utility-scale solar PV facilities. The solar design process is quite laborious and time-consuming, taking weeks or even months for large utility-scale projects. It requires numerous iterations to optimize the routing, length and cable sizes, and the placement of inverters and junction boxes for each facility. Manually calculating these routings, locations, and engineering parameters is complex, and even minor errors can result in setbacks and rework.

With AutoPV, the cable and inverter layouts and engineering parameters can now be calculated automatically in a matter of seconds. This allows near-instant design-level detailed bills of materials, equipment schedules, cable losses, and other detailed engineering parameters from very early project development stages. The power that Unity brings to AutoPV allows Zutari's engineers and clients to interactively visualize, review and optimize designs as many times as they wish – something that was previously not possible due to long design lead times. Unity's advanced visualizations also provide rich material for interaction with other stakeholders right from the first steps of development.

Energy for more use cases

Unity is also being leveraged in a number of additional capacities across Zutari's business, including wind turbine design and construction, trucking logistics for material transport, and more. Zutari is also actively developing virtual reality (VR) solutions to train operators of high-voltage electrical equipment and installations in the safety of their own offices. These training solutions will have a significant impact on the future of operator and contractor training.

See why industry leaders are embracing real-time 3D technology to change the way buildings are designed, created, and operated. Learn more about Unity for AEC.

>access_file_
1330|blog.unity.com

Mapping what’s next for in-car navigation experiences

We're teaming up with HERE Technologies, the world's leading location platform that collects data from over 100,000 sources and powers maps in over 150 million vehicles, to reimagine in-car experiences. Check out our shared vision for the future of embedded automotive human-machine interfaces (HMIs) in a new demo made with Unity and featuring HERE 3D city data.

We believe every screen – and how people interact with these screens – can benefit from real-time 3D technology. Unity's real-time 3D brings disjointed HMI design and development workflows together to create visually compelling, immersive HMI experiences in cars and other industrial products. We're working with the broader HMI ecosystem to extend the power of this technology everywhere, to the benefit of both creators and consumers. Following our collaborations with Elektrobit and NXP Semiconductors, we're teaming up with HERE Technologies, the leading provider of map content and location-based services to the automotive industry.

To provide a glimpse of our shared vision of the automotive user experience, we're debuting a futuristic, wide-screen demo of an embedded in-vehicle infotainment (IVI) system. It showcases our combined capabilities by integrating a 3D map of San Francisco from HERE Premier 3D Cities data with Unity. The demo has been tested on the Qualcomm Snapdragon SA8155, a popular automotive System on Chip (SoC). This prototype serves as inspiration for a world where automotive original equipment manufacturers (OEMs) can create more immersive infotainment systems that blend 3D location data with dynamic, high-end design capabilities.

"The goal of our collaboration with Unity is to meet our customers' desire for a more stimulating in-car navigation experience," said Jorgen Behrens, Chief Product Officer at HERE Technologies. "Unity's robust 3D rendering engine makes HERE 3D city data, route guidance and navigation look impressive, providing a rich and immersive in-dash experience to the driver."

Current HMI design workflows are rife with inefficiencies and pain points. Typical processes start with a designer's concepts and guides, which are then interpreted by remote tier 1 integration engineers. After OEM design review, it goes back to the designer for design review and changes – sometimes taking days or weeks per cycle. Collaboration occurs across multiple tools with limited interoperability, resulting in incomplete and inefficient implementations of concepts as well as lackluster graphics performance. Only a small portion of design concepts and visuals are able to make their way to mass production.

Unity's real-time 3D unlocks more efficient HMI workflows, bringing user interface (UI) and user experience (UX) design and development together in one end-to-end experience. Visually compelling, highly interactive concepts, mockups, and final designs appear and perform as they would in the target HMI hardware (chipsets and screens). This real-time workflow enables design and engineering teams to collaborate in a rapid, agile way and seamlessly transition from initial prototypes to final production implementation.
Vision becomes reality faster and without the many compromises teams typically have to make along the way. In comparison with other real-time 3D engines, Unity's runtime scalability enables designers and HMI engineers to create one HMI system yet be able to deploy to both high-end and low-end SoCs, saving OEMs millions of dollars when working with multiple carline variants.

To create this demo, HERE's 3D Concepts & Prototyping team in its Vertical Products division leveraged Unity's extensibility and the Unity Scripting API to create a custom, simplified version of the Unity Editor. The beauty of Unity is that anyone can tailor it to the way their teams work; in this case, HERE's team reconfigured Unity to better support HMI design processes, removing components in the Inspector or Hierarchy that were not required and creating a simplified, design-focused UI and workflow.

To build this proof-of-concept demo, HERE's team created UI elements from familiar content creation tools as well as Unity prefabs. Teams could drag and drop these pre-prepared elements, including HERE Premier 3D Cities content, into the HMI design. The team also integrated samples of HERE's location data, such as routing and weather, and point-of-interest (POI) indications, such as gas stations and restaurants. Map content was styled and enhanced with location-specific animatable objects and Unity prefabs, and used for navigation and situational awareness views as well as location-based services (LBS).

With the custom Unity Editor featured in the demo, HERE's designers were able to use Unity with no prior experience. They used Unity for UI logic and all interactive elements, rendering, visual effects, and animations. Unity provided the flexibility to create UIs and 3D map visual configurations in multiple styles and interactively adjust them by time of day. Thanks to Unity, they could also continually test the performance of their design on their target display and make adjustments to rapidly iterate the design. This "what you see is what you get" (WYSIWYG) development shows how OEM teams can shorten design cycle times and greatly reduce development costs.

The future looks bright for HMI design and development when combining the awesome power and features of the Unity engine with HERE's rich automotive-grade location data and services.

Want more information? Get in touch with a Unity expert to explore bringing Unity and HERE into your HMI projects. Explore Unity for HMI.

>access_file_
1331|blog.unity.com

Unity 2020 LTS and Unity 2021.1 Tech Stream are now available

We know that creators work differently. That's why we offer two release versions, Tech Stream and Long-Term Support (LTS), so you can choose the solution that better fits your needs.

The Tech Stream release gives you access to the latest in-progress features, so you can explore new capabilities for your project and pressure-test components as we continue to build them. The LTS version of Unity prioritizes proven stability. This release rolls up mature builds of the features and improvements made over the previous calendar year's Tech Streams into a single install with two years of support. Release versions offer you greater control over how you create and deploy real-time 3D experiences – and that means more freedom in how you build imaginative experiences for your players and greater confidence that we have your back.

Our role is to power and support your work, so you can be successful. In 2020, we doubled down on our commitment to delivering a high-quality creative environment that facilitates your productivity while improving performance for both your workflows and your players' experience. Over the last 12 months, this work has allowed us to deliver a more robust and stable Unity Editor for your creative foundation, regardless of which release stream you use. We raised our quality bar by focusing on two annual Tech Stream releases rather than our previous three, extending the stabilization period for even the newest features. We also changed the package lifecycle for how we label the readiness of individual features, to better clarify what you can expect in terms of packages' stability and functionality.

The work on our Data-Oriented Technology Stack (DOTS) continues, with the Burst Compiler and C# Job System available in both the 2020 LTS and 2021.1 Tech Stream for you to use in any project. These represent two of the three core DOTS features, the other being Entities. Entities represents a revolution in highly performant game creation. But because of its potential, and because our dedication to quality tooling continues to grow, we're ensuring we stick to the highest standards for quality and stability. That way, we know it will meet your needs as modern game creators and that Entities will be accessible, not just functional. To stay updated, check out the expanded DOTS forums.

We remain committed to delivering quality, productivity, and performance for games and teams of all shapes and sizes. Let's dig in to see what that means for you:

- Quality: Stable workflows for you, beautiful experiences for your players
- Productivity: Efficient iteration and workflows for your team
- Performance: More horsepower to seamlessly create and deliver world-class game experiences

Optimized workflows help you reduce the time it takes to bring your projects from concept to final render as you build anything from the lightest 2D game to an immersive 3D world.
Read on for a brief overview of what’s included in the releases, and check out the release web page linked below, or the 2020 LTS release notes and 2021.1 Tech Stream release notes, for more detail.

Take advantage of optimized workflows to create cinematic content and gameplay sequences that engage players from the very first pixels.

2020 LTS
With improvements across the Universal Render Pipeline (URP), Shader Graph, VFX Graph, Cinemachine, Animation Rigging and more, the 2020 LTS includes workflow enhancements that help you stay in the flow with fewer interruptions.

2021.1 Tech Stream
By integrating visual scripting into the Unity Editor and continuing to invest in the URP, High Definition Render Pipeline (HDRP) and 2D tools, the 2021.1 Tech Stream offers enhanced features and optimized workflows for stunning results on the widest variety of platforms.

Learn more about what’s included in each release to help you create stunning visuals.

If you prefer to get under the hood, we’ve got something for you, too. Our newest releases offer you greater freedom as you create optimized, high-performing games with an enhanced coding experience, improvements in testing, building and profiling, and a continued focus on making sure it’s stable so you can create with confidence.

2020 LTS
Projects’ growth in complexity can impact productivity since the build process needs to account for more code and greater functionality. In 2020, we overhauled many subsystems within the Unity compilation engine to optimize build times. A new configuration setting means that you can get into and out of Play Mode more quickly; C# 8 support gives you greater efficiency when writing code, and our Roslyn Analyzer integration monitors your code quality and standards. Safe Mode and our profiling tools will help you to code faster while building higher performance into your game.

2021.1 Tech Stream
We now integrate and ship the latest graphics packages with the core Unity engine. This shift will simplify your efforts to harness cutting-edge graphics capabilities while ensuring you’re always working with the latest verified code, and it includes the most recent versions of the URP, HDRP, Shader Graph and VFX Graph. And of course, we’ve also improved your coding experience across the board with code coverage, better support for profiling and simulation, and even more compilation improvements.

Learn more about what’s included in the releases for optimized coding workflows.

Getting – and keeping – your game in players’ hands is crucial to your success. Our network of deep industry partnerships helps you to build your experience once and deploy it everywhere. This makes it possible for you to stay ahead of the curve in a fast-changing industry and take your game to the latest platforms, even on Day One. With a special emphasis on AR, VR and mobile development, the 2020 LTS and 2021.1 Tech Stream releases boast new features and enhancements to make this process even smoother.

2020 LTS
The 2020 LTS release includes support for OpenXR and the Oculus Quest 2 to help maximize your reach on a wide range of AR and VR devices. Additionally, AR Foundation 4.0 supports ARKit scene mesh reconstruction using LiDAR sensors on the iPhone 12 Pro and iPad Pro, bringing a new level of realism to your AR experiences as they blend seamlessly with the real world.
Lastly, Adaptive Performance 2.0 comes with new sample projects to showcase its features.

2021.1 Tech Stream
The XR Interaction Toolkit (Pre-release) allows you to add interactivity to your AR and VR experiences without having to code the interactions from scratch. The toolkit now includes major bug fixes and workflow improvements, additional interactions and new samples that demonstrate all the toolkit’s interactions. AR Foundation 4.1 also provides access to the latest AR features from ARKit and ARCore, including depth textures and automatic occlusion.

Starting in 2021.1, we are changing the way we publish, show and label packages in the Package Manager. This new system is meant to provide clearer guidance regarding a package’s stability, anticipated support level, expected release date and Unity’s long-term commitment to the package. This new lifecycle is the result of many rounds of feedback with the community and promises to clarify and improve the Package Manager experience. You can read the details here. Creators who want to discover and try Pre-release and Experimental packages can continue to do so by visiting a new dedicated forum space.

Curious about the new netcode solution that you heard about at the GDC Showcase? We’re pleased to announce that it is now live as an Experimental package on GitHub. Access the resources you need to start exploring networked multiplayer games on our new documentation site.

Join our Developer Advocate team as they provide a hands-on overview of some of the key features included in the 2020 LTS and 2021.1 Tech Stream releases. The 2020 LTS webinar takes place on April 20, and the 2021.1 Tech Stream webinar takes place on April 22. Registration is open.

We believe the world is a better place with more creators in it, so we’re constantly striving to build a better platform for you. That means ensuring that you have a strong foundation and powerful tools for anything you want to make. Learn more about the releases here. You can provide feedback on the new features and updates in our forums. We invite you to share your input on the 2020 LTS release here and on the 2021.1 Tech Stream here. As always, you can also find the complete list of updates in our 2020 LTS release notes and 2021.1 Tech Stream release notes.

>access_file_
1332|blog.unity.com

A new Package Manager experience in Unity 2021.1

The Package Manager is a modular system and API designed to speed up your workflows and optimize the size of your runtime by offering Unity-developed features as optional packages. This move away from a monolithic architecture, where every feature used to be embedded as part of the core editor, gives you the power to customize your development environment to be purposeful and performant. We’ve continued to invest in improving the Package Manager experience over the years, and we want to share upcoming changes you’ll see in the 2021.1 Tech Stream release.

We heard you loud and clear about how lack of clarity around package readiness, supported packages and quality concerns affected your workflows. By ensuring that packages are clearly labeled and categorized, you will be able to tell quickly which packages are supported and which are pre-release. Hopefully these changes give insight into how a package moves through each phase, what to expect in each one, and what it takes for a package to reach the gold standard in Unity. What does this mean?

Packages are getting a new categorization in the Editor, labeled either "Released" or "Pre-release". Experimental packages (the third category) will not show up unless they have been manually installed. At a glance, visual icons will tell you which package is the optimal choice for your project.

The Experimental phase contains exploratory and cutting-edge packages. They have not been tested for production, and they are not necessarily part of any roadmap. While individuals or teams might offer direct support to users for Experimental packages, they are not maintained by official Unity support channels. Experimental packages can be deprecated without being released. Given the potential risk associated with using Experimental packages in production projects, they will not be discoverable in the Editor’s Package Manager; however, you can find information about ongoing Experimental packages in the forum and through Unity beta communications, where you can also find instructions on how to add them to your test projects and discuss them with the developers.

The following summarizes each phase of this new lifecycle.

Pre-release packages are actively being developed and need feedback from early adopters. It is expected that those packages will stabilize and reach the Released phase by the next Unity LTS (long-term support) release of the year. Pre-release packages are officially supported by Unity and are part of the roadmap. To discover these packages in the Package Manager, you need to enable this option in the Project Settings. Information about these packages will also be shared in Unity beta communications. "Experimental" and "Pre-release" packages will no longer be discoverable by default in the Package Manager; however, they will still be available to you. Your feedback on these early versions of packages is invaluable to us and is a critical part of the package lifecycle process. We’re building out a dedicated forum and a webpage to keep you up to date and communicate all the latest available Experimental and Pre-release packages, and we will share details in our beta communications as well.

Released packages are the equivalent of the Verified phase of the previous lifecycle. They constitute the default discovery experience in the Package Manager window, ensuring that all packages discovered in the Package Manager by default are fully validated by Unity and safe to use in production projects.
What this means is that releases are tested and validated, and you know they meet our team’s rigorous release standards. Create with confidence. You can find information about specific Released and Release Candidate packages in the Unity Manual.

What happened to packages that were previously available as Preview?
All Preview packages will be classified as Experimental in Unity 2021.1. Teams at Unity will promote these packages to the Pre-release state when they are on track to become Released by the next LTS release of Unity and they are heading towards a set of stable APIs. Packages can remain in the Experimental stage for an indefinite amount of time. They are unsupported and might not ever be Released.

How can Experimental packages be discovered or tested?
You will be able to learn about Experimental packages through beta-related communications and in the forum. These packages are high risk and intended only for testing purposes. They will typically be announced for product feedback or specific testing needs.

How will deprecated packages be announced?
Information about Experimental packages will be shared in a package’s dedicated forum thread. Released packages that are being deprecated will be announced publicly as part of the general communications.

Which Unity versions use which lifecycle?
Unity 2018 to Unity 2020: Lifecycle v1 (Preview, Verified phases)
Unity 2021 and newer: Lifecycle v2 (Experimental, Pre-release, Released phases)

How can newer versions of Released packages be discovered and tested?
Newer versions of the packages will be released in the Pre-release stage, which is discoverable if you have enabled this option in Project Settings.

Where can I find non-released packages that are now not visible?
If the packages don’t get to the Pre-release state, we can’t guarantee their availability or support. We recommend that you visit their forum threads to learn about the status so we can help you find an answer.

—We look forward to hearing from you about the new Package Manager experience and appreciate all the feedback so far that has enabled us to bring this to you! For any questions or comments, head over to the forums!
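One practical note: because Experimental packages no longer appear in the Package Manager window, you add them by name, either by editing Packages/manifest.json or through the Package Manager scripting API. A small Editor-script sketch (the package identifier below is a placeholder, not a real package):

using UnityEditor;
using UnityEditor.PackageManager;
using UnityEditor.PackageManager.Requests;
using UnityEngine;

// Sketch: installs a package that is not discoverable in the Package Manager UI.
// "com.unity.example.experimental@1.0.0-exp.1" is a placeholder identifier.
public static class ExperimentalPackageInstaller
{
    static AddRequest request;

    [MenuItem("Tools/Add Experimental Package")]
    static void Add()
    {
        request = Client.Add("com.unity.example.experimental@1.0.0-exp.1");
        EditorApplication.update += Progress;
    }

    static void Progress()
    {
        if (!request.IsCompleted) return;
        EditorApplication.update -= Progress;

        Debug.Log(request.Status == StatusCode.Success
            ? $"Installed {request.Result.packageId}"
            : $"Install failed: {request.Error.message}");
    }
}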

>access_file_
1334|blog.unity.com

Get to know Dragon Crashers – our latest 2D sample project

Back in Unity’s 2019 release cycle, we realized our vision of empowering 2D artists and creators with a complete suite of 2D tools. The release of our 2D packages included character skeletal animation and Inverse Kinematics (IK), level design with tilemaps, spline shapes and pixel art tools. Check out our 2D website for an overview.Our 2D team has since optimized those workflows and refined the graphics technology: the 2D Renderer inside of the Universal Render Pipeline. There’s no better way to put these tools to the test and see how they can make your 2D visuals shine than by exploring a new sample project. Dragon Crashers is now available for free on the Asset Store.Download from Asset StoreDragon Crashers is an official sample project made in Unity 2020.2 that showcases Unity’s native suite of 2D tools and graphics technology. The gameplay is a vertical slice of a side-scrolling Idle RPG, popular on mobile platforms today.While the party of heroes auto-attack their enemies, you can trigger special attacks simply by touching/clicking on the different avatars.The sample project has been tested on desktop, mobile and web platforms.In addition to the information shared in this article, please join us for our online Dragon Crashers overview webinar on April 14 at 11:00 am EST (5:00 pm CET) for key tips and a live walkthrough from our global content developer, Andy Touch. Take me to the registration section.Make sure you have Unity 2020.2 or 2020 LTS to get the project on the Asset Store. First, start a new 2D project, then go to Package Manager > My Assets to import Dragon Crashers. You will see some Project Settings pop-up messages; accept them all.If you encounter any issues, please let us know in the 2D dedicated Dragon Crashers forum.Once the project is imported, you should see a new option in the menu bar that offers shortcuts to the project’s scenes. Select Load Game Menu and press Play to try it.We recommend using high-definition display settings in the game view, such as the full HD (1920×1080) setting or 4K UHD (3840×2160).Our party of heroes and base enemies are diverse, and decked out with different outfits, accessories and variations. However, they are all bipeds that have a similar build.To avoid animating every single one of them with their respective 2D IK constraints, we created a mannequin. The animator used this mannequin, while the character artist created unique skins and accessories for the characters.The Mannequin Prefab (PV_Character_Base_Bipedal.prefab) was used to create Prefab variants for each character. The only difference in those variants is the Sprite Library Asset, where we swap the visual appearance of the biped character.All of the character Sprite Library Assets have the same Category and Label to define the body parts and their variants. For example, the knight and skeleton enemies both have a category named “mouth,” with sprite variants labeled as “mouth open,” “mouth teeth” and “mouth normal” used during animation.To apply the animations to all characters, ensure that each character’s visual asset or PSB has a similar rig. In other words, they must have bones named in the same way, attached to parts of the body of the same Category and Label. To save time, you can copy the mannequin’s skeleton (or reference character bones), and paste it to the different characters. 
This option is available in the Skinning Editor, part of the Sprite Editor. The Prefabs include features that make the characters more lively, like Inverse Kinematics and Normal and Mask maps for improved integration in the 2D lit environment.

There’s no need to set your level design in stone early in the prototyping process. The worldbuilding 2D tools included in Unity enable you to have fun designing levels, and then easily iterate on them. The Tilemap Editor and Sprite Shape help automate tasks, such as setting up colliders to conform to object or terrain shapes, whereas the Scene view is your playground to make the game more exciting and aesthetically pleasing. The most important aspect is to have all your “brushes” ready in the Tile Palette, which can contain repeatable tiles, animated tiles, isometric or hexagonal tiles, or even GameObjects, and render them all performantly with just one renderer (the Tilemap Renderer). For all the elements in the grid, refer to the Palette_GroundAndWalls Tile Palette.

Another often overlooked feature that can be useful in level design is Sprite Draw Mode. Tiled sprites used for background layers can cover a large scene area with a small sprite to create a nice parallax effect.

A Tilemap grid might not be the most practical solution for adding more organic-looking objects or spline-based elements to your project. Instead, we recommend a spline-based tool such as 2D Sprite Shape, which draws much like vector drawing software. Use it for background props or as part of the gameplay. The SpriteShape Renderer enables you to efficiently render many sprites attached to the spline or border of your shapes. See the Prefab P_MineCartTracks_A to observe how the tracks are drawn with the spline, while the supporting structure artwork is made from the fill texture of the Sprite Shape Profile. The Prefabs P_Bridge and P_MineCartTracks_B are other examples that demonstrate how a Sprite Shape border doesn’t need to be a simple line, but can represent more elaborate elements – in this case, a bridge and a rail track.

With the 2D Renderer, use the Sprite-Lit shader for advanced lighting effects. Take full advantage of these effects by giving your sprites Secondary Textures. Normal maps can be added in the Sprite Editor. These RGB images represent the XYZ direction that each pixel is facing and tell the 2D lights how much to affect them. Mask maps can also be harnessed by the 2D Renderer data asset (RenderData_2D.asset in the project), part of the Light Blend Styles feature. The Light Blend Style called “Fresnel” is used to accentuate the rim light around characters and sprites. To achieve the fresnel effect, for instance, use the R channel information from the Mask maps. In this particular project, we only have one Light Blend Style, so the three channels – R, G and B – look the same (black and white). This makes the process of creating Mask maps more convenient.

Shader Graph is frequently used in the demo to animate props without taxing the CPU. You can observe elements like wind moving the spiderwebs (P_SpiderWeb_Blur prefab), crystals glowing (P_Crystals_Cluster), as well as the lava flowing animation (P_Lava_Flowing_Vertical), which leverages a flow map texture to control the direction of the main texture’s UV coordinates. The flow texture uses the colors red and green to indicate the XY direction that pixels follow in every frame.
Open the SubGraph FlowMap to learn how to achieve this effect. In the same dragon battle scene, there is another shader animation technique called “animated alpha clipping,” which creates smooth animation from a single texture. It works by showing a specific range of pixels in each frame based on their alpha values. Visual effects like the lava splatter (ParticleSystem_Splatters) or the hit animation (P_VFX_HitEffect) were made using this technique with Shader Graph.

The art style of the demo was created with 2D lights – and their many possibilities – in mind. As you can see, sprites enhanced by the handcrafted Normal maps and Mask maps are relatively flat. Some sprites, like the tilemap floor, are grayscale, meaning they are colored using the Color option from the Tilemap Renderer combined with the lit areas from the environment. Real-time 2D lights allow you to spend more time in the final game scene in the Unity Editor. Observing the direct results while composing your scene with lights and objects, or even being able to play the level as you go, allows you to better establish the desired mood and atmosphere for your game. Additionally, you can increase the immersion by animating those elements. For example, the P_Lantern_HangingFromPost Prefab shows how to attach a light to an animated object, or give the witch character a staff with Sprite-Lit particles. Another benefit of using 2D lights in your project is the ability to reuse elements. Environments and props can look very different depending on the lighting conditions, which allows you to recreate many different levels with the same sprites.

For creating, structuring, managing and iterating on the gameplay, our demo project used a combination of Scriptable Objects and Prefabs. All seven characters, regardless of whether they are heroes or villains, inherit their core structure from the base Unit Prefab and use the same behavior code. To differentiate values between characters, we used Scriptable Objects for different ‘blocks’ of unit-based values. Hard-coded values make it difficult for non-programmers to balance the game and cause gameplay to be rigid, so we set up unit values such as ‘Attack Damage Amount,’ ‘Ability Cooldown Time in Seconds’ and ‘Unit Health’ in Scriptable Objects so that anyone working on the project can make quick adjustments. Those value changes are then handled dynamically by the gameplay code.

Each Unit Prefab has a core ‘UnitController’ script that acts as the unit’s ‘brain’ and handles internal-prefab script references and behavior sequencing. When the Dragon is attacked, for instance, the ‘UnitController’ executes related behavior events, such as transitioning to a flinch animation, playing a roar sound effect and reducing the Dragon’s health amount. These core behaviors adhere to the concept of encapsulation and only handle their own respective purposes and tasks. For example, UnitHealthBehaviour only handles health logic, such as setting and removing health values of a unit and reporting important event callbacks like ‘HealthChanged’ or ‘HealthIsZero.’ The ‘UnitController’, however, sends values to ‘UnitHealthBehaviour’ based on the scenarios and actions that occur during gameplay.

In some instances, systems external to Units need to be updated when a specific event happens. Delegates are used for these setups. For example, when the Witch receives damage from an attack and ‘UnitHealthBehaviour’ raises the ‘HealthChanged(int healthAmount)’ event, the externally subscribed ‘UIUnitHealthBehaviour’ can update the Witch’s health bar according to that value. Using delegates allows us to isolate and test areas without dependencies on other systems; for example, we tested the performance of the pop-up Damage Display Number System in a separate scene, without needing to simulate the battle gameplay.
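The project’s own scripts ship with the Asset Store package; as a rough sketch of the pattern just described (type and field names here are illustrative rather than the exact shipped code), the unit values live in a ScriptableObject and health changes are raised as events that UI code can subscribe to:

using System;
using UnityEngine;

// Designer-editable unit values, kept out of code so non-programmers can balance the game.
[CreateAssetMenu(menuName = "DragonCrashers/Unit Stats")]
public class UnitStatsData : ScriptableObject
{
    public int unitHealth = 100;
    public int attackDamageAmount = 10;
    public float abilityCooldownTimeInSeconds = 5f;
}

// Owns health logic only; other systems react through the events.
public class UnitHealthBehaviour : MonoBehaviour
{
    public UnitStatsData stats;

    public event Action<int> HealthChanged;
    public event Action HealthIsZero;

    int currentHealth;

    void Awake() => currentHealth = stats.unitHealth;

    // Called by the UnitController when gameplay events (attacks, heals) occur.
    public void ChangeHealth(int delta)
    {
        currentHealth = Mathf.Max(0, currentHealth + delta);
        HealthChanged?.Invoke(currentHealth); // e.g., a UI health bar updates here
        if (currentHealth == 0) HealthIsZero?.Invoke();
    }
}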
Unity’s Timeline feature was used in two areas: linear cutscenes and each Unit’s ability sequences.

The linear cutscenes take place at the beginning and end of a battle. They handle sequencing for a variety of areas, such as camera transitions (using Cinemachine), character animations (using Animator), audio clips, particle effects and UI animations. Each track was bound to the relevant scene instance. A Timeline Signal was embedded at the end of the intro cinematic to invoke a Unity Event when the cutscene is finished. This ‘signals’ when to begin the battle gameplay logic.

Timeline was also used to create prefab-embedded ability sequences for each unit. This enables each Unit to have its own special abilities tied to its character, similar to champion abilities in a MOBA game. Each unit contains two ability timelines: one ‘basic’ auto-attack and one ‘special’ manually activated attack. The ‘UnitAbilitiesBehaviour’ script handles the logic for both ability timelines in terms of the ability currently playing, the ability sequence queue and starting/stopping ability cooldowns (depending on high-level gameplay logic, like whether the intro cutscene is playing, or if a unit has died). Ability Timeline Tracks are bound to local systems of the Unit Prefab, including the Character’s Animator for attack animations and Particle Systems for VFX. This allowed both the programmer and artist to create, play back and iterate on a Unit’s specific ability in isolation using Prefab Editing Mode before applying the changes to each instance of the Unit Prefab in the game.

Timeline Signals were used whenever an ability needed to apply a value modifier to a Unit target’s health. When the Knight swings his sword, for example, we want the damage applied as soon as the sword reaches a critical point in the animation, rather than at the beginning or the end of the sword swing. As the timing of animations and VFX changed during development, the artist repositioned the ‘Ability Happened’ signal to the new desired point of the sequence in a very quick workflow, without relying on the programmer to change any values in the code. This also allowed us to add multiple ‘Ability Happened’ signals in a continuous attack, such as the dragon breathing fire at the group of heroes.

Senior global content developer Andy Touch hosted a webinar running through an in-editor demonstration of the Character Pipeline Workflow that was used to create the project. This webinar unpacked how to:
Export a 2D character from Photoshop into Unity
Set up a character’s sprite rig
Set up IK for a character’s limbs
Use Sprite Swapping for different character visuals
Use Nested Prefabs for weapon attachments
Apply Sprite Normal and Mask maps for 2D lighting styles
Sequence character abilities using Timeline

As a token of appreciation for exploring Dragon Crashers with us, we would like to share a set of wallpapers, Zoom backgrounds and other visuals to inspire you throughout your 2D game dev journey.
Get the Dragon Crashers backgrounds here. For those starting a new 2D project, there are already some great guides on the blog and forums. If you’re new to the tools, we recommend checking the 2D web page, the 2D Tips Lightning Round blog and its presentation for useful tips. For even more, check out a deep dive into our skeletal animation system here, or our previous project, Lost Crypt, and its corresponding webinar. As always, we also recommend perusing our latest docs and, of course, the 2D Renderer section for more information on specific features or APIs.
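One small addendum on the Sprite Library workflow described earlier: at runtime, a body part’s sprite can be swapped by Category and Label through the SpriteResolver component from the 2D Animation package. A minimal sketch (the component wiring and method name on the caller are illustrative):

using UnityEngine;
using UnityEngine.U2D.Animation;

// Sketch: swapping a body part's visual via the Sprite Library categories and labels
// described above ("mouth" category with "mouth open", "mouth teeth", "mouth normal" labels).
public class MouthSwapper : MonoBehaviour
{
    [SerializeField] SpriteResolver mouthResolver; // SpriteResolver on the character's mouth sprite

    public void OpenMouth()
    {
        mouthResolver.SetCategoryAndLabel("mouth", "mouth open");
    }
}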

>access_file_
1335|blog.unity.com

Mobile ad mediation: What is it and how does it help app developers?

Mobile ad mediation platforms

If you’re looking to monetize your game through ads, you’ve likely come across the term "mobile ad mediation." By the end of this article, you’ll understand everything you need to know about ad mediation and how it works.

What is mobile ad mediation?
Most mobile game developers today use ads, or a combination of ads and in-app purchases, to monetize their content and turn their games into profitable businesses. To monetize with ads, you need to work with ad networks, which connect you with advertisers looking to acquire new users. It’s best practice to work with multiple ad networks - more ad networks means more opportunities to fill your ad requests with the right ads, which means more opportunities to make money. However, each ad network requires its own SDK integration, and too many SDKs in your app can slow it down and create a lot of manual overhead to maintain. You would also need to find a way to evaluate the performance of each network’s ads in real time, and decide which one of them will be chosen to fill your available ad slot. Scaling this process is very tricky.

Mediation platforms centralize multiple ad networks in one platform and manage your monetization operations through a single dashboard. You can then turn different ad networks on and off inside your mediation dashboard at the click of a button. But that’s not all they do - the best mediation platforms today also offer a variety of sophisticated optimization tools to maximize your revenue, such as in-app bidding and A/B testing. Check out the top features every mediation solution should have.

Who uses mobile ad mediation?
The short answer: app developers. The long answer: app developers who are looking to leverage ad inventory from several ad networks so they can better optimize the revenue generated from their app.

How does mobile ad mediation work?
The top mediation platforms today leverage in-app bidding technology to manage the monetization process. The in-app bidding ad serving model works like an auction: it asks all the ad networks at the same time how much they’re willing to pay to serve the ad. The ad network that bids the highest wins the auction and gets to serve the ad. In-app bidding setups today are often combined with traditional waterfalls. Such hybrid systems are beneficial for developers because they provide access to both network bidders and high-quality networks that only operate waterfalls. In essence, this maximizes the number of ad networks competing to fill your ad requests, in turn maximizing the revenue you make - so make sure you can access strong hybrid setups through your mediation platform.

Why is mobile ad mediation important for app developers?
Partnering with the right mediation platform will transform your ability to create a successful games business. Below, we explain the key advantages you can leverage with a great mediation platform.

1. Maximize ad revenue and eCPMs
Through in-app bidding technology, ad mediation platforms enable you to maximize your ad revenue - up to 3x in fact.
There are three aspects to this - first, stronger competition for every impression means ad networks will bid higher than they usually would in order to outbid their competitors. Second, because all ad networks have the opportunity to bid to fill your ad request - not just the networks at the top of the waterfall - you never leave potential money on the table. Finally, with in-app bidding, bids for impressions are received in real time, which is more accurate than the flat eCPMs or historical data used in waterfalls. This ensures you, as the developer, never undersell your impressions.

2. Maximize fill rate
Fill rate is the number of ads the ad network serves (impressions) compared to the number of ads you request (requests) - serving 900 impressions against 1,000 requests, for example, is a 90% fill rate. Just because your app requests an ad from a given network doesn’t necessarily mean it’s going to be served. Perhaps that ad network doesn’t have any interstitial ads to show to a user in South Africa at that moment in time. But if you connect to multiple ad networks through a mobile ad mediation platform, there’s a much higher chance that one of the networks will have an ad available to serve to your users - no matter where they’re located. That’s because each ad network tends to have a stronger presence in certain regions, so leveraging several mobile ad networks makes sure you cover all your bases. Fill rate is important because if your fill rate is low, you’re not getting the most out of your app’s ad inventory and are leaving potential revenue on the table.

3. Reduce SDK bloat
There is such a thing as too many SDKs. Manually managing four or five different ad network SDKs can slow down your app and affect performance. The more SDKs in your app, the more unpredictable and inconsistent the app’s user experience will be. Instead, a mediation solution requires just one SDK, aggregating all those ad networks inside it. This saves you coding time and minimizes the SDK bloat that’s so common today.

4. Save time through automation
Not only does mediation save you time integrating multiple SDKs into your app, but it also saves you time looking after and manually managing your ad monetization strategy. Of course, someone should always be keeping an eye on the mediation platform - but if you don’t have the resources to hire a monetization manager, the mediation platform can do all the work for you. Once you’ve set up your bidder networks, there’s little to no technical or manual labor involved. Bottom line - you can focus on game development and improving your product, and let the mediation take care of the nitty-gritty monetization part.

5. Access additional solutions that fuel revenue growth
The leading mediation platforms today offer a variety of optimization tools that you can use to fuel your game’s growth. One such tool is A/B testing. ironSource mediation, for example, offers three different A/B testing tools: the quick bidding test to easily determine if bidding is right for your game, the quick A/B testing tool for optimizing waterfalls, and a robust A/B testing tool for testing changes like capping and pacing of ad placements and new ad placements. An equally important feature of mediation platforms is easy and accurate reporting: as a developer, you want to make sure you’re staying on top of your app’s performance and any changes to your KPIs. The strongest mediation platforms, such as ironSource, make it convenient to navigate through your reporting dashboard and see all the data you need within a few clicks.
Understanding what’s working and what needs improvement is the first step on your journey to maximizing revenue and growing your game over the long-term.Now you know exactly what ad mediation platforms do, and why they’re so important for growing your game into a profitable, long-term business. The mobile gaming industry is so competitive and dynamic that not taking advantage of the growth products offered by the leading mediation platforms will leave you playing catch up.
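For readers who want the mechanics spelled out, here is a toy model of the auction described above. It is not any real mediation SDK's API, just an illustration of the "highest real-time bid wins" logic, with a hypothetical floor price as the waterfall fallback:

using System.Collections.Generic;
using System.Linq;

// Toy model of an in-app bidding auction - not a real mediation SDK API.
public static class BiddingAuctionDemo
{
    public readonly struct Bid
    {
        public Bid(string network, double ecpm) { Network = network; Ecpm = ecpm; }
        public string Network { get; }
        public double Ecpm { get; }   // effective cost per mille, in USD
    }

    // Every network is asked for a real-time bid at once; the highest eCPM wins,
    // as long as it beats the floor price configured for the waterfall fallback.
    public static Bid? RunAuction(IEnumerable<Bid> bids, double floorEcpm)
    {
        var best = bids.OrderByDescending(b => b.Ecpm).FirstOrDefault();
        return best.Network != null && best.Ecpm >= floorEcpm ? best : (Bid?)null;
    }
}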

>access_file_
1336|blog.unity.com

Tackling profiling for mobile games with Unity and Arm

Learn how to take on mobile performance issues with profiling tools from Unity and Arm. Go in depth on how to profile with Unity, how to optimize performance drop-offs, and tips and tricks for getting the most out of your game assets.

In this blog, we examine how to identify performance problems in a mobile game through the use of profiling tools from Unity and Arm. We also introduce best practices for optimizing mobile game content. In order to identify performance problems in your game, you should first test it on a range of different devices. The best way to do this is to capture a performance profile on a real device. Tools such as the Unity Profiler and Frame Debugger can provide you with great insight into where elements of your game are taking their resources. Additionally, tools like Arm Mobile Studio enable you to capture performance counter activity data from the device, so you can see exactly how your game is using the CPU and GPU resources. While the device we used has a Mali GPU, the concepts introduced here also apply to other mobile GPUs.

The game we are testing is an action RPG, where the player must fight waves of incoming enemy NPCs with melee and spell attacks. This type of game can quickly become GPU bound on a mobile device, with increasing numbers of foes on screen as well as multiple particle and post-processing visual effects.

We ran the game through the Unity Profiler to identify any slowdown in performance. We found a few high-priority suspects: post-processing, the fixed Timestep, and instantiation spikes.

The post-processing effects were a central cause of the game’s poor CPU performance. Of all the post-processing effects, the bloom pass, which makes bright areas in the scene glow, was the most taxing. In the screenshot above, you can see that the Render Camera is taking a huge amount of time and crosses the frame boundary. The main thread then waits until the rendering commands are complete before preparing the next frame. Let’s look at the Unity Frame Debugger to figure out what is going on.

The first thing to notice in the Frame Debugger is that the game is being rendered at the device’s full screen resolution. For an average mobile device, this puts undue pressure on the device’s GPU, given the complexity of the content. Reducing the resolution to something more reasonable like 1080p or even 720p would significantly reduce the costs of rendering the game, especially the post-processing effects. The next point of observation is that the bloom effect occurs in 25 draw calls for the bloom pyramid. Each draw call represents a target buffer whose size starts at half the full-screen device resolution and is then halved with every iteration. Reducing the initial rendering resolution is one way to reduce the potential number of iterations. Another alternative would be to modify the bloom effect source code to reduce the number of iterations taking place and impose some sensible limit. However, in this case, it would be better to disable the post-processing effects for now, due to the considerable amount of time it takes to handle them - at least until the rest of the game can be made to run smoothly at 30 frames per second.

Another improvement for the project would be to reduce the frequency of the fixed Timestep interval. We can see that it is currently short enough to be called multiple times a frame; by default, Unity sets this to 0.02 seconds, or 50 Hz. You can try a fixed Timestep value of 0.04 for mobile titles aiming at 30 fps. The reason is that at 0.0333, which would exactly match 30 fps, there is a chance that one frame spikes in time and you end up with two fixed updates in the next frame. That frame then takes longer still, and you can never break the cycle of a slightly longer frame. The user can also set the maximum allowed timestep to prevent catch-up from taking more than the desired amount of time.
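Both of these values live under Edit > Project Settings > Time, and they can also be set from a script. A minimal sketch, assuming a 30 fps target (the maximum timestep value here is just an illustrative choice):

using UnityEngine;

public class MobileTimeSettings : MonoBehaviour
{
    void Awake()
    {
        // FixedUpdate (physics, Cinemachine, etc.) now ticks on a 0.04 s interval, i.e. 25 times per second.
        Time.fixedDeltaTime = 0.04f;

        // Cap catch-up so a single slow frame can't trigger a cascade of extra fixed updates.
        Time.maximumDeltaTime = 0.1f; // illustrative value
    }
}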
This Timestep duration affects scripts using the FixedUpdate function and any Unity internal systems that update on the fixed update interval, for example, physics and animation. For the purposes of this project, only physics and Cinemachine contributed heavily to the time taken, at around 3 ms per call, where a call means a full update of the system (and being called an additional five times meant that this could add up to 15 ms of wasted time per frame). This occurs due to the slow post-processing effects. Turning them off reduces the time spent; however, the earlier recommendation of reducing the fixed Timestep frequency to avoid unnecessary work for the CPU still stands.

During profiling, spikes could be seen in the frame time. Tracking them down in the CPU profiler hierarchy view shows that they stem from the instantiation of NPCs. The most common solution for this is to instantiate the characters ahead of time and keep them in an idle state, in some sort of object pool. These NPCs can then be grabbed from the pool at no instantiation cost. If more are needed, the pool can be expanded as required. The same issue is also seen when abilities are used, as they also instantiate objects. Object pooling is the easiest way to solve these problems. It may affect loading times, but it allows for a much smoother frame rate at runtime, which is the lesser of two evils in this case.
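A minimal pool for this might look like the sketch below (the prefab reference and pool size are placeholders; the game's actual implementation may differ):

using System.Collections.Generic;
using UnityEngine;

// Minimal object pool: NPCs are instantiated up front and reused instead of being
// created and destroyed mid-wave, which avoids instantiation spikes during gameplay.
public class NpcPool : MonoBehaviour
{
    [SerializeField] GameObject npcPrefab;   // placeholder reference
    [SerializeField] int initialSize = 20;

    readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        for (int i = 0; i < initialSize; i++)
            pool.Enqueue(CreateInactive());
    }

    GameObject CreateInactive()
    {
        var npc = Instantiate(npcPrefab, transform);
        npc.SetActive(false);
        return npc;
    }

    public GameObject Get(Vector3 position)
    {
        // Expand the pool if a wave needs more NPCs than were pre-created.
        var npc = pool.Count > 0 ? pool.Dequeue() : CreateInactive();
        npc.transform.position = position;
        npc.SetActive(true);
        return npc;
    }

    public void Release(GameObject npc)
    {
        npc.SetActive(false);
        pool.Enqueue(npc);
    }
}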
We’ve also used Arm Mobile Studio to gain more insight into the game’s behavior. With the tools in Mobile Studio, we can get performance counter activity data for the CPU and GPU, so we can see exactly how the game is using the device’s resources. You can download Arm Mobile Studio for free here. There are four tools included:
Performance Advisor – to generate easy-to-read reports and get optimization advice
Streamline – a comprehensive performance profiler to capture all the counter activity
Mali Offline Compiler – to check how a shader program would perform on a Mali GPU
Graphics Analyzer – to debug graphics API calls and analyze how content was rendered

Performance Advisor provides us with a quick summary of game performance, and is intended to be used as a regular health check. It’s quick to generate a report, particularly if you build it into a continuous integration workflow, alongside your nightly build system. During the first 2 minutes of the game, Performance Advisor tells us that we are only averaging 17 frames per second. The green section at the start of the frame rate analysis chart indicates where the game is loading; then the chart turns blue, indicating that the game has become fragment bound, and it stays that way throughout. This means that the GPU in the device is struggling to process fragment workloads, which suggests that the game is either requesting too much work or not processing pixels efficiently.

As we’ve added region annotations to the game, the frame rate analysis chart shows our custom region names. Where the chart shows a marker labeled with ‘S,’ Performance Advisor has taken a screenshot of the game to help us identify what is happening on screen at that point. You can configure screen captures to be taken when the fps drops below a specified value. Here, because the fps stays low throughout, Performance Advisor takes a screenshot at the default interval of every 200 frames.

Take a look at the GPU cycles per frame chart, where we’ve added a budget of 28 million cycles per frame for this device. We’ve estimated that this is the maximum number of cycles that this device should be able to handle while still achieving a frame rate of 30 fps (roughly the GPU clock rate divided by the target frame rate; a GPU clocked at around 850 MHz, for example, has about 28 million cycles available per 33 ms frame). Here, we can see that the number of GPU cycles significantly exceeds this budget, and that the number of cycles increases over time.

Performance Advisor provides optimization advice when it finds a problem. If we look at the shader cycles per frame chart, we see that the number of execution engine cycles is high. Inside a Mali shader core, the execution engine is responsible for processing arithmetic operations. Performance Advisor has flagged this as a problem and advises us to reduce computation in shaders. There is a simple fix for this: you can reduce the precision of shader variables to mediump, rather than highp, with no noticeable change on-screen. This will significantly reduce shader cost. For information on how to do this, refer to Shader data types and precision in our documentation. Additionally, as we discovered earlier with Unity’s Frame Debugger, the game is currently rendering at the device’s full screen resolution. Any changes we make to reduce the game’s rendering resolution (to 1080p or 720p) will also reduce the fragment shading cost.

We had set a budget of 500,000 vertices per frame for this device. The budget is exceeded around 45 seconds in, and the number increases steadily over time. Looking at the primitives per frame chart, we notice that the total number of primitives being processed increases over time, even though the number of visible primitives stays relatively constant. In the first 2 minutes of the game, the only new objects that are created are the enemy NPCs, which then get destroyed in a blast of lightning by our hero. This suggests that when the enemies are destroyed, their geometry is still present, even though it is not visible.

There are several reasons why the GPU may not be able to handle the game’s demands, so we need to dig deeper with Arm’s profiling tool, Streamline. Streamline will tell us more about this heavy fragment workload, and by looking at the other counters, we can find clues on how to lighten the load. Looking at the same section of the game in Streamline, we can explore a range of charts that show the GPU counter activity for the different stages of geometry and pixel processing. This illustrates how the game’s content is processed by the GPU, and whether there is unnecessary processing. Mali-based GPUs take a tile-based approach to processing graphics workloads, where the screen space is split up into tiles, and each tile is processed to completion in order.
For each tile, geometry processing executes first, then the pixels are colored in during pixel processing. We already know that the GPU in the device is maxed out with fragment workloads, so we need to look for ways to reduce pressure on the pixel processing stage. One way to reduce the pixel processing load is to lower the complexity of the geometry that gets sent for pixel processing in the first place. Geometry that is completely off screen or backfacing is killed before pixel processing, but small triangles which only partially cover 2×2 pixel quads can erode fragment efficiency and have a high bandwidth cost per output pixel.

The Mali Geometry Usage and Mali Geometry Culling Rate charts in Streamline show how efficiently the GPU processes geometry. We can see the number of primitives being sent to the GPU, and how many of them are culled during geometry processing. Work that is culled at this stage won’t get passed through to pixel processing. This is good news, but we could organize the content more efficiently, so that non-visible primitives aren’t passed through at all. In the Mali Geometry Usage chart, we can see that 1.07 million primitives enter geometry processing (orange line) in the selected timeframe (about 0.05 seconds), but 700,000 primitives are culled at this stage (red line).

The Mali Geometry Culling Rate chart shows why they are culled. Around half are culled by the facing test (orange line), which is expected, as these are the backfacing triangles of our 3D objects. What is more concerning is that 31.9% of primitives are culled by the sample test (purple line) – ideally, this number should be less than 5%. The sample test indicates that these primitives were too small to be rasterized, failing to hit a single sample point, and are therefore considered invisible. This can happen when objects with complex meshes are positioned far away from the camera, and triangles in the mesh are too small to be visible. Higher numbers could indicate that the game object meshes are too complex for their position on screen.

This problem gets worse for primitives that are big enough to pass the sample test but still only cover a few pixels. These ‘microtriangles’ are passed through to pixel processing and are expensive to process. This is because, during fragment shading, triangles are rasterized into two-by-two pixel patches, called quads. Tiny triangles only hit a subset of the pixels inside a quad, yet the whole quad must be sent for processing. This means that the fragment shader will run with idle lanes in the hardware, making shader execution less efficient. To check whether we have a problem with microtriangles, we can use the Mali Core Workload Property chart in Streamline to monitor the efficiency of coverage. Ideally, the partial coverage rate should be less than 10%. We can see here that in some sections, the partial coverage rate (green line) is very high, over 70%. This value suggests that the content has a high density of microtriangles, which confirms the issue that was flagged earlier by the high sample culling rate.

Geometry that does end up on screen needs to be appropriately sized for its position. A complex piece of scenery that is far away does not need to be very detailed, as it does not contribute much to the scene. We could use Level of Detail (LOD) Meshes for objects that are further away from the camera, to reduce complexity and save processing power and DRAM bandwidth. Or, instead of using geometry, we could use textures and normal maps to build surface details for objects.
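For the LOD route, an LOD Group is usually authored directly on the object in the Editor, but the same setup can be expressed from script. A rough sketch (the renderer references and screen-height thresholds are illustrative):

using UnityEngine;

// Sketch: builds an LODGroup so a distant prop renders a cheaper mesh, or none at all.
public class PropLodSetup : MonoBehaviour
{
    [SerializeField] Renderer detailedMesh; // full-detail version of the prop
    [SerializeField] Renderer simpleMesh;   // a decimated stand-in for distant views

    void Awake()
    {
        var group = gameObject.AddComponent<LODGroup>();
        var lods = new LOD[]
        {
            // Use the detailed mesh only while the object covers more than 25% of screen height.
            new LOD(0.25f, new[] { detailedMesh }),
            // Below that, switch to the simple mesh; cull the object entirely under 2%.
            new LOD(0.02f, new[] { simpleMesh }),
        };
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}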
Through the Performance Advisor report, we discovered that our shaders could be too expensive, and that we could benefit from reducing their precision. In Streamline, we can use the Mali varying usage chart to see the number of cycles where 32-bit (high precision) or 16-bit (medium precision) interpolation is active. Here, we can see that 32-bit interpolation is used in most cycles. 16-bit variables interpolate twice as fast as 32-bit variables, and use half the space in shader registers to store interpolation results, so it is recommended to use mediump (16-bit) varying inputs to fragment shaders whenever possible.

To explore shaders, we can use Arm Mobile Studio’s static offline compiler tool to generate a quick analysis of the shader program. To do this, you need to grab the shader code from the compiled file that Unity gives you, then run Mali Offline Compiler on that file:
1. In Unity, select the shader you want to analyze, either directly from your assets folder, or by selecting a material, clicking the gear icon and choosing Select shader.
2. Choose Compile and show code in the Inspector. The compiled shader code will open in your default code editor. This file contains several shader code variants.
3. Copy either a vertex or fragment shader variant from this file into a new file, and give it an extension of either .vert or .frag. Vertex shaders start with #ifdef VERTEX and fragment shaders start with #ifdef FRAGMENT. They end with their respective #endif. (Don’t include the #ifdef and #endif statements in the new file.)
4. In a command terminal, run Mali Offline Compiler on this file, specifying the GPU you want to test. For example: malioc -c Mali-G72 myshader.frag
Refer to Getting started with Mali Offline Compiler for more instructions.

We chose to analyze the fragment shader that was responsible for the dissolve effect that occurs when the enemy NPCs die. Here is the Mali Offline Compiler report, with highlighted sections of interest: We can see that only 2% of arithmetic computation is done efficiently at 16-bit precision. The shader will operate more efficiently if we reduce precision from highp to mediump. This reduces both energy consumption and register pressure, and can double the performance. There are situations where highp is always required, such as for position and depth calculations, but in many cases there is little noticeable difference on-screen when reducing precision to mediump.

The report provides an approximate cycle cost breakdown for the major functional units in the Mali shader core. Here, we can see that the arithmetic unit is the most heavily used. In the shader properties section, we see that this shader contains uniform computation that depends only on literal constants or uniform values. This produces the same result for every thread in a draw call or compute dispatch. Ideally, this kind of uniform computation should be moved into application logic on the CPU.

We can also see that the shader can modify the fragment coverage mask that determines which sample points in each pixel are covered by a fragment, using the discard statement to drop fragments below an alpha threshold. Shaders with modifiable coverage must use a late-ZS update, which can reduce the efficiency of early ZS testing and fragment scheduling for later fragments at the same coordinate. You should minimize the use of discard statements and alpha-to-coverage in fragment shaders where possible.
Refer to the Arm Mali Best Practices guide for advice on using discard statements.

In Arm Mobile Studio’s Graphics Analyzer, you can see all the graphics API calls that the application made, and step through them one by one to see how the scene is built. This helps to identify objects that are too complex for their on-screen size and distance from the camera. Here are a few examples we found in this game: The brickwork over in the far corner of the scene is built with geometry and uses 2064 vertices. The detail is barely visible in the final output, so this is wasted processing. We found the same issue for the floor tiles – these are 1170 vertices each, but even though the object is close to the camera, the scene does not really benefit from this complexity. It would be more efficient to use a normal map here to represent the bumps and angular edges, rather than building them with triangles. Additionally, we can see that these objects are drawn using separate draw calls. Reducing the number of draw calls by batching objects together or using object instancing could increase performance. Another example is the statues at the back of the scene – 6966 vertices each. You can see that the mesh is quite complex, which will give a great visual result when the player gets close to the statues, but from this camera position, they are hardly noticeable. It would save a lot of processing power to use Mesh LODs here to represent these objects when they are this far away from the camera. Remember that reducing complexity for many similar objects adds up to a huge saving in geometry processing, which subsequently reduces the amount of fragment shading required. Not only will this bring the fragment workload down and increase our frames per second, it will also reduce the install footprint of the APK.

We’ve uncovered several areas where we could make changes to the game to improve performance. Here are the ones we chose to implement, and how we did it.

Fixed Timestep is a frame rate-independent interval that controls when physics calculations and FixedUpdate() events are performed. By default, this is set to run 50 times per second. While 50 or even 60 fixed updates per second is sustainable on high-end mobile devices, more mainstream devices run at 30 fps, which this title is targeting. Go into Edit > Project Settings, and then into the Time category, to set the Fixed Timestep property to 0.04. This will ensure that your physics calculations, FixedUpdate(), and updates are all running in sync. After the adjustments were made to the fixed Timestep in Unity, the fixed update portion of the main game loop was only called once per frame, for an average of 1.5 ms. This is a huge improvement from the 12 ms that it had taken previously – and a simple solution to a common performance pitfall.

At the startup of the app, data for all objects referenced by built-in scenes or in the Resources folder is loaded into the Instance ID cache. These assets are treated like one big asset bundle, so there is metadata and indexing information that is always loaded into memory. Once an asset from this bundle is used, it can never be unloaded from memory. The recommended method for handling assets and resources when aiming to improve your memory consumption is the Addressable Asset System, which provides an efficient way to unload content that’s no longer needed from memory.
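As a sketch of what Addressables usage looks like in practice (the address string below is a placeholder, and this assumes the Addressables package is installed):

using UnityEngine;
using UnityEngine.AddressableAssets;
using UnityEngine.ResourceManagement.AsyncOperations;

// Sketch: load a prop through Addressables so the asset can be released (and unloaded)
// once it is no longer needed, instead of living in memory for the whole session.
public class AddressablePropLoader : MonoBehaviour
{
    AsyncOperationHandle<GameObject> handle;
    GameObject instance;

    void Start()
    {
        handle = Addressables.LoadAssetAsync<GameObject>("Props/MineCart"); // placeholder address
        handle.Completed += h =>
        {
            if (h.Status == AsyncOperationStatus.Succeeded)
                instance = Instantiate(h.Result);
        };
    }

    void OnDestroy()
    {
        if (instance != null) Destroy(instance);
        Addressables.Release(handle); // frees the asset when nothing references it anymore
    }
}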
In our environment, we have many objects that appear multiple times. Walls, floor tiles and other environment props are all duplicated to build out this scene. We can save draw calls by enabling GPU instancing on the objects’ material. GPU instancing renders identical meshes with a small number of draw calls, and allows each instance to have different parameters, such as color or scale. This modification can add an uplift to CPU performance. Below, you can see Performance Advisor data before GPU instancing was enabled. And here, you can see the same portion of the application, but with GPU instancing enabled – a small but measurable gain toward our target of 30 fps.

Render textures are a way of adding 3D elements to your UI, among many other use cases. If you have a camera rendering to a render texture, be sure to disable the camera when it’s not onscreen. There is no need to render something that the user won’t see. Use Graphics Analyzer or Unity’s Frame Debugger to make sure that these textures are not being updated offscreen.

Rather than putting extra work on the CPU by creating and destroying the same objects over and over, try object pooling. Object pooling is a design pattern that prompts you to create the objects you will need up front, front-loading the work for the CPU. Then, rather than destroying them, you add them back to the pool to be reused when an object of the same type is needed again. This is a fantastic way to relieve pressure on the CPU, so it can work freely on more important tasks for your game. With the move to object pooling, the waves of enemies appearing onscreen no longer produce an identifiable spike in the Unity Profiler captures, nor any discernible effect on the frame rate.

When a Mesh is onscreen, the GPU spends time rendering all of the triangles in the mesh, no matter how small. In games where your camera or assets can move, this often creates a situation where you spend a lot of the GPU’s resources rendering triangles of meshes that are too small to be seen in the frame. To address this, use Level of Detail (LOD) Meshes. This lets your game use less complex meshes as the camera moves away from the assets, which decreases the amount of mesh complexity that the GPU must render and reduces the vertex count per frame, giving larger triangles to pixel processing. Not only does this improve efficiency, it keeps the artistic integrity of the scene intact. For other asset optimization tips, be sure to check out the Game Artist Guides from Arm.

When you know that some assets with the same material properties will be used in the same scene, you can batch them together. Combine their texture data into a single texture atlas, which saves draw calls by drawing them at once, and results in a smaller footprint when compressed, compared to multiple separate files.

When writing your own custom shaders, or using Shader Graph, you can decide what precision to use: float or half. Choosing half, wherever possible, will make for more performant shaders – but remember that you will likely need to use float for anything that deals with world-space positions or depth calculations!

When you start to plan the post-process effects for your project, you have two options to choose from: the legacy Integrated feature set, or the new Post Processing v2 feature set. Below, you can see the game using the Integrated feature set. Every 3–4 frames, we see a spike in V-Sync, where the system is waiting on the frame to render. This causes the game to consistently drop below 30 fps and wastes power on the device.
Here, however, you can see the game's profiler data using the same effects, this time with the Post Processing v2 feature set.This profiler graph is much better, as Post Processing v2 is optimized to run on mobile hardware. Use it in your project to get the best post-processing performance.Adding post-processing effects to your game can add a nice layer of polish and visual depth to your project. But it's also important to balance these effects with performance. After all, these effects can get expensive. Turning them off on mass-market devices can save a lot of power, and stop a device from heating up in your players' hands.Once the other optimizations were in place, we could still see spiking in some areas. By binary searching (turning things on and off), we eventually tracked down two culprits. The first was the post-processing stack being used; switching it helped with the total time. The frame rate only levelled out, however, once we also turned off anti-aliasing; so much so that some of the post-processing was able to stay on, even on the lowest-spec devices we were using to test.After optimizing the game, we ran it through Arm Mobile Studio again, to look for any differences. The Performance Advisor report now shows that we have achieved an average fps of 28.9 (previously 17), and reduced overall fragment boundness. Fragment activity is still high in some sections of the game, so we still have work to do, but with good data to guide our investigation, we should be able to optimize these sections to further improve performance.The number of vertices per frame is now well under our 500,000 budget, and you can see regular dips where the enemy NPCs are destroyed.Geometry usage and culling is now much more efficient, with the number of visible primitives at a much healthier percentage of the number of input primitives. The facing test is responsible for around 50% of culled primitives, as expected, and those killed by the sample test are below 10%, showing that we have reduced the number of very small triangles.By using Unity's Profiler and Frame Debugger, along with Arm Mobile Studio, we have been able to discover multiple ways to improve performance and reduce the pressure on the CPU and GPU on a mobile device. Some of the problems we uncovered could be avoided in future titles, by sticking to a set of best practices for content.Of course, we don't want optimizations to reduce the quality of the onscreen visuals. Here's how the optimized version of the game looks beside the original version.Tackling profiling for mobile games with Unity and ArmPerformance testing often happens quite late in the development cycle. It's great to find further opportunities to optimize, but what if there's no time to fix the issues before your release deadline? It's much more practical to design content optimally to start with. It can be useful to set content budgets around mesh complexity, shader complexity and texture compression, to give your team the best chance to design efficiently for mobile.
Here are some resources that could help your team:Arm guide for Unity developersDeveloper guides for best practices on MaliUnity learn course, 3D Art Optimization for Mobile ApplicationsOnce you know that most of your application and assets follow a set of best practices, you can do regular performance testing throughout your development cycle, to catch any issues early enough to fix them.Teams that use a continuous integration system can take advantage of automated performance testing, available with Arm Mobile Studio professional edition. This edition can run across multiple devices in a device farm, and takes the pain out of manual profiling. The reported data can even be fed into any JSON-compatible database, so that you can build visual dashboards and alerts to monitor how performance changes over time, to flag issues sooner.Unity’s built-in profiler is a great place to start. Read about how to profile your application in the Unity documentation. Or, explore Frame Debugger, which lets you investigate how an individual frame is constructed.Download Arm Mobile Studio for free from the Arm Developer website and check out the starter guides for Performance Advisor, Streamline, Mali Offline Compiler and Graphics Analyzer, to get up and running quickly.For additional help profiling with Unity’s Profiler and Frame Debugger, please feel free to ask questions in our forum.For further support while working with Mali devices or Arm Mobile Studio, go to Arm’s Graphics and Gaming Forum, where you can ask questions, and Arm will be happy to help.
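As a closing aside, the object pooling pattern recommended earlier in this post is small enough to sketch in a few lines. The example below is illustrative rather than code from the post (the SimplePool name is hypothetical); newer Unity versions also include a built-in UnityEngine.Pool.ObjectPool<T> you may prefer.

using System.Collections.Generic;
using UnityEngine;

// Minimal object pool sketch: instances are created up front and reused instead of
// being created and destroyed repeatedly, avoiding CPU spikes when waves of objects spawn.
public class SimplePool : MonoBehaviour
{
    [SerializeField] GameObject prefab;      // e.g., an enemy or projectile prefab
    [SerializeField] int initialSize = 32;   // created up front, front-loading the work

    readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        for (int i = 0; i < initialSize; i++)
        {
            var instance = Instantiate(prefab, transform);
            instance.SetActive(false);
            pool.Enqueue(instance);
        }
    }

    public GameObject Get(Vector3 position, Quaternion rotation)
    {
        // Reuse an inactive instance if one exists, otherwise grow the pool.
        var instance = pool.Count > 0 ? pool.Dequeue() : Instantiate(prefab, transform);
        instance.transform.SetPositionAndRotation(position, rotation);
        instance.SetActive(true);
        return instance;
    }

    public void Release(GameObject instance)
    {
        // Instead of Destroy(), deactivate and return to the pool for later reuse.
        instance.SetActive(false);
        pool.Enqueue(instance);
    }
}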

>access_file_
1337|blog.unity.com

Best practices for maximizing revenue from interstitial ads in your app

Apps attract all kinds of users - some might play for weeks and engage with all the content features, while some might just hop in and out after a day or two. To make sure you're monetizing the majority of your users, interstitial ads are an important addition to your ad strategy.In-app purchases and rewarded video ads are powerful revenue generators, effective for making money from your most engaged users who are committed enough to spend their time or money in the app. But what about the significant number of your users who aren't engaged enough to interact with your in-app store or rewarded video ads? That's where interstitial ads come into play. By the end of this article, you'll understand what they are and our latest best practices for placement, frequency, and segmentation for your game's interstitial ads, so you can begin maximizing your revenue.What are interstitial ads?Interstitial ads are full screen ad units that contain static creatives, videos, or even gifs. They are system-initiated, which means you, as the developer, decide when the user sees them. This is in contrast to user-initiated ads like rewarded videos, which the user chooses to open and interact with in exchange for a reward.A successful interstitial ad strategy will maximize your ad revenue without causing any meaningful reduction in user retention. Let's take a look at how to achieve this.Placement of interstitial adsPlacement has a huge impact on the success of your interstitial ad strategy, and should be your first focus. A bad placement, such as one that interrupts the app flow, will damage the user experience and can lead to a drop in retention. Generally speaking, try to make sure you're only serving these ads at natural breaks in play - although there are some exceptions, as we'll cover shortly.For IAP-heavy games, it's best to be more cautious, showing interstitials only at the end of game sessions, or when users do not engage with your rewarded video ads or your IAP offers. Each time you try one of these placements, make sure you're A/B testing to see what works best with your specific player base.For an ad-based game, like in the hyper-casual genre, placing interstitials when the user opens the game, loses a level, returns to the homepage, or gives up on watching a rewarded video can be effective at maximizing ARPDAU. For starters, try A/B testing the impact of showing the interstitial before or after the end-level pop up screen.After end-level screenShowing the ad after the end-level screen is the most common practice in the hyper-casual space.When the user finishes the level, a pop up appears on the screen with an offer to increase the winnings by watching a rewarded video ad. If the user ignores this offer and exits the pop up, they are shown an interstitial ad. While this can be effective in terms of ARPDAU uplift, it can sometimes have a negative impact on the user experience: being shown an interstitial ad at this point can frustrate users.Before end-level screenTo avoid this, a different approach that we've seen work in the hyper-casual space is to show the interstitial ad before the end-level pop up screen with the rewarded video offer. As you can see below, the ad appears immediately after the level is completed.In this approach, the offer to increase the winnings via a rewarded video ad appears after the interstitial ad. This flow is often better for the user experience, helping increase ARPDAU while minimizing any potential effect on retention rates.
It's a particularly good fit for games where users typically win every level, such as puzzle games.Mid-levelIn addition, in hyper-casual games where levels are over the 45 seconds mark - such as some games in the i-o category - we recommend testing the impact of mid-level interstitial placements. For these games, waiting until the end of each level is too long of an interval and misses out on potential revenue. Even though mid-level placements interfere with the gameplay, our data has shown that Day 7 and Day 14 retention is not significantly affected by this placement - while ARPU can increase significantly.Timing of interstitial adsOnce you’ve chosen your placements, you need to decide when to show users an interstitial ad for the first time. The truth is it depends largely on the genre of your app. In the hyper-casual category, where games are predominantly ad-based, users have a relatively short lifetime and tend not to stick around for too long. Because of this, concentrate on monetizing users while you can - we recommend showing interstitial ads from day 1 in order to maintain a positive return on investment (ROI) for your user acquisition campaigns. If you wait until day 7, for example, the user may have churned already and therefore you'll have missed the opportunity to monetize them and recoup the money you spent on acquiring them.But at what moment in the game should you show users an interstitial for the first time on Day 1? While some might assume it’s best practice to display an interstitial right away, on Level 1, we’ve in fact seen stronger retention and ARPU performance from showing the ad at a slightly later level, from Level 3 to Level 8. By waiting until a slightly later level, you can earn the engagement of the user and avoid the risk of causing them to churn at the first opportunity. Whatever your game and category, make sure to A/B test this approach and see if it works for you.While Day 1 is a good starting point for hyper-casual games, in game genres like casual, midcore, and hardcore, we recommend waiting at least two weeks before showing interstitial ads to your users. In these games, where users tend to have longer lifetimes, you have more time to try to convert users from non-payers into payers, or for them to engage with user-initiated ads like rewarded videos. In fact, rewarded videos are a useful tool for converting users into payers, by giving them a “taste” of premium content for free. During these two weeks, it’s a good opportunity to get to know your users’ behavior, understand who’s a payer, who’s an ad whale, and who generates minimal value - this will guide your segmentation strategy, which we’ll cover later on.Frequency and pacing of interstitial adsAfter breaking the ice and showing your users interstitial ads for the first time, you then need to figure out how often they’ll see these ads per session, and the interval of time between each ad. This is what frequency (or capping) and pacing is all about.The number of ads you can show your users without damaging your retention really depends on your app's genre: for example, people who play hyper-casual games are used to seeing lots of ads, while RPG or strategy gamers have a lower tolerance for system-initiated ads.Do your research and get familiar with your genre’s benchmarks to guide you in the right direction. Then you’ll need to A/B test to find the exact number that works for your specific app. 
The metrics to look out for when A/B testing different pacing and capping strategies are impressions, retention, and ARPU. Allow enough time and a large enough volume of users before stopping your tests - around 2 weeks is usually enough.If you increase the frequency of your system-initiated ads, and you see after two weeks that D1 retention dropped, without a big enough increase in your ARPU, then try reducing the frequency or tinkering with the placement. Alternatively, if you increase the frequency and you see ARPU rise after a couple of weeks of testing - with no significant impact on retention - then you can roll out this change to your full audience.Learn how Neon Play and ZiMAD ran A/B tests on the ironSource platform to find the optimal frequency for their interstitial ads and level up their ARPU.User segmentation for interstitial adsUser segmentation is one of the most important strategies to leverage with your interstitial ads. It is particularly crucial for games which have users with longer lifetimes and a complex in-app economy. There are several tools to set up segmentation, including one we have in ironSource's mediation platform, which lets you break down your users into different groups like payers and non-payers. This is the most common form of segmentation, because the potential negative impact on retention caused by serving interstitials to paying users isn't worth the incremental revenue uplift from interstitial ads.Another useful segmentation is between ad whales and non-ad whales, which differentiates between users who generate significant revenue for your app through rewarded videos, and users who generate a smaller amount of revenue. For ad whales, who are valuable for your ARPU and more likely to convert into payers, we don't recommend serving interstitial ads. For your non-ad whales - users who are engaged enough to watch rewarded videos, but not frequently enough to generate significant revenue - you can serve them interstitials with a measured approach. Run A/B tests to check the optimal frequency and to determine the value these ads generate in relation to their impact on retention and overall ARPU.
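To make the placement, capping, pacing, and segmentation ideas above concrete, here is a rough sketch of the kind of gate you might put in front of an interstitial call. Everything in it is a placeholder: the InterstitialGate name, the thresholds, and the ShowInterstitial() stub are not from any SDK, and the values should come from your own A/B tests and mediation platform.

using UnityEngine;

// Illustrative gate for system-initiated interstitials: checks segmentation,
// first-eligible level, a per-session frequency cap, and a pacing interval.
public class InterstitialGate : MonoBehaviour
{
    [SerializeField] int firstEligibleLevel = 3;     // e.g., somewhere between Level 3 and 8
    [SerializeField] int maxPerSession = 4;          // frequency cap; genre dependent
    [SerializeField] float minSecondsBetween = 60f;  // pacing interval between ads

    int shownThisSession;
    float lastShownTime = float.NegativeInfinity;

    public bool TryShow(int currentLevel, bool isPayer, bool isAdWhale)
    {
        // Segmentation: don't serve interstitials to payers or ad whales.
        if (isPayer || isAdWhale) return false;

        // Timing: wait until the player has a few levels invested.
        if (currentLevel < firstEligibleLevel) return false;

        // Frequency cap and pacing.
        if (shownThisSession >= maxPerSession) return false;
        if (Time.unscaledTime - lastShownTime < minSecondsBetween) return false;

        shownThisSession++;
        lastShownTime = Time.unscaledTime;
        ShowInterstitial();
        return true;
    }

    void ShowInterstitial()
    {
        // Placeholder: call your ad network / mediation SDK here.
        Debug.Log("Show interstitial ad");
    }
}

A call site would typically sit at one of the natural breaks discussed above, for example right after a level ends and before the end-level pop up is shown.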

>access_file_
1339|blog.unity.com

Teaching robots to see with Unity

The world of robotics is full of unknowns! From sensor noise to the exact positioning of important objects, robots have a critical need to understand the world around them to perform accurately and robustly. We previously demonstrated a pick-and-place task in Unity using the Niryo One robot to pick up a cube with a known position and orientation. This solution would not be very robust in the real world, as precise object locations are rarely known a priori. In our new Object Pose Estimation Demo, we show you how to use the Unity Computer Vision Perception Package to collect data and train a deep learning model to predict the pose of a given object. We then show you how to integrate the trained model with a virtual UR3 robotic arm in Unity to simulate the complete pick-and-place system on objects with unknown and arbitrary poses.Robots in the real world often operate in and must adapt to dynamic environments. Such applications often require robots to perceive relevant objects and interact with them. An important aspect of perceiving and interacting with objects is understanding their position and orientation relative to some coordinate system, also referred to as their “pose.” Early pose-estimation approaches often relied on classical computer vision techniques and custom fiducial markers. These solutions are designed to operate in specific environments, but often fail when their environments change or diverge from the expected. The gaps introduced by the limitations of traditional computer vision are being addressed by promising new deep learning techniques. These new methods create models that can predict the correct output for a given input by learning from many examples.This project uses images and ground-truth pose labels to train a model to predict the object’s pose. At run time, the trained model can predict an object’s pose from an image it has never seen before. Usually, tens of thousands or more images need to be collected and labeled for the deep learning models to perform sufficiently. Real-world collection of this data is tedious, expensive, and, in some cases like 3D object localization, inherently difficult. Even when this data can be collected and labeled, the process can turn out to be biased, error-prone, tedious, and expensive. So how do you apply powerful machine learning approaches to your problem when the data you want is out of reach or doesn’t actually exist for your application yet?Unity Computer Vision allows you to generate synthetic data as an efficient and effective solution for your machine learning data requirements. This example shows how we generated automatically labeled data in Unity to train a machine learning model. This model is then deployed in Unity on a simulated UR3 robotic arm using the Robot Operating System (ROS) to enable pick-and-place with a cube that has an unknown pose.Simulators, like Unity, are a powerful tool to address challenges in data collection by generating synthetic data. Using Unity Computer Vision, large amounts of perfectly labeled and varied data can be collected with minimal effort, as previously shown. For this project, we collect many example images of the cube in various poses and lighting conditions. This method of randomizing aspects of the scene is called domain randomization1. More varied data usually leads to a more robust deep learning model.To collect data with the cube in various poses in the real world, we would have to manually move the cube and take a picture. 
Our model used over 30,000 images to train, so if we could do this in just 5 seconds per image, it would take us over 40 hours to collect this data! And that time doesn’t include the labeling that needs to happen. Using Unity Computer Vision, we can generate 30,000 training images and another 3,000 validation images with corresponding labels in just minutes! The camera, table, and robot position are fixed in this example, while the lighting and cube’s pose vary randomly in each captured frame. The labels are saved to a corresponding JSON file where the pose is described by a 3D position (x,y,z) and quaternion orientation (qx,qy,qz,qw). While this example only varies the cube pose and environment lighting, Unity Computer Vision allows you to easily add randomization to various aspects of the scene. To perform pose estimation, we use a supervised machine learning technique to analyze the data and generate a trained model.In supervised learning, a model learns how to predict a specific outcome based on training a set of inputs and corresponding outputs, images, and pose labels in our case. A few years ago, a team of researchers presented2 a convolutional neural network (CNN) that could predict the position of an object. Since we are interested in a 3D pose for our cube, we extended this work to include the cube’s orientation in the network’s output. To train the model, we minimize the least squared error, or L2 distance, between the predicted pose and the ground-truth pose. After training, the model predicted the cube’s location within 1cm and the orientation within 2.8 degrees (0.05 radians). Now let’s see if this is accurate enough for our robot to successfully perform the pick-and-place task!The robot we are using in this project is a UR3 robotic arm with a Robotiq 2F-140 gripper, which was brought into our Unity scene using the Unity Robotics URDF Importer package. To handle communication, the Unity Robotics ROS-TCP Connector package is used while the ROS MoveIt package handles motion planning and control.Now that we can accurately predict the pose of the cube with our deep learning model, we can use this predicted pose as the target pose in our pick-and-place task. Recall that in our previous Pick-and-Place Demo, we relied on the ground-truth pose of the target object. The difference here is that the robot performs the pick-and-place task with no prior knowledge of the cube’s pose and only gets a predicted pose from the deep learning model. The process has 4 steps:An image with the target cube is captured by UnityThe image is passed to a trained deep learning model, which outputs a predicted poseThe predicted pose is sent to the MoveIt motion plannerROS returns a trajectory to Unity for the robot to execute in an attempt to pick up the cubeEach iteration of the task sees the cube moved to a random location. Although we know the cube’s pose in simulation, we will not have the benefit of this information in the real world. Thus, to lay the groundwork for transferring this project to a real robot, we need to determine the cube’s pose from sensory data alone. Our pose estimation model makes this possible and, in our simulation testing, we can reliably pick up the cube 89% of the time in Unity!Our Object Pose Estimation Demo shows how Unity gives you the capability to generate synthetic data, train a deep learning model, and use ROS to control a simulated robot to solve a problem. 
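For reference, the pose label format described above (a 3D position plus a quaternion orientation per captured frame, saved to JSON) could be modeled with a small serializable type like the sketch below. This is illustrative only; the PoseLabel class is not taken from the demo's source, which defines its own capture and label schema.

using UnityEngine;

// Hypothetical data shape mirroring the labels described in the post:
// position (x, y, z) and quaternion orientation (qx, qy, qz, qw) for the target cube.
[System.Serializable]
public class PoseLabel
{
    public float x, y, z;        // cube position
    public float qx, qy, qz, qw; // cube orientation as a quaternion

    public static PoseLabel FromTransform(Transform target)
    {
        var p = target.position;
        var q = target.rotation;
        return new PoseLabel { x = p.x, y = p.y, z = p.z, qx = q.x, qy = q.y, qz = q.z, qw = q.w };
    }

    public string ToJson() => JsonUtility.ToJson(this);
}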
We used the Unity Computer Vision tools to create synthetic, labeled training data and trained a simple deep learning model to predict a cube’s pose. The demo provides a tutorial walking you through how to recreate this project, which you can expand by applying more randomizers to create more complex scenes. We used the Unity Robotics tools to communicate with a ROS inference node that uses the trained model to predict a cube’s pose. These tools and others open the door for you to explore, test, develop, and deploy solutions locally. When you are ready to scale your solution, Unity Simulation saves both time and money compared to local systems.And did you know that both Unity Computer Vision and Unity Robotics tools are free to use!? Head over to the Object Pose Estimation Demo to get started using them today!Now that we can pick up objects with an unknown pose, imagine how else you could expand this! What if there are obstacles in the way? Or multiple objects in the scene? Think about how you might handle this, and keep an eye out for our next post!Can’t wait until our next post!? Sign up to get email updates about our work in robotics or computer vision.You can also find more robotics projects on our Unity Robotics GitHub.For more computer vision projects, visit our Unity Computer Vision page.Our team would love to hear from you if you have any questions, feedback, or suggestions! Please reach out to unity-robotics@unity3d.com.CitationsJ. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, P. Abbeel, “Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World” arXiv:1703.06907, 2017J. Tobin, W. Zaremba, and P. Abbeel, “Domain randomization and generative models for robotic grasping,” arXiv preprint arXiv:1710.06425, 2017

>access_file_
1340|blog.unity.com

Experimenting with Shader Graph: Doing more with less

You can improve the runtime efficiency of your shader without sacrificing the quality of your graphics by packing physically based rendering (PBR) material information into a single texture map and layering it into a compact shader. Check out this experiment.This experiment works in both the Universal Render Pipeline (URP) and High Definition Render Pipeline (HDRP). To get the most out of this article, you should have some familiarity with Shader Graph. If you are new to Shader Graph, please explore our resources for an introduction and more detail about this tool for authoring shaders visually.When working with art assets in a terrain-like environment, multiple layers of tileable material are typically preferred as they produce better blending results. However, the GPU performance cost of multiple texture samples and growth of memory use with each layer added to the shader can be prohibitive for some devices and inefficient in general.With this experiment, I aimed to:Do more with lessMinimize the memory footprint and be frugal with texture sampling in representing a PBR materialMinimize shader instructionsPerform layer blending with minimum splat map/vertex color channelsExtend the functionality of splat map/vertex color for extra bells and whistlesWhile the experiment achieved its goals, it comes with some caveats. You’ll have to set your priorities according to the demands of your own project in determining which trade-offs are acceptable to you.Before layering, the first thing you need to do is figure out the PBR material packing. PBR material typically comes with the parameters for Albedo (BaseColor), Smoothness mask, Ambient Occlusion, Metalness, and Normal defined.Usually, all five maps are represented in three texture maps. To minimize texture usage, I decided to sacrifice Metalness and Ambient Occlusion for this experiment.The remaining maps – Albedo, Smoothness and Normal Definition – would traditionally be represented by at least two texture maps. To reduce it to a single map requires some preprocessing of each individual channel.The final result of the PBR Material packed into a single texture. Red = dHdu (Derivatives Height Relative to the U direction) for Normal Definition#. Green = dHdv (Derivatives Height Relative to the V direction) for Normal Definition#. Blue = Linear Grayscale shade representing Albedo (color reconstructed in shader). Alpha = Linear Smoothness map (standard Smoothness map). Note: The texture is imported into Unity with sRGB unchecked and compressed with BC7 format. When porting to other platforms, switch to the platform-supported equivalent 4-channel texture format.Processing the mapsAlbedoAlbedo is normally defined as an RGB texture; however, many terrain-like materials (rock, sand, mud, grass, etc.) consist of a limited color palette. You can exploit this property by storing Albedo as a grayscale gradient and then color remapping it in the shader.There is no set method for converting the RGB albedo to a grayscale gradient. For this experiment, The grayscale Albedo was created through selective masking of the original Albedo map channels and Ambient occlusion; to match the prominent color in the shader color reconstruction, just eyeball any manual adjustments.SmoothnessSmoothness is considered very important for PBR material definition. 
To define smoothness more precisely, it has its own channel.A simple multiplier was added to the smoothness in the shader for some variation in the material.Normal definitionThe Normal map is important for showing the detailed characteristics of a surface. A typical PBR Material uses a tangent space normal map. In this experiment, I chose a pre-converted derivatives map using surface gradient framework for the reasons below. (SeeMorten Mikkelsen’s surface gradient framework for more information).To pre-convert tangent space normal maps to derivatives, use this Photoshop action.Using a pre-converted Derivatives map has several advantages:Can be directly converted to surface gradient, using fewer instructions than a standard tangent space normal map, which requires derivatives conversion in the shaderCan be stored in two channels (dHdu and dHdv), resulting in a lower memory and texture cache footprint in runtimeDoes not require blue channel reconstruction in the shader, which is typical when processing tangent space normal maps, since the surface gradient framework takes care of the normal reconstruction (fewer shader instructions)Works correctly when adjusted in Photoshop – that is, by blending, masking or reducing intensity – and does not require renormalization. For example, to reduce intensity, simply blend the map against RGB(128,128,0).In conjunction with the surface gradient framework, the advantages further include:Normal bump information can be blended and composited in the shader the same way as albedo blend/composite, with the correct result.Increasing, reducing and reversing bump contributions is trivial and accurate.But pre-converted derivatives from tangent space normal map also have some disadvantages:Using Photoshop conversion, normal definition gets clamped at an angle greater than 45 degrees, to balance precision in an 8-bit texture.Artists are used to working with tangent space normal maps and require the maps to be pre-converted via Photoshop as part of their workflow.Note: Clamping at an angle greater than 45 degrees does not apply to shader-based derivatives conversion.Depending on your use case, the limitation may have a lesser or greater effect. In this experiment, a normal direction less than 45 degrees does not have a noticeable negative impact on the end result. In fact, in this case it provides a benefit by reducing unwanted reflection from extreme normal direction.The full unpacking processThe complete Sub Graph to unpack the Compact PBR texture to output colored Albedo, smoothness and surface gradient.Note: Surface gradient conversion to Normal is done outside the Sub Graph so that the material can be easily blended based on the output of the UnpackedSubGraph.For this experiment, I chose a tier-based layering method on a single channel remap. The Sub Graph does five linear interpolations (plus the base, forming six layers).There are many ways to blend layer weights. This method has the simplicity of a single vector input, which suits the experiment goal. This allowed lots of layering without burning through multiple channels in splat maps or vertex channels.The drawback of this method is that you cannot control the weight of an individual layer’s contribution. The blending will always be a transition from the previous layer. 
Depending on the use case, this can be a limiting factor compared to a traditional per-channel blend.The Sub Graph to remap a single channel to represent the six layers.The Sub Graph shown above is predefined for six layers of tier-based blending. To create more layers, divide 1 by the desired number of blended layers minus 1, and then remap each layer based on that value range.For example, for a nine-layer blend material, each layer remap range is 1/(9-1) = 0.125.Be aware that as you divide the single channel into smaller portions, you have less shading range.Layer blending requires only a single channel (the red vertex channel). The remaining three vertex channels offer extra functionalities. The final Shader Graph produces results using the remaining vertex channels.In this experiment, vertex painting was done inside Unity Editor using Polybrush (available from the Package Manager). Suggested Vertex Paint color palette for this shader.Red: Used to weight the layer contribution. Red vertex channel painting demoGreen: Sets the surface gradient property, to flip, reduce or add normal bump contribution (remapped to -1 and 1).0 reverses the normal bump (-1)0.5 value zeroes out the normal bump (0)1 sets the normal bump to the original value (+1).Green vertex channel painting demoBlue: Controls smoothness and surface gradient bump scale to create a wet water look0 = no alteration255 = maximum smoothness and flat normal map (wet look)Blue vertex channel painting demoAlpha: Controls the weight of the Albedo layer, setting the base color to white, with the contribution based on the y axis of the surface normal. It does not alter the smoothness and takes advantage of the original surface layer smoothness and bump property.0 = no snow255 = solid snowAlpha vertex channel painting combined with previous channels to showcase how the whole layers interact with the snowThe combined results of the different vertex painting channels:You can adjust the shader blending method and the settings for the various vertex channel/splat map functionalities according to your project's requirements.The purpose of this experiment was to extend the functionality of the Shader Graph while minimizing resources. The texture was preprocessed and unpacked, but is there a payoff in runtime efficiency?Performance profiling shows the efficiencies these efforts produced.A standard six-layer blend shader was created for comparison with the compact six-layer blend shader. Both shaders were created using an identical blending method with the same functionalities. The main difference is that the standard shader uses three different textures to represent a single layer.For profiling, a single mesh was rendered on screen with blend material using the Universal Render Pipeline in the targeted platform.Mobile memory and performance profileTexture compression for mobile (Android):Standard PBR with Albedo, Mask and Normal map at 1024x1024 for mobile:6x Albedo map ASTC 10x10 = 6x 222.4 KB6x Mask map ASTC 8x8 = 6x 341.4 KB6x Normal map ASTC 8x8 = 6x 341.4 KBTotal Texture memory usage 5.431 MBCompact PBR at 1024x1024 for mobile:6x PackedPBR Texture ASTC 8x8 = 6x 341.4 KBTotal Texture memory usage 2.048 MBWith the compact six-layer material, there is approximately 62% less texture memory consumption on Mobile (Android), savings of more than half.
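Stepping back briefly to the tier-based layering described earlier, the remap arithmetic is easy to mirror on the CPU for experimentation. The sketch below is not the actual Sub Graph, and the TierBlendSketch name is made up; it simply illustrates the 1/(N-1) range per tier and the transition-from-the-previous-layer behavior.

using UnityEngine;

// Conceptual CPU-side mirror of the single-channel, tier-based blend.
// For N layers driven by one channel, each tier occupies 1 / (N - 1) of the 0..1 range:
// six layers -> 0.2, nine layers -> 0.125.
public static class TierBlendSketch
{
    public static float RangePerTier(int layerCount) => 1f / (layerCount - 1);

    // Blends successive layers with clamped lerps, so each tier is always a
    // transition from the previous layer (the single-channel drawback noted above).
    public static Color Blend(Color[] layers, float channelValue)
    {
        float range = RangePerTier(layers.Length);
        Color result = layers[0];
        for (int i = 1; i < layers.Length; i++)
        {
            float weight = Mathf.Clamp01((channelValue - (i - 1) * range) / range);
            result = Color.Lerp(result, layers[i], weight);
        }
        return result;
    }
}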
Mobile Android/Vulkan with Adreno 630 (Snapdragon 845); Snapdragon profile results:Approximately 70% less texture memory read in runtime.Standard took 9971020 clocks to render.Compact took 6951439 clocks to render.Compact material renders on screen approximately 30% faster. Profiling result from Snapdragon Profiler.PC memory and performance profileStandard PBR with Albedo, Mask and Normal map at 1024x1024:6x Albedo map DXT1 = 6x 0.7 MB6x Mask map DXT5/BC7 = 6x 1.3 MB6x Normal map DXT5/BC7 = 6x 1.3 MBTotal Texture memory usage 19.8 MBCompact PBR at 1024x1024:6x PackedPBR Texture BC7 = 6x 1.3 MBTotal Texture memory usage 7.8 MBThe compact six-layer material consumes 60% less texture memory on PC (savings of more than half).PC laptop with Radeon 460 Pro rendering at 2880x1800; RenderDoc profile results:Draw Opaques for standard 6-layer blend: 5.186 ms.Draw Opaques for compact 6-layer blend: 3.632 ms. Compact material renders on screen approximately 30%* faster. *RenderDoc profile value fluctuates; 30% is an average of samples.PC desktop with NVIDIA GTX 1080 rendering at 2560x1440; nSight profile results:Render Opaques for standard 6-layer material: 0.87 msRender Opaques for compact 6-layer material: 0.48 msCompact material renders on screen approximately 45% faster. Profiling results from nSight.Console performance profileOn PlayStation 4, using compact material yields 60% memory savings, identical to that for PC as the PS4 uses the same compression.PS4 base rendering at 1920x1080; Razor profile results:Render Opaques for standard 6-layer material: 2.11 msRender Opaques for compact 6-layer material: 1.59 msCompact material renders on screen approximately 24.5% faster.Profiling result from PS4 Razor profiler.In summary, using a compact six-layer PBR shader offers performance gains and significant memory savings. The variation in GPU performance is interesting but expected, as unpacking the material consumes more ALUs than sampling more textures.This sample project with Shader Graphs and Sub Graphs can be downloaded here:[DOWNLOAD HERE], Unity 2020.2.5f1 with HDRP 10.3.1[DOWNLOAD HERE], Unity 2020.2.5f1 with URP 10.3.1[DOWNLOAD HERE], Photoshop action to pre-convert tangent space normal map to derivatives.Screenshot from Universal Render Pipeline version of the project.The main components of this experiment are:Shader Graph for custom materialPre-converted DerivativesSurface gradient frameworkAlbedo color reconstructionSingle-channel layer blendingUpVector blend technique, smoothness and bump control via vertex channel blendThis experiment showcases how you can use Shader Graph to produce beautiful graphics that are also efficient. Hopefully, this example can inspire artists and developers to push aesthetic boundaries with their Unity projects.Rinaldo Tjan (Technical Art Director, R&D, Spotlight Team) is a real-time 3D artist with an extreme passion for real-time lighting and rendering systems.Having started his career in the PlayStation 2 days, he has more than a decade of end-to-end artist workflow knowledge, from texturing to final rendered scene creation. Prior to joining Unity Technologies, he helped deliver AAA games such as BioShock 2, The Bureau: XCOM Declassified, and Mafia III.He currently works with Unity clients to help them augment their projects and realize their true potential using Unity, while helping drive the internal development and standards of Unity rendering features.

>access_file_