// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1695 transmissions indexed — page 49 of 85

[ 2023 ]

20 entries
961|blog.unity.com

Ready to create your own games? Here are 7 must-save tutorials

Do you want to create a game? Have you started a game and are struggling to finish it? Have you finished a game but are unsure about what to do next?

Below, we’ve curated a collection of resources to help you expand your game development skills, processes, and management, bringing you to professional game creator status in no time. Discover how to use game jams in the development process, and learn about services and analytics for managing your game and understanding how it’s performing.

- Intermediate Scripting: In Unity’s flagship project, you’ll learn to create with code as you program your own exciting projects from scratch in C#. As you iterate with prototypes, tackle programming challenges, complete quizzes, and develop your own personal project, you’ll transform from a beginner to a capable Unity developer. Start the project.
- Addressables: Have you ever wanted to create a mobile game that performs well on almost any device? What about adding downloadable content (DLC) or having holiday-themed content in your game? This is where the Addressables system can help (see the sketch at the end of this entry). Take the course.
- DOTS: If you’re working on a game (or other real-time simulation) that requires the most efficient CPU usage possible, then Unity’s Data-Oriented Technology Stack (DOTS) is a great way to get the performance you need. Learn about DOTS.
- Finishing your game: Lots of people start games, but it’s surprisingly hard to finish one. This course guides you through the process used by professional creators to develop their ideas, keep their projects on track, and deliver complete games. Take the course.
- Game jams: Are you a creator in need of a challenge? Developers all over the world participate in game jam events to expand their skills and test their ingenuity. This project will guide you through the basics of game jams, from what you can expect when you join a game jam to developing your game after the jam is complete. Get jamming.
- DevOps: Development operations (DevOps) is a key group of tools and workflows that you can use to help you create and manage your game or other real-time experience throughout its lifecycle. Start the course.
- User Acquisition: Learn about the features and campaign types available on Unity’s User Acquisition (UA) dashboard, including campaign setup, creative pack addition, targeting options, monitoring tools, and best practices for optimal performance. Get the tutorial.

Curious about what other educational tools are available? Check out the full Unity Learn catalog.
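The pattern the Addressables course teaches boils down to loading content by address rather than by direct scene reference, so the same code can pull an asset from a local bundle or remote DLC. A minimal sketch of that pattern, using the standard Addressables API (the address key and component name here are hypothetical, not from the course):

```csharp
using UnityEngine;
using UnityEngine.AddressableAssets;
using UnityEngine.ResourceManagement.AsyncOperations;

public class HolidayContentLoader : MonoBehaviour
{
    // Hypothetical address key; in a real project this is whatever address
    // you assigned to the asset in the Addressables Groups window.
    [SerializeField] string contentKey = "WinterEventBanner";

    void Start()
    {
        // Load the asset by address; it may live in a local bundle or be
        // downloaded as DLC, and the calling code doesn't need to care which.
        AsyncOperationHandle<GameObject> handle =
            Addressables.LoadAssetAsync<GameObject>(contentKey);
        handle.Completed += OnLoaded;
    }

    void OnLoaded(AsyncOperationHandle<GameObject> handle)
    {
        if (handle.Status == AsyncOperationStatus.Succeeded)
            Instantiate(handle.Result, transform);
    }
}
```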

>access_file_
962|blog.unity.com

Happy Harvest demo: See the latest 2D techniques

There’s no limit to how innovative today’s 2D games can be. With so many creative possibilities and the evolution of Unity’s 2D rendering and tools, we’ll keep you up to date on best practices for making 2D games in Unity.

Happy Harvest, now available on the Unity Asset Store and Unity Samples, shows developers how to harness the latest capabilities for creating 2D lights, shadows, and special effects with the Universal Render Pipeline (URP) in Unity 2022 LTS. It incorporates best practices any 2D creator can use, including not baking shadows into a sprite, keeping sprites flat, moving shadow and volume information to secondary textures, advanced Tilemap features, and much more.

Happy Harvest is a top-down demo with cheerful cartoon art. The sample takes you through a day in the life of an industrious farmer. Stepping from his farmhouse, he makes his way along cobblestone paths lit by lanterns. He tends to wheat, carrot, and corn crops, picks apples, and feeds his pigs and cows. His farmstead is dotted with ponds, there’s a barn in the back, and it’s all ringed by verdant pine trees.

A top-down perspective comes with challenges, like how to project the character, manage overlapping objects, and create shadows in an imaginative way. These were handled by using features included in Unity 2022 LTS for shadows, sprite libraries, and Tilemap 2D.

All of these assets are free to use in your personal or commercial projects, and you can also modify the demo with your own ideas. Download it today to start exploring this bucolic scene and its many details.

The demo is accompanied by a collection of instructional articles. These technical walkthroughs will help you understand how the lights, shadows, environment, and animations were created, so you can use the same steps in your own 2D project. The following pages are available with the demo:

- 2D light and shadow techniques in URP
- How to create art and gameplay with 2D tilemaps
- How to animate 2D characters in Unity 2022 LTS
- Create 2D special effects in Unity with the VFX Graph and Shader Graph

You can find these articles in the description on the Unity Asset Store page and in the in-Editor tutorial window in the demo. Additionally, you can preview each script in Unity Samples to better understand the recommended guidelines and coding structure for a 2D game. Let’s take a brief look at what you’ll learn from each page.

There are plenty of cute details in Happy Harvest, from fields of ripened corn and golden wheat, to gently swaying lanterns and the red-shingled farmhouse. But it’s the lighting and shadows that are the most immersive part of this cozy world, featuring an all-over glow and late afternoon shadows. By moving light and shadow information to separate textures (which does require some extra steps during the art creation process), you can create optimized real-time 2D lights and shadows.

Read the article “2D light and shadow techniques with the Universal Render Pipeline” to learn how to:

- Create and work with normal maps and mask maps (Secondary Textures) to add rich details like rim lighting on the main character, barrels, lamp posts, and other props.
- Use ambient and spot lighting to set the mood with tinting and effects that mimic the sun’s movement throughout the day.
- Create the illusion of volume, like the effects used on the bushes, by enabling normal maps on the lights.
- Create shadows for any shape and time of day using blob shadows and infinite shadows with the Shadow Caster.
- Control the movement of time and the changing of the light with a day-to-night script (see the sketch below).
- Optimize your 2D lights with tips from the Unity team.
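A day-to-night script typically drives a global 2D light over time. As a minimal sketch of that idea (this is not the demo’s actual script; the gradient, intensity range, and day length are placeholders):

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal; // Light2D lives here in URP

public class DayNightCycle : MonoBehaviour
{
    [SerializeField] Light2D globalLight;           // the scene's global 2D light
    [SerializeField] Gradient lightColor;           // placeholder: color sampled over one day
    [SerializeField] float dayLengthSeconds = 120f; // placeholder day length

    void Update()
    {
        // 0..1 progress through the current in-game day
        float t = (Time.time % dayLengthSeconds) / dayLengthSeconds;

        // Tint and dim the global light to mimic the sun's movement
        globalLight.color = lightColor.Evaluate(t);
        globalLight.intensity = Mathf.Lerp(0.2f, 1f, Mathf.Sin(t * Mathf.PI));
    }
}
```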
What do the cobblestone paths, ponds, grass, and background forest have in common? They were all made with Unity’s Tilemap system, which provides a way to create a game world with tiles – small sprites placed on a grid. Instead of designing a level as one big image, you can split it into brick-like pieces that are repeated throughout a whole level.

Tilemaps can help save time on art creation as well as memory and CPU power. This is because tiles can be rendered by a dedicated renderer, and tiles that are not visible on the screen can be disabled. A brush tool makes it efficient to paint tiles on a grid, and tiles can be scripted to use painting rules. They also come with automatic collision generation for more efficient testing and editing. Additionally, you can place GameObjects or use the API for game logic.

You can find tips for using the Tilemap system in the article “Create art and gameplay with 2D Tilemaps in Unity,” including how to:

- Use secondary textures for tilemaps: Every tilemap in the sample has counterparts called normal map and mask map textures that share the same dimensions and layout, but are painted for displaying the lighting.
- Use the Rule Tile feature, which is part of the 2D Tilemap Extras package. This package contains reusable 2D and Tilemap editor scripts you can use in your own projects and as the basis for custom Brushes and Tiles.
- Organize tiles in your project hierarchy: In the sample, the tiles are all contained in one GameObject called Grid. We created as few tilemaps as possible inside the Grid to prevent overlapping pixels and help keep overdraw low.
- Use the Tilemap API based on how it was used in the sample (see the sketch below for the core calls).
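The article walks through the sample’s own Tilemap API usage; as a generic sketch of the API itself (the crop and soil tiles and the harvesting scenario are hypothetical, not the demo’s code):

```csharp
using UnityEngine;
using UnityEngine.Tilemaps;

public class CropHarvester : MonoBehaviour
{
    [SerializeField] Tilemap cropTilemap; // the tilemap holding crop tiles
    [SerializeField] TileBase soilTile;   // hypothetical tile shown after harvesting

    // Swap the crop tile under a world-space position back to bare soil.
    public void Harvest(Vector3 worldPosition)
    {
        Vector3Int cell = cropTilemap.WorldToCell(worldPosition);
        if (cropTilemap.GetTile(cell) != null)
        {
            cropTilemap.SetTile(cell, soilTile);
        }
    }
}
```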
With his rolled-up sleeves and pompadour hairstyle, the farmer in Happy Harvest is ready to work. To get him moving around the scene, we used techniques like rigging his face to create different expressions, sprite libraries for character variations, and Sprite Swap for switching between sprites attached to the same bone during the animation process.

In the article “2D characters and animation in Happy Harvest,” we break down these and other techniques used to create the animations. You’ll get tips on how to:

- Draw and animate characters from different angles to suit a top-down perspective. In Happy Harvest, good-looking visuals are achieved with four directions.
- Work with skeletal animation in Unity using the 2D Animation and PSD Importer packages. These enable you to import your character artwork directly from Photoshop into Unity, bringing in all of the character’s layers as sprites and placing them exactly as they were painted in the app.
- Rig a character in the Sprite Editor.
- Connect sprites to bones, geometry, and weights.
- Use the Sprite Library Editor and Sprite Swap to manage types of animations other than those that can be achieved with bone rotations, such as facial expressions when the character changes the direction they’re facing (see the sketch below).
- Use the 2D Inverse Kinematics (2D IK) tool, which is part of the 2D Animation package. It calculates bone rotations so that a chain of bones can reach a target position.
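At runtime, Sprite Swap is driven by the SpriteResolver component from the 2D Animation package. A minimal sketch of swapping a face sprite by category and label (the “Face” category and the direction labels are hypothetical Sprite Library entries, not the demo’s):

```csharp
using UnityEngine;
using UnityEngine.U2D.Animation;

public class FaceDirectionSwapper : MonoBehaviour
{
    // SpriteResolver sitting on the face bone's sprite
    [SerializeField] SpriteResolver faceResolver;

    // Swap the face sprite when the character changes facing direction.
    // "Face" and the direction labels are hypothetical library entries.
    public void SetFacing(string direction) // e.g., "Up", "Down", "Left", "Right"
    {
        faceResolver.SetCategoryAndLabel("Face", direction);
    }
}
```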
The farmer’s crops need both sunshine and rain. Luckily, the evening brings rainfall, and our hard-working farmer can retire to his little home with a cozy fire in the hearth.

There are different options for creating 2D visual effects like these in Unity. You can animate an explosion frame by frame or spawn particles and cloud sprites. Use the Built-in Particle System for particle spawning on the CPU. Alternatively, you can leverage the GPU and use the VFX Graph and Shader Graph to spawn millions of particles or apply post-processing effects with URP Volumes.

In the article “2D special effects with the VFX Graph and Shader Graph,” you’ll learn about the different techniques used to create the special effects in Happy Harvest, including:

- Simple particle effects created with the Built-in Particle System, like falling leaves from the bushes or the farmer’s footprints
- Flipbook particles to create moths around the lamps at night using the Built-in Particle System, or water splashes from the rain spawned by the VFX Graph
- Common effects like a shader applied to the water tiles to make the waves move, the fire in the fireplace, and the smoke from the chimney
- Weather effects like rain and thunder made in the VFX Graph – the rain particles use the 2D Lit shader so they blend nicely with the environment and react to lights
- Shaders to move the trees and other vegetation to simulate a light breeze
- Tint and bloom post-processing effects applied to the whole scene to set a warm, cheerful mood

Learn more about creating visual effects in our e-book The definitive guide to creating advanced visual effects in Unity. The e-book provides a complete overview of how to use visual effects authoring tools in Unity to create advanced effects, including water and liquid, smoke, fire, explosions, weather, impact, magic, electricity, and much more.

Happy Harvest is a playground for 2D creators who want to pick up new visual techniques in Unity. Expand on the sample, reuse its elements and scripts in your own projects, and test it on your mobile and desktop devices. This sample and its supporting content are designed to provide useful tips for everyone, from beginners to experienced 2D developers. Happy harvesting!

If you haven’t yet, be sure to download these advanced e-books that cover 2D game development as well as rendering and visual effects (3D and 2D) in Unity:

- 2D game art, animation, and lighting for artists
- Introduction to the Universal Render Pipeline for advanced Unity creators
- The definitive guide to lighting in the High Definition Render Pipeline in Unity
- The definitive guide to creating advanced visual effects in Unity

Plus, check out our other 2D demos, The Lost Crypt and Dragon Crashers. You’ll find many more resources for advanced programmers, artists, technical artists, and designers in the Unity best practices hub.

Got feedback? Please share your thoughts on the demo in the dedicated forum.

>access_file_
963|blog.unity.com

How to optimize latency in your ad waterfall

When a developer is ready to show an ad but the video isn’t ready, their users’ experience can suffer, and so can revenue, due to missed opportunities to show an ad. By using latency data to pinpoint optimization opportunities, you can increase impressions and improve your user experience, ultimately boosting revenue. Let’s explore how to get started with latency optimization.

Does my app have a latency issue?

First, you need to confirm whether you have a latency issue; you might not have one at all. Let’s review the key latency terms:

- Waterfall latency is the time (in seconds) it takes to get an ad ready, from the moment an ad is requested until it’s ready to show
- Instance latency is the time (in seconds) it takes for an instance to respond to an ad request
- Latency tolerance is the average duration of time a developer is willing to accept between ad impressions

You only have a latency issue if your waterfall latency is higher than your latency tolerance; in other words, if it takes longer to get an ad ready than the time you have set between ads.

If you don’t know your latency tolerance, you can calculate it using internal BI metrics, or you can estimate it by playing your game and measuring exactly how often you’re seeing ads. Pay careful attention to the ads: if they’re not appearing at the moments they’re scheduled for, it’s an issue worth investigating and minimizing.

How do I fix my latency issue?

Reducing latency is a multi-step process and differs depending on the ad unit, so let’s start with banners and interstitials. The first step is identifying instances that are taking too long to load, and LevelPlay’s real-time pivot can help you do just that. Here’s how to find and delete them.

Identify waterfalls with potential

Start by identifying waterfalls that need optimizing, and focus where optimizing could make the most impact on revenue. Look for apps, ad units, and geo groups with relatively high waterfall latency and high revenue. Then you can generate your own waterfall from the past 7 days using the apps, ad units, and geo groups you found as your waterfall filters. To optimize for latency, your waterfall should include “revenue,” “revenue SOW (share of wallet),” “instance latency,” and “eCPM” as measures. Make sure to sort it by eCPM.

Identify problematic instances

Now identify your target: instances that are slowing down your waterfall and not generating enough revenue to justify the slowdown.

First, count how many instances you have in your waterfall (on Unity LevelPlay, you can find this by going to the mediation management page, where you can download a CSV of your waterfall). This tells you whether you have a short or long waterfall. A short waterfall has just a few instances causing latency issues, but assuming you have a long waterfall (e.g., 50+ instances), you’ll need to pinpoint many problematic instances.

Next, use your revenue filter to find instances that have the least impact on revenue; we suggest starting with anything generating less than 0.3% of your revenue.

Then, count how many instances are filtered and the sum of their overall share of wallet. It’s important to find a balance between instances contributing to your revenue share of wallet (market share of revenue) and instances causing latency. Generally, it’s best to remove as many instances as possible without reducing your revenue share of wallet by more than about 6%. It’s too risky to remove more than 6% of your share of wallet, but removing anything less than 3% may not have enough of an impact on your latency. So, if you’ve counted your instances and the sum of their overall share of wallet is more than 6% or less than 3%, adjust your revenue filter as needed: if the share of wallet is too high, lower the filter to 0.2%, and if it’s too low, try raising it to 0.4%. (The sketch below captures this heuristic in code.)
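As a rough sketch of that filter-tuning heuristic (the data model and thresholds mirror the rules above; nothing here is a LevelPlay API, and real analysis would run on the exported CSV):

```csharp
using System.Collections.Generic;
using System.Linq;

public record AdInstance(string Name, double RevenueShare, double LatencySeconds);

public static class WaterfallPruner
{
    // Find removal candidates: instances under the revenue filter whose
    // combined share of wallet lands in the "safe" 3-6% band.
    public static List<AdInstance> SelectForRemoval(List<AdInstance> waterfall)
    {
        double filter = 0.003; // start at 0.3% of revenue

        while (true)
        {
            var candidates = waterfall.Where(i => i.RevenueShare < filter).ToList();
            double totalShare = candidates.Sum(i => i.RevenueShare);

            if (totalShare > 0.06) filter -= 0.001;      // too risky: tighten toward 0.2%
            else if (totalShare < 0.03) filter += 0.001; // too timid: loosen toward 0.4%
            else return candidates;                      // within the 3-6% band

            if (filter < 0.001 || filter > 0.01)
                return candidates; // no filter hits the band; review manually
        }
    }
}
```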
During this process, make sure you’re focusing on:

- Only non-bidding instances
- Instances toward the top of the waterfall, not the bottom; the instances at the bottom aren’t causing delay like the instances at the top are

Reducing latency in a short waterfall

Even if you’re looking at a short waterfall, you might still have latency issues. In this case, the latency isn’t caused by having too many instances in your waterfall; rather, it’s a case of specific instances with high latency, so focus on instance latency.

Try looking for any high-latency instances at the top of your waterfall. If you can identify a problematic instance, decide how to address it: if it has a low revenue share of wallet, then A/B test removing it. If it has a significant share of wallet in your waterfall (e.g., more than 5%), you can still A/B test removing it, but pay attention to how that impacts performance.

A/B testing instance removal

Now that you’ve selected the instances you’d like to try removing, you should run an A/B test. Make your A group the original waterfall and your B group the new waterfall with the instances removed. Make sure the only difference between the groups is the removed instances; otherwise you can’t pinpoint exactly what caused your boost (or drop) in revenue. As you analyze your test results, keep an eye on your KPIs: your B group should have decreased waterfall latency, which should lead to increased impressions per DAU, ARPDAU, and ARPU.

What about rewarded video ads?

With rewarded video ads, optimizing latency is a slightly different process thanks to progressive loading, which sets up the ad loading process in advance: while the user is watching the first ad, the second one is already loading, and so on. Like other ads, rewarded videos can still have latency, but thanks to progressive loading, you should only see latency during the first video of a session (session depth 1).

To examine rewarded video latency in your waterfall, break up your waterfall by “session depth” and keep an eye on waterfall latency. Then confirm two things: first, that placements with back-to-back rewarded videos have less than one second of latency (from the second video onward), and second, that your first video doesn’t have high latency. Any instances that don’t pass these two tests are worth investigating and even A/B testing their removal. (If you decide to remove instances, it’s important to see how this might impact later sessions, so make sure to first remove the “session depth” filter from your view.)

The further you dig into your waterfall’s latency, the further you can optimize it, and it’s never too late to get started. Latency optimization is a win-win: your users get a better experience, and you boost impressions and revenue.

>access_file_
964|blog.unity.com

An overview of carriers, OEMs, MVNOs, and how they benefit your on-device strategy

With nearly 87% of all mobile users in the United States expected to own a smartphone by 2025, according to Statista, on-device advertising is a sure-fire way to reach the massive, pre-existing audience on smartphones around the world.

To better grasp the importance of this channel and how to reach users effectively through it, it’s essential to understand the bones of the smartphone industry: the difference between carriers, original equipment manufacturers (OEMs), and mobile virtual network operators (MVNOs), and how the wireless telecommunications space has evolved over time. Let’s take a look.

There are three key players that make up the smartphone industry: carriers, OEMs, and MVNOs. Each plays a role in creating new technologies that push the industry forward, building a larger audience and changing how we interact with our devices. While one can’t operate without the others, it’s worth taking time to understand their unique functions in the industry.

Mobile carriers provide the cellular connectivity services that power your phone. In the U.S., mobile carriers must acquire a radio spectrum license – used to carry information wirelessly – from the government. With many of the world’s vital services completely reliant on spectrum, it’s a finite and critical asset that requires careful management by the government. That said, there are only a few large players: Verizon Wireless, T-Mobile, AT&T, and Boost in the U.S., and Orange, Vodafone, Deutsche Telekom, Hutchison, and Bouygues in Europe.

Including carriers in your on-device strategy means you’re prepared to reach a distinct group of subscribers that is geographically specific, based on the best available radio spectrum in the area. Throughout the years, carriers have increased their subscriber counts by leveraging technologies that improve the quality of services, the number of applications, and overall operations.

Carriers are also responsible for encouraging OEMs to create devices that can run on their new technology. After all, carriers don’t matter much without a device that supports their operations, and vice versa.

In the mobile industry, OEMs are companies that manufacture their own phones in their own factories. Examples of OEMs include Apple, Samsung, LG, and Lenovo. Considering carriers can’t operate without a quality device that supports their technology, OEMs determine how, when, and why mobile devices are used, making them critical to the app economy.

As a marketer, including OEMs enables you to reach anyone who owns a device, without worrying about whether they pay a subscription fee. By taking an OEM-first approach to on-device advertising, you can sell to a broad range of users, many of whom are loyal to a specific OEM. For example, whether you’re a Samsung or Apple owner has become something of a personality trait in recent years.

MVNOs purchase their radio spectrum directly from carriers. Essentially, they are streamlined communication providers that rely on more established carriers for network services. Since MVNOs don’t have to build and maintain their own infrastructure, they’re able to offer their services at lower rates. Interestingly, Germany, France, Spain, the U.K., and Denmark together have close to 375 active MVNOs, while North America and China have just 130 and 14 active MVNOs respectively, according to Delta Partners.

Including MVNO sources in your on-device strategy uncovers a wide range of supply for your app and allows you to reach new users quickly and efficiently.
Unlocking these sources also helps you spread risk between channels.

From 1G to 5G, each generation of wireless service has contributed to the growth of the mobile industry. In the 1980s, 1G focused on voice communications, and in the early 1990s, 2G introduced basic data capabilities. In 1994, IBM released the Simon Personal Communicator, widely considered the first smartphone. Due to cost, the phone ended up in the pile of devices deemed too far ahead of their time, but that didn’t mean users weren’t excited about smart features: personal digital assistants (PDAs) were all the rage.

Right before the turn of the millennium, OEMs began successfully marrying PDAs and mobile phones. The Ericsson R380, manufactured by Ericsson Mobile Communications and released in 2000, was the first phone marketed as a smartphone. In 2001, Kyocera released the 6035, a smartphone built around the Palm OS, and Handspring released the Treo 180 in 2002. The Kyocera 6035 was the first smartphone to be paired with a major carrier through Verizon, and the Treo 180 provided services via a GSM line and an operating system that integrated telephone, internet, and text messaging services. 3G also launched commercially in the early 2000s, offering higher transfer rates and improving cellular data performance.

Fast-forwarding, Apple released the first iPhone in 2007, reaching the U.K. through an exclusive deal with the mobile carrier O2 that represented one of the largest OEM and carrier deals of its time. The original iPhone ran on 2G networks; the 3G-capable iPhone 3G followed in 2008 and offered much faster web services, making the internet truly accessible on mobile networks. The look, interface, and functionality of the iPhone set the stage for future smartphone design. In 2008, the first phone running Google’s Android operating system, the HTC-manufactured T-Mobile G1, was released. It also ran on 3G and was far less advanced than the phones we see today, like the Galaxy Fold.

In 2010, Verizon launched 4G in the United States, offering speeds up to 10 times faster than 3G. 4G was carriers’ crown jewel up until 2019, when 5G began rolling out. 5G once again offers lower latency, higher capacity, and increased bandwidth, and we’re continuing to see advanced devices released, like the Galaxy Fold, Nokia G20, and BlackBerry KEY2 LE.

Today – with more competition, new entrants, and higher consumer expectations – OEMs, carriers, and MVNOs have to figure out how to grow revenue for all parties. Satisfying different niches in the mobile industry, these three key players can’t operate without each other. Understanding their core business models is essential to getting started with on-device campaigns.

Promote your app directly on devices with Aura from Unity to create rich experiences that engage and retain customers throughout the device lifecycle.

>access_file_
965|blog.unity.com

Unity 2023.1 Tech Stream is now available

We’re excited to share that the 2023.1 Tech Stream is available for download. You’ll find improved features and render quality for both the High Definition Render Pipeline (HDRP) and Universal Render Pipeline (URP), along with platform graphics improvements, additional connectivity types for multiplayer solutions, and more.

Tech Stream releases allow you to go hands-on with early access to the latest features, and you can share your feedback on how we can build even better tools to power your creativity. Here are a few highlights from this release, but you can always get more details in the official release notes.

In Unity 2023.1, we continue to bring additional features to enhance render quality and feature coexistence in both HDRP and URP (for more on our vision, read our Games Focus blog post on scalable rendering).

You can now add lens flares generated from all highlights visible on screen (direct, indirect, emissive surfaces, specular highlights) in just a few clicks with a single post-process volume. Compatible with both HDRP and URP, this feature can be used alongside and complements SRP Lens Flares, which offer more advanced artistic control over light lens flares.

With HDRP, we want to offer a coherent out-of-the-box experience for artists to create high-fidelity environments for PC and consoles. In 2022.2, we introduced one of the last missing parts, the first-ever native water system in Unity. In 2023.1, we’re focusing on enabling finer authoring of water to better integrate with the world and gameplay:

- Water Excluder lets you dynamically remove water from inside a boat or cave.
- Water Deformer deforms water locally for waves, vortices, or deformations around a moving ship.
- Foam Generator allows you to simulate white water for a boat trail or around rocks in open oceans.
- Current maps create local currents, both managing surface waves to follow the currents and extending the water query API to allow objects to drift.
- You can also take advantage of fine control to customize the Water Line when the camera crosses the water surface.

To get started, we created multiple samples available in the HDRP package from the Package Manager, as well as various demo scenes available on GitHub. To learn more about water rendering in HDRP, you can watch our talk “An overview of the new HDRP Water System” from GDC 2023.

To improve the visual fidelity of transparent and transmissive objects, you can now add an extra optional pass to compute the thickness of transparent objects. This takes into account the thickness of non-opaque materials traversed by the light, which is especially important for nonuniform objects or when rendering multiple objects one behind another.

In 2023.1, we are bringing the last pieces of the technology used for digital human and creature rendering, as seen in the Enemies and Lion demos. HDRP’s high-quality line renderer allows you to render lines using advanced voxelization to fix the transparency ordering and aliasing issues typically seen when rendering hair and fur.

We also improved high-fidelity skin rendering, optimizing the performance of the subsurface scattering pass at high resolutions and adding dual lobe and diffuse power options to Diffusion Profiles for materials using Subsurface Scattering. When simulating skin, it’s common to use two specular lobes to account for the thin, oily layer covering the epidermis.
To see it all in action, you can download the Enemies project on the Unity Asset Store.

We’ve made stability and performance improvements to DirectX 12 and ray tracing, as well as increased compatibility with the engine’s existing feature set and console support. With this, the Ray Tracing API and HDRP’s ray-traced effects, such as ray-traced shadows, reflections, ambient occlusion, global illumination, path tracing, and recursive rendering, are officially out of preview.

We also added VFX Graph ray tracing support, enabling the authoring of complex particle effects that are compatible with HDRP’s ray-traced effects, as well as terrain heightmap support to use ray tracing on large worlds. Instancing support added to the Ray Tracing API allows you to efficiently ray trace large, dense scenes that include high-frequency repeating meshes and details.

Start experimenting with ray tracing now by installing the HDRP Sample Scene template in the Hub, which has been updated to provide new ray tracing quality settings. Lastly, this release also introduces Inline Ray Tracing support for DXR 1.1-capable platforms. You can now issue hardware-accelerated ray queries from within compute shaders in order to traverse the bound Ray Tracing Acceleration Structure and perform intersection testing.

To help celebrate ray tracing coming out of preview, NVIDIA has sponsored the Unity 2023.1 beta, providing NVIDIA GeForce RTX™ 3070 graphics cards as prizes for participants. The GeForce RTX™ 3070 graphics card is powered by Ampere, NVIDIA’s second-generation RTX architecture. Built with dedicated second-gen RT Cores and third-gen Tensor Cores, streaming multiprocessors, and high-speed memory, it gives you the power you need to rip through the most demanding games. Winners will be contacted directly with instructions on how to claim their new graphics cards.

For Light Probe-lit objects, Probe Volumes enable you to set up and iterate on Light Probe placement more quickly. The visual quality of Light Probe-lit objects is higher, and probe lighting affects Volumetric Fog in HDRP as well as particles. In some scenarios, Probe Volumes also enable you to indirectly light static objects, for example in an environment. Combined with tools to reduce light leaking, they can decrease the need for lightmaps, reducing baking time with less need to author lightmap UVs.

Bake sets enable you to set up and blend between different Light Probe-lit lighting scenarios in HDRP. At runtime, the GPU memory footprint is reduced by streaming probe data from the CPU. With the 2023.1 release, the core functionality and user experience of Adaptive Probe Volumes are improved and officially out of preview.

We have also implemented limited support for Adaptive Probe Volumes in URP. Note that this iteration does not support Lighting Scenario Blending or Lighting Normalization for Reflection Probes, and it may not yet be optimized for performance, especially on lower-end platforms.

To learn more, you can watch our talk “Efficient and impactful lighting with Adaptive Probe Volumes” from GDC 2023 and check out the lighting tutorial “4 techniques to light environments in Unity” from Unite 2022.

Baked GI now uses the new LightBaker v1.0 architecture for on-demand bakes to provide you with a more predictable and stable light baking experience.
When baking with the GPU backend in on-demand mode, you can use the Baking Profile in the Lighting window to select the tradeoff between performance and GPU memory usage.

A new output in VFX Graph allows you to inject particles into HDRP’s Volumetric Fog to generate clouds, smoke, mist, and fire effects, or to make Volumetric Fog more dynamic and procedural. Different blend modes (add, multiply, min-max) allow you to use particles to add to, remove from, or combine with existing fog. For example, you could use smoke to add density to the fog, show wind chasing mist, or create underwater streams.

In keeping with the vision shared in this Games Focus post, advances in platform support and technical integrations continue in 2023.1. We continue to add improvements to performance and functionality for key platforms, including Windows, Android, iOS, Meta Quest, Magic Leap 2, Xbox®, PlayStation®5, and PlayStation®VR2.

Unity now supports building projects for Arm-based Windows devices, achieving native performance on devices that use ARM64 processors, such as the Surface Pro 9 and the Lenovo ThinkPad X13s. This opens up new possibilities for you to create high-performance, immersive experiences on a wider range of devices.

There are two key features for mobile development on Android devices that you can use with the 2023.1 Tech Stream: Android GameActivity and the Android Project Configuration Manager. Android GameActivity gives you greater control over the essential parts of your application, as well as more freedom and flexibility in your core code. You can find the documentation here. If you use plug-ins or are a plug-in developer yourself, you now have a more flexible and robust way to configure Android Gradle settings (manifest, settings, and build) using the Project Configuration Manager. Learn how to modify Gradle project files in the Unity Manual.

Adaptive Performance 5.0 includes enhancements to control the lifecycle of Adaptive Performance at runtime. Additionally, we are launching an Android provider to extend this package to most Android devices.

Unity 2023.1.0a22 extends HDR display support to URP for desktop and console platforms, with mobile and XR platform support to follow in 2023.2. HDR displays are capable of reproducing images with higher peak brightness and a wider color gamut in order to achieve better color saturation and contrast in highlights and shadows. The result is more realistic variation in luminance across scenes, increased surface detail, and improved depth perception.

To further improve DirectX 12 performance on Windows platforms, Unity 2023.1 introduces a new graphics jobs threading mode called Split Graphics Jobs. This mode aims to reduce unnecessary beginning- or end-of-frame synchronization between the main and native graphics jobs threads, resulting in significant performance improvements. In our internal testing, we’re observing meaningful CPU render setup performance gains over DX11 when targeting DX12 using Split Graphics Jobs. For more information, see the official forum post.

XR Interaction Toolkit v2.3.0 includes several new features and capabilities, including Interaction Groups, Poke and Gaze Interactors, hand interaction integration and samples, and Device Simulator usability improvements. You’ll also find a new Interaction Affordance System, which allows you to build high-performance interaction indicators (visual, audio, haptics, and so on). You can install XRI 2.3 via the Package Manager and find more details in the documentation.

We celebrated a major milestone with Netcode for Entities, released alongside Unity 2022 LTS, and we intend to continue adding capabilities to our multiplayer solutions in the Editor. We’re also working to better integrate all of our multiplayer solutions, such as Netcode for GameObjects and Editor-side features, with Unity Gaming Services to provide a single multiplayer solution, as outlined in our Games Focus multiplayer post.

EXPERIMENTAL RELEASE: The Unity Transport Protocol (UTP) is the lower-level networking infrastructure that handles the transport of game data across networks and connected platforms and devices. In 2023.1, UTP supports both web and TCP connections to improve the capabilities of the technologies that rely on it, including our netcode solutions.
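For context on where UTP sits in the stack, here is a minimal sketch of the driver-level server setup that these new connection types plug into. It assumes the UTP 2.x scripting API (NetworkDriver, NetworkEndpoint); treat the exact type names as assumptions and check the Unity Transport docs for your package version:

```csharp
using Unity.Networking.Transport; // Unity Transport (UTP) package
using UnityEngine;

public class MinimalUtpServer : MonoBehaviour
{
    NetworkDriver driver;

    void Start()
    {
        // Create a driver and listen on port 7777. In UTP, different network
        // interfaces (UDP, and now web/TCP) can sit behind the same driver.
        driver = NetworkDriver.Create();
        var endpoint = NetworkEndpoint.AnyIpv4.WithPort(7777); // assumption: 2.x naming
        if (driver.Bind(endpoint) == 0)
            driver.Listen();
    }

    void Update()
    {
        // Pump the driver and accept any pending connections.
        driver.ScheduleUpdate().Complete();
        NetworkConnection connection;
        while ((connection = driver.Accept()) != default)
            Debug.Log("Accepted a connection");
    }

    void OnDestroy() => driver.Dispose();
}
```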
EXPERIMENTAL RELEASE: Multiplayer Play Mode (MPPM) is a workflow improvement feature from our multiplayer toolset that aims to make developing a multiplayer game feel as seamless as developing a single-player one. Leveraging MPPM allows you to emulate multiple players connected to the same game experience simultaneously, all on one machine. It supports recently released features such as Netcode for GameObjects, so you can be efficient in multiplayer development, even with a low hardware investment.

As outlined in our Games Focus kickoff blog post, we are committed to a stable core. This means that we continue to update C# support in different ways, including the under-the-covers compilation process. In previous releases, our implementation of IL2CPP made it challenging at times to track down the specific part of the code that a managed stack trace was referring to, because it only provided method names. With 2023.1, developers can now enable debug symbol processing, which displays C# source code line number information, making it far easier to track down specific areas in a game project’s code base. Read more on how to activate and view this additional information in the documentation.

We shared our ongoing dedication to bringing new capabilities and functionality to the Editor, and we continue to ensure that creating your projects feels seamless. We’re improving and standardizing the context menus that pop up when you right-click across items and workflows. Improvements include more consistent interactions, sorting optimizations, and an optional search field. The Terrain Tools package has also been migrated to the new Overlays toolbar framework for a more consistent and predictable experience with Unity scene authoring workflows.

To read more about the 2023.1 Tech Stream, check out the 2023.1 release notes for a comprehensive list of features and the Unity Manual for documentation. As you dive in, keep in mind that while each Tech Stream release is supported with weekly updates until the next version, there is no guarantee of long-term support for new features. Also, remember to always back up your work prior to upgrading to a new version; our upgrade guide can assist with this. For projects in production, we recommend using Unity LTS for greater stability and support.

Each Tech Stream is an opportunity both to get early access to new features and to shape the development of future tech through your feedback. We want to hear how we can best support you and your projects. Let us know how we’re doing on the forums, or share your feedback directly with our product team through the Unity Platform Roadmap.

>access_file_
966|blog.unity.com

9 Verified Solutions on the Unity Asset Store's new AI Marketplace

Explore a wide range of third-party solutions for AI-driven development and gameplay enhancements, now available on the Unity Asset Store’s new AI Marketplace. Whether you’re looking for professional-quality Verified Solutions, community-built solutions, or up-and-coming AI tools, the Unity Asset Store is your destination for an ever-growing selection of AI solutions.

Verified Solutions have undergone enhanced vetting, and each provider is committed to offering high-quality solutions, service, and long-term support. No matter how large or ambitious your project is, these solutions can scale to help you meet any creative challenge.

- Convai gives AI characters in games and virtual worlds the ability to have human-like conversations and more. You can use Convai to add a backstory, knowledge base, voice, and overall intelligence to your character assets, enabling them to converse naturally with players and carry out actions.
- Inworld AI offers an advanced AI NPC platform that enables you to go beyond dialogue trees and create fully interactive characters. Inworld’s low-code integration for Unity makes worlds more believable, drives immersive gameplay, and helps increase player retention.
- Layer AI’s mission is to give every developer access to high-quality game art using the power of AI. You can use Layer to upload your creations and generate hundreds of additional assets with just a few clicks, all in your custom art style. An increasing number of game studios use Layer for scaling creative asset production, live ops pipelines, and marketing content.
- Leonardo Ai is a content creation suite that uses generative AI, enabling you to create stunning 2D assets and textures. You can also generate full UV texture maps for 3D models.
- LMNT lets you unleash your creativity, amplify emotions, and enhance your storytelling with AI speech for characters and voice-over. You can either pre-render speech as a traditional asset or generate it on the fly for dynamic dialogue.
- modl:test’s Automatic QA enables you to test more content than a human can. AI bots explore your levels to detect and report errors, events, and crashes on the Modl.ai cloud platform.
- Polyhive is an AI texturing suite that enables you to texture 3D meshes with text. It uses a mesh-aware texturing process with 360-degree consistency, so you can produce high-quality assets in minutes.
- The Replica Studios AI Voice Actors Unity plug-in integrates seamlessly with Replica Studios’ desktop app, allowing you to generate AI text-to-speech and transfer voice-overs directly into your active Unity project.
- Zibra Effects by Zibra AI is a versatile, no-code toolset that simplifies VFX creation and helps you take virtual worlds to new heights of immersion through real-time simulation and lifelike physics.

This is just the beginning of Unity AI. Please keep in touch with us in the Assets and Asset Store forum for the latest updates.

>access_file_
968|blog.unity.com

Now available: 12 recipes for popular visual effects using the Universal Render Pipeline

Our new cookbook for Universal Render Pipeline (URP) effects is now available to download for free. This guide provides 12 recipes for popular visual effects that can be applied to a wide range of games, art styles, and platforms. Get ready to cook up Renderer Features, GPU-instantiated meshes, decals, volumetric materials, and more. You can use it alongside the Introduction to the Universal Render Pipeline for advanced Unity creators guide, which offers a wealth of information about how to use URP for creators who have developed projects with the Built-in Render Pipeline.

Here’s a handy overview of the recipes you’ll find in the book.

Renderer Features provide you with ample opportunities to experiment with lighting and effects. This recipe focuses on Stencils, using only the bare minimum of required code. If you work alongside the sample project, open the sample scene via Scenes > Renderer Features > SmallRoom – Stencil in the Editor. The sample project uses a magnifying glass held over a desk, and the aim is to convert the lens of the magnifying glass so that it lets you see through the desk like an x-ray image. The approach uses a combination of Layer Masks, shaders, and Renderer Features. Renderer Features are a great way to achieve dramatic custom effects or gameplay possibilities.

Exchanging data between the CPU and GPU is a major bottleneck in the rendering pipeline. If you have a model that needs to be rendered many times using the same geometry and material, Unity provides some great tools to do so, which are covered in the cookbook’s instancing chapter. This recipe uses a field full of grass to illustrate the concept of instancing. It uses the SRP Batcher, GPU instancing, RenderMeshPrimitives, and ComputeBuffers (see the sketch below for the core pattern).
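As a rough sketch of that GPU-instancing pattern (not the cookbook’s code; the _Positions buffer name is a hypothetical convention, and the material’s shader is assumed to read per-instance positions from it):

```csharp
using UnityEngine;

public class GrassFieldRenderer : MonoBehaviour
{
    [SerializeField] Mesh grassBladeMesh;
    [SerializeField] Material grassMaterial; // shader must read the _Positions buffer
    [SerializeField] int instanceCount = 10000;

    ComputeBuffer positionBuffer;

    void Start()
    {
        // Upload one position per blade to the GPU once, instead of
        // sending per-instance data from the CPU every frame.
        var positions = new Vector3[instanceCount];
        for (int i = 0; i < instanceCount; i++)
            positions[i] = new Vector3(Random.Range(-50f, 50f), 0f, Random.Range(-50f, 50f));

        positionBuffer = new ComputeBuffer(instanceCount, sizeof(float) * 3);
        positionBuffer.SetData(positions);
        grassMaterial.SetBuffer("_Positions", positionBuffer); // hypothetical buffer name
    }

    void Update()
    {
        // Draw all blades in a single instanced call.
        var rp = new RenderParams(grassMaterial)
        {
            worldBounds = new Bounds(Vector3.zero, new Vector3(100f, 10f, 100f))
        };
        Graphics.RenderMeshPrimitives(rp, grassBladeMesh, 0, instanceCount);
    }

    void OnDestroy() => positionBuffer?.Release();
}
```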
Often used together, toon and outline shaders present two distinct challenges. The toon shader takes the color that would be created using a URP-compatible Lit shader and ramps the output rather than allowing continuous gradients, thereby requiring a custom lighting model. The example in this recipe uses Shader Graph. However, Shader Graph doesn’t support custom lighting, so there’s no node available to directly access the Main and Additional Lights; instead, you can leverage a custom node to access those. Check out the toon and outline shading recipe to get the full details.

Ambient Occlusion is a post-processing technique available from Unity 2020.2. This effect darkens creases, holes, intersections, and surfaces that are close to one another. In the real world, such areas tend to block out or occlude ambient light, so they appear darker. See how you can implement a real-time Screen Space Ambient Occlusion (SSAO) effect as a Renderer Feature using URP.

Decals are a great way to insert overlays onto a surface. They’re often used to add visuals such as bullet holes or tire treads to the game environment as the player interacts with the scene. If you follow along with this recipe, you’ll work with URP Decal Projector properties, create the material, and even add a decal with code.

The water recipe is created in Shader Graph to make the steps more accessible. It’s built in three stages:

- Creating the water color
- Moving tiled normal maps to add wavelets to the surface
- Adding moving displacement to the vertex positions to create a swell effect

While this recipe forms the basis of a simple water shader, you can enhance it using caustic reflections, refraction, and foam.

Using LUT textures is an efficient way to create dramatic color grading, and this approach can be useful in many games. The recipe involves using one filter, but the steps employed apply to all of them.

Lighting with URP is similar to using the Built-in Render Pipeline; the main difference is where to find the settings. This chapter in the cookbook covers related recipes for real-time lighting and shadows, including baked and mixed lighting using the GPU Progressive Lightmapper, Light Probes, and Reflection Probes. You’ll pick up enough instruction for a five-course meal!

A few things to keep in mind about shaders and color space: When using lighting in URP, you have a choice between the Lit Shader and Simple Lit Shader, which is largely an artistic decision. If you want a realistic render, use the Lit Shader; if you want a more stylized render, Simple Lit delivers stellar results.

Shadow settings are set using a Renderer Data object and a URP Asset. You can use these assets to define the fidelity of your shadows. This recipe includes tips for the Main Light and Shadow Resolution, Shadow Cascades, baking lights, and more.

Light Probes save the light data at a particular position within an environment when you bake the lighting by clicking Generate Lighting via the Window > Rendering > Lighting panel. This ensures that the illumination of a dynamic object moving through an environment reflects the lighting levels used by the baked objects: it will be darker in a dark area and brighter in a lighter one. Follow this recipe to find out how to position Light Probes with a code-based approach in order to speed up your editing, how to use Reflection Probes in your scene, and how to blend them.

Screen Space Refraction uses the current opaque texture created by the render pipeline as the source texture to map pixels to the model being rendered. This method and recipe are about deforming the UV used to sample the image. Learn how to use a normal map to create refraction effects as well as tint a refraction effect.

Finally, there’s a recipe for using ray marching to render a 3D texture. Unity supports 3D textures, which are an array of images placed in a grid on a single texture, rather like a Texture Atlas; the difference is that each image is the same size. Using a 3D UV value, you can source a texel from the grid of images, with UV.Z defining the row and column of the individual image to use. You can also use Houdini when creating the 3D texture. Alternatives to a 3D texture include using multilayered Perlin noise, or prebaking a tileable noise texture using Unity.

There are many advanced resources available for free from Unity. As mentioned at the beginning of this blog post, the e-book Introduction to the Universal Render Pipeline for advanced Unity creators is a valuable resource for helping experienced Unity developers and technical artists migrate their projects from the Built-in Render Pipeline to URP. All of the advanced e-books and articles are available from the Unity best practices hub. E-books can also be found on the advanced best practices documentation page.

>access_file_
970|blog.unity.com

The new Water System in Unity 2022 LTS and 2023.1

Most high-end environments require an ocean, lake, river, or even just a pool at some point. There are many ways to emulate these with a few shaders, but building an entire physically based, coherently integrated system in the High Definition Render Pipeline (HDRP) has been a complex and time-consuming task.

Today, we spotlight the new Unity Water System, introduced for the first time in HDRP as part of the 2022.2 Tech Stream, with a focus on rendering. The upcoming 2023.1 Tech Stream extends the system’s feature set for better integration with worlds and gameplay. It’s now easy for environment artists to create and configure water surfaces – such as oceans, lakes, rivers, and pools – that integrate with other world and gameplay elements, like boats or seasides.

The first step is to activate water in your project:

- Activate and configure water for each quality level (render pipeline assets for each quality level)
- Activate water in the frame settings of your camera(s)
- Activate water inside your scene with a Water Rendering volume override, which allows you to enable water rendering only when you need it, depending on where your camera is

Once water is active, you are only a few clicks away from creating beautiful water bodies from the GameObject menu. HDRP provides three water surface types: pool, river, and ocean. (See the script sketch after this overview.)

Creating water bodies is fast, but the Water System provides many tools and options to customize and integrate water with your world and gameplay. For a quick overview of all the systems and extensions in action, take a look at our water samples in the HDRP package (open Package Manager > select the High Definition Render Pipeline package in your assets > choose the water samples inside the Samples section).

We provide four sample scenes:

- The Swimming Pool scene shows how to set up different “pools” in the same scene at different heights, as well as how to use custom meshes to render surfaces with custom shapes.
- The Island scene showcases the ocean body and includes a water mask to remove swells around the island, a water deformer to create waves, decals to place foam on the beach, a foam generator to generate foam behind the waves and around the island, and a water excluder to exclude water from inside a boat. It also focuses on leveraging Burst with the water query API to parallelize the computation of multiple objects (i.e., seagulls) floating on water.
- The Glacier scene showcases many features, including a river water body; a water deformer to create a waterfall; current simulation to make the water flow through the glacier; the water query API to make icebergs float down the river; a foam generator to add foam behind icebergs; custom foam on the borders of the river and for the waterfall swirl, using Shader Graph and the Water master node; decals to animate water falling in the waterfall; and sampled caustics projected outside of the water.
- The Water Line scene showcases the customization of the water’s surface level and underwater rendering using a custom pass, generating a larger, blurry separation and simulating the effect of water on a camera lens.

Now that you’re fully briefed, let’s dive into each system.
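Where the blog creates surfaces through the GameObject menu, the same setup can be sketched from script. This is a minimal, assumption-laden sketch: WaterSurface is HDRP’s component, but adding it from code with all simulation settings left at their defaults is our illustration, not a documented workflow:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition; // assumption: HDRP's WaterSurface namespace

public static class OceanBootstrap
{
    // Sketch: create a default water surface from script instead of the
    // GameObject menu. Surface type, wind, and wave settings are left at
    // their defaults here; configure them as you would in the Inspector.
    public static WaterSurface CreateOcean()
    {
        var go = new GameObject("Ocean");
        var surface = go.AddComponent<WaterSurface>();
        return surface;
    }
}
```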
The Water System comes with an off-the-shelf, physically based water shader designed around a recently introduced lighting model in HDRP, also available in Shader Graph for customization. You can adjust the smoothness, refraction, absorption, diffusion, and light-scattering properties of a water surface.

The scattering color works like the base color of the water, so start there to set up the general feel. Then you can change the absorption distance and refraction color to control how transparent your water is and the tint applied to objects you see through water refraction. For a clean Caribbean sea, you’d have a large absorption distance coupled with a cyan scattering and refraction color, whereas for a dirty river, you’d want a dark brown scattering color and a quasi-opaque surface with a small absorption distance.

There are limitations to water rendering in 2022.2, some of which have been addressed in 2023.1 (seeing water behind Volumetric Clouds, a precise waterline) or will be in 2023.2 (improved performance, support for rendering transparent surfaces overlapping water).

If you’re as big a fan of Commandant Cousteau as we are, you will enjoy diving underwater with the Water System. It can detect when the camera is under the water plane and simulate underwater rendering according to the water’s physical properties. Caustics can be generated procedurally to simulate the light refracted by the surface and projected onto the ground. Note that caustics can be reused as projections outside of water, using a decal, for example, to simulate water caustics reflected inside a cave.

When the camera sits between water and exterior, a water line is generated, and underwater rendering is composited on the submerged part of the camera. Water line rendering is improved in 2023.1, allowing you to customize it with a custom pass (see the Water Line sample scene).

What makes water even more visually attractive is how it moves. Note that the provided simulation focuses on deforming the water plane, not on fluid or splash simulation. The waves are procedurally generated using a Fast Fourier Transform (FFT) simulation. Simply put, this works by summing a lot of simple waves of different frequencies to form complex waves. By controlling the range of frequencies, you can control the agitation of your water.

The system works by adding up varied ranges of frequencies, which we call bands. HDRP supports up to three bands, which simulate three real-world effects on water and cover most use cases:

- The swell, representing the waves generated by the effects of the moon and the distant wind. It is solely controlled by the distant wind parameter and allows you to orient the swell’s direction.
- Agitation, representing the chaos generated by the wind.
- Ripples, representing currents or local wind effects. This is the band with the highest frequencies, or smallest waves, and it adds fine detail to the water.

For example, since pools have small perturbations, they only use the highest-frequency band: the ripples. A river, which is a bit more complex, will have ripples and an agitation band. And an ocean has the full frequency range, to cover all the various wave sizes we see in real life. The simulation is done on a square patch that is then repeated infinitely. In 2023.1, we are adding support for currents that will make ripples flow according to flowmaps.
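The band idea is easiest to see as plain wave superposition. This toy snippet is purely conceptual (a handful of sine waves with made-up amplitudes and frequencies, not HDRP’s FFT spectrum), but it shows how summing frequency ranges yields a complex surface and how each band’s range controls agitation:

```csharp
using System;

public static class ToyWaveBands
{
    // Toy stand-ins for swell, agitation, and ripples: each band contributes
    // a sine wave in its own frequency range. All values are illustrative.
    static readonly (float amplitude, float frequency)[] bands =
    {
        (1.00f, 0.05f), // swell: large, low-frequency waves
        (0.30f, 0.40f), // agitation: medium chop
        (0.05f, 3.00f), // ripples: fine, high-frequency detail
    };

    // Height of the toy surface at position x and time t: the sum of bands.
    public static float Height(float x, float t)
    {
        float h = 0f;
        foreach (var (amplitude, frequency) in bands)
            h += amplitude * MathF.Sin(2f * MathF.PI * frequency * (x - t));
        return h;
    }
}
```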
The simulation is completely deterministic, so you can get consistent results for multiplayer games, physics simulations, or recording a short movie or cinematic.

For rendering the water plane deformation, the system uses vertex displacement and can apply either to the provided procedurally generated geometry or to a custom mesh that you supply. The procedural geometry works by instantiating quads next to each other, similar to what terrain does for rendering. This can also be used when creating an infinite ocean or long rivers, to make sure the vertex density adapts to where the camera is.

On top of that, in order to have finer vertex density for rendering small ripples, HDRP can rely on GPU tessellation, which is a way to subdivide triangles using special shaders. With this capability, when the camera is within a certain range of the water surface, more triangles are generated close to the viewpoint. The system also provides various options to balance visual quality against rendering performance. Learn more about optimizing performance in our documentation.

As we just explained, the simulation runs entirely on the GPU for efficiency, but some of it can be mirrored on the CPU to allow you to sample the water height and currents (for example, to make objects float along a river). This has a high CPU cost, since a lot of computation is required for the simulation, but it is kept reasonable by using burstified jobs to parallelize operations more efficiently. Note that, based on feedback, the scripting API evolved between 2022.2 and 2023.1 and might require adjustments to your code between the two versions.

To get started, you can find two code samples in the water scenes: one in the Island scene, leveraging Burst to parallelize the sampling of many objects, and the other in the Glacier scene, showing how to make icebergs float along the currents of the river and fall down the cascade with simpler, non-parallelized code.
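As a non-parallelized sketch of that height-query pattern: the WaterSurface query API is real, but the exact type and member names used here (WaterSearchParameters, WaterSearchResult, and their fields) are assumptions to check against the HDRP scripting docs for your version:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class SimpleFloater : MonoBehaviour
{
    [SerializeField] WaterSurface waterSurface; // the surface to float on

    void Update()
    {
        // Ask the CPU-mirrored simulation for the water surface point under us.
        // Field names are assumptions; see the HDRP water scripting docs.
        var search = new WaterSearchParameters
        {
            startPositionWS = transform.position,
            targetPositionWS = transform.position,
            maxIterations = 8,
        };

        if (waterSurface.ProjectPointOnWaterSurface(search, out WaterSearchResult result))
        {
            // Snap this object to the simulated water height.
            Vector3 p = transform.position;
            p.y = result.projectedPositionWS.y;
            transform.position = p;
        }
    }
}
```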
But for artistic purposes, you may want to mask some regions out from receiving foam using a foam mask. To look more realistic, the water surfaces and simulations need to be modified locally to better integrate with your world and props. For example, you might remove the water geometry inside a boat, add breaking waves near the shore on the seasides exposed to the swell, produce foam around rocks, modify local currents, shape calm areas inside bays, create a waterfall in the middle of a river, or generate small whirlpools in a river or a giant vortex in the middle of the ocean. Multiple components allow you to customize the simulation, foam, and currents to create these local variations or more complex effects: The Water Mask allows you to attenuate or suppress ripples, swell, and foam on a specific portion of a water surface. Decals can be added on the water to create local foam, override the smoothness of the water, or simulate small local deformations such as droplets, impacts, or custom ripples by using a normal map. They can also be used to simulate wetness on the sand or rocks (see: the Island sample scene) or caustics on the walls (see: the Glacier sample scene) independently from the water system. Note that color cannot be changed, since these decals are treated as grayscale. Water deformers can locally modify the simulation to create waves, as showcased in the Island sample, or a waterfall, as illustrated in the Glacier sample. We measured the performance of the Glacier scene, where most of the features are activated, and profiled the Water System on various consoles. Through this measuring, we learned that water rendering takes approximately 4 ms on the GPU on the latest console generation, whereas previous-generation consoles took around 7 ms. This, of course, is highly dependent on the quality settings of your water, the complexity of the simulation, and the number of water pixels displayed on screen. But the system is performant enough to run smoothly on various platforms. Furthermore, the system uses deferred clustered lighting to optimize shading and support a high number of light sources. Most of the rendering time comes from the GBuffer pass, due to the large number of vertices required to get nice waves, even in the distance. When GPU tessellation is enabled on the surface, this cost can be controlled by changing the maximum tessellation level and the fade distance when the camera is far away from the water. In the Miscellaneous section of each water surface, you have access to various debug modes to visualize the current direction, deformation, foam, and other masks applied to the water. To see the Water System running in more complex scenes, we created a demo project showcasing pool, island, and river scenes. The Island scene showcases an infinite ocean with shore waves arriving on the beach. It shows how to use water masks and how to make deformers and foam generators using custom render textures.
It also uses decals to create complex interactions with the water surface when a wave approaches the beach. The Pool scene showcases an indoor water surface with realistic color, depth, underwater rendering, and caustics. The River scene showcases the use of instanced quads and leverages a current map to simulate the flow, as well as decals, a custom Shader Graph, and VFX Graph to enhance the visuals. You can download the demos on GitHub. To use the water system, you must be working with HDRP and Unity 2022.2 or above. While you can use some of the initial functionality in 2022.2, we recommend starting with 2023.1, where multiple improvements and features have been added, including some API changes. You can download the water samples in the Package Manager, from which you can start reusing sample code and systems. Also, check out our WaterScenes demo project on GitHub for more complex examples of water systems working together. To learn more, watch Rémi and Adrien’s Water System deep dive from GDC 2023 or read our documentation. We are not planning any new features at this stage and are currently concentrating on stability. You can share your feedback with us or ask questions on our dedicated Unity forum thread. To propose new features and vote for upcoming features under consideration, check out our public roadmaps. Please share feedback on the new Water System in the HDRP forum. For future technical blogs from Unity developers, stay tuned to the ongoing Tech from the Trenches series.
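As a rough sketch of the CPU-side height sampling mentioned earlier (the simpler, non-parallelized route), the script below floats an object on a water surface. The search types and the FindWaterSurfaceHeight call reflect the 2023.1-era API shape as we recall it; treat the exact names as assumptions and check the HDRP scripting documentation for your version:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Minimal float-on-the-surface sketch. The WaterSurface height-search API
// shown here (WaterSearchParameters / WaterSearchResult /
// FindWaterSurfaceHeight) is an assumption based on the 2023.1-era API
// and may differ in your HDRP version; verify against the docs.
public class SimpleFloater : MonoBehaviour
{
    public WaterSurface targetSurface;

    WaterSearchParameters searchParameters = new WaterSearchParameters();
    WaterSearchResult searchResult = new WaterSearchResult();

    void Update()
    {
        // Ask the CPU-mirrored simulation for the water height under this object.
        searchParameters.targetPosition = transform.position;
        searchParameters.error = 0.01f;
        searchParameters.maxIterations = 8;

        if (targetSurface.FindWaterSurfaceHeight(searchParameters, out searchResult))
        {
            Vector3 p = transform.position;
            transform.position = new Vector3(p.x, searchResult.height, p.z);
        }
    }
}
```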

>access_file_
971|blog.unity.com

Introducing Unity Muse and Unity Sentis, AI-powered creativity

We believe that every aspect of the creative process, from systems to objects to pixels, will be impacted by AI – and so will all of the resulting runtime characters, worlds, and experiences. AI can help you to be more productive while staying fully in control of your vision. It offers the possibility of in-game features and capabilities that couldn’t be built otherwise, potentially revolutionizing player experiences by embedding AI models in the runtime so content reacts and responds to players and users in new ways. We’re harnessing the power of AI to drive innovation, accelerate content creation, and increase your productivity across games, entertainment, and industrial use cases. We’ve been building a suite of AI tools that promise to accelerate creation time and complement your workflows by finding information and generating draft assets as quickly as typing in a text prompt or scribbling a sketch. From there, you could integrate the work with familiar tooling to revise and edit the assets you need at a speed that’s unimaginable with today’s workflows. Today we’re announcing two new AI products: Unity Muse, an expansive platform for AI-driven assistance during creation, and Unity Sentis, which allows you to embed neural networks in your builds to enable previously unimaginable real-time experiences. Whether you’re just getting started or already a Unity expert, Muse and Sentis are here to help you open up new avenues of creativity and innovation. We’re committed to working closely with you to explore, validate, and define our AI offerings, which is why both will be available via closed beta at this time. Your feedback is vital for this work and will be actively solicited, considered, and integrated throughout the process. Unity Muse is an AI platform that accelerates the creation of real-time 3D applications and experiences like video games and digital twins. The eventual goal of Muse is to enable you to create almost anything in the Unity Editor using natural input such as text prompts and sketches. Starting today, we’re beginning a closed beta for Muse Chat, a critical feature of the Muse platform. Using Muse Chat, you can leverage AI-based search across Unity documentation, training resources, and support content to get well-structured, accurate, and up-to-date information from Unity. Muse Chat helps you find relevant information, including working code samples, to speed up development and troubleshoot issues. Over the next few months, you’ll see even more features in our Muse platform, including the ability to create textures and sprites or even fully animate a character, all using natural input. This is just the beginning – Unity Muse will continuously improve to help you in the Editor and on the web. Unity Sentis is a real game-changer. At a technical level, it bridges neural networks with the Unity Runtime, but the possibilities that Sentis unlocks are nearly endless. Unity Sentis enables you to embed an AI model in the Unity Runtime for your game or application, enhancing gameplay and other functionality directly on end-user platforms. Sentis allows AI models to run on any device where Unity runs. It’s the first and only cross-platform solution for embedding AI models into a real-time 3D engine, so you can build your model once and run it on the edge across multiple platforms, from mobile to PC and web to popular game consoles like Sony PlayStation®.
Since you’re running the model on the user’s device, there’s no complexity, latency, or cost associated with hosting the model in the cloud. Starting today, we’re also making available AI Verified Solutions – third-party packages that meet Unity’s highest quality and compatibility standards – on the Unity Asset Store. You can now find professional-quality AI solutions from providers such as Convai, Inworld AI, Layer AI, Leonardo Ai, LMNT, Modl.ai, Polyhive, Replica Studios, and Zibra AI. These new solutions enable AI-powered smart NPCs, AI-produced VFX, textures, 2D sprites and 3D models, generative speech, in-game testing with AI, and more, all designed to support and accelerate your creative process. Stay in the loop about current and upcoming AI betas by signing up for the Unity AI Beta Program. Let us know what you think in the forums today. We can’t wait to see how you leverage AI from Unity Muse, Sentis, and our growing ecosystem of AI Verified Solutions on the Unity Asset Store to accelerate your workflows and supercharge your experiences. Editor’s Note (June 28, 2023): This announcement has been updated to reflect changes to the Unity Asset Store AI Marketplace following the removal of a provider.
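For a flavor of what embedding a model looks like in practice, here is a minimal inference sketch. The ModelLoader/WorkerFactory/TensorFloat names follow the early Sentis API shape as we recall it and may differ in later releases, so treat the exact names as assumptions:

```csharp
using UnityEngine;
using Unity.Sentis;

// Minimal on-device inference sketch. The API shape (ModelLoader,
// WorkerFactory, TensorFloat) is assumed from early Sentis releases;
// check the Sentis documentation for your installed version.
public class SentisInference : MonoBehaviour
{
    public ModelAsset modelAsset;   // an imported .onnx model asset
    IWorker worker;

    void Start()
    {
        Model model = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    public float Infer(float[] features)
    {
        using var input = new TensorFloat(new TensorShape(1, features.Length), features);
        worker.Execute(input);
        var output = worker.PeekOutput() as TensorFloat;
        output.MakeReadable();      // copy GPU results back to the CPU
        return output[0];
    }

    void OnDestroy() => worker?.Dispose();
}
```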

>access_file_
972|lilianweng.github.io

LLM Powered Autonomous Agents

Building agents with an LLM (large language model) as the core controller is a cool concept. Several proof-of-concept demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potential of LLMs extends beyond generating well-written copy, stories, essays and programs; they can be framed as powerful general problem solvers. Agent System Overview: In an LLM-powered autonomous agent system, the LLM functions as the agent’s brain, complemented by several key components. Planning – Subgoal decomposition: the agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks. Reflection and refinement: the agent can do self-criticism and self-reflection over past actions, learn from mistakes, and refine its approach for future steps, thereby improving the quality of final results. Memory – Short-term memory: I would consider all in-context learning (see Prompt Engineering) as utilizing the model’s short-term memory to learn. Long-term memory: this provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval. Tool use – the agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources, and more. (Figure: Overview of an LLM-powered autonomous agent system.) Component One: Planning. A complicated task usually involves many steps. An agent needs to know what they are and plan ahead.
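As a purely hypothetical sketch of that plan-act-reflect loop (no real LLM SDK is assumed; ILanguageModel is a stand-in for any completion API, and the prompts are illustrative):

```csharp
using System.Collections.Generic;

// Entirely hypothetical sketch of the planning/reflection loop described
// above. ILanguageModel stands in for any LLM completion API.
public interface ILanguageModel { string Complete(string prompt); }

public class SimpleAgent
{
    readonly ILanguageModel llm;
    readonly List<string> memory = new List<string>();  // short-term scratchpad

    public SimpleAgent(ILanguageModel llm) => this.llm = llm;

    public string Run(string task)
    {
        // Planning: decompose the large task into manageable subgoals.
        string plan = llm.Complete($"Break this task into numbered subgoals:\n{task}");
        memory.Add($"PLAN:\n{plan}");

        // Acting: execute the plan with the accumulated context in view.
        string result = llm.Complete(
            $"Context:\n{string.Join("\n", memory)}\nExecute the plan step by step.");
        memory.Add($"RESULT:\n{result}");

        // Reflection: self-critique and refine the final answer.
        return llm.Complete($"Critique and improve this result:\n{result}");
    }
}
```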

>access_file_
973|blog.unity.com

Your guide to reducing your Android ANR rate and improving your users’ experience

ANRs, or “application not responding” errors, are a critical issue facing Android developers, and despite the fact that mobile apps have become more technologically advanced over time, ANR rates have only increased. Blank and unresponsive screens not only harm the user experience, they can also affect your app’s ranking on the Google Play Store. So what can developers do to solve ANR issues? Alon Dotan, Integration Engineer Team Leader at Unity, offers best practices for app developers to reduce their ANR rate. Let’s dive in. Over the years, I’ve seen first-hand how problematic code integrations can cause ANRs, whether developers are integrating their own code, third-party code, and more. By optimizing your code during integration, you can reduce your ANR rate. Here’s how: 1) Reduce the workload during the application pause (when the application moves to the background). During a pause period, you would generally store your app’s information on the server – for example, you would report a player’s position in the game so they can return to the same place they stopped. This workload, combined with other third-party code creating its own workload, can overload the device’s resources – particularly the main thread. The main thread is the thread that drives an app’s user interface, and it is where most ANRs originate, so any added workload on the main thread increases the potential for an ANR. During the application pause, make sure all the actions you’re taking are necessary – or try saving the user’s state in local device storage. And, of course, see whether you can complete these actions outside of the pause period. 2) Segment devices that are likely to experience ANRs. Despite mobile devices getting more advanced, there are still devices in use with low processing power, which are more likely to have ANRs. As a developer, it’s critical to think about how each code integration could affect each of your devices, and to choose carefully which integrations to run on older devices. 3) Work with StrictMode to find where you might be causing ANRs during integration. StrictMode is an Android tool that helps you detect operations that might cause ANRs, like accidentally doing disk or network work on the main thread. 4) Confirm that the third-party code (e.g., SDKs) you’re working with is the most up-to-date version. Some older versions might present ANR issues. 5) Add an ANR watchdog, a component that reports ANRs and ensures you can keep track of where they’re coming from. 6) Utilize information from the Google Play Console – a very useful place to find information about managing and reducing ANRs. Let’s say the Google Play Console shows you that most ANRs are coming from a specific device. Even if you’re not sure how to solve the problem, you can always segment and filter out these devices. The same applies to specific app versions or countries with high ANR rates. This, of course, requires balance – by segmenting out a certain device, for example, you’re also missing out on the benefits of third-party code – so it’s always best to double-check that you’re segmenting the right device before making any changes. Reducing ANRs is a marathon, not a sprint – but with the right tools, Unity LevelPlay developers have kept their in-app ANRs down, and you can do the same – ultimately giving users a smoother, easier, and more positive experience to make sure they keep coming back to enjoy your app.
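A minimal sketch of tip #1 in a Unity context: on pause, persist state locally with a cheap synchronous write instead of a blocking network call, and defer the server sync until the app resumes. The PlayerState type and key name are illustrative placeholders:

```csharp
using UnityEngine;

// Sketch of tip #1: on pause, write state to local storage (fast and
// main-thread safe) rather than doing a blocking network call.
// PlayerState and the "playerState" key are illustrative placeholders.
public class PauseSaver : MonoBehaviour
{
    [System.Serializable]
    public class PlayerState { public int level; public Vector3 position; }

    public PlayerState state = new PlayerState();

    void OnApplicationPause(bool paused)
    {
        if (!paused) return;

        // Cheap local write; sync with the server later, once the app resumes.
        PlayerPrefs.SetString("playerState", JsonUtility.ToJson(state));
        PlayerPrefs.Save();
    }
}
```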

>access_file_
974|blog.unity.com

Get started developing mixed reality for Meta Quest 3 with Unity

We’re excited to unveil a new way for you to create captivating, cross-platform, immersive experiences for Meta Quest. In this blog, we detail a brand-new preview of mixed reality development tools for Meta Quest 3, Meta Quest 2, and Meta Quest Pro, powered by OpenXR and Unity’s AR Foundation. With this release, get ready to revolutionize the way we interact with the world around us. Mixed reality enables you to interact with digital content in the real world, enhancing your surroundings with virtual objects, characters, and experiences. Advanced sensors and tracking technologies allow for precise mapping of the physical environment and accurate placement of virtual content within it. Mixed reality also enhances the way we perceive and engage with our surroundings, offering a truly transformative and immersive user experience. Unity’s new toolset aims to provide you with the resources you need to create compelling cross-platform mixed reality experiences for Quest devices. OpenXR is a royalty-free standard that simplifies AR and VR development by enabling applications to reach a wide range of hardware without the need to rewrite code. Developed by a consortium of industry leaders, OpenXR’s interoperability makes it easier to create content that reaches a wide audience. Unity’s AR Foundation is a cross-platform framework, purpose-built for creating applications across mobile and headworn AR/VR devices. It allows developers to create experiences and deploy them to multiple platforms. By leveraging features from common SDKs, such as ARCore, ARKit, and the OpenXR standard, AR Foundation provides a seamless workflow in Unity so you can focus on unleashing your creativity. We are introducing a preview of AR Foundation support for Quest through a new Meta OpenXR package. This preview release offers Quest support for essential features such as passthrough, plane detection, device tracking, raycasting, and anchors. It also includes Quest-specific updates for samples like Simple AR, which demonstrates basic plane detection and raycasting, and Anchors, which demonstrates how to create an object that specifies the position and orientation of an item in a physical environment. Let’s take a closer look at passthrough and plane detection. With passthrough support, developers can now seamlessly blend the virtual and real worlds, allowing users to see and interact with their physical environment while engaging with virtual content. Imagine creating games where players can navigate their living rooms or offices while battling virtual enemies, or designing applications that overlay virtual objects onto real-world surfaces with unmatched precision. The possibilities are truly limitless. Plane detection in AR Foundation opens up a realm of possibilities for developers seeking to create context-aware experiences for Meta Quest. With plane detection, your applications can analyze and interpret the physical environment, allowing virtual objects to interact intelligently with the real world. Imagine building games where characters navigate obstacles in real time, or designing levels that adapt to different room layouts. AR Foundation’s plane detection for Quest will give you the data you need to understand physical space and push the boundaries of immersion. We know that having robust templates, sample content, and predefined interactions can save you a lot of time. That’s why we’re adding new XR templates and samples to Unity.
You’ll be able to streamline your project setup, explore complex object interactions, and see examples of user interfaces. Stay tuned for the release of these templates in Unity Hub. You can get started building apps for Quest 3 with AR Foundation and OpenXR today by downloading Unity 2022 LTS or later. You will also need to download the experimental Meta OpenXR package. To do this, open the Unity Package Manager from inside the Unity Editor, click the plus (➕) symbol in the top left, select “Add package by name,” and type com.unity.xr.meta-openxr. Once added, it will automatically pull in the other required packages, such as the OpenXR Plugin and AR Foundation packages. For sample content, check out Simple AR and Anchors on GitHub. The Unity XR team is continuously improving AR Foundation. As we carry on with development, we want to hear from you and would love to see what you build with these tools. Feel free to include the hashtag #unityARF when posting about your project on social media.
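Once the package is installed, consuming plane detection is ordinary AR Foundation code. A small sketch, where the logging is a stand-in for real placement logic:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch of consuming AR Foundation plane detection, which the Meta
// OpenXR preview feeds on Quest. Attach next to an ARPlaneManager
// (e.g., on the XR Origin). Logging stands in for real placement logic.
[RequireComponent(typeof(ARPlaneManager))]
public class PlaneLogger : MonoBehaviour
{
    ARPlaneManager planeManager;

    void OnEnable()
    {
        planeManager = GetComponent<ARPlaneManager>();
        planeManager.planesChanged += OnPlanesChanged;
    }

    void OnDisable() => planeManager.planesChanged -= OnPlanesChanged;

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        foreach (ARPlane plane in args.added)
            Debug.Log($"New plane {plane.trackableId}, alignment: {plane.alignment}");
    }
}
```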

>access_file_
975|blog.unity.com

How to add VR support to your Universal Render Pipeline project

One advantage of developing Unity titles built on the Universal Render Pipeline (URP) is that they’re supported on a wide variety of platforms with minimal changes to render settings. Your project can run on any platform from mobile devices to consoles, PCs, and even VR. For those who haven’t developed for VR before, it can be daunting to figure out where to get started. Whether you’re just starting your project or have an existing URP project that you’ve published on other platforms, we’ll go over the steps to add VR support so that it can run on multiple VR platforms with minimal effort. One thing that every VR Unity project needs is the XR Plug-in Management package. This package makes it easy to configure your project for the VR platforms that you’d like to build for, from Meta Quest and Magic Leap to PlayStation®VR2 (PS VR2). Another useful package is the XR Interaction Toolkit (XRI). This package decreases your setup time by providing prefabs that implement movement options common in most VR titles, such as teleporting, snap turns, and more. Read more about the options the XRI package provides in this blog. To add these packages to your project, open the Window menu and click on Package Manager. In the Package Manager window, click on the “Packages: In project” dropdown to expand the options, then click the “Unity Registry” option to make the Package Manager list all available packages. Unity supports a lot of packages, so type “xr” in the search bar of the Package Manager window to filter the list so that only XR-specific packages are shown. Next, click on the XR Plug-in Management package, then choose the Install button to add it to your project. Follow the same process to add the XRI Toolkit package, too. Next, let’s add the Starter Assets provided by the XRI team to help you get started. Click on the XR Interaction Toolkit package in the Package Manager, then the Samples tab in the package details panel. Then, click the Import button next to Starter Assets to add assets that will streamline the setup of VR behaviors and input actions. With these packages installed, you’re more than halfway to developing for VR. Next, you need to specify which VR platforms you want to target. To do that, open your Project Settings and click the XR Plug-in Management tab (which was added when you installed the package) to see the list of available plug-in providers that you can select for your project. A plug-in provider is essentially the integration for a VR platform. Select an option, such as Oculus or OpenXR, and Unity will install the package(s) specific to that platform. Some XR platforms use the Windows, Mac, and Linux build target in the Build Settings, but others run on different targets. For example, building for PS VR2 requires switching the build target to PlayStation®5 (PS5), while others require switching to Android or iOS. Make sure you switch to the correct build target while you’re developing so that you can catch any important errors and warnings early. Keep in mind that to develop games for PS5, you have to register as a PlayStation developer. Each XR plug-in provides validation checks to make sure that your project is properly configured to build for XR. To find them, go to the Project Settings window, then click the Project Validation tab under the XR Plug-in Management tab to see if there are any warnings or errors to take care of.
Most checks include a “Fix” button to resolve the issue for you. With the right plug-in providers selected, you’re now ready to start building for your VR platform(s) of choice. However, you still need to map VR controls to your existing player character. That’s where the Starter Assets provided by the XRI Toolkit come into play. In the Project window, go into the Samples/XR Interaction Toolkit/2.3.2/Starter Assets/Prefabs folder, then drag the Complete XR Origin Set Up Prefab into your Scene. This Prefab comes preloaded with all the components you need to configure VR input for your player controller. The Prefab has a nested XR Origin Prefab that handles most VR controller setups. It has two GameObjects that represent the player’s left and right hands, as well as a Main Camera nested in a Camera Offset GameObject that handles moving the camera depending on whether the player is in sitting or standing mode. Since the Complete XR Origin Set Up Prefab includes its own Main Camera, if you already have a Main Camera object in your Scene, make sure to disable it so that you don’t have multiple cameras trying to render your player’s main view at once. The XRI Toolkit also comes with an Input Map that maps actions to common VR controls. These controls are standard across most VR platforms, so there’s no extra coding required to ensure your input is recognized on a variety of platforms, including Meta Quest, Magic Leap, and PS VR2. Make sure you copy over any scripts that were important for game state tracking from your original player controller to the newly added Complete XR Origin Set Up Prefab. Feel free to rename the Prefab to something that’s easier to remember, like VR FPS Controller. The XR Origin GameObject includes a lot of components that can be pretty confusing at first. Think of the XR Origin component as the driver of all the other XRI components; without it, XR locomotion doesn’t work. The rest of the components are providers that make specific kinds of XR locomotion possible. Each provider corresponds to a different type of VR locomotion, from teleporting to snap turns to grabbing surfaces with two hands and pulling yourself along, and so much more. To learn how to tweak the locomotion values of the XR Origin Prefab to your liking, read the documentation. But for just getting started, the default values should work fine. Of course, not all VR platforms are created equal, and it’s critical to keep the hardware specs of your target platform in mind, especially when VR titles need to hit a consistent 60, 90, or 120 frames per second to minimize motion sickness. This can be achieved by reducing your max eye resolution, tweaking the anti-aliasing options for your Main Camera, reducing the number of dynamic lights in favor of baked lights, reducing the amount of alpha transparency used in your GameObjects, and tweaking GameObjects’ LOD values so that they only render a higher-geometry version when the player is up close. For more suggestions, read our guide on optimizing graphics performance. The good news is that it’s very easy to switch between different quality settings depending on the quality level you’ve specified for each platform. In the Quality tab of the Project Settings window, you can create and customize quality levels that set unique renderer, anti-aliasing, LOD, and lighting settings to help you get the most out of each platform.
Learn more about adjusting quality settings. Now that your project is configured to target your VR platform(s) of choice, it’s time to test the newly added XR rig in your project. There are two recommended methods for VR testing: use the XR Device Simulator from the XRI Toolkit to test the setup in the Editor, or build the player and run it on the device. The XR Device Simulator is a handy tool that allows you to simulate XR controls inside the Game view of the Editor when you’re in Play mode. You can save time by testing your locomotion settings without building your project. To add the XR Device Simulator to your project, open the Package Manager window, go back to the XRI Toolkit package, click on the Samples tab, and click the Import button next to the XR Device Simulator. After it’s installed, open the Project Settings window and click on the XRI Toolkit tab. In this tab, enable the “Use XR Device Simulator in scenes” checkbox to add the XR Device Simulator at runtime when you enter Play mode. You can also drag and drop the XR Device Simulator Prefab from the Project window straight into your Scene. Now, when you enter Play mode in the Editor, you’ll be able to use WASD and mouse controls to move your XR rig and press Tab to switch between moving all the controls, just your left hand, or just your right hand. It’s not a 1:1 match to actual VR input, but it will help you test right away whether your locomotion settings are working as expected. Just remember that before you build your project, you need to either disable the “Use XR Device Simulator in scenes” checkbox in the XRI Toolkit Project Settings or disable/delete the XR Device Simulator from your scene. Otherwise, your VR controls won’t work properly on the device. This is because the XR Device Simulator simulates VR input from either a mouse and keyboard or a game controller and ignores input from actual VR controllers. The second method for testing your XR rig is to simply build and run it on your target platform. Each XR device’s setup for running Unity builds is unique, so read the documentation on how to connect your device to your dev machine for the most up-to-date information. Once your device is connected, choose Build and Run in the Build Settings window to create the build for your target platform and load it onto the device. After the build is running on-device, you can test the input yourself and see what works and what doesn’t. The only limitation is waiting for the build to finish and having a device available to test on. With just these steps, along with some minor tweaking of foveated rendering settings, I was able to get the Japanese Garden scene of the new URP Template, which was showcased at GDC 2023, running on PS VR2. I hope this quick guide helps you get started adding VR support to your project. If you’re confused about any part of the VR setup, you can review Unity’s guide for configuring your project for XR for more information. The XR and URP teams are both working hard to ensure that developing for multiple platforms is as simple as possible. Please share any Universal Render Pipeline feedback in the URP forum and XR feedback in the XR forum. Watch for future technical blogs from Unity developers as part of the ongoing Tech from the Trenches series.
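For the per-platform quality levels mentioned above, switching at runtime is a thin wrapper around QualitySettings.SetQualityLevel; a small sketch, with the level name as an illustrative placeholder that must match an entry in Project Settings > Quality:

```csharp
using UnityEngine;

// Sketch of switching quality levels at runtime to match the per-platform
// levels described above. Level names are illustrative; they must match
// the entries you defined in Project Settings > Quality.
public static class QualitySwitcher
{
    public static void Apply(string levelName)
    {
        string[] names = QualitySettings.names;
        for (int i = 0; i < names.Length; i++)
        {
            if (names[i] == levelName)
            {
                // The second argument also applies expensive changes (e.g., AA).
                QualitySettings.SetQualityLevel(i, true);
                return;
            }
        }
        Debug.LogWarning($"Quality level '{levelName}' not found.");
    }
}
```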

>access_file_
977|blog.unity.com

Updating Unity’s guiding principles for ethical AI

We recognize that AI continues to shape the world we live in, and we’re excited and committed to ensuring that our AI solutions are developed and deployed in a way that is transparent, fair, accountable, and in accordance with government regulation around the globe. Today, we’ve updated our guiding principles for ethical AI and invite our community of creators to join us in advancing the responsible use of AI, making it safe, fair, and productive for the Unity community to use. Unity will execute against each of the below principles by continuing to demonstrate our core values: Users First, Best Ideas Win, Go Bold, and In It Together. Throughout our product development process, we engage multiple stakeholders across our organization through established governance programs, and product teams adhere to these principles (and actively participate in their continued evolution). Lastly, we will engage our community for feedback and ask that all uphold our principles while operating in our ecosystem. We strive to use AI solutions that promote inclusion and to make this technology available to everyone. We recognize the importance of considering all types of human experiences and are committed to using AI to expand the ways in which creators can learn to create. To do this, we’re making it easier for creators to express themselves, bringing powerful tools to everyone from the biggest companies to individual hobbyists, and reviewing our AI tools to ensure we use appropriate data sets and promote open and healthy expression whenever and wherever they are used. We acknowledge the potential consequences of AI and commit to following a responsible design process. We take proactive measures to anticipate and minimize potential harm to our community, ecosystem, and the planet that may arise from the use of our AI technologies, and we are committed to continuously improving our products to align with responsible and ethical practices, including environmental sustainability, human oversight, and accountability. This includes seeking diverse representation and a wider range of perspectives on design teams, monitoring energy usage and optimizing toward lower environmental impact, using appropriate data sets, and testing AI technologies before deployment to ensure they yield the expected results. We are committed to protecting the data and information incorporated into and used by our AI technologies, including from unauthorized disclosure or manipulation. We handle it with care and respect, ensuring that any use is in accordance with the intended purpose and aligned with these principles and Unity’s Privacy Policy. To give effect to this principle, we emphasize transparency in the use of AI – namely, how our creators and users contribute to the continuing advancement of AI through their work and feedback – and will work with our community on mechanisms and tagging to enable transparency. Unity will continue to update and refine these guiding principles as needed to ensure that we maintain the proper goals in a quickly evolving field. We look forward to the continued advancement of AI technology and a positive impact on society while prioritizing responsible and ethical practices. Stay tuned to the blog for more about Unity and AI, and, if you haven’t already, sign up for the AI Beta Program to be the first to hear about new tools and services.

>access_file_
978|blog.unity.com

The 2023 Mobile Growth and Monetization Report is here

The mobile monetization and advertising landscape continues to evolve every year, so staying up to date on best practices and trends is important to sustaining success. In years when revenue and resources are strained, it’s even more critical to find strategic advantages. In our second edition of the Mobile Growth and Monetization Report, we gathered feedback from across our customer and support teams about opportunities for greater efficiency in monetization and user acquisition strategies. From there, our experts dove into our games and ads data to uncover trends, offer benchmarks, suggest best practices, and give insights that Unity is uniquely able to provide. The end result is a comprehensive report that explores how studios can be more efficient with their game growth strategies. It covers: getting to first in-app purchase (IAP) conversions faster; improving in-app advertising (IAA) with better placements and rewards; supplementing revenue and retention with offerwalls; and harnessing the advantages of genre and country targeting and custom store pages. Read on to learn about these four key report themes that will help your monetization and user acquisition strategies achieve more with less. Creating and managing a game with IAP is often resource intensive. Knowing how best to convert players, and when IAP is most effective, is critical to building a sustainable strategy that isn’t over-resourced. Getting to a first IAP conversion takes a combination of factors around price points, which ad types work best, and when. For example, our analysis found that timing is everything: 77% of players who ever convert to IAP will have done so within their first two weeks. Knowing this, players who haven’t made an in-game purchase in their first 14 days should be considered prime candidates for alternative methods of monetization and can be segmented to view ads or an offerwall. Getting more out of existing user bases becomes more important when user acquisition is challenging or costly. Our analysis looked at several factors around IAA placements and rewards for guidance on which strategies tend to work best. For example, players are most likely to engage with rewarded ad placements found between game levels, followed by those in the IAP store and in the lobby/pre-level. And although 18% of games are using rewarded video ads in their IAP store, the audience that visits your IAP store is likely made up of players who are already considering purchases. This means that in-store placement strategies could limit which players encounter these ads, since many players never visit the store if they have no intention of making in-app purchases. As player behaviors change over time, finding additional ways to monetize them can be a lifeline. Offerwall revenues can be impactful for games with multiple monetization strategies. Our analysis showed that for games that use this feature alongside another monetization strategy, 38% of total ad revenue comes from offerwalls. The data also showed a clear retention advantage for players who converted on offerwalls. Their likelihood of continuing to play goes up by as much as 5x when looking across day 7 (D7) to day 120 (D120). This is most notable at later stages: at day 90 (D90), retention for offerwall converters is 14%, while other players are below 3%. When resources are tight, paid user acquisition can be one of the more challenging aspects of growing a mobile game.
This means that it’s crucial to find even small advantages in campaign optimization to get the most out of every dollar spent. One place where our analysis showed opportunities for efficiency is country targeting. Specifically, the data reveals benefits to targeting tier-2 countries that have more interest in a genre, while avoiding those with less. For this analysis, tier-2 countries include Australia, Germany, Denmark, France, Italy, Japan, South Korea, Norway, Sweden, and Singapore, while tier-1 includes the U.S., Canada, and the United Kingdom. Advertising in tier-2 countries can cost less than in tier-1, as they are markets with less buying power. Within tier-2 countries, some genres are preferred over others, and for those genres performance will be greater than the tier-2 average. This means they can be cost-effective and impactful markets to advertise in, and worth the effort of localizing campaigns for if your genre performs well there. As an example, we found that Japan is a great place to advertise a sports game – CTR performs 13% above the average of all tier-2 countries – while a racing or hypercasual game performs 7% below average in Japan. Taking advantage of insights on IAP, IAA, offerwall, and campaign efficiencies can improve your ROI and growth at a time when every extra dollar earned or saved is critical. Download the full report to dig deeper into how you can make your monetization and user acquisition strategies more efficient.
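The 14-day rule above translates directly into a segmentation check; a minimal sketch, with PlayerProfile and the wiring as illustrative placeholders rather than any Unity API:

```csharp
using System;

// Sketch of the 14-day segmentation rule from the report: players with no
// IAP after two weeks become candidates for ads/offerwall monetization.
// PlayerProfile and the threshold wiring are illustrative placeholders.
public class PlayerProfile
{
    public DateTime InstallDateUtc;
    public bool HasMadePurchase;
}

public static class MonetizationSegments
{
    const int IapWindowDays = 14;  // 77% of converters purchase within this window

    public static bool IsOfferwallCandidate(PlayerProfile p, DateTime nowUtc) =>
        !p.HasMadePurchase &&
        (nowUtc - p.InstallDateUtc).TotalDays >= IapWindowDays;
}
```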

>access_file_
979|blog.unity.com

The fundamentals of monetizing with offerwalls

By listing offers users can complete in exchange for in-app rewards, offerwalls don’t just generate revenue; they can also increase retention and engagement rates, since users have an incentive to keep coming back to the game to receive their rewards. This trifecta – revenue, engagement, and retention – makes the offerwall one of the most lucrative ad units for developers. Below, Anna Poperko, game design consultant lead, covers the benefits and challenges of using offerwalls, the game genres that work best with this ad format, and the most strategic placements for offerwalls in your game. Let’s dive in. If you’re considering monetizing with an offerwall, here’s what you should know before you take the plunge. Increase retention: Players who engage with offerwalls are more motivated to return and continue playing because they’re invested in receiving and using the offered rewards. Essentially, as players become more invested, they become more loyal as well, boosting long-term retention. Monetize more players: Open the door to new revenue by monetizing non-paying players. Some people can’t or don’t want to pay, but they might be more open to paying with their time instead. You can set the same price for offerwall currency as the reward you set in the store (for example, $1 = 100 gems), so you’ll receive the same amount of money from players who complete offers, such as watching revenue-generating ads, as from the ones who actually paid with their money. Raise eCPMs: Offerwalls tend to have the highest effective cost per mille (eCPM) of any ad unit. In fact, they can generate about 10x higher eCPMs than rewarded video ad units, because advertisers are more willing to invest in ad units that target highly engaged players. Think about it: Offers set deep in the advertiser’s game (like “reach level 20”) help the ad unit acquire a high-quality user, potentially generating better revenue for the advertiser. To make the most out of offerwalls, a game needs two things: currency that’s worth making an effort for, and players who are willing to make that effort. Ensure a gaming fit: Some games are a better fit for an offerwall than others. Hypercasual games, for example, usually aren’t the best fit because, by the nature of the game, players tend to be less invested in their long-term success. These players want snackable content and aren’t willing to invest their time in another game to receive a reward. On the other hand, midcore or hardcore games attract more committed gamers, so offerwalls provide incentives to progress because they capture an engaged audience willing to invest time in the game. Consider your engagement: Since an offerwall caters to highly engaged gamers, its engagement rate (the percentage of total users who engage with the ad unit) tends to be lower compared to other ad units. Factor in OS: Additionally, offerwalls are limited by different operating systems’ capabilities. Due to iOS constraints that limit data from ad clicks, this ad format is significantly more effective on Android, where it’s possible to extend more offers to users. Build a strong hard currency: For offerwalls to be effective, the player needs to feel like the currency is valuable and rare enough to work for. This is called hard currency: currency that has the highest possible value for the player, usually represented by gems, diamonds, and so on.
Players determine a currency’s value in different ways: maybe they aren’t able to get it for free, they can only get it in scarce amounts, or they can’t buy it with a different form of game currency. Basically, the more valuable players think the currency is, the more likely they are to complete an offer in the offerwall. Tap into long-term retention players: Generally, offerwalls are aimed at players who have found value in the hard currency and want to progress further in the game, but can’t or don’t want to spend money on in-app purchases. The games that combine the two conditions above (long-term retention players and hard currency) will benefit the most from the offerwall. That’s why this ad unit is most profitable in midcore and hardcore genres like shooters, RPGs, and strategy games. Check out the eCPMs for each ad unit. When it comes to offerwalls, location is everything – it affects the quantity and quality of gamers who will interact with the ad unit, and thus the amount of revenue it can generate. Here are the top placements for maximizing revenue and engagement from the offerwall. Home page: If you’re looking for the most exposure possible, the home screen is your best bet. The home page is the first thing the player sees when opening the game, so placing the traffic driver there will expose the offerwall to the maximum number of players. Adding an offerwall to the home page can also help increase overall player engagement, because the home screen has the maximum number of users passing through. The in-game store: The next hot spot is the game store – when the player is looking to get some hard currency, the store is the first and natural place to look. By placing the offerwall traffic driver in the store, players can immediately recognize the opportunity to get hard currency without spending anything. Out of currency: You can also show the offerwall traffic driver when players need hard currency the most. This prompt catches players at the right moment, when the offerwall is a great, free alternative for getting more in-game cash (see the sketch below). Promotional pop-up: Additionally, we recommend that developers use pop-ups to promote sales opportunities that increase a reward’s value over a limited period of time, for example, double coins for two days over Christmas weekend. To increase visibility for these promotions, it’s even beneficial to show the player this offerwall pop-up immediately when they open the game. Get more tips for running offerwall promotion campaigns. All in all, offerwalls have a slew of perks that can work to your benefit if you know the recipes for success. Piecing together the right currency and players, and knowing where to place the ad unit, can make all the difference in your game’s revenue and overall success. Learn more about getting started with offerwalls. Then, put these tips to good use and contact us about growing your app business.
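The “out of currency” placement maps naturally onto the purchase path; a sketch, with IOfferwall as a hypothetical wrapper around whatever show-offerwall call your mediation SDK exposes:

```csharp
using UnityEngine;

// Sketch of the "out of currency" placement described above: when a spend
// fails for lack of hard currency, surface the offerwall prompt.
// IOfferwall is a hypothetical wrapper; wire it to your mediation SDK's
// actual show-offerwall call (e.g., in Unity LevelPlay).
public interface IOfferwall { void Show(string placementName); }

public class CurrencyGate : MonoBehaviour
{
    public int gems;              // the game's hard currency
    IOfferwall offerwall;         // injected elsewhere; hypothetical

    public bool TrySpend(int cost)
    {
        if (gems >= cost) { gems -= cost; return true; }

        // The player wants something they can't afford: the right moment
        // for the offerwall traffic driver.
        offerwall?.Show("out_of_currency");
        return false;
    }
}
```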

>access_file_
980|blog.unity.com

How digital twins are transforming large-scale airports

At a time when air travel is rapidly changing, airports are looking to digital transformation as a way to evolve their operations and services and meet environmental, social, and corporate governance (ESG) goals. Using digital twins – dynamic virtual copies of physical assets, processes, systems, or environments – airports can make more informed decisions and unlock significant operational efficiencies, enabling them to: accurately predict passenger flow and allocate resources where they are needed the most; avoid costly errors by testing different scenarios and making changes before implementing them in the real-world environment; and optimize operations by gathering and analyzing data from multiple sources such as flight and baggage information, security wait times, and more. Large-scale airports like Vancouver International Airport (YVR) are taking the lead in embracing digital twin technology to improve operations, meet sustainability and Reconciliation goals, and enable jobs of the future. Read on to learn more about YVR’s first-of-its-kind digital twin. With the vision to reimagine airport operations, the team at the Vancouver Airport Authority partnered with Unity’s professional services group to build and deploy the first-to-market real-time 3D digital twin of an airport in North America. Built as a people-first technology for the airport’s front-line workers, designers, and community, YVR’s digital twin leverages historical and real-time data and can present key information through 2D or 3D visualization, enabling better comprehension of complex operational systems, streamlined processes, and accelerated collaboration across the airport’s key stakeholders. Beyond enhancing operations throughout the airport, YVR’s digital twin of its airfield and terminal is also working to support the airport’s climate and Reconciliation goals. As the largest building in British Columbia and one of the busiest airports in North America, YVR faces a diverse range of process considerations to ensure smooth and efficient operations around the clock. To address these considerations, the airport and Unity worked to develop a situational awareness tool that integrates with its digital twin, enabling airport staff and partners to proactively and rapidly respond to operational issues while prioritizing the safety of passengers, planes, and cargo. “At YVR, we want our people to have the tools they need to succeed in a dynamic environment,” says Lynette DuJohn, CIO of the Vancouver Airport Authority. “The digital twin technology solves many airport-related challenges and provides an incredible layer of situational awareness for our employees.” The situational awareness tool provides a real-time, bird’s-eye view of YVR’s terminal, and consolidates information and alerts to notify users of potential safety issues and data anomalies. This tool also enables YVR staff to make informed decisions and explore hypothetical scenarios around security, weather, and other potential concerns. To advance the industry’s decarbonization goals, and as part of its commitment to becoming the world’s greenest airport, YVR is using its digital twin to build a first-to-market calculation model and baseline measurement to visually track and analyze aircraft carbon emissions from landing to takeoff. Currently, over 95% of emissions on Sea Island, where YVR is located, are from aircraft movements, vehicle traffic, and non-airport-authority buildings.
By measuring GHG emissions in real time, YVR can now better support its own climate goals, as well as those of its carrier partners. “We’re going to be net zero by 2030 versus 2050, so we need to move our plan up by 20 years,” says DuJohn. “Being able to model all of those scenarios is going to be very powerful for us to really get our arms around efficiency and climate.” As part of its path to Reconciliation, YVR is committed to advancing Indigenous participation and leadership in technology and innovation. The airport has partnered with Unity to deliver a Unity training program to Musqueam Indian Band learners, who will receive a foundational 3D skill certificate to support future career placements within the digital twin and gaming industries. YVR’s digital twin development doesn’t end with its own airport. Together, Unity and YVR are working to commercialize the airport’s digital twin model for the global aviation industry, enabling other airports to advance their digital transformation. Want to learn more about how Unity and YVR built North America’s first-to-market real-time 3D digital twin of an airport? Read the full case study. Discover how our award-winning Custom Solutions team can create and deploy a custom digital twin to help you manage asset performance, reduce maintenance costs, maximize revenue, and hit your environmental, social, and governance (ESG) targets. Looking for creation tools and enterprise-level support to help you transform your CAD and 3D data into immersive apps and experiences? Check out Unity Industry, our suite of products and services designed specifically for industry creators.

>access_file_