// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1695 transmissions indexed — page 41 of 85

[ 2024 ]

20 entries
802|blog.unity.com

Unity 6 Preview is now available

We’re excited to announce the release of Unity 6 Preview, which is available for you to download today. Unity 6 Preview (formerly known as 2023.3 Tech Stream) is the last release of our development cycle for Unity 6, which is launching late this year.

Last November at Unite, we announced that we were updating our naming conventions (you can read more about these changes in this forum post). Unity 6 Preview is structured just like a Tech Stream release. It’s a supported release that gives you a head start using new and updated features in projects that are in discovery or prototyping stages. For projects in production, we recommend using the Unity 2022 LTS release for greater stability and support.

Here are a few highlights from the Unity 6 Preview, which also includes features released in 2023.1 and 2023.2. You can also find more details in the official release notes.

In Unity 6 Preview, the Universal Render Pipeline (URP) and the High Definition Render Pipeline (HDRP) both see significant performance enhancements that speed up production across platforms. Depending on your content, the improvements described here can reduce CPU workload by 30–50% while providing smoother, faster rendering across various platforms.

The new GPU Resident Drawer allows you to efficiently render larger, richer worlds without the need for complicated manual optimizations.
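The GPU Resident Drawer is enabled on the URP asset rather than per object. As a minimal sketch, the snippet below checks the active pipeline asset from script; the `gpuResidentDrawerMode` property name follows the Unity 6 scripting reference, but treat it as an assumption and prefer the setting in the URP asset Inspector.

```
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public static class GpuDrawerCheck
{
    // Logs whether the active URP asset has the GPU Resident Drawer enabled.
    // Property name assumed from the Unity 6 docs; the authoritative switch
    // is the URP asset's Rendering > GPU Resident Drawer setting.
    public static void LogDrawerMode()
    {
        if (GraphicsSettings.currentRenderPipeline is UniversalRenderPipelineAsset urp)
        {
            Debug.Log($"GPU Resident Drawer mode: {urp.gpuResidentDrawerMode}");
        }
        else
        {
            Debug.Log("Active render pipeline is not URP.");
        }
    }
}
```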
You can optimize your games with up to 50% CPU frame-time reduction for GameObjects when rendering large, complex scenes across platforms, including high-end mobile, PC, and consoles.

Working alongside the GPU Resident Drawer, GPU Occlusion Culling boosts the performance of GameObjects by reducing the amount of overdraw for each frame, which means the renderer is not wasting resources drawing things that are not seen.

You can optimize GPU performance and significantly enhance visual quality and runtime performance with Spatial-Temporal Post-Processing (STP). STP is designed to take frames rendered at a lower resolution and upscale them without any loss of fidelity, delivering consistent, high-quality content to platforms with varying levels of performance capabilities and screen resolutions. STP is compatible with both URP and HDRP, across desktops, consoles, and, notably, compute-capable mobile devices.

Render Graph for URP is a new rendering framework and API that simplifies the maintenance and extensibility of the render pipeline and improves rendering efficiency and performance.
The new system introduces various key optimizations, such as the automatic merging and creation of native render passes, in order to reduce memory bandwidth usage along with energy consumption – especially on tile-based (mobile) GPUs. The new Render Graph API also streamlines the custom pass injection workflow, allowing you to extend the render pipeline with your own custom passes, as well as to reliably access all of the pipeline resources needed using the new Context Container. Lastly, with the new Render Graph Viewer tool, you can now analyze the Engine’s render pass creation and frame resource usage directly in the Editor, simplifying render pipeline debugging and optimization.

The Foveated Rendering API in URP allows you to configure the foveation level, improving GPU performance at the cost of reduced fidelity in a user’s mid and far peripheral vision. Two new foveation modes are available: with Fixed Foveated Rendering, regions in the center of screen space benefit from higher quality, while Gazed Foveated Rendering uses eye tracking to determine which regions of screen space will benefit. The Foveated Rendering API is compatible with the Sony PlayStation®VR2 plug-in and Meta Quest through the Oculus XR plug-in, with support for the OpenXR plug-in coming soon.

Volume framework enhancements in both HDRP and URP optimize CPU performance on all platforms, making volumes viable even on low-end hardware. You can now set global and per-quality-level volumes in URP, similar to what was possible in HDRP, with an improved user interface across the board. Additionally, it’s now easier to leverage the Volume framework with custom post-processing effects for URP to build your own effects, like a custom fog (check out this demo from our December live stream to learn more).

Adaptive Probe Volumes (APV) provide a new way for you to build global illumination lighting in Unity.
They enable more streamlined authoring and iteration times for Light Probe-lit objects, and open new possibilities like time-of-day scenarios and streaming. Building on the development of APV delivered in the 2023.1 and 2023.2 Tech Stream releases, enhancements in Unity 6 Preview improve authoring workflows, expand streaming capabilities, and extend control and platform reach to achieve impactful lighting transitions.

We have expanded APV Scenario Blending to URP, enabling a wider range of platform support for you to easily blend between baked probe volume data for day/night transitions or to switch lights on and off in rooms. APV Sky Occlusion, supported in both URP and HDRP, enables you to apply a time-of-day lighting scenario to your virtual environments and achieve more color variations in static indirect lighting from the sky compared to APV Scenario Blending. APV disk streaming now supports a non-compute path in URP, and we’ve enabled support for AssetBundles and Addressables.

Leverage the Probe Adjustment Volumes tool to fine-tune your APV content and fix light-leaking situations. Adjustments you can make to probes inside these volumes include Override Sample Count and Invalidate Probes. Light Probes not affected by the Adjustment Volume can be hidden, and probe lighting data can now be previewed only for impacted probes, then baked directly from the Probe Volume and Probe Adjustment Volume components.

Finally, we introduced a new C# Light Probe Baking API, enabling you to control how many probes to bake at a time to balance execution time against memory usage. We’ve used the APV probe baking editor code as an example of how to use the API, and you can find this example on GitHub.

In HDRP, we improved sky rendering for sunset and sunrise to better enable your project’s time-of-day scenarios.
This adds ozone layer support and atmospheric scattering to complement fog at long distances.

Water has been improved with support for underwater volumetric fog that samples caustics to create volumetric light shafts. Performance optimization now includes an option to read back the simulation from the GPU with a few frames of delay instead of replicating the simulation on the CPU. We also added support for transparent surfaces with mixed tracing mode to mix ray-traced and screen space effects when rendering surfaces like water together with terrains and vegetation.

Because performance is key when rendering large dynamic worlds, we optimized SpeedTree vegetation rendering for both URP and HDRP, leveraging the new GPU Resident Drawer mentioned above.

For VFX artists, we’ve improved tooling and URP support so you can efficiently reach more platforms. VFX Graph profiling tools allow a VFX artist to find what could be optimized within a graph by getting feedback about memory and performance to tweak certain effects and maximize performance. Build VFX shaders with the support of Shader Graph Keywords, and build more complex effects with URP using the URP depth and color buffers for fast collision or for spawning particles from the world. Get a quick start in VFX Graph with new Learning Templates, a collection of VFX assets designed to help you learn about VFX Graph concepts and features.

Unity 6 Preview addresses many of the top user pain points when using Shader Graph by including new editable keyboard shortcuts, a heatmap color mode to quickly identify the most GPU-intensive nodes in your graphs, and faster Undo/Redo. Access the new Node Reference Samples, a set of Shader Graph assets where each graph is a description of one node, with breakdowns of how the math works under the hood and examples of how the node can be used.
Learn more in the Node Reference Samples tutorial video.

Unity 6 Preview brings multiplatform enhancements across desktop, mobile, web, and XR, aimed at delivering optimizations to multiplatform development workflows and expanding reach across the most popular platforms.

With the new Build Profiles feature, managing builds will be more efficient, with a higher degree of flexibility than ever before. As well as configuring build settings in each profile, you can now include different scene lists to customize the content of your builds, creating multiple unique, playable demos for your game with the scenes you want to share most. Additionally, you can set custom scripting defines for any profile, which are additive over those found in Player settings, to allow for fine-tuning of features and behavior of both builds and Editor Play mode. This could be used to create vertical slices or to target different behavior for different platforms. You can also add an override for Player settings to any profile, allowing you to customize settings that relate to the platform module, which makes it easier to configure publishing settings for different profiles. Overall, this new feature reduces the need to rely on custom build scripts to customize the way that builds are managed in the Editor.

Finally, we also added the Platform Browser to enhance platform discovery inside the Editor. The Platform Browser is a place where you can discover all the platforms that Unity supports and create build profiles for any you choose.

Android and iOS browser support has arrived with Unity 6 Preview. Now you can run your Unity games anywhere on the web, without limiting your browser games to desktop platforms. Additionally, you can embed your games in a web view in a native app, or use our progressive web app template to make your game behave more like a native app, with its own shortcut and offline functionality.
With more bells and whistles, such as mobile device compass support and GPS location tracking, your web games will be able to react to wherever your gamers choose to play. Fine-tune your web games with an update to the Emscripten 3.1.38 toolchain and the latest support for WebAssembly 2023, our collection of newer WebAssembly language features such as sign-ext opcodes, non-trapping fp-to-int, bulk memory, BigInt, Wasm table, native Wasm exceptions, and Wasm SIMD. WebAssembly 2023 also supports up to 4 GB of heap memory, unlocking access to even more RAM for you to use on the newest hardware.

Additional mobile improvements coming with Unity 6 Preview include the latest Android tooling and support for Java 17 out of the box, as well as the ability to include debug symbols within your Android App Bundle. This will save you time when submitting to the Google Play Store and ensure you always have stack trace information in the Play Console.

The introduction of experimental support for a WebGPU backend marks a significant milestone for web-based graphics acceleration, paving the way for future leaps in graphics rendering fidelity for Unity web games. WebGPU is designed with the goal of harnessing and exposing modern GPU capabilities, such as compute shader support, to the web. This new web API achieves this by providing a modern graphics acceleration interface that’s implemented internally via native GPU APIs such as DirectX 12, Vulkan, or Metal, depending on the device you use. The WebGPU graphics backend is still in an experimental state, so we do not recommend using it for production. Can’t wait? Discover how to gain early access and test WebGPU in our graphics forum.

Unity delivered support for Arm-based Windows devices in 2023.1, enabling you to bring your titles to new hardware. With Unity 6 Preview, we are now delivering native Unity Editor support for Arm-based Windows devices.
This means you can now take advantage of the performance and flexibility that Arm-powered devices can offer to create your Unity games.

Unity’s DirectX 12 graphics backend is now fully production ready and available for use when targeting DX12-capable Windows platforms. This change follows a comprehensive array of improvements to both rendering stability and performance. Using DX12, Unity Editors and Players can benefit from significant improvements to CPU performance by using Split Graphics Jobs. Performance gains are expected to scale based on scene complexity and the number of draw calls submitted. Most noticeably, the DX12 graphics API unlocks support for a wide range of modern graphics capabilities in order to enable the next generation of rendering techniques, such as Unity’s ray tracing pipeline. Upcoming features will make use of DX12’s advanced capabilities, ranging from graphics to machine learning, to enable an unprecedented level of fidelity and performance.

Thanks to the ongoing partnership between Microsoft and Unity, two new Microsoft GDK packages are now available with Unity 6 Preview, 2022 LTS, and 2021 LTS. The Microsoft GDK Tools and Microsoft GDK API packages can be used for Microsoft gaming platforms with the same configuration and code base. These packages make it easier than ever to build for Microsoft gaming platforms like Windows and Xbox, using the same code to utilize Xbox services like user identity, player data, social, cloud storage, and more. The combined Microsoft GDK packages allow you to make games for Microsoft platforms with a shared code base and the ability to automate the build process through APIs. Additionally, new samples are provided to showcase various features available in the packages.

Previously, when targeting Xbox consoles and the Microsoft Store on Windows, the guidance was to install separate GDK packages provided by Microsoft and Unity.
This required the maintenance of a different branch of code for different Microsoft platform targets. With the new Microsoft GDK packages, this is no longer the case. It is also now possible to modify the MicrosoftGame.config file from an API directly on the build server. Combined with the new Build Profiles feature in Unity 6, bringing your games to the Microsoft gaming ecosystem from a single project has never been easier. If you’ve been using the legacy Game Core package or the Windows GDK package and want to migrate to the new Microsoft GDK packages (the Microsoft GDK API and Microsoft GDK Tools), follow the instructions detailed in this migration guide.

We support most popular XR platforms, including ARKit, ARCore, visionOS, Meta Quest, PlayStation VR, Windows Mixed Reality, and more. In Unity 6 Preview, you’ll find cutting-edge cross-platform features like mixed reality, hand and eye input, and improved visual fidelity. Many of these new features are now integrated into our revamped templates so you can get started more quickly.

Whether you want to expand your existing game with mixed reality or you’re making something entirely new, AR Foundation helps you incorporate the physical world into players’ experience in a cross-platform way. In Unity 6 Preview, we’ve added support for image stabilization on ARCore, as well as improved support for mixed reality platforms like Meta Quest, including features like meshing and bounding boxes.

To help you streamline your interactions, we’ve added a couple of major improvements to XR Interaction Toolkit 3.0 (XRI). This includes a new interactor called the Near-Far Interactor, enabling greater flexibility and modularity when customizing how interactors behave in your projects. We’ve also improved how we handle input in XRI with the addition of our new Input Readers, which streamline the input process and reduce code complexity across various types of input.
Lastly, we will ship a new virtual keyboard sample, giving you the ability to build and customize in-game keyboards in a cross-platform way.

More platforms now support the use of hands to interact with content. Our XR Hands package enables you to implement custom hand gestures (such as thumbs up, thumbs down, and pointing), as well as common OpenXR hand gestures. It includes samples to help you get started quickly. We’ve also included tools for creating, fine-tuning, and debugging your hand shapes and gestures so that your content is accessible to more people.

One way to improve the visual fidelity of your game is through a feature called Composition Layers, which is currently available as an experimental package. This feature renders text, video, UI, and images at much higher quality using native support for the runtime’s compositor layers, enabling clearer text, sharper outlines, and an overall better appearance with significantly reduced artifacts.

Unity 6 Preview accelerates the creation, launch, and growth of multiplayer games with the simplicity of integrated end-to-end solutions.

We’ve made the new experimental Multiplayer Center package (com.unity.multiplayer.center) available in the package registry. Multiplayer Center is a streamlined guidance tool designed to onboard you into multiplayer development. This central location in the Editor gives you access to the tools and services Unity offers for your project’s specific needs. Multiplayer Center presents interactive guidance based on your project’s multiplayer specifications, access to resources and educational materials, and shortcuts to deploy features and experiment rapidly with multiplayer capabilities.

We’ve released Multiplayer Play Mode version 1.0, enabling you to test multiplayer functionality across separate processes without leaving the Unity Editor.
You can simulate up to four players (the main Editor player plus three virtual players) simultaneously on the same development device while using the same source assets on disk. You can use Multiplayer Play Mode to create multiplayer development workflows that reduce the time it takes to build a project, run it locally, and test the server-client relationship.

We updated the Multiplayer Tools package to version 2.1.0, adding Network Scene Visualization as a new visual debugging tool. Network Scene Visualization (NetSceneVis) helps you visualize and debug network communication on a per-object basis in the Scene view of your project, with visualizations such as mesh shading and text overlays.

We added a Distributed Authority mode in Netcode for GameObjects version 2.0.0-exp.2 (com.unity.netcode.gameobjects) when paired with the new experimental Multiplayer Services SDK version 0.4.0 (com.unity.services.multiplayer). With Distributed Authority, clients have distributed ownership of, and authority over, spawned Netcode objects during a game session. The netcode simulation workload is distributed across clients, while the network state is coordinated through a high-performance cloud backend that Unity provides.

We improved the experience of Netcode for Entities with support for GameObjects to render debug bounding boxes. We also added the NetCodeConfig ScriptableObject, which contains most NetCode configuration variables and which you can customize without needing to modify code.

We’ve released the Dedicated Server package, which allows you to switch a project between the server and client role without the need to create another project. To do this, use multiplayer roles to distribute GameObjects and components across the client and server. Multiplayer roles allow you to decide which multiplayer role (client or server) to use for each build target.
This breaks down into:

- Content Selection: Provides UI and API for selecting which content (GameObjects, components) should be present or removed in the different multiplayer roles
- Automatic Selection: Provides UI and API for selecting which component types should be automatically removed in the different multiplayer roles
- Safety Checks: Activates warnings that help detect potential null reference exceptions caused by stripping objects for a multiplayer role

This package also contains additional optimizations and workflow improvements for developing dedicated server platforms.

The experimental Multiplayer Services SDK is a one-stop solution for adding online multiplayer elements to a game developed in Unity 6 Preview. Powered by Unity Gaming Services (UGS), it combines capabilities from services such as Relay and Lobby into a single new “Sessions” system to help you quickly define how groups of players connect together. The Multiplayer Services SDK version 0.4.0 (com.unity.services.multiplayer) enables you to create peer-to-peer (P2P) sessions while providing multiple methods for players to join those sessions, such as by a join code, by browsing a list of active sessions, or via “Quick Join.”

For this Unity 6 Preview milestone, several of these capabilities are still in an experimental state, which means they are not yet supported for production. We intend to rapidly transition them to Pre-release and Release states for a fully supported experience on Unity 6 that integrates your feedback. You can engage with us in our community forums and on our official Discord server.

Unity 6 Preview streamlines ECS workflows and resolves common pain points. As part of this effort, we changed the way that entities are stored in preparation for a future consolidation of Entities and GameObject workflows. Entity IDs are now globally unique, and you can now move entities efficiently from one world to another.
This does not impact ECS workflows, but it does disambiguate debugging by always showing exact entities. Additionally, the recent improvements delivered to ECS in Unity 2022 LTS are also available in Unity 6 Preview:

- ECS 1.1: Major physics collider workflow and performance improvements, plus 80+ fixes across the ECS framework
- ECS 1.2: Quality-of-life and performance improvements across Editor workflows, serialization, and baking, plus 50+ fixes and Unity 6 compatibility

Unity 6 Preview ships with Unity Sentis, a neural engine for integrating AI models into the runtime. Sentis makes new AI-powered features possible, like object recognition, smart NPCs, graphics optimizations, and more. Recent enhancements to Sentis focus on performance and on simplifying the experience of getting started.

We now support AI model weight quantization (FP16 or UINT8) in the Unity Editor if you want to reduce your model size by up to 75%. That’s a big savings when it comes to shipping games on mobile. Model scheduling speed was also improved by 2x, along with reduced memory leaks and garbage collection. Lastly, we now support even more ONNX operators.

To make it easier to find the right AI model for your project, we partnered with Hugging Face, the largest AI model hub in the world (600,000+ models). Now you can instantly find “grab and go” AI models for Unity Sentis to ensure easy integration.

Once you have the right model, you’ll need to hook it up to your game. To make that easier, we introduced a new Functional API that helps you build, edit, and chain AI models. It’s intuitive, stable, and optimized for inference.
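Hooking a model up to your game boils down to loading the imported ONNX asset and creating a worker to run it. The sketch below follows the names in the Sentis documentation (ModelLoader, Worker, BackendType), but the exact worker-creation signature has changed between package versions, so treat it as illustrative rather than definitive.

```
using Unity.Sentis;
using UnityEngine;

// Minimal Sentis inference setup: load a model asset and create a worker.
// Worker construction shown here matches recent Sentis versions; older
// versions used a WorkerFactory instead.
public class SentisRunner : MonoBehaviour
{
    public ModelAsset modelAsset;   // an imported ONNX model
    Worker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        worker = new Worker(model, BackendType.GPUCompute);
    }

    void OnDestroy()
    {
        // Workers hold GPU resources and must be disposed explicitly.
        worker?.Dispose();
    }
}
```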
The Backend API is still available for those of you who need a lower-level, fully customizable API with full control over memory management and scheduling. To learn more about Unity Sentis, check out our blog overview and documentation, or dive into the community.

The Unity Engine offers tools ranging from Visual Scripting to UI Toolkit to enhance your productivity and functionality. On top of existing tools, Unity 6 Preview comes with two major updates within the profiling tools portfolio, both to the Memory Profiler. First, graphics memory that was previously uncategorized is now measured and reported per resource (e.g., render textures and compute shaders). Second, reporting of resident memory is more precise – for example, memory that is swapped to disk is no longer counted towards it. These updates address direct feedback around the problem of understanding native memory use in particular.

To learn more about what’s in the Unity 6 Preview, check out the release notes for a comprehensive list of features and the Unity Manual for details on how to use them.

The Unity 6 Preview release is supported with weekly updates until the next version. Remember to always back up your work prior to upgrading to a new version; our Upgrade manual can assist with this. For projects in production, we recommend using Unity 2022 LTS for greater stability and support. Unity 6 Preview is most suitable for testing during the preproduction, discovery, and prototyping phases of your development process. However, if any code, functionality, or fixes from any Unity 6 version are incorporated in a live game, it may be subject to the applicable runtime fees if the game is upgraded to Unity 6 in General Availability (provided the Runtime Fee criteria are met).

The Unity 6 Preview release is an opportunity both to get early access to new features and to shape the development of future tech through your feedback.
We want to hear how we can best support you and your projects. Let us know how we’re doing on the forums, or share your feedback directly with our product team through the Unity Platform Roadmap.

803|blog.unity.com

Get over 80 tips to speed up in Unity with our latest productivity e-book

Each mouse click adds up when you work in the Unity Editor every day. To help you speed up your workflows, we updated our popular e-book, 80+ tips to increase productivity, for Unity 2022 LTS.

This new edition brings together numerous steps, settings, and workflows from across the Unity 2022 LTS toolsets and systems that each, in their own way, make creating in the Editor more efficient and fun. Whether you’ve just recently begun to learn Unity or have shipped multiple projects with it, we’re confident you’ll find plenty of helpful hints for getting things done faster in your game development with Unity.

There’s been no shortage of big product news from Unity lately, and many of the new features and workflows in Unity 2022 LTS are covered in our updated productivity guide. We included tips for making the most of the Universal Render Pipeline (URP) with Forward+ rendering, Renderer Features, and Decals; production-ready 2D tools like Sprite Atlas V2, the Splines package, and VFX Graph for 2D effects; and Material Variants for both the High Definition Render Pipeline (HDRP) and URP. But you’ll also pick up tons of time-saving gems that don’t make the headlines but can help you create faster.
Read on for a sampling of what’s in the main sections on Editor, artist, and developer workflows.

Editor workflows:

- Leverage new search capabilities like Query Builder to craft complex queries and explore your project
- Use Presets to customize the default state of anything in your Inspector, or to copy the settings of a component or asset, save them as an asset, and then apply the same settings to another item later
- Use Scene visibility and picking to hide and show objects in the Scene view: avoid incorrect clicks or a cluttered Hierarchy in complex scenes without changing objects’ in-game visibility, and conveniently select and edit specific GameObjects
- Display UVs, normals, tangents, and other Mesh information in the Inspector preview
- Use the Layers menu to toggle off the visibility of any Layers (such as UI) that may obscure your Scene view; lock a Layer to avoid changing its state accidentally
- If you frequently select the same objects in your scene, use the hotkey combos under Edit > Selection to quickly save or load a selection set
- Change colors in the Editor via Unity > Preferences > Colors to find certain UI elements or objects more quickly; adjust the Playmode tint to remind yourself when Play Mode is active so you don’t lose any changes you intended to save on exit

Artist workflows:

- Pick up pointers from the latest URP 2D sample, Happy Harvest, on how to use Sprite Atlas, 2D Tilemap, 2D skeletal animation, 2D lights, and 2D Sprite Shape
- Get tips to help you work in Prefab mode, and with nested Prefabs, Prefab variants, and more
- See how the Animation Rigging package can help bring your 2D characters to life
- Speed up your lightmapping with the Progressive Lightmapper
- Get tips for optimizing performance with Light Probes; for example, if you have set dressing or other static meshes that don’t require lightmapping, you can remove them from your lightmap bakes and use Light Probes instead
- Get updated URP tips, like the basic steps for setting up Decals, using the URP converters to correctly convert a project made with the Built-in Render Pipeline to URP, and a brief look at the Add Renderer Feature option for injecting scripts into the rendering process
- Get updated HDRP tips for HDRP Global Settings, the Volume framework, and lightmapping optimization

Developer workflows:

- Use the Enter Play Mode settings to reduce compilation time, and understand the effects of disabling the Reload Domain and Reload Scene settings
- Improve your debugging workflow with tips on how to use the Unity Debugger while in Play Mode: attach breakpoints within the code editor in order to inspect the state of your script code and its current variables at runtime
- Learn how you can use script templates to help create consistency in your code base across your development team
- Use custom windows and custom Inspectors to streamline workflows for your project needs
- Use the Platform Dependent Compilation feature to include or exclude certain sections of code based on the target platform for your build
- Organize your scripts into custom assemblies to promote modularity and reusability
- Use the Device Simulator to test your project on a range of devices directly in the Editor

Get the new Unity 2022 LTS productivity e-book. Find all of the technical e-books in Unity’s best practices hub or on the advanced best practices page in Unity documentation.
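One of the developer-workflow tips above, Platform Dependent Compilation, can be sketched in a few lines. The defines used here (UNITY_EDITOR, UNITY_ANDROID, UNITY_IOS) are built into Unity, so each branch is compiled only for its target:

```
using UnityEngine;

// Platform Dependent Compilation: only the branch matching the current
// build target (or the Editor) ends up in the compiled assembly.
public class PlatformGreeting : MonoBehaviour
{
    void Start()
    {
#if UNITY_EDITOR
        Debug.Log("Running in the Editor.");
#elif UNITY_ANDROID
        Debug.Log("Running on Android.");
#elif UNITY_IOS
        Debug.Log("Running on iOS.");
#else
        Debug.Log("Running on another platform.");
#endif
    }
}
```

The same mechanism works with your own defines added in Player settings, which pairs well with custom assemblies for per-platform code organization.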

805|blog.unity.com

Find a treasure trove of lighting and visual effects in our new match-3 sample Gem Hunter Match

Gem Hunter Match, a new official Unity sample, shows you how a 2D puzzle/match-3 game can stand out from the competition with eye-catching lighting and visual effects created in the Universal Render Pipeline (URP) in Unity 2022 LTS.

Download the sample, together with its mini-manual, and get ready to dive for riches in crystal blue waters populated with brightly colored jewels and sea creatures. You’ll learn how to prepare and light 2D sprites to add depth, apply a Sprite Custom Lit shader for shimmer, and create glare and ripple effects. Download Gem Hunter Match, and view the Gem Hunter Match script in Unity Samples.

Gem Hunter Match follows the URP 3D Sample and Happy Harvest as the latest in a series of samples, created by multiple teams at Unity, that illustrate the many capabilities of URP in Unity 2022 LTS for 3D and 2D multiplatform projects. At the end of this post, we link to more great URP learning resources.

This playable slice of a cross-platform match-3 puzzle game is available on the Unity Asset Store and Unity Samples. You can customize Gem Hunter Match with your own assets or gameplay, or reuse any of its sprites, shaders, effects, audio, textures, and scripts in a project of your own.

Bubbles, coral, and a watchful mermaid form the backdrop to the game board in Gem Hunter Match. Pearls, pink sapphires, ruby-red starfish, blue fish, golden clams, and sleepy sea turtles populate the board across three playable levels. Clear the gems and earn boosters and coins by matching three or more items. Boosters help you achieve the goals, but if you fail, you lose a heart. Wooden crates and rope are blockers; match three next to a wooden crate or underneath a piece of rope to remove them.

The simple game loop includes an inventory you can reuse. Here are its main elements:

- The Main scene: This screen lists all the playable levels, which are referenced from a ScriptableObject called LevelList, located inside the Data folder.
- The Level scene: This shows the setup for the gameplay. You need to clear the elements in the Goals section.
- End of level / The Shop: Access the shop when you fail or complete a level; buy yourself boosters, hearts, or other currency. All of the shop items are in the folder named Data/ShopItems (you can also add your own via Assets > Create > 2D Match). Items in the shop include:
- Stars: You collect these after completing each level; in actual match-3 games, stars are often a part of the metagame, decoration, or are used to advance the storyline.

2D puzzle/match-3 games are popular because they’re cute and colorful, easy and fun to play, and accessible to anyone from almost anywhere. They can also include beautiful artwork, but with their static camera, repetitive gameplay, and, in many cases, prebaked lighting and shadows, they’re not known for bleeding-edge light and visual effects. And yet there are plenty of ways you can add pops of sparkle and glimmer for extra fun.

A Sprite Custom Lit shader is one of the techniques used for creating the visual effects in Gem Hunter Match. This shader substitutes for scene lighting, allowing us to modify the 2D light texture information and control the lighting on each piece. The result is creative illumination of the sprites, like the shimmery effect that moves over the pieces.

The light position data is moved into the shader, eliminating the need for actual light objects in the scene, which also helps to keep it neat. The encapsulated per-object lighting in the shader works well for isolation and editing at scale, and improves performance where batching is possible.

With the light and shadow information held in the shader, only the color information is included in the sprites. The normal map is used by the 2D light system to calculate the direction of each pixel, ensuring it receives more or less light based on its position.
The mask map is used by lights that can affect a specific RGB channel. The Lights prefab in Gem Hunter Match level scenes contains the 2D lights for the grid. These lights affect the default Sprite Lit shader and are applied to the grid items included in the Sorting Layer that receives light.

The following image illustrates the steps in creating the sprites and how the Sprite Custom Lit shader fits into the process.

In Gem Hunter Match, a “fictional” light position is represented by the LightRotator GameObject, which is animated to create a glare effect off the gems. The modifications to the 2D light texture and the fabricated highlights with the Dot Product node are both used in the TileShader Shader Graph that’s applied to the gems in the game.

The Dot Product node can be useful in 2D projects when you want to do custom lighting. Dot Product measures how closely two vectors align. In the sample, the LightDirection position is compared to the apparent direction of each pixel in the normal map. The sampled black-and-white image can be used to add light to the sprite and update the values at runtime for all the pieces using the same shader.

Get Gem Hunter Match

The Radial Warp shader uses the URP 2D Camera Sorting Layer Texture setting. This handy feature gives you access to the graphics generated up to the indicated Sorting Layer in the URP 2D Renderer settings, which you can then use in Shader Graph to apply effects. In the Happy Harvest sample, the Camera Sorting Layer Texture is used to create a water refraction effect, and in Dragon Crashers, it’s used for smoke distortion. In this sample, we use it to apply a distortion that simulates a shockwave, adding extra visual appeal when you make a match. It’s the kind of effect that creates an impact your players will remember.

We hope you’ll download Gem Hunter Match, play and customize it, then try out its graphics techniques in your own projects.
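The dot-product lighting described above can be sketched outside of Shader Graph. Here is a minimal Python illustration (not the sample’s actual shader code; the function names are made up for the example): intensity is the clamped dot product between a normalized light direction and a per-pixel normal, so pixels facing the light brighten and pixels facing away stay dark.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_intensity(light_dir, normal):
    """Clamped dot product, as a Dot Product node would compute it.

    Returns 0.0 (facing away from the light) .. 1.0 (facing it head-on).
    """
    l, n = normalize(light_dir), normalize(normal)
    dot = sum(a * b for a, b in zip(l, n))
    return max(0.0, dot)

# A pixel whose normal points straight at the light is fully lit...
print(lambert_intensity((0, 0, 1), (0, 0, 1)))   # 1.0
# ...a perpendicular normal receives no light...
print(lambert_intensity((0, 0, 1), (1, 0, 0)))   # 0.0
# ...and angles in between fall off smoothly.
print(round(lambert_intensity((0, 0, 1), (0, 1, 1)), 3))  # 0.707
```

Animating the light direction over time, as the LightRotator GameObject does, sweeps the bright band across every sprite that shares the shader without any real light objects in the scene.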
All of these and more are covered in the Unite 2023 session, Lighting and AI techniques for your 2D game.

Don’t miss out on these other samples, e-books, and tutorials for URP:

- E-book: 2D game art, animation, and lighting for artists
- E-book: Introduction to the Universal Render Pipeline for advanced Unity creators
- Unite 2023: Accelerate your multiplatform development with the latest for URP
- Livestream: What’s new in Unity’s Universal Render Pipeline?
- Video tutorial: Happy Harvest: Custom crops and player notes
- E-book: The definitive guide to creating advanced visual effects in Unity

Make sure to join the conversation about the Universal Render Pipeline on the Unity Forums or join Unity Discussions.

>access_file_
809|blog.unity.com

All Unity Muse capabilities are now available in the Editor, plus 3 new updates

We’re excited to announce that Unity Muse users can now use all five Muse capabilities directly in the Unity Editor. By bringing AI abilities to where you create, you can now more easily and conveniently use Muse to complement your creative process and streamline development.

While you may already be familiar with Muse Sprite for creating original 2D assets and the recently improved Muse Texture generation for creating a breadth of unique textures for 3D objects, this update brings three exciting improvements:

- A new Editor-integrated version of Muse Chat provides project-aware responses.
- Muse Animate, now available to all users, allows you to create animations within the context of your project.
- LLM-powered Muse Behavior, which you can use to set up character interactions with an intuitive AI-assisted interface, is now available to anyone using Muse.

Following Muse’s launch last year, we asked for feedback on how we can continue to improve the chat experience. What we heard was an overwhelming desire to be able to troubleshoot errors without leaving the Editor. We also received feedback that it would be helpful for Muse Chat to automatically know more details of a project, to reduce the time spent giving context and get more relevant answers faster.

We’ve heard your feedback, and are excited to announce that Muse Chat is now available as an in-Editor package. This update also provides a foundation for providing project-aware responses. Muse Chat will now be able to retrieve key details about your project, including the Unity version, active render pipeline, input system used, target platforms, API compatibility, and other project settings.

One example of how this update will simplify troubleshooting is solving console errors.
When you have an error, you can now click on it and ask “How can I fix this error?” Muse will automatically retrieve information on what’s causing the issue and provide a suggestion for how to resolve it. Muse Chat will also continue to provide usable scripts that you can plug and play, just as it has before. Over time, Muse Chat will continue to evolve into a more intelligent chat-based assistant that can provide tailored answers and information, code snippets, and smarter integrations with the Editor.

Animation is a complex craft. Prototyping basic movements that can plug into game mechanics early can save you a lot of time in later stages of development. Muse Animate is perfect for helping you rapidly generate humanoid animations through natural language.

As with Muse Chat, we commonly heard that Muse Animate needed to be more deeply integrated in the Editor. Based on that resounding feedback, we’ve launched a new in-Editor Muse Animate prerelease package that’s available to all Muse users. With this new version, you can create Unity Animation Clips directly within the Unity Editor and easily retarget them to work with characters that use the Unity Humanoid Rig.

You can also edit your generated animations with Muse by selecting an animation to decompose it into multiple poses. From there, you can select effectors (little orange dots on some of the character’s joints) to more finely adjust the movements. Muse Animate will then process your changes and regenerate the animation.
The package also makes additional editing features available, such as extrapolated posing, loop to first pose, and transition duration. With this release, we are also actively working on enhancements to the existing AI animation model and introducing new ways to generate and modify animations, including the ability to create animations from sketches or videos.

We’ve received a lot of requests for a built-in behavior tree that would allow you to bring game characters to life with NPC interaction abilities. Last year, we started a closed prerelease of Muse’s Behavior capability – our take on LLM-powered decision trees – to allow you to create custom logic for characters and objects in an easy-to-use workflow. This set of features also included generative abilities to help you set up more complex interactions.

Now, the Muse Behavior prerelease package with LLM features is available to all Muse users. We’ve refined this package to make it easier to use and more intuitive to edit. Like a classic behavior tree model, Muse Behavior consists of nodes and branches that are human-readable, allowing you to create representations that read like stories.

Words within a node’s story are tagged as inputs used by the node’s internal logic. The words are then automatically converted into fields so you can embed data directly into the node. Want your character to speak? Add the talk node, provide the words, and just like that, your character will have dialog that’s triggered by player interactions.

Muse Behavior also includes a blackboard, which contains variables that can be reused in nodes across the graph. Simply click on the link button of a field, then choose from a menu of assignable options. Variables in the blackboard can be assigned values in code or through the Behavior Graph Agent component in the Inspector window.
This ability to set up and then duplicate repeatable actions makes it faster and easier to create complex and repeating interactions. With Muse Behavior, you can create custom actions that give you control over the high-level structure and narrative of your behavior trees. And don’t worry if you’re totally new to creating behavior trees – the node wizard guides you step by step through the process of creating a new action type and adding it to the graph. You can also use the LLM feature to automatically generate your tree. As we continue to improve Muse Behavior, you can expect more generative features to further simplify complex behavior tree setups.

These new packages and improvements are just the start. We’re continuing to innovate on existing capabilities like Muse Texture, where we’re leveraging new original research and proprietary models to generate true PBR materials for 3D objects. This means that you’ll be able to produce a multi-material UV texture that can wrap directly onto an object, and it will react more accurately to lighting. We also understand the importance of audio for setting the scene, so later this year, we’ll release Muse Sound so you can produce AI-generated sound effects from prompts, such as movement and environmental sounds.

At GDC, we showcased how you could use all five Muse capabilities together to customize a game loop in the garden scene of our URP sample project. Check out our session Unity Muse: Accelerating prototyping in the Unity Editor with AI to learn how you can use all of Muse’s abilities to quickly customize a project scene and gameplay.

We’ve updated the Muse onboarding experience to make it easier to start a free trial of Muse and add the Muse packages to your projects. Visit the new Muse Explore page to get started, and let us know what you think of the newest capabilities and improvements in Discussions.
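For readers new to the underlying model: a behavior tree is just nodes evaluated in order, reading and writing a shared blackboard of variables. The sketch below is a generic Python illustration of that pattern only – it does not use Muse Behavior’s actual API, and the node names and blackboard keys are invented for the example.

```python
class Blackboard(dict):
    """Shared variables that any node in the tree can read or write."""

class Action:
    """Leaf node: runs a function against the blackboard, returns success."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, bb):
        return self.fn(bb)

class Sequence:
    """Branch node: ticks children in order, stops at the first failure."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        return all(child.tick(bb) for child in self.children)

# Hypothetical tree: greet the player only when they are in range.
def player_in_range(bb):
    return bb["player_distance"] < 2.0

def talk(bb):
    bb["dialog"] = "Welcome, traveler!"  # like a talk node's text field
    return True

greet = Sequence(Action(player_in_range), Action(talk))

bb = Blackboard(player_distance=1.5, dialog=None)
greet.tick(bb)
print(bb["dialog"])  # Welcome, traveler!
```

The blackboard plays the same role described above: because the `talk` action reads its data from shared fields rather than hard-coded values, the same node can be reused across the graph with different inputs.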

>access_file_
811|blog.unity.com

Valeo unveils in-car XR racing game at SXSW

“Are we there yet?” What parents wouldn’t give to banish this phrase forever from long car trips, or to have some peace and quiet when driving their family around day-to-day! This was the challenge a team of Valeo software engineers tackled when they developed Valeo Racer, which was unveiled for the first time at South by Southwest (SXSW) in Austin, Texas this year.

Valeo Racer is an in-vehicle, extended reality (XR) game that combines the real-world driving environment and virtual 3D elements to create a unique and immersive experience. Passengers compete against each other, playing on their phones or tablets connected to the car’s WiFi. Players control their race car to score as many points as possible by collecting coins while avoiding obstacles, which include the other real cars on the road.

Valeo Racer is the first XR video game to combine live video, vehicle environment perception, and digital gaming elements. The infinite runner game is generated by a new software stack developed by Valeo. It uses the vehicle’s existing Advanced Driver Assistance Systems (ADAS), including cameras, radar, and ultrasonic sensors, as well as artificial intelligence perception algorithms, to process the car’s real-time environment data through the Unity Runtime and generate game elements.

Unity Runtime is a proprietary component of the Unity Engine that handles critical parts of a game on the end device: rendering high-performance graphics, managing user inputs and interactions, coordinating game components, and supporting real-time physics simulation, animation, scripting languages, asset management, and networking. By using the Runtime, Valeo was able to give passengers the option to play a mixed reality game driven by real-time data harnessed from the car’s sensors, on a phone or tablet connected to in-vehicle WiFi – and even compete against each other during trips.

Why did an international automotive supplier develop Valeo Racer, beyond just the fun of the game?
The automotive landscape is rapidly evolving, from electric vehicles and advanced autonomous functions to reinventing the driver and passenger experience. In-vehicle entertainment has become an important element of enhancing the user experience.

Valeo CTO Geoffrey Bouquot and Unity’s Nick Facey, managing producer for Unity Industry, got together with Elizabeth Hyman, president and CEO of the XR Association, at SXSW for a fireside chat titled “Cruising with Augmented Reality: Exploring Entertainment in Autonomous Cars.” Bouquot emphasized the limitless possibilities of using automotive sensors and software for applications beyond their primary functions.

“By 2030, almost all cars will be equipped with cameras and other sensors. Together with the associated software, these technologies support safety, electrification, and sustainability in the next generations of mobility,” Bouquot said. “Beyond that, it offers a world of opportunities for other applications, such as gaming, education, and so much more. It’s exciting to explore these new frontiers with our partners in automotive, gaming, and other creative industries.”

Facey added, “A lot of the focus has been on the driver, such as bringing car signals into better graphics engines, Unity being one of them. Today, we have really good graphics and real-time information going to the driver to make driving safer, better, more efficient. Now we’re bringing the passengers into that experience. This convergence of technology is almost always a good thing for consumers.”

Valeo is a world leader in automotive sensors and cameras and has also been working with augmented reality for more than 20 years. The parking guidance displays that you see overlaid on your backup camera video feed, and your car’s head-up display? Those are AR features developed by Valeo and already available on millions of vehicles.
The company’s new XR software development kit will offer game developers the means to create new types of games that utilize a car’s existing onboard cameras, sensors, perception algorithms, and artificial intelligence to reimagine the in-vehicle gaming experience.

“As the leader of the XR Association, the industry trade association for virtual, augmented, and mixed reality, I'm always looking for opportunities to identify and showcase the latest advancements and use cases for immersive technology,” Hyman said. “While Valeo Racer is a gaming experience, it is also a platform that will inspire developers to create new products and experiences in a way that marries up the advancements of autonomous vehicles with immersive technology.”

Valeo Racer is not a signal that the automotive tech company is entering the gaming space. Rather, it’s a demonstration of what Valeo’s XR software development kit could bring to the automotive industry, and it’s a proof of concept for a new type of in-car entertainment. As vehicles reach more advanced levels of autonomy, XR will offer even more opportunities to create new experiences for both passengers and drivers.

Learn more about how Unity can help you with your XR project. Missed out on trying the Valeo Racer XR game at SXSW 2024? Here’s what a few people had to say about it.

>access_file_
812|blog.unity.com

See the Unity 2022 LTS updates to two of our biggest e-books: URP and HDRP for advanced users

The Universal Render Pipeline (URP) and the High Definition Render Pipeline (HDRP) are built to help you scale and deliver your games for wide platform reach with the best possible visual quality and performance.

We created two technical e-books to give Unity artists, technical artists, and developers a better onboarding experience and help you harness the wide-ranging capabilities of URP and HDRP. We’re happy to announce that both guides are now updated to include all relevant features in Unity 2022 LTS.

The Universal Render Pipeline for advanced Unity creators and Lighting and environments in the High Definition Render Pipeline are written by Unity and external technical experts. Each guide provides a treasure trove of illustrated step-by-step instructions and best practices for creating high-quality, performant graphics with your chosen pipeline. With the HDRP guide weighing in at 186 pages and the URP one at 166 pages, these are two comprehensive resources you can reference throughout the planning and development of your Unity 2022 LTS-based projects.

Let’s look at what’s in each of the guides.

URP is a multiplatform rendering solution built on top of the Scriptable Render Pipeline (SRP) framework. It is the successor to our Built-in Render Pipeline and is designed to be efficient for you to learn, customize, and scale to all Unity-supported platforms. In Unity 2022 LTS, URP provides the majority of the functionality offered by the Built-in Render Pipeline, and in certain areas exceeds it.
Our top goal is for URP to be the leading renderer for mobile, XR, and untethered hardware. The URP e-book will help you migrate your Built-in Render Pipeline-based projects to URP, or start a new project based on URP.

The updates are threaded through almost every section of this latest edition of the guide – updated instructions for setting up and applying the myriad capabilities of URP, plus new links, images, and code snippets – so you can rely on it to match your experience using URP in Unity 2022 LTS as accurately as possible. There are new additions and changes for areas like applying decals, URP quality settings and converters, comparing rendering paths (now including Forward+), Full Screen Shader Graph including custom post-processing, LOD Crossfade, the SubmitRenderRequest API, and much more. To show you how comprehensive the URP e-book is, here’s a full list of the topics and workflows it covers.

The URP e-book concludes with an introduction to the four environments included in the URP 3D Sample, which is available in the Unity Hub. Each environment has a distinct art style that showcases the different lighting and visual effect capabilities of URP for multiple platforms. You can also explore the URP 3D Sample through this short walkthrough video.

HDRP is Unity’s high-fidelity SRP, built to target modern (compute shader-compatible) PC and console hardware. It utilizes physically based lighting techniques, linear lighting, HDR lighting, and a configurable hybrid Tile/Cluster Deferred/Forward lighting architecture.

We added so much new information to the 2022 LTS version of the HDRP e-book that it’s close to double the length of the previous one!
Just like the URP guide, this one has new images, links, and information in many of its sections, with the biggest additions as follows.

The water system

A 30+ page deep dive into the new water system covers waves, wind effects, swells, ripples, foam effects, decals, caustics, underwater scenes, water scripting, and much more. In addition to the e-book updates, you can also dive deeper into the water system in HDRP through this video tutorial.

Terrain

From sparkling water to beautiful landscapes: A whole new Terrain section covers texturing and detailing, trees and vegetation (including how to work with SpeedTree), the Terrain Tools package, painting terrain, ray tracing for terrain, and a look at the HDRP Terrain demo.

Shaders and materials

We also cover HDRP materials in detail in the new edition, with a section explaining material samples, variants, and properties; subsurface scattering; translucency; decals; the HDRP Master Stack; Full Screen Shader Graph; and volumetric Shader Graph fog.

Clouds

The section on creating clouds is expanded to include steps for atmospheric and sun-based lighting and blending between two distinct cloud systems.

Other key topics covered in the HDRP guide include:

You can find all of the advanced e-books from Unity in the best practices hub and best practices page in the Unity Manual.

>access_file_
813|blog.unity.com

Mazda and Unity: Pioneering a new future for automotive cockpit HMI

With market-leading multiplatform support and efficient development workflows for user experiences (UX), the Unity Engine and Editor are becoming the go-to solution for carmakers developing their next-generation in-vehicle Human-Machine Interfaces (HMI). On March 7, 2024, Unity Japan publicly announced a partnership with Mazda Motor Corporation to embed Unity in future Mazda vehicles. In a conversation with Seiji Goto, general manager of Infotainment and Cockpit Electronics at Mazda, we gained insight into their perspective on HMI and Unity.

A vision for 2030: Driving forward with Mazda

As part of Mazda’s ambitious 2030 roadmap, research and development is accelerating in many areas, including HMI. The aim is to take on the challenge of simultaneously improving safety and value for customers through intuitive, great-looking, and responsive UX. Mazda will work directly with Unity to create a more “human-centric” in-vehicle experience.

“Drivers process a variety of information while driving, and we believe it is important for them to be able to recognize and understand information inside and outside the car intuitively, and to operate the car intuitively,” explains Mr. Goto.

The current world of HMI

Over the last few years, the amount of information passed to drivers and passengers in vehicles has increased. Goto, who joined Mazda in 2015, points to the move from hard disks to cloud-connected vehicles, and the continuous growth of data to be managed for the navigation systems alone. Throughout the industry, the same is true for the recent advent of Advanced Driver Assistance Systems (ADAS), where the amount of information displayed has scaled with system performance. A key challenge is to convey relevant information to the driver in an easy-to-understand manner while keeping the system quick to react and distraction-free.

Bridging the technology gap and the human-machine gap

In-vehicle HMIs now require technologies that were pioneered in the games industry.
Systems like a scene tree for 3D graphics, animation blending, or easily exchangeable prefabs are challenging to build from scratch, but standard in the video game industry. These technologies are ideal for tackling the challenges of modern HMI.

“By utilizing Unity’s expertise in real-time 3D rendering for our user interface to spatially represent information from many car systems, we will be able to reduce the time and burden on the driver to recognize and understand information, realizing a safer and more convenient driving experience,” says Goto.

Which UI works best differs between individuals. But Goto believes that by using Unity, the HMI can be personalized to meet each driver’s individual requirements.

Unity and Mazda: A strategic partnership

For carmakers, it’s important to have an integrated development environment that allows designers, developers, and other contributors to iterate on the project efficiently. Over the course of Mazda’s search for a toolchain to power their next-generation HMI, Unity emerged as an innovative, future-looking solution. Goto outlines a multitude of reasons that make Unity a clear choice for automotive HMI:

- An active community provides a trove of documentation, tools, and solutions.
- The ability to tap into a large user base of game developers makes it easy to hire Unity experts anywhere in the world.
- Unity has a track record of multiplatform adaptability that reduces the risk of long-term technological changes.
- The development tools have ease-of-use benefits.

The partnership between Mazda and Unity Technologies Japan Corporation is a milestone in automotive HMI development.
“Mazda is accelerating research and development in all areas under the 2030 Management Policy,” said Michihiro Imada, Mazda’s executive officer in charge of Integrated Control System Development. “In the cockpit HMI area, Mazda will continue to evolve the interface between the human and car based on the ‘human centric’ development concept to deliver exciting mobility experiences. Specifically, Mazda will take on the challenge of further improving safety and convenience by enabling intuitive human operation and creating new value for vehicles.”

Mr. Imada continues, “By working with Unity, which is highly regarded globally for its technical capabilities and high quality in the rapidly innovating game industry, Unity can offer graphical user interface (GUI) solutions in the cockpit HMI and advance Mazda’s goal of human-centric vehicle engineering.”

Future outlook

With the complex processes involved in creating an automotive HMI experience, there is a lot of exchange between departments such as marketing, manufacturing, UX design, and software engineering. On top of embedded HMI, Unity’s real-time 3D (RT3D) capabilities are used for VR-based UX testing, prototyping, engineering and design visualizations, car configurators, operational digital twins, and other applications in the automotive sector.

Mazda believes that it is possible to introduce Unity in each of these departments, and that doing so successfully will let teams communicate through the same development environment. This will help make the work itself more enjoyable and encourage more customer-oriented proposals. By building an open development environment and system, better products can be created.

Learn more about how Unity can boost your HMI project at unity.com/hmi

>access_file_
814|blog.unity.com

How to maintain control and transparency with in-app bidding

Compared to traditional waterfall instances, in-app bidding can be advantageous. However, one concern raised in the context of in-app bidding is the loss of manual tweaks to fill ad requests. Given the automation involved, in-app bidding can raise concerns about losing control and transparency. To help mitigate those concerns, there are tools and services that help you maintain transparency and control. This article looks at three key pillars for addressing them: testing, granular reporting, and understanding your ad experiences.

Maintain control through robust testing

In the automated in-app bidding environment, you need to be able to test, learn, and adapt ad strategies to see long-term success. A/B testing gives you a greater understanding of your monetization metrics, from average revenue per user (ARPU) to retention. Knowledge is power, and A/B testing is the only way you can truly measure the impact of different in-app bidding strategies. Running an A/B test can give you the answers you need to feel secure in this new landscape.

KPIs you can look at to help determine success include overall average revenue per daily active user (ARPDAU), ARPU growth, an increase in the number of impressions per user, and fewer managed instances. Tracking ARPDAU before and after you implement bidding measures overall revenue, not just performance by network. In addition, A/B testing gives you granular insights into your strategy so that you can continue to optimize and build a dynamic marketplace in this new ecosystem.

Unity LevelPlay has a robust A/B testing solution that allows you to test a wide range of variables, including bidding vs. traditional waterfalls, mediation groups, new networks, instance pricing, and ad strategy.
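As a rough illustration of the ARPDAU comparison described above – a generic sketch, not LevelPlay’s reporting API, and with revenue figures invented for the example:

```python
def arpdau(daily_revenue, daily_active_users):
    """Average revenue per daily active user for one day."""
    return daily_revenue / daily_active_users

# Hypothetical A/B test: control keeps the waterfall, test enables bidding.
control = [arpdau(rev, dau) for rev, dau in [(540.0, 12000), (525.0, 11800)]]
test = [arpdau(rev, dau) for rev, dau in [(580.0, 12100), (570.0, 11900)]]

avg = lambda xs: sum(xs) / len(xs)
lift = (avg(test) - avg(control)) / avg(control)
print(f"ARPDAU lift from bidding: {lift:+.1%}")  # ARPDAU lift from bidding: +7.1%
```

Because ARPDAU divides all ad revenue by all daily actives, it captures cannibalization between networks that per-network eCPM reporting alone would miss.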
Among LevelPlay customers who A/B tested integrating the Unity Ads bidder, the test group won in 78% of cases, and those who applied the changes saw ad ARPDAU increases of up to 7%. Overall, A/B testing brings you peace of mind when transitioning to an in-app bidding world, confirming the switch is right for your app. After switching, a robust A/B testing tool continues to give you control over the bidding environment, allowing you to test and implement new optimizations. Learn more about A/B testing.

Gain transparency with deeper reporting

Having transparency into your ad performance can help you make informed changes to accelerate your app’s growth. In-app bidding is an automated setup that runs based on general best practices, so the ability to make real-time, data-informed choices for your app is incredibly valuable, giving you a leg up on competitors.

Features such as real-time pivot from LevelPlay provide granular visibility into monetization performance and the ability to make instant changes. Publishers can detect performance changes as they occur, compare revenue over time, and analyze network performance. Trackable KPIs such as eCPM buckets, ad latency, impressions, DAU, DEU, and sessions per DEU can help publishers understand not only ad performance, but user engagement patterns as well.
These metrics can be sliced and diced by country, time, ad source, and more to give a deeper level of insight and determine what changes, if any, are needed.

“In 2024, Unity LevelPlay remains focused on giving developers more control and transparency through granular analytics that provide actionable data in real time, to help optimize their monetization strategies and maintain user experience to drive higher retention and ARPU,” says Omer Adato, Senior Director of Product Management at Unity.

Experience ads as your users do

Another concern is that because in-app bidding operates in real time, your ad space can be filled quickly without you even knowing. Transparency into the ads being shown in your app is therefore key to ensuring you’re selling ad space to advertisers that won’t disrupt the user experience. After all, seeing a frustrating ad or having a glitchy experience can quickly cause users to churn.

Gaining oversight of your users' ad experiences supports not only branding and safety, but revenue as well. By blocking troublesome ads initially, you cultivate a healthier ecosystem until networks address the issues. Once resolved, lifting blocks on those revenue sources restores income while maintaining a quality user experience. Overall, targeted blocking and unblocking empowers monetization through cooperation on ad quality.

Ad Quality from LevelPlay is designed to give full transparency into your ad experience. You can access a gallery of all the ads shown in your app, ad analysis, and user journeys. Publishers can also define a set of triggers in Ad Quality, called custom notifications.
This feature proactively alerts publishers if a specific creative, advertiser, content rating, or buggy ad appears in your app, so that you can immediately respond to any critical needs. While in-app bidding runs automatically in the background, a feature like Ad Quality gives you much-needed transparency into how your ad space is being filled.

“Ad Quality helps us ensure that ad content in our apps is appropriate for our audiences. We get notified when titles with high content ratings are displayed, so we know when to reach out to the networks. This visibility is key for us.” – Stefano Accossato, Head of UA & Ad Monetization at TutoTOONS

In an in-app bidding environment, working with a mediation platform that has the right features to maintain control and transparency over your ad strategy can make the transition from traditional waterfalls easier and help you create a top-performing ad strategy.

Learn more about Unity LevelPlay and how you can get started with A/B testing, real-time pivot, and Ad Quality.

>access_file_
815|blog.unity.com

Unity Asset Bundles tips and pitfalls

Asset Bundles are archive files containing assets for your game. They are used to split your game into logical blocks, allowing you to deliver and update content on demand while making your game build smaller. They’re also commonly used to deliver patches and DLC for your game. Asset Bundles can contain all sorts of assets, such as Prefabs, Materials, Textures, Audio Clips, Scenes, and more, but they can’t include scripts.

Previously, it was necessary to build Asset Bundles manually, marking each asset accordingly, then tracking and resolving dependencies by yourself at runtime. Nowadays, all of this is taken care of by the Addressables system, which builds Asset Bundles for you based on the Asset Groups you define, and loads and handles dependencies transparently.

While there are a lot of guides on how Asset Bundles work, I’d like to cover some lesser-known aspects of the system, with a focus on game performance, runtime memory usage, and general compatibility.

Whenever you attempt to use an asset contained within a bundle, Unity ensures the corresponding bundle is loaded into memory, then in turn loads the asset into memory. While it’s possible to partially load specific assets within an Asset Bundle, the opposite is not allowed: as soon as an asset within a bundle is loaded, it can only be unloaded when the entire group of assets is no longer needed.

As a result, if your bundle structure is not ideal, you will often see runtime memory usage increase as the game goes on, leading to deteriorating performance and potential crashes. For this reason, it’s best to avoid bundles with a large number of assets in them, as they will end up taking a lot of runtime memory and turn into a bottleneck for your game.
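To make that load granularity concrete, here’s a minimal sketch of the core API (the bundle path and asset name are hypothetical):

```csharp
using System.IO;
using UnityEngine;

public static class BundleLoadingExample
{
    public static GameObject LoadHeroPrefab()
    {
        // Loading any asset pulls the whole bundle into memory first.
        string path = Path.Combine(Application.streamingAssetsPath, "characters.bundle");
        AssetBundle bundle = AssetBundle.LoadFromFile(path);

        // Individual assets can then be loaded on demand from the bundle...
        var hero = bundle.LoadAsset<GameObject>("Assets/Prefabs/Characters/Hero.prefab");

        // ...but memory is reclaimed per bundle, not per asset:
        //   bundle.Unload(false); // frees the bundle's data, keeps loaded assets alive
        //   bundle.Unload(true);  // also destroys every asset loaded from the bundle
        return hero;
    }
}
```

Because unloading is all-or-nothing, grouping assets by lifetime is what keeps `Unload(true)` safe to call without destroying something still in use.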
Instead, aim to pack assets based on how frequently they are going to be loaded and used together.

Asset Bundles are generally forward compatible: bundles built with older versions of Unity will in most cases work in games built on newer versions of Unity (assuming you do not strip the TypeTree info, as covered later). The opposite is not true, so bundles built on a version of Unity that’s newer than the one used for your game build are unlikely to load correctly. As the difference in version between the bundle and the engine used for the game build grows, compatibility becomes less likely.

There are also cases where the bundle might still load, but the objects contained within it cannot be loaded correctly in the new version of Unity, likely due to a change in the way the objects are serialized. In that case, you’ll need to rebuild your bundles to maintain compatibility. There’s also a performance cost to loading bundles from a different version of Unity, as covered in the TypeTree section below.

For these reasons, it’s recommended to test existing Asset Bundles thoroughly whenever you update the Unity version of your game build, and to update the bundles themselves whenever possible.

Asset Bundles do not generally offer cross-platform support. In the Editor you can load bundles built for another target platform, but on-device this will fail. This is true even for bundles that contain assets that are not necessarily platform-specific. The reason for this limitation is that data might be optimized or compressed in ways that only work for the target platform. Bundles can also contain platform-specific data that should not be shared between different platforms, so this prevents leaking content that is not intended for another platform.

The Loading cache is a shared pool of pages where Unity stores recently accessed data for your Asset Bundles.
This cache is global, so it’s shared between all Asset Bundles within your game. It was introduced fairly recently, I believe in Unity 2021.3, then backported to 2019.4. Before this, Unity relied on separate caches for each Asset Bundle, which resulted in significantly higher runtime memory usage (covered below in “Serialized File Buffers”).

By default, the cache is set to 1MB, but it can be changed by setting AssetBundle.memoryBudgetKB. The default size should be enough in most cases, although there are some scenarios where changing it might benefit your game. For example, if you have bundles with a lot of small objects in them, increasing the cache size might lead to more cache hits, improving performance.

Along with your game assets, Asset Bundles include a bunch of extra information and headers, used by Unity to know which assets to load and how, as well as a dedicated cache (depending on the Unity version you are using).

A map of the assets in a bundle is what allows you to look up and load each individual asset in the bundle by name. Its size in memory is normally not a concern, unless you have exceptionally large asset bundles containing thousands of objects.

The Preload Table lists the dependencies of each asset contained within your bundle. It’s used by Unity to correctly load and construct assets. It can become quite large if the assets contained in your bundle have a lot of explicit and implicit dependencies, as well as cascading dependencies coming from other bundles. For this reason (and many others), it’s a good idea to design your bundles to minimize the dependency chain.

TypeTrees define the serialized layout of the objects contained in the Asset Bundles. Their size depends on how many different types of objects are contained within the bundle.
For this reason, it’s a good idea to avoid large bundles where objects of many different types are mixed together.

TypeTrees are necessary to maintain compatibility when upgrading the Unity version of your game build while still loading Asset Bundles built on older versions of the engine. For example, if the format or the structure of an object has changed, TypeTrees allow Unity to do a Safe Binary read and attempt to load it regardless. This has a performance cost, so in general it’s recommended to update bundles whenever you update the engine.

TypeTree generation can optionally be disabled by setting the BuildAssetBundleOptions.DisableWriteTypeTree flag when building your bundles. This will make your bundles and the related memory overhead smaller, but it also means that you’ll need to rebuild all your bundles whenever you update the engine version of your game build. This is especially painful if you rely on bundles built by your players for user-generated content, so unless you have a very strong reason to do so, it’s recommended to keep TypeTrees enabled.

One case where TypeTrees can normally be safely disabled is for bundles included directly in your game build. In this case, upgrading the engine would require making a new game build and new Asset Bundles anyway, so backward compatibility isn’t relevant.

Each bundle has its own TypeTrees, so having multiple small bundles containing the same types of objects will slightly increase the total size on disk.
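For bundles shipped inside the game build, stripping TypeTrees is a one-flag change in an Editor build script. A minimal sketch (the menu item, output folder, and build target are placeholder choices, not from the article):

```csharp
using UnityEditor;

public static class EmbeddedBundleBuilder
{
    // Editor-only script: builds bundles without TypeTree info.
    // Only do this for bundles that ship with the game build itself,
    // since stripped bundles must be rebuilt on every engine upgrade.
    [MenuItem("Build/Embedded Asset Bundles")]
    public static void Build()
    {
        BuildPipeline.BuildAssetBundles(
            "Assets/StreamingAssets",                     // hypothetical output folder
            BuildAssetBundleOptions.DisableWriteTypeTree, // smaller bundles, no Safe Binary read
            BuildTarget.StandaloneWindows64);             // hypothetical target
    }
}
```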
On the other hand, when loaded, TypeTrees are stored in a global cache in memory, so you won’t incur a higher runtime memory cost if multiple asset bundles store the same types of objects. Note: Since Unity 2019.4, this has been replaced by the global, shared Loading cache, as described above.

When an Asset Bundle is loaded, Unity allocates internal buffers to store its serialized files in memory. Regular Asset Bundles contain one serialized file, while Streaming Scene Asset Bundles contain up to two files for each scene in the bundle. The size of these buffers depends on the platform: on Switch, PlayStation, and Windows RT they are 128KB, while all other platforms use 14KB buffers. For this reason, it’s best to avoid having a large number of very small asset bundles, since the memory occupied by these buffers might become significant compared to the assets they actually provide.

A CRC (Cyclic Redundancy Check) is used to do checksum validation of your Asset Bundles, ensuring the content delivered to your game is exactly what you expect. CRCs are calculated based on the uncompressed content of the bundle.

On consoles, Asset Bundles are normally included as part of the title installation on local storage or downloaded as DLC, which makes CRC checks unnecessary. On other platforms, such as PC or mobile, it’s important to run CRC checks on bundles downloaded from a CDN, both to ensure the file is not corrupted or truncated (leading to potential crashes) and to avoid potential tampering. CRC checks are fairly expensive in terms of CPU usage, especially on consoles and mobile.
For these reasons, it’s normally a good compromise to disable CRC checks on local and cached bundles, enabling them only on non-cached remote bundles.

By default, Unity offers three ways to look up assets within bundles:

- Project Relative Path (Assets/Prefabs/Characters/Hero.prefab)
- Asset Filename (Hero)
- Asset Filename with Extension (Hero.prefab)

While this is convenient, it comes at a cost. To support the last two methods, Unity needs to build lookup tables, which can consume a significant amount of memory for large bundles. In addition, loading assets using a method other than Project Relative Path incurs a performance cost, again because of the table lookup required.

For these reasons, it’s recommended to avoid those methods. You can even disable them when the Asset Bundles are built, which will improve loading performance and reduce runtime memory usage. To do that, set these two flags when building your bundles:

- BuildAssetBundleOptions.DisableLoadAssetByFileName
- BuildAssetBundleOptions.DisableLoadAssetByFileNameWithExtension

To learn more about asset management, share feedback, or engage with the community and Unity staff, check out the Asset Management forum.
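As a closing sketch, here is how those two lookup-disabling flags might be combined in a build script, and what remains valid at load time (the output folder, build target, and asset path are hypothetical):

```csharp
using UnityEditor;
using UnityEngine;

public static class StrictLookupBundles
{
    // Editor-only: build bundles that only support full-path lookups.
    public static void Build()
    {
        BuildPipeline.BuildAssetBundles(
            "Bundles",  // hypothetical output folder
            BuildAssetBundleOptions.DisableLoadAssetByFileName
                | BuildAssetBundleOptions.DisableLoadAssetByFileNameWithExtension,
            BuildTarget.StandaloneWindows64);
    }

    // At runtime, only the project-relative path still works:
    public static Object Load(AssetBundle bundle)
    {
        return bundle.LoadAsset("Assets/Prefabs/Characters/Hero.prefab"); // OK
        // bundle.LoadAsset("Hero");        // fails with the flags above
        // bundle.LoadAsset("Hero.prefab"); // fails with the flags above
    }
}
```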

>access_file_
816|blog.unity.com

Social networks are not enough: why you should diversify your app marketing channel mix

App marketers now have more capabilities than ever before to reach new audiences. Yet, despite the wealth of options available, many app marketers choose to rely solely on social ad networks (SANs) for their user acquisition (UA) efforts. This is partly because many app marketers believe that SANs are sufficient for effective UA. But this misses the significant impact that SDK networks can offer apps in terms of scale, optimizations, and resilience, all of which is left behind when marketers choose to utilize only SANs.

Here we’ll address the reasons app marketers should be leveraging SDK networks, the common misconceptions that lead app marketers not to do so, and the impactful resources left on the table when choosing not to diversify UA marketing channels.

Capturing untapped growth opportunities

Scalability is the measure of app success, and effective scaling requires access to as wide a pool of convertible users as possible. While there’s no doubt that social ad networks offer substantial growth opportunities, they are not the totality of the market, or even close to it. In other words, limiting marketing channels to SANs means losing out on the untapped scale that is available through SDK networks, and as a consequence limiting your app's growth potential. Using SDK networks in tandem with SANs mitigates this loss of scale.

Resiliency to market and channel policy changes

Expanding beyond social networks also has the added benefit of resiliency to market and channel policy changes. SANs operate under a set of requirements different from SDK networks, needing to conform to standards unique to them. While your app may currently be compliant with these guidelines, they continue to evolve and update. When they change, your app would need to quickly adapt or stop running UA. Diversifying your marketing mix creates a buffer with additional avenues for growth.

And that’s just at the regulatory level.
On a business level, the companies behind these social networks frequently change their policies. A change in policy could mean extensive work to meet the new requirements, which could then result in a loss of growth. By adding SDK networks to your marketing mix you can create a more resilient UA strategy that isn’t totally reliant on one set of policies that are subject to change.

A bigger toolbox for optimizations

A significant benefit of a diverse UA marketing mix is having multiple processes for reaching high-quality users. Each SDK network and SAN has its own optimizations for finding the right users for your app. But with differing solutions come differing results. This is a weakness when marketing channels are siloed from one another, or used in isolation. Used as part of a comprehensive and diverse marketing strategy, however, it means you get access to more tools to reach high-quality users at the right price.

Each network prioritizes users differently. So while SANs may miss the users you actually want, SDK networks could help you fill in those gaps, and vice versa. The larger your toolbox of algorithmic solutions, the better you can optimize and the more likely you’ll be able to find the right users for your app.

Common misconceptions: Implementing SDK networks

While there are many clear upsides to integrating SDK networks into your UA mix, some app marketers have been reluctant to do so. A large part of this reluctance is connected to the higher investment needed, both in terms of personnel and capital. But this reluctance is for the most part based on two common misconceptions.

Misconception 1: SDK network implementation and optimization is highly manual

A common myth around SDK network integration and optimization is that it requires a lot of manual management in order to drive results. While this was true in the past, the industry has since become far more efficient and automation driven.
This is particularly true for optimizations. Thanks to advancements like automated bid optimizers, much of the manual heavy lifting has been taken out of the equation. The ironSource Ads tCPA optimizer, for example, uses machine learning to optimize bids based on certain actions. In the past, this would all be done manually, but it’s now a streamlined process that only requires setting which action and price you wish to optimize for.

Misconception 2: ROAS is difficult to solve for

An important metric for utilizing SDK networks successfully is return on ad spend (ROAS): the measure of revenue generated relative to the cost of running the campaign. To effectively leverage SDK networks, app marketers need to know what ROAS goal they should be solving for. Without it, spending could exceed revenue, meaning that your UA could cost you more than it earns you.

A common concern is that efficient ROAS is tough to identify, and that generating a reliable ROAS benchmark requires a deep understanding of SDK networks and their optimizations. While this was historically the case, the industry has evolved to account for this difficulty. Most SDK networks offer account managers to assist marketers in calculating their ideal ROAS. Plus, solving for ROAS is now an established science - with the correct formulas and tools, it’s now far easier.

Diversify your marketing mix for better UA performance and more resiliency

While SANs offer performance and scale and should be a part of your UA channels, there is significantly greater growth potential in adding SDK networks to your marketing mix. On top of this, a diverse marketing mix gives your app a more resilient UA strategy that can adapt to changing policies, at both the regulatory and business levels.
Combined with easily accessible automated optimizers and comprehensive account management, app marketers can now easily integrate SDK networks into their marketing mix for a more diverse and efficient UA strategy.

Let’s get you started. Talk to a Unity account manager today.

>access_file_
819|blog.unity.com

How Fika Productions set sail with their peer-to-peer multiplayer hit Ship of Fools

When Fika Productions set out to fill the market gap for a co-op roguelite game, they had their sights set on couch co-op. And then 2020 happened. We sat down with lead gameplay programmer Daniel Carmichael and developer Yannick Vanderloo to discuss their game and explore some of the development challenges they had to solve to get Ship of Fools to market during a complicated time for the industry.

What was the inspiration behind Ship of Fools? Do you have any colleagues with a nautical background?

Daniel: Our inspiration was first and foremost about filling the market gap for a cooperative roguelite. We’re all fans of the roguelite genre, and although there are a lot of great roguelite games, we felt that none of them did the co-op part really well. Thematically, we loved the idea of a boat because if the boat sinks, everyone sinks. That’s the main core idea: work together to keep the boat afloat. No one had any nautical experience, and we’re not sea creatures or anything like that.

No octopi or salty sea dogs on staff. Got it. What does market research look like for you?

Daniel: Our market research was really just a small Reddit research activity, but we got a lot out of it. On 25–30 subreddits about couch co-ops and roguelites, we asked the question “What do you feel is necessary to have a successful co-op roguelite game?” We received a lot of suggestions, summarized them into a document, and looked for overlaps and themes. This process validated some of our ideas and also gave us some new ones.

What was your favorite moment of working on Ship of Fools?

Daniel: We had a small running gag in the office. Every time we shipped a small feature, someone would say “We have a game!” And, one day, we merged a big part of the game that was really important, and I playtested it and I’ll always remember saying to the team “We have a sellable game!” and that felt really different.
That was a very proud moment for us.

Was there a particularly challenging aspect of the game’s multiplayer development, and how did you overcome it?

Yannick: Networking in games is usually straightforward when either the host or the client takes full control. However, things get tricky when control needs to be divided, like when some elements are managed by the local player and others by the game host. Projectiles were particularly challenging in this setup. We wanted them to feel snappy when fired, and that involved numerous scenarios to consider. Moreover, when an enemy fires back and a player deflects the shot, we had to meticulously plan the interactions and ensure they felt right for all players, even in high-latency situations. There were a lot of edge cases to think about, especially how to make it fast and responsive for both players.

Daniel: Another big snag we ran into was networking. We spent a good year and a half designing the game for local co-op, not even thinking about online play. Then bam! The pandemic hit. Suddenly, our local-only game didn’t make much sense since everyone was stuck at home, not hanging out together. Originally, we were all about that face-to-face, in-the-moment vibe. That was the heart of our game. But with the pandemic, our publisher was like, “Hey, we gotta go online,” and we were like, “Alright, let’s do this.” And man, it felt like we had to rework a year’s worth of stuff, tweaking every part of the game for online play.

So, a little tip for fellow devs: always have online play in mind from the start, even if you’re not 100% on it. Designing with online in mind is generally a solid move, and it’s way easier to strip it out later than to shoehorn it in after the fact.

Tell me more about managing projectiles, and how did Netcode for GameObjects come into play here?

Yannick: Networking our game ended up being a unique twist. We don’t have a traditional Netcode for GameObjects setup.
Instead, we have objects that exist on both the client and host sides, each aware of the other’s actions and who’s in control at any moment. It’s like they’re constantly in a conversation, updating each other on what’s happening. For instance, in a scenario where a bullet is fired, if it hits the target on the host’s side, the game waits for the client to confirm the hit. The client might agree, or it might say, “Nope, I dodged that one,” or even, “I reflected the bullet!” Depending on the client’s response, the game adjusts the outcome, ensuring both sides are in sync.

This setup allows for a lot of flexibility. Players on the client side can see immediate reactions to their actions, like a bullet being deflected, making the game feel responsive. However, the final outcome might need adjustments based on the host’s input, which can override initial reactions if there’s a discrepancy. It’s a bit of a dance, with authority potentially shifting back and forth. We found the simplest solution was to let each side do its thing, then reconcile differences as they come up, based on feedback from the other side. It’s a collaborative process, ensuring both host and client contribute to the game’s flow.

Here’s a bit of a visual explanation for your readers. In the first image, we’ve got our multiplayer setup, where I’m playing as Todd, the host on the left, and my friend is Hink, the client on the right. Then, a crabster enemy pops up and launches a projectile. It’s all about coordination here: both the host and client are informed via a remote procedure call. Both players see the projectile, but whether it hits the boat or gets deflected depends on player reactions, and the host needs to wait for the client’s input to confirm the final outcome. Finally, here we see what happens when the client, playing Hink, deflects the projectile.
There’s a bit of a delay if there’s high ping, so while the host might initially see the projectile hitting the boat, it’ll correct itself once the client’s reaction is confirmed. This way, the client feels no lag – it’s as if they’re playing in real-time, and their actions are mirrored by the host to keep the game in sync. The whole idea is to make sure that when you’re in the heat of the moment, taking a shot or fending off an attack, the game responds instantly, making the multiplayer experience feel seamless.

Any other specifics you could share? Anything our readers could take away as a powerful lesson learned?

Daniel: We hit a bunch of challenges, but one biggie was all about memory management. Getting our heads around assemblies and Addressables was a steep learning curve, especially since this was the first multiplayer game for the whole team. What’s funny is our game isn’t even that asset-heavy, but the load times hit two minutes at one point, which is crazy for a smaller game. That definitely caught some heat from the players. So, yeah, we learned the hard way about keeping things streamlined, memory- and asset-wise. We should’ve nailed down the basics from the get-go.

What about Addressables? What specifically did you learn there?

Yannick: The deal with Addressables is pretty straightforward. You’ve got to organize your assets into groups that make sense to load together at the same time. This way, you’re not bogging down your game with stuff you’re not even using in a particular scene. For example, our game has different sectors, each with its own set of enemies, scenes, and scenery. Initially, we lumped everything into one massive group, which was a nightmare for loading times. To streamline things, we started grouping assets by sector.
This made a huge difference because now we can load just the enemies or just the scenery of a sector as needed, making everything way more efficient and smoother in the end.

Why did you choose Netcode for GameObjects (NGO) for networking?

Yannick: We went with NGO for networking mainly because it’s backed by Unity. This means it’s likely to evolve alongside the platform and get long-term support, which is crucial for us. Plus, NGO had all the features we needed. The key thing we wanted was a peer-to-peer connection to avoid server costs, which can be a big deal for a game whose future sales and player base are uncertain. With NGO, we felt confident we were making a safe bet for both our present needs and future development. It seemed like the smart choice to stay within the Unity ecosystem and ensure long-term support for our game.

What’s next for Ship of Fools?

So far, we’ve rolled out two big updates, packed with fresh content, and launched two DLCs, introducing new characters to mix things up. These DLCs are totally optional, giving players more choices without making them feel left out if they decide not to grab them. The cool part? Those major content updates were on the house, and from what we've seen, folks really dug them. As for what’s coming next, we’ve got plans, but we’ve got to keep a lid on them for now. However, when we’re ready to spill the beans on future updates, you’ll definitely be in the know.

Interested in multiplayer development? Explore the multiplayer section in the 2024 Unity Gaming Report to get insights from successful studios, fresh data on why more studios are developing multiplayer games, and a wealth of tips to help you and your team stay ahead of the curve.
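As a postscript for readers curious what the hit-confirmation handshake described in this interview might look like in code, here is a rough Netcode for GameObjects sketch. The class, method names, and game logic are hypothetical illustrations of the pattern, not Fika Productions' actual implementation:

```csharp
using Unity.Netcode;
using UnityEngine;

// Hypothetical sketch: the host provisionally registers a hit,
// then asks the client to confirm or overturn it.
public class DeflectableProjectile : NetworkBehaviour
{
    private bool hitPending;

    // Host-side: called when the projectile appears to hit the client's boat.
    public void OnProvisionalHit()
    {
        hitPending = true;
        QueryOutcomeClientRpc(); // ask the client what it actually saw
    }

    [ClientRpc]
    private void QueryOutcomeClientRpc()
    {
        // The client replies with its local truth (hit or deflected).
        ReportOutcomeServerRpc(CheckLocalDeflect());
    }

    [ServerRpc(RequireOwnership = false)]
    private void ReportOutcomeServerRpc(bool deflected)
    {
        hitPending = false;
        if (deflected)
        {
            // Client wins: roll back the provisional hit, reverse the projectile.
        }
        else
        {
            // Hit confirmed: apply damage for real.
        }
    }

    private bool CheckLocalDeflect() => false; // placeholder for local deflect check
}
```

The client acts on its local result immediately (so deflecting feels lag-free), while the host treats its own view as provisional until the client's report arrives, matching the reconcile-as-you-go approach described above.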

>access_file_
820|blog.unity.com

Addressing addressability: How brand marketers can adapt their mobile programmatic strategy

ATT and cookie deprecation signify more than just a technological shift - they're a game-changer for marketers who are looking to reach engaged consumers where they are spending the most time. So how can marketers adapt their mobile advertising strategy and continue to ensure they reach their consumers where they are? In short, as the mobile advertising landscape changes, so should the way advertisers run digital campaigns. Let's break down how advertisers can adapt their mobile strategy accordingly (spoiler: all signs point to in-app advertising, where more than half of users are still addressable).

Brief history of privacy changes on mobile

Traditionally, to advertise on web browsers or apps, marketers have utilized cookies and mobile ad IDs (MAIDs). That means marketers utilize unique user identifiers for tracking, retargeting, frequency capping, audience segmentation, and attribution. But following privacy changes over time, addressability has become significantly more challenging.

- September 2017: Apple released ITP 1.0 to limit the use of cookies on Safari and prevent cross-site tracking.
- May 2018: The GDPR became applicable in the EU.
- January 2020: CCPA gave California residents rights over their personal information.
- April 2021: Apple introduced App Tracking Transparency (ATT) on iOS.
- 2024: Chrome plans to disable third-party cookies for all of their users. They’ve already started with 1% of their global users, and plan to expand to 100% of users by Q3.

You know the history. Now, here are some tips for tailoring your mobile strategy to maximize your impact with both addressable users (those who have a unique identifier) and non-addressable users (those who do not).

Addressability strategy of advertisers in the industry

Here are some commonly used practices that marketers use to get ahead of their competitors before the last cookie falls.

Embrace first-party data: Without third-party cookies, first-party data has a key role to play.
Marketers invest extensive resources in building user trust, encouraging opt-ins, and building a robust data infrastructure.

Collaborate with industry players to explore alternative ID solutions: Some marketers explore alternative methods to cookies and consider testing with partners like The Trade Desk or LiveRamp, who offer other ID solutions to address these cookie-based challenges.

Optimize in-app advertising: Mobile users spend only 10% of their time on the web, so 90% of mobile time is spent in-app, particularly in social media and gaming apps. The best part: the majority of supply in apps is still addressable. Marketers are optimizing their in-app strategy with more personalized ad experiences, retargeting, audience segmentation, and strategic placements to engage users.

Getting even more granular, let’s discuss how marketers can better understand non-addressable users.

How advertisers segment non-addressable users

Non-addressable users are still very valuable - marketers are getting more innovative with how they market to them. While marketers may not be able to get specific user-level information, some contextual data is still available:

- Contextual information: The type of content users are engaging with (e.g., what they are reading, mobile games they are playing, how long they’re engaging with this content, how the app is rated, etc.)
- Contextual demographic information: General demographic information can be inferred based on the content users are engaging with
- Technical device information: Device type, model, OS, connectivity
- Geographic location: Country, city, time of day

By adapting to this constantly evolving advertising landscape, you can continue to make sure you’re in the best position for growth - just make sure you’re using the right approach.
And with users spending 5 hours a day on their mobile devices, and 90% of that time happening in-app according to data.ai, building a comprehensive mobile strategy is crucial to long-term brand success.

That’s why it’s also essential to have the right partners - and at Unity, we can help connect you with premium demand sources, offer full data transparency, and much more.

Get started with Unity’s programmatic solutions and get ahead of the game.

>access_file_