// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1695 transmissions indexed — page 40 of 85

[ 2024 ]

20 entries
784|blog.unity.com

Introducing our new e-book: Unity’s Data-Oriented Technology Stack (DOTS) for advanced developers

Unity's Data-Oriented Technology Stack (DOTS) lets you create complex games at large scale by providing a suite of performance-enhancing tools that help you get the most out of your target hardware. This 50+ page e-book, Introduction to the Data-Oriented Technology Stack for advanced Unity developers, is now available to download for free. Use it as a primer to better understand data-oriented programming and evaluate whether DOTS is the right choice for your next project. Whether you're looking to start a new DOTS-based project or implement DOTS for performance-critical parts of your MonoBehaviour-based game, this guide covers all the necessary ground in a structured and clear manner.

With Unity 6 in preview and DOTS 1.0 production-ready, this is a great time to explore the opportunities DOTS brings. The e-book, written by Brian Will, senior software engineer at Unity, joins the updated Unity Learn samples, the recent DOTS bootcamp, and the GitHub samples in the collection of resources available to developers who want to learn how to work with DOTS.

Our goal with this e-book is to help you make an informed decision about whether implementing some or all of the DOTS packages and technologies is the right decision for your existing or upcoming Unity project. Each part of the stack plays a role in enhancing a game's execution speed and efficiency. The guide aims to explain each of these parts, how they can be used together, and their common foundation, the Unity Entity Component System (ECS).

A major reason to use DOTS is to get the most performance from your target hardware, and this requires understanding multithreading and memory allocation.
Additionally, to leverage DOTS, you'll need to architect your data-oriented code and projects differently from your C#-based MonoBehaviour projects, with their higher level of abstraction. Let's take a closer look at what you'll find in the e-book.

Download Introduction to the Data-Oriented Technology Stack for advanced Unity developers.

The first section in the guide, which we've included below, presents some of the factors that can contribute to poor CPU performance in a game, like garbage collection overhead, data and code that aren't cache-friendly, suboptimal compiler-generated machine code, and more.

The next section explains how each of the DOTS packages and features facilitates writing code that avoids CPU performance pitfalls. You'll find helpful explanations for:

- C# Job System
- Burst compiler
- Collections
- Mathematics
- Entities
- Entities Graphics
- Unity Physics
- Netcode for Entities

After a rundown of each part of the stack, you'll get an introduction to the EntityComponentSystemSamples GitHub repo, which includes many samples that introduce both basic and advanced DOTS features. Some of the samples in the GitHub repo are reproduced in a new Unity Learn course on DOTS, Get acquainted with DOTS.

The other key section in the DOTS guide is the appendix. It's here that Brian Will provides detailed explanations for concepts related to Unity ECS, including memory allocation and garbage collection, memory and CPU cache, multithreaded programming, the limitations of object-oriented programming, and data-oriented programming.

If you're an experienced game developer, then you know that performance optimization on target platforms is a task that runs through the entire development cycle. Maybe your game performs nicely on a high-end PC, but what about the low-end mobile platforms you're also aiming for? Do some frames take much longer than others, creating noticeable hitches?
Are loading times annoyingly long, and does the game freeze for full seconds every time the player walks through a door? In such a scenario, not only is the current experience subpar, but you're effectively blocked from adding more features: more environment detail and scale, mechanics, characters and behaviors, physics, and platforms.

What's the culprit? In many projects it's rendering: textures are too large, meshes too complex, shaders too expensive, or there's ineffective use of batching, culling, and LOD. Another common pitfall is excessive use of complex mesh colliders, which increase the cost of the physics simulation. Or, the game simulation itself is slow: the C# code you wrote that defines what makes your game unique might be taking too many milliseconds of CPU time per frame.

So how do you write game code that is fast, or at least not slow?

In previous decades, PC game developers could often solve this problem by just waiting. From the 1970s and into the 21st century, CPU single-threaded performance generally doubled every few years (a trend tied to Moore's law), so a PC game would "magically" get faster over its life cycle. In the last two decades, however, CPU single-threaded performance gains have been relatively modest. Instead, the number of cores in the CPU has been growing, and even small handheld devices like smartphones today feature several cores. Moreover, the gap between high-end and low-end gaming devices has widened, with a large chunk of the player base using hardware that is several years old. Waiting for faster hardware no longer seems like a workable strategy.

The question to ask, then, is: "Why is my CPU code slow in the first place?" There are several common pitfalls:

Garbage collection induces noticeable overhead and pauses: This occurs because the garbage collector serves as an automatic memory manager that manages the allocation and release of memory for an application.
Not only does garbage collection incur CPU and memory overhead, it sometimes pauses all execution of your code for many milliseconds. Users might experience these pauses as small hitches or more intrusive stutters.

The compiler-generated machine code is suboptimal: Some compilers generate much less optimized code than others, with results varying across platforms.

The CPU cores are insufficiently utilized: Although today's lowest-end devices have multi-core CPUs, many games simply keep most of their logic on the main thread because writing multithreaded code is often difficult and prone to error.

The data is not cache friendly: Accessing data from cache is much faster than fetching it from main memory. Accessing system memory may require the CPU to sit and wait for hundreds of CPU cycles, so instead you want the CPU to read and write data from its cache as much as possible. The simplest way to arrange this is to read and write memory sequentially, and so the most cache-friendly way to store data is in tightly packed, contiguous arrays. Conversely, if your data is strewn non-contiguously throughout memory, accessing it will typically trigger many expensive cache misses: the CPU requests data that is not present in the cache and must instead fetch it from the slower main memory.

The code is not cache friendly: When code is executed, it must be loaded from system memory if it's not already sitting in cache. One strategy is to favor calling a function in as few places as possible to reduce how often it must be loaded from system memory.
For example, rather than call a particular function at various places strewn throughout your frame, it's better to call it in a single loop so that the code only needs to be loaded at most once per frame.

The code is excessively abstracted: Among other issues, abstraction tends to create complexity in both data and code, which exacerbates the aforementioned problems: managing allocations without garbage collection becomes harder, the compiler may not be able to optimize as effectively, safe and efficient multithreading becomes harder, and your data and code tend to become less cache-friendly. On top of all this, abstractions tend to spread performance costs around, such that the whole codebase is slower, leaving you with no clear bottlenecks to optimize.

All of the above ailments are commonly found in Unity projects. Let's look at these more specifically.

Although C# allows you to create manually allocated objects (meaning objects which are not garbage collected), the default norm in C# and most Unity projects is to use C# class instances, which are garbage collected. In practice, Unity users have long mitigated this issue with a technique called pooling (even though pooling arguably defeats the purpose of using a garbage-collected language in the first place). The main benefit of object pooling is the efficient reuse of objects from a preallocated pool, eliminating the need for frequent creation and deallocation of objects.

In the Unity Editor, C# code is normally compiled to machine code with the Mono compiler.
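The object pooling technique mentioned above can be sketched in a few lines of C#. This is a minimal illustration, not a definitive implementation; the class and field names are hypothetical:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal pooling sketch: bullets are recycled instead of created and
// destroyed per shot, so no per-shot garbage is generated.
public class BulletPool : MonoBehaviour
{
    [SerializeField] GameObject bulletPrefab;  // hypothetical prefab reference
    readonly Stack<GameObject> inactive = new Stack<GameObject>();

    public GameObject Get()
    {
        // Reuse a pooled instance if one exists; otherwise allocate once.
        var bullet = inactive.Count > 0 ? inactive.Pop() : Instantiate(bulletPrefab);
        bullet.SetActive(true);
        return bullet;
    }

    public void Release(GameObject bullet)
    {
        bullet.SetActive(false);  // hide it instead of calling Destroy()
        inactive.Push(bullet);    // ready for the next Get()
    }
}
```

Because instances are recycled rather than destroyed, the garbage collector has far less work to do; recent Unity versions also ship a built-in `UnityEngine.Pool.ObjectPool<T>` that implements the same idea.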
For standalone builds you can get better results using IL2CPP (C# Intermediate Language cross-compiled to C++), but this brings some downsides, like longer build times and making mod support more difficult.

It's common for Unity projects to run all their code on the main thread, partly because doing so is what Unity makes easy:

- The Unity event functions, such as the Update() method of MonoBehaviours, are all run on the main thread.
- Most Unity APIs can only be safely called from the main thread.

The data in a typical Unity project tends to be structured as a bunch of random objects scattered throughout memory, leading to poor cache utilization. Again, this is partly because it's what Unity makes easy:

- A GameObject and its components are all separately allocated, so they often end up in different parts of memory.

The code in a typical Unity project also tends not to be cache friendly:

- Conventional C# and Unity's APIs encourage an object-oriented style of code, which tends towards numerous small methods and complex call chains. Unlike a data-oriented approach, this is not very hardware-friendly.
- The event functions of every MonoBehaviour are invoked individually, and the calls are not necessarily grouped by MonoBehaviour type. For example, if you have 1,000 Monster MonoBehaviours, each Monster is updated separately and not necessarily along with the other Monsters.

The object-oriented style of conventional C# and many Unity APIs generally leads to abstraction-heavy solutions.
The resulting code then tends to have inefficiencies laced throughout that are hard to disentangle and isolate.

This e-book is freely available to everyone, but it is tailored to Unity developers who are experienced with MonoBehaviour-based, object-oriented game development and are new to Unity DOTS and data-oriented design.

We hope the guide will help you understand DOTS and how these features might benefit your next Unity project, as well as make it easier for you to get the full value from the samples available on our GitHub repo.
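As a taste of the kind of code the e-book works toward, here is a minimal sketch of the C# Job System and Burst in use: data lives in tightly packed NativeArrays (cache-friendly, no garbage), and the work is split across worker threads. This assumes the Jobs, Burst, and Collections packages are installed; the job and field names are illustrative:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Burst-compiled job that integrates positions in parallel.
// Iterating contiguous NativeArrays keeps the CPU cache warm.
[BurstCompile]
public struct IntegrateJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float> velocities;
    public NativeArray<float> positions;
    public float deltaTime;

    public void Execute(int i)
    {
        positions[i] += velocities[i] * deltaTime;
    }
}

// Scheduling, e.g. from a MonoBehaviour's Update():
// var job = new IntegrateJob { velocities = v, positions = p, deltaTime = dt };
// job.Schedule(p.Length, 64).Complete(); // 64 = iterations per worker batch
```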

>access_file_
789|blog.unity.com

Introducing the all-new Unity LevelPlay

Partnering with a strong mediation provider has never been more important. As the shift to bidding and mounting privacy regulations present challenges for app publishers, using a mediation platform that can boost your business across both monetization and user acquisition is critical. In 2024, LevelPlay is releasing significant product updates designed to help you drive more revenue, connect with high-quality users, and streamline the experience from game creation to growth. The combined updates will be transformational, making the mediation platform an all-new LevelPlay.

In the first phase, LevelPlay has already released support for a dedicated package in the Unity Editor Package Manager. This powers a dramatically easier integration process that is only available for Unity LevelPlay, simplifying mediation setup so that Unity developers can start monetizing faster. Developers will be able to complete their mediation integration in the same place they build their games, with the LevelPlay integration a native part of Unity developers' workflow. The package also reduces overhead for publishers by enabling them to upgrade their SDK without being required to update the whole Unity package.

But that's not all. Over the coming months we're releasing a series of additional product launches designed to maximize app growth and simplify growth management from every angle. Upcoming phases of the roll-out include:

Major upgrades to network UA tech: Powered by a new generation of machine learning tech, the Unity Ads and ironSource Ads networks now make LevelPlay publishers' UA more impactful than ever. Publishers who run automated ROAS campaigns are already seeing meaningful uplift in both scale and quality on both networks, and additional optimizations will be released on a rolling basis through the rest of the year.

Multiple ad units:
Publishers will be able to load multiple ad units simultaneously, which means they can create a dynamic waterfall setup and customize the waterfall by in-app placement. This gives publishers more control over their ad strategy, with additional ways to optimize for key metrics like latency and ARPDAU.

One home for Unity growth data: We're centralizing data from Unity's leading growth solutions into a new platform homepage, giving publishers an instant snapshot of the health of their app portfolio in one clear view. This new page will allow publishers to view UA and revenue data side by side, compare performance over set time periods, monitor their brand safety, and see combined network performance for Unity Ads and ironSource Ads. In addition to the two networks, the homepage will include LevelPlay and Ad Quality data, with Aura and Tapjoy Offerwall data coming next.

Platform experience revamp: We're also updating our platform UX to make it simpler than ever to grow your game. It'll take fewer steps to manage and optimize your ad strategy, with smoother functionality and greater ease of use. The platform revamp will be wrapped up in a new UI reflecting Unity's look and feel, for an even more seamless flow from game creation to growth.

Stay tuned on the Unity Grow LinkedIn page and your email inbox for ongoing announcements.

>access_file_
791|blog.unity.com

Using rich LLM integrations to power relevance and reliability with Muse Chat

Unity Muse helps you explore, ideate, and iterate on real-time 3D experiences by empowering you with AI capabilities. Muse Chat is one of several tools that you can use to accelerate creation. Bringing Unity knowledge and Editor awareness to your fingertips, Muse Chat can be your assistant, providing helpful information including debugging advice, code generation for a first draft, and more, all within the context of the Unity Editor and your project.

To show you exactly how Muse Chat is designed to provide helpful solutions, we're going to give you a peek under the hood at how we structure the plan to generate a response. We'll also give you a preview of our current explorations and upcoming developments of the LLM pipeline.

Muse Chat is built as a pipeline consisting of several different systems and large language model (LLM) integrations for query planning and arbitration of different pieces of information. For each incoming request, Chat derives a plan of action to outline the format of the upcoming response, based on the Editor selection or information you provided and the problem you are trying to solve.

“I built and coded everything myself using Muse as my personal assistant. Of course, I had the support of my colleagues, but I don’t think I could have achieved this result in such a short time if I didn’t have Muse by my side.” – Jéssica Souza, co-creator of Space Purr-suit

When assembling a reliable response, there are two challenges. One is retrieving relevant information to build the response, and the other is making sure that the information is usefully embedded in the response, based on the conversation's context and history. Muse Chat's knowledge base is assembled to address both of these challenges, with more than 800,000 chunks of information such as sections of documentation or code snippets. The chunks are processed and enriched with references to surrounding information, so that each one provides a useful and self-standing unit of information.
They are cataloged both by their content and their unique context, as traced through the documentation. This provides transparency and interpretability of the system, and it enables effective retrieval of compatible information. See the diagram and description below to learn how the rest of our current pipeline is structured.

REQUEST: Your request comes in.

EDITOR CONTEXT: If you are in the Editor, the relevant context is dynamically extracted from the Editor along with the request, to give Muse the proper information.

QUERY EXPANSION: The initial planning system performs query expansion, which is intended to derive precise plans. We instruct an LLM to give its best attempt at replicating the knowledge catalog format and recreate the ideal structure of a chunk for each step. This approach allows the system to compute an embedding that captures the desired context, contents, and use case of the chunk we're looking for. Each of these plan steps is used for fine-grained semantic retrieval.

KNOWLEDGE RETRIEVAL: To find the relevant information, we use symmetrical semantic retrieval and metadata filtering to retrieve the chunks in our knowledge catalog that most resemble the ideal estimated chunk identified in the query expansion stage.

FORMULATION: To generate the final response, we use another LLM to compose a response based on the detailed outline containing both the filtered original plan steps and the sources needed to convey the relevant underlying information.

RESPONSE: Muse Chat responds with an answer.

To support making Muse Chat available in the Editor, we introduced the second step of the pipeline, Editor context extraction. With this step near the very beginning of the pipeline, we analyze the query to identify what to extract from the Editor, and parse this to inform Muse on next steps.
Based on your feedback, we began with project setup, project settings, GameObjects/Prefabs, and console access. Now, if you experience a console error with warnings or messages, simply click the relevant row(s) in the console to add the error as part of your selection. In the example below, we triggered an error for a missing curly bracket in a script.

Consider a simple example of answering "How can I create a scriptable feature and add it to the Universal Renderer?" in a new conversation in the Editor. This will be converted into plan steps:

REQUEST: "How can I create a scriptable feature appropriate for my render pipeline?"

EDITOR CONTEXT: Muse identifies which render pipeline is used, the version of Unity that's running, and which project settings are relevant to the question. It then extracts dynamic context, along with any Editor selection you might have.

QUERY EXPANSION: The LLM generates a plan with the following plan steps: Introduce the concept and purpose of scriptable features for URP. Explain the steps to create a scriptable feature in URP.
Provide an example showing how to add the scriptable feature to the Universal Renderer.

KNOWLEDGE RETRIEVAL: For this example, the request is fulfilled by following the plan steps to retrieve information from the embedding.

FORMULATION: The LLM mediates the final response.

RESPONSE: You get an answer, as seen below, along with a code snippet.

In the above example involving URP, the final response plan is composed of an introduction built on top of the "What is a Scriptable Renderer Feature" section in the URP documentation, the step-by-step directions in "Create a scriptable Renderer Feature and add it to the Universal Renderer," and the directions in the subsection on finally adding the custom Renderer Feature to a Universal Renderer asset. This way, we are able to efficiently swap generic information coming from the LLM's base knowledge with specific Unity knowledge from first-party sources related to recommended approaches or implementation details. While occasional inaccurate information is somewhat inevitable when using LLMs, our system is built to minimize its frequency by relying on trusted Unity knowledge.

We are working on developing a wide ecosystem consisting of task-specific models. As we expand our interoperability with the Editor, we want to enable an accelerated workflow to better serve your needs. We believe that the key to doing so is embracing and fostering a culture where we can quickly adapt to research and industry developments for rapid experimentation.

Muse Chat serves as a companion for AI-assisted creation, right in the Editor. We are currently working to extend what you can select as part of your context in the Editor, including the full hierarchy and project window, as well as the associated code for a GameObject.
Furthermore, we're investing in widespread system improvements, improving on our performance benchmarks for Unity knowledge and code generation, and preparing for a future with agent behavior enabled, so that Muse can perform actions on your behalf in the Editor.

At GDC, we showcased how you could use all five Muse capabilities together to customize a game loop in the garden scene of our URP sample project. Check out our session "Unity Muse: Accelerating prototyping in the Unity Editor with AI" to learn how you can use all of Muse's abilities to quickly customize a project scene and gameplay. This interoperability between Muse features is only going to increase as we roll out new improvements to Muse Chat.

We've updated the Muse onboarding experience to make it easier to start a free trial of Muse and add the Muse packages to your projects. Visit the new Muse Explore page to get started, and let us know what you think of the newest capabilities and improvements in Discussions.
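As a rough illustration of the semantic-retrieval step described above (the actual Muse pipeline is not public, and every name here is hypothetical), retrieving chunks "that most resemble the ideal estimated chunk" amounts to a nearest-neighbor search over embeddings, for example by cosine similarity:

```csharp
using System;
using System.Linq;

// Hypothetical sketch: rank knowledge chunks by cosine similarity to the
// embedding of an expanded plan step. All names are illustrative.
public static class Retrieval
{
    public static float Cosine(float[] a, float[] b)
    {
        float dot = 0f, na = 0f, nb = 0f;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (MathF.Sqrt(na) * MathF.Sqrt(nb));
    }

    // Return the indices of the top-k catalog chunks most similar to the query.
    public static int[] TopK(float[] query, float[][] catalog, int k) =>
        Enumerable.Range(0, catalog.Length)
                  .OrderByDescending(i => Cosine(query, catalog[i]))
                  .Take(k)
                  .ToArray();
}
```

In a production system the catalog would also be filtered by metadata (for example, by Unity version or render pipeline) before ranking, as the post describes.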

>access_file_