// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1695 transmissions indexed — page 48 of 85

[ 2023 ]

20 entries
941|blog.unity.com

How we maintain quality tech at scale: an inside look into how we build our SDK

At Unity, we continually work to ensure our ironSource SDK is best-in-class. But with a huge amount of scale - more than 1 billion devices - ensuring a top SDK takes a village.

Quality is a two-part process - we need to (1) build a strong product for our developers, and that product needs to (2) ensure a good experience for users of our developers’ apps. And as we work to keep our developers and their users happy, the market continually pushes toward new and updated features. In this industry, there’s always a delicate balance between progression (e.g., testing for new software) and regression (e.g., confirming our software is still working well). Essentially, we can’t keep manually developing new features without supporting the features - and clients - we already have.

Instead of needing to increase our resources, we can balance progression and regression with automation. Normally, you would have to manage a tradeoff between quality and quantity. But with automation, we don’t have to sacrifice one for the other - we can optimize toward progression and regression at the same time.

So, how do we manage to scale while meeting and exceeding market demand and carrying the weight of hundreds of millions of daily active users? By ensuring every layer of our SDK delivery process is automated.

Here’s how we use automation to ensure every one of our SDK integrations runs smoothly - from our developers and quality engineers, to our integration team.

Layer 1: Developers

Let’s start with developers because they’re the ones writing the code. From the moment our Unity LevelPlay developers start coding, we always have quality in mind - every part of our SDK is backed up with unit tests. Unit tests are basic, automated tests that ensure the SDK components are working well. Unit tests are constantly running, automatically checking that when the API (application programming interface) is triggered with input, the output is as expected. Essentially, the tests confirm that the API is communicating properly. For example, if we want to integrate a new ad format into our new SDK version, the unit test would confirm that each ad format is presenting and working well.

In fact, we’re not just ensuring that the API information is being communicated - we’re confirming that the information is consistently accurate. For example, within Unity LevelPlay mediation, there are different ad network requests popping up at all times. As the bidding system triggers an auction for the top ad network, it’s critical that every piece of data in this moving puzzle is up-to-date and accurate. If one piece is not accurate, it can affect the entire funnel, so our automated tests ensure we can keep a close eye on all of the moving pieces.

This can be a very tedious task - highlighting why automation is so important. One of the biggest roles of automation infrastructure, particularly unit tests, is testing data all the time. Unit tests certainly cover regression, but not necessarily progression - so that’s where the next level of testing, integration testing, comes in.
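As a concrete illustration of this kind of API-level unit test, here is a minimal sketch in C# with NUnit. Every type and name below (AdSdk, FakeAdNetwork, AdInfo) is invented for the example and is not part of the actual ironSource SDK:

```csharp
using System;
using NUnit.Framework;

// All types here are invented for illustration; they are not the real SDK.
public record AdInfo(string Placement, double Revenue);

public class FakeAdNetwork
{
    public double BidPrice { get; } = 1.25; // canned auction result
    public AdInfo Fill(string placement) => new AdInfo(placement, BidPrice);
}

public class AdSdk
{
    private readonly FakeAdNetwork network;
    public event Action<AdInfo> OnAdReady;
    public AdSdk(FakeAdNetwork network) => this.network = network;
    public void LoadInterstitial(string placement) =>
        OnAdReady?.Invoke(network.Fill(placement));
}

public class InterstitialAdTests
{
    [Test]
    public void LoadInterstitial_TriggersOnAdReady_WithAccurateData()
    {
        var network = new FakeAdNetwork();
        var sdk = new AdSdk(network);
        AdInfo received = null;
        sdk.OnAdReady += info => received = info;

        sdk.LoadInterstitial("main_menu");

        // The API was triggered with input; the output must be as expected,
        // and every field must be accurate, not merely present.
        Assert.IsNotNull(received);
        Assert.AreEqual("main_menu", received.Placement);
        Assert.AreEqual(network.BidPrice, received.Revenue);
    }
}
```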
Layer 2: Quality engineers

The next layer of progression is covered by the quality engineers, who specialize in a variety of testing - particularly integration tests. Integration tests automatically run on a daily basis, but instead of checking if our SDK component tests are working, integration tests check how these components work together.

Continuing with our previous example, if we want to add a new ad format to our newest SDK version, quality engineers would set up an integration test, systematically checking how this ad format might interact with other ad formats. Even for automated tests, this can be a tedious process - with so many SDK components, there are an endless number of ways they could combine and interact. That’s why our SDK quality engineers tend to use the 80/20 rule, or testing the top 20% most common interactions to account for 80% of the combination scenarios. The larger the ground to cover, the higher the possibility of technical issues, so our quality engineers are encouraged to be hyper-skeptical - and to assume there’s a technical issue, even if there isn’t.

Layer 3: Integration team

Let’s say the developers and quality engineers have already given this new SDK version the green light, including the new ad format. Even though this new version is ready for launch, it might not be able to thrive in every scenario - for example, it might not work well in a few countries that only have 3G.

Before we release this new SDK update to developers, our integration team uses alpha apps, internal production apps made with in-house tech, to measure how the update will perform in a real scenario - both from the developer’s and user’s perspective. In our example, the integration team would test the SDK on real traffic in these countries, using the same tools that developers use, just in a closed environment. Both integrating the SDK into an alpha app and uploading the app into the store are fully automated processes. In many ways, the integration team works like a production line, with many automated steps along the way.

Once the new SDK version is live, we can continually make adjustments to ensure the best quality user experience. As we develop many of our new SDK features, we include a toggle option - so if one feature ever becomes faulty for a certain audience, we can turn it off if needed.

Combined, these three layers of testing ensure that we have a fortified automation infrastructure - ensuring that our products are high quality for every one of our many users. The automation process grants us the biggest gift possible - time - which we can use to focus on progression, and developing new and innovative features to keep our SDK best-in-class.
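To illustrate the toggle option mentioned above, here is a minimal sketch of the kill-switch pattern. The names and flag source are hypothetical; a real implementation would pull the flags from a remote config service rather than a local dictionary:

```csharp
using System.Collections.Generic;

// Sketch only: each feature checks a remotely delivered flag before running,
// so a faulty feature can be disabled for a given audience without shipping
// a new SDK version. All names are illustrative.
public class FeatureToggles
{
    private readonly Dictionary<string, bool> flags;

    // In production the flags would come from a remote config service;
    // here they are passed in directly to keep the sketch self-contained.
    public FeatureToggles(Dictionary<string, bool> remoteFlags) => flags = remoteFlags;

    // Default to "off" for unknown flags so a missing entry fails safe.
    public bool IsEnabled(string feature) =>
        flags.TryGetValue(feature, out var enabled) && enabled;
}

public static class Demo
{
    public static void Main()
    {
        var toggles = new FeatureToggles(new Dictionary<string, bool>
        {
            ["new_ad_format"] = false, // turned off after a fault was detected
        });

        if (toggles.IsEnabled("new_ad_format"))
        {
            // Show the new ad format.
        }
        // Otherwise fall back to the existing, known-good behavior.
    }
}
```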

>access_file_
942|blog.unity.com

Crafting the stunning scenarios in Warhammer 40,000: Warpforge

In this guest post, Everguild’s team discusses the studio’s upcoming collectible card game for Steam, iOS, and Android – Warhammer 40,000: Warpforge – and how they leveraged the best elements from past successes, elevating them with the latest development tools.

We are Isabel Tallos (co-founder and art director) and Cesar Rios (game director), leading the development of Warhammer 40,000: Warpforge at Everguild. For this ambitious project, the studio’s third and largest to date, we’ve leveraged the best elements from Warhammer The Horus Heresy: Legions, and elevated our work using the latest development tools and a lot of craftsmanship.

Warhammer 40,000: Warpforge aims to become a reference for quality, depth, and innovation among digital collectible card games. One of its most striking features is the approach our team has taken to depict the scenarios where battles take place.

Most card games use a top-down perspective, but in Warpforge all units become “physical” tokens in battle. They fight in astonishing scenarios that provide an immersive experience of leading a player’s army in battles across the Warhammer 40,000 universe.

In this deep dive, we’ll examine the design and technical challenges that led to these battle backgrounds, and the tools and processes we used to create them.

Building on a successful foundation

The approach we take in Warpforge draws from the experience of Horus Heresy: Legions, our previous card game in the Warhammer 40,000 universe. Originally a mobile game, the scenarios and game logic were done in 2D, faking the perspective to create the illusion of being 3D. This helped keep the game lightweight and able to run on low-spec devices.

However, building the scenario and logic in 2D caused important limitations when trying to create VFX for the different cards and abilities, and even more so when trying to animate or add VFX to the background itself.

Benefits and challenges of 3D scenarios

To make the scenarios for Warpforge as stunning and immersive as the team wanted, and to elevate the cards’ VFX to the next level, it was essential to create them in 3D. Moving to 3D enabled us to stop faking positions, scales, and rotations and just do things in a more natural way. This led to better quality VFX and faster production times.

From the start, we wanted our 3D scenarios to maintain the exact same visual aspect as our amazing 2D concept art. When using the usual 3D asset production workflow (concept, modeling, unwrap, and texturing), each step introduces deviations, making the final result different from the original concept art.

Since the game will be released on mobile, high-fidelity 3D assets risk introducing performance problems on less powerful devices, like frame rate or memory issues, and increased loading times. Add to this the increased production time of executing this workflow properly for each of the scenarios, and it was clear that we needed a different approach.

Camera projection mapping

After numerous attempts with different approaches, the main breakthrough was the idea of using camera projection mapping. This technique, also known as spatial augmented reality, consists of “projecting” 2D textures over 3D surfaces or objects, creating a “projected texture.” It makes flat surfaces appear to have depth, creating the illusion of very detailed 3D objects while using extremely simple geometric shapes. It also allows for some limited camera movement to further convey the impression of being immersed in a fully 3D space.
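To make the technique concrete, here is a minimal Unity C# sketch of the core idea: baking UVs for a simple proxy mesh by projecting each vertex through a fixed camera, so a 2D painting maps onto the 3D geometry. This is an illustration only, not Everguild’s pipeline, which performs the projection in Blender:

```csharp
using UnityEngine;

// Sketch of camera projection mapping: each vertex of a simple proxy mesh is
// projected through a fixed camera, and the resulting viewport coordinates
// become the UVs into a texture painted from that camera's point of view.
[RequireComponent(typeof(MeshFilter))]
public class CameraProjectionMapper : MonoBehaviour
{
    [SerializeField] private Camera projectionCamera;

    [ContextMenu("Bake Projected UVs")]
    private void BakeProjectedUVs()
    {
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
        Vector3[] vertices = mesh.vertices;
        Vector2[] uvs = new Vector2[vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            // World position of the vertex, then into the camera's normalized
            // viewport space: (0,0) bottom-left, (1,1) top-right.
            Vector3 world = transform.TransformPoint(vertices[i]);
            Vector3 viewport = projectionCamera.WorldToViewportPoint(world);

            // The viewport XY is exactly the UV into the painted texture,
            // as long as the texture was painted from this camera's view.
            uvs[i] = new Vector2(viewport.x, viewport.y);
        }

        mesh.uv = uvs;
    }
}
```

Note that projecting per vertex is only an approximation; dedicated tools project per pixel (or densely subdivide the proxy geometry) to avoid distortion across large triangles, which is one reason this step is handled in Blender in production.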
With this approach, we managed to get all the benefits of a 3D scenario without most of the drawbacks. It allowed us to faithfully keep the art style and level of detail of the concept art, without having to recreate it through 3D assets. It also requires very little processing power at runtime, allowing the game to run perfectly even on low-spec devices. Plus, the end result requires much less work and cost than creating full 3D environments.

Compared to a full 3D environment, a scenario built with camera projection mapping doesn’t allow for a lot of camera movement, so it’s clearly not a valid solution for many types of games. For a card game like Warpforge in which the camera is mostly static, though, it is a fantastic solution.

Creation process

The process of generating a scenario has several steps. For Everguild, it starts with the creation of a placeholder scenario directly in Unity using simple primitives. It consists only of a floor plane and some vertical cubes to help find the camera perspective that best matches the desired gameplay.

Once everything is set up correctly, we capture an image from the camera perspective to send to the painting software. Using the Unity FBX Exporter, we export the 3D placeholder scenario, including the camera position and lens parameters, so we can import it into Blender.

Taking the exported image as a perspective reference, the concept artists draw the scenario. They have total creative control without any type of restriction, because whatever they do will be translated 1:1 to the game. Concept artists not only draw the scenario itself, but also the visual effects (VFX). Those will be used later as reference, even as textures, for the VFX artists. In this step, it is essential to properly organize the file in layers, always drawing whatever is behind the objects, so that we can export the different layers separately later.

Once the concept art is ready, it is brought into Blender, where each element is projected onto a simple 3D object. This projection technique eliminates the need for the 3D artist to laboriously create custom UVs for every object, since the camera projection mapping automatically calculates them. The 3D models are then exported back to Unity. Here, using custom Editor tools, a final pass takes place where the various texture layers are combined into a cohesive atlas texture.

This process not only optimizes memory usage and the game’s size, but also minimizes the batch count. Now, the scenario looks exactly like the concept, but we can take it a bit further.

Bringing scenarios to life

After creating the 3D scenario concepts, it’s time to bring them to life with a range of different tools. For example, a camera flyover at the start of the match helps provide a sense of depth and immersion, though the path needs to be carefully chosen to work around the limitations of the projection mapping technique.

We use a combination of Shader Graph and Render features to craft an array of captivating effects, including dynamic water, blurred planar reflections, vortex portals, and more. These effects are seamlessly integrated with particle effects using both Shuriken and VFX Graph. VFX Graph is used for implementing more complex effects.
However, since it relies on compute shaders, we always ensure the availability of a fallback version for devices that don’t support it. Each effect is meticulously designed to align with the original vision from the concept artist, ensuring a cohesive and immersive experience.

Bringing processes to life

After a painstaking research process involving the design, art, and technical teams, we’ve developed an approach to creating 3D scenarios which meets all of our core criteria: stunning aesthetics, a sense of immersion, an efficient development workflow, and strong performance on all devices.

We believe there are many games which could benefit from these processes, particularly those with limited camera movement. We hope this post will prove useful to some developers. Most importantly, we hope players will enjoy diving into the grim, dark universe of Warhammer 40,000: Warpforge when it’s released.

The game is currently in its closed alpha phase and due for release on Steam, iOS, and Android before the end of the year. To learn more about the multiplatform release, check out Everguild’s recent case study. Read more Made with Unity stories straight from the developers here.

>access_file_
943|blog.unity.com

How to make nature shaders with Shader Graph in 2022 LTS

In this blog, we explore how to create two distinct nature shaders using the Universal Render Pipeline (URP) in 2022 LTS. We also take a closer look at a stylized water shader and a semi-realistic sand shader. The assets are released with the new URP 3D sample scenes.

The visual effects appear complex at first glance, but we’ll dive into the design-based thinking process behind these features and cover step-by-step breakdowns of the techniques that bring them to life. Let’s explore the intricacies of shader development and create stunning nature shaders.

The goal is to create a stylized stream that runs in the middle of a Japanese-style garden scene. Based on the rest of the environment, the atmosphere is quiet and zen-like, and the art style is more animated than photorealistic.

Within the terrain embedded in the scene, the water is a combination of two parts: a waterfall, and a stream running under the bridge. The water scenes from Studio Ghibli are very inspiring, and frequently have three distinct elements:

1. Flow lines, which help establish the flow of the water
2. Edge highlights, to show the water interacting with the surrounding terrain
3. Foam effects, to help sell cascades or waterfalls

The final water shader will use all of these elements. Now, let’s explore the details of how to achieve the look.

The waterfall has two distinct meshes – the primary waterfall mesh and a disc-shaped plane that generates ripples. Using a separate unlit shader for the ripples, you can tile a noise node and use it as the alpha value of the shader. This masks out the remaining areas, ensuring that the ripples appear where intended.

The most important part is to ensure that each area of the mesh can perform different behaviors. You can achieve this by pre-painting the meshes with vertex colors in the red (R), green (G), and blue (B) channels. The vertex colors are then used as masks to separate operations in certain areas.

In Shader Graph, use the Vertex Color Node to access the pre-painted vertex color data. You can use the red channel in vertex color as your T value to interpolate (lerp) between the vertical and the horizontal part of the waterfall, achieving a smooth transition. To create a water cascading effect, combine two Voronoi Nodes, each with different tiling and offsets. This results in a dynamic visual of water falling.

In real-life waterfalls, the areas where water cascades from the upper horizontal level and where it strikes the lower horizontal level often have thicker layers of foam and splashes. In the scene, you can pre-paint the vertex color in the blue channel to make sure the effects only show up in specific areas. Using vertex color masks enables you to combine up to four different effects into a single data piece. This approach is more efficient compared to creating separate grayscale mask textures for each effect.

To create the illusion of multiple layers of water falling from the edge, utilize noise nodes with different scales and speeds. By scrolling a noise node with a larger scale at a slower speed, alongside a smaller scale noise node at a faster pace, you achieve the desired effect. To maintain consistency with the horizontal part of the waterfall, reuse the data from the stream water. We will delve into further detail about the stream water shortly.
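As a sketch of the pre-painting step, the editor utility below fills the red and blue vertex color channels procedurally. The thresholds are invented for illustration; in practice, artists typically paint these masks by hand:

```csharp
using UnityEngine;

// Illustrative sketch of pre-painting vertex colors as shader masks.
// R marks the vertical drop of the waterfall, B marks splash zones near the
// base; the shader reads these via the Vertex Color Node and uses them to
// separate behaviors per area.
[RequireComponent(typeof(MeshFilter))]
public class WaterfallVertexMaskPainter : MonoBehaviour
{
    [ContextMenu("Paint Vertex Masks")]
    private void PaintVertexMasks()
    {
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
        Vector3[] vertices = mesh.vertices;
        Vector3[] normals = mesh.normals;
        Color[] colors = new Color[vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            // R: 1 where the surface faces sideways (the vertical drop),
            //    0 where it faces up (the horizontal stream). The shader
            //    lerps between the two behaviors using this value as T.
            float r = 1f - Mathf.Clamp01(Vector3.Dot(normals[i], Vector3.up));

            // B: 1 near the base of the mesh (in local space), where foam
            //    and splashes should accumulate. Threshold is made up here.
            float b = Mathf.Clamp01(1f - vertices[i].y);

            colors[i] = new Color(r, 0f, b, 1f);
        }

        mesh.colors = colors; // read in Shader Graph via the Vertex Color Node
    }
}
```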
Now that you’ve created a calming waterfall, let’s move on to the stream. There are several key features that animation-style water usually has. Unlike the turbulent shoreline foam found in the real world, a Ghibli-style stream typically has very thin foam near the shore. Additionally, the presence of water tails can bring a dynamic and lively impression to the water. Since the scene is set at night, a convincing reflection effect is also necessary. Let’s take a closer look at how you can achieve these effects in Shader Graph.

There are multiple ways to capture a reflection on the water’s surface. The most efficient option is a Custom Function Node that calls the GlossyEnvironmentReflection function built into URP. This function returns the reflection color, which is sampled from a box projection reflection probe in the scene. You just need to pass in the world space position, view direction, normal, and screen position required by the function.

If you need higher visual quality and more grounded reflections, URP’s planar reflections can be an excellent option. Planar reflection creates a mirror-like reflection of a flat surface, which is ideal for the water mesh, given its flat plane structure.

To achieve planar reflection, you need to set up a separate camera and render texture to store the reflection data. The basic concept is to place a reflection camera below the reflection plane (in this case, the stream in the scene) and update it based on the player camera’s position and orientation. The render texture will also be updated in real-time.

One advantage is that you can set the reflection camera’s near plane as the reflection plane itself. This eliminates the need to clip out objects that are located below the reflection plane, which simplifies the implementation process. In Shader Graph, you create a texture property, and in the script, you assign the render texture you previously created to it.

In order to link the render texture successfully, make sure that when setting the shader property, the property set matches the reference ID of the texture property you created in the Shader Graph. Make sure that the script calls the exact property ID when updating the render texture. Then, use the screen position as the UV to sample the texture. Now you have successfully achieved planar reflection in your shader.

Implementing planar reflection involves several technical considerations and details. For a more in-depth understanding and an example of its implementation, feel free to take a look at this URP sample. One thing worth noting is that planar reflection is more computationally expensive compared to using reflection probes because it renders the objects twice.
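Here is a simplified C# sketch of that reflection camera setup. The property name _PlanarReflectionTex is an example and must match the reference ID of the texture property declared in your Shader Graph; a production version would also set an oblique near plane at the water surface, as described above:

```csharp
using UnityEngine;

// Simplified sketch of a planar reflection setup: a second, always-enabled
// camera is mirrored below the water plane each frame and renders into a
// RenderTexture that the water shader samples using the screen position as UV.
public class PlanarReflectionCamera : MonoBehaviour
{
    [SerializeField] private Camera playerCamera;
    [SerializeField] private Camera reflectionCamera; // renders every frame
    [SerializeField] private RenderTexture reflectionTexture;
    [SerializeField] private float waterHeight;

    // The script must call the exact property ID the shader declares;
    // "_PlanarReflectionTex" is an example name for this sketch.
    private static readonly int ReflectionTexId =
        Shader.PropertyToID("_PlanarReflectionTex");

    private void Start()
    {
        reflectionCamera.targetTexture = reflectionTexture;
        Shader.SetGlobalTexture(ReflectionTexId, reflectionTexture);
    }

    private void LateUpdate()
    {
        // Mirror the player camera's position across the water plane...
        Vector3 position = playerCamera.transform.position;
        position.y = 2f * waterHeight - position.y;
        reflectionCamera.transform.position = position;

        // ...and mirror its orientation by flipping the vertical components.
        Vector3 forward = playerCamera.transform.forward;
        Vector3 up = playerCamera.transform.up;
        forward.y = -forward.y;
        up.y = -up.y;
        reflectionCamera.transform.rotation = Quaternion.LookRotation(forward, up);
    }
}
```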
To achieve the edge foam effect, you need to calculate the depth differences. The Linear01 option in the Scene Depth Node returns a linear depth value scaled from 0 to 1 for opaque objects. Multiplying this value with the Camera Far Plane Distance enables you to determine the distance between the camera and the opaque object – the rock, in this case. The z component of the Raw option in the Screen Position Node provides the eye-space depth. You can then easily calculate the depth difference between the transparent water surface and the opaque rock, and pass the depth value into the Emission output to create a foam-like effect.

In order to retrieve depth values from the scene, make sure to enable Depth Texture in the project settings. You can find the Depth Texture option in the General section of the render pipeline asset. The current render pipeline asset is accessible via Edit > Project Settings > Graphics > Render Pipeline Asset.

Creating the trails, which show the movement of the water along the stream, is straightforward. By tiling and offsetting two Voronoi Nodes and masking out the desired areas using vertex colors, you can create stylized water trails flowing along the water surface. Then, adjust the speed of the noise node to match the previously created falling water. This is very similar to the technique used to create the waterfall’s stream lines.

Now, let’s change gears and look at a less stylized, more realistic shader. The sand shader is in a realistic desert scene, which necessitates the terrain to closely resemble real-world sand visually.

Sand rendering is an interesting challenge. A plain PBR shader does not capture the look of desert sand under a strong sun. There are two main features to cover in the sand shader: the sand glitter, which is a subtle but noteworthy aspect, and the blowing dust, a dynamic feature that adds liveliness.

Since sand is composed of countless tiny grains, it sparkles when exposed to sunlight. How do you achieve that glitter effect? Similar to specular reflection, first calculate the reflection vector using the surface normal. Taking inspiration from Journey, instead of calculating the dot product between the normal vector and the halfway vector, you calculate the dot product between the normal and the view direction. This adjustment ensures that the glitter pattern varies based on the viewing angle, enhancing the visual appeal of the shader.

You have two noise maps in the shader. One is used for sampling the sparkles, and the other is used to mask out the main noise texture for a more dynamic and captivating result. Use the previously calculated reflection vector to distort the UV used for sampling the noise mask.

The blowing dust sand effect is a combination of two elements: moving sand trails and sand waves. The concept is fairly straightforward. You are tiling different normal maps to achieve the desired outcome. One key point for the sand trails is that you need a mask to mask out the normal map and make the effect look more dynamic. Instead of using the default UV, sample two noise maps with the absolute world position.

One thing worth noting is that the World option in the Position Node changes based on the setting of different render pipelines, so select the Absolute World option to avoid any behavior changes when switching pipelines. Next, tile the two maps in a diagonal direction, which will create a ripple-like effect. Then scroll another noise map along the sand wave direction to add to the feeling of sand shifting away.

An important aspect is how to achieve normal blending in the shader. In a sand terrain, where the albedo map may not be as complex compared to other terrains, the normals play a significant role in the visual appearance. Blend multiple normal maps in the shader. Different from blending albedo maps, the normal maps store directions, and different blending methods can yield very different results.

Let’s take the Normal Blend Node in Shader Graph as an example. When blending normal maps A and B, the default option in the Normal Blend node adds the x and y channels of the two maps together, while multiplying the z channels to obtain the third element of the blended normal. The reoriented option, as the name suggests, involves a more intricate process.
It rotates the normals in map B to align with the direction of map A. This approach retains the most data from both maps, but it is also the most computationally expensive option.

In our shader, we have opted for a simple blending method for the normals. The main reason for blending the normals is to create a vivid sensation of sand blowing or moving across the desert surface. Accuracy is not the top priority. Additionally, the shader is applied to a relatively large terrain mesh, so minimizing computational costs is important.

Considering these factors, here is a straightforward approach: Add the red and green channels of the normal maps together, and, for the blue channel, pass in a value of 1. Then, scale up the normal strength a little bit, and the result looks great. (A small code sketch of this blend appears at the end of this entry.)

In addition to the discussed features, there are other controls implemented in the shader. One of these is a general fading control, which determines where the effects appear on the terrain based on the camera distance. This allows for a gradual transition and fading of the effects as the camera moves further away.

You can also adjust the smoothness value in the distance, enabling better blending of the sand dunes and the background terrain. As the viewer zooms in, a detailed graininess normal map replaces the terrain normals. This substitution provides a more realistic desert experience, adding finer details and textures to the sand surface.

Now that you’ve gone through all the features in each of these two shaders, don’t hesitate to create your own versions. If you are passionate about creating shaders with Shader Graph, join our forum or find us on Discord. Be sure to watch for future technical breakdowns from Unity developers as part of the ongoing Tech from the Trenches series.
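For reference, the two blends discussed above can be written as plain vector math. This sketch is only illustrative; in the actual shader, the equivalent math is built from Shader Graph nodes:

```csharp
using UnityEngine;

// Tangent-space normal blending, written out as plain vector math.
public static class NormalBlending
{
    // The cheap blend described in the post: add the X (red) and Y (green)
    // components, force Z (blue) to 1, and renormalize.
    public static Vector3 BlendCheap(Vector3 a, Vector3 b)
    {
        return new Vector3(a.x + b.x, a.y + b.y, 1f).normalized;
    }

    // Shader Graph's default Normal Blend behavior, for comparison:
    // add X and Y, multiply the Z channels.
    public static Vector3 BlendDefault(Vector3 a, Vector3 b)
    {
        return new Vector3(a.x + b.x, a.y + b.y, a.z * b.z).normalized;
    }
}
```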

>access_file_
944|blog.unity.com

Welcome to Megacity: Our competitive-action multiplayer sample

Creating multiplayer games is complex. As the 3D market accelerates its evolution into live games, it is becoming more common for creators to feel overwhelmed by the technology and steps required to build those games. At Unity, we strive to push the boundaries of game development and equip developers with the best tools they need for building great gaming experiences.

To that end, we are thrilled to announce the release of Megacity Multiplayer, our new competitive-action multiplayer sample. This release marks a significant milestone for our team, as it showcases how creators can build ambitious multiplayer titles that support more than 64 concurrent players while leveraging Unity Gaming Services (UGS) solutions like Game Server Hosting, Authentication, Voice Chat, and Matchmaker.

Players will begin Megacity Multiplayer by hosting or joining a game server, which can either be set up locally on a player’s device or via Game Server Hosting. Once connected, players will join a lobby and be transported into the massive game world of Megacity Multiplayer, which features an immersive environment to play and chat with others. In Megacity Multiplayer, players earn points by shooting other cars while competing for the top spot displayed on the leaderboard.

Our Data-Oriented Technology Stack (DOTS) adds much power to the Unity engine so that creators can build more ambitious games. A key feature of DOTS, the Entity Component System (ECS), shipped with Unity 2022 LTS. With this release, the Megacity Multiplayer sample demonstrates built-in features for supporting more than 64 concurrent players.

Megacity Multiplayer illustrates how Unity and the Netcode for Entities package can work together to power large-scale multiplayer games without compromising performance. Seasoned creators are able to achieve an unprecedented and extensible level of control and determinism with ECS for Unity.

Megacity Multiplayer helps you learn how to structure, network, and operate a competitive-action multiplayer game with solutions from the Unity ecosystem. It also showcases how creators can leverage UGS to develop engaging multiplayer games. For example, Game Server Hosting offers a streamlined approach to maintaining resiliency and scalability in gaming infrastructure, and Matchmaker, Voice Chat, and Authentication are used to connect your players and facilitate in-game communication.

We eagerly await your feedback so we can further refine and enhance the multiplayer developer experience. Your insights and suggestions ensure we provide useful samples that inspire and empower our creators.

Our current plans include porting the Megacity Multiplayer sample to the Universal Render Pipeline (URP) to help demonstrate how true crossplay competitive action can be built with Unity tools. Stay tuned.

Get the sample and request features or updates via our public roadmap. Then, keep up with the latest on Megacity by staying in touch with us and sharing your experience (or asking questions) on Discord or in the forums.

>access_file_
946|blog.unity.com

More capabilities to make your multiplayer game even better

Making multiplayer games isn’t easy, but the rewards make it well worth it.

In September of last year, we launched a new self-serve experience for our Game Server Hosting (Multiplay) and Matchmaker products to make it easier for everyone to get their hands on the tech that supports some of the biggest multiplayer games in the world. Since then, we’ve been hard at work creating more tools and features to help you build and run your live multiplayer game.

We want to make it easier for you to launch and run your multiplayer game, and we know that allowing you to integrate our solutions into your standard workflow can save a huge amount of time and effort.

With that in mind, we’ve added the ability to upload your game server build directly from AWS S3 buckets. You can now reference cloud storage buckets as a source of your game binaries from within the Game Server Hosting interface. This allows you to adopt Game Server Hosting without disrupting any existing infrastructure, pipelines, or continuous integration and continuous deployment (CI/CD) workflows that rely on an external cloud storage solution, such as Amazon S3 buckets. After linking an external bucket to a Game Server Hosting project, you can view, manage, and sync the build files from the Unity Dashboard.

To help you automate your tasks further, we’re also launching our new Command Line Interface (CLI) for Unity Game Server Hosting. The new CLI introduces an efficient and cohesive way to interact directly with Unity Game Server Hosting. You can now use this capability to upload builds and create and manage fleets and servers without having to access the dashboard. This capability not only saves time but also removes the need for manual intervention. It also allows you to automate key processes, such as uploading builds as part of CI/CD pipelines and retrieving server information to accelerate testing cycles. You can quickly perform complex tasks, automate workflows, and do it all in line with the process you have become used to in the dashboard.

A great matchmaking service is essential for the success of a multiplayer game. Matchmaker has a powerful rules-based engine that gives you the control to match the right players, in the right place, in the right amount of time.

We’ve taken this a step further and have launched our matchmaking A/B testing capability. This lets you experiment with different matchmaking configurations while retaining complete control and without hurting your live game operations. Using the capability provided by Unity Analytics, you can streamline the workflow to test, evaluate, and optimize matchmaking rules.

In a couple of simple steps, you can configure the match rules you want to test, apply them to your game, and get real game data to evaluate which rules have the best impact. You can target specific player groups by selecting audiences like “All Spenders” or “Churned Players” as defined by Unity Analytics, and you can choose the goal metrics that are most relevant to your game, including daily active users, retention, and average revenue per user, to compare and assess the impact of your changes.

This is available now so you can get started with Matchmaker A/B testing. You can also see it in action by checking out our talk from GDC 2023.

To get started with these new capabilities, go to the Unity Dashboard, and to discuss them with the team in more detail, go to our Multiplayer forums.

>access_file_
947|blog.unity.com

Unite 2023: How to participate and what to expect

Editor’s note: This blog was last updated in October 2023.

For the first time in four years, our popular Unite conference is back, live and in person. Whether you’re new to Unity, a seasoned pro, or just curious about our real-time 3D technology, Unite 2023 will celebrate the vibrant Unity community in all its glory.

We have an exciting agenda planned that covers everything games, including our renowned Keynote – full of inspiring content, crowd-pleasing announcements, and walk-ons from people behind some of your favorite games. During the day, there’ll be 30+ deep-dive technical sessions organized into four tracks – Ecosystem, AI, Multiplayer, and Growth. Our team will be spotlighting the latest Unity tools and features, demonstrating timesaving workflows, sharing proven strategies for ensuring game success, and giving you a sneak peek of what’s coming soon. Members of the Unity community, including developers from top studios, will be highlighting how they use Unity to achieve amazing games and experiences and run them on multiple platforms.

And before, during, and after these sessions, there’ll be parties and plenty of great opportunities to catch up with your peers and connect with like-minded devs. Here are just some of the tantalizing details about what you can expect at Unite 2023.

Unite 2023 will take place in Amsterdam on November 15–16 at two main venues:

Welcome Party: November 15, 7:00–10:00 pm: To kick off Unite 2023, we’re all going to the world-famous Heineken Experience. Join us at this historic brewery on November 15 for tasty beverages and bites, multiple DJs, and good times meeting your friends and making new ones.

Full Conference: November 16, 8:00 am–11:00 pm: Join us at Taets Art and Event Park located just on the outskirts of Amsterdam – a creative hub stretched over three historic buildings along the waterfront.

The Unite 2023 Keynote will take place on November 16 at the Taets Art and Event Park at 10:00 am local time, with Interim CEO and President Jim Whitehurst expected to open the address, plus many product experts and top studios taking the stage.

Register to attend in person and see the Keynote live. For those not able to join us in person, the Keynote will be available to stream on our YouTube and Twitch channels starting at 1:00 pm ET/10:00 am PT.

As well as our Welcome Party at the Heineken Experience on November 15, there’s a Happy Hour and an After-Party at the end of the conference. You’ll also have plenty of opportunities to network with your peers.

Now for the juicy stuff: let’s get into some of the biggest topics you will hear about during Unite 2023.

As with every year, the newest innovations are top of mind for game developers everywhere. When it comes to AI, discover how Unity Sentis and Unity Muse are helping devs push the limits of creativity and build previously unimaginable experiences. Next, hear more about how we worked closely with Apple to provide a deep integration of visionOS with Unity PolySpatial, enabling creators to bring beloved games and apps to a whole new audience and ecosystem, or create something entirely new. Then, in the XR space, find out how Unity is helping developers create and bring their games to platforms like PSVR2 and Meta Quest 3.

How do you actually make a living from games or build a successful project? At Unite, get actionable insights to help you apply business-school principles and turn a good game into a great business.
Then hear studio success stories about how solutions like Wētā Tools, SyncSketch, Parsec, and Unity Gaming Services helped bring characters and entire games to life.

With roadmap deep dives, you will learn the latest ways you can build a better game with Unity. This means announcements about what’s next in graphics technology, the Data-Oriented Technology Stack (DOTS), ambitious multiplayer solutions, improved iteration and collaboration tools, and the best options for mobile monetization – all with an added sneak peek at what’s ahead.

The fun, however, will have to wait for November. Not to worry, though, because you can catch up with last year’s Unite via our YouTube playlist and watch the most popular sessions. To keep up with all Unity news, sign up to receive blog posts straight to your inbox.

>access_file_
950|blog.unity.com

SpeedTree 9.5: Unleash creative control and realism with procedural detailing

The latest version of SpeedTree is here to help you elevate your projects to new levels of realism.

SpeedTree is the industry-standard vegetation asset creation tool, and can be used across projects from blockbuster films to indie games to bring environments to life with procedural modeling, flexible art tools, a living asset library, and cutting-edge photogrammetry workflows. SpeedTree 9.5 helps you bring your environments to life with enhancements in several areas.

More creative control: With the new frond manipulation feature, you can now randomize and manipulate the fronds of a leaf. By leveraging gravity, wind, and curling features, the fronds come to life, adding a natural touch to your scenes. We have also eliminated the need to jump in and out of other modeling software for editing, to further streamline the workflow.

Greater levels of realism: SpeedTree 9.5 introduces Projector, a groundbreaking technique for placing procedural details, such as moss, snow, and twigs, on model surfaces. Using ray casting technology, these details are efficiently and realistically distributed across large-scale environments. By simulating how these elements would naturally accumulate on surfaces, this feature helps you add a new level of authenticity to scenes and landscapes.

New techniques for procedural details: The new height map support allows you to identify subsections on a mesh via the paint tool in the cutout editor, eliminating the need for external modeling software. This streamlined workflow centralizes creative control so you can stay focused on your artistic vision.

Request a free trial, or get SpeedTree 9.5 now. For more insights on SpeedTree and other Unity tools for artists, join us at SIGGRAPH 2023.

>access_file_
951|blog.unity.com

Unity support for visionOS: What you need to know

Following the Apple Vision Pro and visionOS announcements at Apple’s Worldwide Developers Conference (WWDC) 2023, we are excited to share that Unity’s beta program for creating spatial experiences on the visionOS platform starts today. We worked closely with Apple to provide a deep integration of visionOS with Unity, enabling creators to bring beloved games and apps to a whole new audience and ecosystem, or create something entirely new.

The visionOS platform represents an exciting opportunity for developers to create the next generation of compelling spatial experiences using the Unity Editor they know and love. We’re also thrilled to debut Unity’s PolySpatial technology, which will power Unity content alongside other apps in the Shared Space on Apple Vision Pro.

We know developers are excited to get started with this new platform. Beta participants will be added to the program over the next few months, but there are lots of things you can do today to start preparing content. Let’s dive into what you need to know.

WWDC 2023 was an exciting moment for Unity and the XR ecosystem as a whole, as Apple announced its collaboration with Unity to help bring creators into the era of spatial computing through Apple Vision Pro. To learn more about Apple Vision Pro, visionOS, the SDK, and core concepts around spatial design, check out the Apple Developer website.

Two important Unity learning sessions were released as part of the WWDC event. We highly encourage interested developers to watch each session to learn more about Unity development for visionOS: Create immersive Unity apps, with Vladimir Vukićević, director of engineering, and Bring your Unity VR app to a fully immersive space, with Peter Kuhn, engineering architect.

Let’s review the ways apps can run on Apple Vision Pro. There are three main approaches to creating spatial experiences on the visionOS platform with Unity:

1. Port an existing virtual reality game or create a new fully immersive experience, replacing the player’s surroundings with your own environments.
2. Mix content with passthrough to create immersive experiences that blend digital content with the real world.
3. Run multiple immersive applications side by side within passthrough while in the Shared Space.

Porting an existing application or creating an entirely new one is straightforward with Unity. Here’s a quick overview:

Workflow: With full support for the visionOS platform in Unity, you can see your projects running on Vision Pro in just a few steps. To start, select the build target for the platform, enable the XR plug-in, and generate an Xcode project. Then, from within Xcode, you can build and run to either Vision Pro or the device simulator.

Graphics: Unity recommends using the Universal Render Pipeline for visionOS projects because it enables a special feature called foveated rendering for higher-fidelity visuals.

Input: People will use their hands and eyes to interact with content on Vision Pro. Unity’s XR Interaction Toolkit adds hand tracking to make it easier for you to adapt existing projects. You can also react to built-in system gestures with the Unity Input System, and access raw hand joint data for custom interactions with the XR Hands package.

Shared Space: Unity’s new PolySpatial technology enables developers to create apps that can run side by side in the Shared Space.

In addition to immersive apps, developers can also run content in a window that the user can resize and reposition in their space.
This is the easiest way to bring existing mobile and desktop applications to visionOS, and is the default mode for content targeting the visionOS platform. Beta support for windowed applications is available to try today in Unity 2022 LTS (2022.3.5f1 or newer).

While Unity’s beta for visionOS gradually rolls out to participants, there are several important steps you can take to prepare your projects for this new platform:

1. Learn more about our support for Apple Vision Pro and our PolySpatial technology in Unity’s WWDC session talks.
2. Upgrade your existing projects to the latest version of Unity by installing Unity 2022.3 LTS (2022.3.5f1+) through Unity Hub.
3. Familiarize yourself with Unity XR tools: AR Foundation, used to blend digital content with the real world, and the XR Interaction Toolkit, used to implement input and interactions.
4. Prepare your project for visionOS: use (or upgrade to) the Universal Render Pipeline to take advantage of performance optimizations and visionOS platform features like foveated rendering, convert controller-based interactions to hand-based interactions, use the Unity Input System, and port shaders to Shader Graph or use standard shaders.
5. Try porting or creating a windowed app with Unity 2022.3.5f1 or newer.

Register your interest in joining Unity’s beta program by signing up today. You’ll be notified by email when participants are selected to join the beta program. We can’t wait to see what you create!

>access_file_
953|blog.unity.com

Where might AI take gamedev next?

In less than a year, AI has gone from being something most people don’t think much about to the subject of near-daily headlines. Generative AI is already revolutionizing creativity, and it will lead to a giant step forward in video games, too. In fact, our latest closed beta releases are taking steps toward that reality already.

AI brings the promise of making real-time creation accessible to more people, simplifying some development tasks to let you focus on being creative while helping you to achieve more with less. And generative AI is just the latest in a long line of technological breakthroughs that have revolutionized video games.

In 1952, a research project called OXO became the world’s first video game: tic-tac-toe programmed on a massive mainframe computer the size of a room. Atari engineers used an understanding of transistor-transistor logic (TTL) chips to create Pong in 1972, and its simplicity enabled domestic distribution, making it the first commercially successful video game. Space Invaders integrated a microprocessor and a multi-chip barrel shifter circuit into the first-ever fixed shooter, giving the 1978 classic incredibly smooth animations that set the table for this new genre.

The arrival of PCs like the Apple II and C64 introduced a broad spectrum of new possibilities – the modular setup with adaptable gravity and physics in games like Pinball Construction Set; the fluid rotoscoped animation of the first Prince of Persia adventure; and the widely ported Lemmings, which saw the game’s simple puzzles released on pretty much every gaming system out there at the time. Consoles really came into their own with 8-bit home systems, which offered unprecedented control, the 16-bit Sega and games like Sonic the Hedgehog, and the console-agnostic sports hits of Electronic Arts, which harnessed that control and realism to bring players inside their favorite sports.

DVD consoles increased visual and speed bandwidth again in games like Road Rash and Need for Speed. The advent and broad adoption of the internet made MMOs possible, bringing gamers from around the world together to play things like Ultima and World of Warcraft. The 32-bit Sony GPU opened up new dimensions of creativity – literally – which brought new popularity to FPS games like Quake II that made stalking, shooting, and dodging more thrilling in 3D.

Each of these technical shifts produced a revolution in creativity that reframed how players game. Generative AI, however, could potentially drive the greatest advancements in video games since transistors made the idea of playing tic-tac-toe on a computer possible.

We’re at the hyper-fertile start of a new age with AI. Each week brings a staggering number of fresh ideas, demos, and services that simultaneously spark our imaginations and raise new concerns. And while those concerns are very real and will have to be recognized and addressed ethically throughout the ecosystem, the possibilities for generative AI in video game development are impossible to deny.

AI has played a role in game development far longer than ChatGPT has been on the scene. It’s already widely used to accelerate elements of graphics production, train simulations through machine learning, and automate testing and repetitive tasks.
The future is limitless, but we can already start to see some applications take shape. Every single aspect of game creation has the potential to be impacted, accelerated, and possibly improved by AI.

Asset generation: Generative AI can be used to create game content like characters, terrain, vegetation, lighting, and even audio automatically, rather than being programmed by a developer. These capabilities can not only save development time and resources, but also enable previously unthinkable permutations during gameplay. Environments, for example, might feel less static or hard-coded to offer more immersive game experiences. The tech for this is already close – learn more about Unity Muse and apply for the closed beta to get started now.

Non-player characters (NPCs): Interaction with NPCs during gameplay has been extremely limited by prescriptive programming behaviors. Generative AI may soon be able to power more lifelike and realistic NPCs that behave with greater adaptability and intelligence, for example by reacting realistically (yet unpredictably) to a player’s actions.

Greater safety: A consistent issue with massively multiplayer online games (MMOs) has been unwelcome toxicity and overall safety, especially for younger players. Coupled with natural language processing (NLP), generative AI can more effectively manage the detection of harmful language and interactions to power safe yet entertaining experiences. Discover how machine learning is already helping devs to monitor in-game comms and build safer player communities with Safe Voice from Unity Gaming Services.

Game design: Imagine if generative AI could be used to create game mechanics, levels, and character play that responds to player behavior and dynamically adjusts the game’s difficulty level accordingly. In short, it could be used to produce bespoke gaming experiences tailored to specific players. Non-deterministic game design could also open the door to entirely new genres of games – imagine, for example, compelling, open-ended detective games that change and adapt with every move.

But this potential doesn’t end with creation – every aspect of the interactive runtime can be impacted and solved with a mix of compiled code and ML-trained AI solutions.

Runtime inference: With Unity Sentis, designers can build game loops that rely on inference – the process of feeding data through a machine learning model – on devices from mobile to console to web and PC, without cloud compute costs or latency issues. This will be used to run NPC characters like Orb, or restylize a game without requiring all-new artwork (for a night scene, for example, very much as Hollywood does it), or it could replace a physics engine with something 1,000 times more efficient. This could be fluid dynamics or particle interactions – the use cases of a performant runtime-based inference engine are endless.
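As a rough sketch of what such an inference loop can look like, the snippet below follows the Sentis API as it shipped in the 2023 beta (ModelLoader, WorkerFactory, IWorker); treat the exact names as assumptions and check the current package documentation, since the API has been evolving:

```csharp
using Unity.Sentis;
using UnityEngine;

// Hedged sketch of on-device inference with Unity Sentis, based on the
// package's 2023 beta API; intended to show the shape of the workflow
// rather than exact, current code.
public class NpcBrain : MonoBehaviour
{
    [SerializeField] private ModelAsset modelAsset; // an imported ONNX model
    private IWorker worker;

    private void Start()
    {
        Model model = ModelLoader.Load(modelAsset);
        // BackendType.CPU is the safer cross-device choice on hardware
        // without compute shader support.
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    public float[] Decide(float[] observations)
    {
        using var input = new TensorFloat(
            new TensorShape(1, observations.Length), observations);
        worker.Execute(input);
        var output = worker.PeekOutput() as TensorFloat;
        output.MakeReadable(); // sync GPU -> CPU before reading
        return output.ToReadOnlyArray();
    }

    private void OnDestroy() => worker?.Dispose();
}
```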
Generative AI will change our world in profound ways, known and unknown, but it will certainly empower teams to create entirely new game genres that will surprise and entertain us as never before. I am looking forward to the next few years to see what new genres are going to appear.

At Unity, we believe the world is a better place with more creators in it, and AI will be a powerful tool to help creators of all kinds unlock new dimensions of productivity and creativity. We can’t wait to see where you take us next.

Stay tuned to the blog for more about Unity and AI, and, if you haven’t already, sign up for our AI mailing list to be the first to hear about new tools and services.

>access_file_
954|blog.unity.com

Tackling toxicity with Unity Safe Voice

Toxic behavior can turn a potentially awesome gaming experience into a nightmare. When insults, trolling, and derogatory remarks are thrown around, many gamers choose to peace out, leaving the game behind. This player exodus can hurt a game’s community and longevity.

Studios dedicate enormous amounts of resources and people to make in-game communities safer, but the tools required to foster healthy, fun, and engaged player communities haven’t been up to the task – until now. The 2023 Unity Gaming Report further identified a need to support healthy community building for games, and we’re delighted to announce that Unity Safe Voice is now available in closed beta.

Safe Voice from Unity Gaming Services (UGS) identifies disruptive and toxic behaviors exhibited by players in your game. By harnessing cutting-edge AI techniques such as advanced machine learning and deep learning algorithms, Safe Voice provides analysis of voice communications so you can identify and address toxic behavior effectively and efficiently. It helps you understand which behaviors are negatively impacting your player experience and gives you the insights you need to create communities that are safe by design.

Current anti-toxicity solutions lack contextual understanding of interactions between players. They might rely on speech-to-text or transcription-based analysis, which falls short in distinguishing between actual harmful behavior and harmless banter, or in identifying players who use alternate forms of disruption, like loud noises. Words alone can be easily misinterpreted, which can further limit the effectiveness of assessing intent.

Many solutions also rely solely on player reports to take action on toxicity. This places the burden on players to actively report violations and ignores instances where players might just leave the game, never to return.

Safe Voice overcomes these limitations by using context-aware AI technology. It can detect audio disruptions like loud music or toxic behaviors, then classify these into more than a dozen categories including obscenities, threats, insults, identity attacks, problematic speech, and verbal attacks. Safe Voice analyzes unique voice characteristics like tone, loudness, pitch, and emotion to deliver nuanced insights on both session- and player-based metrics.

You can customize which interactions are monitored, rely on player-initiated or proactive situational triggers, and prioritize reports based on what matters most to your communities.

Moderation can be resource-intensive, especially for games with large player bases. This can lead to delays in addressing player reports, slower response times, and constantly overwhelmed moderation teams. Safe Voice’s AI-powered detection and categorization automates formerly manual processes and provides results in near real-time so you can take action faster. You can customize Safe Voice to your team’s specific needs to help your moderation squad be more efficient.

Safe Voice kicks in when players flag instances of toxic or disruptive behaviors. You can also proactively monitor activities in your game, like player mutes, exits, or when a vulnerable player is in a session. The service categorizes and prioritizes sessions that need attention based on your preferences, so no legitimate player report or serious violation is lost.

Some game genres have a higher likelihood of toxic behavior due to their competitive nature, anonymity, or the intensity of player interactions.
Perceptions of toxicity can also vary based on individual experiences and community culture. Your toxicity detection tools should align with the norms and culture of the community you want to create. With Safe Voice, you can build keyword lists to target or deprioritize specific words or phrases and customize toxicity threshold scores to make sure your coverage is as unique as your game and community. This contextual analysis is critical to keeping the essence of your game alive while developing a safe environment for your players.

Safe Voice’s detailed reporting dashboards give your team a thorough, nuanced understanding of your community. The overview dashboard shows trends over time and surfaces the behaviors that drive the most disruption in your game. You can see a detailed breakdown of player attributes and behaviors and categorize players as toxic or vulnerable. The data is updated in near real-time and reflects your custom priorities.

Today’s largest online communities, like those in Valorant and Rainbow Six: Siege, rely on Unity Voice Chat (Vivox) to connect their players through in-game communication across platforms. With Safe Voice, Voice Chat users can now add a layer of safety to their enhanced gaming experience.

We invite you to join us in creating the game communities of the future, where safety and inclusivity are top priorities. Whether you’re experiencing toxic behavior in your game or want to be proactive in establishing a safe environment from the start, Safe Voice can help. Chat with us about Safe Voice and how we can further help with any toxicity you’re navigating in your game communities in the forums.

>access_file_
955|blog.unity.com

Improving Burst Inspector search performance

My name is Jonas Reholt and I’m a student working with the Burst team. I’m taking to the blog to share my journey of optimization that helped make recent performance changes to the Burst Inspector possible. Burst Inspector search is now 13 times faster, enabling developers to more quickly focus on the code they care about when optimizing projects. Continue reading to learn how you can use the Unity Profiler to investigate performance bottlenecks in your program and how to fix them.

The Unity Burst compiler transforms your C# code into highly optimized assembly code. The Burst Inspector lets you inspect that assembly code directly in the Unity Editor, so you don’t need to use external tools for simple code inspection. When first opening the Burst Inspector and selecting a target job to display, you’ll see a window that provides syntax highlighting, branch flow arrows, and much more.

The inspector will try to scroll to the assembly that’s implementing the chosen target function, but it’s also useful to search the assembly view for specific instructions, comments, etc. That brings us to the topic of this blog post.

To perform the search, the inspector has to search the original assembly output and transform the resulting match indices into positions in the inspector view. The original search functionality followed a simple loop built around, and relying heavily on, the implementation of System.String.IndexOf(*).

Running this search on 135,582 lines of assembly code for a common search hit (21,769 hits in total) resulted in an execution time of about 12 seconds for the first search, and about 5 seconds for subsequent searches. This isn’t really a desirable waiting time for a GUI event, so we had to do something. Running the search through the Unity Profiler revealed that 37.3% of the execution time was spent in IndexOf(*).

A sensible optimization has to address the reliance on this function, either by making a custom implementation or changing the algorithm altogether. No matter what algorithm is used, it will involve stepping through the entire string. So, some custom implementation for finding matches is required. Given this, it seemed fitting to start the optimization by keeping the original algorithm, but creating a custom IndexOf function.

The 3.34 seconds spent on LongTextArea.GetFragNrFromBlockIdx() stems from retrieving uncolored assembly code. This is used to perform the search. The Burst Inspector currently saves the assembly code twice – once formatted for rendering, and once unformatted. Writing a custom function also has the nice side effect of reducing the number of calls, since there’s currently a call for each search hit, plus one.

The source code of IndexOf(*) reveals many safety checks needed for a robust general implementation. However, in our case we can safely assume most of these checks are true. To try and squeeze out every drop of performance, you’ll want to create a C-like function to avoid things like bounds checks. You can write the function along the lines of the sketch below, where IsKeyMatch(*) simply checks whether the key matches at a given position. However, because C# is a managed language, this C-like function requires you to pin the managed objects used so that the garbage collector does not relocate the memory address.
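The original post shows the pseudocode and pinning boilerplate inline; as a stand-in, here is a minimal sketch of the idea, using C#’s fixed statement to pin both strings (the names are illustrative, and the real Burst Inspector implementation differs):

```csharp
using System.Collections.Generic;

// Sketch of a C-like IndexOf replacement. Requires compiling with
// unsafe code enabled (<AllowUnsafeBlocks>true</AllowUnsafeBlocks>).
public static class FastSearch
{
    public static unsafe List<int> FindAllIndices(string haystack, string needle)
    {
        var indices = new List<int>();

        // Pin both strings so the GC cannot relocate them while we hold
        // raw char pointers into their memory.
        fixed (char* text = haystack)
        fixed (char* key = needle)
        {
            int limit = haystack.Length - needle.Length;
            for (int i = 0; i <= limit; i++)
            {
                if (IsKeyMatch(text + i, key, needle.Length))
                    indices.Add(i);
            }
        }
        return indices;
    }

    // Checks whether the key matches at the given position; no bounds
    // checks, since the caller guarantees the range is valid.
    private static unsafe bool IsKeyMatch(char* text, char* key, int keyLength)
    {
        for (int j = 0; j < keyLength; j++)
        {
            if (text[j] != key[j])
                return false;
        }
        return true;
    }
}
```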
The pinning boilerplate is the pair of fixed statements in the sketch above. Putting these things together enables you to separate the original while loop into a single call to the indices finder, followed by the logic for handling the search hits.

What were the gains? Using the small example from before, this change to the code gives a 6.6x speedup on the initial call, and a 13.2x speedup on subsequent calls (measured as old/new). The lower speedup on the initial search stems from the overhead of loading the unformatted assembly, which avoids finding matches in the color markup.

With these improvements, heavy-load searches with a little under 22,000 hits will now take about 1.8 seconds for the initial search, and around 0.4 seconds for subsequent searches. This makes the Burst Inspector more usable for large assemblies, since there’s no longer enough time to make a cup of tea during each search.

You can take advantage of this performance improvement now with the Burst 1.8.7 package. Looking for more on Burst? Connect with us in the Burst forum. Be sure to watch for more new technical blogs from other Unity developers as part of the ongoing Tech from the Trenches series.
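For contrast, here is a hedged sketch of the two loop shapes referred to above: the original per-hit IndexOf(*) loop versus the split into one indices-finding call plus separate hit handling. SearchOld, SearchNew, and HandleSearchHit are hypothetical stand-ins (HandleSearchHit represents the inspector’s index-to-view-position logic), and the ordinal comparison is an assumption.

    using System;

    static class SearchLoops
    {
        static void SearchOld(string text, string key)
        {
            // Original shape (sketch): one IndexOf call per hit, plus the
            // final call that misses.
            int idx = text.IndexOf(key, StringComparison.Ordinal);
            while (idx != -1)
            {
                HandleSearchHit(idx);
                idx = text.IndexOf(key, idx + key.Length, StringComparison.Ordinal);
            }
        }

        static void SearchNew(string text, string key)
        {
            // New shape (sketch): a single pass collects every match index,
            // and the hit-handling logic runs over the results afterward.
            foreach (int hit in SearchSketch.FindAllIndices(text, key))
                HandleSearchHit(hit);
        }

        // Stand-in for transforming a raw match index into a view position.
        static void HandleSearchHit(int index) { }
    }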

>access_file_
956|blog.unity.com

Meet Smart Locks, a new way to reduce merge conflicts with Unity Version Control

At Unity, we’re passionate about enabling creators to do their best work. That’s why the Unity DevOps team is excited to introduce Smart Locks, a new feature of Unity Version Control.

Smart Locks vastly reduces the painful merge conflicts commonly associated with file locks and branching. Developers have long been branching for faster and safer iteration. Now everyone, including artists, can use branching to scale projects with confidence. Smart Locks automatically checks that you are working from the latest version before allowing you to lock a file, so it greatly minimizes the risk of merge conflicts and empowers teams to branch without worry.

If you’re an artist creating a feature branch, a task branch, or a personal branch, Smart Locks provides the flexibility to branch and work in parallel with teammates without worrying about conflicts. You can experiment and iterate faster while your main project history remains safe. Smart Locks also helps you explore diverse workflows: instead of changing your team to follow your version control system (VCS), you can adapt Unity Version Control to what works best for your team.

Many users believe that simply locking or checking out a file will automatically prevent merge conflicts. Unfortunately, that isn’t the reality. Traditional file locking mechanisms do provide protection against some conflicts; however, these locks don’t travel from branch to branch, so another artist can check out the same file from a different branch. This inability to travel leaves teams vulnerable to merge conflicts because it never addresses the underlying incompatibility between locking workflows and branching.

To understand how Smart Locks works, first consider how a merge conflict occurs even when a team uses file locks properly: one artist locks and edits a file on their branch, while a teammate – who sees no lock on their own branch – checks out and edits the same file elsewhere, and the two sets of changes collide at merge time. This scenario illustrates how teams using branching frequently encounter merge conflicts, despite their best efforts to use file locking. The result? Wasted time and lowered team morale.

Smart Locks solves this problem by allowing users to define a branch as the source of truth. Whether you are branching or working out of a single thread, the lock will “travel” across branches, following a unique development line, until it reaches the destination branch where the change is checked in or merged back. Smart Locks enforces this single line of development whether you continue working in one branch or move on to creating child branches. All locking requests associated with a given file are now aware of any new versions existing in different branches. This means you don’t have to wonder whether your changes conflict with a teammate’s or whether you’re working on an outdated version.

This simple and effective process prevents multiple team members from working simultaneously on conflicting versions, so no changes slip through the cracks. It helps ensure everyone’s artistic vision is considered and makes simultaneous collaboration practically painless.

Most programmers, likely familiar with Git-based systems, already understand and appreciate the value of branching. The main benefits of branching for artists are the same as those for coders. When you work within branches, you are effectively separated from the main history of your project. This isolation enables you to prototype and experiment safely, without having to worry about potentially breaking your project. Safe experimentation enables you to iterate continuously and build multiple versions, so you can choose your favorite by navigating the repository history.
Let the best idea win.

Branching inherently reduces the noise of simultaneous collaboration. It makes space for the creation of fresh ideas while maintaining a relationship with the original concept. In simple terms, think of the difference between iterating in a Google Doc by yourself versus working in a single document with two hundred other collaborators. You can generate new concepts without fear of merge conflicts, storing them within your version control system rather than working independently on your local drive or in an external source not integrated with your main project.

Branching enables teams to break complex workflows into digestible pieces. You can create branches to match how you have organized your project. In game development, it’s natural to partition work for easier project management. For example, you might divide work within a team by features, characters, or even whole levels. Your team can focus on their assigned work within their specific branch. Partitioning work within branches enables different teams and team members to work at their own pace, in their own style, and with their own processes – all while simultaneously contributing to the greater project. This removal of friction not only makes collaboration smoother, but your team is also more likely to update your project history with greater frequency. You can ship faster and keep up with gamer expectations.

Branching also makes it easier to see the full picture of your project history; when everything is checked straight into main, the full breadth of changes is harder to see. Branching helps you identify those changes faster.

We designed Smart Locks to give all members of your team flexibility in terms of how they like to work. We also recognize that dealing with complex file locks can be a hindrance in certain situations, like the ideation and experimentation phases of a project. That’s why, in addition to traveling locks, we’ve also built a new branch exclusion capability. This enables you to exclude branches from the locking mechanism by setting custom lock rules. When you know you will never need to merge back into the source branch, you can prototype or experiment within your branch, unencumbered by file locking restraints.

To ensure you can keep track of complex projects and clearly visualize your existing lock list, we’ve also improved the graphical user interface (GUI) on both the desktop client and within uDash. By viewing your lock history, you can easily see who created a lock, and when. Additionally, we’ve placed more prominent affordances to indicate how to lock and unlock files, accompanied by helpful messages that will notify you of any existing locks on a specific file.

To take advantage of this game-changing feature, simply update your Unity Version Control installation to the latest release. For on-prem customers or former Plastic SCM Enterprise customers, you’ll need to update your servers and clients to fully experience the benefits of Smart Locks. Be sure to read the documentation before you get started.

Unity Version Control is an engine-agnostic version control tool with the agility to handle large files and binaries at speed. With optimized workflows for both artists and programmers in game studios of all sizes, you can improve team collaboration and increase productivity to deliver high-quality games faster and more efficiently. To get started for free, enroll in Unity DevOps (terms apply).

>access_file_
957|blog.unity.com

Unity for Humanity supports social impact creators at 2023 Tribeca Festival

Tribeca Games and Immersive, part of the 2023 Tribeca Festival, took place June 7–18 in New York City, and the Unity for Humanity team was fortunate enough to attend and support some of the impressive projects. The festival program included an incredible lineup of impact-driven games and experiences, shedding light on pressing social issues as well as sharing experimental narratives and introducing awe-inspiring 2D and 3D games.

Unity for Humanity supported five impact-driven projects with grant funding: Colored, Maya: The Birth (Chapter 1), The Fury, Kinfolk: Black Lands, and Reimagined: Volume II: Mahal. Of the projects Unity supported, two went home with Tribeca honors.

Colored, a location-based augmented reality (AR) experience, tells the often untold story of Claudette Colvin, a 15-year-old girl who stood up to segregation laws in Alabama nine months prior to Rosa Parks. The AR storytelling beautifully combines the Magic Leap device and video projection to create a unique, poetic experience that incorporates theatre, history, and immersive entertainment. Adapted from an essay by Tania de Montaigne, Colored transports audiences to 1950s Montgomery, Alabama to experience the Civil Rights Movement through the embodiment of Claudette Colvin. Through Claudette, Colored not only highlights the experience of segregation, but also explores colorism and how it has persisted throughout history to impact who we choose to honor and celebrate.

Maya: The Birth (Chapter 1) is a virtual reality (VR) experience created to bring the “taboo” subject of menstruation to the forefront. The audience follows Maya, a young girl in the U.K., as she experiences shame and bullying due to beginning her menstruation. As Maya confronts this shame, she transforms into a powerful superhero deriving her power from the process of menstruation. Directors Poulomi Basu and CJ Clarke were inspired by stories of real women in Nepal, who are forced into exile and often assaulted because of their menstrual blood, to create Maya as a fantastical alternate reality where women are powerful and their strength is celebrated. Maya was awarded a Tribeca New Voices Special Mention. The jury commented that Maya is “an imaginative way to tell an everyday story in a vivid world. Presenting a shift in perspective, the project opens new imaginaries with under-told narratives. This project left us on a hook and the jury is excited to see its next steps and continued development.”

The Fury is an examination of the brutal treatment of political prisoners, including the sexual exploitation of female political prisoners in Iran. Acclaimed artist Shirin Neshat combines performance, video art, and immersive storytelling to create a strikingly powerful experience which mirrors the memory of sexual assault to demand change.

Kinfolk: Black Lands is an AR exploration of Black self-sovereignty in New York City. It explores the lives of three ancestors who represent free Black communities that existed in New York from the 1600s to the early 1900s. This experience is educational and aspirational, empowering audiences to rethink oppressive systems and imploring the next generation to reimagine and build a more equitable future. In 2021, Unity for Humanity supported the Kinfolk team’s Tribeca Juneteenth exhibition.
This year, the team was awarded a Storyscapes Special Jury Mention for being “a profound and authentic representation of the Black experience in America.” Judges were inspired by Kinfolk’s mission to “bring history to contemporary audiences through AR technology as it not only celebrates the richness of Black culture and history, but also serves as a powerful tool for education and understanding, making it a standout contender deserving of recognition.”

Reimagined: Volume II: Mahal was inspired by Director Michaela Ternasky-Holland’s early loss of her father. Weaving in her ancestry and Philippine mythology, Mahal tells the story of four immortal children grieving the loss of their father, the creator god Bathala. Each makes sense of their father’s passing in their own way, causing dangerous effects. The gods must learn to honor their father together to save the universe, reflecting themes of community, loss, love, and acceptance to the audience.

Also during Tribeca, Unity for Humanity Community Manager Paisley Smith moderated a panel discussion on “The Birth of a Super-Heroine: How Women Shape Stories for the Future Generation.” The panel featured amazing immersive creators Katayoun Dibamehr (producer, Maya: The Birth [Chapter 1]), Ana Ribeiro (creator, Pixel Ripped 1978), Michaela Ternasky-Holland (director, Mahal), and Eloise Singer (director, The Pirate Queen: A Forgotten Legend). It also touched on the strong characters depicted in each of the creators’ projects and how their personal experiences across games and immersive production shaped the creation of their work.

Panelist Eloise Singer was awarded the Storyscapes Award for The Pirate Queen. The jury commented that the project received this top award “for its outstanding technical execution, immersive user experience, and unique and untold story of a nearly forgotten woman in history.” Finally, Goodbye Volcano High (also made with Unity) by KO_OP was awarded the Tribeca Games Award for “how much [it] felt of the moment and questions whether you should still care about anything when everything sucks – complete with doom scrolling, dinosaurs, and high school band drama.”

Congratulations to all the impact-driven artists and creators who exhibited work at the Tribeca Festival. Keep creating meaningful change!

Unity for Humanity is designed to celebrate and support impact-driven creators. If you are using real-time 3D for social impact, join our Unity for Humanity Community Discord to attend monthly events, learn about upcoming grant opportunities, and more. You can also subscribe to Unity’s Social Impact newsletter.

>access_file_
959|blog.unity.com

Celebrating the recent success of our creators

We’re continually amazed by what you develop, and there’s nothing more rewarding than seeing your games celebrated and enjoyed by players around the world. Recently, we’ve seen BattleBit Remastered rocket to the top of the Steam charts and join the ranks of other Made with Unity games that have experienced phenomenal recent success on Steam, like V Rising and Sons of the Forest. Let’s keep this celebration train moving and take a look at some other Made with Unity game highlights from May and June 2023.

Players and game enthusiasts around the world have viewed made with Unity game content over 5 million times on Twitch* in the month of June. You’ve probably watched Rust over on Twitch already, but have you also checked out Dave the Diver? And if you haven’t seen our Twitch channel yet, we recently had the team at Double Stallion talk about their multiplatform game Convergence: A League of Legends Story. Tuatara Games joined us to showcase their multiplayer title Bare Butt Boxing, and we hosted Thomas Waterzooi, who discussed the design behind his popular puzzle game Please, Touch the Artwork (stay tuned for upcoming case studies with a behind-the-scenes peek at both of these games).

In the mobile gaming space, Unity creators have once again proven their excellence. The top five most successful mobile games by revenue made with Unity in June 2023 were Honor of Kings, Honkai: Star Rail, Coin Master, Pokémon GO, and Genshin Impact.** These games are captivating audiences worldwide. In June 2023, the top five most successful PC games (by increase in average concurrent users) on Steam that were made with Unity were Gunfire Reborn, Soulstone Survivors, Captain of Industry, 20 Minutes Till Dawn, and Sands of Salzaar.***

If you’re working on a game, we want to hear from you! You can submit your project on the Made with Unity page, and we’ll consider promoting it in our channels. And if you haven’t registered to receive the latest information about Unite 2023 in Amsterdam, you can do that now.

*As of June 2023. Source: Twitch API.
**As of June 2023. Source: Apptopia. Disclaimer: Based on a list of Apptopia’s 1,000 most downloaded games for the most recent month. “Successful” is measured here by total revenue across both the iOS and Android stores (IAP revenue + ad revenue + download revenue).
***As of 2023-07-06. Source: SteamSpy API. Disclaimer: Based on Steam’s top 1,000 games (ranked by concurrent user count) and filtered for games with over 500 concurrent users, measuring “successful” by percentage increase in average concurrent users for the month.

>access_file_
960|blog.unity.com

Made with Unity Monthly: June 2023 roundup

From Unity 2022 LTS to a new mixed reality package and AI beta announcements, June was a busy month for teams across all of Unity. We’re here to talk about you, though. Read on to discover what Unity creators have been up to, including the latest game releases made with Unity.

June had its fair share of milestones to celebrate for games made with Unity. Muro Studios released DOOMBLADE, where you play as a sentient weapon looking for revenge, and Mimimi Games revealed the release date for its swashbuckling stealth adventure game, Shadow Gambit. We also shared snekflat’s Tiny Terry’s Turbo Trip and a thread full of exciting demos from Steam Next Fest, featuring titles like Eternights and Sea of Stars. On Instagram, we celebrated Mooneaters’s release of its incredibly cozy game, Everdream Valley, as well as more8bit’s sharp-looking Bleak Sword DX (you can also check out our case study and blog to learn how the game was made). Last but not least, Studio UN JE NE SAIS QUOI’s absolutely gorgeous Dordogne took us to the French countryside.

Congrats are in order for the Unity creator teams that achieved new heights during AWE USA 2023. Seven of the 11 finalists for the 2023 XR Prize Challenge used Unity to create their projects, and an Auggie Award winner was also made with Unity. Unity’s Jessica Lindl presented the 2023 Auggie Award for Biggest Societal Impact during the ceremonies, saying, “This award aligns with our belief that the world is a better place with more creators in it, fostering a more sustainable and inclusive world.” Check out the made with Unity winners below.

Auggie Awards – Among Us VR, Schell Games, Innersloth, Robot Teddy (Best Game or Toy)
XR Prize Challenge – Between Two Worlds, Under The Skin X Virtualosus (Grand Prize)
XR Prize Challenge – Mangrove City, University of Miami (Best Educating About Solutions to Climate Change)

We share new game releases and milestone spotlights every Monday on the @UnityGames Twitter and @unitytechnologies Instagram. Be sure to give us a follow and support your fellow creators.

We love seeing how your projects evolve every month, and we’re super impressed by all the work being shared with the #MadeWithUnity hashtag. Here are a few June highlights. On Instagram, Makan Gilani’s adventurous robot was on a roll, and Krzysztof Maziarz had an adorable house tour in store. Onirism’s new ledge system also seemed like a lot of fun. Meanwhile, on Twitter, Goldborough Studio displayed some serious balance skills, while Jakob Wahlberg’s Mecha Ball animations were amazingly slick. And Filip Coulianos took us on a space ride, then MONKE added some adorable pigs to their minimalist city builder. We’re always here to continue the #MadeWithUnity love. Keep adding the hashtag to your posts to show us what you’ve been up to.

On June 8, we invited the community to talk to us about content builds, import workflows, the asset database, Addressables, and more as part of a Content Pipeline Dev Blitz Day. The event had 10 experts and several threads, generating great conversations throughout the day. Then, on June 22, we hosted a Multiplatform Dev Blitz Day, covering everything from XR and console to mobile and desktop. Held in both the forums and on the Discord server, the event saw 33 experts answering questions. Thank you to everyone who participated in either or both days. Keep an eye on Discord and our forums for future Dev Blitz Day announcements, and don’t forget to bookmark the archive of past Dev Blitz Days.
In case you missed it, we’ve also recently brought back in-person Developer Days, with events in Austin and Montreal.

June was incredibly busy on Twitch. Team Unity took to the channel to finish out the Scope Check Let’s Dev series, sharing two final streams: part seven on testing and part eight on tips for polishing projects. In addition to wrapping the Scope Check series, we took time for not one but three Creator Spotlights. First, we hosted a sit-down with Tuatara Games to discuss Bare Butt Boxing. Next, members of the Mooneaters team joined us to chat about their latest project, Everdream Valley. Last but not least, Double Stallion hopped on a live stream to talk CONVERGENCE: A League of Legends Story™. Finally, we hosted Xalavier Nelson Jr. to hear his Unity Tales and held a live discussion on the Unity 2022 LTS release. If you don’t already, follow us on Twitch today and hit the notification bell so you never miss a stream.

We’ve added a new category to the Asset Store – AI Verified Solutions. We’ve vetted nine solutions under three different classifications: Generative AI, AI/ML integration, and Behavior AI. If you’re ready to try them out, you can find them here for free.

Taking things to social media, here’s a roundup of some of our favorite creator showcases from Twitter in June:

Alien Planets Vol.4 | Red panda
Food – Breakfast Time | Boxx-Games Assets
Zombie Dog | Code This Lab

Looking to be noticed by the Asset Store team? Tag the @AssetStore Twitter account and use the #AssetStore hashtag when posting your latest creations.

Last but not least, here’s a non-exhaustive list of games made with Unity that were released in June. Do you see any that have already become favorites, or spy a title we’re missing? Share your thoughts in the forums.

Killer Frequency, Team17 Digital (June 1)
We Love Katamari REROLL+ Royal Reverie, MONKEYCRAFT Co. Ltd. (June 1)
Battle Talent, CyDream (June 1)
Pile Up!, Remoob (June 2 – early access)
Mask of the Rose, Failbetter Games (June 8)
Bleak Sword DX, more8bit (June 8)
Space Reign, Propulsive Games (June 12)
Dordogne, UN JE NE SAIS QUOI and UMANIMATION (June 13)
Oblivion Override, Humble Mill (June 13)
Fall of Porcupine, Critical Rabbit (June 15)
Pixel Ripped 1978, ARVORE Immersive Experiences (June 15)
BattleBit Remastered, SgtOkiDoki, Vilaskis, and TheLiquidHorse (June 15 – early access)
Mars First Logistics, Shape Shop (June 22)
The Bookwalker: Thief of Tales, DO MY BEST (June 22)
A Long Journey to an Uncertain End, Crispy Creative (June 28)

If you’re creating with Unity and haven’t seen your project in a monthly roundup yet, submit it here for the chance to be featured.

That’s a wrap for June. For more community news as it happens, follow us on social media: Twitter, Facebook, LinkedIn, Instagram, YouTube, or Twitch.

>access_file_