// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 69 of 85

[ 2020 ]

20 entries
1361|blog.unity.com

In parameters in Burst

The Unity Burst Compiler transforms your C# code into highly optimized machine code. One question we often get from our amazing forum users like @dreamingimlatios concerns in parameters to functions within Burst code: should developers use them, and where? We’ve put together this post to explain them in a bit more detail.

C# 7.2 introduced the in parameter modifier as a way to pass an argument by reference to a function that is not allowed to modify the data. In parameters are a really useful language concept because they enforce a contract between the developer and the compiler as to how data will be used and modified. The in modifier pairs up with the out parameter modifier (where parameters must be assigned by the function) and the ref parameter modifier (where parameter values may be modified).

Let's look at the following simple job. The code can be broken down into:

- Call the DoSomething method, which takes two structs passed by value.
- Perform some operation on the data, then return the result (the operation doesn’t really matter for the purposes of this demo).

Note that we’ve placed [MethodImpl(MethodImplOptions.NoInlining)] on the DoSomething method. We do this for two reasons:

- It lets us pinpoint the specific method in the resulting assembly using the Burst Inspector.
- It lets us simulate what would happen if DoSomething were really a very large function that Burst would not have inlined anyway.

Now if we pull up the Burst Inspector, we can begin to dive into what the compiler actually produces for this code. Note the assembly highlighted in the red box - this is the number of bytes of stack required by the function.
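The original code listing didn’t survive in this transcription, so here is a minimal sketch of the kind of job being described. The struct layout, field names, and the operation inside DoSomething are illustrative assumptions, not the post’s original code:

```csharp
using System.Runtime.CompilerServices;
using Unity.Burst;
using Unity.Jobs;

// Illustrative 64-byte struct - well over the ~16-byte limit past which
// most calling conventions pass a struct indirectly.
public struct SomeData
{
    public float A, B, C, D, E, F, G, H;
    public float I, J, K, L, M, N, O, P;
}

[BurstCompile]
public struct SimpleJob : IJob
{
    public SomeData InDataA;
    public SomeData InDataB;
    public float Result;

    // NoInlining pins the method in the Burst Inspector output and
    // simulates a function too large for Burst to inline anyway.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static float DoSomething(SomeData a, SomeData b)
    {
        // The actual operation doesn't matter for the demo.
        return a.A * b.A + a.P * b.P;
    }

    public void Execute()
    {
        Result = DoSomething(InDataA, InDataB);
    }
}
```

Both parameters are passed by value here, which is what forces the copies discussed below.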
And now the Execute method itself. Again, note the highlighted red rectangle: it is doing a bunch of copies between some memory address in the register rax and the stack in rsp. So why is it doing this, you might ask?

Welcome to the wonderful world of the ABI - the Application Binary Interface. Aeons and aeons ago, when computers were bigger than most modern houses, some smart computer people realised that if two different people were going to write programs such that code from both could be used together, they’d have to agree on the rules for doing so. When data is passed from a caller to a callee, both sides have to agree on where function parameters are located, so that the caller knows where to put the data and the callee knows where to retrieve it from. These rules are called calling conventions, and there are lots of weird and wonderful varieties. Each operating system tends to have a different convention - some have several - but what is important is that both sides follow the same rules, so that neither behaves in a way the other didn’t expect. Most calling conventions allow simple data (primitive types or small structs) to be passed by value in registers - the most efficient way to pass data. But large structs - anything more than about 16 bytes in size - will generally have to be passed indirectly.
If we look again at the simple job we showed above, we’ve now modified it to show you what the compiler has had to do to the code to conform to the ABI. The compiler has:

- Changed the arguments a and b to the ‘DoSomething’ function to be passed by reference instead.
- Added two new local variables, InDataACopy and InDataBCopy, in the Execute method.
- Copied the data from InDataA and InDataB into these variables.
- Called the DoSomething function, passing these local variables by reference.

Now if we look again at the Burst Inspector output, this is the assembly that the compiler-generated copies map to: we’re copying a bunch of data.

Now let’s instead look at the same example but using in parameters, starting with the stack allocation size of this new job. We can see that the stack size has shrunk to just 32 bytes from 192 previously. Next, let's look at the call to the ‘DoSomething’ function. The loads and stores we previously needed to make copies of ‘InDataA’ and ‘InDataB’ are now gone - because we’ve told the compiler that it doesn’t need them. Nice! Using in parameters here lets us tell the compiler how to do a better job at generating code, and if you imagine the ‘DoSomething’ method inside a really performance-sensitive inner loop, we’ve just cut a ton of instructions out of that code.

One slightly strange bit of C#’s in parameters is that you don’t have to explicitly mark the call site argument with in, like you would with ref. What happens behind the scenes is that the compiler will insert a local variable, store 42 into it, and then pass that by in for you. So even though we’ve added in to the function, we’re still getting the copy that we were trying to avoid. One case where this comes up is with NativeArray, whose indexer returns the T by value and not by reference.
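The in-parameter listing also isn’t preserved here; a hedged sketch of what it might look like (type and field names are illustrative assumptions):

```csharp
using System.Runtime.CompilerServices;
using Unity.Burst;
using Unity.Jobs;

// Illustrative large struct - over 16 bytes, so the ABI passes it indirectly.
public struct BigData
{
    public float A, B, C, D, E, F, G, H;
}

[BurstCompile]
public struct SimpleJobWithIn : IJob
{
    public BigData InDataA;
    public BigData InDataB;
    public float Result;

    [MethodImpl(MethodImplOptions.NoInlining)]
    static float DoSomething(in BigData a, in BigData b)
    {
        return a.A * b.A; // the operation itself doesn't matter
    }

    public void Execute()
    {
        // Explicit 'in' at the callsite: the fields are passed by reference,
        // with no hidden temporary copies on the stack.
        Result = DoSomething(in InDataA, in InDataB);
    }
}
```

Omitting in at the callsite still compiles, but the compiler is then free to materialize a hidden local copy and pass that instead - exactly the implicit-copy behaviour described next.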
We do this to avoid any dangling references to destroyed data in the NativeArray, and to ensure that no memory violations occur. Let’s add a variant of our job to explore this. In the new job we have:

- Changed it to be an IJobParallelFor.
- Made it run across arrays of data instead of a single element.
- Left the ‘DoSomething’ callsite without an explicit ‘in’, because the NativeArray indexer returns a value and not a reference.

Let’s look at the assembly as shown in the Burst Inspector. The highlighted region shows that the loads and stores we were previously avoiding by using in parameters have returned, and we’re now having to do them for every iteration of the loop - doh!

So how can we avoid this copy? By using a helper function as provided by UnsafeUtility. In this example we’ve added a new helper method, ‘GetElementAsRef’. It takes a native array and an index, and uses the ‘UnsafeUtility.ArrayElementAsRef’ helper to return a reference to the element rather than a copy. This code is unsafe because if you deleted the NativeArray, and thus freed the memory backing the allocation, referencing an element of the array would mean reading from dead, or potentially reused, memory. As long as you’ve taken this into account, we can now pass references into our native arrays explicitly by in to our ‘DoSomething’ method. Looking in the Burst Inspector once again, we can see that the loads and stores that took a copy of the data are gone, and we’re back to efficient and performant code - nice!

When the developers of C# announced in parameters, they wrote a blog post talking about the performance characteristics of using them.
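Before moving on: the ‘GetElementAsRef’ helper described above isn’t reproduced in this transcription. A sketch of how it might be written on top of UnsafeUtility.ArrayElementAsRef - the extension-method shape and pointer accessor are assumptions about the original:

```csharp
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;

public static class NativeArrayRefExtensions
{
    // Returns a reference to array[index] instead of a copy.
    // Unsafe: the reference dangles if the NativeArray is disposed.
    public static unsafe ref T GetElementAsRef<T>(this NativeArray<T> array, int index)
        where T : struct
    {
        return ref UnsafeUtility.ArrayElementAsRef<T>(
            array.GetUnsafeReadOnlyPtr(), index);
    }
}
```

With this in place, a callsite can write DoSomething(in data.GetElementAsRef(i), ...) and the reference flows straight through to the in parameter with no copy.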
One line I’ll quote from the post is: ‘It means that you should never pass a non-readonly struct as an in parameter.’ The reason for this advice is that if you call an instance method on such a struct, the compiler may have to generate a copy of the in parameter, in case the instance method modifies its state. Let's look at an example of this.

In the example we’re passing a ‘SomeStruct’ as an in parameter to ‘SomeMethod’, and then calling an instance method on the struct. The C# compiler will notice this and generate a defensive copy of ‘s’ in ‘SomeMethod’: in the IL generated by the compiler, we can see a ldobj and a stloc.0 that take a copy of the in parameter. In nearly all cases, though, as long as the instance method does not modify the state of the struct, Burst can deduce this and remove the defensive copy. Because the instance method did not modify the in parameter’s data, Burst has completely removed the copy. So although the general advice for C# code might be to only use in parameters with readonly structs, in Bursted HPC# you should be fine as long as you do not store into the in parameter’s data.

In parameters are a really powerful and useful language construct that provides a contract between developers and the compiler - a contract that lets you get optimal performance.
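A minimal reconstruction of the kind of example being discussed - the names SomeStruct and SomeMethod follow the post, but the bodies are illustrative:

```csharp
public struct SomeStruct
{
    public int Value;

    // A non-readonly instance method: the C# compiler must assume
    // it could mutate the struct it is called on.
    public int GetValue() { return Value; }
}

public static class Demo
{
    public static int SomeMethod(in SomeStruct s)
    {
        // Calling an instance method on an 'in' parameter makes the C#
        // compiler emit a defensive copy (the ldobj/stloc.0 in the IL).
        // Burst can usually prove GetValue doesn't write to 's' and
        // remove that copy again.
        return s.GetValue();
    }
}
```

Declaring the type as a readonly struct (or the method as readonly, from C# 8 onward) tells the C# compiler itself that no defensive copy is needed, which is why the official guidance favors readonly structs for in parameters.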
As we’ve explored in this blog post:

- If you have non-inlined functions that take large structs by value, making these ‘in’ parameters instead can lead to performance gains.
- Be careful that the data at the callsite can be passed by ‘in’ without resulting in copies - explicitly using the ‘in’ modifier at the callsite will let the compiler tell you whether this is the case.
- Using the Burst Inspector like we’ve shown here can give you tremendous insight into your code - please use it!

If you haven’t started with Burst yet and would like to learn more about our work on the new Data-Oriented Technology Stack (DOTS), head over to our DOTS pages, where we will be adding more learning resources and links to talks from our teams as they become available. We always welcome your feedback - join the forum to let us know how we can help you level up your Burst code in future.

>access_file_
1362|blog.unity.com

Robotics simulation in Unity is as easy as 1, 2, 3

Robot development workflows rely on simulation for testing and training, and we want to show you how roboticists can use Unity for robotics simulation. In this first blog post of a new series, we describe a common robotics development workflow. Plus, we introduce a new set of tools that make robotics simulation in Unity faster, more effective, and easier than ever.

Because it is costly and time-consuming to develop and test applications using a real robot, simulation is becoming an increasingly important part of robotic application development. Validating the application in simulation before deploying to the robot can shorten iteration time by revealing potential issues early. Simulating also makes it easier to test edge cases or scenarios that may be too dangerous to test in the real world.

Key elements of effective robotics simulation include the robot’s physical attributes, the scene or environment where the robot operates, and the software that runs on the robot in the real world. Ensuring that these three elements in the simulation are as close as possible to the real world is vital for valid testing and training.

One of the most common frameworks for robot software development is the Robot Operating System (ROS). It provides standard formats for robot descriptions, messages, and data types used by thousands of roboticists worldwide, for use cases as varied as industrial assembly, autonomous vehicles, and even entertainment. A vibrant user community contributes many open source packages for common functionalities that can bootstrap the development of new systems.

Roboticists often architect a robot application as a modular set of ROS nodes that can be deployed both to real robots and to computers that interface with simulators. In a simulation, developers build a virtual world that mirrors the real robot’s target use case.
By testing in this simulated ecosystem, users can iterate on designs quickly before testing in the real world and ultimately deploying to production.

A common robotics development workflow, where testing in simulation happens before real-world testing

This blog post uses the example of a simple pick-and-place manipulation task to illustrate how users can leverage Unity for this simulation workflow. Following the above workflow, let’s say that our robot’s task is to pick up an object and place it in a given location. The six-axis Niryo One educational robot serves as the robot arm. The environment is minimal: an empty room, a table on which the robot sits, and a cube (i.e., the target object). To accomplish the motion-planning portion of the task, we use a popular set of motion-planning ROS packages collectively called MoveIt. When we are ready to start the task, we send a planning request from the simulator to MoveIt. The request contains the poses of all the robot’s joints, the cube’s pose, and the target position of the cube. MoveIt then computes a motion plan and sends this plan back to the simulator for execution.

Now that we’ve set up the problem, let’s walk through how to use Unity in this simulation workflow. A robotics simulation consists of setting up a virtual environment — a basic room, as in this example, or something more complex, like a factory floor with conveyor belts, bins, tools, and parts — and adding to this environment a virtual representation of the robot to be trained or tested. The Unity Editor can be used to create endless permutations of virtual environments. But how can we bring our robots into these environments?

When modeling a robot in simulation, we need to represent its visual meshes, collision meshes, and physical properties. The visual meshes are required to render the robot realistically.
Collision meshes are required to calculate collisions between the robot’s “links” — the rigid members that connect joints — and other objects in the environment, as well as with itself. These meshes are typically less complex than visual meshes to allow faster collision-checking, which can be compute-intensive. Finally, the physical properties, like inertia, contact coefficients, and joint dynamics, are required for accurate physics simulation — that is, for computing how forces on the links result in changes to the robot state, e.g., pose, velocity, or acceleration.

Lucky for us, when using the ROS development workflow, there is a standardized way of describing all these properties: the Universal Robot Description Format (URDF). URDF files are XML files that allow us to specify these visual, collision, and physical properties in a human-readable markup language. URDF files can also include mesh files for specifying complex geometries. The example below shows an excerpt from the URDF file for the Niryo One robot.

URDF of Niryo One robot

To make it easier for roboticists to import their robots into Unity, we’re releasing URDF Importer, an open-source Unity package for importing a robot into a Unity scene using its URDF file. This package takes advantage of our new support for “articulations” in Unity, made possible by improvements in PhysX 4.1. This update allows us to accurately model the physical characteristics of a robot to achieve more realistic kinematic simulations.

When installed in the Unity Editor, this package allows the user to select a URDF file to import. It parses the XML file behind the scenes and stores the links and joints in the appropriate C# classes. It then creates a hierarchy of GameObjects, where each GameObject carries an ArticulationBody component representing a particular link in the robot, and assigns properties from the URDF to the corresponding fields of the ArticulationBody.
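The URDF excerpt itself wasn’t preserved in this transcription. A representative fragment of what a link and joint definition looks like in URDF — element values here are illustrative, not taken from the actual Niryo One file:

```xml
<robot name="niryo_one">
  <link name="base_link">
    <visual>
      <geometry><mesh filename="package://niryo_one/meshes/base_link.stl"/></geometry>
    </visual>
    <collision>
      <!-- Simpler geometry than the visual mesh, for faster collision checks -->
      <geometry><cylinder radius="0.09" length="0.10"/></geometry>
    </collision>
    <inertial>
      <mass value="1.0"/>
      <inertia ixx="0.004" ixy="0" ixz="0" iyy="0.004" iyz="0" izz="0.007"/>
    </inertial>
  </link>
  <joint name="joint_1" type="revolute">
    <parent link="base_link"/>
    <child link="shoulder_link"/>
    <axis xyz="0 0 1"/>
    <limit lower="-3.05" upper="3.05" effort="1.0" velocity="1.0"/>
  </joint>
</robot>
```

Each link bundles the visual, collision, and inertial properties discussed above, while joints connect links and constrain their motion.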
When users add a robot to Unity, the URDF Importer automatically creates a rudimentary keyboard joint controller. Users can replace this controller with a custom controller using the ArticulationBody APIs. For example, here is the Niryo One Unity asset, created after importing the URDF file above.

A virtual Niryo One robot in Unity, imported via URDF Importer

Now that the robot is in the Unity Editor, we can test our motion-planning algorithm, running in a set of ROS nodes. To support this, we need to set up a communication interface between Unity and ROS. Unity needs to pass messages to ROS that contain state information — namely, the poses of the robot, target object, and target location — along with a planning request to the mover service. In turn, ROS needs to return a trajectory message to Unity corresponding to the motion plan (i.e., the sequence of joint positions required to complete the pick-and-place task).

Two new ROS–Unity Integration packages now make it easy to connect Unity and ROS. These packages allow ROS messages to be passed between ROS nodes and Unity with low latency; when tested on a single machine, a simple text-based message made the trip from Unity to a ROS subscriber in milliseconds, and a 1036 x 1698 image in a few hundred milliseconds.

Since communication in ROS uses a pub/sub model, the first requirement for ROS–Unity communication is classes in Unity corresponding to ROS message types. When users add the ROS-TCP-Connector Unity package to the Unity Editor, they can use the MessageGeneration plugin to generate C# classes, including serialization and deserialization functions, from ROS .msg and .srv files. The ROS-TCP-Connector package also includes scripts that the user can extend to publish messages from Unity to a ROS topic, subscribe in Unity to messages on a ROS topic, and create ROS service requests and responses.
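As a sketch of what the publishing side can look like with ROS-TCP-Connector — the topic name, message type, and publish-every-frame cadence are illustrative choices, and the API has changed across package versions, so treat this as an outline rather than the tutorial’s code:

```csharp
using RosMessageTypes.Geometry;
using Unity.Robotics.ROSTCPConnector;
using UnityEngine;

// Publishes this GameObject's pose to a ROS topic over TCP.
public class CubePosePublisher : MonoBehaviour
{
    const string Topic = "/cube_pose"; // illustrative topic name

    ROSConnection ros;

    void Start()
    {
        ros = ROSConnection.GetOrCreateInstance();
        ros.RegisterPublisher<PoseMsg>(Topic);
    }

    void Update()
    {
        // Coordinate-frame conversion between Unity (left-handed, Y-up) and
        // ROS (right-handed, Z-up) is omitted here; the ROSGeometry helpers
        // in the package handle that mapping.
        var msg = new PoseMsg
        {
            position = new PointMsg(transform.position.x,
                                    transform.position.y,
                                    transform.position.z),
            orientation = new QuaternionMsg(transform.rotation.x,
                                            transform.rotation.y,
                                            transform.rotation.z,
                                            transform.rotation.w)
        };
        ros.Publish(Topic, msg);
    }
}
```

On the ROS side, a ROS-TCP-Endpoint node subscribed to the same topic receives these messages.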
On the ROS side, a ROS package called ROS-TCP-Endpoint can create an endpoint to enable communication between ROS nodes and a Unity Scene using these ROS-TCP-Connector scripts.

Let’s now take a look at how to use these ROS–Unity Integration packages for the task at hand. First, we use them to create a publisher in Unity that sends the pose data to ROS over TCP. On the ROS side, we set up a ROS-TCP-Endpoint to subscribe to these pose messages. Next, we create a “Publish” button in the Unity Scene along with an OnClick callback. This callback function makes a service request to the MoveIt motion planner. The service request includes the current pose of the robot, the pose of the target object, and the target location. When MoveIt receives the planning request, it attempts to compute a motion plan. If successful, the service returns the plan, i.e., a sequence of joint positions, and a Unity script executes the trajectory using the ArticulationBody APIs. Otherwise, it returns a failure message.

The gif below shows a Unity simulation of the Niryo One arm successfully performing the pick-and-place task.

Simulation of a pick-and-place task on a Niryo One robot in Unity using ROS and MoveIt for motion planning

This example is only the beginning. Developers can use this demo as a foundation on which to create more complex Unity Scenes, to add different robots, and to integrate other ROS packages. Stay tuned for future posts that cover integrating computer vision and machine-learning-oriented tasks into a robotics simulation framework.

These tools lay the groundwork for a new generation of testing and training in simulation and make it easier than ever to use Unity for robotics simulation. Our team is hard at work enabling these next-generation use cases, including machine-learning training for robotics, sensor modeling, testing at scale, and more.
Stay tuned for our next blog post in this series, which will show you how to train a vision-based machine-learning model to estimate the target object’s pose in the pick-and-place task.

Get started with our robotics simulation tools for free. Check out our pick-and-place tutorial on GitHub. For more robotics projects, visit the Unity Robotics Hub on GitHub. To see how our team is making it easier to train computer vision systems using Unity, read our computer vision blog series. For more information on how Unity can be used to meet your robotics simulation needs, visit our official robotics page. If you’d like to contact our team directly with questions, feedback, or suggestions, email us at unity-robotics@unity3d.com.

>access_file_
1363|blog.unity.com

3 tips for scaling UA with ironSource's ROAS optimizer

Many studios have used ironSource’s ROAS optimizer to scale their UA. That’s because this tool automates and simplifies bid optimization for advertisers so they can save time, bid as accurately as possible, drive more installs, and maximize return on ad spend (ROAS) - all while getting full visibility into performance. The ROAS optimizer can benefit advertising campaigns for IAP and ad-based games alike, and there are a few tips and tricks that can help you scale and reach revenue goals more effectively.

1. Select the right optimizer for your game

There are three types of ROAS optimizers that can help you reach these goals: impression level revenue (ILR), in-app purchase (IAP), and combo (a mix of ILR and IAP). Each uses different types of revenue data, needs different durations for optimization, and has unique defining features. Games with a short LTV, lower bids, and high IPM include genres like hyper-casual and arcade, which are also the games that usually rely heavily on ads for revenue. The ILR optimizer shares these defining features. Meanwhile, the IAP optimizer shares its defining features - long LTV, high bids, and low IPM - with games that rely more on in-app purchases for revenue, like those in the RPG and simulation genres.

To know which is the right choice for you, start by considering your revenue breakdown and each optimizer’s prerequisites. The ILR optimizer is best if 90% or more of your app’s revenue comes from ads. Note, though, that this optimizer does require you to be on LevelPlay mediation. Choose the IAP optimizer if more than 90% of your revenue comes from in-app purchases. While the IAP optimizer doesn’t require you to be on LevelPlay mediation, it does require you to share post-install events (PIE). If your app’s revenue doesn’t meet the 90% threshold for either IAPs or ILR, go for the combo optimizer.
Since the combo tool combines the features of the two other optimizers, it needs both PIEs and information from LevelPlay mediation to have all of the relevant data and optimize your campaign to the fullest extent.

Neon Play shows the benefits of the different types of optimizers with their game Idle Army - they switched from the IAP optimizer to combo after two months to build upon their success. As a result of the switch, and with ironSource’s guidance, their scale grew 350% and ARPU on D10 increased 300%. Mark Allen, Neon Play’s Director of Games, describes how ironSource helped them achieve this scale: “They asked us about our k-factors, the ROAS goals, and then with that they set up the optimizer on our behalf and we kicked it off. And to be honest it's been working really well since."

2. Pay attention to the right metrics

Looking at the right KPIs is important for measuring the success of the ROAS optimizer. As the name implies, ROAS is the most important metric for determining performance - not CPI or retention, the traditional ways for advertisers to measure how effective their bids and ad creatives are. ROAS is the key metric for the optimizer because it closes the monetization loop - when profit increases, you can scale more revenue quickly and easily. However, you can also measure KPIs like scale and purchase rate, as these are often boosted by the tool as well.

When choosing a ROAS goal, it’s necessary to consider a holistic set of data, including LTV, organic uplift, and the ARPU curve. Then determine your margin, which ideally results in more profit at higher scale - increasing margins can lower CPI but also inhibit scale, while lowering margins can increase costs but lead to greater scale. It’s important to also factor in granular data, like geos and device type, which can affect LTV.

The team behind a popular simulation game initially focused on scale before using the ILR optimizer.
When they decided that ROAS was a better metric for measuring revenue, they made it their primary KPI and worked with the ironSource team to set up the ROAS optimizer. Using the tool, they exceeded their ROAS goals, doubled scale - reaching as high as 3.5x more installs than before they began using the ILR optimizer - and were able to buy better-quality traffic than on any other SDK network they were advertising with.

3. Set up and analyze the right data

Before launching the ROAS optimizer, it’s important to have enough data to let the tool do its job. The optimizer performs best when it has historical data to analyze, so it can optimize bids more quickly and react faster when you add new sources or make campaign adjustments. After turning on the ROAS optimizer, looking only at matured and aggregated data is a must for measuring performance. Giving data time to mature avoids incorrect conclusions about the tool’s performance and ROAS goals - it can take 3-7 days until data has matured and results show, depending on the type of optimizer. For example, if you set a ROAS goal for D7, it doesn’t make sense to look at the data of a user who downloaded the game 5 days ago: that user can still impact total ROAS within the next 2 days of using the app. Therefore, you should look only at the data of users who have engaged with the app for at least as long as the optimization age.

And regarding aggregated data, it’s not helpful to look at a day-to-day view of performance, because ARPU will vary greatly even over the course of one day. Instead, looking on a week-by-week basis gives a more stable, accurate view of performance and the effect of the optimizer.

Maintaining ROAS optimizer performance

Once you’ve set up your optimizer - whether it’s the impression level revenue, IAP, or combo - you can keep it performing at its best by adjusting ROAS goals and re-optimizing.
Holidays, changes in ARPU trends or whale behavior, and game fatigue can all impact campaign performance and should be accounted for when setting new ROAS goals. We suggest updating your KPIs once every two weeks so you have at least 7 days of matured data to use as a benchmark.

Learn more about the ironSource ROAS optimizer or get in touch with us to activate it.

>access_file_
1364|blog.unity.com

Ringing the NYSE bell together: How Unity put the “U” in UPO

What happens when real-time 3D technology meets one of the oldest time-honored traditions in public stock exchanges? Learn how Unity transformed an initial public offering (IPO) – typically attended only by a company’s executives – into an employee-centered virtual event celebrated by thousands.

When Unity decided to become a public company, it was on one condition: it would be vastly different from a traditional IPO. Basing this approach on two of Unity’s core values – Go Bold and In It Together – we partnered with the New York Stock Exchange (NYSE) to build a completely different opening ceremony.

The NYSE is a fixture in the world of public financial markets and, much like many other organizations, had to take a different approach to a historically traditional process and ceremony. Our 2020 IPO was an ideal opportunity to bridge the old with the new. Traditional IPOs typically follow a number of timed events, such as the unveiling of the company’s banner on the NYSE facade, the bell ringing promptly at 9:30 am from the balcony, the physical signing of the book (a 150-year-old tradition), and a variety of meetings and interviews. The rest is up to the company to make its own.

What was clear from the beginning of our IPO process was that all employees would have to be involved on the morning of the event. All 3,000-plus of them. By engineering the event according to our company values, we could create a “Unity” Public Offering, or UPO. But what kind of real-time, web-based system would we have to build to make it happen?

To start, we brought in teams from two recent acquisitions, Furioos and Finger Food Advanced Technology, to brainstorm. And then we brainstormed some more, filling up virtual whiteboard after virtual whiteboard. Our big challenge: how could we enable thousands of people to ring a virtual bell and sign a virtual wall at the same time?
Oh, and to further challenge ourselves – we needed to integrate NYSE live coverage with employee videos (filmed at home and on different devices) and curated Unity creator content as part of the event. Another Unity core value is Best Ideas Win, and we put it to the test.

With sleeves rolled up, we divided the development across multiple teams, both within Unity and at the NYSE. Since accessibility was critical, we set up a unique event website for everyone to easily reach and participate in the experience when the time came. Next, we tapped into the power of Furioos – a cloud-based streaming technology – to bring the same experience to all employees regardless of whether they were on a tablet, laptop, or desktop, including older devices (Furioos runs 3D apps in the cloud and treats them as if they were on your local device).

One of the big processing issues we had to resolve was that we needed to pass a lot of information via the web page (such as user authentication, environment data, and device capabilities) to the Unity app handling the content. Fortunately, the Furioos SDK simplified this, so we had more time to focus on perfecting the user experience.

To ensure full employee inclusion and participation, several weeks before the UPO we encouraged all Unity staff to contribute a short testimonial video about what Unity means to them. We received hundreds from around the world, giving voice to a large cross-section of employees, some of them sharing the Unity philosophy – “We believe the world is a better place with more creators in it” – in their native languages. We planned to integrate their contributions into a 3D video wall during the countdown to the bell-ringing ceremony.

As part of the event, we also needed to integrate a variety of real-time broadcast streams, such as interviews with Unity staff, panel discussions, and a dynamic feed of many Made with Unity projects to show how Unity creators are making a difference in multiple industries and initiatives.
The goal was to curate an hour-long production of stories, interviews, and key moments to give the look and feel of an in-person event, all from employee homes around the world. With all of this rich content to be handled, it made perfect sense to build and render as much of it as possible using the Unity platform. For example, we used Unity to highlight video moments such as the raising of the NYSE banner and the transition to the virtual bell-ringing on the balcony. To add even more polish to these segments, we tapped Unity’s particle effects capabilities to represent the creators who help make Unity possible.

Working closely with the NYSE, Vimeo, and our Furioos team, we enabled the virtual bell at precisely 9:29:50 am ET on September 18, and our CEO, John Riccitiello, invited all employees, shareholders, and guests to ring the bell together at the same moment. It was also then that we encouraged participants to mark or sign their name on the virtual wall. This was an overlay we displayed just below the broadcast, with details to help set the context, thus combining the NYSE’s longstanding signature tradition with an innovation that allowed everyone at Unity to make their mark.

In typical Unity fashion, we didn’t settle for what others had done before us. With ingenuity and creativity, the UPO team redefined what it means to IPO, setting a new standard for other companies to follow. At its core, the experience was meant to focus on what’s most important: the employees who are the lifeblood of any organization. Unity’s UPO was the first all-digital IPO hosted on the beta version of our new employee-centered, interactive platform, and it created two firsts for the NYSE: the first entirely remote bell-ringing to open the market, and the first time an entire company was able to participate in the event. “In It Together” certainly rang true here!

Learn more about Furioos

>access_file_
1366|blog.unity.com

What kinds of games work best with an offerwall?

There’s a lot to like about offerwalls: an eCPM as high as $1,500, 5x-7x higher retention, and ARPDAU up to $0.45 are just a few reasons why it’s an appealing ad monetization strategy. And the offerwall can benefit your app’s entire monetization ecosystem, from boosting IAPs to increasing ad engagement rates. There are certain instances where the offerwall is more effective, so let’s see what kinds of games the offerwall performs best with, to help determine if it’s the right ad unit for you.

Offerwall benchmarks in the US:

Does your app have a deep in-app purchase economy?

An offerwall gives users rewards for completing tasks, and in order for this to work, the reward needs to be appealing. Games with a complex economy that includes hard currencies - currencies that are difficult to accumulate and are often paid for using real money - do well with an offerwall because their rewards are highly valuable to users. The types of games that usually have these deep in-app purchase economies tend to be those with long-term retention and high engagement, like RPG, simulation, and strategy.

With a deep app economy that lets you offer hard currencies as rewards, offerwalls can encourage users to make in-app purchases. In fact, offerwall users are 10-14x more likely to make an in-app purchase, because as they earn rewards, they get to enjoy premium resources that often improve gameplay or lead to advantages they want to maintain as they keep playing.

In a webinar with game developer Kongregate about their experience using the ironSource offerwall, they discussed how integrating the offerwall across a dozen different titles led to an increase in IAPs from both paying and non-paying users.
In 2019, the offerwall made up 19% of their total ad revenue, lifted IAP revenue from non-paying users by 10x-14x, and significantly increased the ARPU of paying users.

Can your product team implement the offerwall creatively?

Engagement rates tend to be high with offerwalls, so developers are able to achieve a high eCPM and boost revenue when the offerwall is optimized. Take the fact that the offerwall eCPM for Android in the US can be more than $1,500 and ARPDAU can be higher than with rewarded video:

Games that can get creative with the placement, timing, user segmentation, and promotion of offerwalls can achieve these boosts to eCPM and ARPDAU. For example, product teams on RPG games can adjust the traffic driver icon for special promotions, otherwise known as currency sales, and they can segment users by their progression in the game and offer higher rewards to experienced players. These creative approaches help optimize the offerwall and maximize revenue without affecting the user experience.

DomiNations is an MMO combat strategy game that shows how a creative strategy using offerwall promotions can drive up ARPDAU and eCPM while giving other monetization strategies a boost. They previously ran 2x credit promotions on their offerwall and then increased the payout to 4x during the US holidays. These offerwall promotions increased user engagement and subsequently led to a 50% increase in rewarded video revenue, 3x-5x higher ARPDAU, and a 50%-200% boost to eCPM.

Do users tend to stick around for a long time?

There’s a good chance that users who engage with an offerwall once will continue to use it as they earn rewards that encourage them to keep playing. The longer they play, the higher the retention rates, and offerwall user retention is 5x-7x higher than that of non-offerwall users at D7, D14, and D30:

If your game has long-term retention already, an offerwall can give you an added boost that enhances your overall monetization strategy.
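For readers new to these metrics, the way eCPM, impressions, revenue, and ARPDAU relate can be shown with a quick, illustrative calculation. The app size and impression counts below are made up for the example and are not ironSource benchmarks:

```python
def daily_ad_revenue(impressions: int, ecpm_usd: float) -> float:
    """eCPM is effective revenue per 1,000 ad impressions."""
    return impressions / 1000 * ecpm_usd

def arpdau(total_daily_revenue_usd: float, dau: int) -> float:
    """Average revenue per daily active user."""
    return total_daily_revenue_usd / dau

# Hypothetical app: 10,000 DAU, 2,000 offerwall completions at a $1,500 eCPM.
revenue = daily_ad_revenue(impressions=2000, ecpm_usd=1500.0)
print(revenue)                    # 3000.0 (USD per day)
print(arpdau(revenue, dau=10000)) # 0.3 (USD per daily active user)
```

The same two formulas are how the benchmarks quoted in this post (eCPM above $1,500, ARPDAU up to $0.45) translate into actual revenue for a given audience size.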
As users play your game for longer, you have more monetization opportunities, like showing rewarded videos.

Square Enix, the developers behind the simulation game WAR OF THE VISIONS FINAL FANTASY BRAVE EXVIUS, experienced the benefits of the offerwall for retention and engagement, which led to higher incremental ad revenue and eCPM. They had already implemented rewarded video and had a deep in-app purchase economy - then they implemented the ironSource offerwall. The offerwall helped preserve IAP rates while boosting ad revenue 2x, increasing ARPDAU 5x, and achieving an eCPM 2x the industry benchmark, as users played the game longer and engaged with more rewarded videos.

Can you check all of these boxes?

If your game and team fulfill the criteria we just laid out, an offerwall could be a great fit with your existing monetization strategy. Driving incremental ad revenue and boosting in-app purchase rates - all while keeping users playing your game - are major benefits this ad unit offers to the right kinds of games. So take a second look at your app economy, consider your team’s creative strategies for implementation, and check your retention rates - is an offerwall right for your game?

For further reading about the offerwall, check out our offerwall eBook. Learn more by checking out our other blogs, including ‘Mobile Gaming Industry Trends in 2021’, ‘What Is K Factor?’, and ‘Best Practices for Idle Game Design & Monetization’.

>access_file_
1367|blog.unity.com

How Sounds Hannam used Unity to turn a complex cultural space into an interactive 3D environment

The Sounds Hannam Project is an architectural visualization project that digitally recreated Sounds Hannam, a complex cultural space in Hannam-dong, Yongsan-gu, Seoul, using the High Definition Render Pipeline (HDRP) in Unity. This project offers interactive content that lets you virtually explore the Sounds Hannam cultural space by navigating with a mouse and a keyboard. You can click on a store in the building that interests you to see the shortest path to it from your current location. A pop-up window will also display some details about the store you just clicked.

This project utilized Unity’s HDRP, which allows for creating high-quality visuals, and Shader Graph, which lets you build shaders visually by using nodes instead of writing code. HDRP, Shader Graph, and other cutting-edge technology from Unity made it possible to present details such as internal lighting and natural sunlight overlapping in certain spots, external foliage swaying in the wind, and the glimmer of leaves reflecting light.

Unity ArtEngine, the AI-powered tool that enables you to create ultra-realistic worlds, helped speed up the realistic visualization of the Sounds Hannam design by processing the texture of the buildings’ outer walls. Using ArtEngine’s Seam Removal function on real photos helped remove possible seams when processing textures, and the Content-Aware Fill function helped fill in unnecessary, damaged, or missing parts of the scans. There are many other helpful features for creating textures from photographed images, such as enhancing pixel data that was lost during JPEG file compression.

PBR Material Generation was used on the external walls to calculate the albedo, normal, roughness, glossiness, height, and so on from standard photographs, creating PBR material that dynamically reacted to the lighting.

The Sounds Hannam Project’s beautification work can be largely divided into the buildings’ external appearance, the stores inside, and the interactive elements.
The outside appearance was created by first modeling the overall construction. Then, landscaping to fill empty spaces and props such as tables, chairs, and umbrellas were created, while surrounding buildings were included as opaque virtual structures.

The stores inside the buildings and the interior design were realized in a more limited way, as the Sounds Hannam Revit data didn’t include the indoor information. The interior design of the external stores in the basement, 1st and 2nd floors, and the space above the 2nd floor were presented as scenes seen through glass windows.

The way the view changes based on the lighting is one of the highlights of the Sounds Hannam Project. You can see how the buildings’ exterior changes under various lighting conditions as you switch the project’s mode between day and night.

When we first received the Revit data, we thought it would be fun to include all the various details of the buildings as interactive elements. Because we couldn’t use the Revit data directly, most of it was dummy data, but we still included it so each store would display relevant BIM information.

As the project allows for various types of interactions, we included a navigation feature that shows the shortest possible path to a selected store, a pop-up feature that shows details about the stores inside Sounds Hannam, and a color-changing feature for the props.

The navigation feature was inspired by my impression, during my first visit to the physical building, that the stores were hiding here and there. Like GPS in a car, this feature helps you take a virtual tour of the stores inside. It was developed to make it easy to find stores when adding interactive elements to the project.

One of the previous works by the Korean office of Unity Technologies was the Seoul Office Project. That project received a lot of attention for realistically portraying the actual Seoul office.
It had many things in common with Sounds Hannam, as well as some differences. The overall workflow was similar, since both projects were architectural visualization projects and walkthroughs (meaning an examination of the plan to detect and correct errors in the design). We hoped to make the UI and interactive parts of Sounds Hannam different from those of the Seoul Office Project. While the Seoul Office Project lets you experience going from one room to another in the building, Sounds Hannam took advantage of having access to the full models of the buildings. This allowed us to create a vivid experience, as if you’re looking at real buildings, by accurately depicting the exact location of stores within them.

Seoul Office and Sounds Hannam were very different in terms of the space covered and the original data received. The raw data of the Seoul Office had been provided by an interior design company, so it included a detailed SketchUp model of the interior design. On the other hand, the raw data of Sounds Hannam had been provided by the architecture firm and included various information for a total of five buildings in Revit, but the models themselves were relatively simple.

Unity Engine’s strength lies in the fact that it allows for high-quality 3D visualization with simple settings and controls. This advantage is expected to help promote built structures in the architectural industry, increase the accuracy of designs, and enable you to test and develop experimental concepts. Creating visual materials for walkthroughs and presentations lets you interact with what the actual structure would look like from various angles, making it possible to identify and correct flaws in the design in a timely manner.
Moreover, the virtual tour lets you quickly adapt to the client’s needs and demands in the design and even test those changes, which can help support the client’s decision making.

“This project, done with Unity, allowed us to present Sounds Hannam to our customers online and create various interactive content. I believe this will open new doors to introduce the beautiful and sensational spots of Sounds Hannam in an innovative and effective way.” - Kim Jeong-Ho, CEO of Sounds Hannam

>access_file_
1368|blog.unity.com

Tips for working more effectively with the Asset Database

The Asset Import Pipeline v2 is the new default asset pipeline in Unity, replacing the Asset Import Pipeline v1 with a rewritten asset database to enable fast platform switching. It lays the foundations for scalable asset imports with robust dependency tracking. Read on to explore how the Asset Database works and discover some time-saving tips.

Since the release of Unity 2019.3, the new Asset Import Pipeline has been the default pipeline for new projects. Combined with many other improvements, this lays the foundation for a more reliable, performant, and scalable Asset Import Pipeline. This rewrite changed the way the Library folder works in order to support new workflows. Let’s have a look under the hood at a number of situations you may encounter and how to manage them efficiently. You’ll learn how to spot and address some of the bottlenecks that cost you time and project performance.

The tips you’ll find here apply to Unity 2019.3 and later versions. Remember, if your project is in production beyond prototyping, for maximum stability we recommend you use the latest Long-Term Support (LTS) version of Unity, Unity 2019 LTS.

First, let’s talk about some of the things that happen when you’re creating a new project or opening a project where your asset Library isn’t already present. If you look at the Editor.log file, you’ll notice a lot of lines that look like:

Start Importing …
Done Importing …

This is the way the Asset Import Pipeline leaves a trail of its operations so that they can be inspected at a later point in time. You can use this information to figure out a certain type of bottleneck, namely asset import times. When you look at the output from each line, you can extract the following pieces of data:

- Asset path
- Import duration
- File extension

Parsing that data, we can then categorize by extension. Once you know which importer has imported which asset, you can aggregate that data into a pie chart showing you which types of asset take the longest to import.
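The parsing step described above can be sketched in a few lines of Python. This is a minimal, illustrative version of what the SimpleEditorLogParser sample does; the exact log-line format here is an assumption modeled on the import messages, so adjust the regular expression to match your Unity version’s Editor.log:

```python
import re
from collections import defaultdict

# Assumed log-line shape; real Editor.log lines may differ slightly.
LINE = re.compile(r"Done importing asset: '(?P<path>[^']+)' .* in (?P<secs>[\d.]+) seconds")

def import_times_by_extension(log_lines):
    """Sum import durations per file extension."""
    totals = defaultdict(float)
    for line in log_lines:
        m = LINE.search(line)
        if m:
            ext = m.group("path").rsplit(".", 1)[-1].lower()
            totals[ext] += float(m.group("secs"))
    return dict(totals)

sample = [
    "Done importing asset: 'Assets/Textures/rock.png' ... in 1.25 seconds",
    "Done importing asset: 'Assets/Models/tree.fbx' ... in 2.50 seconds",
    "Done importing asset: 'Assets/Textures/moss.png' ... in 0.75 seconds",
]
print(import_times_by_extension(sample))  # → {'png': 2.0, 'fbx': 2.5}
```

Feed the resulting totals into any charting tool to get the per-category pie chart the post describes.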
This data can give you a clearer picture of where the bottlenecks for your project are. In this particular example, Textures, Models, and Prefabs are the biggest time sinks, providing a starting point for investigating which assets could be optimized. Download this SimpleEditorLogParser sample project and use it as a base for your own parser.

Being able to see which asset category takes the longest to import can help you plan where to direct your optimization efforts. If texture importing is the category that takes the longest, then examine the textures that take the longest to import and consider removing textures that aren’t supposed to end up in the final build. This will not only speed up your import times, but also improve the performance of your continuous integration pipeline if you’re doing clean nightly builds or something similar.

Being able to open your projects quickly is important. The minutes it takes to restart the Editor or open various projects throughout the day can add up to valuable time lost. As a project becomes more complex and uses more features, it takes longer to open. This can be due to a large number of factors, and prior to Unity 2019.3, there was no way to get the profiler up and running while the Editor launched.

Among the several command line arguments you can supply when opening Unity, the -profiler-enable argument allows you to profile the Editor during launch. This tells the Editor to begin recording profiling data for the first frame of the application, which is all the code that is executed until the Editor is visible. Using this argument can help you see what happens during startup and what takes time: whether it’s a system in Unity, an Asset Store package, or code specific to your project. This can help you figure out what to do next.

In this capture, you can see that opening this particular project takes ~50 s. At first glance, it appears to take 43 s to load the AssetDatabase.
Upon further inspection, it’s clear that 14 s are spent on calls to OnPostProcessAllAssets. Further down, the code that executes during RegisterScriptsAndTryLoadingExistingUserAssemblies adds up to 10 s, and 5.7 s of that is loading the Domain, which is further slowed down by calls to scripts that have the [InitializeOnLoad] attribute. This startup data can help you track down performance bottlenecks and see whether the code is in your project or in Unity itself.

The Library folder can now contain multiple import results for the same asset, so projects that use the new Asset Import Pipeline no longer have a “simple” GUID (globally unique identifier) to folder mapping in the Library folder. Files in the Library folder are based on the hash of their contents plus a number of static and dynamic dependencies. This allows Unity to have features like fast platform switching, asset de-duplication, and skipping an import if the hash of an asset is already present in the Library folder. This means that it is no longer trivial to find import results in the Library folder. As an example, consider finding the import result of “Assets/Prefabs/MyPrefab.prefab”.

Here are two gists for different versions of Unity:

- Version for Unity 2020.2 beta
- Version for Unity 2020.1 and earlier

The samples are different because, as the implementation of the Asset Import Pipeline has matured, a number of APIs have been moved from the Experimental namespace to the AssetDatabase’s own namespace.

Often, you may want to find a particular asset in your project and do something with the result. You may even want to do this multiple times when you’re running Editor code. Calling AssetDatabase.FindAssets will traverse the entire project to match the query you’ve given to it.
As projects get bigger, this can become a performance bottleneck, as the time taken to search grows linearly with project size. As expected, the more assets a project has, the greater the time to search through them, although the time to find each individual asset remains fairly steady. If your project has hundreds of thousands of assets, searching for assets can lead to a noticeable development slowdown; at 200,000 assets, there is already a 200 ms delay for a simple search query.

The brute-force approach produces a common usage pattern: code that traverses the entire project to find one texture and then does something with it.

The AssetDatabase provides a way to look up an asset’s path by GUID. You can think of it as looking something up in a Dictionary by key instead of iterating an array for a match. The benefit of using this approach over brute force is that the AssetDatabase doesn’t need to look through the entire project to find the asset.
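The dictionary-versus-array analogy is easy to demonstrate outside Unity. This illustrative Python sketch (synthetic data, not AssetDatabase measurements) times a linear scan of 200,000 paths against a keyed lookup:

```python
import timeit

# Synthetic project: 200,000 asset paths and an invented GUID-to-path map.
N = 200_000
paths = [f"Assets/Textures/tex_{i}.png" for i in range(N)]
guid_to_path = {f"guid{i:028x}": p for i, p in enumerate(paths)}

target_guid = f"guid{N - 1:028x}"
target_path = paths[-1]

# Linear scan (like traversing the whole project) vs. keyed lookup.
scan = timeit.timeit(lambda: target_path in paths, number=10)
lookup = timeit.timeit(lambda: guid_to_path[target_guid], number=10)
print(scan > lookup)  # → True: the scan cost grows with N, the lookup does not
```

The scan’s cost grows with project size, which is exactly why FindAssets slows down on huge projects while a GUID lookup stays fast.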
It can just use the GUID as a database index and follow that path to load the asset into memory. You can find the GUID in an asset’s corresponding .meta file, for example:

fileFormatVersion: 2
guid: 9fc0d4010bbf28b4594072e72b8655ab
DefaultImporter:
  externalObjects: {}
  userData:
  assetBundleName:
  assetBundleVariant:

In this case, the GUID for this asset is 9fc0d4010bbf28b4594072e72b8655ab. Given that information, you can do the following:

var path = AssetDatabase.GUIDToAssetPath("9fc0d4010bbf28b4594072e72b8655ab");
var asset = AssetDatabase.LoadMainAssetAtPath(path);

Now your asset is ready to be used.

As a side note, if you’re interested in speeding up searching within the Editor, you should also be aware of the following tools:

- The Quick Search package, which allows you to search over multiple areas of Unity
- TypeCache, to search for scripts derived from a type you know

Normally, the GUID of an asset is constant. However, in certain situations an asset and its .meta file are duplicated, causing a conflict that the AssetDatabase has to resolve. This can happen when:

- You duplicate a folder inside a project to another place in the same project
- You import an Asset Store package multiple times, or copy a folder from another project into your project multiple times

When AssetDatabase.Refresh executes, a lot of systems need to work together to present your project in a valid state. In my Unite Copenhagen talk, I detail the various steps of what happens when Refresh is called. Particular callbacks executed during a Refresh interact with your code, and this can affect how long the Refresh operation takes to complete. The more code that gets executed during a Domain Reload, the slower your Editor experience will be. To stay in the flow when iterating on code, think carefully about when code should be executed and whether it can be deferred until later.

During a Domain Reload, your code will be executed if it contains any of the following methods:

1. Awake
2. OnEnable
3. OnValidate

Your code in those methods should ideally be very fast or should be deferred to run at another time (not during a Refresh, for example). These callbacks are supposed to help you restore certain state, but since there are no restrictions on what can be done within them, any code that doesn’t scale (i.e., anything that traverses the entire project) slows down your iteration velocity for scripts while the Editor is open. Another approach is to use EditorApplication.delayCall, where your code gets executed on the next Editor tick, after the AssetDatabase has had a chance to detect and import all changes on disk.

Follow this thread on the Unity Forums to stay up to date with news about improvements in this area.

I hope you’ve found these tips useful. Let us know what other things you’d like to know about the Asset Import Pipeline or what your pain points are. We are actively working on improving the Asset Import Pipeline, and we want to make your iteration as close to instant as possible, so you can be more productive when working with the Editor and changing assets and/or scripts.

>access_file_
1369|blog.unity.com

Fixing Time.deltaTime in Unity 2020.2 for smoother gameplay: What did it take?

Unity 2020.2 beta introduces a fix to an issue that afflicts many development platforms: inconsistent Time.deltaTime values, which lead to jerky, stuttering movements. Read this blog post to understand what was going on and how the upcoming version of Unity helps you create smoother gameplay.

Since the dawn of gaming, achieving framerate-independent movement in video games has meant taking frame delta time into account. This achieves the desired effect of an object moving at constant average velocity, regardless of the frame rate the game is running at. In theory, it should also move the object at a steady pace if your frame rate is rock solid. In practice, the picture is quite different: if you looked at actual reported Time.deltaTime values, you would have seen them fluctuate from frame to frame.

This is an issue that affects many game engines, including Unity - and we’re thankful to our users for bringing it to our attention. Happily, Unity 2020.2 beta begins to address it. So why does this happen? Why, when the frame rate is locked to a constant 144 fps, is Time.deltaTime not equal to 1⁄144 seconds (~6.94 ms) every time? In this blog post, I’ll take you on the journey of investigating and ultimately fixing this phenomenon.

In layman’s terms, delta time is the amount of time your last frame took to complete. It sounds simple, but it’s not as intuitive as you might think. Most game development books give the canonical definition of a game loop: process input, update the game state, render, repeat. With a game loop like that, it’s easy to calculate delta time by timestamping each iteration. While this model is simple and easy to understand, it’s highly inadequate for modern game engines.
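For reference, the canonical loop and its delta-time computation can be sketched like this (illustrative Python; a real engine loop is native code and far more involved):

```python
import time

def run(frames, update, render):
    """Canonical game loop: delta time is how long the last iteration took."""
    last = time.perf_counter()
    for _ in range(frames):
        now = time.perf_counter()
        dt = now - last      # seconds since the previous frame started
        last = now
        update(dt)
        render()

positions = []
state = {"x": 0.0, "speed": 100.0}  # units per second

def update(dt):
    state["x"] += state["speed"] * dt  # framerate-independent movement
    positions.append(state["x"])

run(frames=3, update=update, render=lambda: time.sleep(0.001))
print(len(positions))  # → 3
```

Multiplying velocity by `dt` is the framerate-independent movement idiom the post opens with; the rest of the article is about why `dt` itself is noisy in a pipelined engine.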
To achieve high performance, engines nowadays use a technique called “pipelining,” which allows an engine to work on more than one frame at any given time. Compare a serial game loop with a pipelined one: in both cases, the individual parts of the game loop take the same amount of time, but the pipelined version executes them in parallel, which allows it to push out more than twice as many frames in the same amount of time. Pipelining the engine changes the frame time from being equal to the sum of all pipeline stages to being equal to the longest one. However, even that is a simplification of what actually happens every frame in the engine.

Each pipeline stage takes a different amount of time every frame. Perhaps this frame has more objects on the screen than the last, which would make rendering take longer. Or perhaps the player rolled their face on the keyboard, which made input processing take longer. Since different pipeline stages take different amounts of time, we need to artificially halt the faster ones so they don’t get ahead too much. Most commonly, this is implemented by waiting until some previous frame is flipped to the front buffer (also known as the screen buffer). If VSync is enabled, this additionally synchronizes to the start of the display’s VBLANK period. I’ll touch more on this later.

With that knowledge in mind, let’s take a look at a typical frame timeline in Unity 2020.1. Since platform selection and various settings significantly affect it, this article assumes a Windows Standalone player with multithreaded rendering enabled, graphics jobs disabled, VSync enabled, and QualitySettings.maxQueuedFrames set to 2, running on a 144 Hz monitor without dropping any frames.

Unity’s frame pipeline wasn’t implemented from scratch. Instead, it evolved over the last decade to become what it is today.
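The pipelining arithmetic above can be checked with a toy model. The stage durations here are invented for illustration; the point is only that a serial loop’s frame time is the sum of its stages, while a pipelined loop’s frame time is the longest stage:

```python
# Invented per-stage durations for one frame, in milliseconds.
stages_ms = {"input": 1.0, "update": 3.0, "render": 4.0, "gpu": 5.0}

sequential_frame_ms = sum(stages_ms.values())  # serial loop: 13.0 ms per frame
pipelined_frame_ms = max(stages_ms.values())   # pipelined: 5.0 ms per frame

print(sequential_frame_ms, pipelined_frame_ms)        # → 13.0 5.0
print(round(sequential_frame_ms / pipelined_frame_ms, 1))  # → 2.6 (x more frames)
```

With these numbers the pipelined engine pushes out 2.6x as many frames, matching the post’s “more than twice as many frames” claim.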
If you go back to past versions of Unity, you will find that it changes every few releases. You may immediately notice a couple of things about it:

Once all the work is submitted to the GPU, Unity doesn’t wait for that frame to be flipped to the screen: instead, it waits for the previous one. This is controlled by the QualitySettings.maxQueuedFrames API, which describes how far the frame that is currently being displayed can be behind the frame that is currently being rendered. The minimum possible value is 1, since the best you can do is render frame n+1 while frame n is being displayed on the screen. Since it is set to 2 in this case (the default), Unity makes sure that frame n gets displayed on the screen before it starts rendering frame n+2 (for instance, before Unity starts rendering frame 5, it waits for frame 3 to appear on the screen).

Frame 5 takes longer to render on the GPU than a single refresh interval of the monitor (7.22 ms vs 6.94 ms); however, none of the frames are dropped. This happens because QualitySettings.maxQueuedFrames with a value of 2 delays when the actual frame appears on the screen, which produces a time buffer that safeguards against dropping frames, as long as the “spike” doesn’t become the norm. If it were set to 1, Unity would surely have dropped the frame, as it would no longer overlap the work.

Even though the screen refresh happens every 6.94 ms, Unity’s time sampling presents a different picture. The delta time average in this case ((7.27 + 6.64 + 7.03)/3 = 6.98 ms) is very close to the actual monitor refresh interval (6.94 ms), and if you were to measure for a longer period of time, it would eventually average out to exactly 6.94 ms. Unfortunately, if you use this delta time as it is to calculate visible object movement, you will introduce a very subtle jitter. To illustrate this, I created a simple Unity project.
It contains three green squares moving across the world space:

- The top square moves the same distance every frame. This square represents perfect movement and is our point of reference. It is surrounded by two red vertical lines that make it easier to see whether the other squares are aligned with it.
- The middle square moves the distance the top square moves over the course of a second, multiplied by Time.deltaTime.
- The bottom square uses a Rigidbody to move (with interpolation enabled), and its velocity is set to the distance that the top square moves over the course of a second.

The camera is attached to the top square, so it appears perfectly still on the screen. If Time.deltaTime were accurate, the middle and bottom squares would appear to be still as well. The squares move twice the width of the display every second: the higher the velocity, the more visible the jitter becomes. To illustrate movement, I placed purple and pink non-moving cubes at fixed positions in the background so that you can tell how fast the squares are actually moving.

In Unity 2020.1, the middle and the bottom squares don’t quite match the top square’s movement - they jitter slightly. Below is a video captured with a slow-motion camera (slowed down 20x).

So where do these delta time inconsistencies come from? The display shows each frame for a fixed amount of time, changing the picture every 6.94 ms. This is the real delta time, because that is how much time it takes for a frame to appear on the screen, and that is the amount of time the player of your game will observe each frame for.

Each 6.94 ms interval consists of two parts: processing and sleeping. The example frame timeline shows that delta time is calculated on the main thread, so that will be our main focus. The processing part of the main thread consists of pumping OS messages, processing input, calling Update, and issuing rendering commands. “Wait for render thread” is the sleeping part.
The sum of these two intervals is equal to the real frame time. Both of these timings fluctuate for various reasons every frame, but their sum remains constant: if the processing time increases, the waiting time decreases, and vice versa, so together they always equal exactly 6.94 ms. In fact, the sum of all the parts leading up to the wait always equals 6.94 ms. However, Unity queries time at the beginning of Update. Because of that, any variation in the time it takes to issue rendering commands, pump OS messages, or process input events will throw off the result.

A simplified Unity main thread loop samples time, calls Update, issues rendering, and then waits for the render thread. The solution to this problem seems straightforward: just move the time sampling to after the wait. However, this change doesn’t work correctly: rendering would then see different time readings than Update(), which has adverse effects on all sorts of things. One option is to save the sampled time at that point and update engine time only at the beginning of the next frame, but that would mean the engine would be using time from before rendering the latest frame.

Since moving SampleTime() to after Update() is not effective, perhaps moving the wait to the beginning of the frame will be more successful. Unfortunately, that causes another issue: now the render thread must finish rendering almost as soon as it is requested, which means the rendering thread benefits only minimally from doing work in parallel.

Let’s look back at the frame timeline: Unity enforces pipeline synchronization by waiting for the render thread each frame. This is needed so that the main thread doesn’t run too far ahead of what is being displayed on the screen. The render thread is considered “done working” when it finishes rendering and waits for a frame to appear on the screen - in other words, when it waits for the back buffer to be flipped and become the front buffer.
However, the render thread doesn’t actually care when the previous frame was displayed on the screen - only the main thread is concerned with that, because it needs to throttle itself. So instead of having the render thread wait for the frame to appear on the screen, this wait can be moved to the main thread. Let’s call it WaitForLastPresentation(). The main thread loop then waits for the last presentation, samples time, and only afterwards updates and renders.

Time is now sampled just after the wait portion of the loop, so the timing is aligned with the monitor’s refresh rate. Time is also sampled at the beginning of the frame, so Update() and Render() see the same timings.

It is very important to note that WaitForLastPresentation() does not wait for frame n−1 to appear on the screen. If that were the case, no pipelining would happen at all. Instead, it waits for frame n − QualitySettings.maxQueuedFrames to appear on the screen, which allows the main thread to continue without waiting for the last frame to complete (unless maxQueuedFrames is set to 1, in which case every frame must be completed before a new one starts).

After implementing this solution, delta time became much more stable than before, but some jitter and occasional variance still occurred: we depend on the operating system waking the engine up from sleep on time, which can take multiple microseconds and therefore introduce jitter into the delta time, especially on desktop platforms where multiple programs are running at the same time.

To improve the timing, you can use the exact timestamp of a frame being presented to the screen (or an off-screen buffer), which most graphics APIs/platforms allow you to extract. For instance, Direct3D 11 and 12 have IDXGISwapChain::GetFrameStatistics, while macOS provides CVDisplayLink. There are a few downsides to this approach, though:

- You need to write separate extraction code for every supported graphics API, which means that time measurement code is now platform-specific and each platform has its own separate implementation. Since each platform behaves differently, a change like this runs the risk of catastrophic consequences.
- With some graphics APIs, VSync must be enabled to obtain this timestamp. This means that if VSync is disabled, the time must still be calculated manually.

However, I believe this approach is worth the risk and effort. The result obtained using this method is very reliable and produces timings that directly correspond to what is seen on the display. Since we no longer have to sample the time ourselves, the WaitForLastPresentation() and SampleTime() steps are combined into a new step. With that, the problem of jittery movement is solved.

Input latency is a tricky subject. It’s not easy to measure accurately, and it can be introduced by various factors: input hardware, the operating system, drivers, the game engine, game logic, and the display. Here I focus on the game engine’s contribution to input latency, since Unity can’t affect the other factors. Engine input latency is the time between an input OS message becoming available and the resulting image being dispatched to the display. Given the main thread loop (and assuming QualitySettings.maxQueuedFrames is set to 2), quite a lot happens between input becoming available as an OS message and its results being visible on the screen.
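The revised main-thread loop, and the points where input latency accumulates, can be sketched in plain Python. Everything here is a stand-in for engine internals (the wait is simulated with a sleep); it only illustrates the ordering the post describes, not real synchronization:

```python
import time

REFRESH = 1 / 144  # one refresh interval on a 144 Hz display, ~6.94 ms

def wait_for_last_presentation():
    """Stand-in: block until frame n - maxQueuedFrames is on screen."""
    time.sleep(REFRESH)

def pump_os_messages():  # input becomes available to the engine here
    pass

def update(dt):          # game logic consumes the input
    pass

def render():            # results appear on screen ~maxQueuedFrames later
    pass

deltas, last = [], time.perf_counter()
for frame in range(5):
    wait_for_last_presentation()  # throttle FIRST...
    now = time.perf_counter()     # ...then sample time, aligned with the refresh
    deltas.append(now - last)
    last = now
    pump_os_messages()
    update(deltas[-1])
    render()

print(len(deltas), all(d > 0 for d in deltas))  # → 5 True
```

Because input is pumped after the wait but its results only reach the display several refresh intervals later, each queued frame between `pump_os_messages()` and presentation adds one refresh interval of latency.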
If Unity is not dropping frames and the time spent by the game loop is mostly waiting rather than processing, the worst-case engine input latency at a 144 Hz refresh rate is 4 * 6.94 = 27.76 ms, because we’re waiting for previous frames to appear on screen four times (that is, four refresh intervals).You can improve latency by pumping OS events and updating input after waiting for the previous frame to be displayed:This eliminates one wait from the equation, and now the worst-case input latency is 3 * 6.94 = 20.82 ms.It is possible to reduce input latency even further by reducing QualitySettings.maxQueuedFrames to 1 on platforms that support it. Then, the chain of input processing looks like this:Now, the worst-case input latency is 2 * 6.94 = 13.88 ms. This is as low as we can possibly go when using VSync.Warning: Setting QualitySettings.maxQueuedFrames to 1 will essentially disable pipelining in the engine, which will make it much harder to hit your target frame rate. Keep in mind that if you do end up running at a lower frame rate, your input latency will likely be worse than if you kept QualitySettings.maxQueuedFrames at 2. For instance, if it causes you to drop to 72 frames per second, your input latency will be 2 * (1/72) s = 27.8 ms, which is worse than the previous latency of 20.82 ms. If you want to make use of this setting, we suggest you add it as an option to your game settings menu so gamers with fast hardware can reduce QualitySettings.maxQueuedFrames, while gamers with slower hardware can keep the default setting.Disabling VSync can also help reduce input latency in certain situations. 
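The worst-case figures above all follow from one formula: latency = (number of waited refresh intervals) × (refresh interval). A quick sketch reproducing the post's numbers:

```python
# Reproduce the worst-case input latency figures from the post.
refresh_hz = 144
interval_ms = round(1000 / refresh_hz, 2)   # one refresh interval ≈ 6.94 ms

def worst_case_latency_ms(wait_intervals):
    # Each refresh interval the input spends queued or waiting adds
    # one full interval of latency in the worst case.
    return round(wait_intervals * interval_ms, 2)

print(worst_case_latency_ms(4))  # input pumped early, maxQueuedFrames = 2 -> 27.76
print(worst_case_latency_ms(3))  # input pumped after the wait             -> 20.82
print(worst_case_latency_ms(2))  # maxQueuedFrames = 1                     -> 13.88

# The cautionary case: maxQueuedFrames = 1, but the frame rate drops to 72 fps.
print(round(2 * 1000 / 72, 1))   # -> 27.8 ms, worse than 20.82 ms
```

This makes the trade-off concrete: lowering maxQueuedFrames removes wait intervals only while you can still hold the target frame rate; once the interval itself grows, the saved waits are more than cancelled out.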
Recall that input latency is the amount of time that passes between an input becoming available from the OS and the frame that processed the input being displayed on the screen or, as a mathematical equation:latency = t_display - t_inputGiven this equation, there are two ways to reduce input latency: either make t_display lower (get the image to the display sooner) or make t_input higher (query input events later).Sending image data from the GPU to the display is extremely data-intensive. Just do the math: sending a 2560x1440 non-HDR image to the display 144 times per second requires transmitting 12.7 gigabits every second (24 bits per pixel * 2560 * 1440 * 144). This data cannot be transmitted in an instant: the GPU is constantly transmitting pixels to the display. After each frame is transmitted, there’s a brief break, and transmitting the next frame begins. This break period is called VBLANK. When VSync is enabled, you’re essentially telling the OS to flip the frame buffer only during VBLANK:When you turn VSync off, the back buffer gets flipped to the front buffer the moment rendering is finished, which means that the display will suddenly start taking data from the new image in the middle of its refresh cycle, causing the upper part of the frame to come from the older frame and the lower part from the newer frame:This phenomenon is known as “tearing.” Tearing allows us to reduce t_display for the lower part of the frame, sacrificing visual quality and animation smoothness for input latency. This is especially effective when the game’s frame rate is lower than the display’s refresh rate, which allows a partial recovery of the latency caused by a missed VSync. It is also more effective in games where the upper part of the screen is occupied by UI or a skybox, which makes the tearing harder to notice.Another way disabling VSync can help reduce input latency is by increasing t_input. 
If the game is capable of rendering at a much higher frame rate than the refresh rate (for instance, at 150 fps on a 60 Hz display), then disabling VSync will make the game pump OS events several times during each refresh interval, which reduces the average time they sit in the OS input queue waiting for the engine to process them.Keep in mind that disabling VSync should ultimately be up to the player of your game, since it affects visual quality and can potentially cause nausea if the tearing ends up being noticeable. It is a best practice to provide a settings option in your game to enable/disable it if it’s supported by the platform.With this fix implemented, Unity’s frame timeline looks like this:But does it actually improve the smoothness of object movement? You bet it does!We ran the Unity 2020.1 demo we showed at the start of this post in Unity 2020.2.0b1. Here is the resulting slow-motion video:This fix is available in the 2020.2 beta for these platforms and graphics APIs: Windows, Xbox One, and Universal Windows Platform (D3D11 and D3D12); macOS, iOS, and tvOS (Metal); PlayStation 4; and Switch. We plan to implement this for the remainder of our supported platforms in the near future.Follow this forum thread for updates, and let us know what you think about our work so far.Further reading: The Elusive Frame Timing, article by Alen Ladavac; Controller to display latency in Call of Duty, GDC 2019 presentation by Akimitsu Hogge. 

1370|blog.unity.com

What is eCPM and how to calculate it

eCPM meaningeCPM is a term that’s thrown around quite often in the mobile advertising world. But it can be quite confusing, especially for new developers looking to monetize their app through in-app advertising. Let’s take a look at what it means and why it matters for app developers. What is eCPM?eCPM means “effective cost per thousand impressions,” which in layman’s terms is the ad revenue generated per 1,000 ad impressions.There are two sides to eCPM: monetization and user acquisition.On the monetization side, eCPM is a metric used to measure an app developer's ad monetization performance. If app developers have a high eCPM, it means that the ads served on their app are doing their job and converting users. The more users the ads convert, the more the app developer gets paid.On the user acquisition side, eCPM measures the ad revenue generated by a specific campaign. Because networks use eCPM to rank campaigns within their ad serving models, they serve the campaigns with the highest eCPMs first and most often - enabling the campaign to garner a high amount of impressions and scale quickly. In this way, eCPM represents the campaign’s buying power.eCPM vs CPM?CPM is the rate that advertisers pay per 1000 impressions while eCPM is the ad revenue per 1000 impressions.How to calculate eCPMeCPM formula for ad monetization:eCPM = (total earnings/total impressions) x 1,000.To calculate eCPM, divide your total advertising earnings by the total number of impressions your app served. Then multiply by 1,000. 
The final figure is your eCPM, or the amount of money your app earns per 1,000 ad impressions.eCPM formula for user acquisition:eCPM = CPI * IPMAs long as impressions and cost are measured, eCPM on the user acquisition side can be estimated by multiplying the CPI (how much you're willing to pay for each install) and IPM (how many installs out of a thousand impressions the campaign generates) of a campaign.eCPM floorAn eCPM floor, also called a flat eCPM or predefined CPM, is the minimum CPM an ad network must meet to serve an ad on your app. In other words, an ad network will not serve an ad if it's not able to meet the eCPM floor that you manually pre-set in your waterfall. If an ad network can't meet your eCPM floor, it will move on to the next instance in your waterfall. eCPM floors can be set for individual countries, for groups of countries, or at the global level. Read more about eCPM floors and best practices here.Note that eCPM floors are only relevant for traditional waterfalls and not for in-app bidding. Rather than manually assigning an eCPM floor to each line item in a waterfall, an in-app bidding solution asks each ad network how much it's willing to pay to serve an impression and automatically serves the impression to the highest bidder. While the mobile app industry is still operating in a hybrid bidding system, as the industry moves closer to pure in-app bidding monetization, eCPM floors will become less relevant.eCPM for adseCPM helps app developers evaluate and optimize their monetization strategy by letting them compare ad revenue generated across multiple variables, such as ad network, region, operating system, location, etc.For example, let’s say you want to understand which ad unit is performing better and making you more money. To do so, you’d calculate and compare the eCPM of both.In a month, you see that rewarded video generated 400 impressions and $5.00 in earnings, while banner ads generated 700 impressions and $3.00 in earnings. 
It’s hard to compare the performance of the two ad units based on these numbers alone.But after some quick math, we find that rewarded video has an eCPM of $12.50 and banner ads have an eCPM of $4.29. Now we see clearly that rewarded video is making you more money.eCPM marketing and advertisingAs we mentioned, eCPM is also critical for measuring and optimizing the performance of user acquisition campaigns - the higher the campaign's eCPM, the more impressions it'll win and the faster it'll scale.Take a look at the graph below and remember that eCPM for user acquisition managers equals CPI multiplied by IPM. That means Campaign A has an eCPM of $5, Campaign B $10, and Campaign C upwards of $40.Campaign A's eCPM is too low to win a high share of voice on the network, while Campaign B's eCPM is good enough to win a majority of a network's impressions. Campaign C, meanwhile, has such a high eCPM that UA managers can actually increase their margins, or lower their CPI, and still see the same campaign performance. That's usually due to an incredibly high IPM, from a high-performing creative.How to increase eCPMThere are several strategies app developers can implement to improve ad monetization eCPM. Take a look at a few of our top tips:1. Monetize with in-app biddingThink of in-app bidding as an auction - the solution asks all the available bidding ad networks how much they're willing to pay for an impression and serves the impression to the highest bidder. That's in contrast to the traditional waterfall, which has multiple line items that app developers manually arrange according to eCPM.In-app bidding drives higher eCPMs for app developers for a few reasons:The competition among the ad networks drives up the price of each impression - just like in an auction.Because it's such a manual process to optimize waterfalls, app developers typically only look after a few of their most important ones rather than all of them (per geo, segment, etc). 
This means that many of the waterfalls aren't as optimized as they could be, and developers are missing out on potential revenue. However, in-app bidding completely automates the optimization process, since there's no need to manually arrange anything. This means there's no chance developers lose out on revenue because of poor optimization.If you placed a certain network low in your traditional waterfall but it was potentially willing to pay a high amount for the impression, you'd simply miss out on the revenue. But because in-app bidding flattens the waterfall, and simultaneously asks all networks how much they're willing to pay for each impression, you never risk losing out on a potentially higher bid.By using in-app bidding to ensure the best-performing campaigns are delivered first, you can quickly and easily increase your own eCPM.2. A/B test ad formats and remove the ones that don’t performThere are several ad units out there to choose from today: user-initiated ones that users opt into engaging with, like rewarded video and offerwall, and system-initiated ad units that developers choose when to show, like banners and interstitials. Each has its own advantages. Be sure to continuously run A/B tests to see which ones are performing well and which ones aren’t. If you see that a certain ad unit is consistently delivering low eCPMs, remove it from your app. ironSource's ad monetization A/B testing solution lets developers test different ad monetization strategies and know with certainty which one will maximize revenue. Here are five examples of A/B tests you can start running now. Using ironSource mediation, ZiMAD was only running rewarded video ads on their game, My Museum. Their project manager was curious about adding interstitials as a second ad unit, to see if it would increase revenue without harming retention. 
She used ironSource's ad monetization A/B testing tool to set up two groups: the first group acted as a control and ran only rewarded video ads, while the second group ran both rewarded video and interstitials from Day 1. The result? ZiMAD’s revenue increased by 17%.
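The eCPM arithmetic from this post fits in a few lines. A sketch (the function names are ours; the numbers are the post's own worked example):

```python
def ecpm_monetization(total_earnings, total_impressions):
    # eCPM = (total earnings / total impressions) x 1,000
    return total_earnings / total_impressions * 1000

def ecpm_user_acquisition(cpi, ipm):
    # UA-side estimate: eCPM ≈ CPI x IPM (installs per thousand impressions)
    return cpi * ipm

# The worked example from the post: rewarded video vs. banner ads.
rewarded = ecpm_monetization(5.00, 400)   # $5.00 over 400 impressions
banner   = ecpm_monetization(3.00, 700)   # $3.00 over 700 impressions
print(round(rewarded, 2), round(banner, 2))  # -> 12.5 4.29
```

Run side by side, the two ad units become directly comparable: rewarded video earns $12.50 per thousand impressions against the banners' $4.29, exactly as the post concludes.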

1371|blog.unity.com

3 in-ad data metrics to optimize in your mobile game ad creative - and how

Mobile ad metricsThe rise of interactive ads has opened up a new world of in-ad data which gives insight into the user journey, from impression to click and everything in between. With the market as saturated as ever, intricate attention to detail when optimizing ad creatives can be the difference between a scalable IPM and a poor IPM; a successful game and a failure. Below, we take you through the key in-ad data metrics that you should be optimizing, and how, through various A/B tests, you can do exactly that.Engagement rate and time to engage (TTE)Getting the user to tap on the playable and begin playing is the first challenge for your creative. Engagement rate is the percentage of users who interacted with your ad, and represents the first stage of interaction with the creative. Time to engage is the number of seconds it takes for the user to interact with your ad, for instance by tapping on it and starting to play with it.For hyper-casual games, the goal is to have a short TTE, as this will indicate that the creative is intuitive and immediately appealing. By contrast, a playable for a more complex game can have a longer TTE - after all, more complex games will likely require more information on the creative, and you want users to understand what is required of them.To improve engagement rates, focus on optimizing the tutorial at the start of the ad. When developing the tutorial, it's crucial not to overwhelm the user with information; avoid adding too many instructions on the screen, and focus on making the gameplay as intuitive as possible. The key is to hook the user in the first couple of seconds - that should guide your efforts.After the initial interaction, such as engaging with the tutorial, comes the core gameplay of the playable ad. Test combinations of level of difficulty, number of levels, and amount of obstacles within the core gameplay, A/B testing to try to understand what motivates your users. 
If, for example, engagement rates are low and a user isn’t interacting with the core gameplay, try adding a hint, such as a gesture showing what’s needed to progress.Time to complete and completion rateAchieving high engagement rates is just one step in the funnel - you then need to focus on maximizing completion rates. Completion rate is the percentage of users who get to the end of the ad creative without exiting out or skipping. This indicates how engaging your creative is, although high engagement rates do not always translate to high completion rates.The key to ensuring the latter is choosing the right gameplay to showcase. Doing this requires an understanding of what motivates your core audience, and leveraging this in the ad. For instance, in match-3 games, typically most users are primarily motivated by meta features, so it’s important to focus your A/B tests on the metagame and not the core gameplay. In a midcore role-playing game, where players are strongly motivated by characters and leveling up, consider focusing the ad creative on the different characters and highlighting their traits, A/B testing to determine which characters drive the highest engagement and completion rates.Click-to-store rateHigh engagement and completion rates on their own do not necessarily mean a successful playable ad campaign - success will be measured by the number of users who click on the ad’s CTA (call to action) to install the game in the app store. Click-to-store rate is the percentage of users who completed your ad creative and then clicked to go to the app store listing.Often, if the other parts of the puzzle are finely tuned, such as the tutorial and gameplay, then a high click-to-store rate will be a natural outcome. There are also ways to directly optimize the click-to-store rate, such as the CTA. 
This is the button that appears at the end of the ad, prompting users to download with a message like “Download now” or “Play for free”.The text in the button can be optimized by leveraging the motivations of the users playing your game. For example, Playworks optimized the CTA of a midcore game according to the desire of its core audience to unlock cool characters. Instead of the simple “Download now”, Playworks tested the CTA with “Level up” and “Upgrade your hero”. This helped the ad creative meet the midcore genre’s click-to-store rate benchmark.It’s also worth noting that conversions can take place before the end card appears. Some developers test slightly more aggressive strategies in which the user is taken straight to the app store after a certain number of clicks on the ad.Data and creatives: Two peas in a podA good game doesn't necessarily translate to a good creative. That’s why rigorous A/B testing, paying close attention to genre benchmarks, analyzing your ad performance, and optimizing based on your findings are key for any mobile games company, whatever the genre.Ad creative optimization is only becoming more crucial for mobile games, and in the largely automated world of performance marketing, it is one of the last levers UA teams can use to gain an advantage over competitors. By focusing on optimizing the metrics discussed above, you can ensure you’re making the most of this opportunity to stand out from the crowd and run scalable, high-impact campaigns.Learn more by checking out our other blogs, including ‘Mobile Gaming Industry Trends in 2021’ and ‘How to Improve User Experience and Conversions in Playable Ads’. 
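The three funnel metrics discussed above are all simple ratios. A sketch with made-up numbers (the names and figures are ours, not an ironSource API); the article doesn't pin down the denominator for completion and click-to-store rates, so this sketch uses engaged users and completed users respectively:

```python
def pct(part, whole):
    # Express one funnel stage as a percentage of a chosen base.
    return round(part / whole * 100, 2)

# Hypothetical in-ad funnel for one playable creative (invented numbers):
impressions, engagements, completions, store_clicks = 10_000, 3_200, 1_800, 540

engagement_rate = pct(engagements, impressions)   # users who tapped and played
completion_rate = pct(completions, engagements)   # engaged users reaching the end
cts_rate        = pct(store_clicks, completions)  # completers who clicked the CTA

print(engagement_rate, completion_rate, cts_rate)  # -> 32.0 56.25 30.0
```

Whichever denominator convention you pick, keep it fixed across A/B variants; otherwise the comparison between creatives is meaningless.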

1373|blog.unity.com

Create your first game, brick by virtual brick, with the LEGO® Microgame

New users can start creating in Unity faster than ever with the LEGO Microgame (currently in beta), our most recent addition to the Microgames series.Unity’s Microgames are guided experiences designed to get new users working in the Editor quickly and easily. They’re designed to help you move swiftly from opening your first project to publishing your first game in about 45 minutes. Follow in-Editor tutorials to better understand how everything clicks together, while making your own creative decisions and personalizing along the way. You can access the Microgames via the Unity Hub (v2.4.0).Our latest Microgame release allows you to discover a joyful experience building with virtual LEGO bricks as you learn how to use Unity’s fundamental systems. Create and play your first game then publish it to Unity’s hosting site for user-generated games, where you can show off and share your new creation with friends and the larger Unity community.In partnership with LEGO Games, Unity has brought the LEGO Group’s System in Play and LEGO minifigures into the Unity Editor for the very first time.For anyone who’s ever loved building with LEGO bricks, this Microgame is the perfect place to start your Unity creative journey. Gameplay behaviors and actions have been embedded into the virtual bricks, allowing you to build your interactive project, brick by glossy brick.If you’re new to Unity, you can begin here – it’s as easy as downloading the Unity Hub, launching the Editor, and opening the LEGO Microgame project to follow step-by-step tutorials that guide you through the creative process.Already have Unity (2019.4 LTS) installed? Just launch the Unity Hub (v2.4.0) and start a New project to load the LEGO Microgame.Your comments are vital to helping us improve the LEGO Microgame experience for others, so tell us what you think in our form, and stay tuned for more exciting Microgame news to help level up your Unity skills. We can’t wait to see what you create!

1374|blog.unity.com

How to choose the right netcode for your game

Almost all multiplayer games have to account for and solve inherent network-related challenges that impact the game experience, such as latency, packet loss, and scene management. Games solve these challenges in a variety of ways.Finding the right solution depends on your game’s genre, the scale of its players and networked objects, competitiveness, and other aspects, like how much control is needed over the networking layer. Different scenarios require different netcode solutions.In this blog, we cover common networking libraries used in the Unity Engine, plus the results of a study conducted on developers’ experience with these solutions, to help you determine what might be right for your project.Netcode is a high-level term many engineers use to refer to frameworks that are specifically designed to help make building certain aspects of networked gameplay easier – like data synchronization or lag compensation.A fully networked framework contains two essential components:1. A transport (base) layer that manages all traffic and packets sent to/from a client, host, or server2. A higher-level abstraction layer that simplifies common networking gameplay needs and integrates with toolsOftentimes, a “netcode solution” refers to that second, abstraction layer, as this is where you implement your networked gameplay and optimizations.Different netcode solutions have different limitations and capabilities that may make it easier or harder to build your multiplayer experience. It’s important to know exactly what you want to build and evaluate your options before you start, to help reduce refactors that could be expensive.Unity has two different first-party netcode solutions: Netcode for GameObjects and Netcode for Entities.The Netcode for GameObjects package was built to help you more easily synchronize scenes and GameObjects data across multiple clients and platforms with either client- or server-authoritative models. 
The Unity engine helps you optimize your multiplayer games with tools to profile the network both in Play mode and at runtime.You can also use Relay from Unity Gaming Services, which is a cost-effective peer-to-peer companion service, to scale playtests and build a multiplayer game without having to invest in dedicated hosting.The upcoming Unity 2022 LTS expands the range of possibilities: the Netcode for Entities package, based on ECS and built for performance and scalability, lets you target competitive action multiplayer games with ambitious server-authoritative gameplay featuring prediction, interpolation, and lag compensation.You can manage costs with a dedicated server build target that automatically strips assets, and deploy it with Game Server Hosting from Unity Gaming Services, a streamlined approach to maintaining resiliency and scalability in your gaming infrastructure, so you can focus on providing the best experience to your players.Unity has gathered feedback about some of the most widely used third-party netcode solutions, and we’ve created a decision tree to help guide you through the process of deciding which framework might work best for you.To create these tools, we gathered and analyzed data from three sources:A survey of over 200 Unity users that asked for information about their experiences with specific netcode frameworksOver 20 in-depth interviews with users actively shipping multiplayer games with UnityLearnings from prototypes we built with MLAPI (now known as Netcode for GameObjects), DarkRift 2, Mirror, and Photon Quantum.Customers scored and ranked the top netcode solutions across different axes based on their experience.Networking is complex, so the level of stability and support you receive through your netcode solution is critical. 
Stability and support of each netcode solution were evaluated along three axes – the likelihood of bugs or crashes, response time to fix issues or help debug a challenge, and the likelihood of breaking changes to the APIs.We compiled users’ evaluations of how easy it is to get started and perform common tasks, including the provision of good samples, documentation, tutorials, and the solution’s offering of simple APIs for prototyping.Who wants a solution that has poor performance? To score this for each netcode solution evaluated, we looked for limited GC/allocations, minimal latency overhead, performant compute, and ideally the ability to multithread.Depending on the genre of game you’re looking to create, scalability of the netcode solution is an important consideration. Similar to performance, we evaluated each solution’s ability to support a larger number of connected clients without a large sacrifice in performance.Having a fully featured netcode solution is important to support any genre or unique game-specific needs your project has. For ranking the solutions, we focused on mid-level features like object and variable replication, RPCs, scene management, and so on. We also looked for higher-level features like prediction and lag compensation.In order to properly budget for your netcode solution, we’ve included evaluations of the cost of each solution as well. This consideration factors in both the cost of the libraries/solution and possible hidden costs, such as operating overhead that has to be managed separately.Before making a decision about a netcode solution, it’s important to take a few things into consideration.First, we highly recommend that you still perform your own evaluation. 
Our summary of the most common options can be helpful, but you should also do an assessment based on the specifics of your game.Secondly, this list is based on an evaluation run in 2020, and doesn’t represent all of the alternatives for netcode or transport layer solutions that are currently available.Lastly, consider how much extra network-related work and maintenance you’re prepared to take on. Does your game need that much network overhead?If you are building something casual or co-operative that doesn’t require perfect synchronization of all player states across devices, consider a netcode solution with less overhead and development cost like Netcode for GameObjects.If you’re making a game that’s more fast-paced and action-oriented, where players’ physical skills are put in competition with each other, consider a solution like Netcode for Entities. This can support things like client prediction and has a method of compensating for lag.The information below is a start, but we recommend that you also download the full netcode report, where we go into greater detail about these third-party netcode solutions:MLAPI (now acquired by Unity and evolved into Netcode for GameObjects)DarkRift 2Photon PUNPhoton Quantum 2.0MirrorNote: The PDF covers solutions most referenced by customers, but there are more. Some customers discussed other solutions for which we haven’t yet gathered enough customer evidence to evaluate, such as Forge, Normcore, Bolt, LL (Enet, LiteNet, and so on). We encourage you to add these to your considerations to see if they would be an option for your setup.Whether you’re building the next battle royale smash hit or a cozy online co-op, understanding the basics of multiplayer networking and the netcode solutions available to you is essential.See Unity’s Boss Room co-op sample for a production quality example of a project made with Netcode for GameObjects. 
If you’re looking for an example of a fast-paced, competitive multiplayer game that is also network performant, check out Unity’s ECS Network Racing sample built with the Entity Component System (ECS). To see a game example that fully utilizes everything Game Server Hosting has to offer, check out the Battle Royale sample built with Photon Fusion.Happy creating.Editor’s note: This blog was updated in March 2023 with the latest information on Unity’s netcode solutions in order to provide more helpful information for developers choosing the right netcode solution for their game. The report data is still from 2020.

1375|blog.unity.com

Toyota makes mixed reality magic with Unity and Microsoft HoloLens 2

Learn how Unity and HoloLens 2 have become essential tools at one of the world’s largest automakers to streamline processes, increase understanding, and save time.One of the core principles of Toyota Motor Corporation is Kaizen (continuous improvement). In both production equipment and work procedures, Kaizen seeks to drive maximum quality, efficiency gains, and elimination of waste.Toyota often turns to technology to deliver these improvements, which is why the automaker was an early adopter of 3D data for digital engineering and later embraced real-time 3D technology. Toyota uses Unity’s real-time 3D development platform in many ways across its automotive lifecycle.Its virtual pipeline starts by importing vehicle data into Unity using Pixyz. This process quickly converts Toyota’s large computer-aided design (CAD) assemblies into lightweight content suitable for real-time 3D.The company then uses Unity to develop applications tailored to its needs and deploy them to various platforms, whether it’s conducting training sessions in virtual reality (VR), creating stunningly realistic car configurators for its luxury Lexus brand, or condensing inspection workflows from days to hours with HoloLens.Toyota has used Unity to create and deploy mixed reality applications to Microsoft’s revolutionary device across its automotive production process. Naturally, its team was eager to expand their mixed reality capabilities with HoloLens 2, the next generation of Microsoft’s wearable holographic computer.Watch the talk below from Koichi Kayano, the project leader of mixed reality for automotive digital engineering at Toyota, which introduces several proof of concept cases in progress. 
Learn how Unity and Microsoft’s new mixed reality devices are helping Toyota achieve Kaizen in several aspects of design, manufacturing, and field service.Here are some of the many ways Toyota is saving time, reducing costs, and driving efficiencies with mixed reality.Previously an arduous task, CFD analysis is now made simpler with the assistance of mixed reality. Toyota uses Unity and HoloLens 2 to capture and display CFD analysis on a vehicle in real time to streamline the design review process.Going around a stationary vehicle, the user can simulate and analyze how its design affects aerodynamics. And using multiple HoloLens 2 devices, Toyota’s team can share their view with one another to better communicate and collaborate during a review process.Using Unity and HoloLens 2, Toyota captures and displays CFD analysis on a vehicle in real time. Once a vehicle is assembled, it is challenging to explain the functionality of hidden mechanisms within the vehicle. This is made more difficult when the function requires the vehicle to be in motion.Using HoloLens 2 and Unity, users can now move around and inspect the inner workings of a “moving” vehicle – making a task that was once impossible now easy and safe to perform.Users can see how the vehicle operates at startup and upon acceleration and deceleration in mixed reality. The possibility of human error is always present, even among expert technicians. Simple errors such as a loose coolant cap carry consequences if not detected and corrected immediately.With the help of machine learning, Unity, and HoloLens 2, engineers are being guided to recognize and remedy inconsistencies that are easily missed by ordinary inspection.Toyota’s team can easily spot mistakes that the human eye would miss with mixed reality. In this case, HoloLens 2 detects that the oil level gauge is improperly installed and highlights it in red text.Trying to reduce human error previously required a lot of human effort, however. 
To train machine learning models on Microsoft Azure to recognize these simple mistakes, Toyota’s team needed to take 20,000 photos and spend 200 hours annotating the photos manually. Manual annotation was a time-consuming process for Toyota and required its team to label tens of thousands of photos to help the AI system detect errors accurately.

Using Unity, Toyota created 3D models of the vehicle and the body parts under the hood, varied the model’s position in 3D space, and automatically captured a large volume of labeled images to train its machine learning models.

Toyota used Unity to automatically capture and tag images to train its machine learning models to recognize errors after body parts are installed.

Compared to the traditional workflow that took 200 hours, this approach generated the needed amount of auto-labeled images in just 30 minutes – a 400x improvement in speed. This synthetic data also proved just as effective at training the machine learning models as the manually annotated photographs. For Toyota, this kind of time and cost savings is an ideal example of Kaizen in action.

Proper electrical wiring configuration is crucial for ensuring vehicles operate as intended. In a finished vehicle, however, inspection of connector positions and pin assignments is a serious challenge. Instead of relying on 2D diagrams, Toyota’s team now has the ability to visualize the entire three-dimensional electrical wiring diagram inside the engine, doors, dashboard, or any other part of the car they require.
This allows Toyota’s field service engineers to gain contextual understanding and to visualize the location of the wiring systems without the labor and time needed to remove physical parts.

Toyota field service engineers can visualize the entire three-dimensional electrical wiring diagram inside the vehicle.

Toyota also leveraged mixed reality applications from Dynamics 365 for two additional use cases.

--

As a global company, Toyota needs to connect field support engineers and experts across various locations. This presents numerous challenges and hinders effective collaboration and training. With HoloLens 2 and Dynamics 365 Remote Assist, Microsoft’s mixed reality distance collaboration tool, two or more participants can share the same view, communicate, and collaborate no matter their location. This solution allows remote staff to inspect work, educate, and train field engineers, while saving considerable time and money and improving overall results.

Toyota uses HoloLens 2 and Microsoft’s collaboration tools to seamlessly connect offsite experts to remote support engineers.

Step-by-step tutorials are critical to the ability of Toyota’s service engineers to make repairs effectively, but creating these manuals is time- and resource-intensive. In the past, a digital version of a work procedure app would take up to ten days and require an on-site computer graphics engineer. With Dynamics 365 Guides, Microsoft’s virtual training, performance, and instruction solution, the same task now takes 90 percent less time – just one day. This automated process allows anyone with basic training to create necessary applications through detailed instructions and guidance, freeing up programmers to focus on other tasks.

Dynamics 365 Guides reduced the person-hours needed to create training content for car repairs by 90 percent.

--

Learn more about tools for developing for HoloLens 2 with Unity.
Unity is the leading platform for creating content for augmented reality and virtual reality applications – subscribe to Unity Industrial Collection to get started today or learn more about our solutions for your business. Unity Technologies is the author of this blog post; Toyota Motor Corporation is not responsible for its content.

>access_file_
1376|blog.unity.com

Enhanced Aliasing with Burst

The Unity Burst Compiler transforms your C# code into highly optimized machine code. Since the first stable release of the Burst Compiler a year ago, we have been working to improve the quality, experience, and robustness of the compiler. As we’ve released a major new version, Burst 1.3, we would like to take this opportunity to give you more insight into a key performance-focused feature we are excited about - our new enhanced aliasing support.

The new compiler intrinsics Unity.Burst.CompilerServices.Aliasing.ExpectAliased and Unity.Burst.CompilerServices.Aliasing.ExpectNotAliased allow users to gain deep insight into how the compiler understands the code they write. Combined with extended support for the [Unity.Burst.NoAlias] attribute, these new intrinsics give our users a new superpower in the quest for performance.

In this blog post we will explain the concept of aliasing, how to use the [NoAlias] attribute to describe how the memory in your data structures aliases, and how to use our new aliasing compiler intrinsics to be certain the compiler understands your code the way you do.

Aliasing is when two pointers to data happen to be pointing to the same memory allocation.

The above is a classic performance-related aliasing problem - without any external information, the compiler cannot know whether a aliases with b, and so produces the following suboptimal assembly:

As can be seen, it:

- Stores 13 into b.
- Stores 42 into a.
- Reloads the value from b to return it.

It has to reload b because the compiler does not know whether a and b are backed by the same memory or not - if they were backed by the same memory then b would contain the value 42; if not, it would contain the value 13.

Let's look at the following simple job:

The above job simply copies from one buffer to another. If Input and Output do not alias, i.e.
none of the memory locations backing them overlap, then the output from this job is:

If a compiler is aware that these two buffers do not alias, as Burst is with the above code example, then it can vectorize the code so that it copies N things at a time instead of one:

Let's look at what would happen if Input and Output happened to alias. Firstly, the safety system will catch these common kinds of cases and provide user feedback if a mistake has been made. But let's assume you've turned safety checks off - what would happen?

As you can see, because the memory locations slightly overlap, the value a from the Input ends up propagated across the entirety of Output. Let's assume that the compiler also vectorized this example because it wrongly thought the memory locations did not alias - what would happen now?

Very bad things happen - the Output will not contain the data you expected.

Aliasing limits the Burst compiler's ability to optimize code. It takes an especially hard toll on vectorization - if the compiler thinks that any of the variables being used in the loop can alias, it generally cannot safely vectorize the loop.
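A minimal sketch of the two examples being described, assuming the names used in the text (a, b, Input, Output) and the usual Unity job boilerplate (the exact listings are not shown here):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

public static class AliasingExamples
{
    // Without aliasing information the compiler must reload *b before
    // returning, in case a and b point at the same memory.
    public static unsafe int Foo(int* a, int* b)
    {
        *b = 13;
        *a = 42;
        return *b;
    }
}

// A job that simply copies one buffer to another. If Burst knows that
// Input and Output cannot alias, it is free to vectorize the copy loop.
[BurstCompile]
public struct CopyJob : IJob
{
    [ReadOnly] public NativeArray<float> Input;
    public NativeArray<float> Output;

    public void Execute()
    {
        for (int i = 0; i < Input.Length; i++)
        {
            Output[i] = Input[i];
        }
    }
}
```

These are illustrative reconstructions, not the original listings; the aliasing behavior they demonstrate is exactly what the surrounding text walks through.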
In Burst 1.3.0 and later, with our extended and improved aliasing support, we have vastly improved our performance story around aliasing.

In Burst 1.3.0 we've extended the [NoAlias] attribute so that it can be placed in four places:

- On a function parameter, it signifies that the parameter does not alias with any other parameter to the function, or with the ‘this’ pointer.
- On a field, it signifies that the field does not alias with any other field of the struct.
- On a struct itself, it signifies that the address of the struct cannot appear within the struct itself.
- On a function return value, it signifies that the returned pointer does not alias with any other pointer ever returned from the same function.

For fields and parameters whose type is a struct, "does not alias with X" means that all pointers reachable through any of that struct's fields (even indirectly) are guaranteed not to alias with X.

For parameters, note that a [NoAlias] attribute on a parameter guarantees it does not alias with this, which is often a job struct containing all the data for the job.
In Entities.ForEach() scenarios, this will contain all the variables that were captured by the lambda.

We will now go through an example of each of these uses in turn.

If we look again at the example with Foo above, we can now add a [NoAlias] attribute and see what we get:

Which turns into:

Notice that the load from ‘b’ has been replaced with moving the constant 13 into the return register.

Let's take the same example from above but apply it to a struct instead:

The above produces the following assembly:

Which, translated into plain English:

- Loads the address of the data in ‘b’ into rax.
- Stores 42 into it (1109917696 is 0x42280000, which is 42.0f).
- Loads the address of the data in ‘a’ into rcx.
- Stores 13 into it.
- Reloads the data in ‘b’ and converts it to an integer for returning.

If you as the user know that the two NativeArrays are not backed by the same memory, you could:

By attributing both a and b with [NoAlias] we have told the compiler that they definitely do not alias with each other within the struct, which produces the following assembly:

Notice that the compiler can now just return the integer constant 42!

Nearly all structs you create as a user can safely assume that the pointer to the struct does not appear within the struct itself. Let's take a look at a classic example where this is not true:

Lists are one of the few structures where it is normal to have the pointer to the struct accessible from somewhere within the struct itself.

Now onto a more concrete example of where [NoAlias] on a struct can help:

Which produces the following assembly:

As can be seen, it:

- Loads ‘p’ into rax.
- Stores 42 into ‘p’.
- Loads ‘p’ into rax again!
- Loads ‘i’ into ecx.
- Returns the index into ‘p’ by ‘i’.

Notice that it loaded ‘p’ twice - why? The reason is that the compiler does not know whether ‘p’ points to the address of the struct bar itself - so once it has stored 42 into ‘p’, it has to reload the address of ‘p’ from ‘bar’, just in case.
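A sketch of the kind of struct and accessor being walked through here (the field names p and i come from the text; everything else is assumed for illustration):

```csharp
// Hypothetical reconstruction: without [NoAlias] on the struct, the
// compiler must assume bar.p could point back at bar itself, so after
// the store it reloads bar.p from memory, just in case.
public unsafe struct BarContainer
{
    public float* p;
    public int i;
}

public static class BarExample
{
    public static unsafe float Do(ref BarContainer bar)
    {
        *bar.p = 42.0f;       // store through the pointer...
        return bar.p[bar.i];  // ...then bar.p must be reloaded before indexing
    }
}
```

Placing [NoAlias] on the struct declaration itself is what tells the compiler that second load is unnecessary.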
A wasted load!

Let's add [NoAlias] now:

Which produces the following assembly:

Notice that it only loaded the address of ‘p’ once, because we've told the compiler that ‘p’ cannot be the pointer to ‘bar’.

Some functions can only return a unique pointer. For instance, malloc will only ever give you a unique pointer. For these cases [return: NoAlias] can provide the compiler with some useful information.

Let's take an example using a bump allocator backed by a stack allocation:

Which produces the following assembly:

It's quite a lot of assembly, but the key bit is that it:

- Has ‘ptr1’ in rdi.
- Has ‘ptr2’ in rax.
- Stores 42 into ‘ptr1’.
- Stores 13 into ‘ptr2’.
- Loads ‘ptr1’ again to return it.

Let's now add our [return: NoAlias] attribute:

Which produces:

Notice that the compiler doesn't reload ‘ptr1’ but simply moves 42 into the return register.

[return: NoAlias] should only ever be used on functions that are 100% guaranteed to produce a unique pointer, like the bump-allocating example above, or things like malloc. It is also important to note that the compiler aggressively inlines functions for performance reasons, so small functions like the above will likely be inlined into their callers and produce the same result without the attribute (which is why we had to force no-inlining on the called function).

For function calls where Burst knows about the aliasing between parameters to the function, Burst can infer the aliasing and propagate this onto the called function to allow for greater optimization opportunities. Let's look at an example. Previously the code for Bar would be:

This is because within the Bar function, the compiler did not know the aliasing of ‘a’ and ‘b’.
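A rough sketch of the kind of Bar function being described (signatures and bodies are assumed from the text: 'a' and 'b' passed by reference, with a second load from 'a' that function cloning can eliminate):

```csharp
using System.Runtime.CompilerServices;

public static class CloningExample
{
    // Kept out-of-line so the aliasing of a and b matters at the call site.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static int Bar(ref int a, ref int b)
    {
        a = 42;
        b = 13;
        return a; // without cloning info, 'a' must be reloaded after storing to b
    }

    public static int Caller()
    {
        int x = 0, y = 0;
        // Burst can see that x and y cannot alias, and can call a
        // specialized clone of Bar where that is known.
        return Bar(ref x, ref y);
    }
}
```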
This is in line with what other compiler technologies will do with this code snippet.

Burst is smarter than this though: through a process of function cloning, Burst will create a copy of Bar where ‘a’ and ‘b’ are known not to alias, and replace the original call to Bar with a call to the copy. This results in the following assembly:

Which, as we can see, doesn't perform the second load from ‘a’.

Since aliasing is so key to the compiler's ability to optimize for performance, we've added some aliasing intrinsics:

- Unity.Burst.CompilerServices.Aliasing.ExpectAliased expects that the two pointers do alias, and generates a compiler error if not.
- Unity.Burst.CompilerServices.Aliasing.ExpectNotAliased expects that the two pointers do not alias, and generates a compiler error if not.

An example:

using static Unity.Burst.CompilerServices.Aliasing;

These intrinsics allow you to be certain that the compiler has all the information that you as the user have. These are compile-time checks. When the code you write to produce the arguments for the intrinsics has no side effects, there is no runtime cost for these aliasing intrinsics. They are particularly useful when you have performance-sensitive code and want to be sure that later changes do not alter the assumptions the compiler can make about aliasing. With Burst, and the control we have over the compiler, we can provide this sort of in-depth feedback from the compiler to our users to ensure your code remains as optimized as you intended.

The Unity Job System has some built-in assumptions it can make about aliasing. The rules are:

Any struct with a [JobProducerType] (e.g. anything like IJob, IJobParallelFor, etc.) knows that any field of that struct that is a [NativeContainer] (e.g.
NativeArray, NativeSlice, etc.) cannot alias with any other field that is also a [NativeContainer].

The above is true except for fields that have the [NativeDisableContainerSafetyRestriction] attribute on them. For these fields, the user has explicitly told the Job System that the field can alias with any other field of the struct.

Any struct with a [NativeContainer] cannot have the ‘this’ pointer of that struct within the struct itself.

OK, formal definitions over - let's look at some code to better explain the above rules:

Walking through the above aliasing checks:

- a and b do not alias, since they are both [NativeContainer]s contained within a [JobProducerType] struct.
- But since c has the field attribute [NativeDisableContainerSafetyRestriction], it can alias with a or b.
- And the pointers to each of a, b, or c cannot appear within them (e.g. in this case the data backing the NativeArray cannot be the data backing the contents of the array).

These built-in aliasing rules allow Burst to perform pretty darn good optimizations for most user code, allowing the performance by default that we strive for.

Many users will write code along the lines of BasicJob below:

The code loads from three arrays, combines their results, and stores the result in a fourth array. This kind of code is great for the compiler because it allows it to generate vectorized code, making the most of the powerful CPUs we all have in our mobiles and desktop computers today.

If we look at the Burst Inspector view of the above job:

We can see the code is vectorized - the compiler has done a good job here!
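From the description (three source arrays combined into a fourth), BasicJob plausibly looks like the following sketch (field names a, b, c, and o are from the text; the rest is assumed):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

[BurstCompile]
public struct BasicJob : IJob
{
    [ReadOnly] public NativeArray<float> a;
    [ReadOnly] public NativeArray<float> b;
    [ReadOnly] public NativeArray<float> c;
    public NativeArray<float> o;

    public void Execute()
    {
        // The Job System rules guarantee a, b, c, and o do not alias,
        // so Burst can vectorize this loop outright.
        for (int i = 0; i < o.Length; i++)
        {
            o[i] = a[i] + b[i] + c[i];
        }
    }
}
```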
The compiler is able to vectorize because, as we explained above, the Unity Job System rules guarantee that each variable in a job struct cannot alias any other member of the struct.

But there are cases seen in the wild where developers build up data structures for which Burst has no information on how the aliasing works, for example:

In the above example we've just wrapped the data members from the BasicJob in a new struct, Data, and stored this struct as the only variable in the parent job struct. Let's see what the Burst Inspector shows us now:

Burst has been smart enough to vectorize this example - but at the cost of having to check that none of the pointers being used overlap at the start of the loop.

This is because the Job System aliasing rules only give Burst guarantees about direct variable members of a struct - not anything derived from them. So Burst has to assume that the native arrays backing the variables a, b, c, and o could be the same memory - meaning the complicated and performance-draining dance of 'Do any of these pointers actually equal each other?'. So how can we fix this? By using our [NoAlias] attribute to explain the layout to Burst!

In the WithAliasingInformationJob job above, we can see that there are new [NoAlias] attributes set on the fields of Data. These [NoAlias] attributes tell Burst that:

- a, b, c, and o do not alias with any other member of Data that has a [NoAlias] attribute.
- So each variable does not alias with any other variable in the struct, because they all have the [NoAlias] attribute.

And again we'll look at the Burst Inspector:

With this change we have removed all those expensive runtime pointer checks, and can just get on with running the vectorized loop - nice!

Using the new Unity.Burst.CompilerServices.Aliasing intrinsics will ensure that you never accidentally change the code in a way that affects aliasing again in the future.
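A sketch of how the intrinsics can be dropped into such a job (illustrative only; the unsafe-pointer plumbing shown here is one assumed way to obtain comparable pointers from NativeArrays):

```csharp
using static Unity.Burst.CompilerServices.Aliasing;
using Unity.Burst;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Jobs;

[BurstCompile]
public unsafe struct CheckedJob : IJob
{
    [NoAlias] public NativeArray<float> a;
    [NoAlias] public NativeArray<float> o;

    public void Execute()
    {
        // Compile-time assertion: Burst raises an error if it cannot
        // prove these two buffers never overlap.
        ExpectNotAliased(a.GetUnsafeReadOnlyPtr(), o.GetUnsafePtr());

        for (int i = 0; i < o.Length; i++)
        {
            o[i] = a[i] * 2.0f;
        }
    }
}
```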
These checks do not cause a compiler error in the above job - which means, as we've already seen, Burst has enough information thanks to the added [NoAlias] attributes to detect and optimize this case.

While this is a contrived example for the sake of conciseness in this blog, these kinds of aliasing hints can provide very real performance benefits in your code. As we always recommend, using the Burst Inspector while iterating on code modifications will ensure that you keep stepping towards a more optimized future.

With the release of Burst 1.3.0 we have provided you another set of tools to get the maximum performance from your code. With the extended and enhanced [NoAlias] support you can precisely control how your data structures alias. And the new compiler intrinsics give you meaningful insight into how the compiler understands your code.

If you haven’t started with Burst yet and would like to learn more about our work on the new Data-Oriented Technology Stack (DOTS), head over to our DOTS pages, where we will be adding more learning resources and links to talks from our teams as more becomes available.

We always welcome your feedback - join the forum here to let us know how we can help you level up your Burst code in future.

>access_file_
1378|blog.unity.com

3 tips for running a killer UA campaign on OEM and carrier traffic

A few years ago, relying on tried and tested channels such as search and social was enough for growth managers to succeed. But with increased market saturation and competition, marketers today can’t afford to rest on their laurels. Forward-thinking marketers are constantly looking for ways to unlock the next level of growth, and many are recognizing the potential of on-device distribution channels to do just that. Once only accessible through direct partnerships with OEMs and telecom operators, promoting your app as a native part of the device experience has become easier and more accessible than ever.

Growth marketers from verticals ranging from food delivery to news aggregation and streaming are using platforms that connect them with on-device inventory to reach users at high-impact, high-intent moments directly on their devices. That said, this ecosystem has several notable differences from traditional channels like social and search, and as a result measuring and achieving success requires a shift in mindset. Below, we go through three top tips for maximizing performance with on-device distribution channels.

Utilize a multi-bidding strategy

Like with any user acquisition channel, the first step to launching a campaign is determining your KPI and bidding strategy. Typically, this means calculating a single bid according to your KPI and running it across the entire channel. However, with a single bid, campaign results end up being binary: you either meet it or you don’t. For the campaigns that don’t, advertisers often pull their bids (in this case, on OEM and carrier traffic) from the device segments that fail to meet the goal - ultimately excluding a chunk of traffic.
While a more exclusionary strategy like this aims to bring up the campaign’s overall average KPI, it can result in inefficient bidding and exclude segments that are worth less - but ultimately not worthless.

Unlike some social and search channels, on-device channels allow for a multi-bidding strategy – i.e. bidding differently for each device segment. Doing so enables you to cover a wider range of users by including user bases that could still provide value, just at a lower bid. The key idea here is to bid on all segments – the most valuable and the less valuable ones.

Let’s take a look at this in practice. If your KPI is “$20 per first order” and you see that Segment A costs $10 and Segment B costs $22, chances are you’d stop showing ads to Segment B, since you’re effectively losing money. On any traditional channel that doesn’t enable multi-bidding, this would make sense. However, with a multi-bidding strategy, you’d be able to raise bids on Segment A and lower them on Segment B, enabling you to even out and perfectly hit the $20 KPI on both segments, all while scaling traffic from the more lucrative Segment A and minimizing traffic from Segment B without excluding it entirely. In addition to being more cost-effective, multi-bidding also facilitates scale, as you’re not limited to just one device segment.

It’s best to start out with one blanket bid, and then adjust it after 2-3 weeks with a multi-bidding strategy on the segments that don’t match the initial KPI.

Measure performance with a long-term view

When assessing post-install behavior, it is important to understand the unique nature (and therefore value) of on-device installs. In contrast to campaigns on traditional channels, low engagement post-install on OEM and carrier traffic is not necessarily an indicator of a poor campaign. In fact, the strongest value of on-device channels is often their ability to deliver users with high LTVs over the long term, as opposed to demonstrating immediate quick returns.
For that reason, it’s important to measure performance at D30, D60, and D90 when evaluating your campaigns on this channel.

A food delivery app, for instance, might convert a user to install the app during device onboarding, but they might not open or interact with the app in a meaningful way for days or weeks - after all, it’s unlikely the user will be immediately interested in ordering a meal the moment they unbox their new device. However, when the time comes for a family meal or date night, they are likely to use this app, because it’s already installed on their device.

A/B test the right elements

No matter the channel, it’s always important to iterate and improve performance through A/B testing. Though there is generally less content to A/B test on on-device channels compared to traditional channels, that doesn’t make it a less critical pillar of your optimization strategy.

Typically, on-device channels will offer a variety of placements, each with different value propositions and rates of engagement. Testing new placements on a regular basis is crucial - you may find that a certain placement performs better for the special goal or message you’re trying to push during Christmas, versus the placement you rely on year-round. For example, special promotion campaigns such as free shipping for a shopping app, or a free first ride for a taxi app, tend to deliver better results on full screen offers and on-device notifications, since these placements offer the most real estate for messaging.

In addition to the placement itself, like with any other channel, the creatives used within the placement can have a significant impact on performance. For on-device channels specifically, there isn’t always a creative to A/B test - some placements, such as the device setup manager, simply show the app’s logo and name.
But even in these situations, using an icon that’s updated according to the season can boost results.

Full screen offers and smart notification banners offer the most possibilities for creative tinkering. Because you’re running campaigns directly on-device, your primary goal is to design a creative that’s going to stand out in a user’s notification tray. The key elements to A/B test are CTAs, descriptions, background colors, and app screenshots.

Tweaking colors is especially relevant with on-device channels - since carriers often have strong and recognizable visual brand languages, testing creatives that match those colors can help increase conversion rates. For CTAs, it’s best to highlight the primary function of the app and the value proposition users will most enjoy, and then A/B test variants of that action - for example, instead of writing “sign up” or “open now”, try “book a ride” and test it against “ride now”. The screenshots displayed in the creative should showcase the app’s primary function that you’re highlighting in the copy. We recommend iterating on creatives every 2 weeks in order to stay ahead of any possible ad fatigue.

With the challenge of getting discovered by high-value users tougher than ever, trying new channels that offer an innovative advertising experience is key. Adding on-device marketing to your media mix is a great case in point, allowing you to make your app a native part of the device experience. Just remember not to treat it the same as your other channels: adopt a longer-term mindset that prioritizes late retention, utilize a multi-bidding strategy, and A/B test the right elements.

Check out our other popular blogs on Aura, including Our Guide to App Distribution, How To Market Your App Successfully, How To Master App Discovery For Your Fitness App, and more.

>access_file_
1379|blog.unity.com

How to optimize user flow conversion to level up your offerwall performance

If we look at key areas of a UA manager’s ad playbook, for example video ad campaigns, a significant amount of effort is put into optimizing conversion rates by improving every aspect of the ad creative. However, until the recent emergence of rich media intermediate pages within the offerwall user flow, creative optimization was not a focus for offerwall advertisers, who focused instead on adjusting their bids. The idea behind increasing bids is to get a higher spot on the publisher’s offerwall, to increase visibility, attract users with higher rewards, and therefore drive engagement from users. However, advertisers can only raise their bids to a certain limit based on their ROAS goals.

Now, with ironSource there is a huge opportunity for advertisers to improve their offerwall campaigns beyond raised bids - using a similar approach to the creative optimization seen in other ad units. The goal is to optimize each step of the user journey in offerwall campaigns to increase conversions. It has several similarities to app store optimization (ASO), with both aiming to help your app stand out from the crowd, capture users’ attention, and drive installs. Below we run through each step of the offerwall user flow and explain the different ways you can optimize them to take your ironSource offerwall performance up a notch.

Step 1: Clicking on the offer

At the first stage - getting the user to click on your offer - it’s important to remember that your offer is shown on a list with a number of other competitive offers. Therefore, the focus of your conversion optimization should be on standing out from the crowd and capturing attention. To that end, the offer's headline can make a significant impact: you have two lines of text at your disposal, which you should optimize through A/B tests of different variations.
As a rule of thumb, make sure to localize the language based on the geo of your target audience. In addition to the text, also experiment with the icon’s design, and test whether your campaign performs better with the icon as a GIF or a still image. The icon is the first thing users will see, so make sure it’s eye-catching.

Benchmark*: Out of the total campaign impressions, 2.3% of users generally click on an offer.

Step 2: Clicking on the intermediate page

Once a user clicks on an offer from the offerwall, they are taken to a new page not dissimilar to an app store listing: it shows a video of the advertiser's game and instructions for completing the offer. At this stage, you are just two clicks away from converting the user to install your app and begin completing your event. Therefore, the aim here is to make the user's perception of your game and the post-install event as appealing as possible. To optimize this stage, experiment with different videos and images - just like ASO, a small change in design can have a big impact. Also play around with the offer's instructions: they should be easy to understand, and need to accurately reflect the difficulty and time required to complete the event. Because users see the reward on the intermediate page too, make sure to align the reward with the time investment required from them.

Benchmark: Of the users who converted to this stage, an average of 55% click to initiate the offer and go to the app store to install.

Step 3: Installing the game

Once you convert the user on the intermediate page, they are taken to the app store listing of your game, where they can see reviews, information about the game, and screenshots of the gameplay. Here, you are one click away from the user installing your app, and all the core principles of ASO can be applied. For instance, having good reviews is important for encouraging users to install.
Apart from having a great game, good reviews (or at least avoiding bad reviews) can also be achieved by ensuring your offerwall events are accurately represented in your description. To learn more about how to leverage positive app store reviews, check out our eBook here. The icon on your app store listing is also very important, so be sure to A/B test different ways to optimize it. As a rule of thumb, avoid cramming too many elements together - a simpler look tends to work better - and choose one simple visual to be the core component of the icon, like your game's main character. Try to make the background a soft color, and contrast that with a more striking color for the character or symbol you include.

Learn more about ASO on our webinar here.

Benchmark: 50% of users who make it to the app store listing decide to install.

Step 4: Completing the task

Once you convert users and they install your app to begin the event, focus on ensuring the maximum number of users actually complete it. In the realm of user flow optimization, this can be achieved by ensuring the event is accurately described. If a user starts your event and sees that it requires more effort and time than anticipated, the likelihood of them becoming frustrated and ultimately churning increases. Choosing the right event is also very important: aim for an event that is appealing and challenging enough to create strong engagement with the game, while also ensuring it is achievable. Learn more about choosing the right event here.

Another way to boost completion rates is sending pop-up notifications to users who stopped playing, informing them that they can still complete the event and enjoy its rewards. Doing so also increases retention.

Benchmark: 40% of users who download the game and begin the event make it to the end and receive their reward in the original game.

*Benchmarks vary according to the game genre and the depth/complexity of the campaign.

Ready to acquire high-quality users at scale?
Get started with the ironSource offerwall now. And download our new eBook, The Ultimate Offerwall Guide, for more insights direct from ironSource’s team of experts.

>access_file_
1380|blog.unity.com

Enhancing mobile performance with the Burst compiler

As part of a recent session at Unite Now, we discussed how technology in the Burst compiler enables developers building projects with Unity to take advantage of the Arm Neon instruction set. You can use the Burst compiler when targeting Android devices to improve the performance of Unity projects on Arm architecture.

Unity and Arm have formed a partnership to enhance the mobile game development experience for the billion-plus Arm-powered mobile devices in the Android ecosystem.

For game developers, performance is paramount. Year after year, Arm invests in improving its CPU and GPU technologies to provide the advances in performance and efficiency needed to build richer experiences. Recently, Arm announced two new products: Cortex-A78, which provides greatly improved power efficiency, and the even more impressive Cortex-X1. These hardware developments are complemented by advances in compiler technology for the Arm architecture. Compilers ensure that when you develop high-performance games, they are translated and optimized into efficient binaries that make the best use of the Arm architecture’s features.

Burst is an ahead-of-time compiler technology that can be used to accelerate the performance of Unity projects made using the new Data-Oriented Technology Stack (DOTS) and the Unity Job System. Burst works by compiling a subset of the C# language, known as High-Performance C# (HPC#), to make efficient use of a device’s power by deploying advanced optimizations built on top of the LLVM compiler framework.

Burst is great for exploiting hidden parallelism in your applications. Using Burst from a DOTS project is easy, and it can unlock big performance benefits in CPU-bound algorithms. In this video, you can see a side-by-side comparison of a scripted run-through in a demo environment with and without Burst enabled.

The demo shows three examples of simulations using Unity Physics.
You will see that the Burst-compiled code is able to compute frames with higher numbers of physics elements faster, allowing for better performance, less thermal throttling, lower battery consumption, and more engaging content.We say that Burst brings performance for free, but how does that work?Burst transforms HPC# code into LLVM IR, an intermediate language used by the LLVM compiler framework. This allows the compiler to take full advantage of LLVM’s support for code generation for the Arm architecture to generate efficient machine code optimized around the data flow of your program. A diagram of this flow is shown below.Mike Acton has given a talk called “Data-oriented design and C++,” which features the key line “know your hardware, know your data” as a means of achieving maximum performance. Burst works well because it gives visibility to the constraints on array aliasing that are guaranteed by the HPC# language and the DOTS framework, and it can make use of LLVM’s knowledge of your hardware architecture. This enables Burst to make target-specific transformations based on the properties of scripts written against the Unity APIs.You can use Burst to compile C# scripts that make use of the Unity Jobs System in DOTS. This is done by adding the [BurstCompile] attribute to your Job definition:We can use the Burst Inspector, found in the Jobs menu, to see what code will be generated. 
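The post's original code listing does not survive in this capture. As a minimal sketch of the kind of [BurstCompile] job definition described here — the job name, fields, and loop body are illustrative assumptions, not the code from the talk — it might look like this:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Hypothetical example: a simple Burst-compiled job.
// The [BurstCompile] attribute tells Burst to compile Execute() as HPC#.
[BurstCompile]
public struct MultiplyJob : IJob
{
    [ReadOnly] public NativeArray<float> Input;
    public NativeArray<float> Output;
    public float Factor;

    public void Execute()
    {
        // A tight per-element loop like this is the kind of code Burst
        // can vectorize with Neon instructions on Armv8-A targets.
        for (int i = 0; i < Input.Length; i++)
        {
            Output[i] = Input[i] * Factor;
        }
    }
}
```

Scheduling the job is unchanged (e.g. `new MultiplyJob { ... }.Schedule()`); the attribute alone is what opts the job into Burst compilation, and the generated assembly can then be examined in the Burst Inspector.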
Note that for this demonstration, we have disabled Safety Checks and are using Burst 1.3.3. In the Burst Inspector that appears, we enable code generation for Armv8-A by selecting the ARMV8A_AARCH64 target. We can now see the AArch64 code that will be generated for our C# loop, including a core loop using the Neon instruction set.

For more details on using the Burst compiler, please see the instruction manual, check out this Unite Now talk, where we go through the steps above in more detail, or head to the forums to get more information or ask questions about using Burst in your next project.
