// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 76 of 85

[ 2018 ]

20 entries
1502|blog.unity.com

Making the most of TextMesh Pro in Unity 2018

Whether you’re working on an FPS, a puzzle game or a VR experience, one of the aspects you have to take care of is the user interface. Luckily, as of March 2017, TextMesh Pro has joined the Unity family. This means that making your UI in Unity look great has become much easier and faster!

For those of you who haven’t followed these developments, TextMesh Pro is a replacement for Unity’s default text components. It is just as performant (more so, in some cases) and it uses a completely different rendering technique called Signed Distance Field (SDF) rendering, originally used by Valve in Team Fortress 2. Along with the power to make your text look great without much effort, TextMesh Pro also gives you much more advanced control over it, via the Inspector or scripting. In this post, we’re going to take a look at how to make the most out of this incredible tool!

There are two main reasons why it is a great idea to start your projects using TextMesh Pro. First, visual improvements: thanks to SDF rendering, it’s easy to change the way your text looks without having to recreate its font. Every TextMesh Pro component has a material attached to it that you can tweak in order to modify the style of your text. Second, better control: the TextMesh Pro component includes all the variables you can find in the ordinary text component, plus a lot more. And if this wasn’t enough, just know that TextMesh Pro is currently used by over a quarter of a million developers around the world!

Working with the default Unity text, you might have noticed that stretching or resizing the object sometimes causes it to look blurry. This is because the text doesn’t hold information about what it should look like when resized, and Unity therefore has to “improvise” and attempt to generate the missing pixels on the fly. Because of the different rendering technique TextMesh Pro uses, this is no longer an issue: SDF is based on the principle of rendering a Font Atlas at a high resolution, so the font always has information about what a character should look like when resized.

With TextMesh Pro you can import any font file and create your own font asset (Window > TextMesh Pro > Font Asset Creator). This allows you to choose the resolution of its Font Atlas (which will determine how effective SDF rendering will be for your text). Obviously, the lower the resolution you choose, the faster the Font Atlas will be generated.

Since the font has information about how it looks at different sizes, it can also reconstruct its Outline and Drop Shadow from the Font Atlas. Simply tweak the material properties and watch your text change its look entirely!

As we have seen, TextMesh Pro offers great-looking text. But what’s the point of nice text if you have no control over it? The TextMesh Pro component has options that let you customize font size, spacing, alignment and kerning, or enable Autosize and fit your text into a Container. The last two, in particular, give you great control when working with different platforms or different languages, as they allow your text to autosize depending on a given text container without the need for any scripting.
However, if you do wish to change these settings at runtime, you can access all the variables in the TextMesh Pro components through the TextMeshPro API. For an extra layer of customization, you can also add the Text Info Debug Tool component to your text object to visually represent characters, words, links, lines, etc.

If you want to save time reformatting your text every time you insert a header, a title, a quote, etc., you can set up a Style Sheet for any specific purpose. One example could be injecting a decoration into a header. To create a Style Sheet, select Create > TextMesh Pro > Style Sheet. You can set this new asset as your default Style Sheet from Edit > Project Settings > TextMesh Pro Settings.

In addition to controlling the way the text looks from the Inspector or a script, you can control it from the text field itself. If you are familiar with HTML or XML, you can customize the look of your text as you are typing. If you’re not, just read the guide on how to use Rich Text with TextMesh Pro to get started! This is particularly useful in cases where you want to use multiple styles, sizes or materials in the same text object.

Another way of making TextMesh Pro look great is to apply a Surface Shader to your text. This allows lighting in the scene to affect the text. In the example below, a few real-time point lights are moving around the scene and affecting the text. The material properties give you the option to customize settings like Face, Outline, Bevel, Lighting, BumpMap, EnvMap, Glow, and Debug settings.

Once you’ve created a material that you’re happy with, you can create a Material Preset that can be reused at any point, specifically for the Font Asset in use. Do so by right-clicking on the material name and selecting ‘Create Material Preset’. This creates an asset that you can select from the TextMesh Pro component under Font Settings > Material Preset.

If you have generated a Font Atlas that is missing certain characters, TextMesh Pro will make any such character fall back to a default glyph whenever it is typed. You can change this glyph by going to Edit > Project Settings > TextMesh Pro Settings. Alternatively, you can set up one or more Font Assets to which TextMesh Pro will fall back if a character is not found in the primary Atlas. For optimization purposes, it makes sense to keep the main Font Atlas at a higher resolution, and all the fallback Atlases at a lower one.

In addition to that, you can use the TextMesh Pro Settings to set up Resources paths for Font & Materials or Sprite Assets. When using Rich Text, you can insert a Sprite depending on which are available in the primary Sprite Asset you have defined in the Settings. The easiest way of doing so is by importing a Sprite Sheet, slicing it in the Sprite Editor, then right-clicking on the asset and selecting Create > TextMesh Pro > Sprite Asset. From this new asset, you can customize settings such as offset or pivot. Likewise, you can set up a series of Fallback Sprite Assets to fall back to if a Sprite is not found in the primary Sprite Asset.

In terms of performance, as already mentioned, TextMesh Pro works similarly to the default text: it still renders on quads, so it is as efficient as using a bitmap font. There is also no runtime memory allocation: TextMesh Pro only allocates space for the text objects when you press play.
If you increase the number of characters by a significant amount, only one reallocation is made; if you decrease the number of characters, no reallocation is made unless the count drops by at least 256 characters. And in terms of visual improvements, you get better results for styles like Outline and Drop Shadow simply because of the SDF rendering technique TextMesh Pro uses.

Most of what was mentioned in this post is available as example scenes when you import TextMesh Pro into your project (from Window > Package Manager > TextMesh Pro > Install, from Unity 2018.1 onwards). I strongly recommend taking a look at them before you start using TextMesh Pro, as their content might answer most of the questions you have. However, if you want to provide feedback or still have questions, you are welcome to do so via the Forums.
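As a closing illustration of the scripting control described above, here is a minimal sketch of driving a TextMesh Pro component at runtime. It assumes a TextMeshProUGUI component on the same GameObject; the class name and specific values are illustrative only:

using TMPro;
using UnityEngine;

public class TitleFlash : MonoBehaviour
{
    TextMeshProUGUI label;

    void Awake()
    {
        label = GetComponent<TextMeshProUGUI>();
    }

    void Start()
    {
        // Rich text tags work directly in the text string.
        label.text = "Level <b>1</b> - <color=#FFD700>Get Ready!</color>";

        // Autosize the text to fit its container, within the given bounds.
        label.enableAutoSizing = true;
        label.fontSizeMin = 18;
        label.fontSizeMax = 72;

        // Spacing and alignment are plain properties, just like in the Inspector.
        label.characterSpacing = 4f;
        label.alignment = TextAlignmentOptions.Center;
    }
}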

>access_file_
1503|blog.unity.com

Measuring time-to-engagement in interactive ads

The recent emergence of interactive ads is causing an industry transition from passive to active ad experiences, opening up a realm of optimization possibilities for advertisers that didn’t exist before. The rich data points interactive ads provide give advertisers the ability to understand and track milestones throughout the ad experience - such as how many users choose to initially engage, how many choose to engage with subsequent interactive touchpoints, how many convert and when - data that is far more useful than just an impression or a click. To fully leverage this data, our team at ironSource has begun using new metrics to better measure a user’s progression through interactive ads. Enter TTE (Time to Engagement).

What is Time to Engagement?

We noticed that some of the interactive ads we were building for advertisers were resulting in low completion rates and low conversion rates. At first, our team thought that users were dropping out after the first or second touchpoint because the interactive ads themselves were too long. But even when we reduced the length of the ad, we didn’t see any significant improvements. Since shortening the entire ad didn’t work, we looked to optimize individual touchpoints. That’s when we came up with TTE, or Time to Engagement.

TTE measures the time it takes for users to perform an action within an ad - from the moment a choice screen appears until the user reacts to it. Our goal was to understand how we could control the length of the ad experience by diagnosing which touchpoints were resulting in the most wasted time.

What did we learn?

We conducted a series of tests, analyzing over 1B ad impressions from tier-one countries, across multiple in-ad touchpoints and different verticals. Here is what we learned:

Clarity is key

First, we looked at the time between the moment the choice screen appeared and the user’s first action. The weighted average was 5.6 seconds, with 85% of engaged users engaging with an interactive ad within the first 10 seconds. Clearly, we were wasting precious time at the first touchpoint, likely because users were either failing to understand that the ad itself was interactive, or were unaware of what action was required of them.

Too many choices, too little time

Next, we tried to determine what happened at the second and third touchpoints, which would tell us where else in the in-ad funnel users were dropping off. We found that with each additional choice (i.e. the number of choices presented to a user at a given point within an interactive ad), the TTE - the time it took for users to engage with the ad - increased. The more options users were presented with, the longer it took them to engage - that’s if they engaged at all.

Short TTE = higher conversion

Lastly, our analysis pointed towards a strong correlation between overall conversion and TTE: we discovered that users who ultimately convert engage with creatives earlier than those who don’t, as they fully complete the ad and get all of the information about the product. Or, put differently: users with a short TTE are more likely to convert. In one ad we looked at, the group who converted had, on average, a 60% shorter TTE than those who didn’t convert. The lower the TTE, the higher the completion rate - and, accordingly, the lower the dropout rate, as the time frame of the ad is used more efficiently.
How to lower your TTE

Once we understood how long it took for users to engage with touchpoints in interactive ads, we were able to implement a number of changes. In one case, we added a dark overlay in order to make the UI clearer and more focused. We also added a clear and straightforward call to action (“tap to choose”) with pulsating arrows to show users the ad itself was interactive. Users then clearly understood what actions were required of them.

In the end, TTE not only helped us understand how long it takes a user to engage with an interactive creative, but also how different triggers affect a user’s reaction time. That allowed us to provide users with a richer experience while simultaneously decreasing the time users spent not interacting with the ad at all - improving overall conversion.

Conclusion

The rise of in-ad data has brought about new ways to optimize ads that were never possible before. With access to the rich data points provided by interactive ad units, and with the creation of metrics like TTE, advertisers can fully exploit the limited time before the exit button appears on an ad. As user behavior becomes increasingly transparent, and by diagnosing which touchpoints users get stuck on or engage with the most, advertisers can improve an ad’s overall performance KPIs.
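For a concrete sense of the metric itself, here is a minimal sketch of how a TTE sample could be recorded in a Unity-based playable ad. This illustrates the definition above, not ironSource’s implementation; the class and method names are hypothetical:

using UnityEngine;

// Hypothetical helper: records the time from a choice screen appearing
// until the user's first interaction with it (the TTE for that touchpoint).
public class TouchpointTimer : MonoBehaviour
{
    float shownAt;
    bool waiting;

    // Call this when the choice screen becomes visible.
    public void OnChoiceScreenShown()
    {
        shownAt = Time.realtimeSinceStartup;
        waiting = true;
    }

    // Call this from the first button/tap handler on that screen.
    public void OnFirstInteraction()
    {
        if (!waiting) return;
        waiting = false;
        float tteSeconds = Time.realtimeSinceStartup - shownAt;
        Debug.Log($"TTE for this touchpoint: {tteSeconds:F2}s");
        // In a real ad unit, this value would be reported to an analytics backend.
    }
}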

>access_file_
1506|blog.unity.com

2018.3 terrain update: Getting started

We now have a team dedicated to terrain, and our initial efforts will soon be publicly available! Unity 2018.3 will ship with an update to the terrain system. This update features improved tools and performance by taking better advantage of the GPU. It also adds support for the HD and LW render pipelines, while remaining backward compatible with the built-in render pipeline and the existing Unity Terrain system. Get the Unity 2018.3 beta now for early access to the updates! Please be aware that the user interface and the API are both still subject to change, as the beta is still under active development.

On the performance side, we added a GPU-instanced render path for terrain. In most cases, instancing yields a dramatic reduction in the number of draw calls issued. Many of our tests saw more than a 50% reduction in CPU costs (though, of course, actual numbers will depend on your platform and use case). You can choose this new render path by enabling ‘Draw Instanced’ in the Terrain settings.

When enabled, Unity transforms all of the heavy terrain data, like height maps and splat maps, into textures on the GPU. Instead of constructing a custom mesh for each terrain patch on the CPU, we can use GPU instancing to replicate a single mesh and sample the height map texture to produce the correct geometry. This reduces the terrain CPU workload by orders of magnitude, as a few instanced draw calls replace potentially thousands of custom mesh draws.

As a nice side effect, it also improves our load times! Not only can we skip building all of those custom meshes, but we can also use the GPU to build the basemap (the pre-blended LOD texture); the GPU is much faster at that kind of thing. This also means that if you have your own custom terrain shader, you can now override the ‘build basemap’ shader and generate matching basemap LOD textures.

Instancing also improves the appearance of terrain normals; we decouple the terrain mesh normals from the geometry by storing them in a normal map texture that is generated from the heightmap and sampled in the pixel shader. This means the normals are independent of the mesh LOD level. Consequently, you can increase the ‘pixel error rate’ to decrease vertex cost, with fewer artifacts.

We also developed terrain shaders for both the HD and LW render pipelines, available in package versions 4.0.0 or later, with support for instanced rendering and per-pixel normals. The HD shader was further enhanced to support new features such as height and density blend modes, normal scaling, and texture-controlled surface metalness and smoothness. The HD terrain shader is limited to a single pass, but it does support blending up to 8 terrain layers in that one pass.

On the editor side, we have exposed a script API for building your own custom terrain tools, along with a suite of utility functions you can use to easily implement seamless cross-tile sculpting and painting operations on the GPU. The new TerrainAPI includes TerrainPaintTool, a base class for terrain tools, and TerrainPaintUtility, containing utility functions for modifying terrain data (a skeletal sketch of such a custom tool appears at the end of this entry). Applying these changes, we converted all of the existing terrain tools to GPU operations.
Aside from making these tools much faster, this also gave us larger brush sizes, improved brush previews, and the ability to paint across terrain tile borders with automatic seam-stitching. We’ve also begun experimenting with brush features such as brush rotation and randomization, and with more advanced painting tools like heightmap and mesh stamping, clone brushes, and more. These painting features are currently not in 2018.3, but we are making them available via our GitHub Terrain Tools project.

We made it easier to work with multiple terrain tiles. Aside from seamless painting between terrains, you can now manage the connections between neighboring terrains automatically; previously, this required writing a script to connect neighbors manually. Enable ‘Auto connect’ in the Terrain Settings, and the Terrain will automatically connect to its neighbors with the same grouping ID. When expanding your existing terrain, you can use the new ‘Create Neighbor Terrain’ tool to quickly add matching terrain tiles along empty borders.

We are also working to make resizing and resolution changes less destructive. In 2018.3, the heightmap and splat maps will resample when you change their resolution, instead of the previous behavior of clearing the data and losing all of your work. We are working towards improving all resizing operations in the near future.

To simplify workflows, we also created two new terrain-related asset types: the TerrainLayer asset and the Brush asset. TerrainLayer lets us define terrain materials independently of the terrain object, so that we can easily track the same material across multiple terrains. This helps with seamless painting and material modification. We also extended the TerrainLayer asset to support “mask map” textures, which can be used for arbitrary shading purposes, and a script interface to provide shader-dependent custom GUI for the TerrainLayer asset. The Brush asset represents the GPU brush shapes used by painting and sculpting tools. Brushes are now defined by a texture and a radial falloff curve, which makes it much easier to create and tweak brush shapes (previously, this required dropping arcanely crafted image files into a specially named folder).

We also added support for the R16 texture format (a single-channel 16-bit format) to Unity. This allows us to avoid 8-bit quantization on our brush shapes, which can cause undesirable ‘terracing’ effects when used as a heightmap stamp.

Our terrain team is just getting started and development continues. Please send us feedback in the World Building forum!
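As referenced above, here is a skeletal sketch of a custom tool built on the new TerrainAPI. TerrainPaintTool and TerrainPaintUtility are the types named in this post; the namespaces, overrides, and utility calls below reflect the 2018.3 experimental API as best I can reconstruct it and are still subject to change during the beta, so treat this as a shape rather than a contract:

using UnityEngine;
using UnityEngine.Experimental.TerrainAPI;   // PaintContext, BrushTransform (assumed location)
using UnityEditor.Experimental.TerrainAPI;   // TerrainPaintTool, TerrainPaintUtility (assumed location)

// Illustrative custom tool: applies the built-in raise/lower pass under the brush.
public class ExampleRaiseTool : TerrainPaintTool<ExampleRaiseTool>
{
    public override string GetName() { return "Examples/Raise Height"; }

    public override string GetDesc() { return "Skeletal GPU paint tool sketch."; }

    public override bool OnPaint(Terrain terrain, IOnPaint editContext)
    {
        // Work out where the brush lands, then begin a heightmap paint operation;
        // PaintContext handles sampling across neighboring tiles (seamless painting).
        BrushTransform brushXform = TerrainPaintUtility.CalculateBrushTransform(
            terrain, editContext.uv, editContext.brushSize, 0.0f);
        PaintContext ctx = TerrainPaintUtility.BeginPaintHeightmap(
            terrain, brushXform.GetBrushXYBounds());

        // Run the built-in paint shader with our brush texture and strength.
        Material mat = TerrainPaintUtility.GetBuiltinPaintMaterial();
        mat.SetTexture("_BrushTex", editContext.brushTexture);
        mat.SetVector("_BrushParams", new Vector4(editContext.brushStrength, 0, 0, 0));
        TerrainPaintUtility.SetupTerrainToolMaterialProperties(ctx, brushXform, mat);
        Graphics.Blit(ctx.sourceRenderTexture, ctx.destinationRenderTexture, mat,
            (int)TerrainPaintUtility.BuiltinPaintMaterialPasses.RaiseLowerHeight);

        // Apply the result back to the terrain tiles, with undo support.
        TerrainPaintUtility.EndPaintHeightmap(ctx, "Example Raise Height");
        return false;
    }
}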

>access_file_
1508|blog.unity.com

Optimizing loading performance: Understanding the Async Upload Pipeline

Nobody likes loading screens. Did you know that you can quickly adjust Async Upload Pipeline (AUP) parameters to significantly improve your loading times? This article details how meshes and textures are loaded through the AUP. Understanding this could help you speed up loading significantly - some projects have seen over 2x performance improvements! Read on to learn how the AUP works from a technical standpoint and what APIs you should be using to get the most out of it. The latest, most optimized implementation of the Async Upload Pipeline is available in the 2018.3 beta.

Download 2018.3 Beta Today

First, let’s take a detailed look at when the AUP is used and how the loading process works. Prior to 2018.3, the AUP only handled textures. Starting with the 2018.3 beta, the AUP loads both textures and meshes, with some exceptions: textures that are read/write enabled, and meshes that are read/write enabled or compressed, will not use the AUP. (Note that Texture Mipmap Streaming, introduced in 2018.2, also uses the AUP.)

During the build process, the Texture or Mesh object is written to a serialized file, and the large binary data (texture or vertex data) is written to an accompanying .resS file. This layout applies to both player data and asset bundles. The separation of the object and its binary data allows for faster loading of the serialized file (which will generally contain small objects), and it enables streamlined loading of the large binary data from the .resS file afterwards. When the Texture or Mesh object is deserialized, it submits a command to the AUP’s command queue. Once that command completes, the Texture or Mesh data has been uploaded to the GPU and the object can be integrated on the main thread.

During the upload process, the large binary data from the .resS file is read into a fixed-size ring buffer. Once in memory, the data is uploaded to the GPU in a time-sliced fashion on the render thread. The size of the ring buffer and the duration of the time slice are the two parameters you can change to affect the behavior of the system.

The Async Upload Pipeline runs the following process for each command:

1. Wait until the required memory is available in the ring buffer.
2. Read data from the source .resS file into the allocated memory.
3. Perform post-processing (texture decompression, mesh collision generation, per-platform fixup, etc.).
4. Upload in a time-sliced manner on the render thread.
5. Release the ring buffer memory.

Multiple commands can be in progress simultaneously, but all must allocate their required memory out of the same shared ring buffer. When the ring buffer fills up, new commands will wait; this waiting will not cause main-thread blocking or affect frame rate, it simply slows the async loading process.

To take full advantage of the AUP in 2018.3, there are three parameters that can be adjusted at runtime:

QualitySettings.asyncUploadTimeSlice - The amount of time in milliseconds spent uploading texture and mesh data on the render thread each frame. When an async load operation is in progress, the system will perform two time slices of this size. The default value is 2ms. If this value is too small, you could become bottlenecked on texture/mesh GPU uploading; a value too large, on the other hand, might result in framerate hitching.

QualitySettings.asyncUploadBufferSize - The size of the ring buffer in megabytes. When the upload time slice occurs each frame, we want to be sure there is enough data in the ring buffer to utilize the entire time slice. If the ring buffer is too small, the upload time slice will be cut short. The default was 4MB in 2018.2 but has increased to 16MB in 2018.3.

QualitySettings.asyncUploadPersistentBuffer - Introduced in 2018.3, this flag determines whether the upload ring buffer is deallocated when all pending reads are complete. Allocating and deallocating this buffer can cause memory fragmentation, so it should generally be left at its default (true). If you really need to reclaim memory when you are not loading, you can set this value to false.

These settings can be adjusted through the scripting API or via the Quality Settings menu (see the sketch at the end of this entry).

Let’s examine a workload with lots of textures and meshes being uploaded through the Async Upload Pipeline using the default 2ms time slice and a 4MB ring buffer. Since we’re loading, we get two time slices per render frame, so we should have 4 milliseconds of upload time. Looking at the profiler data, we only use about 1.5 milliseconds. We can also see that immediately after the upload, a new read operation is issued now that memory is available in the ring buffer. This is a sign that a larger ring buffer is needed.

Let’s try increasing the ring buffer; and since we’re in a loading screen, it is also a good idea to increase the upload time slice. With a 16MB ring buffer and a 4-millisecond time slice, we can see that we are spending almost all our render thread time uploading, and just a short time between uploads rendering the frame.

We measured the loading times of the sample workload with a variety of upload time slices and ring buffer sizes. Tests were run on a MacBook Pro, 2.8GHz Intel Core i7, running OS X El Capitan; upload speeds and I/O speeds will vary on different platforms and devices. The workload is a subset of the Viking Village sample project that we use internally for performance testing. Because there are other objects being loaded, we aren’t able to isolate the precise performance win of the different values. It’s safe to say in this case, however, that the texture and mesh loading is at least twice as fast when switching from the 4MB/2ms settings to the 16MB/4ms settings.

General recommendations for optimizing loading speed of textures and meshes:

- Choose the largest QualitySettings.asyncUploadTimeSlice that doesn’t result in dropped frames.
- During loading screens, temporarily increase QualitySettings.asyncUploadTimeSlice.
- Use the profiler to examine time slice utilization. The time slice shows up as AsyncUploadManager.AsyncResourceUpload in the profiler. Increase QualitySettings.asyncUploadBufferSize if your time slice is not being fully utilized.
- Things will generally load faster with a larger QualitySettings.asyncUploadBufferSize, so if you can afford the memory, increase it to 16MB or 32MB.
- Leave QualitySettings.asyncUploadPersistentBuffer set to true unless you have a compelling reason to reduce your runtime memory usage while not loading.

Q: How often will time-sliced uploading occur on the render thread?
Time-sliced uploading occurs once per render frame, or twice during an async load operation. VSync affects this pipeline: while the render thread is waiting for a VSync, you could be uploading. If you are running at 16ms frames and one frame goes long, say 17ms, you will end up waiting for the VSync for 15ms. In general, the higher the frame rate, the more frequently upload time slices will occur.

Q: What is loaded through the AUP?
Textures that are not read/write enabled are uploaded through the AUP. As of 2018.2, texture mipmaps are streamed through the AUP. As of 2018.3, meshes are also uploaded through the AUP, so long as they are uncompressed and not read/write enabled.

Q: What if the ring buffer is not large enough to hold the data being uploaded (for example, a really large texture)?
Upload commands that are larger than the ring buffer will wait until the contents of the ring buffer are fully consumed, then the ring buffer will be reallocated to fit the large allocation. Once the upload is complete, the ring buffer will be reallocated to its original size.

Q: How do synchronous load APIs work? For example, Resources.Load, AssetBundle.LoadAsset, etc.
Synchronous loading calls also use the AUP; they essentially block the main thread until the async upload operation completes. The type of loading API used is not relevant.

We’re always looking for feedback. Let us know what you think in the comments or on the Unity 2018.3 beta forum!
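As referenced above, here is a minimal sketch of driving these settings from a loading screen. The QualitySettings properties are the ones named in this post; the surrounding coroutine and the specific values are illustrative:

using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class LoadingScreen : MonoBehaviour
{
    // Illustrative loading-screen values; tune against the profiler as described above.
    const int LoadingTimeSliceMs = 4;   // default is 2ms
    const int LoadingBufferSizeMb = 16; // default is 16MB in 2018.3 (4MB in 2018.2)

    public IEnumerator LoadLevel(string sceneName)
    {
        int previousTimeSlice = QualitySettings.asyncUploadTimeSlice;
        int previousBufferSize = QualitySettings.asyncUploadBufferSize;

        // Give the AUP more render-thread time and a bigger ring buffer while loading.
        QualitySettings.asyncUploadTimeSlice = LoadingTimeSliceMs;
        QualitySettings.asyncUploadBufferSize = LoadingBufferSizeMb;

        yield return SceneManager.LoadSceneAsync(sceneName);

        // Restore gameplay-friendly values so uploads don't cause frame hitches.
        QualitySettings.asyncUploadTimeSlice = previousTimeSlice;
        QualitySettings.asyncUploadBufferSize = previousBufferSize;
    }
}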

>access_file_
1509|blog.unity.com

Art that moves: Creating animated materials with Shader Graph

In Unity 2018.2 we added the “Vertex Position” input to Shader Graph, allowing you to adjust and animate your meshes. In this blog post, I’ll demonstrate how you can create your own vertex animation shaders, and provide some common examples such as a wind shader and a water shader. If you’re new to Shader Graph, you can read Tim Cooper’s blog post to learn about the main features, or watch Andy Touch’s “Shader Graph Introduction” talk on the Unity YouTube channel.

This scene does not use any textures or animation assets; everything you see is colored and animated using Shader Graph.

Shaders are an incredibly powerful aspect of the rendering pipeline, allowing a great degree of control over how our scene assets are displayed. Using a series of inputs and operations, we can create shaders that change the various rendering properties of our assets, such as their surface color and texture, and even the vertex positions of the mesh. You can also combine all of these into complex, rich animations. This blog post will demonstrate how you can get started with vertex animations, introduce the concept of using masks and properties, and finish by explaining how we made the shaders for the Desert Island Scene.

Clone the repository from GitHub or download the .zip from GitHub.

Download the Desert Island Scene sample project to start experimenting and interacting with the shaders yourself! This project contains everything you need to get started with Shader Graph. Ensure you launch the project using Unity version 2018.2 or above. Every shader in the Desert Island Scene was built with customization in mind, so feel free to start playing around with the shader values in the Inspector! Each object also has a preset file that will return the values to default. This work is licensed under the Creative Commons Attribution 4.0 International License.

In order to use Shader Graph, your project must meet the following requirements:

- Running on Unity version 2018.2 or above.
- Using either the new Lightweight or High Definition render pipeline (LWRP is suggested for experimentation due to faster compile times).
- Have the Shader Graph package installed in the Package Manager.

To install Shader Graph, either create or update a project to version 2018.2 or above, navigate to Window > Package Manager > All, find Shader Graph in the list and click install. If your materials are not animating in the Scene view, make sure you have Animated Materials checked: you can preview Animated Materials by clicking the little picture dropdown at the top left of the Scene view.

Before we can start using fancy maths to move things, we need to understand what it is that we’re moving. A Mesh in the scene has four types of spaces:

- Object: Vertex position relative to the mesh pivot.
- View: Vertex position relative to the camera.
- World: Vertex position relative to the world origin.
- Tangent: Addresses some special use cases, such as per-pixel lighting.

You can select which space you wish to affect in the dropdown of the Position node. By using the Split node we can select which axis we want to affect. The Split node outputs four channels; the first three correspond to our transform axes (R=X, G=Y, B=Z). In the example above, I’ve split out the y-axis of the object and added 1, moving our object up by 1 on its own axis.

Sometimes you may wish to move the object in world space. To do this, select World from the Position node, then convert the output back to object space using the Transform node.

Now that we’ve established how to move a Mesh, it’s often useful to know how we can restrict the effect. By using nodes such as Lerp, we can blend between two values. The T input is the control value for the Lerp. When our T input is 0 (visualized as black), the A channel is used; when our input is 1 (visualized as white), the B channel is used. In the example below, the slider is used to blend between the two inputs. Any of the following examples can be used in place of the slider.

With a black and white texture, we can use detailed shapes to push our Mesh. In the above example, you can see how white represents the maximum height of our range, while black represents no effect on the Mesh position. This is because black has the numerical value of 0, and adding 0 to the Mesh position doesn’t move it. To use a texture with vertex position, you must use the Sample Texture 2D LOD node instead of the typical Sample Texture 2D node. Textures are particularly useful if you need a mask with a unique shape or a certain degree of falloff.

While similar to a texture mask, a UV mask lets you choose which part of the mesh you wish to affect based on the UV unwrap. In the above screenshot, I’m using the u-axis of the UV to create a gradient from left to right. To offset the gradient, use an Add node; to increase the strength, use a Multiply node; and to increase the falloff, use a Power node.

Each vertex stores a unit of Vector3 information that we refer to as vertex colour. Using the Polybrush package, we can directly paint vertex colors inside the editor. Alternatively, you can use 3D modeling software (such as 3ds Max, Maya, Blender, 3D Coat or Modo) to assign vertex colors. It is worth noting that, by default, most 3D modeling software will export models with the maximum value for RGB assigned to each vertex. In the above screenshot, the Vertex Colour node is split into the red (R) channel, then connected to the T channel of the Lerp node, acting as a mask. The A channel of the Lerp node is used when the input is 0, and the B channel when the input is 1. In practice, the above setup will only add 1 to the y-axis if the vertices have the red vertex color assigned.

By using the Normal Vector node, we can mask an input by the orientation of the Mesh faces. Again, the Split node allows us to select which axis we wish to affect (R=X, G=Y, B=Z). In the above screenshot, I mask using the y-axis, so that only the faces that face up are positive. It’s important to use a Clamp node to discard any values that are not between 0 and 1. This series of nodes masks an input if the object’s position is above world position 0 on the y-axis.

When building shaders, it can be difficult to get the correct input values for the desired effect. For this reason, and for later customization with Prefabs and presets, it’s important to use properties. Properties allow us to modify the Shader’s values after the Shader has compiled. To create a property, click the + symbol in the Blackboard (pictured on the right). There are six types of properties:

- Vector (1 to 4): A string of values, with the option of a slider for Vector1.
- Colour: RGB values with a color picker, and an optional HDR version.
- Texture 2D (and Texture 2D Array): A 2D texture sample.
- Texture 3D: A 3D texture sample.
- Cubemap: A generated Cubemap sample.
- Boolean: An on/off option, equivalent to 0 or 1.

The flag shader pans an object space sine wave across the flag, using a UV mask to keep the left side still. A UV mask is inverted, then multiplied against itself to create a smooth gradient across the y-axis; this is used to bend the center of the flag away from the oar. An object space sine wave is generated, with properties to control the amplitude, frequency, and speed of the wave (see the code sketch at the end of this entry for the underlying math). The wave is masked by a UV mask on the x-axis to keep the left side of the flag still. By outputting a Gradient Noise into a Step function and then into the Alpha Clip Threshold, we can discard some pixels to tear the flag.

The wind shader uses world space Gradient Noise panning along a single axis to gently push and pull the leaves and grass. Using world position, we place a Gradient Noise across the y-axis and x-axis. Using a Vector2, we can control the speed and direction at which it is offset. Properties are used to control the density and strength of the offset. Subtracting 0.5 from the Gradient Noise ensures that the Mesh is equally pushed and pulled. A UV mask is used to keep the base of the leaves and grass stationary. Finally, a Transform node is used to convert the world position to object position.

With the clam shader, we calculate the distance between the Camera and the clam, then use this as a mask for rotating the top half. By inputting the GameObject’s position and the Camera position into the Distance node, we can create a mask. The One Minus node inverts the distance, so that we have a positive value when we’re close to the clam. The Clamp node discards any values above 1 and below 0. A UV mask rotates only the top of the clam, but in most cases a vertex colour mask would be easier and more flexible. A Lerp node is used to blend between the clam being shut and open; the rotation is applied to the GameObject’s y-axis and z-axis, rotating it around the x-axis.

In the fish shader, we’re using a sine wave that’s generated across the object’s axis to make the fish wobble, and we then mask off the head of the fish so that it stays still. We generate a sine wave in object space along the y-axis and z-axis, with properties controlling the frequency and speed of the wave. Because we’re using both axes, the fish wobbles along its width and over its height. We multiply the output of the sine wave to control the amplitude/distance/strength of the wobble, and add it to the object’s x-axis. A Lerp node masks off the front of the fish using the x-axis of the UV channel; by using a Power node with a property, we can push the wobble effect towards the back of the fish.

Finally, we have the ocean shader! This shader offsets the top of the Mesh using three sine waves of different scales and angles. These sine waves are also used to generate the colours for the wave troughs and tips. Three separate sine waves are generated in world space, each using properties to control the amplitude, frequency, speed, convergence, and rotation of the waves. The three sine waves are then combined with two Add nodes, and multiplied by a world scale gradient to break up the height of the wave tips. After this, the combined waves are added to the object position. Two vertex masks are used to first restrict the waves to the top of the dome, and then to push the waves back down where the froth is painted in. By splitting out the x-axis and z-axis, we generate waves in two directions.
The two multipliers are used to set the influence of each wave; for example, multiplying the Z channel by 0 would output a sine wave exclusively across the x-axis. Splitting out a World Position node into the x-axis and z-axis, and then combining them in a Vector2, gives us a UV space in world space; this orientates the Gradient Noise flat across the world. By adding this output to time, we offset the sine waves, helping break up the otherwise straight lines. The Sine node uses world space and time to generate a simple sine wave; to make the wave tips, we use an Absolute node to flip the negative values, and a One Minus node then inverts these values so that the wave tips are at the top.

If you would like to know how to get started with Shader Graph, Andy Touch’s GDC talk is a great place to start. If you’re looking for other Shader Graph examples, Andy also has an Example Library available on GitHub. For detailed documentation about Shader Graph, including descriptions of every node, go to the Shader Graph developer wiki on GitHub. Get stuck in and join the conversation in our Graphics Experimental Previews forum! And finally, if you’re making something cool with Shader Graph, I’d love to see it! Feel free to reach out to me on Twitter @John_O_Really.
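For readers who think in code rather than nodes, here is a rough C# restatement of the flag graph’s vertex math described above. In the actual shader this logic runs per vertex on the GPU via the nodes; the method and parameter names here are illustrative:

using UnityEngine;

public static class FlagWaveMath
{
    // Displace one vertex the way the flag graph does: an object space sine wave,
    // masked by the UV u-coordinate so the pole side (u = 0) stays pinned.
    public static Vector3 Displace(Vector3 objectPos, Vector2 uv, float time,
                                   float amplitude, float frequency, float speed)
    {
        float wave = Mathf.Sin(objectPos.x * frequency + time * speed) * amplitude;
        float mask = uv.x; // 0 at the fixed edge, 1 at the free edge
        objectPos.y += wave * mask;
        return objectPos;
    }
}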

>access_file_
1510|blog.unity.com

Puppo, The Corgi: Cuteness overload with the Unity ML-Agents Toolkit

Building a game is a creative process that involves many challenging steps, including defining the game concept and logic, building assets and animations, specifying NPC behaviors, tuning difficulty and balance and, finally, testing the game with real players before launch. We believe machine learning can be used across the entire creative process, and in today’s blog post we will focus on one of these challenges: specifying the behavior of an NPC.

Traditionally, the behavior of an NPC is hard-coded using scripting and behavior trees. These (typically long) lists of rules process information about the surroundings of the NPC (called observations) to dictate its next action. These rules can be time-consuming to write and maintain as the game evolves. Reinforcement learning provides a promising alternative framework for defining the behavior of an NPC. More specifically, instead of defining the observation-to-action mapping by hand, you can simply train your NPC by providing it with rewards when it achieves the desired goal.

Training an NPC using reinforcement learning is quite similar to how we train a puppy to play fetch. We present the puppy with a treat and then throw the stick. At first, the puppy wanders around not sure what to do, until it eventually picks up the stick and brings it back, promptly getting a treat. After a few sessions, the puppy learns that retrieving a stick is the best way to get a treat and continues to do so.

That is precisely how reinforcement learning works in training the behavior of an NPC. We provide our NPC with a reward whenever it completes a task correctly. Through multiple simulations of the game (the equivalent of many fetch sessions), the NPC builds an internal model of what action it needs to perform at each instance to maximize its reward, which results in the ideal, desired behavior. Thus, instead of creating and maintaining low-level actions for each observation of the NPC, we only need to provide a high-level reward when a task is completed correctly, and the NPC learns the appropriate low-level behavior.

To showcase the effectiveness of this technique, we built a demo game, “Puppo (read as ‘Pup-o’), The Corgi”, and presented it at Unite Berlin. It is a mobile game where you play fetch with a cute little corgi: throw a stick to Puppo by swiping on the screen, and Puppo brings it back. While the higher-level game logic uses traditional scripting, the corgi learns to walk, run, jump and fetch the stick using reinforcement learning. Instead of using animations or scripted behaviors, the movements of the corgi are trained solely with reinforcement learning. Not only does it look super cute, but the corgi’s motion is driven exclusively by the physics engine. This means, for instance, that the motion of the corgi can be affected by surrounding Rigidbodies.

Puppo became so popular at Unite Berlin that many developers asked us how we made it. That’s why we decided to write this blog post and release the project for you to try out yourself.

Download the Unity Project

To get started, we will cover the requirements and preliminary work that you need to do to train the corgi. Then, we will share our experience in training it. Finally, we will go over the steps we took to create a game with Puppo as its hero.

Before we get into the details, let’s define a few important notions in reinforcement learning. The goal of reinforcement learning is to learn a policy for an agent. An agent is an entity that interacts with its environment: at every learning step, the agent collects observations about the state of the environment, performs an action, and gets a reward for that action. The policy defines how an agent acts based on the observations it perceives. We can develop a policy by rewarding the agent when its behavior is appropriate. In our case, the environment is the game scene and the agent is Puppo. Puppo needs to learn a policy so it can play fetch with us. Similar to how we train a real dog with treats to fetch sticks, we can train Puppo by rewarding it appropriately.

We used a ragdoll to create Puppo, and its legs are driven by joint motors. Therefore, for Puppo to learn how to get to the target, it must first learn how to rotate the joint motors so that it can move. A real dog uses vision and other senses to orient itself and to decide where to go. Puppo follows the same methodology: it collects observations about the scene, such as proximity to the target, the relative position between itself and the target, and the orientation of its own legs, so it can decide what action to take next. In Puppo’s case, the action describes how to rotate the joint motors in order to move.

After each action Puppo performs, we give a reward to the agent (a sketch of this reward shaping follows below). The reward is comprised of:

- Orientation bonus: we reward Puppo when it is moving towards the target. To do so, we use the Vector3.Dot() method.
- Time penalty: we give a fixed penalty (negative reward) to Puppo at every action. This way, Puppo will learn to get the stick as fast as possible to avoid a heavy time penalty.
- Rotation penalty: we penalize Puppo for trying to spin too much. A real dog would get dizzy if it spun too much; to make it look real, we penalize Puppo when it turns around too fast.
- Getting-to-the-target reward: most importantly, we reward Puppo for getting to the target.

Now Puppo is ready to learn. It took two hours on a laptop for the dog to learn to run towards the target efficiently. During the training process, we noticed one interesting behavior: the dog learned to walk rather quickly, in about a minute. Then, as the training continued, the dog learned to run. Soon after, it began to flip over when it tried to make a sudden turn while running. Fortunately, the dog learned how to get back up, just as a real dog would. This clumsy behavior is so cute that you could stop the training at this point and use it directly in the game.

If you are interested in training Puppo yourself, you can follow the instructions in the project. They include detailed steps on how to set up the training and what parameters you should choose. For a more detailed tutorial on how to train agents, please visit the ML-Agents documentation site.

To create the “Puppo, The Corgi” game, we needed to define the game logic that lets a player interact with the trained model. Because Puppo has learned to run to a target, we need to implement the logic that changes the target for Puppo within the game. In game mode, we set the target to be the stick right after the player has thrown it. When Puppo arrives at the stick, we change Puppo’s target to the player’s position in the scene, so that Puppo returns the stick to the player. We do this because it’s much easier to train Puppo to move to a target while defining the game flow logic with a script. It’s our belief that machine learning and traditional game development methods can be combined to get the best of both approaches.
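To make the reward shaping above concrete, here is a heavily simplified sketch in the style of the ML-Agents Agent API of that era. The reward terms mirror the list above, but the coefficients, field names, and surrounding class are illustrative stand-ins, not the shipped project code:

using MLAgents;   // ML-Agents Toolkit namespace in the 0.4/0.5-era releases
using UnityEngine;

public class FetchAgent : Agent
{
    public Transform target;   // the stick, or the player on the return trip
    public Rigidbody body;     // the corgi's main rigidbody

    public override void AgentAction(float[] vectorAction, string textAction)
    {
        // vectorAction would drive the joint motors here (omitted for brevity).

        Vector3 toTarget = (target.position - body.position).normalized;

        // Orientation bonus: positive when velocity points at the target.
        AddReward(0.01f * Vector3.Dot(body.velocity, toTarget));

        // Time penalty: a small fixed cost per action encourages speed.
        AddReward(-0.001f);

        // Rotation penalty: discourage spinning too fast.
        AddReward(-0.005f * Mathf.Abs(body.angularVelocity.y));

        // Getting to the target: the big payoff, then end the episode.
        if (Vector3.Distance(body.position, target.position) < 1.0f)
        {
            AddReward(1.0f);
            Done();
        }
    }
}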
The “Puppo, The Corgi” project includes a pre-trained model for the corgi that you can use immediately, and even deploy on mobile devices. We hope this blog post has shed some light on what is achievable with the ML-Agents Toolkit for game development.

Want to dive deep into the code of this project? We released the project and you can download it here. To learn more about how to use the ML-Agents Toolkit, you can find our official documentation and a step-by-step beginner’s guide here. If you are interested in getting a deeper understanding of the math, algorithms, and theories behind reinforcement learning, there is a Reinforcement Learning Nanodegree we offer in partnership with Udacity. We would love to hear about your experience using the ML-Agents Toolkit for your games. Feel free to reach out to us on our GitHub issues page or email us directly. Happy creating!

>access_file_
1511|blog.unity.com

Performance benchmarking in Unity: How to get started

As a Unity developer, you want your users to love playing your games, enjoying a smooth experience across all the platforms they may play on. What if I told you that we just made it easier to create performance benchmarks? If you want to learn how to develop games or Unity tools with an eye on performance, please read on!

In this post, I explain how to use a couple of Unity tools that give you an easy way to start collecting performance metrics and creating benchmarks with them: the Unity Test Runner that ships with the Unity Editor, the Unity Performance Testing Extension, and the Unity Performance Benchmark Reporter.

As a Unity developer, you might find yourself in the following situation: your project was running fast and smooth not too long ago, but then one or more changes came in, and now scenes are noticeably slow, frames are dropping, and other performance issues have started popping up. Tracking down which changes led to the performance regression can be difficult. If you’re a Unity Partner, you may want to understand the performance changes across your SDKs, drivers, platforms, packages, or other artefacts. Or you’d like to collect performance metrics across different versions of Unity with your products, but it’s not very clear how to do this and then make the comparisons. Those are just a couple of examples where establishing performance benchmarks can really save the day. Now, let me show you how you can start collecting performance metrics, create benchmarks with them, and visualize changes in performance metrics.

For this discussion, we’ll be looking at the test code in the UnityPerformanceBenchmark sample performance test project. Download the latest XRAutomatedTests release from GitHub; you’ll find the UnityPerformanceBenchmark project in the PerformanceTests subdirectory. The UnityPerformanceBenchmark project contains a variety of sample scenes that are in turn used in Unity Performance Tests using the Unity Performance Testing Extension.

The first thing we’re going to do is take a look at how we write performance tests using the Unity Test Runner with the Unity Performance Testing Extension. Here is a bit of background on both of these tools before we proceed.

We’re using the Unity Test Runner to run our performance tests. The Unity Test Runner is a test execution framework built into the Unity Editor that allows you to test your code in both Edit and Play mode, and on target platform players such as Standalone, Android, or iOS. If you aren’t familiar with the Unity Test Runner, check out the Unity Test Runner documentation.

The Unity Performance Testing Extension is a Unity Editor package that provides an API and test case attributes allowing you to sample and aggregate both Unity profiler markers and non-profiler custom metrics, in the Unity Editor and players. You can learn more by checking out the Unity Performance Testing Extension documentation, but we’re going to look at some examples here. The Unity Performance Testing Extension requires Unity 2018.1 or higher; be sure to use such a version if you want to run the sample performance tests in the UnityPerformanceBenchmark project, or whenever you are using the extension.

The UnityPerformanceBenchmark project implements the IPrebuildSetup interface, a Unity Test Runner facility, where we can implement a Setup method that is automatically called before the test run is executed by the Unity Test Runner. The first thing the UnityPerformanceBenchmark project’s IPrebuildSetup.Setup method does is parse the command line arguments looking for player build settings. This allows us to flexibly build the player for our performance tests using the same Unity project against different platforms, render threading modes, player graphics APIs, scripting implementations, and XR-enabled settings such as stereo rendering path and VR SDKs. Thus, we need to open the UnityPerformanceBenchmark project with Unity from the command line, passing in the player build options we want to use when we run the tests in the Unity Test Runner.

Example: Launch the UnityPerformanceBenchmark project from Windows to build an Android player:

Unity.exe -projectPath C:\XRAutomatedTests-2018.2\PerformanceTests\UnityPerformanceBenchmark -testPlatform Android -buildTarget Android -playergraphicsapi=OpenGLES3 -mtRendering -scriptingbackend=mono

Here we launch Unity on Windows to build for Android with the OpenGLES3 graphics API, multithreaded rendering, and the Mono scripting backend.

Example: Launch the UnityPerformanceBenchmark project from OSX to build an iOS player:

./Unity -projectPath /XRAutomatedTests-2018.2/PerformanceTests/UnityPerformanceBenchmark -testPlatform iOS -buildTarget iOS -playergraphicsapi=OpenGLES3 -mtRendering -scriptingbackend=mono -appleDeveloperTeamID= -iOSProvisioningProfileID=

Here we launch Unity on OSX to build for iOS with the OpenGLES3 graphics API, multithreaded rendering, and the Mono scripting backend. We also provide the Apple developer team and provisioning profile information needed to deploy to an iOS device.

When we open the UnityPerformanceBenchmark project with Unity from the command line like in the examples above, the command line args will be in memory for the IPrebuildSetup.Setup method to parse and use when building the player. While this launch-from-command-line approach isn’t required to run tests in the Unity Test Runner, it’s a good pattern to use to avoid needing a separate test project for each player configuration. I’ve detailed the command line options for opening the project, or just running the tests, from the command line on the wiki for the test project: How to Run the Unity Performance Benchmark Tests. To learn more about how we’re parsing the player build settings in the test project, take a look at the RenderPerformancePrebuildStep.cs file in the Scripts directory of the UnityPerformanceBenchmark test project.

After we open the UnityPerformanceBenchmark project, we need to open the Unity Test Runner window in the Unity Editor:

- in Unity 2018.1, go to Window > Test Runner;
- in Unity 2018.2, go to Window > General > Test Runner.

The Unity Test Runner window will open and look like the image below. These are our Unity Performance Tests. We can run them in the Editor using the Run button at the top left of the window, or on the actual device or platform using the “Run all in player” button at the top right of the window.

Debugging tip: if you want to debug code in your IPrebuildSetup.Setup method,

1. Set breakpoints in your IPrebuildSetup.Setup code in Visual Studio.
2. Attach to the Unity Editor with the Visual Studio Tools for Unity extension.
3. Run your tests in the Editor using the “Run All” or “Run Selected” button in the Unity Test Runner window.

At this point the Visual Studio debugger will break into your code, where you can debug as needed.

Let’s take a look at a performance test example so we can get a better understanding of how it works.

Example: Sampling profiler markers in a Unity Performance Test

In this example, our test method is called SpiralFlame_RenderPerformance. We know from the method decorator [PerformanceUnityTest] that this is a Unity Performance Test. All of the tests in the UnityPerformanceBenchmark test project follow the same pattern we see in this test method:

1. Load the scene for the test.
2. Set the scene as active so we can interact with it in the test method.
3. Create a test object of type DynamicRenderPerformanceMonoBehaviourTest and add it to the test scene (this happens in the SetupPerfTest method).
4. Wait for a constant amount of time for the scene to “settle” after loading and adding the test object, before we start sampling metrics.
5. Set up our profiler markers for capture by the Performance Testing Extension API.
6. Let the performance test know we’re ready to start capturing metrics.
7. Then yield return the test object (an IMonoBehaviourTest) to capture metrics during the rendering loop.

We also sample custom metrics (metrics that don’t fall into one of the Unity profiler markers, framecount, or execution time) in the RenderPerformanceMonoBehaviourTestBase base class (this class inherits from MonoBehaviour).

Example: Sampling custom metrics in a MonoBehaviour script

In the example above, we’re capturing FPS, GpuTimeLastFrame (if XR is enabled), and application startup time (if Unity Analytics is enabled and we’re running on Unity 2018.2 or newer, where the API we need is available).

Finally, notice in the same RenderPerformanceMonoBehaviourTestBase base class that we have implemented a property, public bool IsTestFinished. We’re required to implement this property because RenderPerformanceMonoBehaviourTestBase implements the IMonoBehaviourTest interface. This property is important because the Unity Test Runner uses it to know when to stop the test: when its value is true, the test ends. It’s up to you to implement the logic you want in order to determine when the Unity Test Runner should stop running the test.

Example: Sampling custom metrics in the IsTestFinished property

In this final example, we’re capturing the number of rendered game objects, triangles, and vertices in the scene when the test finishes.

Now that we’ve seen some examples of how we make calls into the Performance Testing Extension to sample metrics, let’s talk about how we configure these to begin with. The Measure.* methods generally take a struct parameter called a SampleGroupDefinition. When we create a new SampleGroupDefinition, we define some properties for the samples we are interested in collecting.

Example: Define a new SampleGroupDefinition for GpuTimeLastFrame, using milliseconds as the sample unit and aggregating samples using the minimum value

Below is the SampleGroupDefinition for GpuTimeLastFrame. This is how we let the Performance Testing Extension know how to collect samples and aggregate them for GpuTimeLastFrame. This SampleGroupDefinition is from the dynamic scene render performance test example, so here we’ve chosen to aggregate our samples using the minimum value collected. But why would we do that rather than use a more common aggregation measure, like median or average? The answer is that the scene is dynamic: in a dynamic scene, a median or average aggregation would be unreliable or inconsistent for the same scene run against the same code, given the changing nature of the rendering. This is most likely the best we can do if we want to track a single aggregate for a rendering metric in a dynamic scene. When we define a similar SampleGroupDefinition for our static scenes, however, we use a median aggregation.

new SampleGroupDefinition(GpuTimeLastFrameName, SampleUnit.Millisecond, AggregationType.Min)

Example: Define a new SampleGroupDefinition for FPS, with no sample unit, aggregating samples using the median value, where an increase in the value is better

Below is the SampleGroupDefinition for FPS (frames per second). FPS doesn’t have a separate measurement unit - it’s just FPS - so we specify SampleUnit.None here. We use a median aggregation type here; this is a static scene, so we don’t have to worry about an unpredictable rendering experience. We’re explicitly establishing a 15% threshold for the sample group, and passing true for the increaseIsBetter argument because, if FPS increases, it’s a good thing! These last two arguments are collected and saved in our performance test results .xml file when running from the command line, and can later be used in the Unity Performance Benchmark Reporter to establish benchmarks.

new SampleGroupDefinition(FpsName, SampleUnit.None, AggregationType.Median, threshold: 0.15, increaseIsBetter: true)

When the test completes, all of the metric samples we enabled earlier are aggregated by the Performance Testing Extension.

I want to point out that in our code examples we use a couple of different Unity Performance Testing Extension APIs, namely Measure.ProfilerMarkers and Measure.Custom. The Unity Performance Testing Extension provides other Measure methods as well that may suit your specific needs, depending on what, and how, you want to measure performance in Unity. These additional methods include Measure.Method, Measure.Frames, Measure.Scope, and Measure.FrameTimes. Learn more about the different Measure methods in the Unity Performance Testing Extension documentation, specifically in the “Taking measurements” section. (A condensed sketch of a test method that puts these pieces together appears at the end of this entry.)

Now that we’ve looked at some examples of how we write performance tests using the Unity Test Runner and the Unity Performance Testing Extension, let’s look at how we run them. There are two primary ways we can execute our performance tests:

1. From the command line, launching Unity with the -runTests option. This is the preferred way for performance tests, because the Unity Performance Testing Extension will generate a result .xml file for us that we can use in the Unity Performance Benchmark Reporter to view and compare our results.
2. Directly from within the Editor. This is a useful approach if you a) just want to run the tests and view the results in the Unity Test Runner window without needing to capture the results for later use, or b) want to verify your tests will run, or need to debug into test code.

Here are two examples of how to run performance tests with the Unity Test Runner from the command line.
Now that we've looked at some examples of how we write performance tests with the Unity Test Runner and the Unity Performance Testing Extension, let's look at how we run them. There are two primary ways we can execute our performance tests:
1. From the command line, launching Unity with the -runTests option. This is the preferred way for performance tests, because the Unity Performance Testing Extension will generate a result .xml file that we can use in the Unity Performance Benchmark Reporter to view and compare our results.
2. Directly from within the Editor. This is a useful approach if you a. just want to run the tests and view the results in the Unity Test Runner window without needing to capture the results for later use, or b. want to verify your tests will run, or need to debug into test code.

Here are two examples of how to run performance tests with the Unity Test Runner from the command line. These examples should look very familiar, because we're building off the same examples we saw earlier in our discussion about opening the UnityPerformanceBenchmark project from the command line.

Example: Running the UnityPerformanceBenchmark Performance Tests from Windows against an Android Player

Here we launch Unity on Windows to build for Android with the OpenGLES3 graphics API, multithreaded rendering, and the Mono scripting backend.

    Unity.exe -runTests [-batchmode] -projectPath C:\XRAutomatedTests-2018.2\PerformanceTests\UnityPerformanceBenchmark -testPlatform Android -buildTarget Android -playergraphicsapi=OpenGLES3 -mtRendering -scriptingbackend=mono -testResults C:\PerfTests\results\PerfBenchmark_Android_OpenGLES3_MtRendering_Mono.xml -logfile C:\PerfTests\logs\PerfBenchmark_Android_OpenGLES3_MtRendering_Mono_UnityLog.txt

Example: Running UnityPerformanceBenchmark Performance Tests from OSX against an iOS Player

Here we launch Unity on OSX to build for iOS with the OpenGLES3 graphics API, multithreaded rendering, and the Mono scripting backend. We also provide the Apple developer team and provisioning profile information needed to deploy to an iOS device.

    ./Unity -runTests [-batchmode] -projectPath /XRAutomatedTests-2018.2/PerformanceTests/UnityPerformanceBenchmark -testPlatform iOS -buildTarget iOS -playergraphicsapi=OpenGLES3 -mtRendering -scriptingbackend=mono -appleDeveloperTeamID= -iOSProvisioningProfileID= -testResults /PerfTests/results/PerfBenchmark_iOS_OpenGLES3_MtRendering_Mono.xml -logfile /PerfTests/logs/PerfBenchmark_iOS_OpenGLES3_MtRendering_Mono_UnityLog.txt

For both of these examples, we've introduced three to four new command-line options that help us run our tests, in addition to the command-line arguments available to the IPrebuildSetup.Setup method (a sketch of reading such arguments in Setup follows below).

-runTests: tells the Unity Test Runner that you want to run your tests.
-testResults: specifies the filename and path of the .xml file where the Unity Test Runner should save your performance test results.
-logfile: specifies the filename and path of the file where the Unity Editor should write its log. This is optional, but it can be really helpful when you're investigating failures and issues to be able to quickly access the Unity Editor log file.
-batchmode: forces the Unity Editor to open in headless mode. We use this option when we are only running player performance tests and there is no need to actually open the Unity Editor window; this can save time during automated test execution. When this option is not used, the Unity Editor will open on the screen before executing the tests.

At Unity, we run our performance tests from the command line, often in batchmode, in our continuous integration system.
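Those player-configuration arguments (for example, -scriptingbackend=mono in the commands above) are typically parsed in an IPrebuildSetup.Setup implementation before the test player is built. Here is a rough sketch, with the parsing logic as illustration only:

    using System;
    using UnityEngine.TestTools;
    #if UNITY_EDITOR
    using UnityEditor;
    #endif

    public class RenderPerformancePrebuildStep : IPrebuildSetup
    {
        public void Setup()
        {
    #if UNITY_EDITOR
            // Look for a -scriptingbackend=<value> argument and apply it
            // to the player settings before the test player is built.
            foreach (var arg in Environment.GetCommandLineArgs())
            {
                if (!arg.StartsWith("-scriptingbackend="))
                    continue;

                var backend = arg.EndsWith("il2cpp")
                    ? ScriptingImplementation.IL2CPP
                    : ScriptingImplementation.Mono2x;
                PlayerSettings.SetScriptingBackend(
                    EditorUserBuildSettings.selectedBuildTargetGroup, backend);
            }
    #endif
        }
    }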
Example: Running the UnityPerformanceBenchmark Tests from the Command Line

With the Unity Test Runner window open, near the top when PlayMode is selected (PlayMode tests run either in the built player or in the Editor's Play mode), we have:
1. Run All: click this button to run all tests in the PlayMode tab.
2. Run Selected: click this button to run the selected test, or a node and all tests beneath it.
3. Run all in player: click this to have the Unity Editor build the player type configured in the build settings and run the tests there.

Important requirement: prior to version 0.1.50 of the Performance Testing Extension, running performance tests in the Unity Editor from the Test Runner window will not produce the result .xml file needed for the Unity Performance Benchmark Reporter. However, if you're using version 0.1.50 or later of the Performance Testing Extension, a results.xml file will be written to the `Assets\StreamingAssets` project folder. If you are using a version of the Performance Testing Extension earlier than 0.1.50 and want to create a result .xml file when you run your performance tests, you need to run the tests by launching Unity from the command line with the -runTests option. Be aware, however, that when you're running Unity with the -runTests command-line option, the Editor will open and begin running the tests.

The result .xml files contain the results and metadata from the test runs that we'll use with the Unity Performance Benchmark Reporter to create benchmark results and compare them to subsequent test runs.

Example: Running Performance Tests in the Unity Editor

If we're running these tests from within the Editor, the aggregate values can be seen near the bottom of the Unity Test Runner window by selecting each test.

Example: Viewing Performance Test Sample Aggregations from Unity Test Runner

If you want to see the results of running your Unity performance tests from the command line, you'll need to use the Unity Performance Benchmark Reporter (or just open the result .xml file, but it's not an easy read). With that, let's transition to talking about how we can use the Unity Performance Benchmark Reporter to view and compare results.

The Unity Performance Benchmark Reporter enables the comparison of performance metric baselines and subsequent performance metrics (as generated using the Unity Test Runner with the Unity Performance Testing Extension) in an HTML report with graphical visualizations. The reporter is built as a .NET Core 2.x assembly so that it can run across the different .NET-supported platforms (Windows, OSX, etc.). Therefore, to run it, you'll need to ensure you have installed the .NET Core SDK.

Executing the Unity Performance Benchmark Reporter entails invoking the assembly with the dotnet command, like this:

    dotnet UnityPerformanceBenchmarkReporter.dll --baseline=D:\UnityPerf\baseline.xml --results=D:\UnityPerf\results --reportdirpath=d:\UnityPerf

After the reporter runs, a directory named UnityPerformanceBenchmark will be created, containing an HTML report along with supporting .css, .js, and image files. Open the HTML report to view visualizations of the performance metrics captured in the .xml result files.

--results: the path to a directory containing one or more non-baseline result .xml files to be included in the HTML report. At least one --results value must be passed to the UnityPerformanceBenchmarkReporter.dll assembly; this is the only required field. This command-line option can also be used to specify the path to a single non-baseline .xml result file. Additionally, you can specify several directories or files by repeating the option, like this:

    --results=D:\UnityPerf\results --results=D:\UnityPerf\results.xml

--baseline: the path to a result .xml file that will be used as the point of comparison for the other results.

--reportdirpath: the path to a directory where the reporter will create the performance benchmark report.
This is created in a UnityPerformanceBenchmark subdirectory. If the report location is not specified, the UnityPerformanceBenchmark subdirectory will be created in the working directory from which UnityPerformanceBenchmarkReporter.dll was invoked.

Let's compare some performance test results with the Performance Benchmark Reporter.

Example: Experiment with Configuration Changes in a VR-enabled Gear VR Scene to Improve Frame Rate

I have a Unity scene with the following complexity characteristics:
- 732 objects
- 95,898 triangles
- 69,740 vertices

I ran a Unity performance test against this scene, sampling metrics that would help me understand whether I could sustain close to 60 FPS using Multi Pass Stereo Rendering. Next, I ran the Performance Benchmark Reporter with the results of my test. What I found is that my FPS is closer to 30 FPS, half of what I'd like it to be.

Next, I'm going to try using Single Pass Multiview Stereo Rendering to see how close to 60 FPS I can get. I'll rerun my performance test with the configuration change, then create another Unity Performance Benchmark Report comparing my first results with the new ones. It looks like the configuration switch to Single Pass Multiview rendering improved our FPS to 37. We still need to be closer to 60 FPS if we want this scene to run without significant frame drops on Gear VR.

The last thing I'm going to experiment with is reducing the number of rotating cubes in my scene to see if we can get the FPS up. After a couple of tries, I'm able to improve performance to ~55 FPS, but I had to reduce the number of objects in the scene from 732 to 31. That's quite a reduction. I'll circle back to other improvements I can make for performance optimization, but for now, I'm going to use this as an FPS baseline. I'll use this as my benchmark going forward, hoping to improve on it if I can.

Establishing benchmarks can mean many things depending on your project. In this context, running performance tests in Unity, we're talking about establishing a baseline set of results, a last-known-good set of performance metrics that we can compare subsequent results to as we make changes. These become our benchmark.

In the previous section, I arrived at a configuration using Single Pass Multiview Stereo Rendering for Gear VR, and a decreased scene object count, that resulted in an "acceptable" FPS. At that point, I decided to use my test results as my benchmark. Let's see an example of how we can use this benchmark as we make further changes to the player configuration.

Example: Use Performance Benchmark to Detect Performance Regression with Configuration Changes

I'd like to enable antialiasing in my scene to smooth out its appearance. The default Quality Settings in Unity for Android disable antialiasing, but I'd like to see if we can enable it and still maintain an acceptable FPS for our Gear VR scene. First, I set the antialiasing value in my IPrebuildSetup.Setup method to 4:

    QualitySettings.antiAliasing = 4;

Next, I rerun the performance test from earlier on my Gear VR-enabled Android phone. I then use the Unity Performance Benchmark Reporter to compare this new run with my newly established benchmark results. But look: with the reconfiguration of my Unity player to use antialiasing at level 4, my FPS dropped to 32 FPS, which is about where I originally started when I created this scene with 732 objects. I'd like to experiment with a few lower antialiasing values to see if I can recover an acceptable FPS for the scene before I bail on this idea. So, I try with antialiasing set to 2, and then finally 1.
The results are in the image below. In this reconfiguration scenario, using the performance benchmark I established earlier, I was able to experiment with changes in my Unity player settings and then verify the performance impact before committing to them. Even though I'm within my default 15% threshold of variance for FPS with antialiasing set to 1, FPS is now at 49, a bit too far from the 60 FPS I'd like to hit for my VR-enabled scene. I don't think I'll commit to these changes today.

Unity is putting a lot of focus on great performance by default, but the Unity Engine is only part of what ultimately results in users loving to play your games and enjoying a smooth, high-performance experience across all the platforms they play on. SDKs, drivers, and Unity packages that work well without introducing performance regressions are just as critical to a great overall performance experience for everyone.

I've introduced you to a couple of Unity tools that make it easier to start collecting performance metrics and creating benchmarks with them: the Unity Performance Testing Extension and the Unity Performance Benchmark Reporter. I encourage you to experiment with what they can do for you and your performance-focused efforts. We looked at:
- how we can use the Unity Test Runner to write performance tests that sample profiler markers and other metrics,
- some different ways we can execute performance tests using the Unity Test Runner, and
- how to use the Unity Performance Benchmark Reporter to analyze and compare performance metrics, run over run, as you begin to up your performance testing game.

Establishing baselines for these metrics, and using them to create a benchmark for your scenes, game, SDK, driver, package, or other Unity integrations, can be an effective way to start creating visibility into the impact your changes have. Good luck!

Many thanks and credit go to my Unity colleagues for their help contributing, brainstorming, experimenting, developing, and iterating on this work with me:
Qi Jiang
Sakari Pitkänen
Gintautas Skersys
Benjamin Smith

>access_file_
1512|blog.unity.com

The High Definition Render Pipeline: Getting started guide for artists

Editor's note: The information in this post is outdated. For newer versions of Unity, we recommend The definitive guide to lighting in the High Definition Render Pipeline (HDRP) e-book, which was last updated in 2022.

In this post we will explore authoring a scene to be rendered with Unity's High Definition Render Pipeline, also known as HDRP. We'll walk through starting a new HDRP Project, upgrading the Materials of any imported assets, and learning how to use the new parameters within the Material Inspector to create a realistic glass material. We'll also highlight the differences between the built-in pipeline and HDRP.

In 2018.1, Unity introduced a new system called the Scriptable Render Pipeline (SRP), allowing you to create your own rendering pipeline based on the needs of your project. SRP includes two ready-made pipelines, called Lightweight (LWRP) and High Definition (HDRP). HDRP aims for high visual fidelity and is suitable for PC and console platforms.

If you haven't already, we recommend that you install Unity Hub. It helps you keep track of your projects as well as your installed versions of Unity. When creating a new Project in Unity Hub, under Template, you will see an option to select High-Definition RP (Preview). Since HDRP is still in preview, it's not a good idea to switch to HDRP in the middle of production. However, you can try upgrading your project to HDRP by going into the new Package Manager and installing it. Be advised: once you have upgraded your project to HDRP, you won't be able to revert, so make sure to create a backup of the project prior to upgrading.

As mentioned above, HDRP is still in preview, so it's subject to change in the future. To upgrade a project from the built-in render pipeline to HDRP, navigate to Window > Package Manager. In the Package Manager, you can see all of the packages currently installed in your Unity project. Under All, locate "HD Render Pipeline" (Render-pipelines.high) and install the latest version. Installing the pipeline will also pull in the Render-pipeline core, Shader Graph, and post-processing packages.

After installing the HDRP package, you need to navigate to Edit > Project Settings > Graphics to assign the Scriptable Render Pipeline asset for HDRP. The Inspector displays the currently installed Render Pipeline Asset in the "Scriptable Render Pipeline Settings" field. The HDRP Render Pipeline Asset will already be assigned if you created the project from Unity Hub; if you're upgrading your project from the built-in pipeline, this field will be set to "None". We can assign a Pipeline Asset by clicking the button next to the asset selection box or by dragging the asset in from the Settings folder.

HDRP uses the C# Scriptable Render Pipeline API, and with it comes a whole host of preferences you can set to customise the rendering of your project. Because your rendering settings are stored in a Render Pipeline Asset, you can change your render settings simply by assigning a new Render Pipeline Asset to this field. To create a new Render Pipeline Asset, right-click within your Settings folder and choose Create > Rendering > High Definition Render Pipeline Asset.
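Since the active pipeline is just this asset reference, it can also be assigned from an editor script. Here is a minimal sketch, assuming a 2018.x project (the asset path is a placeholder, and note that RenderPipelineAsset lived in the experimental namespace in early versions):

    #if UNITY_EDITOR
    using UnityEditor;
    using UnityEngine.Rendering;
    using UnityEngine.Experimental.Rendering;

    public static class PipelineAssigner
    {
        [MenuItem("Tools/Assign HDRP Pipeline Asset")]
        static void Assign()
        {
            // Placeholder path; point this at your own Render Pipeline Asset.
            var asset = AssetDatabase.LoadAssetAtPath<RenderPipelineAsset>(
                "Assets/Settings/HDRenderPipelineAsset.asset");

            // Equivalent to setting it in Edit > Project Settings > Graphics >
            // Scriptable Render Pipeline Settings.
            GraphicsSettings.renderPipelineAsset = asset;
        }
    }
    #endif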
When using an HDRP Project, any Unity built-in, Standard or Unlit Material will not be rendered, and will therefore appear using the default pink unlit shader that Unity displays when a shader is broken. This may occur when attempting to upgrade an existing project or when integrating legacy content, such as Asset Store assets that do not use HDRP-compatible shaders. In order to be rendered by HDRP, the Material needs to be upgraded.

Unity 2018.1 is equipped with a built-in Material Conversion Tool. It takes the Material properties from Unity's Standard Shader and converts them to new HDRP Materials. It's worth noting that this does not work for custom shaders, which need to be rewritten for HDRP. To access the Material Conversion Tool, navigate to Edit > Render Pipeline. Unity offers several upgrade options in this menu; we'll focus on the first two here. "Upgrade Project Materials to High Definition Materials" upgrades all upgradable Materials in the Project. "Upgrade Selected Materials to High Definition Materials" lets you select which Materials you want to upgrade from the Project window. It is at this point that we recommend you create a separate backup of your project.

Once the Materials have been converted, the Material's shader will be called "HDRenderPipeline/Lit", and you have complete access to the brand new features of the HDRP Lit shader within the Material Inspector. Furthermore, within the Material's Shader options, under "HDRenderPipeline", you can select and apply a variety of shader types, such as LitTessellation or Unlit, to name a few.

The subsequent sections provide an introduction to some of the new features added as part of HDRP. We've used some of these new features to enhance the look of our kitchen scene.

Lighting in HDRP uses a system called Physical Light Units (PLU), which means that these units are based on real-life measurable values, like what you would see when browsing for light bulbs at the store or measuring light with a photographic light meter. We use Lux for Directional Lights because, in the real world, that is the unit used to measure the intensity of sunlight, which can easily be done with a Lux meter. Other real-world light sources use Lumens to measure intensity, which can be used as a reference for the smaller light emitters in our scene.

The real-time Line Light maintains a seamless, constant light output emanating from a line of user-definable length. These light types are commonly used in animated films to achieve realistic lighting, and they add a filmic quality to the lighting of your scenes. Line Lights can be created by selecting the shape type in the Inspector after a Light has been placed in a scene. A lot of modern kitchens use a style of Line Light to illuminate the kitchen workspace, so the Line Light here not only produces realistic lighting, but is accurate to what would be found in a real kitchen.

In addition, the Light Inspector can determine the color of the emitted light through temperature. On a scale of 1,000 to 20,000 Kelvin, the lower the value, the cooler the temperature and the more red the light appears; as you increase the temperature value, it appears more blue. Similarly, the Rectangle shape type emits light from an area defined by custom X and Y axis values. Note: Shadows are currently not supported for the Line and Rectangle light shape types.
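The temperature control also has a scripting counterpart on the Light component. A small sketch, assuming Unity's Light.colorTemperature API (lights must first be told to use color temperature):

    using UnityEngine;
    using UnityEngine.Rendering;

    public class WarmKitchenLight : MonoBehaviour
    {
        void Start()
        {
            // Opt the project's lights into color-temperature mode.
            GraphicsSettings.lightsUseLinearIntensity = true;
            GraphicsSettings.lightsUseColorTemperature = true;

            // Lower values read as warm/red; higher values read as blue.
            GetComponent<Light>().colorTemperature = 3000f;
        }
    }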
As an added tip, the Light Explorer allows you to easily manage any type of Light within your project. You can modify values, change the types of Lights, and even manipulate Shadow types without needing to locate them in the scene. Reflection Probes, Light Probes and Static Emissives can additionally be managed through this window. To access the Light Explorer, navigate to Window > General > Light Explorer.

Volume Settings allow you to visually alter your environment preferences, adjusting elements such as your Visual Environment, Procedural Sky and HD Shadow settings. They also enable you to create custom volume profiles and switch between them. Volume Settings are managed by creating a GameObject and adding the Volume component to it; this workflow is similar to the one for creating a volume for the Post-Processing Stack v2. In HDRP, there will be one present in the hierarchy by default.

HD Shadow Settings
The HD Shadow Settings let you determine the overall quality of the Shadows in a Volume. The Max Distance field calculates the quality of the Shadows based on the distance of the Camera from the Shadow.

Visual Environment
You have two drop-down menus within Visual Environment. Sky Type provides three options: Procedural Sky, Gradient Sky and HDRI Sky. The Procedural Sky produces an environment based on the values you choose within the Procedural Sky component. HDRI Sky constructs an environment map based on an image set within the component. By default, the HDRI Sky component is not assigned to the Volume Settings; by clicking "Add component overrides..." at the bottom of the Inspector tab and selecting "HDRI Sky", the component will become available. Now you can assign an HDRI Sky Cubemap and alter the values to achieve accurate, real-world lighting.

The Unity HDRI Pack is available for free on the Asset Store from Unity Technologies and provides 7 pre-converted (1024x2048 resolution) HDR Cubemaps ready for use within your project. For this scene, "TreasureIslandWhiteBalancedNoSun" from the Unity HDRI Pack worked best, as it supplied enough light to brighten up the kitchen without washing it out. Of course, with the modifiers supplied within the component, such as Exposure and Multiplier, brightness can be altered and adjusted. It's important to pick an HDRI map that complements your scene.

Finally, Fog Type gives you three options: Linear, Exponential and Volumetric. To set their values, repeat the previous step ("Add Component Override", then adjust the relevant component in the Inspector).

Before the introduction of HDRP, creating a glass Material was not an easy endeavor: there was no simple way to construct a realistic glass Material without extensive research and shader programming, or resorting to the Asset Store for a custom shader. Now, with the new features of the HDRP Lit shader available in the Material Inspector, you can create glass that not only looks great but refracts light based on definable settings.

To start, we want to create a new HDRenderPipeline/Lit Material; this is the default Material shader applied to any new Material created in HDRP. To create a new Material, right-click within the preferred folder and choose Create > Material. The Material Inspector will now show the brand new HDRP Material Inspector, in which there are a few noticeable changes. Let's review them. Here you can start to determine the surface of the Material.

Surface Type
There are two options for the Surface Type: Opaque and Transparent.
Opaque simulates a completely solid Material, with no light penetration. In contrast, Transparent is an alpha blend that simulates a translucent surface; although useful, this type of surface is more costly to render. An important feature of HDRP is unified lighting across both transparent and opaque objects. Select Transparent for this example; this will provide access to the parameters discussed later below.

Double Sided
This preference allows the Material to be rendered on both sides. By default, the Normal Mode is set to Mirror, but within the drop-down we can select Flip or None. If Double Sided isn't active, Unity will only render the sides of the Material facing the camera's direction.

The Material type options create new behaviours that allow for even more realistic Materials. Each of these options provides additional parameters within the Inspector once activated.

Standard
Uses the basic parameters and is the default Material type.

Subsurface Scattering (SSS)
Subsurface Scattering works by simulating how light interacts with and penetrates translucent objects such as plants; it is also used in rendering skin. If you have ever shined a light through the tip of your finger, you will have seen that the light changes color as it is scattered under the surface. This can be replicated using this Material type. Once activated, a Transmission parameter will appear; using this, you can determine the translucency of an object with a Thickness Map. Both of these features can be manipulated using Diffusion Profiles. Two default profiles, called Skin and Foliage, are provided and can be used as a basis for these types of SSS Materials. An additional 13 profiles can be customised using the profile settings shown below. For a brief video demonstration, check out my Unity tip on SSS:

"I have always shied away from Subsurface Scattering (SSS) as it always sounded complicated! With HDRP, SSS has 2 preset profiles, as well as 13 other profiles which can be customised and add additional depth to any Material. See below for a simple video demo. #unitytips pic.twitter.com/NM4Z03l1U1" — Kieran Colenutt (@kierancolenutt), August 28, 2018

Anisotropy
Anisotropy simulates a surface whose properties change depending on its orientation, for example mimicking the look of brushed aluminum. Instead of creating a metallic surface with clean, neat reflections, you can use Tangent and Anisotropy Maps to alter the intensity of the reflections as well as their orientation.

Iridescence
Provides the parameters to create an iridescent effect on the surface of the Material, similar to how light appears on an oil spill. The output is determined by an Iridescence Map and an Iridescence Layer Thickness Map.

Specular Color
A Specular Color is used to control the color and strength of specular reflections in the Material. This makes it possible to have specular reflections of a different color than the diffuse reflection.

Translucent
The Translucent option can be extremely effective at simulating light interaction for vegetation.
This Material type uses profiles, similar to SSS, except in this case the Thickness Map is used to determine how light is transmitted.

There is also a parameter that enables the Material to respond to decal Materials; this works for both decal workflows, whether through the Decal Projector or as an object component.

Base Color + Opacity
At this point, the glass Material will still appear opaque. This is because you need to change the opacity value within the Inputs to allow light to penetrate. To do this, open the color swatch window next to "Base Color + Opacity". The Red, Green and Blue channels are used as the base color, and the alpha channel determines the opacity. The opacity of the Material is determined by a value from 0 to 255: 255 is fully opaque, 0 is fully transparent. For this example, we want to set the color of the Material to a light green, and we want to set the Opacity to 30, as this will make the Material mostly transparent. Below are the color values I used:
R - 201
G - 255
B - 211
Hexadecimal value - C9FFD3
The important thing to remember is that even if you set the alpha of the Material to a low numerical value, if you keep the Surface Type set to Opaque, the Material will not be transparent and will retain its opacity.

Metallic and Smoothness
These options can be altered with sliders ranging from 0 to 1. Both values and outputs are generated from the Mask Map's Alpha and Red channels, found below in the Inspector. When a Mask Map is assigned, the sliders are used to remap its minimum and maximum values.

Normal Map
With a Normal Map applied, its strength can be modified by adjusting the parameter slider within a range of 0 to 2. You can add additional detail and depth to your glass Material by applying a Normal Map containing, for example, indentations or scratches.

Mask Map
Within HDRP, a Mask Map is a combination of:
Red channel - Metallic, ranging from 0 to 1
Green channel - Ambient Occlusion
Blue channel - Detail Map Mask
Alpha channel - Smoothness
By default, textures imported into Unity use sRGB. Within the Texture Inspector, unchecking "sRGB (Color Texture)" converts the texture to a linear format. Because the Mask Map's channels are combined mathematically to generate an output rather than read as color, this texture must be linear.

Coat Mask
The Coat Mask simulates a clear-coat effect on the Material, increasing its smoothness along with it. By default, the Coat Mask value is set to 0, but the slider can adjust the parameter within a range of 0 to 1. The Coat Mask can be used to mimic Materials such as car paint or plastics.

Detail Inputs
The Detail Map is a new map introduced in HDRP: an amalgamation of additional maps that add minute detail to the Material. The Detail Map uses the following channels:
Red - Grayscale, using overlay blending
Green - Normal Map Y channel
Blue - Smoothness
Alpha - Normal Map X channel

By modifying the Transparency Input properties of the shader, you can start to determine the overall transparency effect. The Transparency Inputs only become available once the Surface Type is set to Transparent. For this example, the following section will enable you to create the refraction for the glass Material.
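As an aside, the same starting point can be sketched in code. The shader name comes from this post, but the property names (_BaseColor, _SurfaceType) are assumptions about the 2018 HDRP Lit shader rather than confirmed API, and in practice HDRP also updates shader keywords and the render queue when you switch a Material to Transparent in the Inspector. Treat this purely as illustration:

    using UnityEngine;

    public static class GlassMaterialSketch
    {
        public static Material CreateGlass()
        {
            // Default HDRP Material shader, as named in this post.
            var glass = new Material(Shader.Find("HDRenderPipeline/Lit"));

            // Light green at alpha 30/255, matching the values above.
            glass.SetColor("_BaseColor", new Color32(201, 255, 211, 30));

            // Assumed convention: 0 = Opaque, 1 = Transparent.
            glass.SetFloat("_SurfaceType", 1f);

            return glass;
        }
    }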
Refraction Model
The Refraction Model defines how the bending of light through the Material is simulated. There are two options, Plane and Sphere, and choosing between them depends on the shape and size of the object the Material is applied to:
Sphere: for filled objects, use the Sphere model with a Refraction Thickness comparable to the size of the object the Material is placed on.
Plane: for a hollow object, use the Plane model with a small Refraction Thickness.
The Index of Refraction and Refraction Thickness options allow you to control the behavior of the refraction model.

Index of Refraction
Ranging on a scale of 1 to 2.5, adjusting this parameter changes the refraction intensity. By default, the value is set to 1, which generates no refraction. Between 1.1 and 1.2 is where the refraction flips and the environment seen through the Material appears upside down.

Now that the base of the glass Material has been made, custom adjustments can be added to assemble a Material that will work best for you and for the object the Material is being applied to.

I hope this overview has helped you to better understand how to practically apply HDRP within your projects! While it's still in an experimental preview, we have some preliminary documentation on GitHub that you can also read to get started. HDRP is an ever-growing, exciting new tool for creating projects, and I just can't wait to see what you're going to make with it. Feel free to contact me on Twitter @kierancolenutt with any questions or queries. I want to hear about your experiences, so let me know how it's going! To follow and discuss the development of HDRP and SRP in general, join our experimental graphics forum.

>access_file_
1513|blog.unity.com

WebAssembly load times and performance

A few weeks ago we talked about WebAssembly and its advantages over asm.js. As promised, now it's time to look at the performance and load times of Unity WebGL in four major browsers.

It's been a long time since we ran the Unity WebGL benchmark and published our findings. During this time, both Unity and the browser vendors have released many new versions, added support for WebAssembly, and implemented post-launch optimizations, especially during the past year or so. On the Unity side, that means many changes have gone into the engine, both new features and optimizations, as well as WebGL 2.0 graphics API support and an updated emscripten.

What to expect, then? Given what we mentioned in the WebAssembly blog post, we expect Unity WebGL to perform better and load faster compared to the last time we ran the benchmark using asm.js.

We rebuilt the Benchmark project with Unity 2018.2.5f1 using the following Unity WebGL Player Settings: on WebAssembly, we take advantage of the automatic heap growth feature described in the Wasm blog post, so we set the Memory Size to the minimum value. To measure asm.js, we made a different build with a fixed Memory Size of 512 MB, which is enough to run the benchmark. We changed the linker target accordingly.

We tested four major browsers: Firefox 61, Chrome 70, Safari 11.1.2 and Edge 17. These are the latest stable releases at the time of writing this post; the only exception is Chrome 70, which is due to be released next month and contains a performance regression fix. We should mention that Firefox 62 also regressed in performance compared to Firefox 61, and we reported the issue to Mozilla.

Contrary to the last round of performance tests, we only tested desktop 64-bit browsers, for consistency of the results, and we used newer OS/HW:
Windows 10 - Intel Xeon W-2145, 32GB RAM, NVIDIA 1080
macOS 10.13.6 - 2018 MacBook Pro 15", Radeon Pro 560X

To run the Benchmark on your machine, use this link (or as a zip). Note that depending on the browser version and OS/HW you are testing on, performance may vary. On Windows, make sure to use a 64-bit browser. Note that there is nothing preventing you from running the benchmark on your own mobile device (the alert box about Unity WebGL not being supported on mobile devices has been disabled in this build).

One thing that has changed since last time is that, since Unity 5.6, Unity WebGL generates several unityweb files (Code, Data and JS Framework) that are downloaded on startup, or fetched from the browser's IndexedDB cache when loading the same content again. This works pretty much the same in both WebAssembly and asm.js; however, you can expect loading Wasm code to be faster for the simple reason that the generated Wasm code is smaller. The Benchmark project outputs 4.6 MB of compressed Wasm code as opposed to 6.1 MB for the asm.js version (the data file is 5.6 MB and the JS Framework file is ~87 KB).

Since network latency can affect the results, we measured Benchmark reloads (so that code and data were already in cache), and we served the build files locally. In addition, to speed up unityweb file loads from IndexedDB, we changed the cacheControl setting to immutable (the default is must-revalidate). Here is how you can do the same in your own project's html template:

    var instance = UnityLoader.instantiate("gameContainer", "%UNITY_WEBGL_BUILD_URL%", {
        onProgress: UnityProgress,
        Module: {
            cacheControl: { "default": "immutable" },
        },
    });

This technique works well combined with the Name Files As Hashes setting, which makes Unity generate unique filenames.
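For reference, the two build configurations described above (Wasm with a minimal heap, asm.js with a fixed 512 MB heap) can also be scripted from the editor. A minimal sketch, assuming the Unity 2018.2 UnityEditor API:

    #if UNITY_EDITOR
    using UnityEditor;

    public static class WebGLBenchmarkSettings
    {
        // Wasm build: heap growth is automatic, so start small.
        public static void ConfigureWasm()
        {
            PlayerSettings.WebGL.linkerTarget = WebGLLinkerTarget.Wasm;
            PlayerSettings.WebGL.memorySize = 16; // MB; illustrative minimum
        }

        // asm.js comparison build: fixed 512 MB heap, as described above.
        public static void ConfigureAsmJs()
        {
            PlayerSettings.WebGL.linkerTarget = WebGLLinkerTarget.Asm;
            PlayerSettings.WebGL.memorySize = 512;
        }
    }
    #endif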
First, we are going to look at the total amount of time it takes to get to the main screen for both WebAssembly and asm.js (lower is better). Findings:
- Firefox is blazingly fast to load on both Windows and macOS.
- Both Chrome and Edge load massively faster when using WebAssembly.
- All browsers, except Safari, load faster with WebAssembly compared to asm.js.

Now let's dive into the numbers that are relevant to WebAssembly. We are going to measure:
- WebAssembly Instantiation: WebAssembly compilation and instantiation.
- Engine Initialization: Unity engine initialization and first scene load.
- Time to Screen: the time it takes to render the first frame.
- Time to Interactive: the time it takes to load and reach a stable frame rate.

Again, we are reloading the Benchmark, so the unityweb files can be fetched from the IndexedDB cache. Findings:
- Firefox is the fastest overall on both Windows and Mac.
- Edge performs really well. It's interesting to see that it compiles Wasm really quickly (even faster than Firefox) but is then a bit slower to initialize Unity (Engine Initialization).

As we can see, all browsers are faster to load when using WebAssembly compared to asm.js, but where does this improvement come from? This is mainly due to the fact that they have implemented tiered compilation for WebAssembly: the browser now performs a very quick compilation pass at startup, then optimizes hot functions later on. Firefox shipped a tiered compiler with Firefox 58, back in January, whereas Chrome shipped their new Liftoff compiler with Chrome 69. To give you a bit of perspective on this approach, let's see what difference it makes in Chrome: as we can see, the increase in engine initialization time is negligible, but the speed-up in WebAssembly Instantiation is massive. This is great news, since load times are critical for the web!

For more information about the tiering systems in browsers, check these blog posts:
- Firefox's new streaming and tiering compiler
- WebKit's Assembling WebAssembly
- Chrome's Liftoff

Bear in mind that the Benchmark project doesn't use a lot of assets and uses a small number of scripts. Both the code and data files are relatively small, but real-world projects might result in larger builds, which will impact the end user's experience. Although loading times from cache are pretty fast now, don't forget that you should still optimize your build size so that the first load time is reasonable. There won't be a second load if the user drops off while your content is loading the first time! We recommend checking out the Optimizing Binary Deployment Size Unite Berlin talk, as well as the Building and running a WebGL project manual page. Among the things that can also affect load times are shader compilation and audio decoding, so try to minimize those: the complexity of your shaders, as well as the audio assets in your build, can lead to slower loading.

As explained in the first blog post, the benchmark consists of a collection of scenes that stress different parts of the Unity engine and produce a score based on the number of iterations that can be performed in a limited amount of time. Last time, Firefox outperformed the other browsers.
Let's see what changed. Here is an overview of the total scores using WebAssembly and asm.js (higher scores are better). Findings:
- All browsers perform better when using WebAssembly.
- On Windows, all browsers perform very similarly.
- On macOS, Firefox outperforms all other browsers. Notice that even its asm.js implementation is faster than the other browsers' WebAssembly implementations.
- Safari is the browser that benefits the most from WebAssembly, since it doesn't support asm.js optimizations.

Now, let's take a look at the individual benchmark scores (scaled so that Chrome equals 1). Firefox is the fastest browser in nearly all benchmark scenes and excels in a few individual tests. However, if you measure Firefox 62, it will not perform as well, because of the performance regression mentioned earlier; we expect this problem to be fixed soon.

Note that WebGL 2.0, a feature we haven't benchmarked before, is enabled by default in the build we used. So Chrome and Firefox use WebGL 2.0, whereas Edge and Safari still use WebGL 1.0. Having said that, we tried disabling it so that all browsers would use the same graphics API, but that didn't seem to affect the results. Outside the context of a simple demo project, however, WebGL 2.0 will result in reduced GC pressure, and the frame rate will therefore be more stable. For more information about performance in Unity WebGL, please check the WebGL performance considerations page in the manual.

The main takeaway is that, today, modern browsers load faster and perform better thanks to WebAssembly, and you can expect a more consistent user experience for your web content compared to asm.js. Having said that, we still recommend that you optimize your projects and test them on different browsers and OS/HW. In the future, we might update the benchmark project again so that it also stresses other areas, like ECS and the C# Job System, and test WebAssembly streaming instantiation/compilation as well as the upcoming multithreading support. We're looking forward to hearing your feedback on the Unity WebGL Forum.

>access_file_
1515|blog.unity.com

Extending Timeline: A practical guide

Unity launched Timeline along with Unity 2017.1, and since then we have received a lot of feedback about it. After talking with many developers and responding to users on the forums, we realized how many of you want to use Timeline as more than a simple sequencing tool. I have already delivered a couple of talks on this (for instance, at Unite Austin 2017), covering how to put Timeline to non-conventional uses. Timeline was designed with extensibility as a main goal from the beginning; the team that designed the feature always had in mind that users would want to create their own clips and tracks in addition to the built-in ones. As such, there are a lot of questions about scripting with Timeline. The system Timeline is built upon is powerful, but it can be difficult to work with for the uninitiated.

But first, what's Timeline? It is a linear editing tool to sequence different elements: animation clips, music, sound effects, camera shots, particle effects, and even other Timelines. In essence, it is very similar to tools such as Premiere®, After Effects®, or Final Cut®, with the difference that it is engineered for real-time playback. For a more in-depth look at the basics of Timeline, I advise you to visit the Timeline documentation section of the Unity Manual, since I will make extensive use of those concepts.

Timeline is implemented on top of the Playables API, a set of powerful APIs that allows you to read and mix multiple data sources (animation, audio and more) and play them through an output. This system offers precise programmatic control, has low overhead, and is tuned for performance. Incidentally, it's the same framework behind the state machine that drives the Animator component, and if you have programmed for the Animator, you will probably recognize some familiar concepts. Basically, when a Timeline begins playing, a graph is built, composed of nodes called Playables. They are organised in a tree-like structure called the PlayableGraph.

Note: If you want to visualise the tree of any PlayableGraph in the scene (Animators, Timelines, etc.), you can download a tool called PlayableGraph Visualizer. This post uses it to visualize the graphs for the different custom clips.

I will now go through three simple examples that will show you how to extend Timeline. In order to lay the groundwork, I will begin with the easiest way to add a script to Timeline; then, more concepts will be added gradually to make use of most of the functionality. I have packaged a small demo project with all of the examples used in this post. Feel free to download it to follow along; otherwise, you can enjoy the post on its own.

Note: For the assets, I have used prefixes to differentiate the classes in each example ("Simple_", "Track_", "Mixer_", etc.). In the code below, these prefixes are omitted for the sake of readability.

This first example is very simple: the goal is to change the color and intensity of a Light component with a custom clip. To create a custom clip, you need two scripts: one for the data, inheriting from PlayableAsset, and one for the logic, inheriting from PlayableBehaviour. A core tenet of the Playables API is the separation of logic and data. This is why you will first create a PlayableBehaviour, in which you write what you want to do, like the sketch below. What's going on here? First, there is information about which properties of the Light you want to change. Also, PlayableBehaviour has a method named ProcessFrame that you can override. ProcessFrame is called on each update.
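The original post showed the two Example 1 scripts as images that were lost in this capture; here is a hedged reconstruction (the Playables API calls are the standard ones, while field names are illustrative):

    using UnityEngine;
    using UnityEngine.Playables;

    public class LightControlBehaviour : PlayableBehaviour
    {
        public Light light = null;
        public Color color = Color.white;
        public float intensity = 1f;

        // Called on each update while the clip plays.
        public override void ProcessFrame(Playable playable, FrameData info, object playerData)
        {
            if (light != null)
            {
                light.color = color;
                light.intensity = intensity;
            }
        }
    }

    public class LightControlAsset : PlayableAsset
    {
        public ExposedReference<Light> light;
        public Color color = Color.white;
        public float intensity = 1f;

        public override Playable CreatePlayable(PlayableGraph graph, GameObject owner)
        {
            // Create a playable carrying our custom behaviour...
            var playable = ScriptPlayable<LightControlBehaviour>.Create(graph);

            // ...then copy the clip's data onto it, resolving the
            // ExposedReference against the scene.
            var behaviour = playable.GetBehaviour();
            behaviour.light = light.Resolve(graph.GetResolver());
            behaviour.color = color;
            behaviour.intensity = intensity;
            return playable;
        }
    }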
In that method, you can set the Light's properties. Here's the list of methods you can override in PlayableBehaviour. Then, you create a PlayableAsset for the custom clip: the LightControlAsset in the sketch above. A PlayableAsset has two purposes. First, it contains the clip data, as it is serialized within the Timeline asset itself. Second, it builds the PlayableBehaviour that will end up in the Playable graph. Look at the first line of CreatePlayable: this creates a new Playable and attaches a LightControlBehaviour, our custom behaviour, to it. You can then set the light properties on the PlayableBehaviour.

What about the ExposedReference? Since a PlayableAsset is an asset, it is not possible to refer directly to an object in a scene. An ExposedReference therefore acts as a promise that an object will be resolved when CreatePlayable is called.

Now you can add a Playable Track in the timeline, and add the custom clip by right-clicking on that new track. Assign a Light component to the clip to see the result. In this scenario, the built-in Playable Track is a generic track that can accept simple Playable clips such as the one you just created. For more complex situations, you will need to host the clips on a dedicated track.

One caveat of the first example is that each time you add your custom clip, you need to assign a Light component to each one of your clips, which can be tedious if you have a lot of them. You can solve this by using a track's bound object. A track can have an object or a component bound to it, which means that each clip on the track can then operate on the bound object directly. This is very common behaviour; in fact, it's how the Animation, Activation, and Cinemachine tracks work.

If you want to modify the properties of a Light with multiple clips, you can create a custom track that asks for a Light component as its bound object. To create a custom track, you need another script, one that extends TrackAsset (sketched at the end of this example). There are two attributes on it: TrackClipType specifies which PlayableAsset type the track will accept, in this case our custom LightControlAsset; TrackBindingType specifies which type of binding the track will ask for (it can be a GameObject, a Component, or an Asset), in this case a Light component.

You also need to slightly modify the PlayableAsset and PlayableBehaviour in order to make them work with a track. For reference, I have commented out the lines that you don't need anymore. The PlayableBehaviour doesn't need a Light variable now: in this case, the ProcessFrame method provides the track's bound object directly, and all you need is to cast the object to the appropriate type. That's neat! The PlayableAsset doesn't need to hold an ExposedReference for a Light component anymore; the reference will be managed by the track and given directly to the PlayableBehaviour.

In our timeline, we can add a LightControl track and bind a Light to it. Now, each clip we add to that track will operate on the Light component that is bound to the track. If you use the Graph Visualizer to display this graph, it looks something like this: as expected, you see the clips on the right side as 5 blocks that feed into one. You can think of that one box as the track. Then, everything goes into the Timeline: the purple box.

Note: The pink box called "Playable" is actually a courtesy mixer Playable that Unity creates for you; that's why it's the same colour as the clips. What is a mixer? I'll talk about mixers in the next example.
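Again reconstructing the lost code images, the Example 2 scripts come down to roughly this (the attributes and the playerData cast follow the standard Timeline API):

    using UnityEngine;
    using UnityEngine.Playables;
    using UnityEngine.Timeline;

    [TrackClipType(typeof(LightControlAsset))]
    [TrackBindingType(typeof(Light))]
    public class LightControlTrack : TrackAsset { }

    public class LightControlBehaviour : PlayableBehaviour
    {
        // public Light light;   // not needed anymore: the track provides it
        public Color color = Color.white;
        public float intensity = 1f;

        public override void ProcessFrame(Playable playable, FrameData info, object playerData)
        {
            // The track's bound object arrives through playerData.
            var light = playerData as Light;
            if (light == null)
                return;

            light.color = color;
            light.intensity = intensity;
        }
    }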
Timeline supports overlapping clips to create blending, or crossfading, between them. Custom clips also support blending; to enable it, though, you need to create a mixer that accesses the data from all of the clips and blends it. A mixer derives from PlayableBehaviour, just like the LightControlBehaviour you used earlier; in fact, you still use the ProcessFrame function. The key difference is that this Playable is explicitly declared as a mixer by the track script, by overriding the function CreateTrackMixer. The LightControlTrack script now declares the mixer (see the sketch at the end of this example). When the Playable graph for this track is created, it will also create the new behaviour (the mixer) and connect it to all of the clips on the track.

You also want to move the logic from the PlayableBehaviour to the mixer. As such, the PlayableBehaviour will now look quite empty: it basically only contains the data that will come from the PlayableAsset at runtime. The mixer, on the other hand, will have all of the logic in its ProcessFrame function.

Mixers have access to all of the clips present on a track. In this case, you need to read the intensity and color values of all the clips currently participating in the blend, so you iterate through them with a for loop. On each cycle, you access the inputs (GetInput(i)) and build up the final values using the weight of each clip (GetInputWeight(i)) to obtain how much that clip is contributing to the blend. So, imagine you have two clips blending: one is contributing red and the other is contributing white. When the blend is a quarter of the way through, the color is 0.25 * Color.red + 0.75 * Color.white, which results in a slightly faded red. Once the loop is over, you apply the totals to the bound Light component. This lets you create something like this:

You can see now that the red box is exactly the mixer Playable that you programmed, and over which you now have full control. This is in contrast with Example 2 above, where the mixer was the default one provided by Unity. Also notice that, because the graph is in the middle of a blend, the green boxes 2 and 3 both have a bright line connecting to the mixer, indicating that their weights are each somewhere around 0.5.

Keep in mind that whenever you implement blends in a mixer, it's up to you to decide what the logic is. Blending two colors is easy, but what happens when you're blending (wild example) two clips which represent different AI states in your AI system? Two lines of dialogue in your UI? How do you blend two static poses in a stop-motion animation? Maybe your blend is not continuous, but "stepped", so the poses morph into each other in discrete increments (0, 0.25, 0.5, 0.75, 1). With this powerful system at your disposal, the scenarios are exciting and endless!
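Here is a hedged reconstruction of the Example 3 scripts: the track now overrides CreateTrackMixer, and the mixer's ProcessFrame does the weighted blend described above (standard Timeline API; the class layout is illustrative):

    using UnityEngine;
    using UnityEngine.Playables;
    using UnityEngine.Timeline;

    [TrackClipType(typeof(LightControlAsset))]
    [TrackBindingType(typeof(Light))]
    public class LightControlTrack : TrackAsset
    {
        public override Playable CreateTrackMixer(PlayableGraph graph, GameObject go, int inputCount)
        {
            return ScriptPlayable<LightControlMixerBehaviour>.Create(graph, inputCount);
        }
    }

    public class LightControlMixerBehaviour : PlayableBehaviour
    {
        public override void ProcessFrame(Playable playable, FrameData info, object playerData)
        {
            var light = playerData as Light;
            if (light == null)
                return;

            var finalColor = Color.black;
            var finalIntensity = 0f;

            // Accumulate each clip's values, weighted by how much that
            // clip currently contributes to the blend.
            for (int i = 0; i < playable.GetInputCount(); i++)
            {
                float weight = playable.GetInputWeight(i);
                var input = (ScriptPlayable<LightControlBehaviour>)playable.GetInput(i);
                var behaviour = input.GetBehaviour();

                finalColor += behaviour.color * weight;
                finalIntensity += behaviour.intensity * weight;
            }

            light.color = finalColor;
            light.intensity = finalIntensity;
        }
    }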
As a final step in this guide, let's go back to the previous example and implement a different way of moving data around, using something we refer to as "templates". One big advantage of this pattern is that it lets you keyframe the properties of the template, making it possible to create animations for custom clips directly on the Timeline. In the previous example, you had a reference to the Light component, the color, and the intensity on both the PlayableAsset and the PlayableBehaviour. The data was set up on the PlayableAsset in the Inspector, then copied into the PlayableBehaviour at runtime when creating the graph. This is a valid way of doing things, but it duplicates the data, which then needs to be kept in sync at all times. This can easily lead to mistakes.

Instead, you can use the concept of a PlayableBehaviour "template", by creating a reference to it in the PlayableAsset. So, first, rewrite your LightControlAsset so that it holds a LightControlBehaviour template field and passes it to ScriptPlayable<LightControlBehaviour>.Create(graph, template) in CreatePlayable. The LightControlAsset now only has a reference to the LightControlBehaviour rather than the values themselves: it's even less code than before! Leave the LightControlBehaviour unchanged. The reference to the template now automatically produces this Inspector when you select the clip in the Timeline:

Once you have this script in place, you are ready to animate. Notice that if you create a new clip, you will see a circular red button on the Track Header. This means that the clip can now be keyframed without needing to add an Animator to it. You just click the red button, select the clip, position the playhead where you want to create a key, and change the value of that property. You can also expand the Curves view by clicking on the white box button, to see the curves created by the keyframes.

There's one extra perk: you can double-click on the Timeline clip, and Unity will open the Animation panel and link it to Timeline. You will notice they are linked when this button shows up: when this happens, you can scrub both the Timeline and the Animation window and the playheads will be kept in sync, so you have full control over your keyframes. You can now modify your animation in the Animation window, working on the keyframes in a more comfortable environment: in this view, you can use the full power of animation curves and the dopesheet to really refine the animations of your custom clips.

Note: When you animate things this way, you are creating Animation Clips. You can find them under the Timeline asset.

I hope this post was a valuable introduction to the endless possibilities that Timeline can offer when you take it to the next level with scripting. Please ping me on Twitter with your questions, feedback, and your Timeline creations!

>access_file_
1516|blog.unity.com

Animation C# Jobs

In Unity 2018.2, the Animation C# Jobs feature extends the animation Playables with the C# Job System released with 2018.1. It gives you the freedom to create original solutions when implementing your animation system, and to improve performance with safe multithreaded code at the same time. Animation C# Jobs is a low-level API that requires a solid understanding of the Playables API; it's therefore aimed at developers who are interested in extending the Unity animation system beyond its out-of-the-box capabilities. If that sounds like you, read on to find out when it's a good idea to use it and how to get the most out of it!

With Animation C# Jobs, you can write C# code that will be invoked at user-defined places in the PlayableGraph, and, thanks to the C# Job System, you can harness the power of modern multicore hardware. For projects that see a significant cost in C# scripts on the main thread, some of the animation tasks can be parallelized, which unlocks valuable performance gains. User-made C# scripts can modify the animation stream that flows through the PlayableGraph. In short:
- a new Playable node: AnimationScriptPlayable;
- control over the animation data stream in the PlayableGraph;
- multithreaded C# code.

Animation C# Jobs is still an experimental feature (living in UnityEngine.Experimental.Animations). The API might change a bit over time, depending on your feedback, so please join the discussion on our Animation Forum!

So, say you want to have a foot-locking feature for your brand new dragon character. You could code that with a regular MonoBehaviour, but all the code would run on the main thread, and not until the animation pass is over. With Animation C# Jobs, you can write your algorithm and use it directly in a custom Playable node in your PlayableGraph, and the code will run during PlayableGraph processing, in a separate thread. Or, if you didn't want to hand-animate the tail of your dragon, Animation C# Jobs would be the perfect tool for setting up the ability to compute this movement procedurally. It also gives you the ability to write a super-specific LookAt algorithm that could, for example, target the 10 bones in your dragon's neck. Another great example is making your own animation mixer. Let's say you need something very specific: a node that takes positions from one input, rotations from another, scales from a third, and mixes them all together into a single animation stream. Animation C# Jobs gives you the ability to get creative and build for your specific needs.

Before getting into the meaty details of how to use the Animation C# Jobs API, let's take a look at some examples that showcase what is possible with this feature. All the examples are available on our Animation Jobs Samples GitHub page. To install them, you can either git clone the repository or download the latest release. Once installed, the examples have their own scenes, which are all located in the "Scenes" directory.

The LookAt is a very simple example that orients a bone (also called a joint) toward an effector. In the example below, you can see how it works on a quadruped from our 3D Game Kit package.

The TwoBoneIK implements a simple two-bone IK algorithm that can be applied to three consecutive joints (e.g. a human arm or leg). The character in this demo is made with a generic humanoid avatar.

The FullbodyIK example shows how to modify values in a humanoid avatar (e.g. goals, hints, look-at, body rotation, etc.).
This example, in particular, uses the human implementation of the animation stream.

The Damping example implements a damping algorithm that can be applied to an animal tail or a human ponytail; it illustrates how to generate a procedural animation.

The SimpleMixer is a sort of "Hello, world!" of animation mixers. It takes two input streams (e.g. animation clips) and mixes them together based on a blending value, exactly like an AnimationMixerPlayable would do.

The WeightedMaskMixer example is a slightly more advanced animation mixer. It takes two input streams and mixes them together based on a weight mask that defines how to blend each and every joint. For example, you can play a classic idle animation and take just the animation of the arms from another animation clip, or you can smooth the blend of an upper-body animation by applying successively higher weights on the spine bones.

The Animation C# Jobs feature is powered by the Playables API. It comes with three new structs: AnimationScriptPlayable, IAnimationJob, and AnimationStream. The AnimationScriptPlayable is a new animation Playable which, like any other Playable, can be added anywhere in a PlayableGraph. The interesting thing about it is that it contains an animation job and acts as a proxy between the PlayableGraph and the job. The job is a user-defined struct that implements IAnimationJob. A regular job processes the Playable input streams and mixes the result into its own stream. The animation process is separated into two passes, and each pass has its own callback in IAnimationJob:
- ProcessRootMotion handles the root transform motion; it is always called before ProcessAnimation, and it determines whether ProcessAnimation will be called (this depends on the Animator culling mode);
- ProcessAnimation handles everything else that is not the root motion.

The example below is like the "Hello, world!" of Animation C# Jobs. It does nothing at all, but it allows us to see how to create an AnimationScriptPlayable with an animation job:

    using UnityEngine;
    using UnityEngine.Playables;
    using UnityEngine.Animations;
    using UnityEngine.Experimental.Animations;

    public struct AnimationJob : IAnimationJob
    {
        public void ProcessRootMotion(AnimationStream stream) { }

        public void ProcessAnimation(AnimationStream stream) { }
    }

    [RequireComponent(typeof(Animator))]
    public class AnimationScriptExample : MonoBehaviour
    {
        PlayableGraph m_Graph;
        AnimationScriptPlayable m_ScriptPlayable;

        void OnEnable()
        {
            // Create the graph.
            m_Graph = PlayableGraph.Create("AnimationScriptExample");

            // Create the animation job and its playable.
            var animationJob = new AnimationJob();
            m_ScriptPlayable = AnimationScriptPlayable.Create(m_Graph, animationJob);

            // Create the output and link it to the playable.
            var output = AnimationPlayableOutput.Create(m_Graph, "Output", GetComponent<Animator>());
            output.SetSourcePlayable(m_ScriptPlayable);
        }

        void OnDisable()
        {
            m_Graph.Destroy();
        }
    }

The stream passed as a parameter to the IAnimationJob methods is the one you will be working on during each processing pass. By default, all the AnimationScriptPlayable inputs are processed. In the case of only one input (a.k.a. a post-process job), this stream will contain the result of the processed input. In the case of multiple inputs (a.k.a. a mix job), it's preferable to process the inputs manually; to do so, the method AnimationScriptPlayable.SetProcessInputs(bool) will enable or disable the processing passes on the inputs.
The AnimationStream gives you access to the data that flows through the graph from one Playable to another. It exposes all the values animated by the Animator component:

public struct AnimationStream
{
    public bool isValid { get; }
    public float deltaTime { get; }

    public Vector3 velocity { get; set; }
    public Vector3 angularVelocity { get; set; }

    public Vector3 rootMotionPosition { get; }
    public Quaternion rootMotionRotation { get; }

    public bool isHumanStream { get; }
    public AnimationHumanStream AsHuman();

    public int inputStreamCount { get; }
    public AnimationStream GetInputStream(int index);
}

Direct access to the stream data isn’t possible, because the same data can sit at a different offset in the stream from one frame to the next (for example, after adding or removing an AnimationClip in the graph). The data may have moved, or may no longer exist in the stream. To ensure the safety and validity of these accesses, we’re introducing two sets of handles, the stream handles and the scene handles, and each set has a transform handle and a component property handle.

The stream handles manage, in a safe way, all accesses to the AnimationStream data. If an error occurs, the system throws a C# exception. There are two types of stream handles: TransformStreamHandle and PropertyStreamHandle.

The TransformStreamHandle manages a Transform and takes care of the transform hierarchy. That means you can change the local or global transform position in the stream, and subsequent position requests will give predictable results.

The PropertyStreamHandle manages all the other properties that the system can animate and find on other components. For instance, it can be used to read or write the value of the Light.m_Intensity property.

public struct TransformStreamHandle
{
    public bool IsValid(AnimationStream stream);
    public bool IsResolved(AnimationStream stream);
    public void Resolve(AnimationStream stream);

    public void SetLocalPosition(AnimationStream stream, Vector3 position);
    public Vector3 GetLocalPosition(AnimationStream stream);
    public void SetLocalRotation(AnimationStream stream, Quaternion rotation);
    public Quaternion GetLocalRotation(AnimationStream stream);
    public void SetLocalScale(AnimationStream stream, Vector3 scale);
    public Vector3 GetLocalScale(AnimationStream stream);

    public void SetPosition(AnimationStream stream, Vector3 position);
    public Vector3 GetPosition(AnimationStream stream);
    public void SetRotation(AnimationStream stream, Quaternion rotation);
    public Quaternion GetRotation(AnimationStream stream);
}

public struct PropertyStreamHandle
{
    public bool IsValid(AnimationStream stream);
    public bool IsResolved(AnimationStream stream);
    public void Resolve(AnimationStream stream);

    public void SetFloat(AnimationStream stream, float value);
    public float GetFloat(AnimationStream stream);
    public void SetInt(AnimationStream stream, int value);
    public int GetInt(AnimationStream stream);
    public void SetBool(AnimationStream stream, bool value);
    public bool GetBool(AnimationStream stream);
}
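To give a feel for how these handles are used inside a job, here is a small hypothetical sketch (not from the post): it overwrites one bone’s local rotation and drives a Light’s intensity. The handles are assumed to have been created up front with the Animator binding methods covered below:

using UnityEngine;
using UnityEngine.Experimental.Animations;

public struct HandleJob : IAnimationJob
{
    // Created once at setup time, e.g. with Animator.BindStreamTransform
    // and Animator.BindStreamProperty (see AnimatorJobExtensions below).
    public TransformStreamHandle bone;
    public PropertyStreamHandle lightIntensity;

    public Quaternion rotation;
    public float intensity;

    public void ProcessRootMotion(AnimationStream stream) { }

    public void ProcessAnimation(AnimationStream stream)
    {
        // Overwrite the bone's animated local rotation in the stream.
        bone.SetLocalRotation(stream, rotation);

        // Write a float property (e.g. Light.m_Intensity) into the stream.
        lightIntensity.SetFloat(stream, intensity);
    }
}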
The scene handles are another form of safe access to any values, but from the scene rather than from the AnimationStream. As with the stream handles, there are two types of scene handles: TransformSceneHandle and PropertySceneHandle.

A concrete usage of a scene handle is implementing an effector for a foot IK. The IK effector is usually a GameObject that is not animated by an Animator, and therefore external to the transforms modified by the animation clips in the PlayableGraph. The job needs to know the global position of the IK effector in order to compute the desired position of the foot. Thus the IK effector is accessed through a scene handle, while stream handles are used for the leg bones.

public struct TransformSceneHandle
{
    public bool IsValid(AnimationStream stream);

    public void SetLocalPosition(AnimationStream stream, Vector3 position);
    public Vector3 GetLocalPosition(AnimationStream stream);
    public void SetLocalRotation(AnimationStream stream, Quaternion rotation);
    public Quaternion GetLocalRotation(AnimationStream stream);
    public void SetLocalScale(AnimationStream stream, Vector3 scale);
    public Vector3 GetLocalScale(AnimationStream stream);

    public void SetPosition(AnimationStream stream, Vector3 position);
    public Vector3 GetPosition(AnimationStream stream);
    public void SetRotation(AnimationStream stream, Quaternion rotation);
    public Quaternion GetRotation(AnimationStream stream);
}

public struct PropertySceneHandle
{
    public bool IsValid(AnimationStream stream);
    public bool IsResolved(AnimationStream stream);
    public void Resolve(AnimationStream stream);

    public void SetFloat(AnimationStream stream, float value);
    public float GetFloat(AnimationStream stream);
    public void SetInt(AnimationStream stream, int value);
    public int GetInt(AnimationStream stream);
    public void SetBool(AnimationStream stream, bool value);
    public bool GetBool(AnimationStream stream);
}

The last piece is the AnimatorJobExtensions class. It’s the glue that makes it all work: it extends the Animator to create the four handles seen above, through four methods: BindStreamTransform, BindStreamProperty, BindSceneTransform, and BindSceneProperty.

public static class AnimatorJobExtensions
{
    public static TransformStreamHandle BindStreamTransform(this Animator animator, Transform transform);
    public static PropertyStreamHandle BindStreamProperty(this Animator animator, Transform transform, Type type, string property);
    public static TransformSceneHandle BindSceneTransform(this Animator animator, Transform transform);
    public static PropertySceneHandle BindSceneProperty(this Animator animator, Transform transform, Type type, string property);
}

The “BindStream” methods can be used to create handles on properties that are already animated, or on newly animated properties in the stream.

API documentation:
- AnimationScriptPlayable
- IAnimationJob
- AnimationStream
- TransformStreamHandle
- PropertyStreamHandle
- TransformSceneHandle
- PropertySceneHandle

If you encounter a bug, please file it using the Bug Reporter built into Unity.
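As a closing illustration, here is a minimal end-to-end sketch (hypothetical names, not code from the post) of how the binding extensions, a stream handle, and a scene handle might combine for the effector setup described above. The job simply copies the effector’s global position onto a bone, standing in for the real two-bone solve a foot IK would perform:

using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;
using UnityEngine.Experimental.Animations;

public struct FollowEffectorJob : IAnimationJob
{
    public TransformSceneHandle effector; // scene object, not animated by the Animator
    public TransformStreamHandle bone;    // bone animated in the stream

    public void ProcessRootMotion(AnimationStream stream) { }

    public void ProcessAnimation(AnimationStream stream)
    {
        // Read the effector's global position from the scene and write it
        // onto the bone in the stream. A real foot IK would run its solver
        // here instead of copying the position directly.
        bone.SetPosition(stream, effector.GetPosition(stream));
    }
}

[RequireComponent(typeof(Animator))]
public class FollowEffectorExample : MonoBehaviour
{
    public Transform bone;     // a transform in the Animator's hierarchy
    public Transform effector; // any scene transform

    PlayableGraph m_Graph;

    void OnEnable()
    {
        var animator = GetComponent<Animator>();

        // Bind the handles once, then hand the job to the playable.
        var job = new FollowEffectorJob
        {
            bone = animator.BindStreamTransform(bone),
            effector = animator.BindSceneTransform(effector)
        };

        m_Graph = PlayableGraph.Create("FollowEffectorExample");
        var playable = AnimationScriptPlayable.Create(m_Graph, job);

        var output = AnimationPlayableOutput.Create(m_Graph, "Output", animator);
        output.SetSourcePlayable(playable);
        m_Graph.Play();
    }

    void OnDisable()
    {
        m_Graph.Destroy();
    }
}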

1518|blog.unity.com

How to create retention in playable ads

In contrast to other ad formats, playable ads are unique in that they aren’t just one component in a larger user acquisition funnel: they’re a funnel in and of themselves. The goal is to push users down that funnel, getting them to progress past the ad’s tutorial all the way through to the call-to-action.

This means we can break the ad experience into multiple touch points, just as you would when designing a mobile game. PlayWorks, ironSource’s in-house creative studio, quickly made that mental connection, understanding that game design principles can be applied effectively to ad design. The team did just that, marrying user acquisition and game design for the first time.

By treating playable ads like mini-games, the PlayWorks team was able to significantly increase various metrics, including “in-ad retention rates.” That’s where having a creative team made up of game designers and developers comes in handy.

Where game design and user acquisition meet

There are several parts to a playable ad: the tutorial introducing the player to the game, the gameplay itself, and the end card which displays the call-to-action (CTA). Depending on your UA campaign’s KPIs, any part can be optimized for retention. Let’s take a look.

D1 retention is our S1 retention

Instead of optimizing for Day 1 (D1) retention as you would in a mobile game, in a playable ad we optimize for Second 1 (S1) retention. At S1, retention rests on making sure users understand the ad is interactive and playable. This is where short, clear copy and icons like blinking hands work best. Tell the user they need to take action, using strong action verbs like “match,” “tap,” and “strike.”

High S1 retention is a good indication of how well the playable ad will perform. If you set your KPI to S1 retention, your campaign is likely centered on brand awareness. In other words, you want users to see your logo, associate your brand or game with a fun and enjoyable ad experience, and hope to convert them in a retargeting campaign later on. You’re measuring the initial look and feel, and first impressions of the ad.

S6 retention

In a 30-second playable ad, you don’t have much time to teach users how to play the mini-game. The user has to understand how the playable ad works in the first few seconds. That means the ‘tutorial’, which takes place in the first few seconds of the ad, must be optimized for retention.

There are many tips and best practices for increasing S6 retention: for example, show hands indicating where to swipe, highlight key buttons, offer obvious hints, and provide concise, explicit instructions that are impossible to miss.

At S6, you’re measuring the effectiveness of ‘onboarding’ the user into the ad. If users don’t understand how to play the mini-game, they may get frustrated and close the ad. Or worse, they may continue playing the ad, install the app, and then uninstall it later after realizing they don’t enjoy the game.

S14 retention

Halfway through the ad, at approximately 14 seconds, the user should be completely engaged, having fully understood the tutorial. The gameplay makes up the bulk of the ad, making it one of the most important, yet most difficult, sections to optimize.

Just like in a mobile game, if the gameplay of a playable ad is too difficult, users will grow frustrated and move on. If it’s too easy, users might not feel that installing the app is worth their time. You need to find the sweet spot, setting the gameplay on the easy side of medium to get the best results.
Guide them, but don’t give them all the answers. Keep them interested, and above all keen to come back for more.

It’s also important to know your genre. We’ve noticed an interesting correlation between game genre and difficulty: users who lose in hyper-casual playable ads are more likely to install. This type of information will guide your optimization strategy.

To still be playing at S14, users must have progressed through the tutorial, or ‘early onboarding’ phase, and be engaged with the core gameplay loop. Players at S14 are high-quality users who enjoy the gameplay, and thus are more likely to enjoy your game as well, making S14 a measure of overall playable ad enjoyment.

S30 retention

By S30, the user has completed the gameplay, reached the end card, and seen the call-to-action to install the game. This is as far as a user can get, making these users the highest-quality acquired players possible. Any user who isn’t interested will have already x’d out by now. But not all players make it this far, and just because they have, it doesn’t necessarily mean they’ll click through to the app store and install.

In other words, quality can be very high, but scale is low, and the work of the ad is still not done. Ultimately, you still need to close the loop and entice users to click through and install. That’s what makes optimizing for S30 one of the more difficult parts of playable ad creative optimization.

In addition to the copy in the call-to-action, the colors of the button, and the graphics, it’s important to keep the difficulty level in mind here as well. Did the player win or lose? The end result significantly impacts S30 retention, as users who win are more likely to make it through S30 and eventually convert.

Wrapping up

We understand that there’s no quick fix or one-size-fits-all solution for increasing a game’s retention rates. It takes failing, optimizing, learning, and tweaking to first understand what makes your users tick, and then adapt your game design accordingly. Now, no matter the KPI, UA teams can be sure that their playable ads perform as well as their games.

Make sure you subscribe to Level Up to continue receiving tips from trailblazers in the gaming industry and stay updated on all things gaming.
