// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 82 of 85

[ 2016 ]

19 entries
1623|blog.unity.com

#UnityTips: ParticleSystem Performance – Culling

Culling is only possible when a system has predictable behaviour. Turning on a single module will not only add to that module's overhead, but may increase the overall system's impact by switching it from procedural to non-procedural mode. Changing values via script will also prevent culling. Using custom culling can provide a performance benefit, but only you, the developer, can decide if and when it is appropriate. Take into consideration the type of effect, whether the player may notice it isn't animating when invisible, and whether it is possible to predict the area it will affect.

Internally, each particle system has two modes of operation: procedural and non-procedural. In procedural mode it is possible to know the state of a particle system at any point in time (past and future), whereas a non-procedural system is unpredictable. This means a procedural system can be quickly fast-forwarded (or rewound) to any point in time.

When a system goes out of the view of every camera, it becomes culled. When this occurs, a procedural system will halt updating; it will efficiently fast-forward to the new point in time when the system becomes visible again. A non-procedural system cannot do this: due to its unpredictable nature, it must continue updating even when invisible.

For example, the following system is predictable. It's in local space, so the movement of the particle system's transform does not matter, and the particles are not influenced by any external forces such as collisions, triggers or wind. This means we are able to calculate the total bounds that particles will exist within during the lifetime of the system (the yellow box) and can safely cull the system when it is not visible.

By changing the particle system to world space, it becomes unpredictable: when a particle is spawned, it needs to sample the position of the transform at that moment.
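The local-versus-world distinction can also be inspected from script. Below is a minimal sketch, assuming the Unity 5.5 module API (`ParticleSystem.main`); the component name is illustrative:

```csharp
using UnityEngine;

// Sketch: warn when a system uses world simulation space, which makes it
// unpredictable and therefore non-cullable. Assumes the Unity 5.5+ module API.
[RequireComponent(typeof(ParticleSystem))]
public class SimulationSpaceHint : MonoBehaviour
{
    void Start()
    {
        var ps = GetComponent<ParticleSystem>();
        if (ps.main.simulationSpace == ParticleSystemSimulationSpace.World)
            Debug.LogWarning(name + " simulates in world space, so it cannot be procedurally culled.", this);
    }
}
```

Note that this only reads the setting: writing particle system values from script would itself prevent culling.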
The position of the transform is unpredictable; its history and future are unknown. Therefore, the system must continue to update itself even when invisible, so that the particles are in the correct positions when it becomes visible again.

When a particle system doesn't support procedural mode, a small icon is displayed in the Inspector. Mousing over this icon shows a tooltip listing the reasons why the system no longer supports procedural mode and cannot be culled. It's also possible to tell that non-procedural mode is in use by looking at the bounding box of the particle system: a continually changing bounds that only encapsulates the particles is a sign that procedural mode isn't being used.

The following are examples of conditions that break support for procedural mode:

| Module | Property | What breaks it? |
| --- | --- | --- |
| Main | Simulation Space | World space |
| Main | Gravity modifier | Using curves |
| Emission | Rate over distance | Any non-zero value |
| External Forces | Enabled | true |
| Clamp Velocity | Enabled | true |
| Rotation by Speed | Enabled | true |
| Collision | Enabled | true |
| Triggers | Enabled | true |
| Sub Emitters | Enabled | true |
| Noise | Enabled | true |
| Trails | Enabled | true |
| Rotation by Lifetime | Angular Velocity | If using a curve and the curve does not support procedural* |
| Velocity over Lifetime | X, Y, Z | If using a curve and the curve does not support procedural* |
| Force over Lifetime | X, Y, Z | If using a curve and the curve does not support procedural* |
| Force over Lifetime | Randomize | Enabled |

*A curve cannot support procedural mode if it has more than 8 segments. The segment count is the number of keys, plus one if the curve does not start at 0.0 and another if it does not end at 1.0.

Procedural mode is based on knowing exactly how the system will behave at a specified point in time with no external influences. If a value is changed via script, or in the editor during play mode, those assumptions can't be made and procedural mode is invalidated.
This means that even though a system is using all procedurally-safe settings, it is no longer possible to use procedural mode and the system will no longer be culled. Changing a value or emitting via script will invalidate procedural mode, which you can detect by examining the bounds of the system in the scene: if the bounds are continuously changing, procedural mode is no longer being used. Sometimes this can be avoided by using the particle system's built-in features to change properties, instead of using a script. Calling Play on a system that has been stopped will reset the system and re-validate procedural mode.

The performance difference between a procedural and a non-procedural system can be significant, and is most noticeable when a system is off-screen. In a scene containing 120 default systems, each simulating 1000 particles, the following performance difference is shown between local space (procedural) and world space (non-procedural). The left area shows the performance when not culled, and the right shows when culled.

The following example shows a simple 2D rain effect that uses the collision module (breaking procedural mode). By using the collision module, the system is now unpredictable; colliders could be moved or have their properties changed throughout the life of the system. This means that predicting where particles will be in the future is impossible, and therefore the system must continue to update whilst culled.

We can see that the collision effect is localised within an area and that we will not be moving the transform during the effect's lifetime; the particle system has no way to know this, though. It is safe for an effect like this not to be updated whilst invisible, so it could benefit from custom culling. The CullingGroup API can be used to integrate with Unity's culling system: with it, we can create a culling area using bounding spheres.
When the spheres go in and out of visibility, a notification is sent; we can use this to pause the particle system when it is not visible and resume it when it becomes visible again. One downside is that off-screen particles will appear motionless, which can be noticeable in some effects. It's possible to hide this issue by simulating the system forward a little, to give the illusion that the system was still active whilst not visible.

```csharp
using UnityEngine;

public class CustomParticleCulling : MonoBehaviour
{
    public float cullingRadius = 10;
    public ParticleSystem target;

    CullingGroup m_CullingGroup;
    Renderer[] m_ParticleRenderers;

    void OnEnable()
    {
        if (m_ParticleRenderers == null)
            m_ParticleRenderers = target.GetComponentsInChildren<Renderer>();

        if (m_CullingGroup == null)
        {
            m_CullingGroup = new CullingGroup();
            m_CullingGroup.targetCamera = Camera.main;
            m_CullingGroup.SetBoundingSpheres(new[] { new BoundingSphere(transform.position, cullingRadius) });
            m_CullingGroup.SetBoundingSphereCount(1);
            m_CullingGroup.onStateChanged += OnStateChanged;

            // Start in the correct state, in case the system begins off-screen.
            Cull(m_CullingGroup.IsVisible(0));
        }

        m_CullingGroup.enabled = true;
    }

    void OnDisable()
    {
        if (m_CullingGroup != null)
            m_CullingGroup.enabled = false;

        target.Play(true);
        SetRenderers(true);
    }

    void OnDestroy()
    {
        if (m_CullingGroup != null)
            m_CullingGroup.Dispose();
    }

    void OnStateChanged(CullingGroupEvent sphere)
    {
        Cull(sphere.isVisible);
    }

    void Cull(bool visible)
    {
        if (visible)
        {
            // We could simulate forward a little here to hide that the system
            // was not updated off-screen.
            target.Play(true);
            SetRenderers(true);
        }
        else
        {
            target.Pause(true);
            SetRenderers(false);
        }
    }

    void SetRenderers(bool enable)
    {
        // Also disable the renderers to prevent drawing the particles,
        // such as when occlusion occurs.
        foreach (var particleRenderer in m_ParticleRenderers)
            particleRenderer.enabled = enable;
    }

    void OnDrawGizmos()
    {
        if (enabled)
        {
            // Draw a gizmo to show the culling sphere.
            Color col = Color.yellow;
            if (m_CullingGroup != null && !m_CullingGroup.IsVisible(0))
                col = Color.gray;
            Gizmos.color = col;
            Gizmos.DrawWireSphere(transform.position, cullingRadius);
        }
    }
}
```

Not all effects are suited to custom culling. The system on the left is custom culled and can clearly be seen to go out of sync, whilst the system on the right isn't culled. This illustrates why non-procedural systems must be updated when not visible.
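One pattern worth sketching from the notes above: changing values from script invalidates procedural mode, but calling Play on a stopped system resets and re-validates it. A minimal sketch of that restart pattern (the component name is illustrative):

```csharp
using UnityEngine;

// Sketch: restart a ParticleSystem via Stop/Play so procedural mode is
// re-validated, instead of mutating its values per frame from script.
public class RestartEffect : MonoBehaviour
{
    public ParticleSystem target;

    public void Restart()
    {
        target.Stop(true);  // stop the system and its children
        target.Play(true);  // reset and re-validate procedural mode
    }
}
```

Attach this to any GameObject and call Restart() when the effect should begin again, rather than tweaking emission or velocity values from a script.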

>access_file_
1624|blog.unity.com

Just in time for the holidays: How to get the most out of currency sales

In-app promotional offers can spark positive feelings for your game and earn you extra money to boot. But how do you get the most out of such campaigns? Here are a few best practices for maximizing currency sale promotions.

"Q4 with its various festivities and celebrations is the perfect quarter to benefit from video ads" (Jacob Krüger, Head of Marketing, Social Point)

It's important to consider the timing of your sales. Timing refers to when your campaign starts and ends, but also to how often you run your campaigns. You can take advantage of the highest-activity days of the week by running your campaigns on the weekend, for example. But it's important not to run currency sales more than twice a month; otherwise you risk lowering the perceived value of the rewards you normally offer within your game outside of your currency sales.

'Tis the season to play games

In addition to peak times like weekends, you might also want to take advantage of holiday seasons. As with weekends, the number of Daily Active Users (DAU) increases during holidays, offering you a great opportunity to double down on impressions. Plus, the holidays make your campaign communications seem more relevant and natural, so there's less chance of wearing out your welcome. You can dress your campaign up by designing themed updates around Christmas, Thanksgiving, or even Halloween, for example.

"Q4 with its various festivities and celebrations is the perfect quarter to benefit from video ads," says Jacob Krüger, Head of Marketing at Social Point, which has an estimated 50M+ monthly active users (MAU). "There's a lot of high-quality inventory available, and the users appreciate the chance to earn virtual currency. This, in turn, helps to increase user engagement and revenues of our games, such as Monster Legends, World Chef, and Dragon City," Krüger says.

On Social Point's Dragon Land Facebook community, some players even excitedly discuss upcoming campaigns and wonder when they'll be able to earn more valuable game gems. "Maybe for Christmas, they will add more than one campaign!!!" one player commented.

To make sure your players embrace your rewarded video ads, it's important to offer them clear incentives for watching. Players need to feel that the reward is worth the time they spend on the video ad. And during a currency sale, it has to be really obvious that the value they derive is even greater.

How do rewards become a cool part of your game?

More spins, bigger perks, double currency: the nature of the rewards depends on the given title. But the important thing is that they're designed to feel like a natural part of the game, and that players perceive them as adding value to their experience. A good example of this is the way the studio Seriously integrates ads seamlessly into gameplay. Seriously designs the rewards to fit right in with the wacky, cartoon-like style of their game Best Fiends, and at the same time the rewards themselves help players succeed in the challenging and addictive match-3 game.

To get the most out of your holiday campaign, it's important to ramp up expectations by letting your players know it's coming. You could inform players of an upcoming campaign via push notifications or some sort of in-game message: a timer counting down to when the currency sale goes live, for example.

How Futureplay gives players the scoop

Futureplay designed a special communication feature in Build Away!, which informs players of upcoming events in their city-building game: a built-in newspaper tells players about rewarded video sales on the way. After their recent currency sale went live, ad impressions increased by 50% during the events and stabilized at a higher baseline. "We saw a 20% increase in Average Revenue Per Daily Active User (ARPDAU) and a 10% boost in players' Lifetime Value (LTV)," said Camilo Fitzgerald, Analyst and Product Manager at Futureplay. "What's more, player retention and average time spent playing per day increased from 7 minutes to 8.5 minutes, and player feedback has been overwhelmingly positive on the new feature."

Will you be running a virtual currency sale this holiday season? If so, remember to time the start and end dates during peak periods, offer a clear exchange of value, and build anticipation by letting your players know it's coming. To learn exactly how one developer ran currency sales, check out how Futureplay did it.

>access_file_
1626|blog.unity.com

Free VFX image sequences and flipbooks

In the video game industry, it's not so common to have either the resources or the budget to author smoke, fire and explosion flipbooks. Here are some image sequences we want to share with you under the CC0 license. Feel free to use them in your projects!

At Unity Labs Paris, we are working on real-time VFX R&D tools, and over time we have authored some sequences in Houdini to try out those tools. Today, we release some of these sequences under the CC0 license so you can use them in your projects.

These sequences come in various flavors: either as raw frame sequences that you can import and assemble into flipbooks using our experimental VFX Toolbox Image Sequencer, or as already-assembled flipbook texture sheets (done by us using that same tool). Image sequences are available in zip packages of two kinds: HDR Linear EXR or Uncompressed TGA. Please note that EXR support and the Image Sequencer are compatible with Unity 5.5 and newer.

Each of the following is available as Flipbooks, an EXR Sequence and a TGA Sequence:

- CandleSmoke01
- Cloud01
- Cloud02
- Cloud03
- Cloud04
- DiscSmoke01
- Explosion00
- Explosion01
- Explosion01-light
- Explosion01-light-nofire
- Explosion01-nofire
- Explosion02
- Explosion02HD
- FireBall01
- FireBall02
- FireBall03
- FireBall04
- Flame02
- Flame02-temperature
- Flame03
- Flame03-hollow
- SmallFlame01-mini-temperature
- SmallFlame01-smaller-temperature
- SmallFlame01-temperature
- WispySmoke01
- WispySmoke02
- WispySmoke03
- WispySmoke03b

>access_file_
1629|blog.unity.com

Understanding memory in Unity WebGL

Some users are already familiar with platforms where memory is limited. For others, coming from desktop or the Web Player, this has never been an issue until now.

Targeting console platforms is relatively easy in this respect, since you know exactly how much memory is available: that allows you to budget your memory, and your content is guaranteed to run. On mobile platforms things are a bit more complicated because of the many different devices out there, but at least you can choose the lowest specs to target and decide to blacklist lower-end devices at the marketplace level. On the Web, you simply can't. Ideally, all end users would have 64-bit browsers and tons of memory, but that's far from reality. On top of that, there is no way to know the specs of the hardware your content is running on: you know the OS and browser, and not much more. Lastly, the end user might be running your WebGL content alongside other web pages. That's why this is a tough problem.

Here is an overview of memory when running Unity WebGL content in the browser. The image shows that, on top of the Unity Heap, Unity WebGL content will require additional allocations in the browser's memory. That's really important to understand, so that you can optimize your project and thereby minimize your users' drop-off rate.

As you can see from the image, there are several groups of allocations: DOM, Unity Heap, Asset Data and Code, which will be persistent in memory once the web page is loaded. Others, like Asset Bundles, WebAudio and Memory FS, will vary depending on what's happening in your content (e.g. asset bundle downloads, audio playback, etc.). At loading time, there are also several temporary browser allocations during asm.js parsing and compilation, which sometimes cause out-of-memory problems for users on 32-bit browsers.

In general, the Unity Heap is the memory containing all Unity-specific game objects, components, textures, shaders, and so on. On WebGL, the size of the Unity Heap needs to be known in advance so that the browser can allocate space for it; once allocated, the buffer cannot shrink or grow. The code responsible for allocating the Unity Heap is the following:

buffer = new ArrayBuffer(TOTAL_MEMORY);

This code can be found in the generated build.js and will be executed by the browser's JS VM. TOTAL_MEMORY is defined by WebGL Memory Size in the Player Settings. The default value is 256mb, but that's just an arbitrary value we chose; in fact, an empty project works with just 16mb. However, real-world content will likely need more, something like 256 or 384mb in most cases. Keep in mind that the more memory is needed, the fewer end users will be able to run your content.

Before the code can be executed, it needs to be:

1. downloaded
2. copied into a text blob
3. compiled

Each of these steps will require a chunk of memory. The download buffer is temporary, but the source code and the compiled code are persistent in memory. The size of the download buffer and the source code are both the size of the uncompressed js generated by Unity.
To estimate how much memory they will need:

1. Make a release build.
2. Rename the .jsgz and .datagz files to *.gz and unpack them with a compression tool.
3. Their uncompressed sizes will also be their sizes in the browser's memory.

The size of the compiled code depends on the browser.

An easy optimization is to enable Strip Engine Code, so that your build will not include native engine code you don't need (e.g. the 2D physics module will be stripped if you don't use it). Note: managed code is always stripped. Keep in mind that exception support and third-party plugins are also going to contribute to your code size. Having said that, we have seen users who need to ship their titles with null checks and array-bounds checks but don't want to incur the memory (and performance) overhead of full exception support. To do that, you can pass --emit-null-checks and --enable-array-bounds-check to il2cpp, for instance via an editor script:

PlayerSettings.SetPropertyString("additionalIl2CppArgs", "--emit-null-checks --enable-array-bounds-check");

Finally, remember that Development builds will produce larger code because it is not minified, though that's not a concern since you are only going to ship release builds to the end user... right? ;-)

On other platforms, an application can simply access files on permanent storage (hard drive, flash memory, etc.). On the web this is not possible, since there is no access to a real file system. Therefore, once the Unity WebGL data (.data file) is downloaded, it is stored in memory. The downside is that this requires additional memory compared to other platforms (as of 5.3, the .data file is stored in memory lz4-compressed). For instance, here is what the profiler reports for a project that generates a ~40mb data file (with a 256mb Unity Heap):

What's in the .data file? It's a collection of files that Unity generates: data.unity3d (all scenes, their dependent assets and everything in the Resources folder), unity_default_resources, and a few smaller files needed by the engine. To know the exact total size of the assets, have a look at data.unity3d in Temp\StagingArea\Data after building for WebGL (remember that the Temp folder is deleted when the Unity editor is closed). Alternatively, you can look at the offsets passed to the DataRequest in UnityLoader.js:

new DataRequest(0, 39065934, 0, 0).open('GET', '/data.unity3d');

(This code might change depending on the Unity version; this is from 5.4.)

Although there is no real file system, as mentioned earlier, your Unity WebGL content can still read and write files. The main difference compared to other platforms is that any file I/O operation will actually read from or write to memory. What's important to know is that this memory file system does not live in the Unity Heap, so it requires additional memory. For instance, let's say I write an array out to a file:

var buffer = new byte[10 * 1024 * 1024];
File.WriteAllBytes(Application.temporaryCachePath + "/buffer.bytes", buffer);

The file will be written to memory, which can also be seen in the browser's profiler (note that the Unity Heap size here is 256mb). Similarly, since Unity's caching system depends on the file system, the whole cache storage is backed by memory. What does that mean? It means that things like PlayerPrefs and cached Asset Bundles will also be persistent in memory, outside of the Unity Heap.

One of the most important best practices for reducing memory consumption on WebGL is to use Asset Bundles (if you are not familiar with them, you can check the manual or this tutorial to get started).
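As a minimal sketch of the basic download-use-unload cycle with the Unity 5.x WWW API (the URL and asset name below are placeholders, and error handling is omitted):

```csharp
using System.Collections;
using UnityEngine;

// Sketch: download an asset bundle, instantiate one asset, then unload the
// bundle so its memory can be reclaimed. Uses the Unity 5.x WWW API.
public class BundleLoader : MonoBehaviour
{
    IEnumerator Start()
    {
        // Placeholder URL; no caching, so nothing is persisted to the memory file system.
        using (var www = new WWW("http://example.com/bundles/props.unity3d"))
        {
            yield return www;
            var bundle = www.assetBundle;
            var prefab = bundle.LoadAsset<GameObject>("Crate"); // placeholder asset name
            Instantiate(prefab);
            // Unload the bundle once we are done with it.
            bundle.Unload(false);
        }
    }
}
```

Unload(false) keeps already-loaded assets alive while releasing the bundle's own memory; passing true would destroy the loaded assets as well.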
However, depending on how they are used, there can be a significant impact on memory consumption (both inside and outside the Unity Heap) that can potentially make your content not work on 32-bit browsers.

Now that you know you really need to use asset bundles, what do you do? Dump all your assets into a single asset bundle? No! Even though that would reduce pressure at web-page loading time, you would still need to download a (potentially very big) asset bundle, causing a memory spike. Looking at memory before the asset bundle is downloaded, 256mb are allocated for the Unity Heap. After downloading an asset bundle without caching, there is an additional buffer, approximately the same size as the bundle on disk (~65mb), which was allocated by XHR. This is just a temporary buffer, but it will cause a memory spike for several frames until it is garbage collected.

What can you do, then, to minimize memory spikes? Create one asset bundle for each asset? Although it's an interesting idea, it's not very practical. The bottom line is that there is no general rule, and you really need to do what makes the most sense for your project. Finally, remember to unload each asset bundle via AssetBundle.Unload when you are done with it.

Asset bundle caching works like it does on other platforms: you just need to use WWW.LoadFromCacheOrDownload. There is one pretty significant difference, though, which is memory consumption. On Unity WebGL, asset bundle caching relies on IndexedDB for storing data persistently; the problem is that the entries in the DB also exist in the memory file system.

In a memory capture taken before downloading an asset bundle using LoadFromCacheOrDownload, 512mb are used for the Unity Heap and ~4mb for other allocations. After loading the bundle, the additional required memory jumps to ~167mb. That's the additional memory needed for this asset bundle (a ~64mb compressed bundle). After JS VM garbage collection it's a bit better, but ~85mb are still required: most of it is used to cache the asset bundle in the memory file system. That's memory you are not going to get back, not even after unloading the bundle. It's also important to remember that when the user opens your content in the browser a second time, that chunk of memory is allocated right away, even before loading the bundle. For reference, see the memory snapshot from Chrome.

Similarly, there is another caching-related temporary allocation outside of the Unity Heap that is needed by our asset bundle system. The bad news is that we recently found it is much larger than intended. The good news is that this is fixed in the upcoming Unity 5.5 Beta 4, 5.3.6 Patch 6 and 5.4.1 Patch 2. For older versions of Unity, in case your Unity WebGL content is already live or close to release and you don't want to upgrade your project, a quick workaround is to set the following property via an editor script:

PlayerSettings.SetPropertyString("emscriptenArgs", " -s MEMFS_APPEND_TO_TYPED_ARRAYS=1", BuildTargetGroup.WebGL);

A longer-term solution to minimize the asset bundle caching memory overhead is to use the WWW constructor instead of LoadFromCacheOrDownload(), or UnityWebRequest.GetAssetBundle() with no hash/version parameter if you are using the new UnityWebRequest API. Then use an alternative caching mechanism at the XMLHttpRequest level that stores the downloaded file directly into IndexedDB, bypassing the memory file system. This is exactly what we developed recently, and it is available on the Asset Store. Feel free to use it in your projects and customize it as needed.

In 5.3 and 5.4, both LZMA and LZ4 compression are supported for asset bundles. However, even though LZMA (the default) results in a smaller download size than LZ4/uncompressed, it has a couple of downsides on WebGL: it causes noticeable execution stalls and it requires more memory.
Therefore, we strongly recommend using LZ4 or no compression at all (in fact, LZMA asset bundle compression will not be available for WebGL as of Unity 5.5). To compensate for the larger download size compared to LZMA, you may want to gzip/brotli your asset bundles and configure your server accordingly. See the manual for more information about asset bundle compression.

Audio on Unity WebGL is implemented differently. What does that mean for memory? Unity creates AudioBuffer objects on the JavaScript side so that clips can be played back via WebAudio. Since WebAudio buffers live outside the Unity Heap and therefore cannot be tracked by the Unity profiler, you need to inspect memory with browser-specific tools to see how much memory is used for audio. Here's an example (using Firefox's about:memory page):

Take into consideration that these audio buffers hold uncompressed data, which might not be ideal for large audio clip assets (e.g. background music). For those, you may want to consider writing your own JS plugin so that you can use audio tags instead; this way the audio files remain compressed and therefore use less memory.

Here is a summary of the levers available:

- Reduce the size of the Unity Heap.
- Reduce your code size.
- Reduce your data size.

The best strategy for sizing the heap is to use the memory profiler to analyse how much memory your content actually requires, then change WebGL Memory Size accordingly. Take an empty project as an example: the Memory Profiler tells me that "Total Used" amounts to just over 16mb (this value might differ between releases of Unity), which means I must set WebGL Memory Size to something bigger than that. Obviously, "Total Used" will differ based on your content. If for some reason you cannot use the profiler, you can simply keep reducing the WebGL Memory Size value until you find the minimum amount of memory required to run your content. It's also important to note that any value that is not a multiple of 16 will automatically be rounded up (at run-time) to the next multiple, as this is an Emscripten requirement.

The WebGL Memory Size (mb) setting determines the value of TOTAL_MEMORY (bytes) in the generated html. So, to iterate on the size of the heap without re-building the project, it is recommended to modify the html directly; then, once you have found a value you are happy with, change WebGL Memory Size in the Unity project. Thankfully this is not the only way, and the next blog post on the Unity Heap will try to provide a better answer to this question. Finally, remember that Unity's profiler will itself use some memory from the allocated heap, so you might need to increase WebGL Memory Size when profiling.

What should you do when you run out of memory? It depends on whether it's Unity running out of memory or the browser. The error message will indicate what the problem is and how to solve it: "If you are the developer of this content, try allocating more/less memory to your WebGL build in the WebGL player settings." In that case, you can adjust the WebGL Memory Size setting accordingly. However, there's more you can do to solve the OOM: in addition to what the message says, you can also try to reduce the size of your code and/or data. That's because when the browser loads the web page, it tries to find free memory for several things, most importantly code, data, the Unity Heap and the compiled asm.js. These can be quite large, especially the data and the Unity Heap, which can be a problem on 32-bit browsers. In some instances, even though there is enough free memory, the browser will still fail because the memory is fragmented.
That's why, sometimes, your content might succeed in loading after you restart the browser. The other scenario, where Unity itself runs out of memory, will prompt a different message; in this case you need to optimize your Unity project.

To analyze the browser memory used by your content, you can use the Firefox Memory tool or a Chrome heap snapshot. Be aware that they will not show you WebAudio memory; for that, you can use the about:memory page in Firefox: take a snapshot, then search for "webaudio". If you need to profile memory via JavaScript, try window.performance.memory (Chrome-only). To measure memory usage inside the Unity Heap, use the Unity Profiler; be aware that you might need to increase WebGL Memory Size in order to be able to use it.

In addition, there is a new tool we have been working on that allows you to analyze what's in your build: to use it, make a WebGL build, then visit https://files.unity3d.com/build-report/. Although this is available as of Unity 5.4, note that this functionality is a work in progress and subject to change or removal at any time; we are making it available for testing purposes for now.

For WebGL Memory Size, 16mb is the minimum. The maximum is 2032mb; however, we generally advise staying below 512mb. The maximum is a technical limitation: 2048mb (or more) would overflow the 32-bit signed integer size of the TypedArray used to implement the Unity Heap in JavaScript.

We have been considering using the ALLOW_MEMORY_GROWTH Emscripten flag to allow the heap to be resized, but so far we have decided not to, because doing so would disable some optimizations in Chrome. We have yet to do real benchmarking on the impact of this, and we expect that using it might actually make memory issues worse. If you reach the point where the Unity Heap is too small to fit all the required memory and needs to grow, the browser has to allocate a bigger heap, copy everything over from the old heap, and then deallocate the old heap. By doing so, it needs memory for both the new and the old heap at the same time (until it has finished copying), thus requiring more total memory. So memory usage would be higher than with a predetermined fixed memory size.

Note that 32-bit browsers will run into the same memory limitations regardless of whether the OS is 64- or 32-bit.

The final recommendation is to profile your Unity WebGL content using browser-specific tools as well, because, as described above, there are allocations outside of the Unity Heap that Unity's profiler cannot track. Hopefully some of this information will be useful to you. If you have further questions, please don't hesitate to ask them here or in the WebGL forum.

Update: We talked about the memory used for code, and we mentioned that the source JS code is copied into a temporary text blob. What we discovered is that the blob was not properly deallocated, so effectively it was a permanent allocation in browser memory. In about:memory it's labelled as memory-file-data. Its size depends on the code size, and for complex projects it can easily be 32 or 64mb. Thankfully, this has been fixed in 5.3.6 Patch 8, 5.4.2 Patch 1 and 5.5.

In terms of audio, we know that memory consumption is still a problem: audio streaming is not currently supported, and audio assets are kept in browser memory uncompressed. This is why we suggested using the audio tag to play back large audio files. For this purpose, we recently published a new Asset Store package to help you minimize memory consumption by streaming audio sources. Check it out!
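As a practical follow-up to the heap sizing discussion: WebGL Memory Size can be set from an editor script as well as in the Player Settings. A sketch, assuming Unity 5.3+ where `PlayerSettings.WebGL.memorySize` is exposed in megabytes; the 384mb value and menu path are illustrative, not recommendations:

```csharp
using UnityEditor;

// Editor-only sketch: set the WebGL heap size programmatically. Values that
// are not multiples of 16 are rounded up at run-time anyway, so we round
// here for clarity.
public static class WebGLHeapSize
{
    [MenuItem("Tools/Set WebGL Memory Size")]
    static void Apply()
    {
        int desiredMb = 384; // illustrative value; profile your own content
        int rounded = ((desiredMb + 15) / 16) * 16; // round up to a multiple of 16
        PlayerSettings.WebGL.memorySize = rounded;
    }
}
```

This is convenient for build pipelines that produce several WebGL variants with different heap budgets from the same project.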

>access_file_
1633|blog.unity.com

How offerwall makes high-engagement ads cost effective

It’s notoriously difficult to balance quality and scale in user acquisition campaigns, since increasing scale can mean risking quality. The key to minimizing this risk is to buy app advertising inventory according to different pricing models, such as CPE.

What is cost per engagement (CPE)?

CPE, or cost per engagement, is a pricing model used in mobile campaigns in which advertisers choose a post-install event to measure and only pay for the users who complete that specific app event. In these campaigns, advertisers can set the event to whatever they’d like, such as completing registration for an app or reaching Level 2 in a game. The trick is to use your app analytics to set the engagement at a point where users will have become hooked on your app. If a user doesn’t complete the post-install event that your analytics say will guarantee a certain LTV, the advertiser simply doesn’t pay. This means that positive ROI is almost assured.

With CPE, performance advertisers only need to pay for users they deem “high quality.” After all, a user who took the time to complete a registration or reach a certain level is more likely to have a higher LTV than a user who tapped ‘install’ and never looked back.

The problem is scale. In contrast to CPE, rewarded CPI campaigns (using an offerwall ad unit, for example) are often used by advertisers looking to get as many installs as possible. In instances like this, advertisers are satisfied with getting lots of (possibly low-quality) installs as long as the rush of installs pushes their app to Number 1. These batch users are low quality because advertisers typically set the threshold to be acquired very low - just a quick tap to install. (Addicted gamers might install any app presented to them as long as it got them those extra few coins. More on this later.)

CPE, on the other hand, prioritizes quality - sometimes at the expense of scale.
Since the threshold for payment is higher (i.e., getting to Level 2), only some users will pass through. To pick up the slack, it’s best to combine the CPE pricing model with an ad unit known for its ability to acquire users at scale, such as an offerwall.

What are offerwall ads?

An offerwall is a type of app advertisement that gives users in-app rewards or incentives in exchange for completing an action, such as installing an app listed on the ‘wall’. It is user-initiated, meaning users tap a button to view the offerwall and choose whether or not they want to engage with it. Because installs from offerwalls are rewarded, the acquired users run the risk of being low in quality. But this can actually be used to an app marketer’s advantage, since it means offerwall inventory is more competitively priced than that of other ad units. The goal with rewarded ad units like the offerwall is to find the right price for those lower-quality users to make the campaign sustainable. Offerwalls can be an ideal fit for advertisers looking for cost-effective scale.

Combining CPE and offerwalls to your advantage

Since CPE is high-quality and low-scale, and offerwalls are often low-quality and high-scale, buying offerwall inventory on a CPE basis is like putting two pieces of a puzzle together: it just fits. Together, you have a combination that drives high-quality, high-scale campaigns - which, critically, are cost effective.

The biggest benefit of CPE is that it pushes users to “try” the product being advertised. Because organic discovery is so difficult, users who wouldn’t otherwise have come across the app on the App Store might never have known there was an app out there that they enjoyed or needed.

For example, let’s say you purchase offerwall inventory and set the engagement event to “users who reach Level 2.” Since it’s rewarded, you’re likely to see a large number of app installs.
But since it’s CPE, you’re not paying for everyone who taps “install” -- just the users who first install your app from the offerwall and then continue playing long enough to reach Level 2.

How to set your CPE engagement event

It’s recommended that advertisers set the engagement event to an event that happens just before the “tipping point.” The tipping point is the point in the game that is most likely to bring about payout, scale, and engagement. In other words, it’s the point in the app at which users begin spending enough money or watching enough ads to generate a high LTV. The ratio between CPE conversions and how many users reach the tipping point should be your main optimization methodology. In this case, Level 3 is the tipping point - which is why the engagement event was Level 2.

Bonus: Adding a CPA campaign objective as a failsafe

Your campaign optimization should always take into account the impact rewarded ads can have on behavior. You may see a higher drop-off with rewarded CPE, since users might reach the engagement event, collect their reward, and leave. Therefore, it’s wise to pair CPE with another, deeper campaign objective, like a CPA, ROAS, or specific retention goal. If the CPE event is Level 2, then set a CPA target for Level 5. The second target acts as a failsafe and will balance out the low-quality users who only completed the initial engagement event because it was rewarded.

If, after calculating LTV at various stages throughout the game, advertisers see that users who reach Level 5 will go on to generate $40, then paying $30 for that user in a CPA campaign is worth it. In the end, they’re guaranteed a $10 profit. The next step is to set the CPE to Level 2 while optimizing towards a $30 eCPA for Level 5.

Advertisers must be strategic when it comes to managing their user acquisition campaigns.
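The back-of-the-envelope ROI above can be written down explicitly. A tiny illustrative sketch (the figures are from the example; the helper names are ours, not any ironSource API):

```cpp
// Expected profit per acquired user under a CPA target: worth bidding
// only while the projected LTV exceeds what you pay per conversion.
double ExpectedProfit(double projected_ltv, double cpa_bid) {
    return projected_ltv - cpa_bid;
}

bool CpaBidWorthIt(double projected_ltv, double cpa_bid) {
    return ExpectedProfit(projected_ltv, cpa_bid) > 0.0;
}
```

With the example numbers, ExpectedProfit(40.0, 30.0) gives the $10 per-user margin that makes the Level 5 CPA failsafe sustainable.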
It’s important to carefully consider which ad units match which pricing models - offerwalls and CPE are just one of many successful marriages. If advertisers continue thinking prudently, they’ll never have to make the Sophie’s choice between quality and scale again.

Click here to learn how ironSource can help you find users that matter

>access_file_
1634|blog.unity.com

List View Framework

In order to implement project Carte Blanche’s card system, we developed an extensible framework for creating dynamic, scrollable lists of objects. This article discusses the structure and contents of this framework. The code and sample scenes are available as a Unity Asset Store package and a public repository for the developer community.

Project Carte Blanche (PCB) is Unity Labs’ research initiative on VR-in-VR authoring tools for non-technical users. As illustrated in the concept video, its user interface is based on a playing-card metaphor. Central to the design of Carte Blanche is the idea that objects and actions are represented by virtual cards which users grab and place on a virtual table. The user physically interacts with the cards using tracked motion controllers.

PCB’s cards are a more complex version of conventional scrollable lists. List views are common widgets provided in many GUI toolkits. Unity’s UI system includes several layout components and customizable controls for making dynamic lists that are scrollable and have hover states. A GridLayout or VerticalLayout will get most of the work done, and there are even a handful of packages on the Asset Store just for lists. However, the existing solutions we found required the lists to live inside a canvas and exist within the UI system. PCB requires cards to animate in and out of existence and to let the user touch them. Furthermore, performance in VR is critical. One common drawback of “classic” UI list views is that the full list is represented by scene objects which are only masked. If the list structure changes, or perhaps an item is expanded to take up more space, the system must re-evaluate all of the items in the list. We also want to avoid instantiating and destroying scene objects where possible, because this can be costly.
Finally, for reusability and consistency of look and feel, we need an extensible solution that allows us to create similar behavior in other UI elements. We developed a general framework for creating list views, which serves as the foundation of PCB’s card system. Since there are also broad use cases for such a framework, we decided to release it as an Asset Store package for the community.

One goal of this framework is to follow the MVC and MVVM design patterns, decoupling the logic which displays the underlying data (the view) from the state of the data itself (the model). In any given frame, the framework should automatically handle displaying the current state of the list. This way we only need to take into account the current state of the data, without worrying about how to trigger updates to the view. Likewise, we don’t need to worry about synchronization issues when user actions are performed quickly or frames take too long to render. As an added bonus, the CPU overhead for the list stays relatively consistent, since it is doing the same amount of work all the time. Performance will not degrade significantly when the list size increases or states change in unexpected ways.

The idea of representing a list this way is borrowed from the Android and iOS UI frameworks. The two use slightly different implementations, but essentially take the same approach. A container element controls the position of a number of child elements, which are pooled in memory to avoid the cost of allocating and freeing them each time they are scrolled on or off screen. Separately, an interface is defined for how the view gets the information for each list item from the data source. The developer writes code for the data source and designs the list items, and the framework takes care of the rest. Both SDKs provide helpful guides on how to implement a list following this design pattern of pooling rows and establishing an interface with a data source.
iOS calls theirs a UITableView, and Android simply calls it a ListView, with an associated ListAdapter which talks to the data source. Both implementations set up a framework for laying out scrollable UI elements, and allow developers to define custom functions for determining what goes on those elements based on rows in a database, lines in a text file, or any data source they desire. The view needs to know the total number of elements in the data set, as well as a method for getting the information to display, such as a movie title and rating. Generally there is a default template for just displaying text, but developers can also customize the design of each list row with custom UI layouts. The framework itself takes care of allocating memory for these list rows, and of re-using them after they are scrolled offscreen to display upcoming list rows. If each row uses the same template, the system should never have to allocate more than one full screen’s worth of rows, plus one additional row if the list can scroll smoothly. That last row exists so the first and last rows can be displayed only part-way on screen. We can think of it as extra “bleed” space at each end of the list.

The List View Framework is available as an Asset Store package, as well as an open-source git repository on Unity’s BitBucket account. We hope that this package, and others that we publish in the future, will live on to be used, improved, and re-used by the community. Feel free to fork this repository into your own project with its own fixes or enhancements. The code is released under the MIT/X11 license, which basically means that you can do whatever you want with it as long as you keep the disclaimer. In the future, we at Unity want to release more of our original content as modular open-source packages that can be used and improved by the community.
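The container-plus-pool pattern described above can be modeled in a few lines. Here is an illustrative, toolkit-agnostic sketch in C++ (all names are ours, not the List View Framework’s actual API): a fixed pool of visible rows plus one “bleed” row is rebound, rather than reallocated, as a window scrolls over a larger data set.

```cpp
#include <string>
#include <vector>

// The data source: the view only needs a count and a way to fetch one item.
struct DataSource {
    std::vector<std::string> rows;
    size_t Count() const { return rows.size(); }
    const std::string& Get(size_t i) const { return rows[i]; }
};

// A pooled "view" object, recycled instead of allocated per row.
struct ItemView {
    std::string text;  // what this slot currently displays
};

// The controller owns visible_rows + 1 pooled views (the extra "bleed" row
// lets the ends scroll partially on screen) and rebinds them from the data
// source whenever the scroll offset changes. No allocation per scroll step.
class ListView {
public:
    ListView(const DataSource& data, size_t visible_rows)
        : data_(data), pool_(visible_rows + 1) {}

    void ScrollTo(size_t first_row) {
        first_row_ = first_row;
        for (size_t i = 0; i < pool_.size(); ++i) {
            size_t row = first_row_ + i;
            pool_[i].text = row < data_.Count() ? data_.Get(row) : "";
        }
    }

    size_t PoolSize() const { return pool_.size(); }
    const std::string& TextAt(size_t slot) const { return pool_[slot].text; }

private:
    const DataSource& data_;
    std::vector<ItemView> pool_;
    size_t first_row_ = 0;
};
```

However long the data set grows, the memory the view holds stays proportional to the number of visible rows, which is the property the framework borrows from UITableView and ListView/ListAdapter.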
This is the first of many modules from Project Carte Blanche which will be released in this way.

The framework boils down to three C# classes: ListViewController (split in two), ListViewItem, and ListViewItemData. The reason for splitting ListViewController in two is so that we can access properties which are not dependent on the data type without knowing which implementation of the list we are using. In this way, InputHandler scripts can scroll any kind of list, regardless of what class actually implements ListViewController. The framework includes a very basic implementation of ListViewController which accepts items with no data (other than a template) and which can be set up in the Inspector. The simplest possible list (Example 0) uses all of the base classes and just allows the user to scroll a set of objects based on template prefabs and a data array set up in the Inspector. To display meaningful data, users are expected to extend ListViewItemData and ListViewController into classes that describe their particular data, as illustrated in the further examples.

The ListViewInputHandler class sets up a base for scrolling or click behavior based on, for example, mouse input. And of course the ListViewScroller subclass sets up some useful patterns for scrolling behavior. Mouse and touch input are easy to handle at the same time, but these classes could conceivably handle gamepad, UI, gestural, or VR-device input as well. In the case of PCB, the list views are manipulated via hand-tracked motion controllers.

We hope that this article helps to explain the framework and its examples, and how to get started including list views in your next project. Consult the wiki for further reading and an in-depth description of the core classes and examples. Even if you don’t end up using any of this code directly, the concept of decoupling model and view code is a powerful one, and leads to more efficient code that is easier to maintain as your project grows.
Game systems often benefit from stateless designs that make very few assumptions and constantly re-evaluate as much available information as possible, within reason. Our goal was to come up with a system for creating a scrollable list that is highly customizable and performs well given a potentially infinite set of data. We now have a robust toolkit of features to support asynchronous caching, non-uniform template sizes, nested data, and complex animation behavior. As should be clear from the diversity of the example code, one size does not fit all, and every implementation will have its own caveats, many of which have not been covered here.

The intent with this package is that it becomes property of the community. It was created at Unity, but the source is publicly available on BitBucket, and we encourage users to fork the repo and share their improvements. We can’t wait to see what you come up with! We’ll leave you with some eye candy.

Cover image: Timoni West, Unity Labs Principal Designer. Article images: Dennis DeRyke, Unity Graphics Software Development Engineer in Test. Matt Schoen & Dio Gonzalez work at Unity Labs; Schoen is a Senior Software Engineer and Dio is a VR Principal Engineer.

>access_file_
1636|blog.unity.com

IL2CPP optimizations: Devirtualization

The scripting virtual machine team at Unity is always looking for ways to make your code run faster. This is the first post in a three-part miniseries about a few micro-optimizations performed by the IL2CPP AOT compiler, and how you can take advantage of them. While nothing here will make code run two or three times as fast, these small optimizations can help in important parts of a game, and we hope they give you some insight into how your code is executing.

There is no other way to say it: virtual method calls are always more expensive than direct method calls. We’ve been working on some performance improvements in the libil2cpp runtime library to cut back the overhead of virtual method calls (more on this in the next post), but they still require a runtime lookup of some sort. The compiler cannot know which method will be called at run time - or can it?

Devirtualization is a common compiler optimization which changes a virtual method call into a direct method call. A compiler might apply it when it can prove exactly which actual method will be called at compile time. Unfortunately, this fact can often be difficult to prove, as the compiler does not always see the entire code base. But when it is possible, it can make virtual method calls much faster.

As a young developer, I learned about virtual methods with a rather contrived animal example; this code might be familiar to you as well. Then, in Unity (version 5.3.5), we can use these classes to make a small farm. Here each call to Speak is a virtual method call. Let’s see if we can convince IL2CPP to devirtualize any of these method calls to improve their performance.

One of the features of IL2CPP I like is that it generates C++ code instead of assembly code. Sure, this code doesn’t look like C++ code you would write by hand, but it is much easier to understand than assembly. Let’s look at the generated code for the body of that foreach loop. I’ve removed a bit of the generated code to simplify things.
See that ugly call to Invoke? It is going to look up the proper virtual method in the vtable and then call it. This vtable lookup will be slower than a direct function call, but that is understandable: the Animal could be a Cow or a Pig, or some other derived type.

Let’s look at the generated code for the second call to Debug.LogFormat, which is more like a direct method call. Even in this case we are still making the virtual method call! IL2CPP is pretty conservative with optimizations, preferring to ensure correctness in most cases. Since it does not do enough whole-program analysis to be sure that this can be a direct call, it opts for the safer (and slower) virtual method call.

Suppose we know that there are no other types of cows on our farm, so no type will ever derive from Cow. If we make this knowledge explicit to the compiler, we can get a better result by changing the class definition. The sealed keyword tells the compiler that no one can derive from Cow (sealed could also be used directly on the Speak method). Now IL2CPP will have the confidence to make a direct method call. The call to Speak will no longer be unnecessarily slow, since we’ve been explicit with the compiler and allowed it to optimize with confidence.

This kind of optimization won’t make your game incredibly faster, but it is good practice to express any assumptions you have about the code in the code, both for future human readers and for compilers. If you are compiling with IL2CPP, I encourage you to peruse the generated C++ code in your project and see what else you might find!

Next time we’ll discuss why virtual method calls are expensive, and what we are doing to make them faster.
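The post’s C# snippets did not survive in this copy, but the same idea can be sketched directly in C++, the language IL2CPP emits. This is our own illustration (not the post’s original code or its generated output), with final playing the role of C#’s sealed:

```cpp
#include <string>

struct Animal {
    virtual ~Animal() = default;
    virtual std::string Speak() const { return "..."; }
};

struct Pig : Animal {
    std::string Speak() const override { return "Oink"; }
};

// 'final' is C++'s analogue of C#'s 'sealed': nothing can derive from Cow,
// so a compiler that sees a call on a value statically typed as Cow can
// replace the vtable lookup with a direct call (devirtualization).
struct Cow final : Animal {
    std::string Speak() const override { return "Moo"; }
};
```

A call through an Animal reference still needs the vtable, since the target might be a Pig; a call on a variable statically typed as Cow can be dispatched directly, which is exactly what sealing Cow lets IL2CPP do in its generated C++.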

>access_file_
1637|blog.unity.com

3 Key Pokemon Go Takeaways for App Developers

I’ll admit it -- I proudly collected and traded Pokemon cards as a child, bought and played nearly every Game Boy game, slept with a Pikachu doll tucked under my arm, and watched Ash Ketchum foil Team Rocket weekly. That was 1998. But it’s 2016, and Pokemon fever is back.

In early July, Nintendo released Pokemon Go, an augmented reality mobile game that lets us nineties kids relive our childhood, allowing us to capture, battle, and train virtual Pokemon. Nintendo and Niantic, the company that developed Pokemon Go after branching off from Google, successfully created the fastest-growing game ever to top the mobile revenue charts, outpacing even Clash Royale. It has become an international phenomenon in just two weeks.

At ironSource, our goal is to help apps succeed. Naturally, we were curious about Pokemon Go’s success and decided to explore further, discovering some key takeaways for app developers looking to translate the success of Pokemon Go into their own applications.

Going viral through IRL social

If you’ve read any post about the secrets to going viral, you probably know that implementing social interaction functionality tops the list. Usually this includes adding share buttons within the app so that users can easily share their actions with friends on social media -- the point being that the shares will multiply and the application will spread through social media like a ‘virus.’

In the case of Pokemon Go, however, this sort of in-app social media interaction is unavailable. (Though it’d be great to have an in-app screenshot button that auto-posts to Facebook, for the next time you see a Pidgey squatting in your bathtub.) Rather, Niantic takes the social graph to a completely new level - that is, to the streets IRL. In some ways, this is the ultimate form of social virality: to see someone nearby, walking seemingly aimlessly with their head buried in their smartphone, look up and share with you (literally) that he’s looking for an Ivysaur.
In this way, Pokemon Go is able to seamlessly marry the online world with the real world, and also to use that real world to drive online virality. It doesn’t mean, however, that applications need to be augmented reality in order to experience similar real-world virality. Instead, the lesson is that mobile apps must encourage conversation among users -- not instead of, but in addition to, simple shares and user-generated content -- in order to truly go viral. Think of the success of Yo (1.2M MAU in 2014), Venmo (179M MAU), Draw Something (24M DAU in 2012), and Words with Friends (5.6M DAU in 2011), or even GoChat, the third-party Pokemon Go chat extension. All these apps use cross-user engagement to increase public awareness and virality.

Long sessions + timed incentives = high engagement

It’s often the case that apps that go viral have high engagement rates, but few have seen the incredible engagement rates Pokemon Go is boasting in its first week. In a study by SimilarWeb, analysts found that 60% of users who downloaded Pokemon Go in the US are using it daily. This means that, as of now, there are just as many daily active users on Pokemon Go as there are on Twitter, and even more than there are on Tinder. If that doesn’t impress you, perhaps this will: as of July 8, users played Pokemon Go for an average of 43 minutes per day -- higher than WhatsApp (30), Instagram (25), Snapchat (23), and Facebook Messenger (13).

It might be easy to attribute this high engagement to the popularity of the franchise, but because all age groups are equally addicted, it seems clear that the app offers more than just nostalgia.
In fact, you could argue that Pokemon Go has perfected the art of maintaining high engagement rates. Specifically, the mobile game includes four great engagement features: endless session times, lots of early rewards, various currency layers, and just enough incentive to keep the user in the game.

The session time in Pokemon Go is essentially endless, unconstrained by lost lives or a set number of levels. This means that unless something calls them away from their mobile device, the app is doing all it can to keep players within the game. In addition, it offers just the right amount of rewards and incentives at the most opportune times: a lot in the beginning, and then less and less as users continue to play. This works to get players excited, grab their attention and get them in the door in the first instance, and then keeps them engaged throughout by building on previous rewards, such as offering power-ups just in time for your Charmander to evolve. Because they received multiple rewards in the beginning, users will not only be itching to receive more - they’ll trust that they will, since they know it’s possible.

Location-based monetization is the future

If you’ve played Pokemon Go, you know just how critical PokeStops and Gyms are (if not: PokeStops are where you can get free items, and Gyms are where you can train and battle Pokemon). In order to use them, you have to be in close physical proximity to these specific ‘places’. Since the app’s launch, there have been dozens of articles illustrating how small businesses located near a PokeStop or Gym have profited from the immense amount of foot traffic. Niantic, having recognized how profitable integrating the real world with the digital world is, has said that it may soon add “sponsored locations,” where companies and brands would pay to become a PokeStop or Gym.
It’s not difficult to imagine users arriving at a specific location and seeing a location-based ad pop up on their screen. The key to monetization in this case is marrying the digital and real worlds through location-driven experiences. In the near future, apps will likely begin using their users’ localized surroundings to drive revenue, much like Snapchat and its geotags.

Of course, in addition to this, Pokemon Go excels because it monetizes everything. If you can’t wait and want to evolve your Pokemon ASAP, there’s an IAP for that. If you’re short on Poke Balls and can’t find any, there’s an IAP for that. In doing so, Pokemon Go ensures that no monetization opportunity falls through the cracks.

It’s been a couple of weeks and Pokemon Go shows no signs of slowing down. Even if Pokemon mania begins to wane, the key takeaways we learned from its success will continue to stand. It will be interesting to see how other apps attempt to mimic the virality, engagement, and monetization strategies of Pokemon Go in the future.

>access_file_
1638|blog.unity.com

‘Wait, I’ve changed my mind’ – State Machine Transition interruptions

So let’s dive into some intricate details of State Machine Transitions and interruptions!

By default in the animation system, transitions cannot be interrupted: once you start going from one state to the other, there’s no way out. Like a passenger on a transatlantic flight, you’re cozily nestled in your seat until you reach your destination, and you can’t change your mind. For most users, this is fine. But if you need more control over transitions, Mecanim can be configured in a variety of ways to meet your needs. If you’re unhappy with your current destination, you can hop into the pilot’s seat and change plans midway through your flight. This means more responsive animations, but also many opportunities to get lost in the complexity. So let’s walk through a few examples to sort that out.

We can begin with a fairly simple state machine with four states, labeled A to D, and triggers hooked to every transition in the state machine. By default, when we trigger the A->B transition, our state machine transitions towards B and nothing can keep it from reaching its destination. But if we go to the A->B transition’s inspector and change the Interruption Source from “None” to “Current State”, our journey from A to B can be interrupted by some triggers on state A.

Why only “some”? Because the “Ordered Interruption” checkbox is also checked by default. This means only transitions on state A that have a higher priority than the current one are allowed. Looking at the inspector of state A, we can see that this only applies to the A->C transition. So if we activate the A->B trigger, then shortly after the A->D trigger, our transition remains uninterrupted. However, if we press the A->C trigger instead, the transition is immediately interrupted and the state machine starts transitioning towards C. Internally, the animation system records the pose at the time of the interruption, and will now blend between that static pose (X) and the new destination animation.
Why a static pose, instead of a possibly smoother blend between the current and new transitions? Simply put: performance. When a game faces a cascade of interruptions, keeping track of several dynamic transitions taking place simultaneously would quickly make the animation system unscalable.

Now, if we uncheck that “Ordered Interruption” checkbox, then both A->C and A->D can interrupt the transition. However, if they are both triggered on the same frame, A->C will still take precedence because it has a higher priority.

If we change the Interruption Source to “Next State”, A->C and A->D can no longer interrupt the transition, regardless of their order. However, if we press the B->D trigger, we will immediately start transitioning from A to D, without completing the transition towards B. Transition order matters on state B too. The “Ordered Interruption” checkbox is no longer available (any triggered transition on B can interrupt the transition, because they have no priority ranking relative to A->B), but the order of the transitions on B will determine which transition wins if both are triggered within the same frame. In this case, if B->D and B->C are triggered in the same frame, B->D will be selected.

Finally, for complete control, we can set the Interruption Source to “Current State Then Next State” or “Next State Then Current State”. In that case, the transitions will be analyzed independently on one state, then the other. So, let’s assume we have the following configuration. During the A->B transition, a very excited player triggers four transitions within the same frame: A->C, A->D, B->C and B->D. What happens? First, “Ordered Interruption” is checked, so we can ignore A->D right away: it has lower priority than A->B. The current state gets resolved first, so we do not even have to look at state B to know that transition A->C wins.
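Our reading of these rules can be modeled as a small resolution function. This is an illustrative C++ sketch of the walkthrough above, not Unity’s actual implementation, assuming “Current State Then Next State” with Ordered Interruption enabled:

```cpp
#include <optional>
#include <string>
#include <vector>

// One candidate interrupting transition: where it lives (current or next
// state) and its position in that state's transition list (0 = highest).
struct Transition {
    std::string name;
    bool on_current_state;
    int priority;
};

// Resolve which triggered transition (if any) interrupts an in-progress
// one. `active_priority` is the priority of the transition being
// interrupted within the current state's list.
std::optional<std::string> Resolve(const std::vector<Transition>& triggered,
                                   int active_priority) {
    const Transition* best = nullptr;
    // Pass 1: current-state transitions; Ordered Interruption means only
    // those ranked above the active transition may interrupt it.
    for (const auto& t : triggered)
        if (t.on_current_state && t.priority < active_priority)
            if (!best || t.priority < best->priority) best = &t;
    if (best) return best->name;
    // Pass 2: next-state transitions, ordered only among themselves.
    for (const auto& t : triggered)
        if (!t.on_current_state)
            if (!best || t.priority < best->priority) best = &t;
    if (best) return best->name;
    return std::nullopt;  // nothing may interrupt: the transition completes
}
```

With A->B active at priority 1 and the four triggers from the example (A->C at priority 0, A->D at priority 2, plus B->C and B->D), pass 1 already settles the matter in favor of A->C, matching the walkthrough: state B is never consulted.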

>access_file_
1639|blog.unity.com

Adam: Production design for the real-time short film

We are launching a series of articles about the making of our latest demo, Adam. Over the course of the next several weeks, we will cover various aspects of production: concept art, asset production, in-engine setup, the animation pipeline, VFX, and custom features, as well as the tools we created for the project. We also plan to release various character and environment projects - basically all the material we can that has no dependencies on Unity’s upcoming (but currently not publicly available) sequencer tool. In addition to publishing a series of making-of blog posts, we’re working hard to prepare as much as possible from this project for release. The plan is to make our assets and custom tech available this autumn.

My name is Georgi Simeonov and I was responsible for a lot of the art direction and production design on the project. I previously worked as a concept artist on games like Brink, Batman: Arkham Origins, and Dirty Bomb, as well as designing Volund for The Blacksmith demo.

The film is set in a future where human society is transformed by harsh biological realities, and civilization has shrunk to a few scattered, encapsulated communities clinging to the memory of greatness. Adam, as our main character, was the starting point of our visual design process. He was designed to provide a glimpse into the complex backstory of the world by revealing himself as a human prisoner whose consciousness has been trapped in a cheap mechanical body.

One of the early ideas that stuck was that the mechanical body, while being functional, should still resemble and reference the human anatomy and organs in multiple ways, being in a way a mock écorché - a twisted, stripped-down copy of the real thing.
This broad idea gave direction to multiple smaller details and decisions, both functional and decorative: the cranium as a steel box reminiscent of a human brain, covered by geometric machine cuts that create a pattern resembling the brain’s folds; the partly exposed ribcage and spine, resembling a patient or a corpse awoken mid-surgery or mid-autopsy.

Another key concept for the design was reducing the convicts to walking records of their crimes, manifested in the chest monitor or tablet perpetually showing their sentence. For most of the production the sentence included the convict’s original mugshot, as an additional reference to their human past and to provide a stark contrast with the near-blank, expressionless masks. In the end, having a human face on the chest proved too distracting, so we went with the simpler design.

To continue the theme of a rough, inferior copy, the face was created as a death mask, from a hastily scanned 3D print of the original. The concept included a mouth for quite a while, but a frozen expression, no matter whether it was neutral or not, proved distracting from the eyes’ expressiveness. In the end, removing the mouth completely not only helped draw the focus to what mattered, but also helped emphasize the convicts’ oppression.

We needed the characters partially dressed in orange, a color evocative of prison uniforms. But since mechanical bodies don’t exactly need clothes, and painting them risked pushing them far into the realm of maintenance or utility robots, our way of hinting at clothing was through shrink-wrap packaging - such as might be used for mechanical elements before they are unwrapped in a factory, mass-production setting.

Our two other key characters were meant to work as a pair from the start. We needed Sebastian to look wise and majestic but also fearsome, though in a non-combative way.
The Lieutenant, whom we came to call Lu, was to be his right hand, near-equal, and counterpoint: the person of action and the wielder of aggression when needed.

In pursuing the design of the two strangers, we blended eastern and western archetypal savior influences - designs that serve their specific function and yet convey universal ideas and narratives beyond their few minutes of screen time.

One of the main goals with Sebastian’s design was to present the two travelers as the embodiment of a new culture that has developed outside, and to showcase the effort of its people to preserve their humanity and regain the ability to express their individuality.

We wanted Sebastian to look ancient - possibly one of the first, or even the first, to be cast out. At the same time, he carries the idea of a new beginning, of rebirth and regeneration. We mixed tribal elements, like the sprouting-seedling symbol on his forehead and in his chest cavity, with more traditional messianic features. Material-wise, we went back to one of our source inspirations and made Sebastian resemble a living bronze monument.

Faces

Sebastian’s and Lu’s masks/faces had to show the lengths to which some would go to reshape themselves in pursuit of regaining and expressing their personality. We explored a carved/moulded and painted/stained look to emphasize the self-created image of the character as much as possible.

It was important that Lu didn’t come across as merely a subordinate - we wanted her to be more of a partner: the warrior counterpoint to Sebastian’s spiritual leadership. In contrast to Sebastian, Lu is lighter and more mobile, but still intimidating.

Initially we nicknamed the guards “surgeon sentries,” reflecting the two simple ideas behind their design. The guards of The City were one of only a few chances to directly and visually communicate any aspect of the civilization within the walls. In a way, they became a human manifestation of the city and its dwindling civilization.
Closed in, wrapped, sterile, purist, desperate to retain their own identity in the face of self-inflicted decline.

Initial Research and Ideas

The Cell provided the start of our narrative and the “birthplace” of Adam. As such, it played an important role, and it naturally went through a number of iterations until we arrived at the balance of key elements we wanted. We wanted it to feel claustrophobic, and in continuation of the pseudo-anatomy theme, we went for something resembling a mechanical womb, with thick, intestine-like pipes filling the space on all sides and compressing it even further.

Finding The Shape Language

Thumbnail explorations for the wall details of the Reformation Cell. These came at a relatively late stage in the project, at a point when we already had a base variant of the room in blockout, a camera, and a mocap setup: we just needed to distil the shapes and elements we wanted to use.

3D Blockout

Most of the actual environment design beyond the initial reference boards and thumbnail explorations was done in Maya. This let us have a working version of the world in Unity - including spaces, key structures, and distances - at a very early stage, giving us the freedom and flexibility to iterate alongside all other aspects of production. Some of the blockout meshes worked so well that we ended up using them in the final version of the short.

Initial Research and Ideas

The Wall of The City, as the primary manifestation of the setting, had to express the nature of the society living behind it. Medical themes and influences were to the fore here as well: combining the sterile simplicity of Brutalism with the rows of indexed folders in a medical archive, making the wall an actual physical archive of the medical/biological memory of a disappearing humanity.

Finding The Shape Language

Initial Research and Ideas

For the broken highway, or meeting point, where the strangers meet Adam and the other convicts, we wanted to create the impression of a holy place: an accidental temple.
We used the symmetrical shape of a broken-off highway section to create our shrine-like backdrop. A strong influence here was a woodblock print by Hiroshi Yoshida (In a Temple Yard).

Concept Art
