// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1688 transmissions indexed — page 83 of 85

[ 2016 ]

13 entries
1642|blog.unity.com

Evolution of our products and pricing

Firstly, our move to subscription had some of you asking why?! We've explained the rationale in the follow-up blog post by Joachim, Subscription! Why?. If you haven't read it yet, check it out.

Secondly, some of you raised major concerns with the new pricing:

- Some of you have been customers for a long time and have made use of upgrade discounts to your perpetual licenses. Those averaged to a yearly cost that was lower than the new subscription costs.
- If you develop for desktop only, the new all-platform cost is significantly higher than your old yearly cost.

For those of you hit hard by this, we think we've found a good solution.

Now, we don't want to go back to the old model of having iOS and Android be paid add-ons, while desktop is included in the base cost. This was always rather arbitrary. Developing and maintaining our desktop platforms is a real cost for us, just like for our mobile platforms, and yet mobile developers have been paying more. Now we're committed to making the price the same regardless of which platforms you target. We hope you understand and are with us in this decision.

At the same time, a lot of customers are telling us that the new prices are a very good deal. Some have the privilege to even say it's too cheap, though these people, being happy with the new prices, have not been as vocal in comments and social media.

The objective for us is to make everyone as happy as is possible given a rapidly growing global group of developers using Unity for many, many different things. We want paid versions of Unity to be affordable for developers big and small who want to go beyond Unity Personal, or are required to due to the revenue cap.

So with this goal in mind we are going to make these changes to what we announced previously:

- We're making the Unity splash screen in Unity Plus optional, like it is in Unity Pro.
- We're raising the revenue cap in Unity Plus from $100k to $200k so that more of you are able to take advantage of it.

In order to be able to do this, we removed the option to subscribe to Unity Plus without a one-year commitment. We also restricted Pay to Own to only apply to Unity Pro and not Unity Plus (see Pay to Own details further down). We let these things go from Unity Plus in order to be able to introduce the new advantages.

We know this is not going to change things for all of you. If your revenue is beyond the $200k cap for Unity Plus, and you are already a Unity user, we're announcing Transition offers below. We hope though that these changes will make Unity a great choice for those of you who might otherwise have had difficulties affording it.

Apart from price changes, we're also working on some changes to the Unity splash screen.

The new splash screen will read "Made with Unity" in all editions of Unity - no more mention of "Personal Edition". You will also be able to customize it with your own (blurred) background image and your own company logo in addition to the Unity logo. This feature is coming, but give us a bit of time to perfect the technical aspects of it before we release it. The customizable splash screen will be available in all versions of Unity, but can be completely turned off in Unity Plus and Unity Pro.

We'll have a blog post with further details later.

Unity Personal
- Free
- $100k revenue or funding cap
- All platforms
- Unity splash screen (with customization options)
- Personal tier services

Unity Plus
- Pay $35 per month with 12 month commitment
- $200k revenue or funding cap
- All platforms
- Optional Unity splash screen (with customization options)
- Dark Editor Skin
- Plus tier services

Unity Pro
- Pay $125 per month with 12 month commitment
- No revenue cap
- All platforms
- Optional Unity splash screen (with customization options)
- Dark Editor Skin
- Pro tier services
- Pay to Own

Both Plus and Pro tiers can be paid monthly or upfront (for people who find that easier for budgeting or billing purposes), and require you to commit to at least 12 months of subscription. There will also be an option to commit to Pro for 24 months, for people who want long term price certainty.

We will launch the new products soon, for new customers to buy. As an existing Unity 5.x perpetual license customer, you will no longer get new updates after March 2017. However, you have a few options if you want to keep getting updates:

- For up to five seats, you may subscribe to Unity Pro at the special price of $75 per month for a limited transition period, after which the price will revert to the normal subscription price of $125 per month.
- If you make less than $200k per year, you may choose Unity Plus and pay $35 per month with an annual commitment.

We will start sending out these transition offers by email after the new products have launched in our store.

If you pay for 24 or more consecutive months of a new Unity Pro subscription, you get to keep and use the version you have when you notify us that you are stopping subscription and choosing pay to own. At that point, you will stop receiving access to Pro tier services, new features and upgrades. You will receive the next 3 patches. We reserve the right to grant access to additional patches in the event that we find severe bugs. If you later resume the subscription, you will still own the perpetual license you elected but again start receiving updates, fixes and services. Once you have subscribed for another 24 consecutive months, and should you then elect to cease this new subscription, you will then be granted a new perpetual license of the then current version of Unity.

Thanks for taking the time to read all the detail. Please let us know your thoughts.

Read the updated Unity Subscription FAQ. Talk to us about the new licensing on the forum.

>access_file_
1643|blog.unity.com

Serialization, MonoBehaviour constructors and Unity 5.4

The majority of the Unity API should only be called from the main thread, e.g. from Start/Update/etc. on MonoBehaviour. Similarly, only a subset of the Unity API, like Debug.Log or Mathf, is safe to call from script constructors and field initializers, because Unity will invoke constructors when creating an instance of a class during deserialization (load), which might run on a non-main thread.

These requirements were not strictly enforced in versions of Unity prior to 5.4, which could lead to crashes, race conditions and issues that were hard to diagnose and reproduce.

In Unity 5.4, the new errors will in most cases not throw a managed exception and will not interrupt the execution flow of your scripts. This approach has been taken to reduce the amount of friction caused by upgrading your projects to Unity 5.4. These errors will throw a managed exception in a future release of Unity. We recommend that you fix these errors (if any) in your project when upgrading to 5.4.

The new errors are documented in the "Script Serialization Errors" section on the Script Serialization page in the manual.

Let's have a look at the new serialization errors in some common scenarios and how to fix them.

When Unity creates an instance of your MonoBehaviour/ScriptableObject derived class, it calls the default constructor to create the managed object. When this happens, we are not yet in the main loop and the scene has not been fully loaded yet. Field initializers are also called when calling the default constructor of a managed object. Calling the Unity API from a constructor is considered unsafe for the majority of the Unity API.

Examples:

    public class FieldAPICallBehaviour : MonoBehaviour
    {
        public GameObject foo = GameObject.Find("foo");
    }

    public class ConstructorAPICallBehaviour : MonoBehaviour
    {
        ConstructorAPICallBehaviour()
        {
            GameObject.Find("foo");
        }
    }

In these cases we will get the error "Find is not allowed to be called from a MonoBehaviour constructor (or instance field initializer), call it in Awake or Start instead. ...". The fix is to put the call to the Unity API in MonoBehaviour.Start (see the sketch at the end of this post).

When Unity loads a scene, it recreates the managed objects from the saved scene and populates them with the saved values (deserializing). In order to create the managed objects, the default constructor for the objects must be called. If a field referencing an object is saved (serialized) and the object's default constructor calls the Unity API, you will get an error when loading the scene. As with the previous error, we are not yet in the main loop and the scene is not fully loaded. This is considered unsafe for the majority of the Unity API.

Example:

    public class SerializationAPICallBehaviour : MonoBehaviour
    {
        [System.Serializable]
        public class CallAPI
        {
            public CallAPI()
            {
                GameObject.Find("foo");
            }
        }

        CallAPI callAPI;
    }

Here we get the error "Find is not allowed to be called during serialization, call it from Awake or Start instead." Fixing this issue requires us to refactor the code to make sure that no Unity API calls are made in any constructors for any serialized objects. If it is necessary to call the Unity API for some objects, then this must be done on the main thread from one of the MonoBehaviour callbacks, such as Start, Awake or Update.

If you have any comments, questions or general feedback on these errors, you can post them in the "Script Serialization Errors Feedback" thread on the Unity 5.4 beta forums.
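To make the first fix concrete, here is a minimal sketch reusing the FieldAPICallBehaviour example from above: the field initializer goes away, and the lookup moves into Start, which Unity calls on the main thread after loading completes.

    using UnityEngine;

    public class FieldAPICallBehaviour : MonoBehaviour
    {
        public GameObject foo;

        // Safe: Start runs on the main thread once deserialization is done.
        void Start()
        {
            foo = GameObject.Find("foo");
        }
    }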

>access_file_
1644|blog.unity.com

Subscription! Why?

Over the last days I've been reading all the comments about the new products and prices, and first of all, do know that we are very carefully listening to everything, discussing a lot, especially what we can do to make the subscription pricing appealing to indie devs who have been using Unity Pro for the longest time.

I especially care a lot about this group of developers, who effectively funded this company with us and have been with us on this journey for a very long time. So we will figure something out. It needs a bit of time, but we'll follow up soon...

In the meantime I want to give a bit of background about why we are doing this subscription thing and some thoughts on what was a bit lost in the announcement so far.

Why subscription?

When we started Unity, we would ship Unity every once in a while on just 2 platforms. Initially it was just Aras and I, gradually adding a couple of engineers every few months. We'd decide on a couple of major features, focus on working on them for a year and a bit, go through beta and then ship.

Today Unity lets you target 28 platforms. No one targets all platforms at the same time, but the ability to easily switch your game to any platform gives Unity developers incredible advantages.

Each platform is supported by a team of dedicated engineers. We have teams focused on different areas of the engine, working on improving each major area all the time.

We ship a patch release every week, supported by the awesome Sustained Engineering team. We ship point releases with major new features and improvements multiple times per year.

All of this is necessary because the platforms we support rapidly change. In today's world, we can't leave customers behind for a year because we are in the process of releasing a major version. We think it would be very bad for Unity developers if we held features for a full number release, rather than launch these features along the way, when they are ready.

With this in mind, we want to be clear: there will be no major Unity 6 release.

In the dev team we have wanted to stop doing major releases for a long time. The major releases model we had up until Unity 5 always forced us to bundle up a bunch of features and release them in one big splash. Usually it meant that good, complete features were artificially held back for a long time while other features were still maturing, and that some features were eventually released before they were ready. All in the name of creating one big splashy release that customers feel is worth upgrading to. It's what we did because we had to in a model where we worked toward an unnatural new major release every few years. This is not some evil marketing team pushing for it; it is the inherent nature of that business model. It was always a painful process for us and for you, and it really serves no one.

With our switch to subscription we can make Unity incrementally better, every week. When a feature is complete, we will ship it. If it is not ready, we will wait for the next point release.

Our switch to subscription is absolutely necessary in order for us to provide a robust and stable platform.

Pay to own!

Along with the new subscription model, we are introducing "pay to own". After having paid for 24 months of subscription, you can stop paying and keep on using the version you have at that point. Of course, you would also stop getting new features, services or fixes; the choice is yours.

If you are upgrading from a previously bought perpetual license of Unity and you are switching to subscription after March 2017, then you get "pay to own" right away with your subscription license. Pay to own applies to everyone; there's no special "license option" you have to get. Simple!

Thanks for listening. I hope this gives some much needed background on our switch to subscription.

>access_file_
1648|blog.unity.com

Debugging memory corruption: Who wrote ‘2’ into my stack?!

Several weeks ago we received a bug report from a customer that said their game was crashing when using the IL2CPP scripting backend. QA verified the bug and assigned it to me for fixing. The project was quite big (although far from the largest ones); it took 40 minutes to build on my machine. The instructions on the bug report said: "Play the game for 5-10 minutes until it crashes". Sure enough, after following the instructions, I observed a crash. I fired up WinDbg ready to nail it down.

Unfortunately, the stack trace was bogus: clearly, it had tried executing an invalid memory address. Although the stack trace had been corrupted, I was hoping that only a part of the whole stack got corrupted and that I should be able to reconstruct it if I looked at memory contents past the stack pointer register. Sure enough, that gave me an idea where to look next, and I was able to piece together a rough reconstructed stack trace.

Alright, so now I knew which thread was crashing: it was the IL2CPP runtime socket polling thread. Its responsibility is to tell other threads when their sockets are ready to send or receive data. It goes like this: there's a FIFO queue that socket poll requests get put in by other threads; the socket polling thread then dequeues these requests one by one, calls the select() function, and when select() returns a result, it queues the callback that was in the original request to the thread pool.

So somebody was corrupting the stack badly. In order to narrow the search, I decided to put "stack sentinels" on most stack frames in that thread. When a sentinel is constructed, it fills a buffer with "0xDD"; when it is destructed, it checks whether those values changed. This worked incredibly well: the game was no longer crashing! It was asserting instead.

Somebody had been touching my sentinel's privates - and it definitely wasn't a friend. I ran this a couple more times, and the result was the same: every time, a value of "2" was written to the buffer first. Looking at the memory view, I noticed that what I saw was familiar: these were the exact same values we'd seen in the very first corrupted stack trace. I realized that whatever caused the crash earlier was also responsible for corrupting the stack sentinel.

At first, I thought that this was some kind of buffer overflow, and somebody was writing outside of their local variable bounds. So I started placing these stack sentinels much more aggressively: before almost every function call that the thread made. However, the corruptions seemed to happen at random times, and I wasn't able to find what was causing them using this method.

I knew that memory was always getting corrupted while one of my sentinels was in scope. I somehow needed to catch the thing that corrupted it red-handed. I decided to make the stack sentinel memory read-only for the duration of the stack sentinel's life: I would call VirtualProtect() in the constructor to mark the pages read-only, and call it again in the destructor to make them writable. To my surprise, it was still being corrupted! And the message in the debug log was:

    Memory was corrupted at 0xd046ffeea8. It was readonly when it got corrupted. CrashingGame.exe has triggered a breakpoint.

This was a red flag to me. Somebody had been corrupting memory either while the memory was read-only, or just before I set it to read-only. Since I got no access violations, I assumed it was the latter, so I changed the code to check whether memory contents changed right after setting my magic values. My theory checked out:

    Memory was corrupted at 0x79b3bfea78. CrashingGame.exe has triggered a breakpoint.

At this point I was thinking: "Well, it must be another thread corrupting my stack. It MUST be. Right? RIGHT?". The only way I knew how to proceed in investigating this was to use data (memory) breakpoints to catch the offender. Unfortunately, on x86 you can watch only four memory locations at a time; that means I could monitor 32 bytes at most, while the area that had been getting corrupted was 16 KB. I somehow needed to figure out where to set the breakpoints.

I started observing corruption patterns. At first, it seemed that they were random, but that was merely an illusion due to the nature of ASLR: every time I restarted the game, it would place the stack at a random memory address, so the place of corruption naturally differed. Once I realized this, I stopped restarting the game every time memory became corrupted and just continued execution. This led me to discover that the corrupted memory address was always constant for a given debugging session. In other words, once it had been corrupted once, it would always get corrupted at the exact same memory address as long as I didn't terminate the program:

    Memory was corrupted at 0x90445febd8. CrashingGame.exe has triggered a breakpoint.
    Memory was corrupted at 0x90445febd8. CrashingGame.exe has triggered a breakpoint.

I set a data breakpoint on that memory address and watched as it kept breaking whenever I set it to the magic value of 0xDD. I figured this was going to take a while, but Visual Studio actually allowed me to set a condition on that breakpoint: to only break if the value at that memory address is 2.

A minute later, this breakpoint finally hit. I arrived at this point 3 days into debugging this thing. This was going to be my triumph. "I finally pinned you down!", I proclaimed. Or so I so optimistically thought.

I stared at the debugger in disbelief as my mind filled with more questions than answers: "What? How is this even possible? Am I going crazy?". I decided to look at the disassembly. Sure enough, it was modifying that memory location. But it was writing 0xDD to it, not 0x02! And looking at the memory window, the whole region was already corrupted.

As I was ready to bang my head against the wall, I called my coworker and asked him to look at whether I had missed something obvious. We reviewed the debugging code together and couldn't find anything that could even remotely cause such weirdness. I then took a step back and tried imagining what could possibly cause the debugger to break thinking that the code had set the value to "2". I came up with the following hypothetical chain of events:

1. mov byte ptr [rax], 0DDh modifies the memory location; the CPU breaks execution to let the debugger inspect the program state.
2. Memory gets corrupted by something.
3. The debugger inspects the memory address, finds "2" inside and thinks that's what changed.

So... what can change memory contents while the program is frozen by a debugger? As far as I know, that's possible in 2 scenarios: it's either another process doing it, or it's the OS kernel. To investigate either of these, a conventional debugger will not work. Enter kernel debugging land.

Surprisingly, setting up kernel debugging is extremely easy on Windows. You'll need 2 machines: the one the debugger will run on, and the one you'll debug. Open up an elevated command prompt on the machine you're going to be debugging and configure the debugger connection with bcdedit: one command to enable kernel debugging, and a second to set the network host IP and port. The host IP is the IP address of the machine that has the debugger running; it will use the specified port for the debugger connection, which can be anywhere between 49152 and 65535. After hitting enter on the second command, it will tell you a secret key, which acts as a password when you connect the debugger. After completing these steps, reboot.

On the other computer, open up WinDbg, click File -> Kernel Debug and enter the port and key. If everything goes well, you'll be able to break execution by pressing Debug -> Break. If that works, the "debugee" computer will freeze. Enter "g" to continue execution.

I started up the game and waited for it to break once so I could find out the address at which memory gets corrupted:

    Memory was corrupted at 0x49d05fedd8. CrashingGame.exe has triggered a breakpoint.

Now that I knew where to set a data breakpoint, I configured my kernel debugger to actually set it. After some time, the breakpoint actually hit...

Alright, so what was going on here?! The sentinel was happily setting its magic values, then there was a hardware interrupt, which called some completion routine, and that wrote "2" into my stack. Wow. Okay, for some reason the Windows kernel was corrupting my memory. But why?

At first, I thought that this had to be us calling some Windows API and passing it invalid arguments. So I went through all the socket polling thread code again, and found that the only system call we'd been calling there was the select() function. I went to MSDN and spent an hour rereading the docs on select() and rechecking whether we were doing everything correctly. As far as I could tell, there wasn't really much you could do wrong with it, and there definitely wasn't a place in the docs where it said "if you pass it this parameter, we'll write 2 into your stack". It seemed like we were doing everything right.

After running out of things to try, I decided to step into the select() function with a debugger, step through its disassembly and figure out how it works. It took me a few hours, but I managed to do it. It turns out that the select() function is a wrapper for WSPSelect(). The important parts are the call to NtDeviceIoControlFile(), the fact that it passes its local variable statusBlock as an out parameter, and finally the fact that it waits for the event to be signalled using an alertable wait. So far so good: it calls a kernel function, which returns STATUS_PENDING if it cannot complete the request immediately. In that case, WSPSelect() waits until the event is set. Once NtDeviceIoControlFile() is done, it writes the result to the statusBlock variable and then sets the event. The wait completes and then WSPSelect() returns.

On 64-bit, the IO_STATUS_BLOCK struct is 16 bytes long. It caught my attention that this struct seems to match my memory corruption pattern: the first 4 bytes get corrupted (NTSTATUS is 4 bytes long), then 4 bytes get skipped (padding/space for a PVOID) and finally 8 more get corrupted. If that was indeed what was being written to my memory, then the first four bytes would contain the result status. The first 4 corruption bytes were always 0x00000102. And that happens to be the error code for... STATUS_TIMEOUT!

That would be a sound theory, if only WSPSelect() didn't wait for NtDeviceIoControlFile() to complete. But it did.

After figuring out how the select() function worked, I decided to look at the big picture of how the socket polling thread worked. And then it hit me like a ton of bricks.

When another thread pushes a socket to be processed by the socket polling thread, the socket polling thread calls select() on that socket. Since select() is a blocking call, when another socket is pushed to the socket polling thread's queue, it has to somehow interrupt select() so the new socket gets processed ASAP. How does one interrupt the select() function? Apparently, we used QueueUserAPC() to execute an asynchronous procedure while select() was blocked... and threw an exception out of it! That unrolled the stack, had us execute some more code, and then at some point in the future the kernel would complete the work and write the result to the statusBlock local variable (which no longer existed at that point in time). If it happened to hit a return address on the stack, we'd crash.

The fix was pretty straightforward: instead of using QueueUserAPC(), we now create a loopback socket to which we send a byte any time we need to interrupt select(). This path has been used on POSIX platforms for quite a while, and is now used on Windows too. The fix for this bug shipped in Unity 5.3.4p1.

This is one of those bugs that keep you up at night. It took me 5 days to solve, and it's probably one of the hardest bugs I've ever had to look into and fix. Lesson learnt, folks: do not throw exceptions out of asynchronous procedures if you're inside a system call!
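As a footnote, the loopback fix described above translates readily to managed code. Here is a minimal C# sketch of the idea; the class and its members are this post's invention, not Unity's actual code, which lives in the C++ runtime:

    using System.Collections.Generic;
    using System.Net;
    using System.Net.Sockets;

    // Sketch: instead of interrupting a blocked select() with QueueUserAPC,
    // the polling thread watches one end of a connected loopback socket pair;
    // other threads wake it by sending a single byte to the other end.
    class SelectInterrupter
    {
        readonly Socket sendEnd;    // written to by other threads
        readonly Socket receiveEnd; // included in the polling thread's read set

        public SelectInterrupter()
        {
            // Build a connected loopback socket pair.
            var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            listener.Bind(new IPEndPoint(IPAddress.Loopback, 0));
            listener.Listen(1);
            sendEnd = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            sendEnd.Connect(listener.LocalEndPoint);
            receiveEnd = listener.Accept();
            listener.Close();
        }

        public Socket WakeupSocket { get { return receiveEnd; } }

        // Called from any thread: makes a blocked Socket.Select return
        // immediately, without touching the polling thread's stack.
        public void Interrupt()
        {
            sendEnd.Send(new byte[] { 1 });
        }

        // Called by the polling thread after Select returns, to swallow wakeup bytes.
        public void Drain()
        {
            var scratch = new byte[64];
            while (receiveEnd.Available > 0)
                receiveEnd.Receive(scratch);
        }
    }

    // Polling loop outline (socketsToPoll is hypothetical):
    //   var checkRead = new List<Socket>(socketsToPoll) { interrupter.WakeupSocket };
    //   Socket.Select(checkRead, null, null, -1); // -1 means an infinite timeout
    //   if (checkRead.Contains(interrupter.WakeupSocket))
    //       interrupter.Drain(); // a new request arrived; rebuild the set and loop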

>access_file_
1650|blog.unity.com

Key metrics to measure your app’s success

We live in an increasingly data-driven world, and that's no less true in the mobile industry. App metrics have become absolutely vital to developers looking to understand their users better and improve their experience in the app. From engagement metrics allowing developers to optimize app content and campaigns, to business metrics measuring revenue against UA costs, the challenge remains in understanding the power of app metrics to improve user acquisition.

A recent survey revealed that more than 90% of developers implement third-party analytics within their app, but are still unsure how to leverage their app metrics correctly. Between industry jargon and all the different formulas, too many developers are simply at a loss as to what to do with all the collected data. In an attempt to help you understand the power of app metrics, and to an extent, mobile game KPIs, we've compiled a glossary of the many different app metrics.

User & usage metrics

User metrics shed light on how large your user base is and illustrate what kind of users your app appeals to, i.e. users segmented by demographic, geo, or device. Developers can use this information to optimize and localize their app for a better user experience.

Daily active users (DAU): This metric reflects the total number of users that visit your app on a daily basis. It gives you insight into how immersed users are within your app, and is a measure of your app's success.

Monthly active users (MAU): Similar to the above, this reflects the total number of users visiting your app on a monthly basis. It will help you understand your app's popularity (increasing, stagnant or decreasing).

Device/OS: Mobile apps are deployed on many different devices. It's important for you to identify which device, operating system (OS) and OS version (iOS 7.x, iOS 8, ...) your loyal user base is coming from, so you can optimize your app for those specific platforms.

Geo segmentation: This metric will tell you where your users are located. It will help you localize your app and identify issues more easily. (Before introducing your app to new geos, it is also highly recommended to check out your competitors' traction in the local market. This will help you understand where it would be best to launch first.)

Engagement metrics

Engagement metrics provide valuable insights into user behavior and engagement. Understanding users' interaction with your app is essential to improving its functionality, retaining a loyal user base, and optimizing your monetization strategy.

Average session duration: This measures how long the average user spends in your app within one session. The more engaged your users are, the longer they will interact with your app.

Session interval: The time span between one app session and the next. This metric shows how frequently your users engage with your app.

Retention rate: This metric is calculated by dividing the number of monthly active users by the number of monthly installs. It gives you an overview of how loyal your user base is over a longer period of time.

Churn rate: The churn rate is the opposite of the retention rate (RR), and references how many users uninstalled your app. You can easily calculate it with the formula 1 - RR%. So if your app's retention rate is 60% from one month to the next, the churn rate will amount to 40%.

App rating: One of the strongest engagement metrics is your app's rating - i.e. the average rating your users grant your app - which references user satisfaction with your app. It's prominently displayed on your app profile within the app marketplace and is a major incentive for users to install your app.

Business metrics

A vital part of your app's success lies in its ability to generate revenue and remain ROI positive. Business metrics keep track of monetization and user acquisition strategies, helping developers make informed decisions as well as foresee any financial issues with their app.

eCPM: This is a revenue measurement every developer should be familiar with. It literally means "effective cost per mille", meaning your advertising revenue per 1K impressions. eCPM ultimately measures how well your ads are performing and how much revenue your app is amassing from them. Simply divide your total earnings by total impressions and multiply that number by 1,000. This metric is also very useful for predicting future earnings. However, developers beware - eCPM neglects to account for fill rate.

Average revenue per user (ARPU): Not to be confused with LTV, this metric reflects the current average revenue derived from your user base. It is calculated by summing up your app's total revenue and dividing that number by active users. The higher the average, the more lucrative your user base is.

Lifetime value (LTV): This is your primary revenue assessment metric, defining the financial value of your app and each user's net worth. It estimates the amount of revenue a user will contribute during their lifetime using your app. This metric varies by app type: for example, a commerce app like Uber will calculate user LTV in terms of frequency of use or money spent on rides. LTV is probably the easiest way to quantify your app's overall success. High-LTV users = higher overall revenue. Tracking LTV is necessary in order to identify which user base your revenue is flowing from (existing or new users), foresee revenue fluctuations, and ensure you make calculated decisions on acquisition spending.

Performance metrics

Performance metrics provide valuable insights into the user's experience within your app, tracking technical errors and failures. It's critically important to understand these metrics, since users won't give your app the time of day if it keeps crashing or loads really slowly.

Crash rate: The crash rate shows you the ratio of crashes to actions performed, so you can pinpoint problems within your app. All mobile apps crash at first. Developers need to stay on top of this technical failure to avoid disrupting the user's experience and causing data loss. Some app intelligence solutions also provide deeper insights, such as how much a crash costs you, whether crashes correlate with app updates, etc.

API latency: Most apps use several APIs or services (for example, mobile advertising networks). API latency refers to the amount of time it takes to get a response after your user makes a request or a transaction within your app. The rule of thumb is one second per request.

App loading: This metric measures how long different requests take to load in your app - for example, how long it takes for a game level to load or the amount of time it takes for a search to return results. Also known as app load per session/period.
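The formulas in this glossary are simple enough to capture in code. A small illustrative helper follows; the class and method names are this post's invention, not any analytics SDK's API:

    using System;

    static class AppMetrics
    {
        // Retention rate: monthly active users divided by monthly installs.
        public static double RetentionRate(int monthlyActiveUsers, int monthlyInstalls)
            => (double)monthlyActiveUsers / monthlyInstalls;

        // Churn rate is the complement of retention: 1 - RR.
        public static double ChurnRate(double retentionRate) => 1.0 - retentionRate;

        // eCPM: total earnings per 1,000 ad impressions (ignores fill rate).
        public static double ECpm(double totalEarnings, long totalImpressions)
            => totalEarnings / totalImpressions * 1000.0;

        // ARPU: total revenue divided by active users.
        public static double Arpu(double totalRevenue, int activeUsers)
            => totalRevenue / activeUsers;
    }

    // Example: AppMetrics.ChurnRate(0.60) returns 0.40, matching the
    // 60% retention / 40% churn example above.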

>access_file_
1651|blog.unity.com

Profiling with Instruments

In the Enterprise Support team, we see a lot of iOS projects. At some point in any iOS development, developers often end up running their game and sitting there thinking "Why the hell is this running so slowly?". There are some great sets of tools for analysing performance out there, and one of the best is Instruments. Read on to find out how to use it to find your issues!

To use Instruments, or any of Xcode's debugging tools, you will need to build a Unity project for the iOS build target (with the Development Build and Script Debugging options unchecked). Then you will need to compile the resultant Xcode project with Xcode in Release mode and deploy it to an attached iOS device.

After starting Instruments (by either a long press on the Play button, or selecting Product > Profile), select the Time Profiler. To begin a profiling run, select the built application from the application selector, then press the red Record button. The application will launch on the iOS device with Instruments connected, and the Time Profiler will begin recording telemetry. The telemetry will appear as a blue graph on the Instruments timeline.

P.S. To clean up the call hierarchy, press the Call Tree button at the bottom left of the Details pane to show options and select Flatten Recursion and Hide System Libraries.

A list of method calls will appear in the detail section of the Instruments window. Each top-level method call represents a thread within the application. In general, the main method is the location of all hotspots of interest, as it contains all managed code. Expanding the main method will yield a deep tree of method calls. The major branch is between two methods (these method names sometimes appear in ALL CAPS):

- [startUnity] and UnityLoadApplication
- PlayerLoop

[startUnity] is of interest as it contains all time spent initializing the Unity engine. A method named UnityLoadApplication will be found beneath it. It is beneath UnityLoadApplication that startup time can be profiled.

Once you have a nice time-slice of your application profiled, pause the Profiler and start expanding the tree. As you work down the tree, you will notice the time in ms reduces in the left-hand column. What you are looking for are items that cause a significant reduction in the time. This will be a performance hotspot. Once you have found one, you can go back to your code-base and find out WTF is going on that is taking so much time. It could be that it is a totally necessary operation, or it could be that some time in the distant past you hacked some pre-production code in that has made it over to your production project, or... well... it could be a million reasons really. If or how you decide to fix this hotspot is largely up to you, as you know your codebase better than anyone. :D

Instruments can also be used to look for performance sinks that are distributed broadly - ones that lack a single large hotspot, but instead show up as a few milliseconds of lost time in many different places in a codebase. To do this, type either a partial or full function name into Instruments' symbol search box, shown by pressing ⌘F or clicking Find/Find... in the Edit menu. If profiling a slice of gameplay, expand PlayerLoop and collapse all the methods beneath it. If profiling startup time, expand UnityLoadApplication and collapse the methods beneath it. The total number of milliseconds wasted on a specific operation can then be roughly estimated by looking at the total time spent in PlayerLoop or UnityLoadApplication and subtracting the number of milliseconds shown in the Self column.

Common methods to look for (a worked example appears at the end of this post):

- "Box(", "Box" and "box" - these indicate that C# value boxing is occurring; most instances of boxing are trivially fixed.
- "Concat" - string concatenation is often easily optimized away.
- "CreateScriptingArray" - all Unity APIs that return arrays will allocate new copies of arrays. Minimize calls to these methods.
- "Reflection" - reflection is slow. Use this to estimate the time lost to reflection and eliminate it where possible.
- "FindObjectOfType" - use this to locate repeated or unnecessary calls to FindObjectOfType, or other known-slow Unity APIs.
- "Linq" - examine the time lost to creating and discarding Linq queries; consider replacing hotspots with manually-optimized methods.

As well as profiling CPU time, Instruments also allows you to profile memory usage. Instruments' Allocations profiler provides two probes that offer detailed views into the memory usage of an application. The Allocations probe permits inspection of the objects resident within memory during a specific time-span. The VM Tracker probe permits monitoring of the dirty memory heap size, which is the primary metric used by iOS to determine when an application must be forcibly closed. Both probes will run simultaneously when selecting the Allocations profiler in Instruments. As usual, begin a profiling run by pressing the red Record button.

To set up the Allocations probe correctly, ensure the following settings are correct: at the bottom of the window, ensure Allocation Lifespan (the middle option) is set to Created & Persistent, and in the Recording Options (File menu), ensure Discard events for freed memory is checked.

The most useful display for examining memory behavior is the Statistics display, which is the default display when using the Allocations probe. This display shows a timeline. When used with the recommended settings, the graph displays blue lines indicating the time and magnitude of memory allocations which are still currently live. By watching this graph, you can watch for long-lived or leaked memory by simply repeating the scenario under test and ensuring that no blue lines remain alive between runs.

Another useful display is the Call Trees display. It displays the line of code at which allocations are performed, along with the amount of memory consumption that line of code is responsible for. You can change the display by clicking to the right of Details. In our application under test, about 25% of the total memory usage was due solely to shaders. Given the shaders' location in the loading thread, these must be the standard shaders bundled with default Unity projects, which are then loaded at application startup time.

As before, once you have identified a hotspot, what you do with it is totally dependent on your project.

So there you go. A brief guide to Instruments. 1000(ish) words and no A-Team references. We don't want to get into trouble like last time. Copyright violations are officially Not Funny™.

The Enterprise Support team is creating more of these guides, and we will be posting the full versions of our Best Practice guides in the coming months!

We love it when a plan comes together.
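To make a few items from the "common methods" list above concrete, here is a small hypothetical MonoBehaviour showing the boxing, Concat and FindObjectOfType patterns alongside their usual fixes. This is an illustrative sketch, not code from the guide:

    using System.Text;
    using UnityEngine;

    public class HotspotExamples : MonoBehaviour
    {
        Camera cachedCamera;
        readonly StringBuilder labelBuilder = new StringBuilder(64);

        void Start()
        {
            // "FindObjectOfType": call known-slow lookups once and cache the
            // result, rather than searching the scene every frame.
            cachedCamera = FindObjectOfType<Camera>();
        }

        void Update()
        {
            int score = 42;

            // "Box(": string.Format("Score: {0}", score) would box the int into
            // an object. "Concat": building strings with + allocates every frame.
            // Reusing a StringBuilder with a typed Append avoids both.
            labelBuilder.Length = 0;
            labelBuilder.Append("Score: ").Append(score);
        }
    }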

>access_file_
1652|blog.unity.com

3 tips for increasing app retention (what is retention marketing?)

Guest post by Eugine Dychko, Content Manager at GoWide.

Until recently, the bulk of mobile app marketing concentrated on amassing new users. The user acquisition funnel consisted of the following: thousands of users see an ad, many of them install your app, and just as many stop using it after the first few days. In the end, some number of users stick around to the point of generating revenue. Although this CPI model wasn't too consistent or well-calculated, it managed to work the majority of the time. However, as the mobile industry continues to change, so have the strategies to gain loyal users.

User acquisition has shifted its focus to retaining existing users, as opposed to simply generating downloads. As the mobile market gets increasingly saturated, app owners can no longer ignore the importance of retention marketing in ensuring they build a sustainable user base for their mobile app. Here's why:

– App stores are getting increasingly saturated. Apple's App Store started off with just 500 apps in 2009, and six years later, there are more than 1.5 million apps available.
– The cost of acquiring a new user increases every year on both iOS and Android platforms. Based on Fiksu's November 2015 indexes, the average CPI for iOS is $1.54, which marks a 40% annual increase. For Android, the CPI is $2.27, which is 101% higher than last year.
– Users have dozens of apps installed on their smartphones, yet use only a few of them. A Pew Research Center report shows that 46% of users utilize only 1-5 apps per week, whereas only 35% use 6 to 10.
– Most apps are abandoned by users after being used 3-4 times.

The shift to retention marketing

User acquisition is a starting point for every mobile app. If done correctly, it is intended to effectively draw in new users and create an active user base. While it's important to spend some of your budget on UA in order to build your app's initial user base, this activity should be accompanied by a long-term retention marketing plan.

Retention marketing is a safety net that keeps in users who just happened to leave your app for one reason or another. Retention helps activate otherwise dormant users by reaching out and reminding them of the product they were at some point interested enough to install.

A prime example is the People Tree app utilizing the tools of retention marketing with a banner retargeting a user. The banner shows an image of the exact dress the user has previously browsed, in an effort to lead them directly back to the app page where that item can be purchased.

How to use retention app marketing

There are several retention marketing strategies, techniques and media channels an app owner can use. The choice depends greatly on your app's category and the type of services it offers. Below are the most common principles of retention marketing:

1. Start from the get-go

Retention begins with onboarding and should start right away, as an average app loses as much as 80% of users in the first 3 days post-install. This is the ugly truth: by not describing their features properly to a new user, most apps ignore the importance of a diligent onboarding process. If your user does not understand how to use your app, he or she will leave shortly thereafter. A thought-out registration and onboarding process for new users is a much more effective, and more easily established, retention practice than launching an email campaign the following week.

For example, Flipboard offers an engaging onboarding process that starts before a user signs up. A new user chooses their theme preferences and immediately gets a general idea of how to use the app, stimulating a trusting and loyal relationship with the brand.

2. Know your audience

Retention won't work without a vast knowledge base on user behavior, so the sooner you gather all the data you need, the better. Don't hesitate to install a good in-app analytics tool such as Flurry Analytics or Localytics to help you track metrics such as number of sessions, session lengths, buttons clicked, time spent on page, and more demographic and technical data. This data creates a detailed picture of how users interact with content and behave inside your app and, more importantly, where in your app users are less engaged, helping you gain insight into why they abandon your app and how to reactivate them.

3. Keep it brief

When it comes to retaining existing users, your content should be as simple and as brief as possible. Keep your message clear and to the point. Below are some options of media channels to choose from when reaching out to your dormant users:

– Email marketing: While often the most low-cost method, it's not the fastest one, and is more appropriate for a long-term retention strategy.
– Push notifications: Limited in content length and quantity, these can still be very effective for immediate messages that deep-link to a specific page in your app (for example, a push notification for a 'Sale' announcement that opens to your app's 'discounted items' page).
– In-app messaging: These can be seen as part of an app's onboarding process and are a great re-engagement tool for people already using your app. According to Localytics, apps that utilize in-app messages increase user retention by 2-3.5 times.
– Retargeting ads: The least intrusive way to reach users. Works best for users who have recently shown interest in your app (such as, for example, filling a shopping cart without checking out and making an in-app purchase).

The vital benefits of retention app marketing

1. You gain insight into your users' in-app behavior

App retention marketing helps measure how valuable your app is to your users. They have installed it, which is a clear sign that they are interested. But what next? How many of them return? Which functions or parts of the app engage users, and which seem to deter them from engaging with or visiting your app again? Monitoring user behavior through in-app analytics is essential for a successful retention strategy, providing unique insight into how users view your app and what needs to be enhanced or adjusted.

2. You save on budget

Did you know it costs 7 times more to acquire a new user than to retain an existing one? Retention marketing is a cost-saving opportunity. Data-driven remarketing ads have much higher conversion rates than regular ads, plus they boost in-app revenue. According to Kelsey Ricard from Taplytics, existing users spend 33% more than new ones.

3. You build a loyal user base

Retention marketing concentrates on turning users into loyal customers. Take the gaming industry, for example, where every gaming app is looking for the highest-paying users. Utilizing retention marketing through demographic and behavioral targeting can help determine your app's most relevant audience segment and nurture your most loyal users.

Even though retention marketing is relatively new in the mobile app ecosystem, it has proven to be an effective strategy for growing an app's user base long-term. According to an Appboy study, apps that ran re-engagement campaigns with push notifications and emails or in-app messages during the seven-day post-install period saw user retention increase 130% over the month.

You can expect to see retention marketing become a driving force in the mobile app industry moving forward. Data shows that of last year's total global growth in mobile app usage, an impressive 40% came from existing users, a jump from 20% in 2014 and 10% in 2013.

More and more apps across all categories have come to rely on retention marketing in order to succeed. While this marketing approach demands extensive planning, UX expertise, data analysis, and effective UA budget management, these efforts pay off in the long run - creating a durable base of loyal and engaged users.

Learn about more mobile app retention strategies you can use.

Guest post by: Eugine Dychko, Content Manager at GoWide, a global mobile app marketing platform with mobile traffic solutions for app owners, developers, affiliate marketers and advertising agencies.

>access_file_
1653|blog.unity.com

The what, how, and why of deep linking

Deep linking empowers app developers to more quickly, seamlessly, and effectively answer specific consumer needs. Below we decipher the what, how, and why of deep linking. We look at exactly what it is, how you can apply it to your app, and why it can benefit your mobile strategy.

What is deep linking?

A deep link is a hyperlink that, rather than linking to an app's main page, links to a specific piece of content or page in an app. Deep linking is a way for you to display your product more precisely, and therefore effectively, by directing your user past your app's homepage and into the specific pages containing the relevant content that they engaged with.

In order to increase conversion, apps utilize deep linking to generate traffic to pages with in-app purchases or special offerings. Deep linking, for example, allows a retail app to show an ad for a dress on the mobile web and, upon clicking the ad, take the user directly to the page within the app where the dress is available for purchase - seamlessly converting mobile web users to app users, as well as driving revenue.

In another example, a gaming app can promote today's 'special deal' on weapons in an ad, which will then take the user directly to that specific IAP offering, as opposed to the game's main page or general IAP store, where they would need to look for that specific IAP package. In other words, by simplifying the conversion process, targeting content discoverability and seamlessly optimizing the app's user flow, deep linking can lead potential buyers directly to what they would like to buy.

1. Traditional deep linking

This is the basic hyperlink that links to a specific page within an app. The issue with traditional deep links is that they won't work if the app has not previously been installed on the user's phone. The phone will look for the app content and, if the app is not installed, will show an error. This means this kind of deep linking won't work as a way to drive new installs of your app.

2. Deferred deep linking

With deferred deep linking, access to specific content in an app is simply 'deferred' until after a user installs the app. The user can click on a link that points to a specific page or content inside an app which they haven't actually installed. In the meantime, the deep link will direct the user to the relevant app store to install the app and then, once installed, will instantly redirect the user to the specific content they wanted and were expecting - no search required. In other words, deferred deep linking allows for seamless and precise movement across the mobile web, apps and the app store.

3. Contextual deep linking

Contextual deep links take deferred deep linking one step further. While the user is immediately directed to the content of the link within the app, whether or not the app has been installed, contextual deep links can also store and pass referring information through the app store - bringing the app relevant information about who the user is, where they came from, which ad campaign they clicked, and where they want to go.

Contextual deep links are useful for both sides of the exchange. They can improve a developer's app personalization process by incorporating features such as personalized welcomes (where you see your friend's recommendation in the app if they share an item with you), while providing the user with a better experience, more personal ads, and highly relevant information.

How can you apply it to your app?

1. In-app ads

In-app messages can appear as a part of an app's onboarding process and also function as a great re-engagement tool for people already using your app. According to Localytics, apps that utilize in-app messages increase user retention by 2-3.5 times. Add a 'buy now' button in a banner over relevant mobile content, instantly delivering the user the same content in-app to give them a continuous experience.

2. Push notifications

Promote any special deal or content offered in your app using push notifications. You can utilize push notifications to bring back users who don't frequent your app very often. Limited in length and quantity, this method can be very effective for immediate messages that deep link to a specific page in your app (for example, a push notification for a Sale announcement that will open to your app's 'discounted items' page). So your users will see an ad promoting a new IAP offering, and will then be brought to the specific page with that offering. Push notifications can also be used to spur users to download a new feature of your app - bringing them straight to the app store for the download, and then back to the last page they were at in your app.

Why is it beneficial?

1. Deep linking boosts user acquisition

Instead of expecting a new or potential user to move from a browser (where they saw an ad) to the app store and conduct their own search for your app, you can use a deep link to direct them straight to a specific screen in your app. This gives you more control over your app's targeting process, with the potential to reach an entirely new audience that is experiencing your brand for the first time in a more effective way. You can track what these potential users liked, and direct them to pages within your app that they are more likely to engage with.

2. Deep linking eases your app's onboarding process

Normally, clicking an ad promoting a specific feature or offering within an app would lead a user to that app's generic homepage, its mobile website, or to the app store to download the app. This takes a potential user away from the app, putting in an extra step that might deter them from completing the installation process. Even once the app is installed, a user would have to navigate to the specific page or special offer that was shown in the ad by themselves. These links tend to dump the user at the front door, leaving him or her to sift through countless pages within the app to find what they were looking for in the first place. By linking to specific pages within apps, deep links allow you to place the user directly within reach of the content, product or functionality they want - providing you with one of the most effective ways to get your content in front of the right consumers.

3. Deep linking optimizes user engagement

Every app page is now a potential home or landing page, as every section in your app is now an area you can deep link to. By allowing potential customers to go directly to specific or relevant landing pages rather than simply opening the app, you create an invaluable and highly personalized user experience - increasing efficiency and improving your app's UX by optimizing the browsing experience for the user and engaging with them on their terms.

4. Deep linking increases revenue

Deep links allow developers to drastically improve ROI by driving traffic to specific pages based on different user interests - building a shortcut for users looking to make a purchase or convert on an in-app offering. Specific pages, when linked to specific users at a specific time, will lead to greater opportunity for revenue.

With more of our time spent in mobile apps, providing targeted navigation between apps and within pages in apps is becoming more critical. Deep links provide exposure for the content inside your apps, and shorten the distance and time between new users and the content they will enjoy. Through pinpointed navigation based on specific user interests, deep links allow you to bridge the gap between you and your users by building app experiences that directly and precisely fulfill their needs.

>access_file_

[ 2015 ]

7 entries
1654|blog.unity.com

10000 Update() calls

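A minimal reconstruction of the kind of script the post discusses - a plain MonoBehaviour with a magic Update method (the class name and body are illustrative):

    using UnityEngine;

    public class MyBehaviour : MonoBehaviour
    {
        // A "magic" method: Unity calls this every frame, with no explicit
        // registration, override or interface involved.
        void Update()
        {
            transform.Rotate(0f, 1f, 0f);
        }
    }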
For an experienced developer this code is a bit odd.1. It's not clear how exactly this method is called.2. It's not clear in what order these methods are called if you have several objects in a scene.3. This code style doesn't work with intellisense.No, Unity doesn't use System.Reflection to find a magic method every time it needs to call one.Instead, the first time a MonoBehaviour of a given type is accessed the underlying script is inspected through scripting runtime (either Mono or IL2CPP) whether it has any magic methods defined and this information is cached. If a MonoBehaviour has a specific method it is added to a proper list, for example if a script has Update method defined it is added to a list of scripts which need to be updated every frame.During the game Unity just iterates through these lists and executes methods from it — that simple. Also, this is why it doesn't matter if your Update method is public or private.The order is specified by Script Execution Order Settings (menu: Edit > Project Settings > Script Execution Order). It might be not the best way to manually set the order of 1000 scripts but if you want one script to be executed after all other ones this way is acceptable. Of course, in the future we want to have a more convenient way to specify execution order, using an attribute in code for example.We all use an IDE of some sort to edit our C# scripts in Unity, most of them don't like magic methods for which they can't figure out where they are called, if at all. This leads to warnings and makes it harder to navigate the code.Sometimes developers add an abstract class extending MonoBehaviour, call it BaseMonoBehaviour or alike and make every script in their project extend this class. They put some basic useful functionality in it along with a bunch of virtual magic methods like so:This structure makes using MonoBehaviours in your code more logical but has one little flaw. I bet you already figured it out...All your MonoBehaviours will be in all update lists Unity uses internally, all these methods will be called each frame for all your scripts, mostly doing nothing at all!One might ask why should anyone care about an empty method? The thing is that these are the calls from native C++ land to managed C# land, they have a cost. Let's see what this cost is.For this post I created a small example project which is available on Github. It has 2 scenes which can be changed by tapping on a device or pressing any key in editor:(1) In the first scene 10000 MonoBehaviours are created with this code inside:(2) In the second scene another 10000 MonoBehaviours are created but instead of having an Update they have a custom UpdateMe method which is called by a manager script every frame like so:The test project was run on 2 iOS devices compiled to Mono and IL2CPP in non-Development mode in Release configuration. Time was measured as following:Set up a Stopwatch in the first Update called (configured in Script Execution Order),Stop the Stopwatch at LateUpdate,Average the timings over a few minutes.Unity version: 5.2.2f1 iOS version: 9.0WOW! This is a lot! There must be something wrong with the test!Actually, I just forgot to set Script Call Optimization to Fast but no Exceptions, but now we can see what impact on performance this particular setting has... not that anyone cares anymore with IL2CPP.OK, this is better. Let's switch to IL2CPP.Here we see two things:1. This particular optimization still makes sense in IL2CPP.2. 
The test project was run on 2 iOS devices, compiled with Mono and IL2CPP, in non-Development mode in the Release configuration. Time was measured as follows:

- Set up a Stopwatch in the first Update called (configured in Script Execution Order),
- Stop the Stopwatch in LateUpdate,
- Average the timings over a few minutes.

Unity version: 5.2.2f1. iOS version: 9.0.

WOW! This is a lot! There must be something wrong with the test! Actually, I just forgot to set Script Call Optimization to "Fast but no Exceptions" - but now we can see what impact this particular setting has on performance... not that anyone cares anymore with IL2CPP.

OK, this is better. Let's switch to IL2CPP. Here we see two things:

1. This particular optimization still makes sense in IL2CPP.
2. IL2CPP still has room for improvement, and as I'm writing this post the Scripting and IL2CPP teams are working hard to increase performance. For example, the latest Scripting branch contains optimizations making the test run 35% faster.

I'll explain what Unity is doing under the hood in a few moments. But right now, let's change our Manager code to make it 5 times faster! If you haven't read this great series of posts about IL2CPP internals, you should do it right after you finish reading this one!

It turns out that if you want to iterate through a list of 10000 elements every frame, you'd better use an array instead of a List, because in that case the generated C++ code is simpler and array access is just faster. In the next test I changed the List to a ManagedUpdateBehavior[].
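The change itself is tiny; a sketch under the same assumptions as above:

    using UnityEngine;

    // The same manager, with the List swapped for an array.
    public class Manager : MonoBehaviour
    {
        private ManagedUpdateBehavior[] array; // populated once at startup (omitted)

        private void Update()
        {
            for (var i = 0; i < array.Length; i++)
                array[i].UpdateMe();
        }
    }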
This looks much better! Update: I ran the test with the array on Mono and got 0.23ms.

We figured out that calling functions from C++ to C# is not fast, but let's find out what Unity is actually doing when calling Updates on all these objects. The easiest way to do this is to use Time Profiler from Apple Instruments. Note that this is not a Mono vs. IL2CPP test - most of the things described further are also true for a Mono iOS build.

I launched the test on an iPhone 6 with Time Profiler, recorded a few minutes of data and selected a one minute interval to inspect. We are interested in everything starting from this line: if you haven't used Instruments before, on the right you see functions sorted by execution time, along with the other functions they call. The leftmost column is the CPU time (in ms and %) of each function and the functions it calls combined; the second column is the self execution time of the function. Note that since the CPU wasn't fully used by Unity during this experiment, we see 10 seconds of CPU time spent on our Updates in a 60 second interval. Obviously we are interested in the functions taking the most time to execute.

I used my mad Photoshop skills and color coded a few areas for you to better understand what's going on. In the middle you see our Update method - or, as IL2CPP calls it, UpdateBehavior_Update_m18. But before getting there, Unity does a lot of other things:

Unity goes over all Behaviours to update them. A special iterator class, SafeIterator, ensures that nothing breaks if someone decides to delete the next item on the list. Just iterating over all registered Behaviours takes 1517ms out of a total of 9979ms.

Next, Unity does a bunch of checks to make sure that it is calling a valid existing method on an active GameObject which has been initialized and had its Start method called. You don't want your game to crash if you destroy a GameObject during Update, do you? These checks take another 2188ms out of the total 9979ms.

Unity creates an instance of ScriptingInvocationNoArgs (which represents a call from the native side to the managed side) together with ScriptingArguments, and orders the IL2CPP virtual machine to invoke the method (the scripting_method_invoke function). This step takes 2061ms out of the total 9979ms.

The scripting_method_invoke function checks that the passed arguments are valid (900ms) and then calls the Runtime::Invoke method of the IL2CPP virtual machine (1520ms). First, Runtime::Invoke checks that such a method exists (1018ms). Next, it calls a generated RuntimeInvoker function for the method signature (283ms). That in turn calls our Update function, which according to Time Profiler takes 42ms to execute. And a nice colorful table.

Now let's use Time Profiler with the manager test. You can see on the screenshot that the same methods are there (some of them take less than 1ms total, so they are not even shown), but most of the execution time now goes to the UpdateMe function (or, as IL2CPP calls it, ManagedUpdateBehavior_UpdateMe_m14). Plus, there's a null check inserted by IL2CPP to make sure that the array we are iterating over is not null. The next image uses the same colors.

So, what do you think now - should one care about a little method call?

To be honest, this test is not completely fair. Unity does a great job guarding you and your game from unintended behavior and crashes: Is this GameObject active? Wasn't it destroyed during this update loop? Does the Update method exist on the object? What should be done with a MonoBehaviour created during this update loop? My manager script doesn't handle any of that - it just iterates through a list of objects to update. A real-world manager script would probably be more complicated and slower to execute. But in this case I am the developer: I know what my code is supposed to do, and I architect my manager class knowing what behavior is and isn't possible in my game. Unity, unfortunately, doesn't possess such knowledge.

Of course it all depends on your project, but in the field it's not rare to see a game using a large number of GameObjects in the scene, each executing some logic every frame. Usually it's a little bit of code which doesn't seem to affect anything, but when the number grows very large, the overhead of calling thousands of Update methods starts to be noticeable. At that point it might already be too late to change the game's architecture and refactor all these objects into the manager pattern.

You have the data now - think about it at the beginning of your next project.

>access_file_
1655|blog.unity.com

Going deep with IMGUI and Editor customization

Strange timing, you might think. Why care about the old UI system now that the new one is available? Well, while the new UI system is intended to cover every in-game user interface situation you might want to throw at it, IMGUI is still used, particularly in one very important situation: the Unity Editor itself. If you're interested in extending the Unity Editor with custom tools and features, it's very likely that one of the things you'll need to do is go toe-to-toe with IMGUI.

First question, then: why is it called 'IMGUI'? IMGUI is short for Immediate Mode GUI. OK, so, what's that? Well, there are two major approaches to GUI systems: 'immediate' and 'retained.'

A retained mode GUI is one in which the GUI system 'retains' information about your GUI: you set up your various GUI widgets - labels, buttons, sliders, text fields, etc. - and then that information is kept around and used by the system to render the screen, respond to events, and so on. When you want to change the text on a label, or move a button, you're manipulating some information which is stored somewhere, and when you've made your change the system carries on working in its new state. As the user changes values and moves sliders, the system simply stores their changes, and it's up to you to query the values or respond to callbacks. The new Unity UI system is an example of a retained mode GUI; you create your UI.Labels, UI.Buttons and so on as components, set them up, and then just let them sit there, and the new UI system will take care of the rest.

Meanwhile, an immediate mode GUI is one in which the GUI system generally does not retain information about your GUI, but instead repeatedly asks you to re-specify what your controls are, where they are, and so on. As you specify each part of the UI in the form of function calls, it is processed immediately - drawn, clicked, etc. - and the consequences of any user interaction are returned to you straight away, instead of you needing to query for them. This is inefficient for a game UI - and inconvenient for artists to work with, as everything becomes very code-dependent - but it turns out to be very handy for non-realtime situations (like Editor panels) which are heavily code-driven (like Editor panels) and want to change the displayed controls easily in response to current state (like Editor panels!). So it's a good choice for things like heavy construction equipment. No, wait. I meant: it's a good choice for Editor panels.

If you want to know more, Casey Muratori has a great video where he discusses some of the upsides and principles of an Immediate Mode GUI. Or you can just keep reading!

Whenever IMGUI code is running, there is a current 'Event' being handled - this could be something like 'user has clicked the mouse button,' or something like 'the GUI needs to be repainted.' You can find out what the current event is by checking Event.current.type.

Imagine what it might look like if you were doing a set of buttons in a window somewhere, and you had to write separate code to respond to 'user has clicked the mouse button' and 'the GUI needs to be repainted.' At a block level it might look like this:
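The original block diagram wasn't preserved here; a hypothetical sketch of that per-event structure (the class, rects, and handler names are invented for illustration) could be:

    using UnityEngine;

    public class ThreeButtons : MonoBehaviour
    {
        Rect button1Rect = new Rect(10, 10, 100, 30);
        Rect button2Rect = new Rect(10, 50, 100, 30);
        Rect button3Rect = new Rect(10, 90, 100, 30);

        void OnGUI()
        {
            // Dispatch each event type to its own handler by hand.
            switch (Event.current.type)
            {
                case EventType.MouseDown: HandleMouseClick(); break;
                case EventType.Repaint:   HandleRepaint();    break;
            }
        }

        void HandleMouseClick()
        {
            if (button1Rect.Contains(Event.current.mousePosition)) Debug.Log("button 1");
            if (button2Rect.Contains(Event.current.mousePosition)) Debug.Log("button 2");
            if (button3Rect.Contains(Event.current.mousePosition)) Debug.Log("button 3");
        }

        void HandleRepaint()
        {
            GUI.skin.button.Draw(button1Rect, new GUIContent("Button 1"), false, false, false, false);
            GUI.skin.button.Draw(button2Rect, new GUIContent("Button 2"), false, false, false, false);
            GUI.skin.button.Draw(button3Rect, new GUIContent("Button 3"), false, false, false, false);
        }
    }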
Writing these functions for each separate GUI event is kinda tedious, but you'll notice that there's a certain structural similarity between the functions. Each step of the way, we are doing something relating to the same control (button 1, button 2, or button 3). Exactly what we're doing depends on the event, but the structure is the same. What this means is that we can do this instead:
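A sketch of the same three buttons, with the library function doing the per-event work (same assumed rects as above):

    using UnityEngine;

    public class ThreeButtons : MonoBehaviour
    {
        Rect button1Rect = new Rect(10, 10, 100, 30);
        Rect button2Rect = new Rect(10, 50, 100, 30);
        Rect button3Rect = new Rect(10, 90, 100, 30);

        void OnGUI()
        {
            // GUI.Button draws on Repaint, reacts on MouseDown/MouseUp,
            // and returns true when the button has been clicked.
            if (GUI.Button(button1Rect, "Button 1")) Debug.Log("button 1");
            if (GUI.Button(button2Rect, "Button 2")) Debug.Log("button 2");
            if (GUI.Button(button3Rect, "Button 3")) Debug.Log("button 3");
        }
    }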
We have a single OnGUI function which calls library functions like GUI.Button, and those library functions do different things depending on which event we're handling. Simple!

There are 5 event types that are used most of the time:

- EventType.MouseDown: set when the user has just pressed a mouse button.
- EventType.MouseUp: set when the user has just released a mouse button.
- EventType.KeyDown: set when the user has just pressed a key.
- EventType.KeyUp: set when the user has just released a key.
- EventType.Repaint: set when IMGUI needs to redraw the screen.

That's not an exhaustive list - check the EventType documentation for more.

How might a standard control, such as GUI.Button, respond to some of these events?

- EventType.Repaint: draw the button in the provided rectangle.
- EventType.MouseDown: check whether the mouse is within the button's rectangle. If so, flag the button as being down and trigger a repaint so that it gets redrawn as pressed in.
- EventType.MouseUp: unflag the button as down and trigger a repaint, then check whether the mouse is still within the button's rectangle: if so, return true, so that the caller can respond to the button being clicked.

The reality is more complicated than this - a button also responds to keyboard events, and there is code to ensure that only the button you initially clicked on can respond to the MouseUp - but this gives you the general idea. As long as you call GUI.Button at the same point in your code for each of these events, with the same position and contents, the different behaviours will work together to provide all the functionality of a button.

To help with tying these different behaviours together under different events, IMGUI has the concept of a 'control ID.' The idea of a control ID is to give a consistent way to refer to a given control across every event type. Each distinct part of the UI that has non-trivial interactive behaviour will request a control ID; it's used to keep track of things like which control currently has keyboard focus, or to store a small amount of information associated with a control. Control IDs are simply awarded to controls in the order that they ask for them - so, again, as long as you're calling the same GUI functions in the same order under different events, they'll end up being awarded the same control IDs, and the different events will sync up.

If you want to create your own custom Editor classes, your own EditorWindow classes, or your own PropertyDrawer classes, the GUI class - as well as the EditorGUI class - provides a library of useful standard controls that you'll see used throughout Unity.

(It's a common mistake for newbie Editor coders to overlook the GUI class - but the controls in that class can be used when extending the Editor just as freely as the controls in EditorGUI. There's nothing particularly special about GUI vs. EditorGUI - they're just two libraries of controls for you to use - but the difference is that the controls in EditorGUI cannot be used in game builds, because the code for them is part of the Editor, while GUI is part of the engine itself.)

But what if you want to do something that goes beyond what's available in the standard library? Let's explore how we might create a custom user interface control. Try clicking and dragging the coloured boxes in this little demo:

(NOTE: The original WebGL application embedded here no longer works in browsers. The post noted you would need a browser with WebGL support, like then-current versions of Firefox.)

These custom sliders each drive a separate 'float' value between 0 and 1. You might want to use such a thing in the Inspector as another way of displaying, say, hull integrity for different parts of a spaceship object, where 1 represents 'no damage' and 0 represents 'totally destroyed' - having the bars represent the values as colours may make it easier to tell, at a glance, what state the ship is in. The code for building this as a custom IMGUI control that you can use like any other control is pretty easy, so let's walk through it.

The first step is to decide upon our function signature. In order to cover all the different event types, our control is going to need three things:

- a Rect, which defines where it should draw itself and where it should respond to mouse clicks;
- the current float value that the bar is representing;
- a GUIStyle, which contains any necessary information about spacing, fonts, textures, and so on that the control will need. In our case that includes the texture that we'll use to draw the bar. More on this parameter later.

It's also going to need to return the value that the user has set by dragging the bar. That's only meaningful on certain events, like mouse events, and not on things like repaint events; so by default we'll return the value that the calling code passed in. The idea is that the calling code can just do "value = MyCustomSlider(... value ...)" without caring about the event that is happening, so if we're not returning some new value set by the user, we need to preserve the value that currently stands. So the resulting signature looks like this:
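The listing itself didn't survive this capture; given the three inputs and the return value just described, it would be along these lines (parameter names assumed):

    public static float MyCustomSlider(Rect controlRect, float value, GUIStyle style)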
Now we begin implementing the function. The first step is to retrieve a control ID. We'll use this for certain things when responding to the mouse events. However, even if the event being handled isn't one we actually care about, we must still request an ID anyway, to ensure that it isn't allocated to some other control for this particular event. Remember that IMGUI just dishes out IDs in the order they're requested; if you don't ask for an ID, it'll end up being given to the next control instead, causing that control to end up with different IDs for different events, which is likely to break it. So, when requesting IDs, it's all-or-none: either you request an ID for every event type, or you never request one for any of them (which might be OK if you're creating a control that is extremely simple or non-interactive).

The FocusType.Passive being passed as a parameter tells IMGUI what role this control plays in keyboard navigation - whether it's possible for the control to be the current one reacting to keypresses. My custom slider doesn't respond to key presses at all, so it specifies Passive, but controls that respond to key presses could specify Native or Keyboard. Check the FocusType docs for more info.

Next, we do what the majority of custom controls will do at some point in their implementation: we branch depending on the event type, using a switch statement. Instead of just using Event.current.type directly, we'll use Event.current.GetTypeForControl(), passing it our control ID; this filters the event type to ensure that, for example, keyboard events are not sent to the wrong control in certain situations. It doesn't filter everything, though, so we will still need to perform some checks of our own as well.

Now we can begin implementing the specific behaviours for the different event types, starting with drawing the control. At that point you could finish up the function and you'd have a functioning 'read-only' control for visualising float values between 0 and 1. But let's continue and make the control interactive.

To implement a pleasant mouse behaviour for the control, we have a requirement: once you've clicked on the control and started to drag it, you shouldn't need to keep the mouse over the control. It's much nicer for the user to be able to just focus on where their cursor is horizontally, and not worry about vertical movement. This means that they might move the mouse over other controls while dragging, and we need those controls to ignore the mouse until the user releases the button again.

The solution is to make use of GUIUtility.hotControl. It's just a simple variable which is intended to hold the control ID of the control that has captured the mouse. IMGUI uses this value in GetTypeForControl(): when it's not 0, mouse events get filtered out unless the control ID being passed in is the hotControl. So setting and resetting hotControl is pretty simple. Note that when some other control is the hot control - i.e. GUIUtility.hotControl is something other than 0 and our own control ID - these cases simply won't be executed, because GetTypeForControl() will return 'ignore' instead of the mouseUp/mouseDown events.

Setting the hotControl is fine, but we still haven't actually done anything to change the value while the mouse is down. The simplest way to do that is to close the switch and then say that any mouse event (clicking, dragging, or releasing) that happens while we're the hotControl (and therefore in the middle of click+dragging - though not releasing, because we zeroed out the hotControl in that case above) should result in the value changing.

Two steps in that block - setting GUI.changed and calling Event.current.Use() - are particularly important, not just for making this control behave correctly, but also for making it play nice with other IMGUI controls and features; you'll see both in the sketch below. In particular, setting GUI.changed to true will allow calling code to use the EditorGUI.BeginChangeCheck() and EditorGUI.EndChangeCheck() functions to detect whether the user actually changed your control's value; but you should also avoid ever setting GUI.changed to false, because that might end up hiding the fact that a previous control had its value changed.

Lastly, we need to return a value from the function. You'll remember that we said we would return the modified float value - or the original value if nothing has changed, which most of the time will be the case.
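The post's step-by-step listings weren't preserved, so here is the whole control assembled as one minimal sketch under the assumptions above (in particular, we assume the bar texture lives in the style's normal background; the container class name is invented):

    using UnityEngine;

    public static class MyControls
    {
        public static float MyCustomSlider(Rect controlRect, float value, GUIStyle style)
        {
            // Request an ID on every event type so IDs stay in sync across events.
            int controlID = GUIUtility.GetControlID(FocusType.Passive);

            switch (Event.current.GetTypeForControl(controlID))
            {
                case EventType.Repaint:
                {
                    // Draw a bar whose width is proportional to the current value.
                    Rect filled = new Rect(controlRect.x, controlRect.y,
                                           controlRect.width * value, controlRect.height);
                    GUI.DrawTexture(filled, style.normal.background);
                    break;
                }
                case EventType.MouseDown:
                {
                    // Capture the mouse if the click landed on us and nobody else owns it.
                    if (controlRect.Contains(Event.current.mousePosition) &&
                        GUIUtility.hotControl == 0)
                        GUIUtility.hotControl = controlID;
                    break;
                }
                case EventType.MouseUp:
                {
                    // Release the mouse when our drag ends.
                    if (GUIUtility.hotControl == controlID)
                        GUIUtility.hotControl = 0;
                    break;
                }
            }

            // Any mouse event while we own the mouse moves the value to the cursor.
            if (Event.current.isMouse && GUIUtility.hotControl == controlID)
            {
                float newValue = Mathf.Clamp01(
                    (Event.current.mousePosition.x - controlRect.x) / controlRect.width);
                if (newValue != value)
                {
                    value = newValue;
                    GUI.changed = true;
                }
                Event.current.Use();
            }

            return value;
        }
    }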
And we're done: MyCustomSlider is now a simple functioning IMGUI control, ready to be used in custom Editors, PropertyDrawers, editor windows, and so on. There's still more we can do to beef it up - like supporting multi-editing - but we'll discuss that below.

There's one other particularly important, non-obvious thing about IMGUI, and that is its relation to the Scene View. You'll all be familiar with the helper UI elements that are drawn in the scene view when you go to translate, rotate, and scale objects - the orthogonal arrows, rings, and box-capped lines that you can click and drag to manipulate objects. These UI elements are called 'Handles.' What's not obvious is that Handles are powered by IMGUI as well!

After all, there's nothing inherent in what we've said about IMGUI so far that is specific to 2D or to Editors/EditorWindows. The standard controls you find in the GUI and EditorGUI classes are all 2D, certainly, but the basic concepts like EventType and control IDs don't depend on 2D at all. So while GUI and EditorGUI provide 2D controls aimed at EditorWindows and Editors for components in the Inspector, the Handles class provides 3D controls intended for use in the Scene View. Just as EditorGUI.IntField will draw a control that lets the user edit a single integer, we have functions like:

Vector3 PositionHandle(Vector3 position, Quaternion rotation);

that will allow the user to edit a Vector3 value, visually, by providing a set of interactive arrows in the Scene View. And just as before, you can define your own Handle functions to draw custom user interface elements as well; dealing with mouse interaction is a little more complex, as it's no longer enough to just check whether the mouse is inside a rectangle or not - the HandleUtility class may be of help to you there - but the basic structure and concepts are all the same.

If you provide an OnSceneGUI function in your custom editor class, you can use Handle functions there to draw into the scene view, and they'll be positioned correctly in world space as you'd expect. Bear in mind that it is also possible to use Handles in 2D contexts like custom editors, or to use GUI functions in the scene view - you just may need to do things like setting up GL matrices or calling Handles.BeginGUI() and Handles.EndGUI() to set up the context before you use them.

In the case of MyCustomSlider, there were only really two pieces of information we needed to keep track of: the current value of the slider (which was passed in by the user and returned to them) and whether the user was in the process of changing the value (which we effectively used hotControl to keep track of). But what if a control needs to keep hold of more information than that?

IMGUI provides a simple storage system for 'state objects' that are associated with a control. You define your own class for storing values, and then ask IMGUI to manage an instance of it, associated with your control's ID. You're only allowed one state object per control ID, and you don't instantiate it yourself - IMGUI does that for you, using the state object's default constructor. State objects also aren't serialised when reloading editor code - something that happens every time your code is recompiled - so you should only use them for short-lived stuff. (Note that this is true even if you mark your state objects as [Serializable] - the serializer simply doesn't visit this particular corner of the heap.)

Here's an example. Suppose we want a button which returns true whenever it's pressed down, but which also flashes red if you've been holding it down for longer than two seconds. We'll need to keep track of the time at which the button was originally pressed; we'll do this by storing it in a state object. So, here's our state object class:
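A sketch of that state object - the class, field, and method names come from the description that follows, while the exact flash-timing arithmetic is our reconstruction:

    using UnityEditor;
    using UnityEngine;

    public class FlashingButtonInfo
    {
        private double mouseDownAt;

        public void MouseDownNow()
        {
            mouseDownAt = EditorApplication.timeSinceStartup;
        }

        public bool IsFlashing(int controlID)
        {
            if (GUIUtility.hotControl != controlID)
                return false;

            double elapsed = EditorApplication.timeSinceStartup - mouseDownAt;
            if (elapsed < 2.0)
                return false;

            // After two seconds held, alternate every 0.1 seconds.
            return (int)((elapsed - 2.0) / 0.1) % 2 == 0;
        }
    }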
We'll store the time at which the mouse was pressed in 'mouseDownAt' when MouseDownNow() is called, and then use the IsFlashing function to tell us 'should the button be colored red right now?' As you can see, it will definitely not be red if it's not the hotControl or if fewer than 2 seconds have passed since it was clicked, but after that we make it change color every 0.1 seconds. Here's the code for the actual button control itself:
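A minimal sketch, living alongside MyCustomSlider in the same hypothetical MyControls class (the structure mirrors the description below; details are assumptions):

    public static bool FlashingButton(Rect rc, GUIContent content, GUIStyle style)
    {
        int controlID = GUIUtility.GetControlID(FocusType.Native);

        // Ask IMGUI for the state object tied to this control ID
        // (created on first use via the default constructor).
        var state = (FlashingButtonInfo)GUIUtility.GetStateObject(
            typeof(FlashingButtonInfo), controlID);

        switch (Event.current.GetTypeForControl(controlID))
        {
            case EventType.Repaint:
            {
                // Flash by tinting the style's draw call red.
                Color oldColor = GUI.color;
                if (state.IsFlashing(controlID))
                    GUI.color = Color.red;
                style.Draw(rc, content, controlID);
                GUI.color = oldColor;
                break;
            }
            case EventType.MouseDown:
            {
                if (rc.Contains(Event.current.mousePosition) &&
                    GUIUtility.hotControl == 0)
                {
                    GUIUtility.hotControl = controlID;
                    state.MouseDownNow();
                    Event.current.Use();
                }
                break;
            }
            case EventType.MouseUp:
            {
                if (GUIUtility.hotControl == controlID)
                {
                    GUIUtility.hotControl = 0;
                    Event.current.Use();
                }
                break;
            }
        }

        // 'Pressed down' means we currently own the mouse.
        return GUIUtility.hotControl == controlID;
    }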
Pretty straightforward - you should recognise the code in the mouseDown/mouseUp cases as being very similar to what we did for capturing the mouse in the custom slider above. The only differences are the call to state.MouseDownNow() when pressing down the mouse, and changing GUI.color in the repaint event.

The eagle-eyed amongst you might have noticed that there's one other key difference about the repaint event - that call to style.Draw(). What's up with that?

When we were building the custom slider control, we used GUI.DrawTexture to draw the bar itself. That worked OK, but our FlashingButton needs to have a caption on it, in addition to the 'rounded rectangle' image that is the button itself. We could try to arrange something with GUI.DrawTexture to draw the button image and then GUI.Label on top of that to draw the caption... but we can do better. We can use the same technique that GUI.Label uses to draw itself, and cut out the middleman.

A GUIStyle contains information about the visual properties of a GUI element - both basic things like the font or text color it should use, and more subtle layout properties like how much spacing to give it. All of this information is stored in a GUIStyle alongside functions to work out the width and height of some content using the style, and functions to actually draw the content to the screen.

In fact, a GUIStyle doesn't just take care of one look for a control: it can take care of rendering it in a bunch of situations that a GUI element might find itself in - drawing it differently when it's being hovered over, when it has keyboard focus, when it's disabled, and when it's 'active' (for example, when a button is in the middle of being pressed). You can provide the color and background image information for all of these situations, and the GUIStyle will pick the appropriate one at drawing-time based on the control ID.

There are four main ways to get hold of GUIStyles that you can use to draw your controls:

- Construct one in code (new GUIStyle()) and set up the values on it.
- Use one of the built-in styles from the EditorStyles class. If you want your custom controls to look like the built-in ones - drawing your own toolbars, Inspector-style controls, etc. - then this is the place to look.
- If you just want to create a small variation on an existing style - say, a regular button but with right-aligned text - then you can clone a style from the EditorStyles class (new GUIStyle(existingStyle)) and just change the properties you want to change.
- Retrieve them from a GUISkin.

A GUISkin is essentially a big bundle of GUIStyle objects; importantly, it can be created as an asset in your project and edited freely through the Inspector. If you create one and take a look, you'll see slots for all the standard control types - boxes, buttons, labels, toggles, and so on - but as a custom control author, direct your attention to the 'custom styles' section near the bottom. Here you can set up any number of custom GUIStyle entries, giving each one a unique name, and later retrieve them using GUISkin.GetStyle("nameOfCustomStyle"). The only missing piece of the puzzle is how to get hold of your GUISkin object from code in the first place: if you keep your skin in the 'Editor Default Resources' folder, you can use EditorGUIUtility.LoadRequired(); alternatively, you could use a method like AssetDatabase.LoadAssetAtPath() to load it from elsewhere in the project. (Just don't put your editor-only assets somewhere that packs them into asset bundles or the Resources folder by mistake!)

Armed with a GUIStyle, you can then draw a GUIContent - a mix of text, icon, and tooltip - using GUIStyle.Draw(), passing it the rectangle you're drawing into, the GUIContent you want to draw, and the control ID that should be used to figure out whether the content has things like keyboard focus.

You'll have noticed that all of the GUI controls we've discussed and written so far include a Rect parameter that determines the control's position on screen. And now that we've discussed GUIStyle, you might have paused when I said that a GUIStyle includes 'layout properties like how much spacing it needs.' You might be thinking: 'Uh oh. Does this mean we have to do a bunch of work to calculate our Rect values so that the spacing values are respected?'

That's certainly an approach which is available to us, but there's an easier way: IMGUI includes a 'layouting' mechanism which can automatically calculate appropriate Rect values for our controls, taking things like spacing into account. So how does it work?

The trick is an extra EventType value for controls to respond to: EventType.Layout. IMGUI sends the event to your GUI code, and the controls you invoke respond by calling IMGUI layout functions - GUILayoutUtility.GetRect(), GUILayout.BeginHorizontal / Vertical, and GUILayout.EndHorizontal / Vertical, amongst others - which IMGUI records, effectively building up a tree of the controls in your layout and the space they require. Once the event is finished and the tree is fully built, IMGUI does a recursive pass over the tree, calculating the actual widths and heights of elements and where they are in relation to one another, positioning successive controls next to one another, and so on.

Then, when it's time to do an EventType.Repaint event - or indeed any other kind of event - controls call the same IMGUI layout functions. Only this time, instead of recording the calls, IMGUI 'plays back' the calls it previously recorded during the Layout event, returning the rectangles it computed: having called GUILayoutUtility.GetRect() during the layout event to register that you need a rectangle, you call it again during the repaint event and it actually returns the rectangle you should use.

As with control IDs, this means you need to be consistent about the layout calls you make between Layout events and other events - otherwise you'll end up retrieving computed rectangles for the wrong controls. It also means that the values returned by GUILayoutUtility.GetRect() during a Layout event are useless, because IMGUI won't actually know the rectangle it's supposed to give you until the event has completed and the layout tree has been processed.

What does this look like for our custom slider control? We can actually write a Layout-enabled version of our control really easily, because once we've got a rectangle back from IMGUI we can just call the code we already wrote:
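A sketch of the wrapper, added to the same hypothetical MyControls class (the method name is assumed):

    public static float MyCustomSliderLayout(float value, GUIStyle style)
    {
        // On the Layout event this records our space request; on every other
        // event it returns the rectangle the layout pass computed for us.
        Rect rc = GUILayoutUtility.GetRect(GUIContent.none, style);
        return MyCustomSlider(rc, value, style);
    }

Calling code can then just write something like value = MyControls.MyCustomSliderLayout(value, someStyle); inside any GUILayout-based OnGUI.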
The call to GUILayoutUtility.GetRect will do two things: during a Layout event, it will record that we want to use the given style to draw some empty content - empty because there is no specific text or image that we need to make room for - and during other events, it will retrieve an actual rectangle for us to use. This does mean that during a Layout event we're calling MyCustomSlider with a bogus rectangle, but it doesn't matter: we still need to do it, in order to make sure that the usual calls are made to GetControlID(), and the rectangle isn't actually used for anything in there during a Layout event.

You might be wondering how IMGUI can actually work out the size of the slider, given 'empty' content and just a style. It's not a lot of information to go on - we're relying on the style having all the necessary information specified that IMGUI can use to work out the rectangle to assign. But what if we wanted to let the user control that - or, say, to use a fixed height from the style but let the user control the width? How would we do that?

The answer is in the GUILayoutOption class. Instances of this class represent directives to the layout system that a particular rectangle should be calculated in a particular way; for example, 'should have height 30' or 'should expand horizontally to fill the space' or 'must be at least 20 pixels wide.' We create them by calling factory functions in the GUILayout class - GUILayout.ExpandWidth(), GUILayout.MinHeight(), and so on - and pass them to GUILayoutUtility.GetRect() as an array. They're stored in the layout tree and taken into account when the tree is processed at the end of the Layout event.

To make it easy for the user to provide as few or as many GUILayoutOption instances as they like, without having to create and manage their own arrays, we take advantage of the C# 'params' keyword, which lets you call a method passing any number of parameters and have those parameters arrive within the method packed into an array automatically. Here's our modified slider now:
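The same wrapper, now forwarding the caller's layout directives (again a sketch under the assumptions above):

    public static float MyCustomSliderLayout(float value, GUIStyle style,
                                             params GUILayoutOption[] options)
    {
        // Whatever directives the caller supplied go straight into the layout tree.
        Rect rc = GUILayoutUtility.GetRect(GUIContent.none, style, options);
        return MyCustomSlider(rc, value, style);
    }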
As you can see, we just take whatever the user's given us and pass it onwards to GetRect.

The approach we've used here - wrapping a manually-positioned IMGUI control function in an auto-layouting version - works for pretty much any IMGUI control, including the built-in ones in the GUI class. In fact, the GUILayout class uses exactly this approach to provide auto-layouted versions of the controls in the GUI class (and we offer a corresponding EditorGUILayout class to wrap the controls in the EditorGUI class). You might want to follow this twin-class convention when building your own IMGUI controls.

It's also completely viable to mix auto-layouted and manually positioned controls. You can call GetRect to reserve a chunk of space, and then do your own calculations to divide that rectangle up into sub-rectangles that you then use to draw multiple controls; the layout system doesn't use control IDs in any way, so there's no problem with having multiple controls per layout rectangle (or even multiple layout rectangles per control). This can sometimes be much faster than using the layout system fully.

Also, note that if you're writing PropertyDrawers, you should not use the layout system; instead, you should just use the rectangle passed to your PropertyDrawer.OnGUI() override. The reason is that, under the hood, the Editor class itself does not actually use the layout system, for performance reasons; it just calculates a simple rectangle itself, moving it down for each successive property. So if you did use the layout system in your PropertyDrawer, it wouldn't have any knowledge of the properties that had been drawn before yours, and would end up positioning you on top of them. Which is not what you want!

So far, everything we've discussed would equip you to build your own IMGUI control that would work pretty smoothly. There are just a couple more things to discuss for when you really want to polish what you've built to the same level as the Unity built-in controls.

The first is the use of SerializedProperty. I don't want to go into the SerializedProperty system in too much detail in this post - we'll leave that for another time - but just to summarize quickly: a SerializedProperty 'wraps' a single variable handled by Unity's serialization (load and save) system. Every variable on every script you write that shows up in the Inspector - as well as every variable on every engine object that you see in the Inspector - can be accessed via the SerializedProperty API, at least in the Editor.

SerializedProperty is useful because it doesn't just give you access to the variable's value, but also to information like whether the variable's value is different from the value on the prefab it came from, or whether a variable with child fields (e.g. a struct) is expanded or collapsed in the Inspector. It also integrates any changes you make to the value into the Undo and scene-dirtying systems. It lets you do this without ever actually creating the managed version of your object, too, which can help performance greatly. So if we want our IMGUI controls to play nice and easy with a slew of editor functionality - undo, scene dirtying, prefab overrides, etc. - we should make sure we support SerializedProperty.

If you look through the EditorGUI methods that take a SerializedProperty as an argument, you'll see the signature is slightly different. Instead of the 'take a float, return a float' approach of our previous custom slider, SerializedProperty-enabled IMGUI controls just take a SerializedProperty instance as an argument and don't return anything. That's because any changes they need to make to the value, they apply directly to the SerializedProperty themselves. So our custom slider from before can now look like this:
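A sketch of the property-based variant, reusing the float version; we also fold in the BeginProperty/EndProperty calls recommended below, which is our reading of that pattern rather than the post's exact listing:

    using UnityEditor;
    using UnityEngine;

    public static void MyCustomSlider(Rect controlRect, SerializedProperty prop, GUIStyle style)
    {
        EditorGUI.BeginProperty(controlRect, GUIContent.none, prop);

        float newValue = MyCustomSlider(controlRect, prop.floatValue, style);
        if (newValue != prop.floatValue)
            prop.floatValue = newValue; // writes through to undo, dirtying, and prefab systems

        EditorGUI.EndProperty();
    }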
The 'value' parameter we used to have is gone, along with the return value; instead, the 'prop' parameter is there to pass in the SerializedProperty. To retrieve the current value of the property in order to draw the slider bar, we just access prop.floatValue, and when the user changes the slider position we just assign to prop.floatValue.

Having the whole SerializedProperty present in the IMGUI control code has other benefits, though. For example, consider the way that modified properties in prefab instances are shown in bold. Just check the prefabOverride property on the SerializedProperty, and if it's true, do whatever you need to do to display the control differently. Happily, if making text bold really is all you want to do, IMGUI will take care of that for you automatically, as long as you don't specify a font in your GUIStyle when you draw. (If you do specify a font in your GUIStyle, then you're going to need to take care of this yourself - by having regular and bold versions of your font and selecting between them based on prefabOverride when you draw.)

The other major feature you need is support for multi-object editing - i.e. handling things gracefully when your control needs to display multiple values simultaneously. Test for this by checking the value of EditorGUI.showMixedValue: if it's true, your control is being used to depict multiple different values simultaneously, so do whatever you need to do to indicate that.

Both the bold-on-prefabOverride and showMixedValue mechanisms require that context for the property has been set up using EditorGUI.BeginProperty() and EditorGUI.EndProperty(). The recommended pattern is this: if your control method takes a SerializedProperty as an argument, then it makes the calls to BeginProperty and EndProperty itself, while if it deals with 'raw' values - similar to, say, EditorGUI.IntField, which takes and returns ints directly and doesn't work with properties - then the calling code is responsible for calling BeginProperty and EndProperty. (This makes sense, really, because if your control is dealing with 'raw' values then it doesn't have a SerializedProperty it can pass to BeginProperty anyway.)

I hope this post has shed some light on some of the core parts of IMGUI that you'll need to understand if you want to really take your editor customisation to the next level. There's more to cover before you can be an Editor guru - the SerializedObject / SerializedProperty system, the use of CustomEditor versus EditorWindow versus PropertyDrawer, the handling of Undo, etc. - but IMGUI plays a large part in unlocking Unity's immense potential for creating custom tools, both with a view to selling on the Asset Store and with a view to empowering developers on your own teams.

Give me your questions and feedback in the comments!

>access_file_
1657|blog.unity.com

Bedroom demo: Archviz with SSRR

We also wanted to see how an architectural interior would look in Unity, and what level of visual quality we could get from our latest technology. Here is a video preview of the result we got:

Conveniently, there are some very useful online asset libraries for high-quality architectural models and scenes, and they come at very affordable prices, so we just grabbed a scene that seemed suitable for the job and quickly set it up in Unity. Most of the assets are used directly as-is, with some minor adjustments - mainly adding lightmap UVs and the occasional optimisation of the high-poly meshes.

We thought it would be nice to be able to change the colour and textures of some objects, so we added a simple interface that allows you to do that. For the additional textures of floors and wallpapers, we again used an online library.

Setting up the lighting in Unity is quite simple. We have an environment HDR cubemap for the exterior, a directional light for the sun, and a spot light in each lamp. This was rather straightforward, but it also brought up the need for lightprobe cages. We made a temporary solution of our own for this scene, and at the same time escalated the need to our R&D team; the feature is now in development to go properly into Unity. Lightprobe cages allow for transferring lighting information to large dynamic objects, or in cases where baked lightmaps cannot be used. We use them for a number of objects in our scene: the blankets on the bed, the rug on the floor, etc.

In the interface we also included the ability to move the lighting, so that it's easy to observe the effects of realtime global illumination in Unity. It makes for nice, soft, realistic lighting in the scene, which is a good idea when an interior designer or an archviz artist wants to present their work in the best possible way.

The reflective surfaces around the scene made for a good study of the behavior of our screen-space reflections. These are the settings we used:

We also use a single realtime reflection probe which updates dynamically as lighting and materials change.

The Bedroom demo was shown at Unite Boston in September this year, and was available for all visitors to interact with and try for themselves. We are now happy to ship the player publicly. You are welcome to download it for Windows (requires DX11) or OSX (requires OpenGL 4); download size: 337 MB. There is also an alternative download link. You can choose among different quality settings.

Note: We are not releasing the project, for copyright reasons, as this demo is built entirely with library assets. Here is the original scene from the Evermotion Store.

>access_file_
1658|blog.unity.com

Awesome Realtime GI on desktops and consoles

We've teamed up with Alex Lovett again and built The Courtyard, a demo that puts the Precomputed Realtime GI features in Unity 5 to good use. He previously built the Shrine Arch-viz demo. This time, however, the goal was to build a demo aimed at game developers requiring realtime frame rates. Check out this video:

Alex built the demo in about 8 weeks using Unity 5.2 off the shelf, without any additional packages from the Asset Store - everything was built from scratch.

The demo relies on Precomputed Realtime GI and realtime lighting throughout. It has a full time-of-day cycle, emissive geometry, about 100 animated spotlights that come alive at night, as well as a number of floodlights on stands and an optional player flashlight. The time-of-day cycle uses an animated skybox that is synchronized with the sun in order to capture the subtle lighting changes. In the playable demo we are now making available to you (see below), a UI has been added that allows you to control all of these lighting features in-game. Here are a few shots from the scene at different times of day:

The scene was created to be especially demanding in terms of lighting. A significant part of it is lit only by bounced light, and when the sun has set it is lit almost entirely by bounced light.

The realtime GI system works by precomputing all of the possible paths that light can take between static objects in the scene. This makes it possible to tweak the lighting in realtime, without interruption, because the system already has all the information it needs to quickly calculate the consequences of lighting changes. However, this means that static objects should not be modified, because doing so would require precomputing all the paths again. Given this, it makes sense to author levels in stages: geometry, then lighting (and repeat if necessary). Haphazardly moving static geometry around while adjusting lighting at the same time will require many lighting builds. We are working on more progressive and interactive lighting workflows for Unity 5.x; more details on this will follow in a separate blog post.

The demo was built with desktop PCs and consoles in mind - see the blog post on GI in Unity 5 covering the Transporter demo for Realtime GI use on mobile platforms.

The Realtime GI system in Unity 5 is powered by Geomerics Enlighten and is designed for use in games. All the lighting computations are performed asynchronously on CPU worker threads; because games are usually GPU bound, the extra CPU work has very little impact on the overall frame rate. Also, only the areas where the lighting has changed are recomputed.

The lighting latency in the game depends on the resolution chosen for the realtime indirect lightmaps. In this demo Alex set the resolution to be relatively low - in order to be responsive - but such that it still captures the desired lighting fidelity and subtleties in the indirectly lit areas. The indirect lightmap resolution was:

- One texel every two units (i.e. 0.5 texels per unit) in the central areas.
- One texel every 10 units in the dunes close to the central area.
- One texel every 32 units in the dunes in the outer areas.

In order to balance the resolutions, an overall baseline of 0.25 texels per unit was set on the scene. Then, multipliers were added using custom lightmap parameters to give some really nice lighting and a precompute time of just 15 minutes.
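Taking those figures at face value, the per-area multipliers over the 0.25 baseline work out to roughly 2x for the central areas (0.5 / 0.25), 0.4x for the near dunes (0.1 / 0.25), and about 0.125x for the outer dunes (0.03125 / 0.25). These multiplier values are our own arithmetic from the stated resolutions, not figures quoted from the production files.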
The following screenshots show a shaded overview of the scene, the Enlighten systems generated, the UV charting view (showing the resolution of the indirect lightmaps), the clusters (responsible for emitting bounce lighting), the bounced lighting, and the lighting directionality (used for lighting off-axis geometry and specular).

Care was taken to provide good lightmapping UVs. In some cases they were carefully authored to make sure that the models perform well for both lighting builds and the runtime. One specific instance of this is the staircase model.

Staircases can be difficult to get right, since at large texel sizes a texel can cover more than a single step. This can cause lighting levels to vary unexpectedly between the steps. On the other hand, using many texels for the steps becomes expensive in terms of performance. The staircase used in this scene also had bevels, which can really throw off the unwrapping and packing for realtime GI and generate many unnecessary charts taking up texel space. The initial staircase design looked like this in the realtime GI UV layout:

This takes up a 70x72 texel realtime lightmap. There are two problems with this layout: firstly, it uses too many texels per step (4x4); secondly, the bevels are split into separate charts that also take up a minimum of 4x4 texels each.

Why can't each chart just use 1 texel? Firstly, Enlighten is optimized to use 2x2 texel blocks when processing the textures in the runtime, so every chart must be at least 2x2 texels. Secondly, Enlighten includes a stitching feature where charts can be stitched together to allow smooth results on, for example, spheres and cylinders; this feature requires that a chart have separate directionality information at each edge. Directionality information is only stored on a per-block basis, so a stitchable chart will always need a minimum of 2x2 blocks - which becomes a minimum of 4x4 texels. Since no stitching is needed for the staircase, 2x2 texel charts suffice.

We have introduced an option for this in the Object properties of the lighting panel. The value can be either 4, which works well for stitching in a setting that uses directionality, or 2, which is more compact. Setting the minimum chart size option to 2 reduces the texel density significantly - now the model fits in a 44x46 texel realtime lightmap.

The bevels are still taking up unnecessary chart space. This is somewhat unexpected, as the bevels and steps were authored such that the bevel is part of the step in UV space. The image below shows the UV borders overlaid on the model; notice how the bevels are integrated into the steps. In the 2D view of the lightmapping UVs the bevels do not show up. This is because they have been collapsed into the step charts, so they do not have any area in UV space. This is done to prevent the lighting simulation from taking the sloped bevels into account.

>access_file_
1659|blog.unity.com

IL2CPP internals: Testing frameworks

The IL2CPP team has a strong test-first development mentality. Much of the code for IL2CPP is written using the practice of Test Driven Development (TDD), and very few pull requests are merged into the IL2CPP code without significant test coverage.

Since IL2CPP has a finite (although rather large) set of inputs - the ECMA 335 spec - the process of developing it fits nicely with TDD concepts. Most tests are written before production code, and these tests always need to fail in an expected way before the code to make them pass is written.

This process helps to drive the design of IL2CPP, but it also provides the development team with a large bank of tests which run rather quickly and exercise nearly all of the existing behavior in IL2CPP. As a development team, this test suite provides two important benefits:

1) Confidence: Most changes to refactor code in IL2CPP can be made with high confidence. If the tests pass, it is very unlikely that a regression has been introduced.
2) Troubleshooting: Since the code in IL2CPP behaves as we expect it to, bugs are almost always unimplemented sections of the code or cases we have not yet considered. By scoping down the space of possible causes of a given bug this way, we can correct bugs much more quickly.

The various types of tests that we run against the IL2CPP code base break down into a few different levels. Here are the numbers of tests we currently have at each level (I'll discuss what each type of test actually is below):

Unit tests
- C#: 472
- C++: 44

Integration tests
- C#: 1735
- IL: 173

If all of these tests are green, then we feel confident that we can ship IL2CPP at that moment. We maintain one main development branch for IL2CPP, which always tracks the leading-edge branch for development in Unity as a whole. The tests are always green on this main development branch. When they do break (which happens once in a while), someone usually fixes them within a few minutes. Since developers on our team fork this main branch for personal development often, it needs to be green at all times. The build and test status for both the main development branch and personal branches is maintained on Katana, Unity's internal build management system.

We use NUnit to run all of these tests, and we drive NUnit in one of three different ways:

- Windows: ReSharper
- OSX: Xamarin Studio
- Command line on Windows and OSX on our build machines: a custom Perl script

Types of tests

I mentioned four different types of tests above without much explanation. Each of these types of tests serves a different purpose, and they all work together to help keep IL2CPP development moving forward.

The unit tests verify the behavior of a small bit of code, typically a method. They set up a situation, execute the code under test, and finally assert some expected behavior.
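A hypothetical NUnit test in that arrange/act/assert shape - the NameMangler class and its behavior are invented stand-ins for an il2cpp.exe component, purely for illustration:

    using NUnit.Framework;

    // Invented stand-in for a real il2cpp.exe component:
    public class NameMangler
    {
        public string Mangle(string ilName)
        {
            return ilName.Replace("::", "_") + "_m3"; // toy logic for illustration
        }
    }

    [TestFixture]
    public class NameManglerTests
    {
        [Test]
        public void MethodNamesGetFlattenedWithASuffix()
        {
            var mangler = new NameMangler();                   // arrange

            string name = mangler.Mangle("HelloWorld::Start"); // act

            Assert.AreEqual("HelloWorld_Start_m3", name);      // assert
        }
    }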
The integration tests for IL2CPP actually run the il2cpp.exe utility on an assembly, compile the generated C++ code to an executable, then run the executable. Since we have a nice reference for IL2CPP behavior (the existing version of Mono used in Unity), these integration tests also run the same assembly with Mono (and .NET, on Windows). Our test runner then compares the results of the two (or three) runs, dumped to standard output, and reports any differences. So the IL2CPP integration tests don't have explicit expected values or assertions listed in the test code like the unit tests do.

C# unit tests

These are the fastest, lowest-level tests that we write. They are used to verify the behavior of many parts of il2cpp.exe, the AOT compiler utility for IL2CPP. Since il2cpp.exe is written entirely in C#, we can use fast C# unit tests to get good turn-around time for changes. All of the C# unit tests complete in a few seconds on a nice development machine.

C++ unit tests

The vast majority of the runtime code for IL2CPP (called libil2cpp) is written in C++. For the parts of that code which are not easily accessible from a public API, we use C++ unit tests. We have relatively few of these tests, as most of the behavior of the code in libil2cpp can be exercised via our larger integration test suite. These tests do require more time than you might expect unit tests to take, as they need to run il2cpp.exe itself to set up their fixture data.

C# integration tests

The largest and most comprehensive test suite for IL2CPP is the C# integration test suite. These tests are divided into smaller segments, focusing on tests that verify the behavior of icalls, code generation, p/invoke, and general behavior. Most of the tests in this suite are rather short, only about 5-10 lines long. The entire suite runs in less than one minute on most machines, and we can run it with various IL2CPP options related to things like stripping and code generation.
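A hypothetical snippet in the spirit of that suite - no assertions, just output that the runner can diff across the Mono, .NET, and IL2CPP runs (the class name and chosen cases are ours):

    using System;

    public class IntegerBehaviorTest
    {
        public static void Main()
        {
            int max = int.MaxValue;
            Console.WriteLine(max + 1);    // integer overflow wraps (unchecked by default)
            Console.WriteLine(-7 % 3);     // sign of the remainder
            Console.WriteLine(1.0 / 3.0);  // floating-point formatting
        }
    }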
IL integration tests

These tests use the same toolchain as the C# integration tests. However, instead of writing the test code in C#, we use the ILGenerator class to directly create an assembly. Although these tests can take a bit more time to write than C# tests, they offer increased flexibility. Often we run into problems with IL code that is invalid or not generated by our current Mono C# compiler; in these cases, we can often write a good test case in IL code. The tests are also beneficial for comprehensive testing of opcodes like conv.i (and the similar opcodes in its family) which have clear behavior with many slight variations. All of the IL tests complete end-to-end in less than one minute.

We run all of these tests through many variations and options on Katana. From a clean pull of the source code to completed test runs, we see about 20-30 minutes of runtime, depending on the load on the build farm.

Based on these descriptions, it might seem like our test pyramid for IL2CPP is upside down. And indeed, the end-to-end integration tests (near the top of the pyramid) make up most of our test coverage. Following TDD practice with test times of more than a few seconds can be difficult as well. We work to mitigate this by allowing individual segments of the integration test suites to run, and by doing incremental building of the C++ code generated in the test suites (this is how we are proving out some incremental building possibilities for Unity projects with IL2CPP, so stay tuned). With that, the turn-around time for an individual test is reasonable (although still not as fast as we would like).

This heavy use of integration tests was a conscious decision, though. Much of the code in IL2CPP looks different than it used to, even compared to our initial public releases in January of 2015. We have learned plenty and changed many of the implementation details in the IL2CPP code base since its inception, but we still have many of the original tests written years ago. After trying out tests at a number of different levels (including even validating the content of the generated C++ source code), we decided that these integration tests give us the best runtime-to-test-stability ratio. Seldom, if ever, do we need to modify one of the existing integration tests when something changes in the IL2CPP code. This fact gives us tremendous confidence that a code change which causes a test to fail is really a problem. It also lets us refactor and improve the IL2CPP code as much as we need to without fear.

Outside of IL2CPP itself, the IL2CPP code fits into the much larger Unity testing ecosystem. For each platform we ship supporting IL2CPP, we execute the Unity player runtime tests. These tests build up a single Unity project with more than 1000 scenes, then execute each scene and validate expected behavior via assertions. We usually don't add new tests to this suite for IL2CPP changes (those tests usually end up at a lower level). This suite serves as a check against regressions that we might introduce with IL2CPP on a given platform. It also allows us to test the code used to integrate IL2CPP into the Unity build toolchain, which again varies for each platform. A typical runtime test suite completes in about 60-90 minutes, although we often execute individual tests locally much faster.

The largest and slowest tests we use for IL2CPP are the Unity editor integration tests. Each of these tests actually runs a different instance of the Unity editor. Most of the IL2CPP editor integration tests focus on building and running a project, usually with various editor build settings. We use these tests to verify things like complex editor integration, error message reporting, and project build size (among many others). Depending on the platform, these integration test suites run in a few hours, and they are usually executed at least nightly, if not more often.

At Unity, one of our guiding principles is 'solve hard problems.' I like to think about the difficulty of problems in terms of failure: the more difficult a problem is to solve, the more failures I need to accomplish before I can find the solution. Creating a new highly-performant, highly-portable AOT compiler and virtual machine to use as a scripting backend in Unity is a difficult problem. Needless to say, we've accomplished thousands of failures along the way. There are more problems to solve, and so more failures to come. But by capturing the useful information from almost all of those failures in a comprehensive and fast test suite, we can iterate very quickly.

For the IL2CPP developers, our test suite is not so much a means to verify bug-free code (although it does catch bugs), or to help port IL2CPP to multiple platforms (it does that too); rather, it is a tool we can use to fail fast and solve hard problems, so that our users can focus on creating beautiful things.

We hope that you have enjoyed the IL2CPP Internals series of posts. We're happy to share implementation details and provide debugging and performance hints when we can. Let us know if you want to hear more about other topics related to the design and implementation of IL2CPP.

>access_file_
1660|blog.unity.com

IL2CPP internals: Garbage collector integration

As with all of the posts in this series, this post deals with implementation details that can, and likely will, change in the future. In this post we will look specifically at some internal APIs used by the runtime code to communicate with the garbage collector. These APIs are not publicly supported, and you should not attempt to call them from any code in a real project. But this is a post about internals, so let's dig in.

I won't discuss general garbage collection techniques in this post, as it is a wide and varied subject with plenty of existing research and published information. To follow along, just think of a GC as an algorithm that develops a directed graph of object references. If an object Child is used by an object Parent (via a pointer in native code), then the graph looks like this:

As the GC scans through the memory for a process, it looks for objects which don't have a parent. If it finds one, it can reuse the memory for that object for something else. Of course, most objects will have a parent of some sort, so the GC really needs to know which objects are the important parents. I like to think of these as the objects that are actually in use by your program. In GC terminology, these are called the 'roots.' Here is an example of a parent without a root:

In this case, Parent 2 does not have a root, so the GC can reuse the memory from Parent 2 and Child 2. Parent 1 and Child 1, however, do have a root, so the GC cannot reuse their memory; the program is still using them for something.

For .NET, there are three kinds of roots:

- Local variables on the stack of any thread executing managed code
- Static variables
- GCHandle objects

We'll see how IL2CPP communicates with the garbage collector about all three of these kinds of roots.

For this post, I'm using Unity 5.1.0p1 on OSX, and I'm building for the iOS platform. This will allow us to use Xcode to have a look at how IL2CPP interacts with the garbage collector. As with the other posts in this series, I'll use an example project with a single script:
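The script itself wasn't preserved in this capture; a minimal reconstruction from the names mentioned below (HelloWorld, AnotherThread, AnyClass), with the method bodies as assumptions, would be:

    using System.Threading;
    using UnityEngine;

    public class AnyClass
    {
        public int value;
    }

    public class HelloWorld : MonoBehaviour
    {
        void Start()
        {
            var thread = new Thread(AnotherThread);
            thread.Start(); // surfaces in the generated code as Thread_Start_m9
            thread.Join();
        }

        void AnotherThread()
        {
            // A local on this thread's stack - a GC root while the thread runs.
            AnyClass local = new AnyClass();
            local.value = 42;
        }
    }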
We can't see the source code at this first breakpoint, as it is compiled into the libil2cpp runtime static library, but we can see from the call stack that this thread is created in the InitializeScriptingBackend method, which executes when the player starts. We will actually see this breakpoint hit a number of times, as the player creates each managed thread used internally. For now, you can disable this breakpoint in Xcode and allow the project to continue. We should then hit the breakpoint we set earlier in the HelloWorld_Start_m3 method.

Now we are just about to start the managed thread created by our script code, so enable the breakpoint on il2cpp_gc_register_thread again. When we hit that breakpoint, the first thread is waiting to join our created thread, but the call stack for the created thread shows that we are just starting it.

When a thread is registered with the garbage collector, the GC treats all objects on the local stack for that thread as roots. Looking at the generated code for the method we run on that thread (HelloWorld_AnotherThread_m4), we can see one local variable, L_0, which the GC must treat as a root. During the (short) lifetime of this thread, this instance of the AnyClass object, and any other objects it references, cannot be reused by the garbage collector. Variables defined on the stack are the most common kind of GC roots, as most objects in a program start off from a local variable in a method executing on a managed thread.

When a thread exits, the il2cpp_gc_unregister_thread function is called to tell the GC to stop treating the objects on the thread stack as roots. The GC can then work on reusing the memory for the AnyClass object represented in native code by L_0.

Some variables don't live on thread call stacks, though. These are static variables, and they also need to be handled as roots by the garbage collector. When IL2CPP lays out the native representation of a class, it groups all of the static fields together in a C++ structure separate from the one holding the instance fields of the class. In Xcode, we can jump to the definition of the HelloWorld_t2 class and its companion HelloWorld_t2_StaticFields structure. Note that IL2CPP does not use the C++ static keyword, as it needs to be in control of the layout and allocation of the static fields to properly communicate with the GC.

When a type is first used at runtime, the libil2cpp code will initialize the type. Part of this initialization involves allocating memory for the HelloWorld_t2_StaticFields structure. This memory is allocated with a special call into the GC: il2cpp_gc_alloc_fixed (also declared in the gc-internal.h file). This call informs the garbage collector to treat the allocated memory as a root, and the GC dutifully does so for the lifetime of the process. It is possible to set a breakpoint on the il2cpp_gc_alloc_fixed function in Xcode, but it is called rather often (even for this simple project), so the breakpoint is not too useful.

Suppose that you don't want to use a static variable, but you still want a bit more control over when the garbage collector is allowed to reuse the memory for an object. This is usually helpful when you need to pass a pointer to a managed object from managed code to native code. If the native code will take ownership of that object, we need to tell the garbage collector that the native code is now a root in its object graph. This works by using a special managed object called a GCHandle. The creation of a GCHandle informs the runtime code that a given managed object should be treated as a root in the GC, so that it and any objects it references will not be reused.
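To make this ownership hand-off concrete, here is a minimal managed-side sketch of the pattern just described. GCHandle.Alloc, GCHandle.ToIntPtr, GCHandle.FromIntPtr, and GCHandle.Free are the actual .NET APIs involved; the native entry point RegisterCallbackTarget is purely hypothetical.

```csharp
using System;
using System.Runtime.InteropServices;

public static class NativeBridge
{
    // Hypothetical native function that stores the pointer for later use.
    [DllImport("__Internal")]
    static extern void RegisterCallbackTarget(IntPtr target);

    static GCHandle targetHandle;

    public static void HandOff(object target)
    {
        // Allocating the handle registers a new GC root: the object (and
        // everything it references) stays alive even if no managed code
        // holds a reference to it anymore.
        targetHandle = GCHandle.Alloc(target);
        RegisterCallbackTarget(GCHandle.ToIntPtr(targetHandle));
    }

    public static void Release()
    {
        // Freeing the handle removes the root, so a future collection
        // may reclaim the object.
        targetHandle.Free();
    }
}
```

Until Release is called, managed code can always get back to the live object: GCHandle.FromIntPtr turns the stored pointer back into a handle whose Target property is the object. This is exactly the kind of native-side ownership the garbage collector needs to be told about.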
In IL2CPP, we can see the low-level API that accomplishes this in the Contents/Frameworks/il2cpp/libil2cpp/gc/GCHandle.h file. Again, this is not a public API, but it is fun to investigate. Let's put a breakpoint on the GCHandle::New function. If we then let the project continue, we should see the breakpoint hit with our script's Start method on the call stack. Notice that the generated code for our Start method is calling GCHandle_Alloc_m11, which eventually creates a GCHandle and informs the garbage collector that we have a new root object.

We've looked at some internal API methods to see how the IL2CPP runtime interacts with the garbage collector, letting it know which objects are the roots it should preserve. Note that we have not talked at all about which garbage collector IL2CPP uses. It is currently the Boehm-Demers-Weiser GC, but we have worked hard to isolate the garbage collector behind a clean interface. We currently have plans to research integration of the open-source CoreCLR garbage collector. We don't have a firm ship date yet for this integration, but watch our public roadmap for updates.

As usual, we've just scratched the surface of the GC integration in IL2CPP. I encourage you to explore more about how IL2CPP and the GC interact. Please share your insights as well. Next time, we will wrap up the IL2CPP internals series by looking at how we test the IL2CPP code.

>access_file_