// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 73 of 85

[ 2019 ]

20 entries
1441|blog.unity.com

Mobile Champions: Singular's Susan Kuo

Mobile advertising is an incredibly high-performing marketing channel, but we often forget how new it actually is. Most attribution standards have only been established in recent years, partly thanks to companies like Singular — an MMP that started life as a mobile analytics platform.

Tapjoy recently met with Singular co-founder and COO Susan Kuo to discuss how her company successfully merged analytics and attribution into a unified solution, and what that means for today's app developers.

WHAT WE LEARNED:

- Singular transitioned from an analytics platform to a unified MMP solution in order to standardize marketing data sets across the industry.
- Growth marketing is transforming to support more women leaders than ever before.
- Singular expects — and is preparing for — a world where mobile apps are privacy-safe environments for consumers.

Hi, Susan. Can you start by telling us about yourself and your role within Singular?

I'm the COO and co-founder of Singular. In my day-to-day, I oversee business development with our partner ecosystem, which spans more than a thousand publishers, ad networks, and other marketing automation and analytics providers.

Singular started out as an analytics product, but transitioned into mobile attribution. Why was this transition important to Singular?

When we first started Singular, the mission of our platform was very simple: to give marketers a single source of truth to understand their ROI at the most granular level. But we quickly realized that in order to deliver on our mission to advertisers, we were beholden to the standards of existing mobile attribution at the time. No one in the industry was taking ownership to connect marketing channels and attribution solutions in a way that supported standardized data governance. Improving the industry standard meant that we would need to help marketers manage their data at the user level. This is why we decided to become an MMP: so that we have the foundation to build a best-in-class unified platform that provides both analytics and attribution in a single solution. Today, over 50% of the top 100 global app publishers use Singular.

What separates Singular from the rest of the market as an MMP?

The first three years of our business were dedicated to building our analytics solution. It not only connects to over a thousand ad channels, but the core IP also lays the foundation for understanding the taxonomy and hierarchies within each of these networks. Ultimately, that means we can standardize marketing data sets across these channels so that marketers can see their ROI at the most granular and accurate level.

Until 2017, when Singular started offering attribution, marketers had to purchase attribution and analytics solutions side by side from different vendors. We are the only MMP to integrate all of a marketer's conversion and install data with their overall marketing and campaign data. The result: context and color around your marketing investments that are essential for optimization and future growth.

Along with Singular, you're also one of the founders of THRIVE — a professional women's community for the growth marketing industry. What inspired you to help establish this community, and what do you think of its progress so far?

One of the things that I noticed when we started Singular five years ago was how male-dominated this industry was. Thankfully, there are more women in our industry today, in higher-powered roles than ever before. I've been very fortunate to have met some of the most amazing women in our industry. I can't count how many times, during off hours, we commented on how we would love to 'get together more' and learn from each other. So, this is really the purpose behind THRIVE.

It is a community aimed at connecting and empowering women leaders and influencers in growth marketing. The goal is to learn from each other's mistakes and accomplishments both in and out of the workplace, and, most importantly, to come together and form meaningful friendships. If you think about it, the very definition of THRIVE is growing together, and that's really the objective for this community.

We launched our first event at MAU in Las Vegas in May, and I'm incredibly pleased with how the launch went. Not only was it one of the best attended of all the adjacent events going on at MAU, but we also saw an overwhelming amount of interest from inspiring women across the industry who were keen to get involved in the community. We came out of the event with some great topics for follow-on events, and we will be looking to launch our second THRIVE event later this quarter here in the Bay Area, with more to come after!

What would you say is the biggest industry trend (or trends) within the mobile advertising space right now? Where do you see it headed in the next few years?

There are three primary trends we're watching unfold in our space:

1. Expanded appetite for marketers to test new ad channels

There is a common misconception about the mobile advertising industry that there are only a few dominant media sources to work with. And yes, a good portion of spend in the market goes to Facebook and Google. But what some marketers aren't aware of is that there is a considerable amount of good inventory and innovative ad units across other ad networks. My advice to app marketers who aren't doing this already is this: if you have the ability to scale beyond five ad networks, I definitely recommend trying it out. What we've seen in our data is that marketers who advertised on more than 5 ad networks — scaling efforts past Facebook, Google, and Instagram — had a 37% lower CPI and 60% higher installs with the exact same ad spend.

2. Growth of mobile apps that monetize through in-app advertising

We see an estimated 60% growth in mobile apps that will monetize through in-app advertising. Emerging ad-supported genres like hyper-casual are a big part of why in-app advertising revenue is set to triple from $72 billion to $201 billion. Publishers need to make sure that their UA strategy and measurement vendor have a solid solution in place to factor in-app advertising revenue into the equation, so that they can understand true ROI across their marketing channels.

3. A continued push for consumer privacy across dominant industry players

We expect dominant industry players like Apple to keep tightening down on consumer privacy. There's been quite a bit of speculation across the industry that one of the likely outcomes is the deprecation of the IDFA. At Singular, we've spent considerable time imagining and planning for a world where mobile apps and marketers would have to survive in a privacy-safe environment without a common device identifier like the IDFA on devices. We recently launched an industry working group called MAP, which stands for Mobile Attribution Privacy. MAP is focused on defining the new standards around a privacy-first mobile attribution ecosystem.

What's something that excites you about mobile marketing in 2019, whether it involves Singular or the broader ecosystem?

There has been quite a bit of buzz over the past few years about AI. While I think it's still very much in its infancy, I think we will start to see some initial advancements within our industry this year. For example, we will be looking to launch our Insights product later this year, aimed at surfacing benchmarks and predictive models that enable marketers to automatically uncover insights they currently have to dig for manually (or worse, not at all). This is an incredibly exciting move for Singular, as this has been the vision for the company since day one: to build the core foundation and standardization of data centrality and data governance. Only when this has been achieved can we harness the ability to provide insights on top of this data and become what we consider the next generation of marketing analytics providers: a true marketing intelligence platform.

Tapjoy would like to thank Susan Kuo for taking the time to join us. To learn more about how Singular helps developers grow more while paying less, be sure to download their Scaling Mobile Growth Report and learn how you can unlock breakthrough mobile growth for your mobile portfolio.

>access_file_
1444|blog.unity.com

Now available: Spaceship Demo project using VFX Graph and HDRP

Last year, at Unite LA, we released a video showcasing the brand-new Visual Effect Graph in action through a first-person game walkthrough, using Unity 2018.3. This demo was rendered using the High-Definition Render Pipeline and showcases High-Definition assets, lighting, and effects.

This year, Visual Effect Graph will come out of preview with Unity 2019.3. So that you can explore how this project was made, we upgraded the effects to take full advantage of the latest features, and released and documented the Spaceship demo project. You can download, open, and learn from it starting today, using Unity 2019.2. Watch the video of the Spaceship demo.

All effects in the Spaceship demo are made using Visual Effect Graph, from simple environmental VFX, to more complex augmented-reality and holographic UI and HUD elements, to a gorgeous Reactor Core effect. Here are some eye-candy screenshots of the environments and effects you will encounter in the demo.

The Spaceship demo features many effects during its walkthrough. All of these effects have been authored and optimized under game-production conditions with performance in mind, targeting 33.3 ms (30 fps) on PlayStation 4 (base) at 1080p. All of the effects take advantage of the many optimization settings we implemented in Visual Effect Graph and the High-Definition Render Pipeline.

Half-Resolution Translucent Rendering renders selected transparent particles at a lower resolution, improving rendering performance by roughly a factor of 4 (at the expense of a little blurriness in some rare cases). We used it mostly for big, lit particles in the foreground: because their texel-to-pixel ratio is rather low, the loss in resolution is not noticeable at all.

Octagon Particles is an optimization of quad particles that enables the corners of the particles to be cropped, where pixels are often fully transparent (an invisible cost). Particle corners are often transparent, and the overlap of these transparent areas results in unnecessary calculations. Cropping out these sections can speed up the scene by up to 25% in situations where there is lots of overdraw. There is also the benefit of reducing the resolution of the translucent sections when they can't be cropped away.

Simplified Lighting model: Simple Lit for the HD Render Pipeline lets you disable properties of the BRDF: Diffuse Lighting, Specular Lighting, Shadow and Cookie Reception, and Ambient Lighting. By selecting only the features you want to see, you can decrease the lighting computation cost to close to none. For instance, particles can be lit using only Light Probes by selecting a Simple Lit Translucent model, then disabling everything except ambient lighting. This optimization was chosen for many environment effects that did not require a lot of high-frequency lighting.

To access this project, you will first need the brand-new Unity 2019.2 Editor. You can easily install it in the Unity Hub app by clicking the Installs tab, clicking the Add button, then selecting Unity 2019.2.0f1 (or a newer version).

Then, go to the Spaceship Demo page on GitHub to get the project files. Once on this page, you can clone the project by clicking the "Clone or Download" button, then clicking "Open in Desktop" if you have the GitHub Desktop app installed. Or, you can use the address above to clone it in another git client.

If you do not wish to use git to download the project, you can go to the Releases page and select any release to download the zip file linked in the release's title ("Download Project File here"). Do not download the other Source code zip or tar.gz files: the project uses git LFS, and those automatically generated archives will not include the binary data. If you download the project from this page, please make sure that you have an updated Editor.

Once downloaded, unzip the project files into the folder of your choice. Then, you are good to go!

We also built playable binaries for the demo if you want to run it on your PC. You can download them using this link. For updated binaries, you can also go to the Releases page of the GitHub project to find newer standalone builds. The demo should run at around 30 fps at 1080p on a mid-range gaming-grade PC with the following specifications:

Processor: Intel i5 8400 / AMD Ryzen 5 2600
Graphics card: Nvidia GTX 1050 / AMD RX 560
Memory: 8GB RAM

Using Unity Hub, you can add the project in the Projects tab by using the Add button and navigating to the root of the project folder. If the Unity version is not selected, you can select it using the drop-down menu. Then you will be able to load the project by clicking the project name. Loading the project takes between 10 and 20 minutes, depending on your computer's capabilities. Once it's done, you should end up with the Editor open and displaying a Discover Spaceship Demo window.

The Discover window is designed to guide you through the project and help you find the key elements that compose the walkthrough of this 5-minute sequence. It focuses mainly on visual effects and scripted sequences. Note: you can close the window at any time and reopen it using the Help/Discover Spaceship Demo menu.

When the project opens, the window prompts you to open either the Spaceship Demo Level or the Main Menu Scene. Click the corresponding Open button, and the editor will automatically load the Scene. Then, the Discover window will go into Discovery Mode, displaying the many points of interest that you can find in this level. You can select items in the list located on the left of the window. Each of them moves the Scene View to the relevant point of view and displays useful information on the right side so you can select GameObjects and open Assets. Using this window, you will be able to find out which GameObjects do what, preview Timelines, and open the Visual Effect Graphs that compose every sequence.

Visual Effect Graph will come out of preview in 2019.3, and the Spaceship demo project will be updated after the final release to run with Unity 2019.3. Stay tuned for more project updates this fall by starring or watching the repository on GitHub. In the meantime, feel free to join us on the forums and give us feedback about your experience with the Spaceship demo!

Join us at Unite Copenhagen to learn more about VFX Graph

The next place you can meet us is at Unite Copenhagen, which takes place September 23–26. It's a one-of-a-kind opportunity for you to engage with thousands of talented creators and top developers from around the world!

Get ahead with the latest Unity features, tips, tricks, and cool reveals in dozens of tech sessions, pop-up talks, and the keynote. Meet industry leaders and make new friends at fun networking events and "Unite at Night" gatherings. Uplevel your Unity skills and make career-advancing connections in workshops, Q&A sessions, and community events.

Don't forget to follow us on Twitter and Facebook for the latest news.

>access_file_
1445|blog.unity.com

7 user acquisition strategy tips for mobile apps and games

How to acquire users for your app

Want your game to make it to the Top Charts? Today, user acquisition plays a key role in any mobile marketing strategy. As the need for discoverability in the App Store and Google Play increases, the business of generating a user base for new games has erupted. So what's the best UA strategy? And what's the best creative to acquire high-value users for your mobile app?

ironSource LevelUp sat down with mobile industry titans to gather advice that helps any developer handle their user acquisition strategy like the top game companies do. Hear from the experts behind chart-topping games like Homescapes, Cut the Rope, Angry Birds, 1010!, and Adventure Capitalist, and learn how to increase mobile user acquisition through cross-department communication, data analytics, creativity, innovation, and, most importantly, a focus on player happiness.

1. Maintain a strong relationship between marketing and monetization

Jeff Gurian, VP of Ad Monetization and Marketing at Kongregate

"At Kongregate, our [combined] monetization-marketing approach has helped give our UA team perspective on what our CPMs are and how competitive they are in the marketplace. If we're primarily buying on a CPI basis, the amount of inventory we get is dependent on the CPMs that are being driven within the apps we're buying from. That's imperative because if the UA team doesn't know what those CPMs are, they don't know how much they really need to move the needle to get more volume."

Listen to the full podcast here.

2. Your user acquisition strategy should focus on communication and creation, rather than exclusively on the data

Warren Woodward, Director of User Acquisition at Nexon

"UA is a communication discipline. While there's a tendency in user acquisition to build teams mostly with people with mathematics, data, or finance backgrounds, at the end of the day, UA is doing marketing, so you have to have creative skills within your team. Your UA team needs to be able to understand messaging: what's going to connect with people and what is going to turn them off. The UA team here at Nexon has a very complementary set of skills. We have some people with finance backgrounds, some people with analytics backgrounds, and some people with marketing backgrounds, so it all comes together. Everyone helps out with their strength, which creates a good atmosphere of mutual respect on the team."

Listen to the full podcast here.

3. Increase mobile game user engagement with storytelling, and always question the data

Carissa Gonzalez, Senior Marketing Manager at Pixelberry

"You can't do user acquisition without taking a deep dive into your numbers. One of the first things I learned early on was to 'never trust the data': if the data is good, question it; if the data is bad, question it too; if the data is out of trend, question it. No matter what, always question the data."

"One of the main things that I love about the gaming industry is the storytelling aspect of it. There are so many psychological factors that go into game design, and so many psychological triggers that developers can place in a game's UX design. There's this amazing storytelling opportunity in every part of a game, and it's incredible to see companies in the industry take the time to put value into what they're developing. In terms of UA specifically, I love that we can track all of our results. That trackability and instant response allows us to see immediately what kind of impact a feature had."

Listen to the full podcast here.

4. Approach your mobile app user acquisition strategy knowing that high-quality users are limited

Artur Grigorjan, Head of Growth at Playrix

"The second challenge resulting from the rapidly growing industry is quality user acquisition. Increasing UA costs, combined with reduced user quality and reduced retention, mean it's getting harder to find quality users at an affordable cost. Game marketers should approach user acquisition knowing that perhaps only a portion of their acquired users will be high-quality or have high LTV potential. The trick to acquiring quality users is to see which traffic sources are bringing in the highest volume of quality users, and then double down on them."

Read the full post here.

5. Mobile advertising has become essential to the in-app experience and app user engagement

Jane Anderson, Head of Ad Monetization at Zeptolab

"Across the whole industry there's been a shift to focus a lot more on data to see how it can impact gameplay. Today, game developers are much more precise in terms of estimation and analytics; they actually measure whether advertising changes something in app retention. Before that, advertising was more like an inevitable evil that everybody had to put up with in order to survive in the market. Now it's a source of product improvement, not only a source of incremental revenue. Mobile advertising and game advertising today are shifting to being part of the engagement strategy and an integral part of the in-app experience."

Listen to the full podcast here.

6. Focus on player acquisition instead of user acquisition for mobile games

Tatyana Bogatyreva, Head of UA at Gram Games

"It's not really user acquisition, it's more player acquisition, especially when you're looking at the ad-driven side of things. We're looking to engage players with the idea behind the games. Gram is making games that are very appealing and addictive and have a low barrier to entry. UA is about figuring out the right approach and allocating the limited resources of the UA team (at Gram we only have four UA people and two marketing artists), so we need to split our time wisely."

"Creatives, specifically on the CTR and IPM side of things, play a very, very important role. Playable ads are a core part of our creatives because they're representative of gameplay, and users are able to immediately understand the gameplay and their objective. Playable ads have been a key driver, as we're seeing some of the highest retention come in from the playable units on both sides of the business."

Listen to the full podcast here.

7. Mobile app engagement and acquisition strategies are built on player feedback

Nate Barker, Director of Business Development at Fluffy Fairy Games

"One of the most important things about the game industry is maintaining a tight focus on what actually makes your players happy, and not just on the metrics. Things like DAU, MAU, and ARPDAU are important, but you can make better decisions about the right way to drive your monetization or UA based on your players."

Listen to the full podcast here.

Learn more about mobile game user acquisition

Make sure you subscribe to LevelUp to continue receiving tips from trailblazers in the gaming industry and stay updated on all things related to gaming.

>access_file_
1446|blog.unity.com

Accelerate your Terrain Material painting with the 2019.2 Terrain Tools update

After receiving your feedback, we are excited to share some new improvements to Terrain Material painting. Our 2019.2 Terrain Tools package lets you paint complex Terrain with less effort and includes UI changes to help speed up your workflow. This package update enhances the experience of painting Materials on Terrain. In this update, you'll find new Brush Mask Filters, a revised Paint Texture tool, and improvements to the Terrain Toolbox workflow.

You might already be familiar with Brush Masks, which are Terrain Brushes that use a single-channel Texture to define the shape and strength of the Terrain area you're working on. To adjust the grayscale mask output, you can change other Brush features such as Radius Scale, Falloff, and Remap.

Brush Mask Filters are an exciting new addition to Terrain Brushes. These filters add operations to the Brush before the final Brush Mask output is computed. For example, if you use the Add filter with an input value of 5, the elevation of the sculpting area increases, because 5 has been added to each pixel of the Brush Mask.

There are two categories of Brush Mask Filters, with 15 filters available in this package release. The first category contains math operation filters, which are commonly seen in node-based editing tools for manipulating Textures:

Abs, Add, Clamp, Complement, Max, Min, Multiply, Negate, Noise, Power, Remap

The second category comprises Terrain-based operations:

Aspect, Concavity, Height, Slope

A Terrain-based filter might contain a series of operations that help to isolate certain Terrain features. For instance, the Height filter uses Minimum and Maximum height values to isolate a certain height range of the Terrain. The Concavity filter helps identify exposed crevices of a Terrain, and even includes an option for detecting recessed faces.

You can easily add or combine multiple filters in the Brush Mask Filters stack to achieve different results, and this new feature is available to all Terrain Brushes! Unity performs the operations in order, from top to bottom (a conceptual sketch of how a filter stack composes appears at the end of this post). By using different combinations of Brush Mask Filters, you can achieve some really interesting results, such as the Terrain Textures that appear later in this blog post. You can combine multiple Brush Mask Filters to create complex Brushes that fit your needs. Here are the steps to follow, as demonstrated in the video.

Scree: Add a Height filter to mask out the river bed and other flat areas that will be covered with water.

Sand: Adjust the Height filter to mask the river banks, and add a Slope filter to smooth out the transitions between slope changes. Then, disable the filter to touch up areas with harsh Texture transitions. You can disable filters to quickly alter the filter stack without losing data.

Grass: Adjust the Height filter to mask out all areas below the riverbank, and adjust the curve to smooth slope transitions.

Rock: Remove the Height filter, and adjust the slope curve to mask off flat areas. Use the Slope filter to target areas with steep slopes, and lightly blend them into the flatter grass areas.

Moss: Add the Complement filter, and set an input value of 1 to invert the results of the stack. Instead of targeting steep slopes, the Complement filter lets you target flat areas and shallow slopes. Then, add the Noise filter to randomize the results, and the Power filter to sharpen the areas you're painting on. Moss has a tendency to grow in areas with high moisture, and unlike grass, it can live on steeply sloped surfaces as well as areas with higher elevation.

Snow: Remove all the filters to empty the filter stack. Add a Height filter to mask off the peak of the mountain. Add an Aspect filter to mask off faces of the mountain relative to the desired direction; use it to replicate snow that has been blown by a gust of wind. Add a Concavity filter to target the valleys of the summit and replicate the accumulation areas of snow within a cirque.

The improved Paint Texture tool now includes a brand-new Terrain Layer Eyedropper tool as well as reorderable Material Layers. Hold Shift + A to enable the Terrain Layer Eyedropper tool, and then click on an area of the Terrain to pick a Material directly from it. The tool works similarly to the eyedropper tool in photo-editing software, and it greatly speeds up the process of selecting Materials while painting.

The Material you choose appears in the updated Material Layer UI, which now features a reorderable list. Reorder Material Layers in the UI to change their corresponding splat map channels. Splat maps are Material Distribution Mask Textures that Unity uses on Terrain. To delete multiple Terrain Layers at once, simply enable the checkbox next to each layer, and then hit the Remove Layer button.

The Terrain Toolbox workflow has been improved in multiple ways. You'll notice a difference when importing both Material Layers and splat maps. Instead of having to manually select a Texture from the Asset folder, you can now import Textures directly from a Terrain. When you use external content-creation software like World Machine, exported splat maps can sometimes be oriented incorrectly. Now, when you import splat maps using the Terrain Toolbox, you can preview them and adjust their orientation as needed.

On the topic of visualization, the Terrain Toolbox also has a new section for visualization tools. Use the Heatmap Altitude tool to preview the elevation of a Terrain as you sculpt it. This is great for sculpting Terrain that needs to be relatively uniform with the rest of your world. It's also useful for sculpting Terrain from a bird's-eye view and planning out elevation-specific features like lakes.

All of these new features and updates are available in the latest Terrain Tools package, which you can download from the Package Manager in Unity 2019.2.0f1 or newer. Our Terrain team is just getting started and will continue to work hard at developing new features. Please share your feedback with us in the World Building forum!
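To make the top-to-bottom evaluation order concrete, here is the conceptual sketch promised above. This is illustrative C# only, not the Terrain Tools package API; filter behaviors are simplified to per-pixel math on a single mask value.

```
using System;
using System.Collections.Generic;
using UnityEngine;

// Conceptual sketch only, not the actual Terrain Tools API.
// Each filter maps one Brush Mask pixel value to a new value;
// the stack applies them in order, top to bottom, like the UI list.
public static class BrushMaskFilterSketch
{
    public static float Evaluate(List<Func<float, float>> stack, float maskValue)
    {
        foreach (var filter in stack)
            maskValue = filter(maskValue);
        return maskValue;
    }

    public static void Example()
    {
        var stack = new List<Func<float, float>>
        {
            v => v + 5f,           // Add(5): raises the whole sculpting area
            v => Mathf.Clamp01(v), // Clamp: keep the mask in a usable range
            v => 1f - v,           // Complement(1): invert, e.g. target flats instead of slopes
        };
        Debug.Log(Evaluate(stack, 0.25f)); // 0.25 -> 5.25 -> 1 -> 0
    }
}
```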

>access_file_
1448|blog.unity.com

Custom lighting in Shader Graph: Expanding your graphs in 2019

With the release of Unity Editor 2019.1, the Shader Graph package officially came out of preview! Now, in 2019.2, we're bringing even more features and functionality to Shader Graph.

To maintain custom code inside of your Shader Graph, you can now use our new Custom Function node. This node allows you to define your own custom inputs and outputs, reorder them, and inject custom functions either directly into the node itself or by referencing an external file. Sub Graphs have also received an upgrade: you can now define your own outputs for Sub Graphs, with different types, custom names, and reorderable ports. Additionally, the Blackboard for Sub Graphs now supports all data types that the main graph supports.

Using Shader Graph to create powerful and optimized shaders just got a little easier. In 2019.2, you can now manually set the precision of calculations in your graph, either graph-wide or on a per-node basis. Our new Color Modes make it fast and easy to visualize the flow of precision or the category of nodes, or to display custom colors for your own use! See the Shader Graph documentation for more information about these new features.

To help you get started with the new custom function workflow, we've created an example project together with step-by-step instructions. Download the project from our repository and follow along! This project will show you how to use the Custom Function node to write custom lighting shaders for the Lightweight Render Pipeline (LWRP). If you want to follow along using a fresh project, make sure you're using the 2019.2 Editor and LWRP package version 6.9.1 or higher.

To get started, we need to get information from the main light in our Scene. Start by selecting Create > Shader > Unlit Graph to create a new Unlit Shader Graph. In the Create Node menu, locate the new Custom Function node, and click the gear icon on the top right to open the node menu. In this menu, you can add inputs and outputs. Add two output ports for Direction and Color, and select Vector 3 for both. If you see an "undeclared identifier" error flag, don't be worried; this will go away when we start to add our code. In the Type dropdown menu, select String. Update your function name; in this example, we're using "MainLight". Now, we can start adding our custom code in the text box.

First, we're going to use a flag called `#ifdef SHADERGRAPH_PREVIEW`. Because the preview boxes on nodes don't have access to light data, we need to tell the node what to display in the in-graph preview boxes. `#ifdef` tells the compiler to use different code in different situations. Start by defining your fallback values for the output ports. Next, we'll use `#else` to tell the compiler what to do when not in a preview. This is where we actually get our light data: use the built-in function `GetMainLight()` from the LWRP package, and use the result to assign the Direction and Color outputs. A reconstruction of the finished function body appears at the end of this step.

Now, it's a good idea to add this node to a group so that you can mark down what it's doing. Right-click the node, select Create Group from Selection, and then rename the group title to describe what your node is doing. Here we've entered "Get Main Light".

Now that we have our light data, we can calculate some shading. We're going to start with standard Lambertian lighting, so let's take the dot product of the world normal vector and the light direction. Pass it into a Saturate node, and multiply it by the light color.
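As promised above, here is what the "MainLight" custom function body plausibly looks like. The original post showed the snippet as an image, so treat this as a hedged reconstruction; the exact preview fallback values are assumptions.

```
// String-mode body of the Custom Function node (function name "MainLight").
// Outputs: Direction (Vector 3), Color (Vector 3).
#ifdef SHADERGRAPH_PREVIEW
    // Preview boxes have no light data, so fall back to fixed values.
    Direction = half3(0.5, 0.5, 0);
    Color = 1;
#else
    // Fetch the brightest directional light from the LWRP.
    Light mainLight = GetMainLight();
    Direction = mainLight.direction;
    Color = mainLight.color;
#endif
```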
Plug the result of the multiply into the Color port of the Unlit Master node, and your preview should update with some custom shading!

Since we now know how to get light data using the Custom Function node, we can expand on our function. Our next function gets attenuation values from the main light in addition to the direction and color. As this is a more complicated function, let's switch to file mode and use an HLSL include file. This lets you author more complicated functions in a proper code editor before injecting them into the graph, and it means we have one unified location to debug the code from. Start by opening the `CustomLighting` include file in the Assets > Include folder of the project. For now, we'll only focus on the `MainLight_half` function; a reconstruction of it appears at the end of this step.

This function includes some new input and output data, so let's go back to our Custom Function node and add them. Add two new outputs for DistanceAtten (distance attenuation) and ShadowAtten (shadow attenuation). Then, add the new input for WorldPos (world position). Now that we have our inputs and outputs, we can reference the include file. Change the Type dropdown to File. In the Source input, navigate to the include file, and select the Asset to reference it. Now, we need to tell the node which function to use. In the Name box, we've entered "MainLight".

You'll notice that the include file has `_half` at the end of the function name, but our Name option doesn't. This is because the Shader Graph compiler appends the precision format to each function name. Since we're defining our own function, we need the source code to tell the compiler which precision format our function uses. In the node, however, we only need to reference the main function name. You can create a duplicate of the function that uses `float` values to compile in float precision mode. The Precision Color Mode lets you easily track the precision set for each node in the graph, with blue representing float and red representing half.

We'll probably want to use this function again somewhere else, and the easiest way to make this Custom Function reusable is to wrap it in a Sub Graph. Select the node and its group, and then right-click to find Convert to Sub-graph. We've called ours "Get Main Light". In the Sub Graph, simply add the required output ports to the Sub Graph output node, and plug the node's outputs into them. Next, add a World Position node to plug into the input. Save the Sub Graph, and go back to our Unlit graph.

We're going to add two new Multiply nodes to our existing logic. First, multiply the two attenuation outputs together. Then, multiply that output by the light color. We can multiply this by NdotL from earlier to properly calculate attenuation in our basic shading.

The shader we've made is great for matte objects, but what if we want some shine? We can add our own specular calculations to our shader! For this step, we'll use another Custom Function node wrapped in a Sub Graph, called Direct Specular. Take a look at the `CustomLighting` include file again, and you'll see that we're now referencing another function from the same file. This function performs some simple specular calculations, and if you're curious, you can read more about them here. The Sub Graph for this function also includes some inputs on the Blackboard. Make sure that your new node has all the appropriate input and output ports to match the function.
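Before moving on, here is the `MainLight_half` function described above, reconstructed from the prose (the original post displayed it as an image, so details such as the fallback values are assumptions; `TransformWorldToShadowCoord` and the `Light` struct come from the LWRP shader library):

```
// Reconstruction of MainLight_half from the CustomLighting include file.
// The _half suffix tells the Shader Graph compiler this is the half-precision variant.
void MainLight_half(float3 WorldPos, out half3 Direction, out half3 Color,
                    out half DistanceAtten, out half ShadowAtten)
{
#ifdef SHADERGRAPH_PREVIEW
    // Fixed values for the in-graph preview boxes.
    Direction = half3(0.5, 0.5, 0);
    Color = 1;
    DistanceAtten = 1;
    ShadowAtten = 1;
#else
    // Sample the main light, including its shadow map, at this world position.
    float4 shadowCoord = TransformWorldToShadowCoord(WorldPos);
    Light mainLight = GetMainLight(shadowCoord);
    Direction = mainLight.direction;
    Color = mainLight.color;
    DistanceAtten = mainLight.distanceAttenuation;
    ShadowAtten = mainLight.shadowAttenuation;
#endif
}
```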
Adding properties to the Blackboard is simple: just click the Add (+) icon on the top right, and select the data type. Double-click the pill to rename the input, and drag and drop the pill to add it to the graph. Lastly, update the output port for your Sub Graph, and save it.

Now that the specular calculation is set up, we can go back to the Unlit graph and add it through the Create Node menu. Connect the Attenuation output to the Color input of the Direct Specular Sub Graph. Next, connect the Direction output from the Get Main Light function to the Direction input of the specular Sub Graph. Add the result of NdotL * Attenuation to the output of the Direct Specular Sub Graph, and plug this into the Color output. Now we've got a bit of shine!

The LWRP's main light refers to the brightest directional light relative to the object, which is usually the sun. To improve performance on lower-end hardware, the LWRP calculates the main light and any additional lights separately. To make sure our shader calculates correctly for all lights in the Scene, and not just the brightest directional light, you need to create a loop in your function. To get the additional light data, we used a new Sub Graph to wrap a new Custom Function node. Take a look at the `AdditionalLight_float` function in the `CustomLighting` include file (a reconstructed sketch appears at the end of this post).

Like before, use the `AdditionalLights` function in the file reference of the Custom Function node, and ensure that you've created all the proper inputs and outputs. Make sure to expose Specular Color and Specular Smoothness on the Blackboard of the Sub Graph in which the node is wrapped. Use the Position, Normal Vector, and View Direction nodes to plug in the World Position, World Normal, and World Space View Direction in the Sub Graph.

After you've set up the function, use it! First, take your main Unlit graph from the previous step, and collapse it into a Sub Graph: select the nodes, and right-click Convert to Sub-graph. Remove the last Add node, and plug the outputs into the output ports of the Sub Graph. We recommend that you also create input properties for Specular and Smoothness.

Now you can combine your main light calculations and your additional light calculations. In the main Unlit graph, create a new node for the Additional Light calculations to go alongside the Main Light calculations. Add the Diffuse and Specular outputs from Main Light and Additional Lights together. Pretty simple!

Now you know how to get the data from all lights in a Scene for an LWRP project, but what can you do with it? One of the most common uses for custom lighting in shaders is a classic toon shader! With all of the light data, creating a toon shader is pretty simple. First, take all the light calculations you've done so far, and wrap them in a Sub Graph one more time; this will help with readability in the final shader. Don't forget to remove the final Add node, and feed Diffuse and Specular into separate output ports on the Sub Graph output node.

There are lots of methods to create toon shading, but in this example, we'll use light intensity to look up colors from a Ramp Texture. This technique is usually called Ramp Lighting. We've included some examples of the kind of Texture Asset needed for Ramp Lighting in the sample project. You can also sample a gradient to use dynamic ramps in Ramp Lighting. The first step is to convert the intensity of Diffuse and Specular from RGB values to HSV values.
This lets us use the intensity of the light color (the V channel of HSV) to determine the brightness on the shader, and helps us sample the Texture at different spots along the horizontal axis of the Asset. Use a static value for the Y channel of the UV to determine, from top to bottom, what part of the image should be sampled. You can use this static value as an index to reference multiple lighting ramps for the project in a single Texture Asset.

Once you've set the UV values, use a Sample Texture 2D LOD node to sample the Ramp Texture. The Sample LOD is important: if we use a regular Sample Texture 2D node, the ramp is automatically mipped in a Scene, and objects further away will have different lighting behaviors. Using a Sample Texture 2D LOD node allows us to manually determine the mip level. Additionally, since the Ramp Texture is only 2 pixels high, we created our own Sampler State for the Textures. To make sure that the Texture is sampled correctly, we set the Filter to Point and the Wrap to Clamp. We exposed this as a property on the Blackboard so that you can change the settings if the Texture Asset changes.

Finally, we multiply the ramp sample from the diffuse calculations by a color property, Diffuse, so that we can change the object's colors. Add the ramp sample from the specular calculations to the Diffuse output, and plug the final color into the Master node.

This simple custom lighting setup can be expanded and applied to a wide variety of use cases in all kinds of Scenes. In our example project, we've included a full Scene configured with shaders that use our custom lighting setup. It also contains vertex animation, a simple subsurface scattering approximation, and refractions and coloring that use depth. Download the project, and check out our Example Assets to explore more advanced methods!

If you want to discuss Shader Graph, and the shaders you can make with it, come hang out in our brand-new forum space! You can also find community members and (sometimes) a few developers hanging out in the community Discord! Don't forget to keep an eye out for recordings of our SIGGRAPH 2019 sessions, where we go into even more detail about using Shader Graph for custom lighting!
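As promised earlier, here is a sketch of the additional-lights loop. It is reconstructed from the description rather than copied from the project, so treat names and exact math as assumptions; `GetAdditionalLightsCount`, `GetAdditionalLight`, `LightingLambert`, and `LightingSpecular` are helpers from the LWRP shader library:

```
// Hedged reconstruction of the additional-lights function from the
// CustomLighting include file: loop over every extra light in range
// and accumulate diffuse and specular contributions.
void AdditionalLights_float(float3 SpecColor, float Smoothness, float3 WorldPosition,
                            float3 WorldNormal, float3 WorldView,
                            out float3 Diffuse, out float3 Specular)
{
    float3 diffuseColor = 0;
    float3 specularColor = 0;

#ifndef SHADERGRAPH_PREVIEW
    Smoothness = exp2(10 * Smoothness + 1); // remap smoothness to a specular exponent
    int lightCount = GetAdditionalLightsCount();
    for (int i = 0; i < lightCount; ++i)
    {
        Light light = GetAdditionalLight(i, WorldPosition);
        // Attenuate the light color by distance and shadows before shading.
        half3 attenuatedColor = light.color * light.distanceAttenuation * light.shadowAttenuation;
        diffuseColor += LightingLambert(attenuatedColor, light.direction, WorldNormal);
        specularColor += LightingSpecular(attenuatedColor, light.direction, WorldNormal,
                                          WorldView, half4(SpecColor, 0), Smoothness);
    }
#endif

    Diffuse = diffuseColor;
    Specular = specularColor;
}
```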

>access_file_
1449|blog.unity.com

Here’s what’s in the brand-new Unity 2019.2

We have over 1,000 developers dedicated to extending and improving Unity for you. In this release, you get more than 170 new features and enhancements for artists, designers, and programmers. We've updated ProBuilder, Shader Graph, 2D Animation, Burst Compiler, UI Elements, and many more. Read on for the highlights.

Before we tell you about all the great new additions and improvements, note that we've revamped our release announcement too. You no longer have to scroll through a super-long post just to find what's most pertinent for you or your team. Here we give you just the highlights, plus handy links to dedicated webpages featuring all the update info, organized into an overview, artist and designer tools, programmer tools, graphics, and supported platforms. And before you dive in, why not start downloading 2019.2 now?

ProBuilder 4.0 ships as verified with 2019.2. It is our unique hybrid of 3D modeling and level design tools, optimized for building simple geometry but capable of detailed editing and UV unwrapping as needed.

Polybrush is now available via Package Manager as a Preview package. This versatile tool lets you sculpt complex shapes from any 3D model, position detail meshes, paint in custom lighting or coloring, and blend textures across meshes directly in the Editor.

DSPGraph is the new audio rendering/mixing system, built on top of Unity's C# Job System. It's now available as a Preview package.

We have improved how UI Elements, Unity's new UI framework, renders UI for graph-based tools such as Shader Graph, Visual Effect Graph, and Visual Scripting. These changes provide a much smoother, more responsive feel when you author complex graphs in the Editor.

To help you better organize your complex graphs, we have added subgraphs to Visual Effect Graph. You can share, combine, and reuse subgraphs for blocks and operators, and also embed complete VFX within VFX. We've also improved the integration between Visual Effect Graph and the High-Definition Render Pipeline (HDRP), which now pulls VFX Graph in by default, providing you with additional rendering features.

With Shader Graph you can now use Color Modes to highlight nodes on your graph with colors based on various features, or select your own colors to improve readability. This is especially useful in large graphs.

We've added swappable Sprites functionality to the 2D Animation tool. With this new feature, you can change a GameObject's rendered Sprites while reusing the same skeleton rig and animation clips. This lets you quickly create multiple characters using different Sprite Libraries, or customize parts of them with Sprite Resolvers. Now you can swap Sprites to create characters that are completely different visually but use the same animation rig.

The Burst Compiler came out of Preview in 2019.1. With this release, Burst Compiler 1.1 includes several improvements to JIT compilation time and some C# improvements.

For developers of mobile apps, we have introduced screen brightness controls via the new Screen.brightness property (iOS and Android) and improved the ReplayKit API (iOS).

TypeCache provides a fast way to access types or methods marked with specific attributes, as well as types derived from a specific class or interface. It utilizes an internal native cache that is built for all assemblies loaded by the Editor; see the short sketch below.
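As a quick illustration of the TypeCache API mentioned above, here is a minimal editor-only sketch. The class and menu item are hypothetical, but `TypeCache.GetTypesDerivedFrom` is the real entry point:

```
using UnityEditor; // TypeCache lives in the UnityEditor namespace
using UnityEngine;

public static class TypeCacheExample
{
    [MenuItem("Tools/Count MonoBehaviour Types")] // hypothetical menu entry
    static void CountMonoBehaviours()
    {
        // Served from the Editor's prebuilt native cache,
        // so no manual assembly scanning or reflection loops are needed.
        var types = TypeCache.GetTypesDerivedFrom<MonoBehaviour>();
        Debug.Log($"{types.Count} MonoBehaviour subclasses are loaded.");
    }
}
```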
Also for mobile apps, we've made it easier to adjust your UI by adding support for detecting the bounding box around the notch(es).

We've migrated the PhysX Cloth Library from the previous PxCloth to NvCloth as part of our transition from PhysX 3.4 to PhysX 4.x.

In this release, we updated the default editors to Visual Studio 2019 and Visual Studio 2019 for Mac. We've also started to move the Code Editor Integrations (and thus IDEs) from core to packages, and exposed our C# APIs. With this release, the Visual Studio Code and JetBrains Rider integrations are available as packages; Visual Studio will be available as a package in an upcoming release.

We've removed the old .NET 3.5 Equivalent Scripting Runtime. Any projects that use the .NET 3.5 Equivalent Scripting Runtime will be automatically updated to use the .NET 4.x Equivalent Scripting Runtime.

Incremental Garbage Collection, released as experimental on some platforms in Unity 2019.1, now supports all platforms except WebGL.

This release also includes support for the Intel® VTune™ Amplifier for the Windows Standalone Player (x86, 64-bit) and Windows Editor, including sampling profiling of C# code.

In this release, our High-Definition Render Pipeline (HDRP) includes an Arbitrary Output Variables (AOV) API, allowing you to output material properties only, lighting only, the depth buffer, and other passes from the Scene. This API is now used in the Unity Recorder, which makes it easy to export specific outputs for rendering with HDRP.

We've also added dynamic resolution, which allows you to scale the resolution at which the world is rendered, with hardware dynamic resolution support. This gives you better performance compared to software dynamic resolution.

The MatCap debug view mode replaces the material and lighting of objects with a simple environment texture. This mode is useful for navigating and getting a sense of the Scene without having to set up the Scene lighting. For example, if you are editing a dark area, like the inside of a cave, this makes it easier to navigate in low lighting.

The new Ambient Occlusion effect is a screen-space shading and rendering algorithm that improves the quality of ambient lighting simulation in your Scene, especially for small-scale details, while providing good performance. You can choose from several options to optimize for performance and quality.

There are new 2D features in the Lightweight Render Pipeline (LWRP), such as the experimental 2D Renderer, which now contains 2D Pixel Perfect and 2D Lights. The new 2D Lights enable you to easily enhance the visuals of 2D projects directly, without having to use 3D lights or custom shaders.

Shader Graph now has 2D Master nodes to create 2D Unlit and Lit Sprite Shaders. Additionally, precision modes let you set nodes to use less GPU memory, which helps increase performance on diverse platforms, including mobile.

Lightmap denoising now works on all Editor platforms, regardless of GPU manufacturer. We have also made a fundamental change in how you configure baking, giving you new possibilities for speeding up lightmap baking. We're also introducing new probe workflows. With Probe-Lit GI Contributors, you can choose whether objects that Contribute Global Illumination should receive GI from Light Probes or lightmaps. This allows Mesh Renderers to contribute to bounced lighting calculations without occupying texels in the lightmap, which can lead to huge improvements in bake times and reduced memory usage.

This release also includes major speed improvements in our GPU Lightmapper, especially during lighting iterations. New features include Multiple Importance Sampling support for environment lighting and increased sampling performance when using view prioritization or small/low-occupancy lightmaps.

The NVIDIA OptiX AI Denoiser has been upgraded for better performance and lower memory usage, and to add support for NVIDIA Turing GPUs. It is supported in the GPU Lightmapper.

Lightmapping now supports the Intel Open Image Denoise library, a machine-learning-based filter that improves your lightmapping workflow and lightmap quality by post-processing lightmaps. Noise and unwanted artifacts are removed so that you can get smooth, noise-free lightmaps that use far fewer samples.

Optimized Frame Pacing for Android, developed in partnership with Google's Android Gaming and Graphics team, provides consistent frame rates, and hence smoother gameplay, by enabling frames to be distributed with less variance.

Mobile developers will also benefit from improved OpenGL support, as we have added OpenGL multithreading support (iOS) to improve performance on low-end iOS devices that don't support Metal. We also added OpenGL support for the SRP Batcher on both iOS and Android to improve CPU performance in projects that use the Lightweight Render Pipeline (LWRP).

We have added an APK size check using Android App Bundle so you can see the final APK size of different targets for large apps.

If you are working with VR, try out HDRP, which now supports VR too. We're also introducing a revamped SDK loading and management system for your target platforms to help streamline your development workflow. The system is currently in Preview, and we're looking for users to try out the new workflow and give us feedback.

The updated AR Foundation 2.2 includes support for face tracking, 2D image tracking, 3D object tracking, and environment probes. See this recent blog post for details about AR Foundation support for ARKit 3 features. Vuforia support has migrated from Player Settings to the Package Manager, giving you access to the latest version of Vuforia Engine 8.3.

We're continuing to make the Editor leaner and more modular by converting several existing features into packages, including Unity UI, 2D Sprite Editor, and 2D Tilemap Editor. They can be easily integrated, upgraded, or removed via the Package Manager.

As with all releases, 2019.2 includes a large number of improvements and bug fixes. A special thanks goes out to our alpha and beta community for using and testing all the new tools and capabilities. Your pertinent and timely feedback helped us fix a large number of issues and finalize this release. You can find the full list of features, improvements, and fixes in the Release Notes, which are available here. You can also use the Issue Tracker to find specific ticket information.

We are happy to announce the five lucky winners of our Unity 2019.2 beta sweepstakes. They each won a Samsung Galaxy S10+, and all winners have been contacted. Stay tuned for updates about future sweepstakes and other beta news by signing up for our newsletter.

Are you curious about what's going to be in Unity 2019.3? You can get access to the alpha version now, or wait for the beta, which we expect to launch later this summer. The full release of 2019.3 is scheduled for the fall of 2019. If you're interested in knowing more about our Preview packages, check out the overview here.

Not only will you get early access to the latest new features, but you can also ensure that your project will be compatible with the new version. You can help influence the future of Unity by sharing your feedback with our R&D teams in our forums or in person. Additionally, you'll have the opportunity to be invited to Unity events, roundtables, and much more. Start by downloading our latest alpha or beta, and have a look at this guide on how to be an effective beta tester.

>access_file_
1452|blog.unity.com

Upgraded and Updated: Scripting your next game with Visual Studio and Unity

We are excited to announce a refresh of our popular beginner and intermediate scripting tutorial videos, available free on Unity Learn. We teamed up with Microsoft to bring Unity game developers tutorials that will help you get started with the fundamentals of scripting and programming using C#, Microsoft Visual Studio, and Unity 2019.

These bite-sized beginner scripting videos will take you from understanding what a script is and how to attach it to your project, through to comparing a single variable against a series of constants. All the while, you'll be learning more about one of the most popular and user-friendly integrated development environments (IDEs), Microsoft Visual Studio. The intermediate videos broaden your knowledge a step further, taking you from creating properties to using events to build a broadcast system: the building blocks you'll need for your first Unity game and beyond, as you venture further into your game development journey.

Hiding in the background of the videos, you'll see a sneak peek of another update. If you're familiar with the previous scripting videos, you'll remember the Robot Lab environment used to show effects such as blasting doors from frames, moving a robot car around, or firing projectiles from a bazooka. In our updated videos, we've upgraded to RoboLab 2.0, with a few new faces.

Have a look at the trailer for a more visual (studio) explanation, or go straight to the Unity Learn site to check out the beginner and intermediate level videos.
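"Comparing a single variable against a series of constants" is C#'s switch statement, which the beginner series builds up to. Here is a minimal Unity-flavored sketch (a hypothetical class, not taken from the videos):

```
using UnityEngine;

// Hypothetical example in the spirit of the Robot Lab videos:
// a switch statement compares one variable against a series of constants.
public class LabDoor : MonoBehaviour
{
    public int doorNumber;

    void Start()
    {
        switch (doorNumber)
        {
            case 1:
                Debug.Log("The lab door blasts off its frame.");
                break;
            case 2:
                Debug.Log("The robot car bay opens.");
                break;
            default:
                Debug.Log("This door stays locked.");
                break;
        }
    }
}
```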

>access_file_
1454|blog.unity.com

What are playable ads? Examples and how to's

The global ad blocking craze pushed the ad industry to rethink its strategies — users were fed up with intrusive, irrelevant, and boring ads. In response, we slowly saw mobile ads shift towards ad formats and creatives that both took advantage of the mobile medium, and also created a value exchange between users and advertisers.Next came rewarded video ads (or RV), which give users in-app currency in exchange for watching a video advertisement — finally, an ad format that engaged users and improved the user experience. Today, playable ads are pushing the boundaries even further.What are playable ads?Exactly what they sound like — ads that you can play. They’re one of several app advertising units available to mobile advertisers today.Simply put, playable ads are ads that offer users interactive snippets of gameplay, otherwise known as “micro-games.” There is a call to action at the end (e.g., install the app), they are completely opt-in, and are typically under one minute in length — meaning “time to magic,” as we call it at ironSource, can be as short as 15 seconds. Playable ads: Creating, examples, tutorials, and moreThe global ad blocking craze pushed the ad industry to rethink its strategies — users were fed up with intrusive, irrelevant, and boring ads. In response, we slowly saw mobile ads shift towards ad formats and creatives that both took advantage of the mobile medium, and also created a value exchange between users and advertisers.Next came rewarded video ads (or RV), which give users in-app currency in exchange for watching a video advertisement — finally, an ad format that engaged users and improved the user experience. Today, playable ads are pushing the boundaries even further.What are playable ads?Exactly what they sound like — ads that you can play. They’re one of several app advertising units available to mobile advertisers today.Simply put, playable ads are ads that offer users interactive snippets of gameplay, otherwise known as “micro-games.” There is a call to action at the end (e.g., install the app), they are completely opt-in, and are typically under one minute in length — meaning “time to magic,” as we call it at ironSource, can be as short as 15 seconds.But most importantly, users love them. They do a great job of providing an enjoyable ad experience that not only shows the user what the advertised app is really like, but also pulls them in for more and gets them fully engaged.How to create playable adsIn addition to being true to the spirit of the app, playable ads also offer users multiple opportunities and touch points for engagement — unlike any other kind of app ad unit. They’re made up of three components: tutorial, gameplay, and end card.Playable ads tutorialThe tutorial introduces the player to the mini-game and is short and to the point. In 3 seconds, the tutorial effectively immerses the user in the game and explains how to play. It should be clearly interactive with an intuitive design that doesn’t require too many taps. Above all, the user needs to know from the get-go that this is an ad they can interact with.GameplayNext is the actual gameplay. Despite being a quick and simplified version of the game, this section still gives users a proper glimpse into what playing the game would be like. In 10 to 20 seconds, users should be able to know whether or not they want to install the app and keep playing. In a game, users could play an exciting beginning level. 
In a camera app, users could snap a picture and overlay it with a funky filter.

The end card

Last is the end card. Here, the playable ad displays a clear call to action (CTA), asking the user to install the app or giving them some other steps to perform. If it's an app-install ad and they choose to install, users are redirected to the app store for download. Throughout the entire playable ad, there should always be an option to x-out, making sure users feel in control and the experience remains positive.

Why playable ads?

Lately, at ironSource we've noticed advertisers in markets across the world focus their KPIs on acquiring high-quality users, as opposed to prioritizing scale. This shift couldn't come at a better time. Now, with interactive ad units like playable ads available, advertisers are better equipped than ever to acquire high-LTV users.

Try before you buy

Playable ads work as a "try before you buy" ad unit, letting users interact with an app's main features before installing it. The users who end up installing the app after a few seconds of fun gameplay are more likely to open the app later and continue engaging with it over time, creating more high lifetime value (LTV) players. It helps that playable mobile ads are inherently enjoyable and actually fun to play with. Like the Super Bowl ads viewers look forward to watching, playable ads are an ad format users actually want to encounter.

Reduce uninstall rates, improve retention rates

App uninstall rates in the first hour can be as high as 25 percent, rising to 64 percent in the first month. But by identifying high-LTV users early in the user lifecycle, playable ads effectively reduce app uninstall rates and improve retention rates down the line — since users who install via playable ads know what to expect from the app. In fact, we've seen retention rates jump to 30-40 percent with playable ads.

Ultimately, this process helps advertisers save money, since it weeds out users who wouldn't enjoy the app — minimizing money potentially wasted on acquiring irrelevant, low-LTV users. So even if the cost per install for playable ads is higher than for a typical video or banner ad, the ROI ends up being higher too.

Maximize playable ads: Access to in-ad data

Sophisticated playable ad creators can provide advertisers with a large amount of in-ad data, giving them real-time insight into the type of users who engage with their ads. In contrast to other ad formats, which are like black boxes that don't tell us why a user liked the ad or why the user installed the app, playable ads give developers more transparency. Why? Because there are more moving parts in a playable ad, meaning there are more variables to manipulate and optimize for.

For example, in one of our playable campaigns at ironSource, we saw that too many users were losing the mini-game and therefore not installing the app. So we tweaked the difficulty level and found that, while there is a very fine line between 'medium' and 'hard,' users responded better to 'medium' because it combines the positive experience of winning with the challenge and intrigue of the game. Suddenly, engagement and install rates soared. Only an interactive ad could provide enough data to understand exactly what was wrong with the initial ad and how to optimize it for better performance.

Playable ads examples

1. Check out the playable ad ironSource built for Nexon M's Battlejack here.
2. Learn how Kwalee used ironSource playable ads to reach #2 in the top charts here.
3. View other playable ads ironSource Playworks Studio created here.

Beyond gaming

So far, playable ads have been mostly confined to the gaming world. But given how enticing and interactive they are, we can expect them to spill over into more non-gaming app categories. Gaming advertisers and publishers are often the first to adopt the latest innovative mobile ad formats — it was true for the offerwall, interstitials, and rewarded video, after all. As with those formats, once playable ads have proven their success in gaming apps, non-gaming apps will be next to hop on the bandwagon and repurpose the format for their unique needs.

>access_file_
1455|blog.unity.com

Embedding real-time 3D in your digital marketing strategy

Customer acquisition has become a formidable challenge in the digital age. Our latest webinar shares how a major auto brand stands out from the crowd by using immersive technologies to complement every stage of the customer journey.

Augmented reality (AR) and virtual reality (VR) – collectively known as extended reality (XR) – are becoming an integral part of digital marketing. There's a reason for that: as consumer buying behavior evolves and expectations for personalized, frictionless experiences increase, every brand needs to find new and compelling ways to engage prospective customers. With real-time 3D, you can tell your story in an immersive way, anywhere, on any device and platform. Auto OEMs are deploying this technology across their product lifecycle for a wide range of use cases, including changing the way customers experience and purchase their vehicles.

In our latest webinar, "Embedding XR & Real-time 3D in the Digital Marketing Strategies of Automotive Leaders," we invited Frantz Lasorne and David Castañeda, co-founders of Visionaries 777, to share how INFINITI disrupted the traditional showroom experience with AR and VR. Missed the live session? Watch it on demand now.

The 60 minutes are well worth the investment, but if you're short on time, we've covered the top takeaways from the discussion below.

During the webinar, Frantz and David shared how Visionaries designed a life-size AR experience to accompany the unveiling of the INFINITI QX50. Participants could move a touchscreen around the vehicle and learn about the inner workings of the new model at various points. Customers weren't the only ones to benefit: the experience also empowered sales teams to easily highlight the new model's innovations, rather than relying on catalogs and complex videos to get up to speed on the new features. Visionaries first created an interactive digital mockup in Unity, visualizing and simulating the experience before the physical build, which drove efficiencies and saved resources.

Real-time 3D isn't something automakers "bring out of the garage" just to make a splash; the technology delivers real results. According to Visionaries, immersive experiences generate more traffic to showrooms and event booths. At a recent trade show, one booth recorded 35% more leads as a result of an on-site real-time 3D activation. As an added benefit, these experiences are scalable and can easily be repurposed for other use cases, such as placement in dealerships.

Real-time 3D helps drive sales by demystifying design innovations and helping consumers visualize their customizations, but that's just one part of the equation. The data from these immersive, interactive experiences can be packaged into insights that add value to multiple departments throughout the organization. For instance, sales and marketing can share which configurations are most popular, helping the design team optimize the vehicle for next year's model.

Check out the webinar to see more of Visionaries' cutting-edge work with INFINITI, and learn more about the power of real-time 3D on the Unity for automotive page.

Watch the webinar

>access_file_
1458|blog.unity.com

How to use Timeline Signals

Since Timeline's introduction in 2017, we know that you've been patiently waiting for a way to send events. Well, wait no more! Starting in Unity 2019.1, a new feature called Signals lets you do exactly that. Let's dive in and see what this new feature is all about. Here's what Signals look like in the Timeline window:

We built Signals to establish a communication channel between Timeline and outside systems. But what does that mean? Why did we decide on this approach? Let's say that you are using Timeline to create a cutscene. When the cutscene ends, you want a new scene to be loaded and a physics system to be enabled. There are a couple of ways to implement this scenario:

- When your cutscene ends, use custom scripts to make your Timeline instance interact directly with a scene manager and your physics system.
- When your cutscene ends, use markers and signals to send a "Cutscene finished" signal and have interested systems react to it.

The Timeline team went with the second approach because it keeps the emitter independent from the receiver. Keeping them independent adds a lot of flexibility, power, and reusability. To understand this better, let's see how the different pieces work together. To use signals to communicate with outside systems, you need three pieces: a Signal Asset, a Signal Emitter, and a Signal Receiver.

- Signal Emitter: A Signal Emitter contains a reference to a Signal Asset. In Timeline, a Signal Emitter is represented visually by a marker. You can place a marker on a track, or in the Markers area under the Timeline ruler.
- Signal Receiver: A Signal Receiver is a component with a list of reactions. Each reaction is linked to a Signal Asset.
- Signal Asset: A Signal Asset is the association between a Signal Emitter and a Signal Receiver. You can reuse the same Signal Asset in many Timeline instances.

Here's a simple game where you defeat a bunny zombie by pressing the arrow keys that match the directional images shown on-screen. The directional image randomly changes with each musical beat. If you don't press the right arrow key before the image changes, you lose. If you press a certain number of arrow keys, you win. In the game, the GameTimeline instance includes the gameplay. It uses Signals to display new directional images with each musical beat, as shown in the finished GameTimeline instance below:

To demonstrate how to create and set up signals, let's start from a project where none of the Signals have been created. If you want to follow along, you can download the project here. First, to view the Markers area where you add Signals, click the Marker icon beside the Clip Edit modes. The Markers area appears beneath the Timeline ruler. To add a Signal Emitter, right-click in the Markers area and select Add Signal Emitter. A Signal Emitter appears in the Markers area.

The Signal Emitter you just added is selected, and the Inspector window shows its properties. The Inspector window also provides buttons for creating the other pieces of a signal. To link the Signal Emitter to a Signal Receiver, you need to add a new Signal Asset. In the Inspector window, click Create Signal…. Name the Signal Asset "ShowKey", because this Signal Asset will be used to link an emitter with a receiver to change the directional image. Click Save, and the Signal Asset is added to the project. You also want the Signal Emitter to be associated with a Signal Receiver, so click Add Signal Receiver. Your Inspector window should now look like this:

Before continuing, let's stop and describe what's going on.
When you click the Add Signal Receiver button, two things happen: a new Signal Receiver component is added to the bound GameObject, and a new reaction is added and linked to the Signal Asset that you just created. You can see they are linked because "ShowKey" appears as the Emit Signal and "ShowKey" is added to the list of reactions. The Inspector window shows two things: the Signal Emitter properties and the reactions defined by the Signal Receiver component.

Although the Signal Receiver component lives on the GameObject that is associated with the Timeline instance, you can edit the Signal Receiver in the Inspector window. The reaction is invoked when the Signal Receiver receives the signal. The last step is to specify what the reaction does. In this example, there is a component named Manager with a ShowRandomKey method. This method displays a new random arrow key. To have the reaction call the ShowRandomKey method, select this method for the Unity Event, as shown below. (A sketch of what such a Manager component could look like appears at the end of this post.)

And that's it! After adding the Signal Emitter and defining its Signal Receiver and the reaction, you now have an example of a Timeline instance that communicates with the scene. When Timeline hits the Signal Emitter, the ShowKey signal is emitted. The Signal Receiver, which is listening for the ShowKey signal, calls the ShowRandomKey method.

But you don't have to stop there. Multiple Signal Emitters can emit the same signal. For example, the following Timeline instance has the same Signal Emitter copied and moved to different times; there is an emitter for every musical beat. You can also drag a Signal Asset directly from the Project window to the Timeline window: a Signal Emitter is automatically created with the Emit Signal property already set. To see what the finished project looks like, with all Signals, download it here.

To recap, in order to set up your first signal, you need to:

1. Right-click in the Markers area, or on a track that supports signals, then choose Add Signal Emitter.
2. In the Inspector window, click Create Signal…, choose a file name, and press Enter.
3. Still in the Inspector, click Add Signal Receiver and define the reaction associated with the signal you just created.

Can I have multiple receivers on a single GameObject? Yes: all Signal Receivers on a GameObject receive the signals sent to that GameObject.

Can a Signal Asset be reused between multiple Timeline instances? Yes, a Signal Asset can be used in more than one Timeline instance.

Can a Signal Asset be extended and customized? Yes, Signal Assets, Signal Emitters, and Signal Receivers can all be extended. But that's a subject for another day; there will be another blog post about customization.

Are signals guaranteed to be emitted? Yes. Signals will not be skipped; they will be emitted on the next frame regardless of the game's FPS.

What happens if you duplicate a Timeline Asset that contains Signal Emitters? The Signal Emitters keep their references to their Signal Assets. Signal Assets are not duplicated.

---

Check out beginner and advanced learning content for using Timeline on the Unity Learn Premium platform.
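As promised above, here is a minimal sketch of what a Manager component like the one in this example could look like. The class and members are our own illustration, not the actual project code; the only real requirement is that the reaction method be public and parameterless so it can be selected in the Signal Receiver's UnityEvent dropdown:

    using UnityEngine;
    using UnityEngine.UI;

    // Hypothetical Manager component. ShowRandomKey is wired up as the
    // reaction of the Signal Receiver: when the ShowKey signal is emitted,
    // the receiver invokes it through its UnityEvent.
    public class Manager : MonoBehaviour
    {
        public Sprite[] arrowSprites; // one sprite per directional arrow key
        public Image keyDisplay;      // UI element showing the current key

        // Public and parameterless, so it appears in the Signal Receiver's
        // UnityEvent dropdown.
        public void ShowRandomKey()
        {
            keyDisplay.sprite = arrowSprites[Random.Range(0, arrowSprites.Length)];
        }
    }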

>access_file_
1459|blog.unity.com

GPU Lightmapper: A Technical Deep Dive

The Lighting Team is going all-in on iteration speed, and we designed the Progressive Lightmapper with that goal in mind: to provide quick feedback on any changes that you make to the lighting in your project. In 2018.3 we introduced a preview of the GPU version of the Progressive Lightmapper. Now we're heading towards feature and visual-quality parity with its CPU sibling, and we aim to make the GPU version an order of magnitude faster than the CPU version. This brings interactive lightmapping to artistic workflows, with great improvements to team productivity.

With this in mind, we have chosen to use RadeonRays, an open source ray tracing library from AMD. Unity and AMD have collaborated on the GPU Lightmapper to implement several key features and optimizations, namely power sampling, ray compaction, and custom BVH traversal. The design goal of the GPU Lightmapper was to offer the same features as the CPU Lightmapper while achieving higher performance:

- Unbiased interactive lightmapping
- Feature parity between CPU and GPU backends
- Compute-based solution
- Wavefront path tracing for maximum performance

We know that iteration time is the key to empowering artists to improve visual quality and unleash creativity. Interactive lightmapping is the goal here: not just impressive overall bake times, we also want the user experience to offer immediate feedback. We needed to solve a bunch of interesting problems to achieve this. In this post, we will explore some of the decisions we have made.

For the Lightmapper to offer progressive updates to the user, we needed to make some design decisions. We don't cache irradiance or visibility when doing direct lighting (direct lighting could be cached and reused for indirect lighting). In general, we don't cache any data, and we prefer computation steps that are small enough not to create stalls, providing a progressive and interactive display while baking.

Scenes can potentially be very large and contain many lightmaps. To ensure that work is spent where it offers the most benefit to the user, it is important to focus baking on the currently visible area. To do this, we first detect which of the lightmaps contains the most unconverged visible texels on screen, then we render those lightmaps and prioritize the visible texels (off-screen texels are baked once all the visible ones have converged). A texel is defined as visible if it's in the current camera frustum and isn't occluded by any static Scene geometry. We do this culling on the GPU, to take advantage of fast ray tracing. Here is the flow of a culling job. The culling jobs have two outputs:

- A culling map buffer, storing whether each texel of the lightmap is visible. This culling map buffer is then used by the rendering jobs.
- An integer representing the number of visible texels for the current lightmap. This integer is asynchronously read back by the CPU to adjust lightmap scheduling in the future.

In the video below, we can see the effect of the culling. The bake is stopped midway for demo purposes, so when the Scene view moves, we can see not-yet-baked texels (i.e., black) that aren't visible from the initial camera position and direction. For performance reasons, the visibility information is updated only when the camera state 'stabilizes'. Also, supersampling isn't taken into account.
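As an illustration of how those asynchronous readbacks could drive scheduling, here is a small hypothetical C# sketch; the names and structures are ours, not the actual implementation. The policy is simply to bake the lightmap with the most visible unconverged texels first, falling back to off-screen texels once everything visible has converged:

    using System.Collections.Generic;

    // Hypothetical sketch of view-prioritized lightmap scheduling. The real
    // implementation runs on the GPU with asynchronous CPU readbacks; this
    // only illustrates the selection policy described above.
    class LightmapState
    {
        public int Id;
        public int VisibleUnconvergedTexels; // refreshed by async culling readbacks
        public int TotalUnconvergedTexels;
    }

    static class BakeScheduler
    {
        public static LightmapState PickNext(List<LightmapState> lightmaps)
        {
            // Prefer the lightmap with the most unconverged texels on screen.
            LightmapState best = null;
            foreach (var lm in lightmaps)
            {
                if (lm.VisibleUnconvergedTexels > 0 &&
                    (best == null || lm.VisibleUnconvergedTexels > best.VisibleUnconvergedTexels))
                    best = lm;
            }
            if (best != null) return best;

            // Everything visible has converged: bake off-screen texels.
            foreach (var lm in lightmaps)
                if (lm.TotalUnconvergedTexels > 0) return lm;
            return null; // bake finished
        }
    }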
GPUs are optimized for taking huge batches of data and performing the same operation on all of it; they're optimized for throughput. What's more, the GPU achieves this acceleration while being more power- and cost-efficient than a many-core CPU. However, GPUs are not as good as CPUs in terms of latency (intentionally, by the design of the hardware). That's why we use a data-driven pipeline with no CPU-GPU sync points to get the most out of the GPU's inherently parallel computation nature. However, raw performance isn't enough. User experience is what matters, and we measure it in visual impact over time, a.k.a. convergence rate. So we also need efficient algorithms.

GPUs are meant to be used on large data sets, and they're capable of high throughput at the cost of latency. Also, they're usually driven by a queue of commands filled ahead of time by the CPU. The goal of that continuous stream of large commands is to make sure we can saturate the GPU with work. Let's look at the key recipes we use to maximize throughput, and thus raw performance. The way we approach the GPU lightmapping data pipeline is based on the following principles:

1. We prepare the data once. At this point, the CPU and GPU might be in sync in order to reduce memory allocation.

2. Once the bake has started, no CPU-GPU sync points are allowed. The CPU sends a predefined workload to the GPU. This workload will be over-conservative in some cases (for example, if the bake uses 4 bounces but all indirect rays finish after the 2nd bounce, the remaining enqueued kernels still execute, but early-out).

3. The GPU cannot spawn rays or kernels. Rather, it might be asked to process empty jobs (or very small ones). To handle those cases efficiently, kernels are written in a way that maximizes data and instruction coherency. We handle this via data compaction; more on this later.

4. We don't want any CPU-GPU sync points, nor any sort of GPU bubbles, once the bake has started. For example, some OpenCL commands can create small GPU bubbles (i.e., moments where the GPU has nothing to process), such as clEnqueueFillBuffer or clEnqueueReadBuffer (even in their asynchronous versions), so we avoid them as much as possible. Also, data processing needs to remain on the GPU for as long as possible (i.e., rendering and compositing up to completion). When we need to bring data back to the CPU for additional processing, we do so asynchronously and never send it back to the GPU again. For example, seam stitching is a CPU post-process at the moment.

5. The CPU adapts the GPU load in an asynchronous fashion. Changing the lightmap being rendered when the camera view changes, or when a lightmap is fully converged, incurs some latency. CPU threads generate and handle those readback events using a lockless queue to avoid mutex contention.

One of the key features of the GPU architecture is wide SIMD instruction support. SIMD stands for Single Instruction, Multiple Data: a set of instructions is executed sequentially in lockstep on a given amount of data inside what is called a warp/wavefront. The size of those wavefronts/warps is 64, 32, or 16 values, depending on the GPU architecture. Therefore, a single instruction applies the same transformation to multiple data (hence the name). However, for greater flexibility, the GPU is also able to support divergent code paths in its SIMD implementation. To do this, it can disable some threads while working on a subset before rejoining. This is called SIMT: Single Instruction, Multiple Threads.
However, this comes at a cost, as divergent code paths within a wavefront/warp only profit from a fraction of the SIMD unit. Read this excellent blog post for more info. Finally, a neat extension of the SIMT idea is the ability of the GPU to keep many warps/wavefronts around per SIMD core. If a wavefront/warp is waiting for slow memory access, the scheduler can switch to another wavefront/warp and continue working on that in the meantime (provided there is enough pending work). For this to really work, however, the amount of resources needed per context needs to be low, so that the occupancy (the amount of pending work) can be high. Summing up, we should aim for:

- Many threads in flight
- Avoiding divergent branches
- Good occupancy

Achieving good occupancy is all about the kernel code and is too broad a subject to be part of this blog post. Here are some great resources:

- Understanding Latency Hiding on GPUs by Vasily Volkov (NVIDIA)
- Intro to GPU Scalarization by Francesco Cifariello (Unity Technologies)

In general, the goal is to use local resources sparingly, especially vector registers and local shared memory.

Let's take a look at what the flow for baking direct lighting on the GPU could be. This section mostly covers lightmaps; however, Light Probes work in a very similar way, except that they don't have visibility or occupancy data. There are a few problems here:

- Lightmap occupancy in this example is 44% (4 occupied texels out of 9), so only 44% of the GPU threads actually produce usable work! On top of that, useful data is sparse in memory, so we pay bandwidth even for unoccupied texels. In practice, lightmap occupancy is usually between 50% and 70%, hence a huge potential gain.
- The data set is too small. The example shows a 3x3 lightmap for simplicity, but even the common case of a 512x512 lightmap is too small a data set for recent GPUs to attain top efficiency.

In an earlier section, we talked about view prioritization and the culling job. The two points above are even truer with view prioritization enabled, as some occupied texels won't be baked because they are not currently visible in the Scene view, lowering occupancy and shrinking the overall data set even more. How do we solve that? As part of the collaboration with AMD, ray compaction was added, which vastly improves both ray tracing and shading performance. In short, the idea is to create all the ray definitions in contiguous memory, allowing all the threads in a warp/wavefront to work on hot data. In practice, each ray also needs to know the index of the texel it relates to; we store this in the ray payload. We also store the global compacted ray count. Here is the flow with compaction:

Both the kernels that shade and the kernels that trace the rays can now run only on hot memory, with minimal divergence in code paths. What's next? Well, we haven't solved the fact that the data set can be too small for the GPU, especially if view prioritization is enabled. The next idea is to decorrelate the generation of rays from the gbuffer representation. With the naive approach, we only generate one ray per texel. Since we will eventually want to generate more rays anyway, we might as well generate several rays per texel up front. In this way, we can create more meaningful work for the GPU to chew on. Here is the flow:

Before compaction, we generate many rays per texel; we call this expansion. We also generate meta information that is used in the gather step to accumulate into the correct destination texel. Both the expansion and gather kernels run infrequently: in practice, we expand, then shade every light (for direct lighting) or process all bounces (for indirect), and finally gather only once.
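To make the expansion/compaction/gather flow concrete, here is a simplified CPU-side reference sketch in C#. It is our own illustration, with hypothetical names, and it only mirrors the data flow of the real OpenCL kernels: rays for occupied, visible texels are written into contiguous memory with their destination texel index as payload, and a gather step accumulates the shaded results:

    using System;
    using System.Collections.Generic;

    // Hypothetical CPU-side reference for expansion, compaction, and gather.
    // The production code runs as GPU kernels; this only mirrors the data flow.
    struct BakeRay
    {
        public UnityEngine.Vector3 Origin;
        public UnityEngine.Vector3 Direction;
        public int TexelIndex; // payload: the destination texel of this ray
    }

    static class Wavefront
    {
        // Expansion + compaction: emit raysPerTexel rays for each occupied,
        // visible texel, written contiguously so warps work on hot data only.
        public static List<BakeRay> Expand(bool[] occupiedAndVisible,
                                           int raysPerTexel,
                                           Func<int, int, BakeRay> makeRay)
        {
            var rays = new List<BakeRay>();
            for (int texel = 0; texel < occupiedAndVisible.Length; texel++)
            {
                if (!occupiedAndVisible[texel])
                    continue; // compaction: holes never reach the trace/shade kernels
                for (int sample = 0; sample < raysPerTexel; sample++)
                    rays.Add(makeRay(texel, sample));
            }
            return rays; // the count plays the role of the global compacted ray count
        }

        // Gather: accumulate each shaded sample into its destination texel.
        public static void Gather(List<BakeRay> rays, float[] radiance, float[] lightmap)
        {
            for (int i = 0; i < rays.Count; i++)
                lightmap[rays[i].TexelIndex] += radiance[i];
        }
    }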
With these techniques we achieve our goal: we generate enough work to saturate the GPU, and we spend bandwidth only on texels that matter. These are the benefits of shooting multiple rays per texel:

- The set of active rays is always a large data set, even in view prioritization mode.
- Preparation, tracing, and shading all work on very coherent data, as the expansion kernel creates rays targeting the same texel in contiguous memory.
- The expansion kernel handles occupancy and visibility, making the preparation kernel much simpler and thus faster.
- The size of the expanded/working data set buffers is decoupled from the size of the lightmap.
- The number of rays we shoot per texel can be driven by any algorithm; a natural extension is adaptive sampling.

Indirect lighting uses very similar ideas, albeit more complex ones. With indirect light we have to perform multiple bounces, and each bounce can discard random rays, so we do compaction iteratively to keep working on hot data. The heuristic we currently use favors an equal number of rays per texel; the goal is a very progressive output. However, a natural extension would be to improve this heuristic with adaptive sampling, shooting more rays where the current results are noisy. The heuristic could also aim for greater coherency, both in memory and in thread group execution, by being aware of the wavefront/warp size of the hardware.

Assets from ArchVizPRO baked with the GPU Lightmapper.

There are many use cases for transparency and translucency. A common way to handle them is to cast a ray, detect an intersection, fetch the material, and schedule a new ray if the encountered material is translucent or transparent. However, in our case the GPU cannot spawn rays, for performance reasons (please refer to the data-driven pipeline section above). Also, we can't reasonably ask the CPU to schedule enough rays in advance to cover the worst possible case, as this would be a major performance hit. Thus we went for a hybrid solution: we handle translucency and transparency differently, which lets us work around these constraints.

- Transparency (when a material is not opaque because of holes in it): the ray can either go through or bounce off the material, based on a probability distribution. The workload prepared in advance by the CPU does not need to change; we remain Scene-independent.
- Translucency (when a material filters the light that goes through it): we approximate and do not consider refraction. In other words, we let the material color the light, but not change its direction. This allows us to handle translucency while walking the BVH, meaning we can easily handle a large number of cutout materials and scale very well with translucency complexity in the Scene.

However, there is a quirk: BVH traversal is out of order. In the case of occlusion rays, this is actually fine, as we are only interested in the translucent attenuation of each intersected triangle along the ray; since multiplication is commutative, out-of-order traversal is not a problem (see the sketch below).
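As a rough CPU-side illustration of these two rules (our own sketch with made-up names; the real logic lives inside the customized RadeonRays traversal kernels):

    // Hypothetical sketch of the two transparency/translucency rules above.
    static class TranslucencySketch
    {
        // Occlusion ray: multiply the translucent attenuation of every
        // intersected triangle along the ray. Multiplication commutes, so
        // the out-of-order BVH traversal order does not matter here.
        public static UnityEngine.Color Attenuate(UnityEngine.Color light,
                                                  UnityEngine.Color[] translucentHits)
        {
            foreach (var filter in translucentHits)
                light *= filter;
            return light;
        }

        // Transparency (cutout): the ray passes through a hole with a
        // probability given by the material, so the CPU-prepared workload
        // never has to change.
        public static bool PassesThrough(float transparency, System.Random rng)
        {
            return rng.NextDouble() < transparency;
        }
    }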
For intersection rays, however, we want to be able to stop on a triangle (probabilistically, when the triangle is transparent) and to collect the translucent attenuation of each triangle from the ray origin to the hit point. As BVH traversal is out of order, the solution we have chosen is to first run only the intersection to find the hit point, marking the ray if any translucency was hit. For every marked ray, we then generate an extra occlusion ray from the intersection ray's origin to its hit point. To do this efficiently, we use compaction when generating the occlusion rays, which means you only pay the extra cost if the intersection ray was marked as needing translucency handling. All of this was possible thanks to the open source nature of RadeonRays, which was forked and customized to our needs as part of the collaboration with AMD.

We have seen what we do in regard to raw performance. However, that is only the first part of the puzzle. High samples per second are great, but what really matters in the end is the bake time. In other words, we want to get the maximum out of every ray we cast. This last statement is actually the root of decades of ongoing research. Here are some great resources:

- Ray Tracing in One Weekend
- Ray Tracing: The Next Week
- Ray Tracing: The Rest of Your Life

The Unity GPU Lightmapper is a pure diffuse lightmapper. This simplifies the interaction of light with materials a lot and also helps dampen fireflies and noise. However, there is still a lot we can do to improve the convergence rate. Here are some of the techniques that we use:

Russian roulette: At each bounce, we probabilistically kill the path based on accumulated albedo. One can find a great explanation in Eric Veach's thesis (page 67). A textbook-style sketch of the rule appears at the end of this post.

Environment Multiple Importance Sampling (MIS): HDR environments that exhibit high variance can cause a considerable amount of noise in the output, requiring huge sample counts to produce pleasing results. Therefore, we apply a combination of sampling strategies specifically tailored to evaluate the environment: we analyze it first, identify important areas, and sample accordingly. This approach, which is not exclusive to environment sampling, is generally known as multiple importance sampling and was initially proposed in Eric Veach's thesis (page 252). This was done in collaboration with Unity Labs Grenoble.

Many lights: At each bounce, we probabilistically select one direct light, and we limit the number of lights affecting surfaces with a spatial grid structure. This was done in collaboration with AMD. We are currently investigating the many-lights problem more deeply, as light selection sampling is critical to quality.

Denoising: Noise is removed with an AI denoiser trained on outputs from a path tracer. See Jesper Mortensen's Unity GDC 2019 presentation.

We have seen how a data-driven pipeline, attention to raw performance, and efficient algorithms combine to offer an interactive lightmapping experience with the GPU Lightmapper. Please note that the GPU Lightmapper is in active development and is constantly being improved. Let us know your thoughts!

The Lighting Team

PS: If you think this was a fun read, and you're interested in taking on a new challenge, we're currently looking for a Lighting Developer in Copenhagen, so get in touch!

---

Want to learn how to optimize graphics in Unity? Check out this tutorial.
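Following up on the Russian roulette item above, here is a textbook-style sketch of the rule in C#. It is our own generic illustration, not Unity's kernel code (which runs on the GPU): the continuation probability tracks the accumulated throughput, and surviving paths are reweighted so the estimate stays unbiased:

    // Textbook-style Russian roulette: probabilistically kill low-throughput
    // paths and reweight survivors so the estimator stays unbiased.
    static class RussianRouletteSketch
    {
        public static bool Continue(ref UnityEngine.Color throughput, System.Random rng)
        {
            // Survival probability driven by the accumulated albedo/throughput.
            float p = UnityEngine.Mathf.Clamp01(UnityEngine.Mathf.Max(
                throughput.r, UnityEngine.Mathf.Max(throughput.g, throughput.b)));

            if (rng.NextDouble() >= p)
                return false; // terminate the path

            throughput /= p; // reweight the surviving path
            return true;
        }
    }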

>access_file_