// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1690 transmissions indexed — page 64 of 85

[ 2021 ]

20 entries
1261|blog.unity.com

From design model to real-time 3D configurator: The canVERSE and Arksen workflow

Wide-ranging industries are deploying real-time 3D configurators, as more businesses strive to engage their customers with interactive and immersive presales and marketing activities. Unity Forma, part of the family of Unity marketing solutions, provides an efficient, cost-effective solution to meet that need.

With innovation, technology and adventure at their core, Arksen needed a new way to showcase their latest marine exploration vessel that aligned to those core values. When they partnered with a design company specializing in creating shared 3D spaces that work across desktop, mobile, augmented and virtual reality, canVERSE knew exactly where to start. canVERSE leveraged Unity Forma to create a streamlined workflow, starting with Arksen’s computer-aided design (CAD) data and resulting in a fully interactive, real-time 3D configurator of their latest vessel.

“Previously, creating a static render of our vessels would take a couple of weeks to coordinate. Now, the model feels alive as clients can interact with the Arksen 85 in real-time, viewing it from any angle while customizing options with Unity Forma.” – Eleanor Briggs, Marketing Director, Arksen

In this guest post, canVERSE’s Chief Executive Officer Barnabas Cleave, with Chief Product Officer Charlie Hasdell, steps through the configurator creation process. By integrating the 2D creative canvas of screen space with a shared 3D universe, canVERSE propels companies to reach their customers and engage more with their products.

Arksen has created a range of explorer vessels, designed to safely travel to some of the most wild and precious places on the planet. Directly from the Arksen CAD data, canVERSE was able to create a real-time 3D, fully configurable model of the Arksen 85 vessel with Unity Forma. Each Arksen is customizable to the client’s needs, so canVERSE created a workflow that can ingest multiple CAD models. The workflow consists of five stages.

It all starts with the CAD model, which is built in Rhinoceros 3D and exported as a 3DM file by the Arksen team. Using Unity’s standalone Pixyz Studio solution unlocked a wealth of tools, including CAD data conversion to tessellated triangle meshes, building UVs for texturing, and optimizing the geometry. It also allows the team to deploy numerous features to fix any technical issues, apply materials, and finesse the model in preparation to be imported into Unity Forma.

To take advantage of having a vessel’s construction in progress, canVERSE created a digital library of materials by taking photographs of the Arksen 85 and feeding them into ArtEngine. ArtEngine’s AI-powered tools then enabled the team of artists to create highly realistic materials using Unity’s High Definition Render Pipeline (HDRP). For example, the aluminium sheet pictures from the outside of the vessel were converted into a physically based rendering (PBR) asset.

Our design is continually evolving, and Unity Forma’s native tools provide a workflow that allows canVERSE to update and review changes at an incredible speed. For the first Arksen 85, the development team created the animations for the boom arms and added a three-state visibility system, while the artists built the coastal scenes. Unity Forma really comes into its own when all the final elements can be manipulated together.

Traditionally, the chief product officer would be directing each of the various developers and artists to polish the final product. This can be a lengthy process of back and forth between team members. However, with Unity Forma, a non-technical specialist can easily adjust cameras, lighting and material variants and test them with ease. This saves a great deal of time and resources internally.

Additionally, the Forma process itself is very efficient, which is essential, as the Arksen model is getting updated continuously. As soon as a new version of the Arksen model is in Unity Forma, it’s possible to update material and visibility variants in a simple drag-and-drop interface using the product configurator. As the structure of the project is preserved, the existing camera work, animations and environments are retained between product updates. Automation tools such as the product configurators and the material thumbnail generator further speed up this rework. The thumbnail generator produces new thumbnails for the Forma user interface in a quick and easy process that saves many hours of work.

Finally, Unity Forma allows all parties – canVERSE, the Arksen team, and Arksen’s client – to try different ideas and spot potential problems in real time that normally would be found much later. Hence, collaborators are able to build and review together in one cohesive design process.

“Unity Forma lets clients emotionally engage with their Arksen, as they get to see and feel a digital twin of their boat in context, rather than an abstract CAD drawing.” – Jim Mair, Technical Director, Arksen

So, Unity Forma has not only become a vital sales tool, but also part of the design feedback loop that lets the team rapidly prototype in hours instead of days. Each Unity Forma build is deployed on Furioos, Unity’s cloud-based streaming technology that makes it easy to share complex, interactive 3D applications on web browsers and embed them onto websites. This makes sure the Arksen team and their customers always get to see the latest update, all without the need for a powerful machine. Click here to access the Furioos stream.

Unity Forma allows canVERSE’s workflow to be tried and tested and ultimately to feed back into Arksen’s Rhinoceros 3D CAD model, ready for manufacture. Now canVERSE are working on several XR configurations to take the immersion to the next level. Check out how they developed the digital twin into a virtual reality experience with Varjo.

By adopting real-time 3D technologies, manufacturers are seeing significant improvements in their product lifecycle processes. Find out more about Unity Forma, or reach out to our Unity experts to try it for free.

>access_file_
1262|blog.unity.com

How to optimize game performance with Camera usage: Part 2

Welcome back to Part 2 of How to optimize your game performance with Camera usage. If you haven’t read Part 1 yet, check it out here. Now we will pick up from where we left off and dig into the profiler results!

In the Built-in Render Pipeline, the Camera.Render profiler marker measures the time spent processing each Camera on the main thread. Each Camera has its own Camera.Render marker, which we summed up for each test case. In the high load scenario, we selected a different load factor for each device to increase the total frame time without going too high. Very high loads do not provide reliable results because the cost of additional Cameras can get somewhat overshadowed by the performance variability between frames.

The trend is clear: Time spent in Camera.Render is directly affected by the number of Cameras. This is true even when adding an extra Camera that doesn’t render anything in the fourth scene.

When moving to the Universal Render Pipeline (URP), the first thing that stands out in the profiler timeline view is that many bars are blue (Scripts) instead of green (Rendering). Green bars represent time spent in the C++ side of the Unity engine’s rendering code, which is where all the rendering code lives in the Built-in Render Pipeline. URP is a scriptable render pipeline, which means that a lot of the rendering code has been moved to C# to give users much more control to customize the rendering process.

The Inl_UniversalRenderPipeline.RenderSingleCamera profiler marker measures the time spent processing each Camera on the main thread. Conveniently, each of those markers also contains the name of the Camera as a suffix. However, summing up those Camera markers as in the Built-in Render Pipeline tests does not give an accurate picture of the total performance of URP. We should also count the significant time spent in the Inl_Context.Submit markers. This is time spent on the creation of draw command buffers, which is included in the Built-in Render Pipeline’s Camera.Render markers. To make things easier, we chose the RenderPipelineManager.DoRenderLoop_Internal marker, which encompasses all this. For consistency, we used the same high load factors as in the Built-in Render Pipeline scenarios.

Again, the trend is clear: Time spent in rendering code is directly affected by the number of Cameras. As in the Built-in Render Pipeline tests, this holds true even when adding an extra Camera that doesn’t render anything in the fourth scene.

At this point, if you closely compared the performance characteristics of the Built-in Render Pipeline and URP, you might have noticed some strange results. You would be right. In the high load tests, for example, URP is much more efficient than the Built-in Render Pipeline on the Galaxy S7 Edge, but not on the iPhone models we tested. To keep this post to a manageable length and keep the focus on the primary subject, we will investigate this in a separate blog post.

Let’s examine some multiple Camera usage scenarios we’ve seen in the wild and discuss their alternatives.

Having a giant canvas in the middle of the scene in the Scene view can be distracting. Some users fix this problem with a separate UI Camera positioned further away which renders canvases set to “Screen Space - Camera”. Since Unity 2019, you can instead toggle child GameObject visibility in the Hierarchy window to hide distracting canvases. The Layers drop-down menu in Unity’s Toolbar can also be used to achieve this.

Some users rely on Cameras to order their canvases. This is not the right tool for the job. Instead, use the Canvas’ Sort Order or Plane Distance. However, also be aware that nested canvases have an “Override Sorting” option which must be taken into account.

Another case we’ve seen is using separate Cameras that render different parts of the game UI with culling masks for the sake of toggling the visibility of UI screens. The correct way to do this is to instead toggle either the activation of GameObjects or the enable flag of Canvas components (a minimal sketch appears at the end of this entry).

One last example that doesn’t involve UI is using multiple Cameras to switch between viewpoints. The worst situation then arises when all those Cameras are enabled and the Camera rendering order (i.e., the Depth property) is used to control which one is visible. In that case, all Cameras are rendered in full, one over the other, which is very costly. Disabling unused Cameras removes this cost. However, we argue that it’s best to use a single Camera and always position it at the currently active viewpoint. This makes it impossible to have multiple active Cameras by mistake and simplifies the Camera management process.

While you should avoid the unnecessary use of multiple Cameras, there are times when this is the best or even the only solution. In general, multiple Cameras are likely the right choice if you need more than one of any of the following:

Camera outputs: This includes the rendering surface (Display, RenderTexture) and the viewport rectangle.

Resolutions: Only a single resolution can be used for the output of a Camera. Intermediary results inside the rendering pipeline used by a Camera can, however, be rendered at arbitrary resolutions and used to produce the Camera’s output. An example of this is HDRP’s Low-Resolution Transparent pass.

Field of view, position, and orientation: These parameters directly define the culling frustum. An exception to this is XR, where Unity uses a couple of tricks for the two eyes, which are Cameras very close to each other.

Here are some examples of valid use cases for multiple Cameras.

A common practice to improve GPU performance on newer (mobile) devices with very high display resolutions is to render the scene in a lower resolution and upscale it to the final resolution. In this scenario, many games want to render at least some parts of their UI at the native resolution over the upscaled scene to get sharper UI sprites and images. This type of rendering configuration requires a separate Camera because two different resolutions are used.

Multiple sub-displays with different Camera positions or resolutions also require multiple Cameras. An example of this is a split-screen game where each player can move their viewpoint independently.

A dynamic billboard showing a part of the scene from a second viewpoint requires a RenderTexture as its own texture. A separate Camera is necessary to generate the content of this RenderTexture.

In this post series, we measured the cost of additional Cameras in Unity’s Built-in Render Pipeline and Universal Render Pipeline. The results clearly show that unnecessary Cameras in a scene have a cost that can easily be avoided for a nice performance gain.

On a closing note, you might wonder why even a Camera that doesn’t render anything can have such a large performance impact. The first main reason is that it simply takes a good amount of work for Unity to figure out that the Camera does not, in fact, contain anything. The second reason is, to put it bluntly, that Unity is not optimized for sub-optimal Camera setups. Optimizing the engine for these would make well-configured games slower and also probably use more memory, which is undesirable.

Want to learn more about the Accelerate Solutions Games team and how they can help you elevate your game? Visit our homepage, or reach out to a Unity sales representative to find out how we can help accelerate your next project. If you like our Accelerate Success content series, don’t miss out on this recording of our latest Accelerate Success webinar: The Unity UI makeover, delivered by two of our team leads, Andrea and Sebastian. In this demo, Andrea and Sebastian take a poorly designed UI and give tips and best practices on how to improve the UI so your game runs faster and more efficiently.
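As a minimal sketch of the Canvas-toggling advice above (the component and field names here are hypothetical, not from the original post):

```csharp
using UnityEngine;

// Toggling UI screens by enabling/disabling Canvas components,
// rather than rendering each screen with its own Camera.
public class UIScreenSwitcher : MonoBehaviour
{
    [SerializeField] Canvas inventoryScreen;
    [SerializeField] Canvas settingsScreen;

    public void ShowInventory()
    {
        // Disabling a Canvas skips its rendering entirely, at no extra
        // Camera cost; the GameObjects underneath stay active.
        inventoryScreen.enabled = true;
        settingsScreen.enabled = false;
    }
}
```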

>access_file_
1263|blog.unity.com

Unity for Humanity Summit highlights: Imagining a better world with RT3D

Today, we hosted the second annual Unity for Humanity Summit, bringing together innovative changemakers using their creativity to imagine a brighter future. This year’s event was jam-packed, featuring more than 80 creators and 100 projects across speaking sessions, networking events, and a changemakers showcase, and our core focus areas of environment and sustainability, education and inclusive economic opportunity, and digital health and wellbeing.

If you registered for the Summit, you can sign in with your credentials to watch the session recordings through November 7. If you didn’t register, you can view recordings on Unity’s YouTube channel beginning November 1. Read on to learn about event highlights and gain some insight into what’s next for Unity Social Impact.

The Unity for Humanity program is designed to uplift and empower social impact creators. During today’s Keynote, we announced the Unity for Humanity 2022 Grant and introduced the Imagine Grant, created in partnership with award-winning artist, actor, and activist Common. The grant theme is inspired by Common’s latest single, “Imagine.” The grant will be awarded to the project that best ‘imagines a better world.’

The Unity for Humanity Grant and the Imagine Grant are open for applications through December 3, 2021. We’re awarding $500K USD in total across the grants. While a single project cannot receive both the Imagine Grant and a Unity for Humanity 2022 Grant, you can apply for both via the same application.

All grant applications will be evaluated using the following criteria:

Vision: Does the project reflect a strong sense of compassion for humanity? Does the project demonstrate clarity of vision and express a unique point of view?

Inclusion: Are the project and team demographically diverse?* Does the project have a natural connection to the community and audience it represents and/or serves?

Impact: Does the project have clear social impact goals or a call to action in alignment with the 17 UN Sustainable Development Goals? Does the application include an impact plan?

Viability: Are the production, financial, and impact milestones achievable? Is the project realistic in scope?

In addition to the general Unity for Humanity Grant evaluation criteria, the Imagine Grant recipient will be selected using the following additional criteria:

Positive future of humanity: Does the project provide a strong picture of humanity’s future that is inclusive, transformative, and/or positive?

Based on real-world issues: While imagining a better future, is the project based on real-world issues? Examples include climate change, human rights infringements, economic disparity and lack of opportunity, etc.

Inspiration for change: Does the project have the potential to inspire audiences to make positive changes and take action?

Imagination: Does the project demonstrate uniqueness, depth, and/or imagination in terms of its story and approach to depicting a better world?

Motivation: Does the team articulate a strong motivation for creating the future world represented in their application?

The Imagine Grant is just one component of our partnership with Common. We have also granted funding to the Art in Motion (AIM) school in Chicago, of which Common is a partner. AIM provides personalized learning and immersive arts education to middle and high school-aged students. This grant will enable more students to use real-time 3D technology to tell their stories and create change, which is a central and long-standing pillar of Unity’s social impact work.

If you’re interested in learning or teaching Unity, check out Unity Learn for free courses, guided certifications, and more. Educators and nonprofit organizations can apply for an Education Grant license to access tools and resources for teaching students how to create in 2D, 3D, AR, and VR. And students can get started creating with the Student plan, which provides access to specialized curricula, assets, and product licenses.

Beyond our commitment to supporting changemakers, enhancing education, and encouraging inclusive economic opportunity, we also recognize our role as global citizens and the need for decisive action on sustainability. With this in mind, we announced today that Unity is net zero. This means that we’re immediately neutralizing our greenhouse gas emissions by purchasing high-quality offsets. We set our science-based target after conducting our 2020 greenhouse gas (GHG) emissions baseline inventory and learning that we emit 38,400 metric tonnes of carbon annually – as much as 8,400 passenger vehicles driven for a year. We are employing a 3-step approach to reach our net zero goal:

Offsetting: We’re achieving net zero carbon emissions today through carbon offsets. Starting with our 2020 emissions calculation amount, approximately $500K USD will be invested in high-quality offsets that provide co-benefits to local communities.

Redesigning: Second, we’re reducing our carbon footprint by sourcing renewable energy for our facilities and redesigning our procurement policy to ensure that everything purchased is as sustainable as possible. We will continue to implement energy efficiency projects in our facilities and procure certified IT equipment where feasible.

Aligning: Lastly, we’re committed to funding, aligning, and partnering with groups who are demanding better from the world and setting new industry standards.

We recognize this is just the beginning of our long-term obligation to take concrete steps to preserve our planet. Learn more in our latest press release.

In addition to the efforts to reduce our carbon emissions, we recognize the transformative potential of real-time 3D to drive real-world carbon reduction at scale. We see creators innovating to develop new efficiencies, reduce negative environmental impacts, and proactively prepare the world for a climate-resilient future every day. In April of this year, we collaborated with the United Nations Environment Programme and Project Drawdown to launch the Unity for Humanity Environment and Sustainability Grant in support of creators who are using RT3D to realize a more sustainable future. The winners of this grant are:

Powers of X: A VR experience designed to raise awareness around humanity’s impact on global climate change and empower high school students to act.

District 64: A VR storytelling experience that challenges systemic injustice and illustrates the grave impacts of urban oil drilling on health in marginalized communities.

Origen: An interactive, immersive experience that spotlights the destruction of sacred territory endured by Indigenous communities, highlighting the ancient way of understanding human life through our relationship to the land.

Learn more about the grantees in this Unity for Humanity Summit session recording and stay tuned for more updates. Thank you to everyone who participated in this year’s Unity for Humanity Summit – we can’t wait to see what you create next. Get future updates and event information by joining the social impact mailing list.

*Demographically diverse: Diversity of theme, geography, creator background and experience, medium, etc.

>access_file_
1264|blog.unity.com

Made with Unity: Creating and training a robot digital twin

Our Made with Unity: AI series showcases Unity projects made by creators for a range of purposes that involve our artificial intelligence products. In this example, we feature a recent submission to the OpenCV Spatial AI competition that showcases robotics, computer vision, reinforcement learning, and augmented reality in Unity in an impressive suite of examples.

Unity has a world-class, real-time 3D engine. While the engine and tools we have created traditionally supported game developers, the AI@Unity group is building tools around areas like machine learning, computer vision, and robotics to enable applications outside of gaming, especially those that rely on artificial intelligence and real-time 3D environments.

Gerard Espona and the Kauda Team’s submission to the OpenCV Spatial AI competition utilized many of our AI tools and packages across multiple examples. They used our Perception Package to help train computer vision models and the ML-Agents toolkit to train their machine learning models and do a sim2real demonstration of a robotic arm. We interviewed Gerard to find out what inspired him to build this project. Read on to learn more about how he brought this project to life in Unity and in the real world.

Where did you get the Kauda Team name from?

Kauda Team is composed of Giovanni Lerda and myself (Gerard Espona), with the name coming from the free and open-source 3D-printable desktop-sized 5-axis robotic arm that Giovanni created, called Kauda. This is a great desktop robot arm that anyone can make, and it allowed us to collaborate remotely on this project.

We developed Kauda Studio, which is a Unity application that powers the Kauda digital twin. It provides a fully functional, accurate simulation of Kauda with inverse kinematics (IK) control and a USB/Bluetooth connection to the real Kauda, and it can support multiple OpenCV OAK-D cameras.

The OAK-D camera combines two stereo depth cameras and a 4K color camera with onboard processing (powered by an Intel Myriad X VPU) to automatically process a variety of features. As part of the contest, we built a Unity plug-in for OAK devices, but we also wanted to have a digital twin in Unity as well. The OAK-D Unity digital twin provided a virtual 3D camera with an accurate simulation that could be used for synthetic data gathering. It also allows for virtual images to be fed into the real device’s pipeline. We were able to use the Unity Perception Package to collect synthetic data for training custom items with the virtual OAK-D camera.

Having a digital twin allowed us to enable additional features on Kauda. You can also use the augmented reality (AR) features of Unity to interact with a virtual robot in the real world. One application is to learn how to perform maintenance on a robot without requiring a robot to be there. This also allows the programming of sequential tasks with a no-code approach, using a virtual and accurate representation of the robot.

The digital twin enabled us to perform reinforcement learning (RL) training. RL is a time-consuming process that requires simulation for anything beyond extremely simple examples. With Kauda in Unity, we used the ML-Agents toolkit to perform RL training for control.

We also began testing human-machine collaboration and safety procedures by replicating the robot in Unity and using the cameras to measure where the human was inside the robot area. You can imagine doing this for a large robot that can cause injury to humans when errors occur. The simulation environment lets us test these scenarios without putting humans in danger.

We believe RL is a powerful framework for robotics, and Unity ML-Agents is a great toolkit that allows our digital twin to learn and perform complex tasks. Because of the limited time frame of the contest, the goal was to implement a simple RL “touch” task and transform the resulting model to run inference on the OAK-D device. Using ML-Agents, the robot learned the optimal path using IK control to dynamically touch a detected 3D object.

To accomplish this, we first implemented a 3D object detector using spatial tiny YOLO. The RL model (PPO) uses the resulting detection and the position of the IK control point as input observations. As output actions, we have the 3-axis movement of the IK control point. The reward system was based on a small penalty in each step and a big reward (1.0) when the robot touched the object (a minimal sketch of this scheme appears at the end of this entry). To speed up training, we took advantage of multiple agents learning simultaneously, developing a virtual spatial tiny YOLO with the same outputs as the real spatial tiny YOLO.

Once the model was trained, we transformed it to OpenVINO IR and Myriad blob format using the OpenVINO toolkit to load the model on an OAK-D device and run inference. The final pipeline is a spatial tiny YOLO plus the RL model. Thanks to our Unity plugin, we were able to compare inference using ML-Agents and OAK-D agents inside Unity side by side.

The first stage of our pipeline is a 3D object detector, which is a very common starting point for AI-based computer vision and robotic tasks. In our case, we used a pre-trained tiny YOLO v3 model, and thanks to the Unity Perception Package we were able to train a custom category. It allowed us to generate a large synthetic dataset of 3D models with automatic ground-truth bounding box labeling in a matter of minutes. Usually, the collection and labeling process is a manual human process that is very time-consuming. Having the ability to generate a rich dataset with plenty of randomization options for different rotations, lighting conditions, texture variations, and more is a big step forward.

The timing required to sync the virtual and real-world items was a little off at times. We think this could be resolved by using ROS in the future, and it is nice that Unity officially supports ROS now.

Gerard has a full playlist of videos documenting their journey, with a few notable videos including a webinar with OpenCV and the final contest submission video. He has also released the OAK-D Unity plug-in on GitHub to help others get started on their project.

We are excited to see our tools enable projects like this to come to life! If you are looking to add AI to your projects in Unity, we have many examples and tutorials to get you started. The Unity Perception Package allows you to easily gather synthetic data in Unity. The Unity Robotics Hub has tutorials and packages to get you started with ROS integration and robotics simulation. And the ML-Agents toolkit makes reinforcement learning simple, with many environments to get started with.
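The team’s training code isn’t included in the article; the following is a minimal, hypothetical ML-Agents sketch of the reward scheme described above (small per-step penalty, 1.0 reward on touch). Class, field names, and thresholds are assumptions:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;
using UnityEngine;

public class TouchTaskAgent : Agent
{
    [SerializeField] Transform ikControlPoint;  // the arm's IK target
    [SerializeField] Transform detectedObject;  // position reported by the 3D detector

    public override void CollectObservations(VectorSensor sensor)
    {
        // Observations: detected object position and IK control point position.
        sensor.AddObservation(detectedObject.localPosition);
        sensor.AddObservation(ikControlPoint.localPosition);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Actions: 3-axis movement of the IK control point.
        var move = new Vector3(actions.ContinuousActions[0],
                               actions.ContinuousActions[1],
                               actions.ContinuousActions[2]);
        ikControlPoint.localPosition += move * Time.fixedDeltaTime;

        // Small penalty each step encourages finding the shortest path.
        AddReward(-0.001f);

        if (Vector3.Distance(ikControlPoint.position, detectedObject.position) < 0.02f)
        {
            // Big reward when the arm touches the object, then reset the episode.
            SetReward(1.0f);
            EndEpisode();
        }
    }
}
```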

>access_file_
1265|blog.unity.com

Automotive HMI Template: Take it for a ride

Want to see how to use Unity to create rich, interactive human-machine interfaces (HMIs) in cars? Check out our new Automotive HMI Template, available for free on the Unity Asset Store, and watch our webinar for an expert-led walkthrough.

Step into any new vehicle today, and you’ll see dashboard innovations beyond Henry Ford’s wildest dreams. As automakers push the boundaries on the number and size of screens inside their vehicles, they unlock opportunities to create more engaging experiences for drivers and passengers. Car companies want to build rich, interactive content and are under increasing pressure to do so. As one auto exec memorably put it, “Screens are the new horsepower.”

Modern HMI user interfaces (UI) aren’t that different from a video game. They need to be interactive, provide compelling visuals, and incorporate various simultaneous transition animations. With its roots as a game engine, Unity is especially well suited for creating this type of content. Just as it enables collaboration among game developers, automotive teams and partners can use Unity for prototyping and production, ensuring what they dream up is what users ultimately experience. Thanks to its collaboration and extensibility features, Unity is a popular tool for HMI and UI/UX prototyping across the automotive industry.

Unity also provides unparalleled multiplatform support for teams to deploy to their target of choice. We’ve worked to ensure Unity supports a range of embedded operating systems and chipsets to make the jump possible from prototyping all the way into production. We provide platform licensing for Intel, NVIDIA, NXP and Qualcomm processors, as well as major operating systems like Android Automotive, BlackBerry QNX and Yocto Linux. We’re also working with map providers like HERE Technologies to create more immersive navigation and location-based experiences.

HMI use cases extend across all sorts of embedded systems, from car dashboards and in-flight entertainment to home appliances and digital twins. Our Unity for HMI solutions support this vast range of applications.

Introducing the Automotive HMI Template

We created the Automotive HMI Template to help simplify your HMI design and development, and to demonstrate how Unity can be used for creating any rich, interactive and dynamic content. The template provides great value as a starting point for creating an HMI project, and it includes several assets to inspire you. We focused the template on priority use cases in the automotive industry, such as the following.

Multi-display support

Embedded HMIs may span more than one display. The template features a multi-screen manager for cluster, in-vehicle infotainment (IVI) and heating, ventilation and air conditioning (HVAC) UIs, and shows how Unity can support multiple displays based on the project contents (a minimal sketch of Unity’s multi-display API appears at the end of this entry).

High-quality visuals

The look and feel of an HMI is critical to the user experience. The template shows the stunning results possible with Unity’s Universal Render Pipeline (URP), which enables teams to achieve best-in-class graphics while optimizing for performance on your embedded target.

What you see is what you get

Teams working on HMI systems traditionally build and test components in silos. A design team may work with an entirely different toolchain than a development team. When they are ready to see the results of their work, traditional HMI toolchains don’t let them debug within a scene – once the simulation is running, they are effectively locked out, which ultimately lengthens revision cycles and time to market. One of Unity’s major advantages is that you can design, develop and debug on the fly, and instantly see your changes accurately displayed, for both 2D and 3D content. The template showcases how Unity lets you see the UI and your scene at the same time, even during simulation.

These features are just a taste of what’s included in the Automotive HMI Template. It demonstrates plenty of specialized Unity functionality that can be used for HMI design, such as scriptable objects that store data like skins or the vehicle status.

The making of the Automotive HMI Template

This work was inspired by the exterior automotive design of Tianze Yu and the interior automotive design of Lukas Chesla, which is part of a larger sponsorship with the College for Creative Studies (CCS) in Detroit, Michigan. To learn about Unity’s collaboration with CCS, read our blog post. We teamed up with Innovation Works, a Unity Certified Creator and renowned automotive design studio, to build the template. This project is built with Unity 2020 LTS to provide the best support for embedded systems.

Get started

Download the template today on the Unity Asset Store. Watch our webinar to get an in-depth walkthrough of the template from our technical experts. Speak to a Unity sales representative to bring your HMI project to life.
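The template’s own multi-screen manager isn’t shown in the post; as context, here is a minimal sketch of the built-in Unity multi-display API that such a manager would build on (the cluster/IVI mapping is an assumed example):

```csharp
using UnityEngine;

public class ActivateDisplays : MonoBehaviour
{
    void Start()
    {
        // Display.displays[0] (the primary display, e.g., the cluster) is
        // always active; additional displays such as an IVI screen must be
        // activated explicitly.
        for (int i = 1; i < Display.displays.Length; i++)
        {
            Display.displays[i].Activate();
        }
        // Each Camera then selects its output display via its
        // Target Display property (Camera.targetDisplay).
    }
}
```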

>access_file_
1266|blog.unity.com

Creatives and performance: Why I became obsessed with art and science

Let’s face it - not every one of your game’s creatives is going to have a sky-high IPM and drive quality users by the thousands. As Creative Lead at ironSource, I’ve helped design and launch enough creatives to recognize not all of them are hits. But, I’ve learned to celebrate these failures and learn from them - doing so gets you closer to unlocking the secret sauce of success. Even those who don’t feel they have creativity running through their veins can still crack creatives and help a game scale quickly and profitably. Here’s a framework for doing just that so you can gather and apply the learnings that optimize your game’s creative strategy.

Step 1: Start chewing over the data

Before you start building a great creative, you need a marketable concept - this starts by analyzing data to determine what’s performing well and what areas need improvement. We’ve broken down three ways you can approach creative thinking and how to use performance metrics to back up your decisions:

Current top-performers, future playables

When designing your video creatives, test a few versions that highlight different parts of your game, like the core mechanic, metagame, and other relevant parts. Analyze each of these themes by looking at performance KPIs like IPM and CVR and quality metrics like retention and ROAS so you can confirm what aspects and themes are resonating with your users. Then you can take these top-performers and adapt them into an engaging and interactive creative, like a playable. Spoiler alert: At the end of this process, measure your playable’s performance by looking at in-game metrics to see how well your creative attracted quality users.

Embrace your inner sonar by tracking industry trends

If your gameplay is the foundation for your creative concept, you can build another layer by applying trends in the industry and in your game’s genre. Trends perform well because users are familiar with them and they already have a track record of success. The first version of the creative for the game Wheel Scale by Supersonic used a drag mechanic. In the sixth version, it was swapped for a drawing mechanic because this was trending in the top charts. The creative with the drawing mechanic achieved 45% more scale, a 5% higher IPM, and boosted CVR by 34% - and importantly, all while keeping LTV stable, which indicated the quality of the users was still high even as the mechanic in the creative changed.

Think like a game designer

Game designers prioritize gameplay and staying true to the core concept as they brainstorm new ways to level up in the next update. Stepping out of the shoes of a creative designer and into those of a game designer can help you meld the two worlds - creativity and hooking users with gameplay. As you adopt this new perspective for your creative strategy, you can come up with a new hook that entices users while highlighting the core mechanic. For example, if you have a match-3 mechanic and a bakery theme, you can introduce a storyline in your creative around building up your bakery and feature a Muffin Woman character while showing actual gameplay. Now, your ad appeals to users who enjoy a story-based game, like a bakery theme, and play match-3 games. Hold up: If your game doesn’t have a bakery theme, this tip still applies to you. This theme in your creatives could be something that resonates with your users - try testing it as a wild card before implementing it into your game.

Analyzing the impact of these three approaches comes down to looking at the data. With each tweak to your creatives, check the KPIs, apply your learnings to each variation, and optimize as you work towards uncovering your game’s secret sauce. We’re going to get into this more soon, but keep in mind that this cycle of learning and optimizing is an endless process - you should design new creatives and measure their impact continuously. The next part of the framework is all about executing and optimizing after getting into the creative mindset.

Step 2: Time to improve performance

When first launching your UA campaign, you can’t know for sure what creatives are going to succeed - even someone with decades of experience in the industry can’t know with certainty what’s going to work. Take our creative team, for example: Many of us have been designing creatives for years and years, but we still place friendly bets about which iteration will perform best because the results can always surprise us until we finish running the test. Great execution comes down to macro A/B tests that compare the impact of making a drastic change to your creatives and micro decisions that change small details. Designing many iterations that initially change large aspects then making minor tweaks ensures you’re testing as much as possible and gathering data to optimize efficiently while not missing out on an opportunity to give your creative performance a boost. Let’s start with the macro changes.

Go big: A/B test major elements

While small tweaks can help you quickly fine-tune creatives, macro A/B tests can unlock success by changing a big part of the design. Some common examples of major features to change include the lead character (female vs. male, animal vs. human), level difficulty, and camera angle. Just because it’s a big change doesn’t mean it necessarily takes too much time or resources to test - you can test many of these features directly through Unity. One of the elements that we often A/B test is the beginning of the funnel - the tutorial screen - which is essential for attracting and engaging users. Idle Barber Shop Tycoon from Codigames demonstrates this type of macro decision in action. We tested two different versions of the tutorial in the playable - one that highlights the barbershop environment and another that leverages a trend of showing characters with strong expressions by having users wake up the sleeping hairdresser. The version that used the sleeping character earned a 60% engagement rate compared to the 54% ER of the other creative. This iteration attracted potential players with a visual hook that reduced user reaction times (time to engage) by 33% - it became the winning version and achieved over 100 million impressions.

Making both micro decisions and macro changes then analyzing the impact can help you make more efficient and impactful decisions as you iterate on your creatives. Knowing exactly what worked - and what didn’t - helps you identify the opportunities for optimizing creatives and, in turn, gameplay. You can close the loop of UA and monetization by integrating the concept or feature that performed well from your playable into your build. To know if your change made an impact on monetization and game performance, track its impact by looking at in-game metrics.

Refine the little things

In the past, creative teams thought their work was complete when they finished designing and launching an ad set. Now, you can’t head to the beach the moment your creative goes live - it’s a cycle of crunching the numbers and optimizing performance so you get the most out of your creative strategy. Use in-ad metrics like engagement rate and drop-off to pinpoint parts of your interactive creative that are failing and then make small quick tweaks to improve performance. Not every creative is going to help you scale exponentially from the get-go, but each creative does contain important information about performance and what’s working with your users. Recognizing what pieces of the creative are failing can help you quickly identify and adjust them to try and unlock scale.

Let’s say a playable ad of yours isn’t scaling well. Looking at the data, you notice users are dropping off quickly or failing the game in the first few seconds - this could indicate it’s too short and/or challenging. So you add a feature to increase the duration of gameplay and give users an automatic extra life so they don’t immediately fail. Now they play all the way through to the end card, feel less frustrated (although striking the right balance of frustration can be a very strong feeling to motivate them to keep playing), and want to download your game and keep playing all night.

Small changes can create a sense of urgency, too - a trend we’re seeing that encourages engagement as users feel compelled to overcome the challenge and beat the level shown in the creative. You can accomplish this by adding features like a timer, showing characters asking for help or panicking, including a red flashing border, or increasing speed. To see if a quick tweak like this worked, look at your data and check if it boosted key metrics like engagement rate, CVR, and IPM.

Finding your secret sauce

Each game is going to have its own recipe for the secret sauce of creative success - it depends on your genre, concept, and specific elements within your game. But if you approach your creative strategy in a way that focuses on testing and iterating then analyzing the data to help you extract a hook, you can find your unique recipe and unlock scale into the long-term. With each successful campaign, gather insights and build a knowledge center that you can apply to your next set of creatives. While every game’s user acquisition strategy will need to strike a different balance between the creative, the bid, and user quality, you can more easily find this formula by applying your learnings from previous campaigns.

Editor’s note: This article is based on the exclusive presentation that Creative Lead at ironSource Elad Gabison gave at the virtual LevelUp 2021 conference. Check out the video from LevelUp 2021.

>access_file_
1267|blog.unity.com

What is the Unity for Humanity program?

Dot’s Home, UFH 2020 grant recipient

Unity for Humanity is a Unity Social Impact program that empowers changemakers to foster a more inclusive, equitable, and sustainable world using real-time 3D (RT3D). Unity for Humanity connects, uplifts, and serves communities across three focus areas: environment and sustainability, digital health and wellbeing, and education and inclusive economic opportunity.

Unity for Humanity was born out of the desire to support social impact creators building in Unity. The program was co-founded in 2018 by producer and environmental activist Amy Zimmerman, with the visionary support of the VP of Social Impact, Jessica Lindl. With their leadership, the Unity for Humanity program grew from a small annual grant at Unity to a robust creator support program.

In 2021 alone, Unity for Humanity has supported creators through:

The Unity for Humanity Grant for all social impact content using RT3D

The Rare Impact Challenge for mental health and wellbeing projects

The Environment and Sustainability Grant, created in collaboration with Project Drawdown and the United Nations Environment Programme (UNEP)

The Black Visions Grant, which supported Black-led social impact, including four projects selected for the Tribeca Film Festival Juneteenth Program

Bringing people together and empowering visionary RT3D creators to make a positive impact on the world is what drives the Unity for Humanity program. We believe that stories rooted in community have the power to create positive global change. Whether it’s a game encouraging citizen science, an AR experience about civil rights, or a tool for connecting children impacted by grief, global creators are using RT3D to make an impact in their communities.

Unity for Humanity is guided by several big questions: How can technology serve humanity and global communities? How can we exponentially grow the impact social impact creators are having around the world? How can we amplify underrepresented voices using RT3D? The values of respect, empathy, and opportunity are embedded in everything we do. Our aim is to foster an inclusive social impact creator ecosystem.

Please note: Unity for Humanity will not accept projects or content that includes or promotes hate, violence, bullying, harassment, threats against individuals or groups of people, or illegal content.

The Unity for Humanity program supports social impact-driven RT3D projects. All project genres (game, XR, film, solution, etc.) are eligible to apply, so long as they are built on a real-time 3D platform. In order to apply for a Unity for Humanity Grant, projects must be in production (early-stage production is accepted). Creators should be able to share their vision for the project, visuals, in-progress work, and an impact plan. All projects must be impact-driven – meaning that they have measurable impact goals and/or calls to action – and encompass themes of social, healthcare, education, humanitarian, and/or environmental issues. Projects must also align with at least one of the UN 17 Sustainable Development Goals.

The rubric

To make sure all future applicants have as much information and context as possible, this is what our Unity for Humanity judges are looking for when reviewing projects. We hope this transparency will help strengthen your Unity for Humanity grant applications. When reviewing grant applications, we consider inclusion, impact, viability, and vision.

Inclusion: Inclusive storytelling is at the heart of Unity for Humanity. Does the project reflect a diversity of experiences and backgrounds? Does the project have a natural connection to the community and audience being represented or served through the work? Does the application demonstrate that the creators are thinking about future audiences and distribution of the work in an inclusive way?

Vision: Is there a strong motivation for creating the work? Does the project express a unique perspective? Does the project reflect a strong sense of compassion for humanity?

Impact: Does the project have measurable impact goals and calls to action? Is the project aligned with at least one of the UN 17 Sustainable Development Goals?

Viability: Does the team have a realistic plan of execution for the production and distribution of the project so that it can achieve the greatest impact? Is it realistic in scope?

You can find more information and details about the judging process in our Unity for Humanity FAQ. If a grant is announced with a theme – for example, digital health and wellbeing – we may provide even more theme-specific criteria. This information will be shared in our FAQ.

Grant Details

The Unity for Humanity Grant is designed to support the vision of the creator(s) and the ability of the project to make an impact on the world. Creators who are awarded a Unity for Humanity Grant can expect to receive financial support. These funds are awarded to support the completion and distribution of their social impact project. Grant recipients are also awarded technical support, which includes a kick-off consultation with our developer relations team and bespoke assistance from Unity technical support. We are pleased to also offer our grant recipients marketing support, industry mentorship, and invitations to events including the Unity for Humanity Summit. Most importantly, grant recipients join our incredible community of social impact changemakers and visionary storytellers committed to creating a better world.

If you are interested in applying for a Unity for Humanity Grant or attending future events, please subscribe to the Social Impact newsletter. To see how creators are using RT3D to drive meaningful change, please check out unity.com/humanity and the Changemakers showcase.

>access_file_
1268|blog.unity.com

Introducing: Unity Robotics Visualizations Package

With this toolkit for visualizing and debugging the internal state of robotics simulations, Unity can be used as an all-in-one ROS simulation and visualization tool.

One of the challenges faced by roboticists is the need to understand what's happening in their system. In a complex interconnected network of components, when something isn't working, how do you figure out which part is going wrong? Is component A generating bad data, or is component B processing it wrong? It's critical to be able to visualize the data travelling around the system. With this in mind, today we're excited to announce the next release from Unity Robotics: the Robotics Visualizations Package, a new package for displaying and customizing visualizations of ROS messages.

The Unity Robotics team has been hard at work, releasing several robotics example projects, such as Pick-and-Place and Object Pose Estimation. Most recently, we released our Nav2-SLAM Example, demonstrating an autonomous robot navigating and mapping an unknown space, all simulated in Unity. The Robotics Visualizations Package builds on and supports these packages by offering a library of customizable visualizations for all the common ROS message types: shapes, poses, point clouds, images, sensors of all kinds, and more. It natively supports the ROS transform tree, and allows you to enable, disable and customize visualizations for any ROS topic at runtime.

Here's how easy it is to use the Robotics Visualizations Package to add visualizations to an existing robotics project:

Import the Robotics Visualizations Package into Unity using Package Manager.

Drag the DefaultVisualizationSuite prefab into your Unity scene.

Hit Play, and you’ll see some new buttons in the Heads-Up Display (HUD). Click on the Topics button to see the list of all the topics ROS knows about.

Click on whichever topics you want to see visualizations for!

Transforms represent the relationships between coordinate frames in a robotic system. All your data are generated in different coordinate frames: lidar data are in the lidar frame, camera data are in the camera frame, and map data are in the map frame. In order to make sense of all these disparate data sources, we need to have a common frame of reference. Transforms help us do this by keeping track of the relationships between these frames. Debugging robots without putting the sensors and algorithms in this 3D context is nearly impossible. With the Robotics Visualizations Package you can now view data in real time alongside Unity scenes and assets, and seamlessly switch between simulated and real data.

Robot mapping is tricky. Maps can fracture, robots can drift. Is your odometry tuned correctly, or did the map just break in half because an optimization-based SLAM algorithm broke down? The Unity Robotics Visualizations Package enables you to visualize the occupancy grid, transforms, localization, and lidar point cloud all on top of one another, enabling you to visually see where algorithms break down.

The Robotics Visualizations Package supports most common ROS message types, including transforms, occupancy grids, 3D point clouds, markers, laser scans, images (JPEG, PNG and uncompressed), and more. It also has opportunities for customization if you have a unique data type you need to view! The Robotics Visualizations Package also supports user-created visualizations: it includes a powerful set of tools to draw anything you need, and/or build on and customize the built-in visualizations. Here are some highlights:

Drawing3d is an easy-to-use utility class for drawing arbitrary textured/colored lines, shapes, meshes and labels in 3D space. For example, you could use it to draw the ghost of an object at the position where you predict it will be, a line showing the trajectory it will follow to get there, and more lines around it to indicate error bars.

PointCloudDrawing is a GPU-optimized point cloud renderer, which can display up to 10 million billboarded points, each with their own size and color, at interactive speeds. You can use it to display volumetric data such as 3D scans or depth images.

Display historical data trends, 3D movement trails, and more with the HistoryDrawingVisualizer template, which maintains a configurable-length history of the messages sent on a topic. You can analyse and display that history however you want.

And of course, all this is backed up by the power and ease of use of the Unity Engine, offering features such as AR and VR support, custom shaders with Unity Shader Graph, compute shaders, cloud simulation, and more. We're excited to see what you all do with the Robotics Visualizations Package!

To get started with the Robotics Visualizations Package, check out this new extension to the Nav2-SLAM tutorial that demonstrates how to use the new package. Our Robotics Visualizations Package is just a part of our growing ecosystem of robotics packages and features that enable robotics in Unity. For more robotics projects, visit the Unity Robotics Hub on GitHub. Be sure to visit us on the Robotics Forum, or email us at unity-robotics@unity3d.com with your feedback and suggestions!
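The post itself doesn't include code; for context, here is a minimal sketch of how ROS data typically reaches Unity in the first place, via the companion ROS TCP Connector package from the Unity Robotics Hub (the "scan" topic name is an assumed example). Once messages are flowing, the Visualizations Package can display them from the runtime HUD:

```csharp
using Unity.Robotics.ROSTCPConnector;
using RosMessageTypes.Sensor;
using UnityEngine;

public class LaserScanListener : MonoBehaviour
{
    void Start()
    {
        // Register a callback for the "scan" topic. Subscribed topics also
        // appear in the Visualizations Package's Topics list at runtime.
        ROSConnection.GetOrCreateInstance().Subscribe<LaserScanMsg>("scan", OnScan);
    }

    void OnScan(LaserScanMsg msg)
    {
        Debug.Log($"Received laser scan with {msg.ranges.Length} range readings");
    }
}
```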

>access_file_
1269|blog.unity.com

Speed up your programmer workflows

We recently published two blog posts, Five ways to speed up your workflows in the Editor and Speed up your artist workflows, both based on our e-book for professional developers, 70+ tips to increase productivity with Unity 2020 LTS. In this third and final blog post of the series, we focus on workflows and the features that help programmers get more done in less time. Let’s start with how you can save compilation time when playtesting.

When you enter Play mode, your project starts to run as it would in a build. Any changes you make in-Editor during Play mode will reset when you exit Play mode. Each time that you enter Play mode in-Editor, Unity performs two significant actions:

Domain Reload: Unity backs up, unloads, and recreates scripting states.

Scene Reload: Unity destroys the Scene and loads it again.

These two actions take more and more time as your scripts and Scenes become more complex. If you don’t plan on making any more script changes, leverage the Enter Play Mode Settings (Edit > Project Settings > Editor) to save some compile time. Unity gives you the option of disabling either Domain Reload, Scene Reload, or both. This can speed up entering and exiting Play mode. Just remember that if you plan on making further script changes, you need to re-enable Domain Reload. Similarly, if you modify the Scene Hierarchy, you should re-enable Scene Reload. Otherwise, unexpected behavior can arise.

An assembly is a C# code library, a collection of types and resources that are built to work together and form a logical unit of functionality. By default, Unity compiles nearly all of your game scripts into the predefined assembly, Assembly-CSharp.dll. This works well for small projects, but it has some drawbacks:

Every time you change a script, Unity recompiles all other scripts.

A script can access types defined in any other script.

All scripts are compiled for all platforms.

Organizing your scripts into custom assemblies promotes modularity and reusability. It prevents them from being added to the default assemblies automatically, and limits which scripts they can access. You might split up your code into multiple assemblies, for example Main, Stuff, and Library. Here, any changes to the code in Main cannot affect the code in Stuff. Similarly, because Library doesn’t depend on any other assemblies, you can easily reuse the code from Library in just about any other project. Assemblies in .NET has general information about assemblies in C#. Refer to Assembly definitions in the Unity documentation for more information about defining your own assemblies in Unity.
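As a minimal sketch of what defining such an assembly involves (not from the original post): an assembly definition is a JSON .asmdef file placed in the folder whose scripts it should own, for example Assets/Scripts/Library/Library.asmdef for the hypothetical Library assembly above:

```json
{
    "name": "Library",
    "references": []
}
```

A Main.asmdef that lists "Library" in its references array could then use Library’s types, while edits to Main’s scripts would no longer trigger a recompile of Library.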
Ever catch yourself repeating the same changes when you create a new script? Do you instinctively add a namespace or delete the update event function? Save yourself a few keystrokes and create consistency across the team by setting up the script template at your preferred starting point.

Every time you create a new script or shader, Unity uses a template stored in %EDITOR_PATH%\Data\Resources\ScriptTemplates:

Windows: C:\Program Files\Unity\Editor\Data\Resources\ScriptTemplates

Mac: /Applications/Hub/Editor/[version]/Unity/Unity.app/Contents/Resources/ScriptTemplates

The default MonoBehaviour template is this one: 81-C# Script-NewBehaviourScript.cs.txt. There are also templates for shaders, other behavior scripts, and assembly definitions. For project-specific script templates, create an Assets/ScriptTemplates folder, then copy the script templates into this folder to override the defaults. You can also modify the default script templates directly for all projects, but make sure that you back up the originals before making any changes. The original 81-C# Script-NewBehaviourScript.cs.txt file looks like this:
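(A reconstruction of that default template follows; the stock file varies slightly between Unity versions.)

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class #SCRIPTNAME# : MonoBehaviour
{
    // Start is called before the first frame update
    void Start()
    {
        #NOTRIM#
    }

    // Update is called once per frame
    void Update()
    {
        #NOTRIM#
    }
}
```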
The Addressable Asset System simplifies how you manage the assets that make up your game. Any asset, including Scenes, Prefabs, text assets, and so on, can be marked as “addressable” and given a unique name, and you can call this alias from anywhere. Adding this extra level of abstraction between the game and its assets can streamline certain tasks, such as creating a separate downloadable content pack. The system also facilitates referencing those asset packs, whether they're local or remote.

To begin, install the Addressables package from the Package Manager and add some basic settings to the project. Each asset or Prefab in the project then has the option to be made addressable: check the option under an asset's name in the Inspector to assign it a default unique address. Once marked, the corresponding assets appear in the Window > Asset Management > Addressables > Groups window. For convenience, you can either rename each address in the asset's individual Address field, or simplify them all at once. Bundle these assets to host them on a server elsewhere, or distribute them locally within your project; wherever each asset resides, the system can locate it using its Addressable Name string.

You can now use your Addressable assets through the Addressables API. Without Addressables, you'd have to reference a Prefab directly in a script (say, through a public prefabToCreate field) and call Instantiate on it. The disadvantage is that any referenced Prefab loads into memory with the Scene, even if the Scene doesn't need it. Using Addressables, you instantiate by address string instead, so the Prefab does not load into memory until it's needed, that is, when we invoke Addressables.InstantiateAsync inside a method like CreatePrefabWithAddress (see the sketch at the end of this section). Additionally, you can use Addressables for high-level reference counting, to automatically unload bundles and their associated assets when they're no longer in use.

Tales from the optimization trenches: Saving memory with Addressables shows an example of how to organize your Addressables Groups so that they are more memory efficient. Meanwhile, the Addressables: Introduction to concepts tutorial offers a quick overview of how the Addressable Asset System can work in your project.

If you're operating a live game, you might want to consider using Unity's Cloud Content Delivery (CCD) solution with Addressables. The Addressables system stores and catalogs game assets so that they can be located and called automatically; CCD then pushes those assets directly to your players, completely separate from your code. This reduces your build size and eliminates the need for players to download and install a new game version every time you make an update. To learn more, read the blog post on the integration between Addressables and Cloud Content Delivery.

The Platform Dependent Compilation feature allows you to partition your scripts to compile and execute code for a specifically targeted platform. It works by combining the built-in platform #define directives with the #if compiler directive (see the example after this section). Use the DEVELOPMENT_BUILD #define to identify whether your script is running in a player that was built with the Development Build option. You can also compile selectively for particular Unity versions and/or scripting backends, and even supply your own custom #define directives when testing in the Editor: open the Other Settings panel, part of the Player settings, and navigate to Scripting Define Symbols. See Platform Dependent Compilation for more information on Unity's preprocessor directives.
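A minimal sketch of platform-dependent code (ours, not the post's original snippet); all of these symbols are built into Unity:

    using UnityEngine;

    public class PlatformLogger : MonoBehaviour
    {
        void Start()
        {
    #if UNITY_EDITOR
            Debug.Log("Running in the Unity Editor");
    #elif UNITY_IOS
            Debug.Log("Running on iOS");
    #elif UNITY_ANDROID
            Debug.Log("Running on Android");
    #else
            Debug.Log("Running on another platform");
    #endif

    #if DEVELOPMENT_BUILD
            Debug.Log("This player was built with Development Build enabled");
    #endif
        }
    }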
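Returning to the Addressables comparison above: the original post's snippets didn't survive the trip into this log, so here is a hedged reconstruction. The field, address, and method names are placeholders, and we use the current InstantiateAsync API:

    using UnityEngine;

    public class DirectSpawner : MonoBehaviour
    {
        // A direct reference: this Prefab is loaded into memory with the
        // scene, even if CreatePrefab is never called.
        public GameObject prefabToCreate;

        public void CreatePrefab()
        {
            Instantiate(prefabToCreate);
        }
    }

    using UnityEngine;
    using UnityEngine.AddressableAssets;

    public class AddressableSpawner : MonoBehaviour
    {
        // The Addressable Name string, not a direct asset reference.
        public string address = "MyPrefab";

        public void CreatePrefabWithAddress()
        {
            // Nothing is loaded until this call; it completes asynchronously.
            Addressables.InstantiateAsync(address);
        }
    }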
A ScriptableObject is a data container that saves large amounts of data, separate from class instances. ScriptableObjects avoid copying values, which can reduce your project's memory usage. Check out the full e-book for some examples of how to use ScriptableObjects, or peruse the ScriptableObject documentation for further details on using them in your application. Looking for even more? Watch Better data with ScriptableObjects in Unity for a quick introduction, and see how they can help with Scene management in Achieve better Scene workflow with ScriptableObjects.

Optimization tip: We recommend binary serialization formats such as MessagePack or Protocol Buffers for saved data, rather than text-based ones such as JSON or XML. In project reviews, these binary formats avoid the memory and performance issues we see associated with the latter.

Unity offers support for the following integrated development environments (IDEs):

Visual Studio: default IDE on Windows and macOS
Visual Studio Code: Windows, macOS, Linux
JetBrains Rider: Windows, macOS, Linux

IDE integrations for all three of these environments appear as packages in the Package Manager. When you install Unity on Windows and macOS, Visual Studio is installed by default. If you want to use another IDE, simply browse for the editor in Unity > Preferences > External Tools > External Script Editor.

Rider is built on top of ReSharper and includes most of its features. It supports C# debugging on the .NET 4.6 scripting runtime in Unity (C# 8.0). For more information, see JetBrains' documentation on Rider for Unity. VS Code is a free, streamlined code editor with support for debugging, task running, and version control. Note that Unity requires Mono (on macOS and Linux), the Visual Studio Code C# extension, and the Visual Studio Code Debugger for Unity extension (not officially supported) when using VS Code. Each IDE has its own merits; see Integrated development environment (IDE) support for more information on choosing the right one for your needs.

Take a look at the e-book for a list of shortcuts that can benefit your project, and watch Visual Studio tips and tricks to boost your productivity for more workflow improvements with Visual Studio. Interested in exploring JetBrains Rider? Check out Fast C# scripting in Unity with JetBrains Rider, or these tips on using JetBrains Rider as your code editor.

The Unity Debugger allows you to debug your C# code while the Unity Editor is in Play mode. You can attach breakpoints within the code editor to inspect the state of your script code and its current variables at runtime. Go to the bottom right of the Unity Editor status bar to set the Code Optimization mode to Debug. You can also change this mode on startup at Edit > Preferences > General > Code Optimization On Startup.

In the code editor, set a breakpoint where you want the Debugger to pause execution: click in the left margin/gutter area next to the relevant line (or right-click there to see the context menu for other options). A red circle appears next to the line number of the highlighted line. Select Attach to Unity in your code editor, then run the project in the Unity Editor. In Play mode, the application pauses at the breakpoint, giving you time to inspect variables and investigate any unintended behavior; you can watch the variable list build up, one step at a time, during execution. Use the Continue Execution, Step Over, Step Into, and Step Out controls to navigate the control flow, and press Stop to cease debugging and resume execution in the Editor.
You can debug script code in a Unity Player as well. Just make sure that Development Build and Script Debugging are both enabled in File > Build Settings before you build the Player. Check Wait for Managed Debugger to make the Player wait for the Debugger before it executes any script code. To attach the code editor to the Unity Player, select the IP address (or machine name) and port of your Player, then proceed as usual with the Attach to Unity option in Visual Studio.

Unity provides a Debug class to help you visualize information in the Editor while it's running. Learn how to print messages or warnings in the Console window, draw visualization lines in the Scene and Game views, and pause Play mode in the Editor from script. Here are a few more tips to help you get going:

1. Pause execution with Debug.Break. This is useful for checking certain values in the Inspector when the application is difficult to pause manually.
2. You should be familiar with Debug.Log, Debug.LogWarning, and Debug.LogError for printing Console messages. Also handy is Debug.Assert, which asserts a condition and logs an error upon failure. Note, however, that it only works if the UNITY_ASSERTIONS symbol is defined.
3. When using Debug.Log, you can pass in an object as the context. If you click the message in the Console, Unity highlights the corresponding GameObject in the Hierarchy window.
4. Use Rich Text to mark up your Debug.Log statements. This can help you enhance error reports in the Console.
5. Unity does not automatically strip the Debug logging APIs from non-development builds. Wrap your Debug.Log calls in custom methods and decorate them with the [Conditional] attribute. To compile out the Debug logs all at once, remove the corresponding Scripting Define Symbol from the Player settings; this is identical to wrapping them in #if … #endif preprocessor blocks. See the General optimizations guide for more details.
6. Troubleshooting physics? Debug.DrawLine and Debug.DrawRay can help you visualize raycasting.
7. If you only want code to run while Development Build is enabled, check whether Debug.isDebugBuild returns true.
8. Use Application.SetStackTraceLogType, or the equivalent checkboxes in the Player settings, to decide which kinds of log messages should include stack traces. Stack traces can be useful, but they are slow and generate garbage.

By default, a Console log entry shows two lines. For improved readability, you can streamline entries to just one line, or use more lines for longer entries.

When Unity compiles, the spinner icon in the lower-right corner can be difficult to see. A small custom Editor script that polls EditorApplication.isCompiling can surface this in a floating window, making the compiler status far more visible (see the sketch after this section). Launch its MenuItem to open the window, and modify its appearance with a new GUIStyle to suit your preferences.
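The original post's script didn't survive extraction, but a minimal version of such a compile-status window might look like this (window and menu names are ours):

    using UnityEditor;
    using UnityEngine;

    // Place in an Editor folder. Shows whether the script compiler is busy.
    public class CompileStatusWindow : EditorWindow
    {
        [MenuItem("Window/Compile Status")]
        static void Open()
        {
            GetWindow<CompileStatusWindow>("Compile Status");
        }

        void OnGUI()
        {
            GUILayout.Label(EditorApplication.isCompiling ? "Compiling..." : "Idle");
        }

        void Update()
        {
            Repaint(); // keep the label fresh while compilation runs
        }
    }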
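And for tip 5 above, a hedged sketch of a [Conditional] logging wrapper; the ENABLE_LOGS symbol is a placeholder you would add to (or remove from) Scripting Define Symbols:

    using System.Diagnostics;
    using Debug = UnityEngine.Debug;

    public static class Logging
    {
        // Calls to this method are stripped at compile time unless the
        // ENABLE_LOGS symbol is present in Scripting Define Symbols.
        [Conditional("ENABLE_LOGS")]
        public static void Log(string message)
        {
            Debug.Log(message);
        }
    }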
Unity has integrations with two version control systems (VCS): Perforce and Plastic SCM. To set the Perforce or Plastic SCM servers for your Unity project, go to Project Settings > Editor and configure the server (plus your user credentials for Perforce) under Version Control. Teams on Unity can use Plastic SCM Cloud Edition for free, for up to 5 GB of storage and a maximum of three users. By using Plastic SCM for Unity, you can sync your changes with your teammates' work and consult your project history without ever leaving Unity. Read about some recent updates to Plastic SCM here.

You can also use an external system, such as Git, including Git LFS (Large File Storage) for more efficient version control of larger assets, like graphics and sound resources. For the added convenience of working with the GitHub hosting service, install the GitHub for Unity plug-in. This open source extension allows you to view your project history, experiment in branches, commit your changes, and push your code to GitHub, all within Unity. A well-maintained Unity-specific .gitignore file also helps you decide what should and shouldn't go into the Git repository, and then enforces those rules.

Unity Teams is another option for streamlining your workflows, as it allows you to store your entire project in the cloud. This means that it's backed up and accessible anywhere, making it that much easier to save, share, and sync your Unity projects with anyone.

Check out the first two blog posts in this series, Five ways to speed up your workflows in the Editor and Speed up your artist workflows. You can also download the free 70+ tips to increase productivity with Unity 2020 LTS guide, which compiles all the tips in one practical e-book. As always, please let us know in the comments about any additional topics or features you'd like to hear about, and feel free to share your own productivity tips with the community.

>access_file_
1272|blog.unity.com

How to optimize game performance with Camera usage: Part 1

The world is better with more creators in it, and our new content series, Accelerate Success, is our way of showcasing the technical experience of our Accelerate Solutions Games team. This team is our professional services group responsible for consulting, co-development, and full development with our customers. They often take on some of the most complex and challenging Unity work, where our customers are pushing the boundaries of what Unity can do.

This series, made up of technical eBooks and webinars, is our way of giving back to the greater gaming community that has helped build Unity. The Accelerate Success series will strive to showcase pragmatic and methodological tips and best practices gathered from our experiences working with top studios from around the world. Each piece in the series is curated and written by a different Software Development Consultant from our Accelerate Solutions team, and is based on real-life scenarios and findings.

This blog post series was written by one of our leads on the Accelerate Solutions - Games team, Bertrand Guay-Paquet. Bertrand is based in Unity's Stockholm office, and the findings in this post series come from Project Reviews he has carried out for the team. A big part of the work completed by Unity's Software Development Consultants on the Accelerate Solutions Games team is to deliver Project Reviews: annual engagements included for Unity customers who subscribe to our Integrated Success plan. During these engagements, we spend two days onsite (or, lately, on Zoom) reviewing the client's project in depth. A Software Development Consultant deep dives into the project and its workflows, then delivers a comprehensive report identifying areas where performance could be optimized for greater speed, stability, and efficiency.

During our Project Reviews, we frequently encounter Camera setups that are suboptimal because they have unnecessary Cameras. We normally flag this immediately as something to investigate, and our final recommendation for improving performance is usually along the lines of combining Cameras and removing unnecessary ones. Over the years, we observed that additional Cameras decreased performance in several real-world games. However, until now, we didn't have a set of benchmarks comparing different Camera setups in a scene. This eBook presents Camera performance benchmarks on mobile hardware to build a better understanding of the situation. We tested both Unity's Built-in Render Pipeline and the Universal Render Pipeline.

At a basic level, a Camera defines a field of view, a position, and an orientation in the scene. These parameters establish the content (Renderers) visible to a Camera. The rendered image of a Camera is output either to a display or to a RenderTexture; in both cases, the Camera's viewport rectangle defines the covered output area.

At a high level, every frame, in the Unity engine's code, each active Camera must:

1. Determine the set of visible Renderers. This is called culling. Culling ensures that only Renderers that potentially contribute to the Camera's rendered image are drawn on the GPU; in other words, the goal is to avoid drawing as many Renderers as possible to improve performance. Three processes are used for this: Renderers on layers that don't match the Camera's culling mask are excluded; frustum culling excludes Renderers that are outside the Camera's frustum (i.e.,
its viewing volume); and occlusion culling excludes Renderers that are completely hidden behind other opaque Renderers (this step is optional and usually not exhaustive).

2. Determine the order in which the GPU draws the visible Renderers. Broadly speaking, the Camera sorts transparent objects from back to front and opaque objects roughly from front to back. Other factors can also affect the rendering order, such as the Render Queue of the material or shader, sorting layers, and sorting order.

3. Generate draw commands for each visible Renderer to submit work to the GPU. These commands set up the Material and Mesh for rendering.

As you can probably imagine, these steps gloss over a ton of details and nuances, but they serve as a good starting point to reason about Cameras.

To measure the cost of additional Cameras, we used everyone's favorite test case: spinning cubes! From back to front, every test scene has:

A 3D grid of spinning cubes. Each slice is a 10x10 grid of cubes; the number of slices is adjustable and is referred to as the “load factor” in this post series.
A single directional light source with soft shadows.
Two “game” UI canvases with a panel each, to simulate two popups in a mobile game.
A separate “overlay” UI canvas for controlling the test.

The full test project is available on our Accelerate Solutions Games samples GitHub repository. We used a configurable number of spinning cubes (the load factor) to simulate different conditions in a game: the first scenario has a low load that should approximate something like a game lobby scene, while the second has a high load to simulate more demanding gameplay.

To get meaningful results, we kept the scene content identical across all Camera setups. The base scene contains the Main Camera, the spinning cubes, and the UI that controls the test. We then created four scenes containing the “game” UI with different Camera configurations:

1. The optimal scene, with only the Main Camera. The “game” UI canvases are set to “Screen Space - Overlay”.
2. The two “game” UI canvases are set to “Screen Space - Camera” and assigned to a second Camera. This is a relatively common Camera setup we've seen used in games.
3. The two “game” UI canvases are set to “Screen Space - Camera” and each is assigned to a separate Camera. This simulates a setup where multiple Cameras are used to composite the UI.
4. The same as the third scene, plus an extra Camera with its culling mask set to “Nothing” so that it doesn't render anything.

Each test scenario additively loads one of these scenes over the base scene. We used culling masks to ensure that no GameObject is processed by more than one Camera.

Rendering pipelines are the processes by which a Camera inside a scene produces an image. Unity has a Built-in Render Pipeline, which was historically the only option for rendering scenes. Unity 2018.1 introduced the Scriptable Render Pipeline (SRP), which offers many more possibilities to control rendering from C# scripts; the Universal Render Pipeline (URP) and the High Definition Render Pipeline (HDRP) are the two SRP implementations provided by Unity. In this post series, we examine the performance overhead of Cameras in the Built-in Render Pipeline and URP, because they support mobile devices. HDRP does not run on those devices, so it was excluded from this test. URP introduced the concept of a Camera Stack, which consists of a Base Camera and one or more Overlay Cameras; this is what we used for setting up the Cameras in the URP tests. See Manager.cs in the test project for the details on how to programmatically set up and change a Camera Stack at runtime.
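Not the project's actual Manager.cs, but the core of a runtime Camera Stack setup looks roughly like this under URP (the camera references are placeholders you would assign in the Inspector):

    using UnityEngine;
    using UnityEngine.Rendering.Universal;

    public class StackSetup : MonoBehaviour
    {
        public Camera baseCamera;    // Render Type set to Base
        public Camera overlayCamera; // Render Type set to Overlay

        void Start()
        {
            // The stack lives on the Base Camera's URP data component.
            var data = baseCamera.GetUniversalAdditionalCameraData();
            data.cameraStack.Add(overlayCamera);
        }
    }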
A secondary reason for excluding HDRP is that its range of configuration options for render features is quite large, which makes it difficult to create fair and meaningful comparison scenarios of inefficient Camera usage.

We used Unity 2020.3.11f1 with IL2CPP development builds for our tests. We ran the tests on both the Built-in Render Pipeline and the Universal Render Pipeline; for each render pipeline, we tested with both low and high load factors, for a total of 16 profiler captures per device. The five devices used are:

Google Nexus 5
Samsung Galaxy S7 edge (G935F)
Samsung Galaxy A20e
iPhone 5s
iPhone 7

We used the Profile Analyzer package to get the mean values of profiler markers on the main thread over captures of 300 frames. Measuring performance reliably is often difficult, since there are both internal and external factors to consider; mobile phones are notorious for the latter, such as thermal throttling and asymmetric CPU cores. The "Mobile performance handbook: thermal stability" published by our Mobile team is invaluable for overcoming these difficulties.

That wraps up blog post #1 of How to optimize game performance with Camera usage. Check out our blog in three weeks' time to read about our test results, Camera usage patterns to avoid, when to use multiple Cameras, and the conclusions of our tests. If you can't wait to read part 2, you can download the full PDF version of the eBook now.

>access_file_
1273|blog.unity.com

8 factors to consider when choosing a version control system

While implementing your first version control system or moving to a new one can be challenging, the long-term impact is worth it. Here's what to consider when choosing a version control system, before you commit.

Game creation is a rewarding though often chaotic endeavor. During development, many team members with distinct roles and varying levels of technical expertise work on the same project, attempting to align on a single production process. Coordinating with more than one person at a time can be difficult, and this challenge scales quickly as your team grows. When issues arise, the time it takes to identify and fix the problem can slow everything, and everyone, down. Therein lies the importance of choosing the right version control system (VCS) for your aims.

Version control allows you to maintain a bird's-eye view of your entire project. It brings fundamental organization to your work, which enables your team to iterate rapidly and efficiently. But how? Project files are stored in a shared database called a repository, or “repo.” Managing your files in this way prompts you to back up your project at regular intervals, and to conveniently roll back to previous versions if things don't go according to plan.

With a VCS, you can make multiple individual changes and “commit” them as a single group. This lumps the changes together, so that when you revert to a previous version, everything from that same group is undone. You can review and modify each change grouped within a commit, or undo the commit in its entirety. Since you have access to the full history, you can more easily trace and eliminate bugs, as well as restore any previously removed features. What's more, because version control is typically stored in the cloud or on a distributed server, it supports your development team's collaboration across time zones and geographies, an increasingly important benefit as remote work becomes commonplace.

Switching from one version control system to another can be demanding, especially if it means changing the technology your team relies on mid-project. But making an informed decision before you commit can certainly pay off. Here are just a few common reasons for implementing or switching to a new version control system:

Improved collaboration among and across diverse teams
Speedy support for managing large binary files and assets
A file-based workflow for making changes to unique files without downloading an entire project build
Flexible and robust branching solutions that let all your teammates work in parallel (not just a select few)
Enhanced integrations with your current development tools
Stronger security to keep your projects protected

Here are eight key factors to consider when choosing your next version control system:

1. Your team

Implementing or switching to a new VCS primarily serves to strengthen teamwork. Whether onsite or remote, version control empowers you and your colleagues to coordinate with one another while working independently. To meet your team's specific needs, ask yourself: How many people will be using this new system? What is their level of technical expertise? What do they think of your current options, and what would they want from something new?

To improve productivity, ensure that everyone is equipped to make changes without the need for technical intervention. Selecting a system that's easy to use for all teammates, including non-technical artists, can reduce the emotional cost of switching to a new VCS.
Less resistance makes for quicker adoption, followed by faster results.

2. File types and sizes

As the gaming industry expands, so too do consumer expectations. Gamers consistently expect better graphics, bug-free launches, post-launch updates, and stellar support. For developers, the stakes are high, and they're going to get higher. The growing complexity of game design means working on and managing more intricate projects with a wider array of file types, larger files, and potentially huge repos. To establish smooth workflows and rapid merges, take on a VCS that can handle your projects at scale. Remember, choosing a version control system is a long game: even if your team isn't handling large files now, your needs are sure to change eventually. Think ahead to set yourself up for lasting success.

3. Ease of setup and maintenance

This factor goes back to the first item on our list: your team. Does your team have the time, expertise, and overall bandwidth to implement and maintain a new version control system? How quickly can implementation take place? Will there be ongoing support once the system is up and running? Let's not forget: an easier setup means a faster setup, and a timely launch gives your team a longer runway to adapt and start working more efficiently. So if the technical aspects of setup and maintenance are a concern, evaluate the customer success rate for the packages you're considering, starting with reviews, to help determine the best possible VCS for your objectives.

4. Your workflows

It's crucial to consider the processes and tools that your team uses day to day. Choosing a version control system that integrates smoothly with other required tools speeds up implementation and minimizes disruptions. Another workflow-related factor is whether your version control system supports branching. Branching is when someone working on a specific set of project files isolates those files from the main project branch, or “trunk.” This allows them to test changes without affecting the main trunk; changes can then be merged back once they've been assessed and proven stable.

In game development, you will likely need to create a high number of branches, and to create them swiftly. Directory-based systems can lead to improper branching and frequent merge conflicts as your team encounters difficulties merging back into the trunk. With proper branching, you can prioritize project stability so that your team keeps working toward their shared goals without impacting the work of others.

5. The timing of your system implementation

Implementing a version control system involves a rigorous adaptation process; after all, the change can completely overhaul existing workflows and tools. Strategically timing the implementation of your VCS can reduce its impact on current projects and speed up adoption. Ideally, implement your chosen VCS either at the start of a new project or during the postmortem phase of a product you've just launched. Of course, things don't always go according to plan: you might find yourself migrating to a new version control system in the middle of a project. While this isn't ideal, it's not impossible. Learn more about migrating your Unity Collaborate projects to Plastic SCM.

6. What you'll spend (and where you'll save)

Version control systems rank among the more affordable DevOps tools; the true costs lie in implementation.
With this in mind, try to evaluate a system for its benefits and for how it can help you save in other ways. It's also worth choosing something that's easily accessible to your whole team, no matter their technical background. As previously mentioned, everyone should be able to contribute autonomously; when your teammates' needs are not met, hidden costs start to surface. If your version control system is challenging to grasp, for example, you'll have to spend more time teaching others how to use it, as well as writing dense internal documentation on version control best practices. And when teammates can't work independently, internal frustrations rise. Whatever you choose, it shouldn't stand in the way of your team's overarching success.

7. Security requirements

Version control isn't just about managing your game's source code. The system you choose will also store other assets, like business and procedural documentation, design files, tool configurations, and so on. To keep these files safe, your VCS should provide multiple levels of protection and permissions. This helps secure your code and IP assets from outside intrusions, and safeguards them from the possibility of internal leaks, too.

8. Level of flexibility

Would you consider your team big or small? Do you work out of a single office, or are you distributed? Depending on these factors, you'll need different levels of flexibility from your version control system. More than this, you'll have to decide whether you'll operate using centralized, distributed, or multisite workflows. Let's look at the advantages of each.

Centralized workflows: A centralized workflow uses a check-in/push model to connect to your main server. Whenever changes are made, they are automatically stored in your repository as a new version. This way, you get powerful branching and merging without cloning your repository to multiple machines. It's a simple and secure solution.

Distributed workflows: With a distributed workflow, you can check in, branch, and merge on your own time, without connecting to your main server. The advantage here is that remote teammates can work apart, at speed, without having to worry about slow networks or VPNs.

Multisite workflows: Multisite is a blend of centralized and distributed workflows. At each location, teammates work in a sort of mini centralized workflow, where they share their branches and progress, merging, pushing, and pulling among the team, until they finally push to the main server on their own schedule. Multisite workflows are optimal for teams working on a shared codebase across different cities or continents. In this situation, you should establish a host server at each work site and then copy changes between those servers; if you don't, the teams working at sites without servers will experience slower responses than the others.

Now that you have an idea of what's at stake when choosing a version control system, it's time to evaluate some of the most popular options out there.

Git: Open source, free, and easy to use, Git is one of the most popular version control systems around. It features distributed repos and strong branching and merging capabilities, but can't handle large binary files as effectively as other solutions on the market.

Perforce (Helix Core): Helix Core is an enterprise-level version control system used by game studios like EA and Ubisoft. This VCS features centralized repos and handles large binary files.
However, it does not feature visual repos, so its adoption might be more challenging for non-technical team members.

Apache Subversion: Like Git, Apache Subversion is a free and open source version control system. It features centralized repos and can handle large binary files, but you must be connected to the main server to use it, which is not ideal for working offline and could be a hindrance to larger or distributed teams.

Plastic SCM: Plastic SCM is a flexible version control system that supports programmers and artists alike. It excels at handling large repos and binary files, and, as both a file-based and changeset-based solution, gives you the ability to download only the specific files you're working on rather than the entire project build. Even more, Plastic SCM is the only version control system on the market to feature visual branching: it can handle thousands of branches at once, and it doesn't make you choose between centralized and distributed workflows.

Want to know what else Unity can do for you? Discover Unity solutions to overcome challenges at every stage of development, from big idea to big success.

>access_file_
1274|blog.unity.com

How Volkswagen Group of America visualizes vehicles for 2030 and beyond

What will the vehicles of the future look like? How will drivers and passengers interact with them? How is autonomous driving going to become a reality? These are some of the many questions a group of engineers, designers, scientists, and futurists are answering at Volkswagen Group of America's Innovation Center California (ICC) in Silicon Valley.

At the ICC, one of three global research centers for Volkswagen Group Innovation, the team's charter is as inspiring as it is challenging: to predict what the world will look like in 2030 and beyond. Forecasting the far future helps the automaker better anticipate and identify the needs of its customers across its worldwide family of brands, including Audi, Bentley, Bugatti, Lamborghini, and Volkswagen.

To show the future, the ICC doesn't use a crystal ball. It uses Unity. With expertise spanning film and animation, software engineering, and VR development and design, the ICC's group of Unity users stretches the software to its full capabilities to solve diverse problems, including:

Interaction with far-future scenarios (interior and exterior design and customer journey design)
Human-machine interface (HMI) design, including 3D user interfaces (UIs)
Synthetic data generation for machine learning-powered products

The ICC provided a behind-the-scenes look at their work in these areas. Learn more in our report co-created with the ICC. Here's a sneak peek at what's inside.

Unity is a key component of the ICC's visualization pipeline, according to Alisia Martinez and Andrew Gwinner, frontend XR software engineers, and Dij Jayaratna, a senior product designer. “Using Unity, we've built projects for tethered and standalone VR headsets, mobile AR devices, custom controllers, in-vehicle experiences, and rendered cinematic videos.” The audiences for these projects range from product designers assessing the ergonomics of their designs to management reviewing final vehicle concepts. The team brings computer-aided design (CAD) data into Unity using Pixyz and then creates interactive experiences across different platforms to ideate, prototype, and communicate.

“The translation of a concept to an interactive experience is rarely perfect on the first pass,” said Martinez, Gwinner, and Jayaratna. “Being able to have our designers provide notes in real-time allows us to reach the ideal representation of the concept much more quickly than if we were using other tools.” How much faster? The team estimates its Unity-based workflow visualizes content in less than half the time, and at less than half the cost, of traditional methods.

When it comes to in-vehicle experience design, Unity's tools help the ICC design richer and more immersive content for future vehicle human-machine interfaces (HMIs), which go beyond the center console screen and into instrument clusters and head-up displays (HUDs). “Having a game engine as our base allows us to have much more complex interactions and visualizations,” said Martinez and Loren Skelly, senior manager of UX Design & Concepts. “We can get pretty far designing concepts on our computer screens and test benches, but to truly test our concepts, nothing compares to having an in-car driving experience. Our toolchain with Unity allows us to do that … The ability to test a proof of concept in-vehicle and give feedback directly to our software team to make adjustments to the car in real-time is unparalleled.”

The ICC uses machine learning (ML) and computer vision extensively to develop products that improve over time and with customer usage.
One of the key components of these ML-powered products is structured, labeled data. Acquiring this data in the real world can be time-consuming and expensive; at the ICC, synthetic data is emerging as a more affordable, scalable alternative for generating ML training data. “Most perception neural networks rely on labeled data, which is costly and prone to error,” said the ICC's Elnaz Vahedforough, technical project manager. “By using synthetic data, once the labeling task is set up, the labeling is essentially free, and other costs are minimized.”

The ICC generates images and ground-truth data in Unity to train neural networks for autonomous driving components such as sensors, perception, prediction, and driving. Beyond the cost considerations, synthetic data generated with Unity can be used to construct scenarios that rarely occur (e.g., accidents or unusual objects on the road) or harsh weather conditions such as fog and heavy rain. Vahedforough noted, “This makes it possible to recreate edge-case scenarios safely.”

As the ICC charts its path forward, the group works hand in hand with Unity's Integrated Success team. “As experienced Unity users, we know there are a million ways to do one thing, and a million more ways to optimize it,” said Martinez and Skelly. “The Unity Integrated Success team helps us identify the ideal way as we focus on the experience we are trying to create, while achieving the best design and implementation.”

Learn more about the ICC's innovative work with Unity in this report. Inspired by VW's cutting-edge capabilities? Bring the power of these technologies to your business:

Unity Industrial Collection – Create interactive visualization experiences from CAD and 3D data for mobile devices, PCs, AR and VR devices, and other platforms. Try or buy online today.
Unity Computer Vision – Create high-quality synthetic datasets for computer vision training and validation.
Unity for HMI – Connect HMI development processes, from design to deployment, to create stunning, interactive user experiences for in-vehicle infotainment (IVI) systems and digital cockpits.

>access_file_
1275|blog.unity.com

How to get started licensing IPs for your games

As game studios look to diversify the ways they acquire users in an increasingly competitive market, they're turning to the world's top IPs, causing a major shift in the top free-to-play game charts from hyper-casual to IP-based titles. In fact, 30% of the top 100 grossing games utilize IP, according to GameRefinery.

In the app economy, IP, or intellectual property, refers to the literary and artistic works (often movies, TV shows, and characters) that third-party brands license out to app and game developers. For mobile games, IPs often bolster player LTV and reduce churn rate. For the IP licensors, mobile games are a valuable way to increase brand recognition, largely due to the platform's lower barriers to entry compared to console and PC; these top entertainment companies are also attracted to mobile gaming's impressive revenue and user growth.

Squeezing the most out of a licensed IP is an art of its own. Developing IP-based games is a competitive market, and you'll need a well-thought-out strategy to effectively scale your titles. Here are some tips for licensing IPs for your games:

1. Optimize what you're already good at with a suitable IP

Tying your mobile game to a major IP can have a huge impact on your installs and revenue, which is why you want to take what you're already good at and find the most suitable IP to maximize performance. First of all, don't change your style just to fit a specific IP; it's far more impactful to add an IP to a game genre that you've already mastered. With that in mind, the key to success is finding an IP with an existing fan base that is likely to enjoy that genre. This way, you're retaining your current users who already enjoy your games, while attracting a new, engaged audience.

For example, Glu, a leading developer of mobile games, is known for its role-playing storylines, so it was able to easily develop titles around celebrities with strong fan bases who already follow their lives through TV and social media, such as Kim Kardashian, Gordon Ramsay, or WWE fighters. Similarly, Jam City is known for its match-3 games and has a strong, loyal audience of casual players. Disney Emoji Blitz, which lets users collect Disney-themed emojis, is a great way for Jam City to keep its current players while reaching Disney fans around the world; Disney fans are often young and enjoy a casual way to interact with their favorite characters.

2. Account for royalty payments when determining profitability

While IPs are great for scaling a title, they're not a free pass to acquire more users. Even after the initial cost of the licensing agreement, revenue will be split between you (the developer) and the IP licensor, so it's essential to be aware of these recurring costs and to ensure your marketability equation is profitable before diving into a partnership. To do that, first define reporting expectations and limitations with the IP licensor. Identifying the commitment you owe to the brand is essential in determining how often royalty payments are required, allowing you to measure net revenue and profitability.

To determine whether the license will be profitable, look at the marketability equation: eCPM = CPI × IPM, where CPI is the cost per install and IPM is installs per thousand impressions. The IP is going to increase IPM and lower CPIs. You also need to account for royalties, which could reduce your net LTV. Ultimately, you want to ensure you can achieve the same eCPMs and scale with the lower CPIs.
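To make the equation concrete (the figures here are hypothetical, not from the post): a campaign converting at an IPM of 25 installs per 1,000 impressions with a $2.00 CPI yields an eCPM of 25 × $2.00 = $50. If royalty payments cut the CPI you can sustainably bid to $1.60, that same IPM only supports 25 × $1.60 = $40, so the IP's lift in IPM and retention has to close that gap for the license to pay off.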
If you don't see high profitability, try setting a more aggressive ROAS goal based on net LTV. This allows for higher retention and stronger conversion values that increase overall IPM. If a higher ROAS goal still doesn't do the trick, it may be best to negotiate with the brand about royalty payments and agreements. Communication is incredibly important when partnering on an IP, and a strong bridge between the teams will help foster valuable discussion.

3. Build a strong bridge between the three main teams

IP games require collaboration between the IP partner, the developer, and the creative team that designs your ads. With poor communication costing the average organization $62.4 million per year in lost productivity, according to Inc., it's vital to be proactive about ensuring a smooth content and communication flow between these three teams.

First, it's important to show your IP partner that you understand the world they're trying to build and the pillars of their brand. If the brand is a movie franchise, everyone on the team should watch each film at least once to better understand the IP. It's also important to put yourself in the shoes of fans through personal research, to relate to what they enjoy about the brand. For example, an outsider may assume Marvel and Star Trek are just about defeating bad guys; in reality, Marvel is about heroes and fighting, and Star Trek is about exploration and discovery.

In the same vein, keep in mind that gaming and creatives are not the core business of the licensor. The Harry Potter franchise began in 1997, but the first mobile game was released in 2018; brands often exist long before their first mobile game. The IP licensor therefore needs as much education about gaming and creatives as you need about their brand. For example, rather than just emailing them creative assets for approval, it's more productive to set aside time to explain the creative process and why creatives are important to growth, and to show them creatives in context. Ultimately, it all comes down to setting the right expectations, getting a valuable dialogue going, and agreeing on a mutually workable process that builds a bridge between the teams. With strong communication in place, you can home in on your creative strategy to optimize your ad revenue.

4. Build quality creatives

Engaging creatives are key to success for any game business, and are often the difference between acquiring users at scale and struggling for liftoff. Because IP brands are often extremely recognizable (you can spot Disney's font from a mile away), your creatives are going to be under extremely watchful eyes. Since ads that misrepresent an IP can alienate a portion of the brand's audience, creating quality ads for your IP titles should be a top priority.

The first step to creating a quality ad is ensuring there's a strong interplay between the creative experience and the core gameplay. Especially for IP titles, it's essential to know the audience and what the brand stands for when designing creatives; you don't want a disconnect between the brand, the game, and the ad. Ultimately, your creatives should reflect the pillars of the brand in the same way the game does.

You also want to make sure your creatives are authentic and educational. Authenticity, meaning putting time into understanding the motivations of the audience, is important for driving scale from the right users. Also, show respect for the brand's existing audience by telling them everything they need to know about your game, including their favorite, trusted characters.
By the end of the ad, the user should know exactly what to expect from your app and why they should download it. You're not going to find a winning creative on the first try, which is why it's important to A/B test as often as possible and to get your IP brand on board with the creative process as early as possible. Learn more about ironSource's creative management solution here. Similarly, it's important to be aware of the creatives shown within your own IP title.

5. Ensure the creatives shown in your app are safe

With 87% of consumers believing brands are responsible for ensuring ads appear in safe environments, according to New Digital Age, you have to be careful about the creatives shown in your IP game. Big brands with cult followings, such as Harry Potter, Disney, or Marvel, put a lot of trust in you to uphold their reputation, so you want to put yourself in the best position to ensure your user experience is safe. On top of that, IP licensors may place limitations on what's allowed to be shown; brands can have varying levels of control over creatives, and some IPs will have heavy restrictions.

Prioritize preserving brand integrity and safety by monitoring the ad content shown to your users. You don't want to show negative or inappropriate ads next to your IP content. For example, ironSource's ad quality solution lets you see a gallery view of the ads served to your users, so you can inform a network directly about problematic or unsafe ads. Learn more about Ad Quality by ironSource here.

Today, brands are increasingly looking to bring their IPs into mobile games, and game studios are keen to take them up on it. With so much growth in this category, you want to be prepared to make the most of your IP titles and beat out the competition. The above tips are a great place to start optimizing your IP strategy.

>access_file_
1276|blog.unity.com

Unity AI 2021 interns: Navigating challenges with robotics

AI@Unity is working on amazing research and products in robotics, computer vision, and machine learning, and our summer interns worked on AI projects with real product impact.

As robots get more sophisticated and robot tasks more complex, the need for simulation is increasing. Simulation allows developers to scale, since they don't need a physical robot to represent every scenario they need to test. It also makes it possible to develop and test certain tasks during development, especially those that cannot be carried out until the robot is fully deployed. Our Unity Robotics team is focused on enabling robotics simulation by harnessing the power, assets, and integrability of the Unity engine while building robotics-specific tools and packages that expand simulation capability. The Unity Robotics Hub features demos, tutorials, and packages to get you started simulating your robot today.

During the summer of 2021, our interns worked diligently to make valuable contributions to our work at Unity. Read about their projects and experiences in the following sections.

This summer, I had the amazing opportunity to work on integrating inverse kinematics and robot controllers into Unity as part of the robotics team. When users need to simulate robots, particularly robotic arms, they need to control the robot using the same or similar APIs that they would use to control the real robots. These APIs are known as robot controllers, and they provide a variety of functionalities, including moving the robot from one position to another, moving a single joint (in joint space), or even moving the robot in a circle.

Robot controllers work primarily in joint space; that is, commands are given as target angles for each joint. Humans, however, mostly care about the position and orientation of the end effector in Cartesian space (i.e., X, Y, and Z coordinates in our 3D world). Thus, the goal of inverse kinematics is to determine which joint angles correspond to a given position and orientation in Cartesian space. Inverse kinematics is a crucial part of a roboticist's toolkit, so this package makes Unity even more capable and easier to use as a robotics simulation platform.

Integrating these features in Unity proved to be an immense challenge that required me to brush up on my linear algebra, physics, calculus, computer science, and even pre-calculus skills, while concurrently designing the software in the most user-friendly way. I also learned about simulating industrial robots in VR by creating a demo where users can move a cube in VR and the robotic arm follows it. With challenge comes great opportunity, however, and effectively single-handedly designing, building, and shipping such a fundamental piece of code for enabling roboticists in Unity has truly been an honor. It is unbelievably rare for employees to find themselves looking forward to, and being consistently challenged by, their work on a daily basis, and I am lucky to say that I found that experience at Unity!

In industrial applications, multiple robots with different specialized capabilities must work in concert to carry out complex tasks. This project showcases how coordination between multiple robots can be achieved via the Unity Editor and robotics simulation packages, along with ROS 2, to carry out a find-and-ferry task in a warehouse. The demonstration also highlights an advantage of Unity over other robotics simulation tools, where multi-agent simulations like this are challenging to accomplish.
Our simulation consists of two types of robots, which we call Findbot and Ferrybot. Multiple Findbots are responsible for finding target cubes in a warehouse environment using machine learning, while a single Ferrybot navigates to, picks up, and drops off these cubes at a designated location. To accomplish this, each Findbot is equipped with a camera for detecting the cube, while the Ferrybot has a robotic arm for picking it up. This example project will be useful for robotics developers and researchers looking to use Unity's robotics tools in their own simulations.

Overall, this was a great experience because we were able to use and integrate a wide array of Unity packages into our project. For instance, we used the Perception package for computer vision to collect the data used to train our pose estimation model. We also used the inverse kinematics package (mentioned in Jacob's project above) on Ferrybot for picking up the cubes. Taking a dependency on a project being developed in parallel with ours was a major challenge, but it was a great opportunity to learn collaboration and communication. It is also very rewarding to know that our project will be used to prepare a ROSCon 2021 workshop.

If you are interested in building real-world experience by working with Unity on challenging artificial intelligence projects, check out our university careers page. You can start building your experience at home by going through our demos and tutorials on the Unity Robotics Hub.

>access_file_
1277|blog.unity.com

It’s official: Parsec is now part of Unity

In 2016, Chris and I started Parsec with one goal: make a low-latency remote desktop application performant enough to play PC games from anywhere, across any device. Our assumption was that if a technology was purpose-built to stream video at 60+ frames per second in HD quality while shaving as many milliseconds off the stream as possible, we could significantly expand the world's access to software, games, and tools. Even in those early days, we hoped that any ultra-low-latency technology would be powered by Parsec.

Today, we have the pleasure of announcing that Parsec has officially joined Unity, helping us accelerate toward that goal: democratizing access to all of the tools and software needed to build and enjoy interactive 3D experiences.

Almost five years ago, we launched the first version of Parsec, and Chris shared his technical goals in a blog post. Over the course of those five years, there have been countless people supporting Parsec to help us get here. Most importantly, everyone who has been a member of the Parsec team deserves a massive congratulations and has earned our deepest gratitude for helping us build and define the world-class product Parsec has become.

The Parsec community (and their feedback) has also been an enormous source of inspiration. Their imaginative use cases, creative applications, and novel tools have repeatedly driven us to push the boundaries of what our technology can do. For nearly a year after our launch, we had fewer than one hundred active customers each day; that early feedback and those early needs helped define the product. Over time, as our community grew to hundreds of thousands of customers per week, the community's passion stayed strong and helped support our development efforts. If you were there, thank you. Seriously.

To get this far, Parsec also required financial investment and expert advice. Notation, Lerer Hippeau, NextView, HP Ventures, Makers Fund, Mini Fund, Gridlov, MBK Capital, and Andreessen Horowitz believed in us and saw how Parsec could change the way the world accesses software, content, and tools. We really appreciate your guidance and support. Finally, although Chris and I are Parsec's co-founders, our families (especially Megan and Allison) were with us at every step of this journey, supporting and encouraging us every day. Thank you.

Through the verticals business at Unity and its relationships with 94 of the top 100 game studios in the world, the immediate impact Parsec can have on industrial and gaming use cases is going to grow exponentially. We couldn't think of a better company to help us accelerate in the short term while also expanding our opportunity to impact the future so dramatically. Applications built with Unity are downloaded more than 5 billion times a month, reaching an average of 2.5 billion devices globally. We believe Parsec will bring value to each and every Unity customer, as well as everyone interacting with real-time 3D (referred to for the rest of this post as “RT3D”).

The potential of our original goal keeps expanding: Parsec can help convert anyone creating in 2D or non-interactive applications into a creator of RT3D applications. We can simultaneously give creators the freedom to work from any device, at any time, on their own terms.
We can push their experiences further by using Parsec streaming to deliver their interactive experiences to more consumers globally.

There are approximately 500 million high-performance computers in the world, and even fewer are capable of running the most demanding software. Meanwhile, there are approximately 2.7 billion people who own a smartphone and 2 billion computers globally. With more than 7 billion people on the planet, you can quickly see that the vast majority of the world does not have hardware capable of running an RT3D application. But with access to the internet and a low-powered device, they can connect to hardware that gives them the opportunity to build, create, design, develop, and interact with an RT3D experience via Parsec.

Over the next decade, hundreds of millions of people will become creators of RT3D interactive experiences. Democratizing access to the tools to create the next phase of online consumer experiences is a mission we're thrilled to be a part of. In the future, the digital and real worlds will blend together even further, moving from computer and phone screens to a pervasive and omnipresent digital world. Meeting the technical demands of this future is an enormous undertaking, let alone giving everyone the opportunity to edit and iterate upon this world. It's pretty exciting.

Streaming real-time, interactive content from the cloud or another computer to your laptop, phone, AR glasses, and more will open up a plethora of opportunities for creators and anyone else to change the digital world around them. We're just getting started, and within Unity we have even greater resources and opportunities to help shape this future.

Although this isn't everyone who has been part of this journey, we'd like to call out and thank this group for helping us build Parsec.

Team (in order of appearance): Jamie Blanks, Erik Nygren, Dan Applegate, Jake Lazaroff, Alex Chaparro, James Stringer, Josiah Savary, May Kim, Ronald Huveneers, Charlie Tran, Marco De la Cruz, Max Sebela, Jakob Wilkenson, Keith Cook, Wontae Yang, Binh Hoang, Callum Watson, Vince Auletta, Eric Fahs, Steve Kehaya, George Kehaya, Meg Esch, Jason Hart, Jim Coleman, Michael McCormack, Malyse McKinnon, Marcus Stoll, Mike Nicoli, Nicole Closson, Devin Kelly, Sam Leavens, Fernando Boom, Dave Doherty, Val Nuccio, Karthik Selvakumar, Justin Valletta, Martin Trang, Susanne Blix, Zac Overson, Adrienne Merrick-Tagore, Paul Johns, Andrew Koonmen, Erin Boardman, Gabrielle Lysenko, Fernando Martinez, Kelli Branam, Darius Iglesias, Avery Vernon-Moore, James King III, JonnyLee Giard

Advisors: Shafqat Islam, Asif Rahman, Eros Resmini, Monte Ford, and Joost van Dreunen

Contractors: Richard Vinicius Gerotto Silva, Max Morris, Omar Panduro, Ignacio Ramirez

Investors: Notation Capital, Lerer Hippeau, NextView, HP Ventures, Makers Fund, Mini Fund, Andreessen Horowitz, Gridlov L.P., MBK Capital

>access_file_
1278|blog.unity.com

6 ways to optimize UA with iOS 15 SKAdNetwork

In June 2021, Apple announced that iOS 15 would bring new opportunities for gathering performance data from apps with SKAdNetwork (SKAN): developers can now get a copy of the winning postback by adding the NSAdvertisingAttributionReportEndpoint key to an app's Info.plist. With full transparency into the entire postback across all marketing channels, developers can adjust their strategy for the new SKAdNetwork era and optimize UA (user acquisition) in real-time.

So why is this so ground-breaking? Under Apple's ATT framework, access to cross-app data is limited. Developers can learn about their marketing performance through postbacks that are anonymized using mechanisms like privacy thresholds and anonymization timers. Since postback data now provides the most important insights into your app's marketing performance, extracting as much information from it as possible is crucial to continue analyzing and optimizing your UA activity.

For example, universal SKAN reporting from ironSource lets any developer running their UA on the platform integrate a simple endpoint to collect, verify, and analyze postbacks - saving you the technical hassle of building a dedicated endpoint from scratch. Here, Yevgeny Peres, VP Growth at ironSource, shares 6 ways to optimize your UA with iOS 15's SKAdNetwork.

1. Analyze SKAN adoption for each UA channel

Your goal is to run campaigns on channels that have 100% SKAN adoption - that way, you're able to accurately analyze performance and maximize postback data. To do that, you need to continuously check adoption rates by tracking the volume of postbacks across UA channels and the share of voice on each - a low volume of postbacks indicates a lack of readiness for SKAN. If you identify any lagging UA channels, get in touch with them to understand and resolve issues.

2. Know the SKAN version each channel is running

With each new version of SKAN, developers get greater insights and capabilities - that's why it's important for your UA channels to stay up to date. For example, version 2.2 of SKAN enables you to see view-through attribution (VTA) information per channel. If you see that a channel is running below 2.2, it technically cannot offer this valuable data, which may make optimization more difficult.

3. Research privacy thresholds

Conversion value and source app are essential values in analyzing and optimizing your UA performance. Apple applies a mechanism known as a "privacy threshold" to protect the anonymity of the data, so you need to meet this threshold to receive the values. It's important to identify the channels not meeting the privacy threshold and make decisions accordingly that give you maximum understanding of your campaigns' performance. To do so, look at the percentage of postbacks in each channel that don't include conversion values or source apps.

4. Visualize trends and reporting

For the first time ever, Apple lets you see where your users are coming from across all your UA channels. Using ironSource, you can filter this postback data into a variety of granular breakdowns, like date, source app, and conversion values, and then visualize these data points with easily digestible charts and graphs. With this information at your fingertips, it's easier to optimize campaigns and build out your UA strategy.
5. Export raw data

The ability to export raw data is part of the larger dedication to transparency that the new SKAN environment provides to developers. You can export data regularly and use it to back up important information as you troubleshoot an issue, or share performance metrics and insights with your team to stay in sync.

6. Forward your data to your own BI stack and MMP

Different SKAN solutions offer distinct benefits for optimizing UA - choosing just one endpoint to receive winning postback data could mean missing out on an advantage. So choose an endpoint that lets you forward data to all the relevant parties, like your MMP and BI stack. This ensures you can continue to track and verify UA performance internally and across multiple partners, without having to select just one as your endpoint and risk losing the benefits that other solutions offer.

How to get started with universal SKAN reporting from ironSource

Want to start taking advantage of these 6 capabilities of SKAN? Select ironSource as your endpoint. Here's how to do it:

- Select Info.plist in the project navigator in Xcode
- Click the add (+) button next to a key in the property list editor, then press Return
- Type the key name: NSAdvertisingAttributionReportEndpoint
- Choose "String" from the pop-up menu in the Type column
- Type the following URL: https://postbacks-is.com

Take full advantage of the transparency that the iOS 15 update provides with a solution that removes the operational overhead from your plate and lets you optimize UA performance in real-time.
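Once those steps are in place, the raw XML of your Info.plist should contain an entry along these lines (a minimal excerpt; the surrounding dict and your app's other keys are omitted):

<!-- Info.plist excerpt: routes a copy of each winning SKAdNetwork postback to the ironSource endpoint -->
<key>NSAdvertisingAttributionReportEndpoint</key>
<string>https://postbacks-is.com</string>

From then on, iOS sends a copy of each winning postback to that endpoint, where it can be collected and verified alongside your other channels.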

>access_file_
1279|blog.unity.com

3 ways to optimize the ad experience for your users

When integrated carefully, in-game ads are a powerful way to increase revenue without compromising the UX of a game - especially since today there are multiple ad formats available that put the player experience first, such as rewarded videos, playable ads, and interactive end cards. Still, sometimes things outside of your control can lead to a poor ad experience that may cause users to leave the game. Keep reading to learn how to prevent this from happening by optimizing the ad experience for your users while increasing revenue.

Identify bad ads

The first step in optimizing the user experience is keeping an eye on the ads shown in your app. Often there's a correlation between user churn and the type of ad content being shown. Negative ad experiences can involve inappropriate content, spam, bugs, and auto-redirects. It goes without saying that avoiding such content must be prioritized in order to prevent user churn.

Use a tool like Ad Quality to get more visibility into the ads being displayed in your game and take control of your game's UX. Look at metrics like churn rate per advertiser and quality CTR (QCTR), which goes one step further than CTR and indicates whether users are automatically redirected to the app store or go there intentionally. Once you've identified ads with poor UX, report the advertiser to your network partner to ensure they don't damage your game's experience again. Mobile game developer TutoTOONS, for example, decreased the number of user complaints about inappropriate ads by 60% using Ad Quality by ironSource, while Jam City was able to unblock competitor content and boost CPMs by 20%.

Provide zero latency using progressive loading technology

Rewarded videos can do wonders for both your retention and ARPDAU - but when executed poorly, they can cause users to lose engagement and churn. One of the most common causes of a bad rewarded video UX is latency - your users do not instantly see the ad after tapping its traffic driver, often because the ad network doesn't have an ad available at that moment to fill the ad space. To avoid churn and ensure a positive user experience for your rewarded video ads over the long term, removing latency is critical.

To do so, game publishers have typically chosen from three options:

- The first approach is to only show users a rewarded video traffic driver when there's an ad ready to display
- The second option is to always display the traffic driver, and if the ad inventory isn't filled, users are left staring at a blank screen until they exit out
- The third option is the same as the second, just with a slightly better user experience - if an ad isn't available, a pop-up appears that says so

The drawbacks of these options are clear, harming ARPDAU, the user experience, or both. That's why in recent years, developers have dedicated considerable time to optimizing their monetization strategy - typically through hybrid bidding and waterfall setups - to find the balance between sophisticated auctions and reduced latency. However, even the most optimized setup can't deliver zero latency.
Monetizing with a mediation solution that has dedicated technology to remove latency entirely is the best way to serve as many rewarded videos as you want without worrying about a poor UX that causes churn.

A/B test your ad unit implementation strategy

In addition to identifying bad ads and managing latency, where you put ads in your game, the frequency with which they appear per session (capping), and the interval of time between each ad (pacing) are key components you'll need to test to preserve a positive ad experience for users while maximizing ARPDAU.

Placement

Let's start by looking at rewarded videos. Placement is all about where in your game you actually provide players access to the ads via traffic drivers. The most effective placements are those in which players can receive real value in their gaming experience in return for engaging with a rewarded video - typically these are at pain points, such as when they run out of lives, fail a level, or lack in-game currency to progress.

To optimize the user experience, put yourself in your players' shoes: identify various points in your game that you believe will be ideal for a rewarded video placement. Don't take a stab in the dark - run A/B tests, rolling out the new placements to limited audiences and comparing ARPDAU and retention data between the groups. Also run A/B tests to determine which rewards users value the most. The ads with the highest usage and engagement rates will likely indicate that the reward is highly valuable to users.

Placement is also important for optimizing the user experience with interstitial ads. For an ad-based game, like in the hyper-casual genre, placing interstitials when the user opens the game, loses a level, returns to the homepage, or gives up on watching a rewarded video works best - the key is to avoid disrupting the gameplay itself. For IAP-heavy games, where users have higher LTVs and retention rates, it's best to be more cautious, showing interstitials only at the end of game sessions, or when users do not engage with your rewarded video ads or your IAP offers. Again, each time you try one of these placements, make sure you're A/B testing and measuring retention rates to see what works best with your specific player base.

Capping and pacing

Placement is just one piece of the puzzle when it comes to A/B testing your ad unit monetization strategy and preserving a good user experience. Capping and pacing should also be tested frequently. Because rewarded videos are opt-in, you can be pretty liberal with capping and pacing for this ad unit - after all, there's little risk of damaging the user experience by disrupting the game flow.

With interstitial ads, be more careful with capping and pacing. Because users don't choose to see these ads, they can be disruptive to the game experience and cause churn. Remember that genre plays an important role here - hyper-casual or casual gamers will have a higher tolerance for interstitial ads than midcore or hardcore gamers. You want to find the sweet spot where you maximize revenue without hurting retention rates - and to do that you'll need to A/B test.

Neon Play put this advice into practice and were able to preserve their retention rates while increasing ARPDAU. CEO Oli Christie said: "We never considered testing as standard practice, but with ironSource it's so easy, so now we do it all the time for each game."
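To make the gating, capping, and pacing rules above concrete, here is a minimal Swift sketch. AdExperienceManager and its thresholds are illustrative assumptions rather than part of any real SDK - wire the ad-readiness flag and the show calls up to whichever mediation platform you use.

import Foundation

// Hypothetical helper enforcing the rules discussed above.
final class AdExperienceManager {
    private let maxInterstitialsPerSession = 4                      // capping: tune via A/B tests
    private let minSecondsBetweenInterstitials: TimeInterval = 120  // pacing: tune via A/B tests
    private var interstitialsShown = 0
    private var lastInterstitialAt: Date?

    // Only surface the rewarded-video traffic driver when fill is guaranteed,
    // so players never tap a button that leads to a blank screen.
    func shouldShowRewardedVideoButton(adReady: Bool) -> Bool {
        return adReady
    }

    // Gate interstitials on both a per-session cap and a pacing interval.
    func canShowInterstitial(now: Date = Date()) -> Bool {
        guard interstitialsShown < maxInterstitialsPerSession else { return false }
        if let last = lastInterstitialAt,
           now.timeIntervalSince(last) < minSecondsBetweenInterstitials {
            return false
        }
        return true
    }

    // Call after an interstitial is actually displayed.
    func recordInterstitialShown(now: Date = Date()) {
        interstitialsShown += 1
        lastInterstitialAt = now
    }
}

The point of the structure is that the rewarded-video button never appears without guaranteed fill, while interstitials are throttled by both a per-session cap and a pacing interval - exactly the knobs the A/B tests described above are meant to tune.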

>access_file_
1280|blog.unity.com

How to build a conversion value strategy to optimize iOS UA campaigns

To measure user quality and optimize campaigns on iOS, developers today are building strategies around the SKAdNetwork conversion value. While it's the main way to get accurate insight into user quality, it comes with several challenges:

- It's universal - all of your marketing channels will receive postbacks based on the same conversion value schema
- Only one solution or platform can update the conversion value - whether that's your own in-house solution or a third-party solution
- Changing your strategy is tricky - if you decide to change your conversion values, it's critical to understand that doing so will harm both measurement and optimization across all your campaigns

Adoption picks up pace

Apple introduced SKAN 2.0 more than a year ago, and by now the majority of developers have a measurement strategy in place built on conversion values. In fact, according to ironSource data, adoption of the conversion value (CV) picked up after Apple fixed a bug in November 2020, and accelerated with the rollout of iOS 14.6. Today there are multiple strategies for mapping conversion values, but two dominate: the most common is mapping in-app events to bits, followed by revenue measurement. Let's dive into each (a minimal code sketch of both mappings appears at the end of this entry).

Strategy #1: In-app events to 6 bits

A bit, or binary digit, is a basic unit of information that can take one of two possible values - typically 1 or 0. Using bits, SKAdNetwork allows you to measure whether specific in-app events happened (at least once) or not, without any priority or sequence. As a developer, you're limited to measuring up to 6 events. That's because SKAN allows 64 conversion values - with 6 bits, each flagging whether an event occurred or not, there are exactly 2^6 = 64 possible combinations. CV 10, for instance, could indicate that the user reached level 20 and subscribed.

Pros and cons

The 6-bit strategy lets you understand whether specific events that you've defined as good signals of user value occurred or not. In addition, theoretically, you can use a 6-bit strategy to optimize differently on several networks: for example, on Facebook you could optimize toward a specific IAP event, and on Google you could optimize toward a specific level completion, like the user reaching level 20. The main drawback is that it's not effective for games that can't correlate events in the first 24 hours post-install to user value - which, beyond hyper-casual, includes many games.

Strategy #2: Revenue measurement

The second most common strategy for conversion value management assigns a revenue-based value to each conversion value. It's important to note that traditionally, IAP revenue has been the only form of revenue available for this. There are a few ways to do it:

Counting dollars or cents

With this approach, each conversion value represents a specific amount of revenue that the user has generated. For example, conversion value 1 could be $0.99 and conversion value 2 could be $1.50, following this pattern all the way up to conversion value 63. The revenue range of your 63 conversion values will depend on your internal benchmarks - for example, if your data shows that your users generate between $0.99 and $10.99 within their first 24-48 hours, you'd logically start your conversion value 1 at $0.99.

Buckets

Alternatively, some developers split their conversion values into buckets.
For example, CV 1 could equal everything between $0.99 and $2.99, CV 2 everything between $2.99 and $5.99, and so on. The range of each bucket can differ - for instance, CV 3 could be everything between $10.99 and $25.99. It's up to you to test and determine which range per bucket makes the most sense for your game.

The idea is that once you get the postbacks, you can work out the average revenue of each bucket, which in turn gives you a fuller picture of user value. The benefit is that you then have the data needed to optimize your UA bidding strategy toward the users with the highest CVs. By contrast, if each conversion value is assigned a specific number, as in the first approach, it's likely that most of your traffic will sit at the lower end of your conversion value map and only a small amount will reach the top values. As a result, even though you want more of the top users, you'll find it very difficult to efficiently optimize toward these users because you lack the necessary data.

Pros and cons

The beauty of this strategy is that there's no better proxy for paying users than paying users themselves. This approach fits well for games that can properly monetize users within the first 24 hours and have a statistically significant part of their ARPU curve occur on D0. However, because this measurement strategy traditionally excludes ad revenue, ad-based developers such as those making hyper-casual games have been left without an effective revenue-based measurement solution - until now.

How to measure user value based on ad revenue

To correlate conversion values to ad revenue, ad-based game developers need a solution from an MMP or mediation platform. For example, ironSource's conversion value manager, which is available to ironSource mediation partners, provides ad revenue insights that enable developers to continue measuring D0 ARPU and optimize toward D0 ROAS across the board. Start managing your conversion value strategy with ironSource's CV Manager solution inside our iOS toolkit. Learn more here
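As promised above, here is a minimal Swift sketch of the two mapping strategies. The event names, bit assignments, and bucket boundaries are illustrative assumptions rather than a real schema - only SKAdNetwork.updateConversionValue(_:) (StoreKit, iOS 14+) is an actual API call.

import StoreKit

// Strategy #1: pack up to six yes/no events into a 6-bit conversion value (0-63).
struct SkanEvents {
    var finishedTutorial = false  // bit 0
    var reachedLevel20   = false  // bit 1
    var madePurchase     = false  // bit 2
    var subscribed       = false  // bit 3
    var watchedRewarded  = false  // bit 4
    var returnedDay1     = false  // bit 5

    var conversionValue: Int {
        var value = 0
        if finishedTutorial { value |= 1 << 0 }
        if reachedLevel20   { value |= 1 << 1 }
        if madePurchase     { value |= 1 << 2 }
        if subscribed       { value |= 1 << 3 }
        if watchedRewarded  { value |= 1 << 4 }
        if returnedDay1     { value |= 1 << 5 }
        return value
    }
}

// Strategy #2: map D0 revenue into buckets, one conversion value per bucket.
// CV 0 captures revenue at or below $0.99; the edges are illustrative only.
func conversionValue(forRevenue revenue: Double) -> Int {
    let upperBounds: [Double] = [0.99, 2.99, 5.99, 10.99, 25.99]
    for (cv, bound) in upperBounds.enumerated() where revenue <= bound {
        return cv
    }
    return upperBounds.count  // everything above the top bucket
}

// Report whichever value your schema produces. During the SKAdNetwork timer
// window, only updates to a higher value are accepted.
func reportConversionValue(_ value: Int) {
    if #available(iOS 14.0, *) {
        SKAdNetwork.updateConversionValue(value)
    }
}

With these bit assignments, the CV 10 example from the text falls out directly: reaching level 20 sets bit 1 (worth 2) and subscribing sets bit 3 (worth 8), so 2 + 8 = 10.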

>access_file_