// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1695 transmissions indexed — page 59 of 85

[ 2022 ]

20 entries
1161|blog.unity.com

Creating games for everyone: Introducing Unity Learn’s new accessibility course

Creating a game means creating a shared experience. Whether it’s a small personal project or a global commercial release, a game is an invitation for players to connect with and respond to your ideas as a creator. And when you make that game more accessible, you’re extending the invitation to a wider and more diverse audience.

How should you go about prioritizing accessibility as an emerging creator? If you’ve never done it before, it might feel overwhelming – but we’re here to help!

Practical Game Accessibility is a new, free online course for intermediate creators. It’s an introduction to creating games that more players can enjoy. As you work through the course, you’ll learn about prioritizing accessibility while building a game guided by an inclusive design approach.

To support this learning journey, we created Out of Circulation – a small vertical slice of a point-and-click narrative adventure game. You’ll use Out of Circulation as a case study to explore and expand upon throughout the course.

“You’ll work it out, Sureswim,” Old Smalt reassures you as she passes you the apanthometer and sends you on your way. Surely the benevolent tech-witch and her gadgets will help you solve the mystery surrounding the local library. While your sidekick Wink is an expert in eavesdropping, you’re going to need all the support you can get!

Not working on a game? No problem! Although Practical Game Accessibility uses games and game development as its core example, you can apply much of what you’ll learn to non-game projects, such as simulations, visualizations, and other real-time applications.

The gaming community is diverse. A huge number of people enjoy playing games, and this includes players with disabilities. By working directly with these players as you develop your game, you’ll create a better, more inclusive experience for a broader audience.

Prioritizing accessibility is critical to supporting players with disabilities, but it’s also just good design practice. When you center accessibility as a design pillar from the very start of a project, it’s not extra work – it’s just part of making the best game that you can. Already in the middle of making a game? It’s never too late to make your game more accessible. Even relatively small changes can have a big impact on your players’ experience.

In Practical Game Accessibility, you’ll start with an introduction to accessibility and inclusive design. After that, you’ll work through pre-production for your own game idea, prioritizing accessibility each step of the way. When you get to the production stage, you’ll explore a range of focused tutorials on the Out of Circulation case study to help you bring your game to life. Finally, you’ll reflect on the overall experience and identify your next steps as a creator who prioritizes accessibility.

By the end of the course, you’ll be able to:

- Apply an inclusive design approach to your work as a creator
- Identify critical accessibility requirements for your projects
- Implement accessibility review and feedback cycles throughout development
- Design and develop features using an inclusive design approach
- Maintain a focus on accessibility while adapting to constraints and emerging project needs

Practical Game Accessibility is available for free on Unity Learn. Now’s the perfect time to get started on your journey to creating more accessible digital experiences!

>access_file_
1162|blog.unity.com

World Oceans Day: RT3D projects make waves and encourage conservation

Not just a source of natural beauty, oceans are critical to all life on Earth – and they’re at risk. Riddled with plastic and facing oil spills, overfishing, and rising global temperatures, these ecosystems are in real danger. Read on to learn about World Oceans Day and why protecting our oceans is essential.

Today is World Oceans Day, a time for the planet to unite and take action to protect our oceans. This year’s focus is raising awareness and support for the 30x30 movement, which asks world leaders to commit to protecting “at least 30% of the world’s lands, waters, and oceans by 2030.” You can join the movement by writing to your nation’s leader.

Oceans provide more than just incredible views and untapped areas for exploration. They also play a key role in our survival:

- A breath of fresh air. It’s estimated that between 50–80% of the world’s oxygen originates from oceans.
- The building blocks of life. In addition to fish, oceans provide many other foods, vitamins, and minerals that we rely on.
- Climate control. Oceans absorb over 90% of the world’s excess heat and much of the carbon dioxide we produce, which means they’re integral to preventing climate catastrophe.
- Unlimited natural energy. It’s clear that we can’t keep exploiting non-renewable energy resources. Fortunately, electricity can be generated by waves – and the oceans have plenty!

Pollution

“Human activities are threatening the health of the world's oceans. More than 80% of marine pollution comes from land-based activities.” – National Geographic

Pollution of any kind can have devastating impacts on delicate ocean ecosystems. We’ve all seen images of marine animals afflicted by plastic straws, grocery bags, and beer rings, but did you know that air pollution can find its way into the oceans, too? According to National Geographic, air pollution makes up a third of the toxic contaminants that enter the ocean. This, plus the runoff of pesticides from agricultural land, factory sewage, and oil spills, is destroying marine life, contaminating food sources, and depleting vital oxygen supplies.

Climate change

The planet’s rapidly rising temperatures are one of the biggest threats to the oceans. As the planet warms, so do the oceans, which can be deadly to marine life. Currently, 75% of coral reefs are at risk from rising temperatures, which can kill them. Accumulating carbon dioxide in the atmosphere can also lead to a process called ocean acidification, in which the pH of ocean waters decreases. Among other risks, this can dissolve the shells of marine creatures like lobsters and oysters.

Healthy oceans are essential for the survival of all life on Earth, so we need to protect them. We’re committed to ocean conservation as part of our ESG (environmental, social, and governance) efforts to build a more sustainable future and invest in our planet.

Here are some exciting projects using Unity to celebrate the planet’s oceans, educate audiences, and encourage action:

An Otter Planet by Habithéque is an in-progress PC game designed to teach players about water and help them understand its importance to all life on Earth. In addition to raising awareness through play, An Otter Planet will raise money for charities that support water-related protection and revitalization efforts through in-game purchases and charitable donations.

Raft, a PC game developed by Redbeet Interactive, highlights the incredible vastness of the open ocean. Players wake up adrift on a raft and then fight for survival by crafting, growing food, and avoiding shark attacks. Experiencing this game provides a new appreciation for the danger, stillness, and mystery of the oceans.

The Hydrous is an innovative project that designs science-based augmented and virtual reality experiences to engage audiences with the wonders of ocean life. The creators’ goal is to provide “equitable access to ocean exploration,” which in turn builds understanding of beautiful and threatened marine ecosystems.

We believe that the world is a better place with more creators in it, and we’re excited to see the inspiring work being done to realize a sustainable, inclusive, and equitable world for all. Want to hear more inspiring creator stories? Sign up for Unity’s Social Impact newsletter for regular news and updates about our Social Impact work.

>access_file_
1164|blog.unity.com

Profiling in Unity 2021 LTS: What, when, and how

Developing expertise with Unity’s suite of profiling tools is one of the most useful skills you can add to your game development toolbox. Thorough profiling can massively boost the performance of your game, so we want to help you get started with key tips from our newly released e-book, Ultimate guide to profiling Unity games.

Every game creator knows that smooth performance is essential to creating immersive gaming experiences – and to achieve that, you need to profile your game. You need to know not only which tools to use and how, but also when to use them.

Our hot-off-the-press, 70+ page guide to advanced profiling was created together with both internal and external experts. It compiles advice and knowledge on how to profile an application in Unity and identify performance bottlenecks, among other best practices. Let’s look at some helpful tips from the e-book.

Profiling is like detective work: unraveling the mysteries of why performance in your application is lagging, or why code is allocating excess memory. Profiling tools ultimately help you understand what’s going on “under the hood” of your Unity project. But don’t wait for significant performance problems to start showing before digging into your detective toolbox.

The best gains from profiling are made when you plan early in your project’s development lifecycle, rather than just before you’re about to ship your game. Profiling is an ongoing, proactive, and iterative process. By profiling early and often, you and your team can understand and establish a “performance signature” for the project. If performance takes a nosedive, you’ll be able to easily spot when things went wrong and quickly remedy the issue.

You can also make before-and-after performance comparisons in smaller chunks by using a simple three-point procedure: First, establish a baseline by profiling before you make major changes. Next, profile during development to track performance against your budgets. Finally, profile after the changes have been implemented to verify whether they had the desired effect.

You should aim to profile a development build of your game, rather than profiling it from within the Unity Editor. There are two reasons for this:

- The data on performance and memory usage from standalone development builds is much more accurate than results from profiling a game in-Editor, because the Profiler window records data from the Editor itself, which can skew the results.
- Some performance problems only appear when the game is running on its target hardware or operating system, which you’ll miss if you profile exclusively in-Editor.

The most accurate profiling results come from running and profiling builds on target devices and using platform-specific tooling to dig into the hardware characteristics of each targeted platform. While Unity ships with a range of free and powerful profiling tools for analyzing and optimizing your code, both in-Editor and on hardware, there are also several great native profiling tools designed for each platform, such as those available from Arm, Apple, Sony, and Microsoft. Using a combination provides a more holistic view of application performance across all target devices. For a full overview of the tools available, check out the profiling tools page here.

Unity’s profiling tools are available in the Editor and Package Manager. Each tool specializes in profiling a different part of the process (a holistic “sum of all parts” workflow). Familiarize yourself with the following profilers so they become part of your day-to-day toolbox:

- The Unity Profiler is where you want to start and spend most of your time. It measures the performance of the Unity Editor and your application in Play mode, and connects to the device running your application in Development mode. It gathers and displays data on the performance of your application, such as how much CPU time is being used for different tasks, from audio and physics to rendering and animation. Check out this course on profiling to begin.
- The Memory Profiler provides an in-depth analysis of memory performance to identify where you can reduce memory usage in parts of your project and the Editor. The Memory Profiler is currently in preview but is expected to be verified in Unity 2022 LTS.
- The Profile Analyzer aggregates and visualizes both frame and marker data from a set of Unity Profiler frames to help you examine their behavior over many frames (complementing the single-frame analysis already available in the Unity Profiler). It also lets you compare two profiling datasets to determine how your changes impact the application’s performance.
- The Frame Debugger lets you freeze playback for a running game on a particular frame, and then view the individual draw calls used to render that frame. In addition to listing the draw calls, the Debugger lets you step through them one at a time, so you can see how the scene is constructed from its graphical elements.
- The Profiling Core package provides APIs for adding contextual information to Unity Profiler captures.

Steve McGreal, a senior Unity engineer and co-author of our advanced profiling e-book, put together the following high-level overview. Feel free to use it as a reference sheet. While the detailed explanation of how to use the tools can be found in the e-book, this flowchart illustrates three main observations to consider for your workflow. Download the printable PDF version of this chart here. For more, see the linked resources on how to use each of the profiling tools at the end of this post.

A common way that gamers measure performance is through the frame rate, or frames per second.
However, it’s recommended that you use frame time in milliseconds instead. For example, you might have a game that renders 59 frames in 0.75 seconds at runtime, with the next frame taking 0.25 seconds to render. The average delivered frame rate of 60 fps sounds good, but in reality players will notice a stutter, since the last frame takes a quarter of a second to render.

Strive for a specific time budget per frame when profiling and optimizing your game, as this is crucial for creating a smooth and consistent player experience. Each frame has a time budget based on your target fps. An application targeting 30 fps should always take less than 33.33 ms per frame (1000 ms / 30 fps). Similarly, a target of 60 fps leaves 16.66 ms per frame.

Most modern console and PC games aim for a frame rate of 60 fps or more. In VR games, a consistently high frame rate is even more important, because a low or uneven frame rate can cause nausea or discomfort for players. Mobile games might also require more restrictive frame budgets to avoid overheating the devices they run on. For instance, a mobile game might target 30 fps with a frame budget of only 21–22 ms so that the CPU and GPU can cool down between frames.

Use the Unity Profiler to see if you are within frame budget. Below is an image of a profiling capture from a Unity mobile game with ongoing profiling and optimization. The game targets 60 fps on high-spec mobile phones and 30 fps on medium/low-spec phones, such as the one in this capture. This game is running comfortably within the ~22 ms frame budget required for 30 fps without overheating. Note the WaitForTargetFPS marker padding the main thread time up until VSync, and the gray idle times in the render thread and worker threads. Additionally, you can observe the VBlank interval by looking at the end times of Gfx.Present frame over frame; draw up a timescale in the Timeline area or on the Time ruler up top to measure from one of these to the next.

If you’re within the frame budget, including any adjustments made to account for battery usage and thermal throttling, then you’ve successfully finished performance profiling until next time – congratulations! Now look at memory usage to see if it’s within budget as well.

If your game is not within frame budget, the next step is to find the bottleneck: in other words, determine whether the CPU or the GPU is taking the longest. If it’s the CPU, determine which thread is the busiest – therein lies the bottleneck. The point of profiling is to identify bottlenecks as targets for optimization. If you rely on guesswork, you can end up optimizing parts of the game that aren’t bottlenecks, resulting in little or no improvement. Some “optimizations” can even worsen your game’s overall performance.

The main thread is where all of the game logic and scripts perform their work by default, and where features and systems such as physics, animation, UI, and rendering take place. See the screenshot below for an example of what a main thread-bound project looks like. Although the render and worker threads look like the previous example that’s within frame budget, the main thread here is clearly busy with work during the entire frame. Even accounting for the small amount of Profiler overhead at the end of the frame, the main thread is busy for over 45 ms, meaning that this project achieves frame rates of less than 22 fps. There is no marker showing the main thread idly waiting for VSync; it’s busy for the whole frame.

The next stage of investigation is to identify the parts of the frame that take the longest and pinpoint any underlying causes. Use both the Unity Profiler and the Profile Analyzer to evaluate and address the biggest costs.
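The frame-budget arithmetic above is easy to sanity-check in a few lines. This is an illustrative sketch (not Unity API code): it computes the per-frame budget for a target frame rate and shows why the 59-fast-frames-plus-one-slow-frame example averages to 60 fps despite a very visible hitch.

```python
def frame_budget_ms(target_fps: float) -> float:
    """Per-frame time budget in milliseconds for a target frame rate."""
    return 1000.0 / target_fps

# Budgets quoted in the article.
print(round(frame_budget_ms(30), 2))  # 33.33 ms per frame at 30 fps
print(round(frame_budget_ms(60), 2))  # ~16.67 ms per frame at 60 fps

# Why average fps hides stutter: 59 frames rendered in 0.75 s,
# then a single frame that takes 0.25 s on its own.
frame_times_s = [0.75 / 59] * 59 + [0.25]
average_fps = len(frame_times_s) / sum(frame_times_s)
worst_frame_ms = max(frame_times_s) * 1000
print(round(average_fps, 2))  # 60.0 -- looks fine on paper
print(worst_frame_ms)         # 250.0 -- a quarter-second hitch the player feels
```

This is why frame time (especially worst-case frame time) is a more honest metric than average frames per second.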
Common bottlenecks often derive from physics, non-optimized scripts, the garbage collector (GC), animation, cameras, and UI. If the source of the issue is not immediately obvious, try enabling Deep Profiling or Call Stacks, or use a native CPU profiler. In our 95-page performance optimization guide, we collected a list of common pitfalls you can encounter and prepare for.

During the rendering process, the main thread examines the scene and performs Camera culling, depth sorting, and draw call batching to compile a list of things to render. This list is passed to the render thread, which translates it from Unity’s internal platform-agnostic representation into the graphics API calls required to instruct the GPU on a particular platform.

In the Profiler capture shown below, you can see that the main thread waits for the render thread before it begins to render the current frame, as indicated by the Gfx.WaitForPresentOnGfxThread marker. The render thread is still submitting draw call commands from the previous frame and isn’t ready to accept new draw calls from the main thread; it spends its time in Camera.Render.

The Rendering Profiler module provides an overview of the number of draw call batches and SetPass calls in every frame. The best tool for investigating which draw call batches your render thread issues to the GPU is the Frame Debugger. Common causes of render thread bottlenecks include poor draw call batching, multiple active cameras in the scene, and inefficient Camera culling.

Being bound by CPU threads other than the main or render threads is not that common an issue, but it can arise in projects that use the Data-Oriented Technology Stack (DOTS) – especially if work is moved off the main thread into worker threads using the C# Job System. Here’s a capture from Play mode in-Editor that highlights a DOTS project running a particle fluid simulation on the CPU. As you can see, the worker threads are packed tightly with jobs, which suggests that a large amount of work has been moved off the main thread. The frame time of 48.14 ms and the gray WaitForJobGroupID marker of 35.57 ms on the main thread indicate that the worker threads are doing more work than can realistically be completed within a single frame on this CPU.

WaitForJobGroupID shows that the main thread has scheduled jobs to run asynchronously on worker threads, but needs the results of those jobs before the worker threads have finished running them. The blue Profiler markers beneath WaitForJobGroupID depict the main thread running jobs itself while it waits, in an attempt to make the jobs finish sooner.

The jobs in your project might not be as parallelized as in this example. Perhaps you just have one long job running in a single worker thread. This is fine, so long as the time between the job being scheduled and the time its results are needed is long enough for the job to run. If it isn’t, you will see the main thread stall, waiting for the job to complete, as in the above screenshot. You can use the Flow Events feature in the Timeline view of the CPU Usage Profiler module to see when jobs are scheduled and when their results are expected by the main thread. For more information on writing efficient DOTS code, see our DOTS best practices.

You might notice that your main thread spends time waiting for the render thread (as shown by Profiler markers such as Gfx.WaitForPresentOnGfxThread), while at the same time your render thread displays markers such as Gfx.PresentFrame or .WaitForLastPresent. This means that your application is GPU-bound, and you’ll need to focus your optimization efforts on GPU bottlenecks to improve overall performance. The following capture was taken on a Samsung Galaxy S7 using the Vulkan graphics API.
Although some of the time spent in Gfx.PresentFrame in this example might be related to waiting for VSync, the extreme length of this Profiler marker shows that the majority of the time is spent waiting for the GPU to finish rendering the previous frame.

If your application appears to be GPU-bound, you can use the Frame Debugger to gain a quick understanding of the draw call batches being sent to the GPU. However, this tool can’t present specific GPU timing information; it only reveals how the scene is constructed. To carefully investigate the cause of GPU bottlenecks, examine a GPU capture from a suitable GPU profiler. The tool you use depends on the target hardware and chosen graphics API.

Common causes of poor GPU performance include inefficient shaders, expensive post-processing effects, transparent overdraw (often from particle effects or UI), large or uncompressed textures, meshes with excessively high polygon counts, and excessive output resolutions (e.g., rendering at 4K).

Performance optimization and profiling are massive topics. If you’re looking for more information, check out our recently released e-book, Ultimate guide to profiling Unity games. You’ll get more than 80 pages of tips and tricks created in partnership with multiple experts, including those on our Integrated Support services team. In fact, some of these experts also helped put together our 100-page guide on performance optimization for mobile and PC/console – packed with actionable tips on how to avoid creating bottlenecks in the first place.
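The triage flow this article walks through (check the frame budget, then decide whether the CPU or the GPU is the bottleneck, then find the busiest thread) can be summarized as plain decision logic. The sketch below is a hypothetical illustration, not a Unity API; the timing inputs stand in for numbers you would read off a real Profiler capture.

```python
def diagnose(frame_ms, budget_ms, cpu_ms, gpu_ms, thread_ms):
    """Classify a captured frame.

    frame_ms / budget_ms: measured frame time vs. target budget.
    cpu_ms / gpu_ms: time attributed to CPU work vs. GPU work.
    thread_ms: per-thread busy time, e.g. {"main": ..., "render": ...}.
    All inputs are hypothetical stand-ins for Profiler readings.
    """
    if frame_ms <= budget_ms:
        return "within budget: check memory usage next"
    if gpu_ms >= cpu_ms:
        return "GPU-bound: inspect shaders, overdraw, textures, resolution"
    # CPU-bound: the busiest thread is where the bottleneck lives.
    busiest = max(thread_ms, key=thread_ms.get)
    return f"CPU-bound on {busiest} thread: profile its biggest markers"

# The main-thread-bound capture from the article: over 45 ms of main thread
# work against a ~22 ms budget.
print(diagnose(45.0, 22.0, 45.0, 12.0,
               {"main": 45.0, "render": 8.0, "workers": 6.0}))
```

Real captures are messier (threads overlap, and GPU time must come from a platform GPU profiler), but the order of questions is the same as in the flowchart above.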
For additional resources, take a look at our previous blog post series on physics, UI, and audio settings; graphics and assets on mobile or console; and memory and code architecture. If you’re interested in learning how your team can gain direct access to engineers, expert advice, and project guidance, peruse Unity’s Success Plans here.

Tune in to our new Ultimate profiling tips webinar featuring experts from SYBO Games, Arm, and Unity for tips on how to identify common performance challenges in mobile games using both Unity and native profiling tools. This webinar will cover:

- Key considerations for creating lean, performant code and optimized memory usage for smooth performance across low- and high-end devices
- Managing thermal control to conserve precious battery cycles on mobile devices
- Profiling strategies at all stages of game development, and how to test them to build a solid methodology
- Expert insights on Android profiling

Join our roundtable and live Q&A on June 14, 2022 at 11:00 am ET / 8:00 am PT.

We want to help you make the most of your Unity applications. If there’s any optimization topic you’d like us to explore further, please let us know in the forums. We’d also like to hear about the formats you prefer so we can improve our e-books and other learning materials.

>access_file_
1165|blog.unity.com

P2E vs. P&E: Models for using blockchain technology in gaming

Games that use blockchain technology are rapidly gaining popularity, and now’s the time to get acquainted with what this technology is and how to start building your own games using the blockchain. In the first part of this series, we gave a high-level introduction to blockchain technology and its key benefits for improving retention and opening up new opportunities for monetization. Here in the second part, Gal Fisheloviz, Director of Business Strategy, Blockchain Gaming at ironSource, and Etai Koren, Director of Blockchain Games at ironSource, dive deeper into use cases for the technology in gaming by discussing the two main frameworks currently in play:

- Play to earn (P2E)
- Play and earn (P&E)

Keep reading to learn more about these models and where the future of blockchain gaming lies.

Play to earn

P2E blockchain games are pretty much what they sound like: users play them to earn currency. For example, players complete challenges like quests or battles to earn tokens they can add to their crypto wallets or spend in the game. This is one of the earliest models of blockchain gaming, and it remains popular today, though it has changed over time.

Early P2E games didn’t let users start playing with minimal investment: back then, users had to purchase an NFT from a limited collection, which meant they already needed a certain level of understanding and competency with cryptocurrency. But often, these players purchased one or more of the limited NFTs available for the sole purpose of making a profit, rather than for the enjoyment of the game itself. As more users joined these games, the NFTs and tokens they earned grew scarcer and more valuable, the rewards became more lucrative, and the cost of playing rose. This priced out many average users who couldn’t afford to play and led to the creation of guilds.

Guilds are groups of players or investors that buy high-value NFTs and then rent them out to users so they can play the games and earn tokens. Users pay the guilds back with their tokens in what essentially functions as a revenue share. Though they’re now highly influential within game economies, guilds operate outside of these ecosystems – meaning the resources they earn usually don’t get spent back within the game.

With the value of assets increasing exponentially, play-to-earn games started launching quickly, with little entertainment value and with the sole purpose of attracting high-value investors and guilds. These users created an exchange ecosystem where players would rent NFTs, earn tokens, then sell some of those tokens in hopes of continuing to earn more profit. This pattern led to higher user churn: players who were only chasing a profit would leave a game for one with a better earning opportunity. In these cases there was more value leaving a game than entering it. Resources weren’t redistributed within the game, because users earned their rewards and then sold them off to other players for a profit, often outside of the game, instead of using their rewards to buy in-game assets like upgrades.

Given the imbalance of resources entering and leaving the game, and users failing to retain for long, it becomes very challenging to build a sustainable economy under the play to earn model. To have a healthy game ecosystem, a blockchain gaming model needs new users to constantly enter and invest their resources back into the game. This maintains the value and profitability of the assets in the game, which keeps them appealing for users. P&E, or play and earn, games came into the space to solve the problems of user churn and an unsustainable game ecosystem.

Play and earn

The P&E model evolved from the P2E framework and puts the focus back on providing real entertainment value to create a sustainable, open game economy.
Like P2E games, play and earn games let users win NFTs and tokens that get added directly to their crypto wallets. But unlike their predecessor, many P&E games are free to play; players can then earn rewards either by being highly skilled or by purchasing assets like NFTs and tokens.

Play and earn games aim to build value with their gameplay. High entertainment value encourages players to keep playing, and to do so they’ll want rewards they can spend in the game to improve their gameplay experience. For example, in a P&E game, players can earn tokens to buy a character NFT that unlocks special powers. While the NFT can still be traded on a secondary market or sold outside the game, it’s more likely to be spent on in-game bonuses or upgrades that make the game more enjoyable and rewarding to play. As rewards get spent back in the game instead of exchanged for profit outside of it, the game economy runs more smoothly and sustainably than under the P2E model.

A functioning game economy is based on the flow of assets: you need to offer rewards at a rate that’s balanced with spending so prices remain accessible to users, and players need to earn enough rewards to remain engaged and maintain demand. Basically, the inflow and outflow of assets need to be balanced. This is more likely to be the case with P&E than with P2E.

The future is play and earn

Play to earn games are heading toward an unsustainable ecosystem, and an unsustainable future. Prices continue to keep many users from playing, and as a result more guilds and investors are staking their claim. A market for play to earn blockchain games will likely remain, and many developers are trying to solve the challenge of the game economy and make it more sustainable. However, unless they start prioritizing entertainment and focus on giving users a truly enjoyable gameplay experience, P2E is likely to remain niche.

As many agree, the fundamental purpose of games, including those on the blockchain, is to be fun and engaging, which is why play and earn games are likely to become the next dominant model for games using blockchain technology. They provide entertainment value along with the ownership benefits of being on the blockchain, which drives demand to spend assets within the game. This creates a virtuous ecosystem that lets players earn rewards they fully own while encouraging in-game spending to improve the gameplay experience.

When executed correctly, the well-oiled game economies of the P&E framework can open more doors of opportunity for game design, monetization, and marketing. Whichever model you choose, using blockchain technology for gaming now sets you up to improve both user engagement and your monetization strategy as this technology gains popularity.
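The inflow/outflow balance described in this article can be made concrete with a toy simulation. This sketch is purely illustrative (the numbers and token model are invented, not drawn from any real blockchain game): it contrasts an economy where earned rewards are mostly cashed out against one where they are mostly respent in-game.

```python
def simulate_economy(rounds, players, earn_per_player, respend_rate, new_value_in):
    """Toy model: track the net value held inside a game economy.

    respend_rate: fraction of earned rewards spent back in-game;
    the rest is extracted (sold off outside the game).
    new_value_in: value that new or existing players invest each round.
    All parameters are hypothetical.
    """
    pool = 100.0  # arbitrary starting in-game value
    for _ in range(rounds):
        earned = players * earn_per_player
        extracted = earned * (1.0 - respend_rate)  # value leaving the game
        pool += new_value_in - extracted
    return pool

# P2E-style: most rewards are cashed out, so extraction outpaces investment.
p2e = simulate_economy(rounds=10, players=100, earn_per_player=1.0,
                       respend_rate=0.2, new_value_in=50.0)
# P&E-style: most rewards are respent in-game, so the pool grows.
pe = simulate_economy(rounds=10, players=100, earn_per_player=1.0,
                      respend_rate=0.9, new_value_in=50.0)
print(p2e)  # value drains out of the game round after round
print(pe)   # value accumulates, sustaining the economy
```

With these made-up numbers, the low-respend economy drains below zero while the high-respend economy grows, which is the imbalance the P&E model is meant to correct.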

>access_file_
1166|blog.unity.com

The Metaverse Minute: AWE Auggie Awards finalists using Unity to build the metaverse

Each year we keep our eyes peeled to see who will make the AWE’s prestigious Auggie Awards list of finalists, and 2022 boasts a variety of Unity applications that are helping build the metaverse. These products and solutions, nominated by the AR/VR industry, were voted on by the public and will be judged by a panel of experts. The winners will be announced at AWE USA 2022 on June 2.Before we dive into this exciting list we want to acknowledge that we may have missed some Auggie finalists that used Unity. If you are one of these, please let us know (just read to the end to find out how). Now onto the finalists!Big Rock Creative caught our attention last year with their Auggie nomination for BRCvr, an official virtual Burning Man experience. This year Big Rock Creative captured our attention with the Breonna’s Garden Immersive Experience, a touching virtual reality (VR) tribute to Breonna Taylor in collaboration with Lady Phoenix who won the Societal Impact Auggie last year for her augmented reality (AR) app. The sanctuary honors the life of Breonna Taylor and tells a story of grief, hope and healing. Additionally, we were blown away by Big Rock’s Burning Man (VR Experience) back for a second year. With 200 unique worlds and over 3500 hours of programming, this experience was driven by a desire to connect the Burner Community amid the pandemic.We are thrilled to have not just one but two projects nominated this year for an Auggie Award. Big Rock Creative brings creators together from many different facets to collaborate on all our projects. It's not just one person, but a grand communal effort to make participatory experiences that are radically inclusive. – Athena Demos, CEO and Cofounder, Big Rock Creative The Groove Jones team uses Unity to create a diverse set of experiences. 
Nominated for Best Consumer App, the NAEC’s Nose Art Gallery App allows visitors of the National Aviation Education Center to fly through the air with WWII planes and witness aircraft nose art up close. Nominated for Best Healthcare & Wellness Solution, VIST Neuro-ID uses a VR headset to test cognitive abilities and detect concussions. Designed for wildly different purposes, Groove Jones clearly has the ingenuity to serve many audiences.

Groove Jones leverages the power of real-time render 3D to help our clients bring to life exciting programs for internal training purposes, interactive marketing engagements to connect with their customers, and innovate solutions for health and wellness. Unity Pro is a platform that allows us to break through barriers and develop across any screen imaginable. – Dan Ferguson, Cofounder, Groove Jones

The Magnopus team truly united the digital and physical worlds earlier this year with its Expo Dubai Xplorer. This real-time connected experience allowed millions of onsite visitors to enjoy augmented reality spectacles aligned to real-world locations and allowed remote visitors to access the same AR content from their homes.

Expo Dubai Xplorer is a great example of what the future holds for new and better ways people and places can be united across the physical and digital worlds. This project was a massive chance for us to flex our creative and tech muscles and learn loads along the way. We're now using that knowledge to help others create similar large-scale experiences, without having to develop the technology to drive it. – Daisy Leak, Executive Producer, Magnopus

With Volu, you can record a video with your own device, and then the app’s AI algorithms perform a volumetric reconstruction, transforming your video into a 3D experience – all powered by Unity’s library and enhanced by Unity’s shaders.
The team has plans to build plug-ins that will allow anyone to integrate “volograms” captured with Volu into Unity to create games, AR/VR experiences, and more.

We are very happy to see our amazing AI tech recognised with an Auggie nomination. We have made a great effort to bring volumetric video capture to the smartphone, which will power user-generated content in the Metaverse. – Rafael Pagés, CEO, Volograms

Using 2D static images to explain how the human body’s myriad complex systems work together is like painting a sunset to describe the solar system. With Unity, Octagon Studios created an interactive AR app that helps you understand the details of human anatomy through exploration. Humanoid AR+ has immense educational potential.

Humanoid AR+ offers children and young adults an immersive exploration of the human anatomy, in detail and up close. Utilizing high-quality 3D visualization and powerful developer tools such as Unity, we thrive to provide the best interactive educational apps, as we believe that augmented reality (AR) is the future of learning experiences. – Chitra Ananda, CMO, Octagon Studio

Imagine being able to speak things into existence. That’s the idea behind Anything World. With AI, voice computing, and 3D rendering, this tool is a fast and easy way for anyone to create 3D experiences. This type of genius deserves recognition, not just for the product but for the impact it’s making on creators around the world. After all, the world is a better place with more creators in it.

“We bring 3D worlds to life using machine learning and have over 5,000 creators building out wild experiences with us in Unity right now! We allow people to create living worlds without modeling, rigging or animating anything. We have an incredible team that has made the really-actually-should-be-impossible possible, and we're just getting started.
Why not jump in and try and make things fly at anything.world!” – Gordon Midwood, Cofounder, Anything World

There is a massive opportunity for industrial applications of VR, and the Immerse team is making these experiences easy to create with their SDK. This Unity-based, free-to-use extended reality (XR) tool gives anyone the ability to create enterprise-grade VR. The tool also supports desktop and untethered VR, Android mobile, desktop applications and WebGL.

Built for Unity, the Immerse SDK helps developers get up and running quickly while simplifying and solving the common technical and production challenges of building, hosting and distributing measurable and scalable VR applications. Together with the Immerse Platform, the SDK helps organizations realize the full potential of virtual reality and provides a means through which enterprises can easily scale their own applications and content. For content creators, it also has the added benefits of an upsell path for existing customers and referral opportunities. – Justin Parry, COO, Immerse

The sequel to the smash hit I Expect You To Die is just as fun as the original. I Expect You To Die 2: The Spy and the Liar built on what made the first game so fun – with new puzzles, missions, and, of course, more ways to die. Schell Games is noteworthy as one of the studios pushing VR games forward.

We are honored I Expect You To Die 2: The Spy and the Liar advanced to the finalist round in the “Best Game or Toy” category for the 2022 Auggie Awards! Shoutout to the development team who worked on the original game and the sequel. Thanks to your hard work, we created a VR experience that places players in the heart of intrigue and espionage. We'd also like to thank everyone who voted for our entry during the public voting phase and all the secret agents who supported The Agency over the years. We couldn’t have gotten this far without your enthusiasm and love for the game. Stay sharp, agents!
– Charlie Amis, Project Director for IEYTD2, Schell Games

What the Blaston team achieved with Passthrough API is the perfect example of what is possible when mixing the physical and virtual worlds. With the API, players were able to transform their personal spaces into dueling arenas. As XR technology continues to advance, more game experiences will enrich the physical world with digital content.

The mixed reality arena we’ve added to Blaston is just the beginning. At Resolution Games, we’ve invested in a dedicated AR division to continue building reality-bending experiences for the next generation of AR devices. Unity has been a crucial part of our VR and AR toolkit; its flexibility gives us the room we need to innovate and be first-to-market with a variety of exciting new products in the games space – and Blaston is a true showcase for that. Not only did it accelerate our ability to become the first game developer to bring an experience to the Quest Store using Passthrough API, but its multiplatform support also helped us solve another problem for the XR community: giving influencers and esports organizers the ability to stream VR gameplay in a way that’s optimized for broadcast with the recent release of the Blaston Spectator app on PC. – Paul Brady, President and Cofounder, Resolution Games

Not one, not two, but all of the experiments housed in the Petricore AR Experiments app were created using Unity’s AR Foundation. This whimsical collection of activities includes taking a family photo, petting a virtual dog, mixing paint colors from the real world, and more. This eclectic app is a celebration of AR.

The AR Paint Bucket game is one of our experiments where a player places an AR paint bucket and has to grab colors from the real world to mix and match a given target color. Our inspiration for this was the TikTok trend of people trying to guess the color of mixing paint. We used Unity to develop the Paint Bucket game, relying primarily on AR Foundation.
AR Foundation/Unity made it really easy to jump in and build something that’s fun quickly, which was our goal with these experiments. – Oliver Awat, Lead Designer & Senior Developer, Petricore

Fitness doesn’t have to be boring. With Audio Trip, players are invited to dance across 84 different levels and unleash their inner fitness fanatic. The visual and audio experience is so much fun, you forget that you are burning calories. Besides, we have a soft spot for indies.

Being a finalist for the “Best Indie Creator(s)” Auggie award is a tremendous honor for us at Kinemotik Studios. Audio Trip has been a labor of love for the two of us for the past four years. And in that time we’ve had the privilege of bringing the joys of music and dance to the world through VR on a wide array of platforms. For a tiny team of two, only one of whom is an engineer, porting to and supporting so many platforms ourselves while still developing the game would have been impossible without Unity. By doing most of the heavy lifting, Unity has made it possible for us to bring Audio Trip to a much wider audience. We’ve been able to accomplish much more than a team of only two normally would have. – The Audio Trip Team

If you’re an Auggie Award finalist using Unity, give us a shoutout on Twitter so we can celebrate you! Tag @DigitalTwin with a brief description and the link to your finalist page.

All of this year’s Auggie Award finalists, made with Unity or not, astound us and inspire us. Our world faces so many problems today, and we need creators to help solve them.
Keep creating, and we will see you at AWE.

June 1
9:35 AM | Keynote: Getting beyond the fiction and seeing the reality in the metaverse
11:30 AM | Faster iteration in AR using Unity

June 2
1:00 PM | Revolutionizing the e-commerce shopping experience in real-time 3D
4:25 PM | Real-time 3D changemakers: Solving the world’s biggest problems through tech

June 3
1:30 PM | Augmenting reality with AI
2:05 PM | XR input using Unity: What’s new and what’s next

>access_file_
1168|blog.unity.com

4 myths about cross promotion

In the webinar “How to Scale Your App Portfolio with Cross Promotion,” ironSource Growth Strategy Manager Omer Katzburg walks you through cross promotion and debunks misconceptions about its use.

First, Omer explains the gist of cross promotion: it’s a form of user acquisition where you promote your games to users already playing your other games. It’s a hit with top developers - since you already have user data, this means not only greater targeting capabilities but also greater knowledge of the audience, which can help ease a game launch. Retaining users across multiple games also ensures that you keep high-quality users in your portfolio.

Next, Omer breaks down four of the main misconceptions about cross promotion. Let’s dive in.

Misconception #1: Cross promotion is only for hyper-casual games

Cross promotion has a big impact - it contributes to 20-50% of leading developers’ user acquisition activity. According to Omer, this is true not only for hyper-casual developers, but also for casual developers and studios producing games of many different genres.

Success in cross promotion is not related to the game genre, but rather to the size of the developer’s portfolio. A developer with at least two games in their portfolio is good to go - but the more games you have, the more significant the cross promotion activity. The cross promotion tool is a win-win: by keeping your users in your portfolio, you’re making the most of quality users and also preventing them from potentially leaving for competing games.

Misconception #2: High value players will lose their value in different genres

Today, Omer explains, there’s no reason to put users into boxes - many gamers play different genres simultaneously. Even if you have a diverse portfolio of different types of games, cross promotion can be a huge help.
It’s true that not every game genre combination will work well together, so utilizing and understanding your first-party data is crucial to figure out which games pair best.

If you don’t have this data yet, trial and error is key. Use your regular user acquisition data - if you have a puzzle game and you’re getting high-quality users from a lucky reward game, you can assume that cross promotion is a good fit. For example, the ironSource data below displays additional game genres that lucky reward players installed (alongside playing lucky reward games).

Developers can dig deeper to understand user trends by mapping them out - in this case, laying out the correlation between game categories based on IPM or any other important KPI. Let’s say you want to know where to advertise your casual game: you can see that mid-core games advertised in casino games have very high IPMs. Mid-core games advertised in sports games don’t have the same success, so you can tailor your advertising accordingly. You can also assess the mapped data for ARPU or any other KPI that is relevant to determining your best cross promotion strategy. Note: the mapping presented here is high-level - make sure to go more granular and to create and test this mapping specifically for every game you have.

Misconception #3: Cross promotion will cannibalize my revenue

Cannibalization is an important concern, but in the context of cross promotion, you can lower this risk. The key to limiting cannibalization is simple: find the balance between user churn from the publishing game and user engagement with the advertised game. Note: if you’re a hyper-casual publisher, it’s even less of a concern - hyper-casual gamers are exposed to many competitor ads and usually have low retention rates anyway.

According to Omer, the more similar the published and advertised games are, the higher the chance of cannibalization.
This is because users won’t usually play two similar games at the same time - if a user installs a new, similar game, the probability that they will churn from the former game is higher. If you’re advertising a mid-core RPG inside a published mid-core RPG, you’re increasing the risk of cannibalization. On the other hand, if you advertise a slots game in that mid-core RPG, scaling up would be a challenge because the genres have low IPM correlation.

Reducing cannibalization is not only about the type of game, but also about the users themselves. You can use your first-party data to find the right user segments for cross promotion - then combine this segmentation with your genre mapping (or any other method). This strategic combination should lead to a solid understanding of how to maximize your cross promotion strategy. Even if you need to keep testing for a while to find the sweet spot, the potential for scaling up is always there.

Misconception #4: I should only cross promote low quality segments

Finally, it’s essential to recognize exactly which user segments are best for cross promotion. If you only cross promote your low-value (generally non-paying) segments, they’re unlikely to budge and actually start paying. It’s not easy to convince someone who never pays for games to start paying. As Omer puts it: “once a non-payer, usually a non-payer.”

So the solution is to find the highest payers, right? Not quite. Engaged and paying users tend to act the same no matter the game genre. Still, it’s best not to cross promote all of your highest-paying segments. For example, big spenders are likely to be very loyal to the game, but if they move to another game, their loyalty might not carry over.
To maximize your impact with cross promotion, it’s essential to use your first-party data to find segments somewhere in the middle: open to paying, but not so loyal that you’d be losing a great customer.

The common myths about cross promotion shouldn’t keep you from trying it out. With those concerns demystified, you can put its many benefits to work, using first-party user data and careful research to make informed, profitable user acquisition decisions. To learn more, watch the full webinar below.
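The genre-mapping idea Omer describes can be sketched in a few lines. The genres and IPM numbers below are invented for illustration only; in practice they would come from your own first-party data:

```python
# Hypothetical IPM (installs per mille) observed when advertising one genre
# inside games of another genre. All numbers are made up for illustration.
ipm = {
    ("mid_core", "casino"): 12.4,
    ("mid_core", "sports"): 3.1,
    ("puzzle", "lucky_reward"): 9.8,
    ("puzzle", "sports"): 2.2,
}

def best_publisher_genre(advertised_genre, ipm_table):
    """Return the publisher genre with the highest IPM for a given advertised genre."""
    candidates = {pub: v for (adv, pub), v in ipm_table.items() if adv == advertised_genre}
    if not candidates:
        return None  # no data yet: fall back to trial and error
    return max(candidates, key=candidates.get)

print(best_publisher_genre("mid_core", ipm))  # -> casino
```

The same table-driven lookup works for ARPU or any other KPI; as the webinar notes, the mapping should be rebuilt and tested per game rather than reused wholesale.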

>access_file_
1170|blog.unity.com

Unity and .NET, what’s next?

We’ve recently started a multiyear initiative to help you write more performant code faster and to deliver long-term stability and compatibility. Read on to find out what we’re doing to update the foundational tech stack behind your scripts.

The .NET ecosystem is evolving in a number of beneficial ways, and we want to bring those improvements to you as soon as we can. Our internal .NET Tech Group works on continuous improvement of our .NET integration, including newer C# features and .NET Standard 2.1. But we’ve recently kicked things into a higher gear to improve your developer experience across the board, based on your feedback.

This blog post introduces the issues we’re working on. We also discussed this topic at the Unity Dev Summit at GDC 2022; you can watch the full session here.

The story starts 17 years ago, when our CTO started leveraging the Mono .NET runtime with C#. Unity favored C# due to its simplicity, combined with a JIT (just-in-time) compiler that translates your C# into relatively efficient native code. The remaining and much larger parts of the Unity engine have been developed in C++ in order to provide well-balanced and controlled performance.

For many years, Unity ran on a specific fork of the Mono .NET runtime and C# language (2.0). During that time, we added support for additional platforms. We also developed our own compiler and runtime, IL2CPP, to enable you to target iOS and some console platforms.

In the meantime, the overall Microsoft .NET ecosystem has evolved, with new licensing and support for non-Windows platforms. This evolution allowed us to upgrade the Unity .NET Mono runtime in 2018 and embrace more modern C# language versions (7.0+). The same year, we also released the first version of the Burst compiler, pioneering fast native code generated from a subset of the C# language.
This breakthrough allowed Unity to envision a world where we could extend the usage of C# to other critical segments of the engine without having to develop those parts in C++, leading to the development of the DOTS runtime.

Unity 2020 LTS and Unity 2021 LTS brought newer C# language versions and new .NET APIs. In parallel, we have seen tremendous performance improvements delivered in the .NET ecosystem, as well as a friendlier development environment with the introduction of SDK-style csproj files and the flourishing NuGet ecosystem.

As a result of this long evolution, the Unity platform includes a very large C++ codebase that interacts directly with .NET objects using specific assumptions inherited from the Mono .NET runtime. These are no longer valid or efficient for the .NET (Core) runtime. Furthermore, there’s a complicated custom compilation pipeline bound to the Unity Editor that doesn’t rely on MSBuild and thus cannot easily benefit from all its standard features.

We’ve also been talking to many of you over the past few years, both in interviews and on the Unity forum, to see what we could improve to better enable your success. What we’ve heard is that you want to use the latest C# language, the latest .NET runtime technology, and third-party C# code from NuGet. When it comes to using the Unity platform, you told us you want to get the maximum out of the target hardware with high-quality C# testing, debugging and profiling tools, and good integration between the standard .NET API and the Unity API. As a C# Unity programmer, you want Unity tools that seamlessly work with the rest of your toolbox and enable rapid iteration so that you can achieve best-in-class runtime performance.

Getting there is going to take us several years.
We’ll keep you in the loop with frequent blog and forum updates on the technical challenges that we encounter along the way.

Our first step in this initiative was to huddle with all the people inside Unity who are passionate about C# and .NET to form a C#/.NET Tech Group to drive this effort. We want to build on top of the .NET ecosystem instead of developing custom solutions. To enable you to take advantage of the performance and productivity improvements that come with the latest .NET SDK/Runtime and MSBuild, we want to migrate from the Mono .NET runtime to CoreCLR, the modern .NET (Core) runtime.

This initiative also brings innovation beyond the existing .NET universe, with the goal of delivering faster iteration cycles on your C# scripts. We’ll be working on converging the JIT and AOT (ahead-of-time) solutions – IL2CPP and Burst – to offer the best balance between compile-time efficiency and CodeGen quality.

Externally, we’re working with industry partners like Microsoft and JetBrains to ensure that Unity creators are using the latest .NET technology. We’re also ramping up our participation in open-source communities.

We’re going to break down this endeavor into several steps. Let’s see what’s coming next. This year, the teams are planning to work on the following tracks.

Iteration time remains our top priority, since we know that you want to get more out of your time. Here are a few examples of what we’re doing to improve it.

As part of the compilation pipeline, we’re reducing the time spent in IL post-processing, which is responsible for modifying the compiled .NET assemblies after your C# has been compiled. We now use a persistent process to run the IL post-processing after the compilation phase, which can shave off a few hundred milliseconds.

With the Burst compiler being used more frequently, we’re improving the granularity of detecting code changes with a transitive hashing algorithm.
This lets us identify more quickly which Burst-compiled code we need to recompile. We’re also working on moving the Burst compiler out of process so that it can compile your code faster by running in a separate .NET 6.0 executable.

We’re also making improvements to domain reload by improving the reflection data built behind the scenes whenever the TypeCache is used. And we’re going to add tests and validation to better track iteration-time regressions for packages and project templates.

For the migration to MSBuild, the first step is to decouple our compilation pipeline from the Unity Editor and move it to a separate process. This is a complicated operation because there are years of legacy code - thousands of lines of C++ and C# - that we need to untangle in order to achieve this while staying backward compatible. You won’t see changes from your point of view, but it’s going to pave our path to MSBuild and simplify maintenance.

We’re also going to improve the C# IDE debugging experience with Burst by introducing a mode that automatically switches the debugger to managed debugging when a breakpoint is set on a code path running with Burst. This means you won’t have to manually remove the [BurstCompile] attribute from the code path being debugged.

The work involved in the migration to the .NET CoreCLR runtime has already started, and it’s a very challenging journey. To deliver this migration successfully, we’d like to tackle the problem gradually and make sure that we can release pieces in a way that maintains the stability of existing Unity projects. So, we’re planning to deliver this migration in multiple phases.

First, we’ll provide support for .NET CoreCLR for standalone players on desktop platforms. You’ll be able to select this runtime in your player settings alongside the existing Mono and IL2CPP backends.
This first phase should help us migrate the core part of the Unity engine (which is much smaller than the Editor part), and will hopefully solve a good chunk of the technical challenges involved in this migration. You will still access the .NET runtime through the .NET Standard 2.1 API, and we aim to release this new runtime during 2023.

Second, we’ll port the Unity Editor to .NET CoreCLR and remove support for the Mono .NET runtime at the same time. This second phase will challenge how we reload your scripts in the Editor without using AppDomains, and it will complete the switch to .NET CoreCLR. It will also involve upgrading IL2CPP to support the base class libraries from the dotnet/runtime repository. You will finally have access to the full .NET 7.x or 8.0 API. We hope to release this new Editor during 2024.

.NET Standard 2.1 support in Unity 2021 LTS enables us to start modernizing the Unity runtime in a number of ways. We are currently working on two improvements.

Improving the async/await programming model. Async/await is a fundamental programming approach to writing gameplay code that must wait for an asynchronous operation to complete without blocking the engine main loop. In 2011, before async/await was mainstream in .NET, Unity introduced asynchronous operations with iterator-based coroutines, but this approach is incompatible with async/await and can be less efficient. In the meantime, .NET Standard 2.1 has improved the support for async/await in C# and .NET with more efficient handling of async/await operations via ValueTask, and by allowing your own task-like system via AsyncMethodBuilder.

We can now leverage these improvements, so we’re working on enabling the usage of async/await with existing asynchronous operations in Unity (such as waiting for the next frame or waiting for a UnityWebRequest completion).
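The core idea of async/await - suspending gameplay code at an await point instead of blocking the main loop - can be sketched with Python’s asyncio. This is only a conceptual analogue (Unity scripting is C#, and `fetch_data` below is a made-up stand-in, not a Unity API):

```python
import asyncio

async def fetch_data():
    # Stand-in for an asynchronous operation such as a web request
    # or waiting for the next frame.
    await asyncio.sleep(0.01)
    return "payload"

async def game_logic():
    # The await suspends this coroutine and yields control back to the
    # event loop (the "main loop") until the operation completes.
    result = await fetch_data()
    return result

print(asyncio.run(game_logic()))  # -> payload
```

The iterator-based coroutines the post mentions work similarly at the surface, but they cannot return values through await chains or compose with task-based APIs, which is part of why async/await can be more efficient and ergonomic.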
As a first step, we’re improving support for canceling pending asynchronous tasks when a MonoBehaviour is destroyed or when exiting Play mode, by using cancellation tokens. We have also been working closely with our biggest community contributors, such as the author of UniTask, to ensure that they will be able to leverage these new functionalities.

Reducing memory allocations and copies by leveraging Span. Because Unity is a C++ engine with a C# scripting layer, there’s a lot of data being exchanged between the two. This can be inefficient, since it often requires either copying data back and forth or allocating new managed objects. Span was introduced in C# 7.2 to improve such scenarios and is available by default in .NET Standard 2.1. In recent years, you might have heard or read about the many significant performance improvements made to the .NET runtime thanks to Span (see the improvement details for .NET Core 2.1, .NET Core 3.0, and .NET 6). We want to leverage Span in Unity, since this will help reduce allocations - and, consequently, garbage collection pauses - while improving the overall performance of many APIs.

We hope that you’re all as excited as we are about these changes and features. Let us know what you think about our plans on the forum. We’re also going to be regularly updating the engineering section of the Unity Platform Roadmap, and you can share your feature requests and prioritization suggestions with us there.

Editor's note: This article was last updated in February 2023.
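The zero-copy idea behind Span can be illustrated with Python’s memoryview, a loose analogue of a writable view over an existing buffer (again purely conceptual - Unity’s case is C# Span over native engine memory):

```python
data = bytearray(range(16))   # stand-in for a buffer owned by the engine
view = memoryview(data)[4:8]  # zero-copy slice: no allocation, no data copy
view[0] = 255                 # writes go through to the underlying buffer
print(data[4])                # -> 255
```

As with Span, the win is that producer and consumer share one buffer: slicing, parsing, and partial writes happen in place, so no intermediate managed objects are allocated and the garbage collector has less to do.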

>access_file_
1171|blog.unity.com

Ambitious art: How Mistwalker fulfilled their magnificent vision for FANTASIAN

Mistwalker founder Hironobu Sakaguchi, the creator of the iconic Final Fantasy series, had an ambitious vision for his latest production, FANTASIAN. By importing photos of over 150 miniature handcrafted dioramas and innovating photogrammetry techniques, his team set out to create stunning sets and character effects for mobile.

We sat down with Sakaguchi and Takuto Nakamura, the director and main programmer of FANTASIAN, to peek behind the curtain at Mistwalker. In this blog, they share insight into how the team brought such a behemoth of an artistic vision to mobile. Check out our recent case study to delve even deeper into Mistwalker’s incredible achievement.

FANTASIAN sounds like a true passion project. Where did the ideas for the story and visual style originate?

Mr. Sakaguchi: It all started when I wanted to use dioramas to make stop-motion characters for the Terra Wars project. At that time, I also created the backgrounds using dioramas, and had the actual dioramas on hand. I was looking at them and imagining what it would be like if CG characters were adventuring and playing RPGs on them.

The story is a different expression of the cycle of life and the stars, which I have cherished since the days of Final Fantasy. I came up with a theme that overlaps with the contradictions of modern society, such as the fact that the most chaotic things are born out of the most orderly living things. I also focused on multiple worlds and, knowing that emotion equals energy, sought to fill the player’s heart with a warm feeling.

Though often seen in film, the use of dioramas has not been as common in game creation. Seeing how well it worked for FANTASIAN, what did you learn by combining these approaches?

Mr. Sakaguchi: Visuals are one of the most critical elements of a game. It’s not easy to innovate, and ever since the introduction of CG in Final Fantasy VII, I hadn’t been able to come up with a fresh idea.
Then I was reminded of the handmade detail of dioramas, and I chose the novelty of this visual expression, even though it might not be so compatible with game creation. As a result, we feel that the effect was even better than expected. It made me realize, once again, that the idea of a new visual expression, no matter how big or small, is an essential one.

What were some of the unexpected differences you encountered developing for mobile platforms compared to the previous games your team worked on?

Nakamura: Many of our members have experience in both console and mobile, so we didn’t really have any difficulties with mobile devices. However, it was challenging to make controller and touch controls coexist, especially in the menus. Some bugs occurred with the controller; this area was more costly than expected.

From menus to exploration to combat, tell us about the design process for user interaction and user interface on mobile. How was it different from the approach you would have taken for a PC or console game?

Nakamura: With FANTASIAN, we didn’t think of it as a mobile game. We designed the interface as if we were making a console game. We then made several minor adjustments to the design to accommodate screen resolutions on mobile devices.

The most challenging part of the game in terms of user interaction was the combat. In combat, you have to curve your magic trajectory to aim at multiple enemies. We had to go through a lot of trial and error to find the best camera angle, adjust the trajectory of the curve so that it engulfs the enemies quickly, and achieve a pleasing player experience of defeating many different enemies.

Were there any unexpected aspects of mobile gamedev that required a change to the creative vision for FANTASIAN?

Nakamura: We never gave up on anything in the game just because it was mobile.
The game’s characteristic diorama method was designed to work well with mobile processing power. Its backgrounds are essentially photographs, which means the background model is only a 10,000-to-30,000-polygon model used for depth and photo projection. There’s no lighting, so the cost of drawing the background was relatively low. This allowed us to spend more of our budget on the characters and post-effects, which resulted in a stronger overall picture.

Tell us about the Dimengeon system and how you worked on the design. At first glance, it seems to be a great innovation for random encounters. But after looking more closely, it appears specifically useful for mobile players, who wouldn’t typically be able to engage in long or complex battles.

Nakamura: This was Sakaguchi’s idea. In FANTASIAN, if you touch a treasure chest in the distance where you can’t see the route, NavMesh will automatically take you there. We talked about how new and exciting this was, but the problem was that it became stressful when interrupted by encounters along the way. So we came up with the idea of the Dimengeon system, where encounters are stored.

This system was initially created for field exploration. Still, it led to the exhilaration of defeating many enemies at once by curving magic trajectories in battle. It also made the humble task of leveling up more efficient. I think it’s a very unique and innovative system.

How did you ensure that the visual effects, lighting, and shadows would work with the data captured from the dioramas to maintain your artistic style?

Nakamura: The most effective way to achieve harmony centers on the texture of the characters and the atmosphere. First of all, for the characters, we tried to create a figure-like texture that’s not entirely realistic. I adjusted this until the end to fit with the miniature, handmade feel of the diorama.
The lighting was also handled with a stronger ambient term to bring out this figure-like feel. We added a customized vignette post-effect to create a natural atmosphere. Vignetting is an effect that darkens the corners of the picture, but in FANTASIAN we used it to add color to the image’s corners, as if it were a fog. It’s easier to add color in 2D than with a fog that depends on the diorama’s depth. Because FANTASIAN uses photographs, the depth information is not perfect. That’s why we aren’t as good at depth-based post-effects like fog or depth of field (DOF).

Do you have any tips to share with Unity developers looking to create their own JRPG-style games?

Nakamura: JRPGs are simple in structure, but they tend to be large in volume. We needed many assets, so the most important thing was to manage them effectively. For FANTASIAN, we set up rules for naming and folder structure and then used import scripts to automate the process to a certain extent. This helped us manage the assets.

Debugging is also essential. The simplicity of the structure means that crash bugs are unlikely to occur, but bugs such as flag errors that prevent the story from progressing are more likely. It’s a good idea to have debugging tools in place to detect and reproduce such bugs.

Lastly, we would love to know if there are any fun facts or secrets behind the game to share with fans and other developers?

Nakamura: Sakaguchi is quite flexible and open to individual ideas. Many of the storylines and characters have been changed based on the opinions of our team members. For example, we didn’t have a female character named Valrika at first, but an artist wanted to create a mature female character, so we added her in. Sakaguchi also agreed to make one character a triplet at the end of the game to make the battle more exciting, and even made Tan a cat lover.
However, Sakaguchi was less open when it came to Ribbidon, because he designed the character himself and was very keen to keep his vision for the 3D model. Ribbidon, despite his appearance, can speak in a philosophical way. The artists and game designers had a lot of trouble with this, but it turned out great.

Sakaguchi’s production style is to play first, then make requests and adjustments. In other words, we needed to implement quickly and then have people play. This provided us with detailed recommendations on the user experience that improved the quality of the game. On the other hand, if the game wasn’t interesting enough, it would have needed to be redesigned and revised in a significant way. Ultimately, we chose to prioritize speed: the faster we could get the implementation done, the better we could determine what we needed to focus on for the project.

At the same time, we didn’t make things too flexible. If you build everything to be flexible, you’re likely to end up with more rework and changes than you can handle. We always look for easier ways to achieve a similar experience.

Thank you both so much for your time and behind-the-scenes insight. It’s been a pleasure chatting with you.

Mr. Sakaguchi and Nakamura: Thank you very much.

>access_file_
1172|blog.unity.com

5 myths about playable ads that are ending here and now

Playables are incredibly effective for driving higher engagement, boosting conversion rates, and giving unique insights to optimize your entire UA strategy. But many myths about their cost, complexity, and performance still exist. That’s what this article is all about. Here, John Wright, Head of Global Growth Partnerships at Luna, discusses 5 common myths about playable ads and uncovers the truth for each.

Myth 1: My video creatives are performing well, so I don’t need playables

It’s easy to think that if your video creatives are performing well, there’s no room for playables - but it’s important not to limit your creative strategy. Using a wider variety of ad types creates more opportunities to test and improve performance, which is a benefit no matter your game or genre. Different ad sources - from SDK to social channels - reach different audiences, which means playables could resonate better with some users compared to video across different sources. You’re not stealing traffic from your video creatives - this scale represents incremental growth.

In other words, it’s not a “one or the other” choice when it comes to your creatives. In fact, you can turn your top-performing playables into winning video creatives. Finding a hero creative set that translates into both interactive and static creatives expands your opportunities to reach more users and maximize growth.

One of the reasons why playables tend to perform better is that they give you access to in-ad events. Video is largely a black box when it comes to analytics, but playables have in-ad events that give you greater transparency into the user journey. You can set up events like clicking the tutorial and showing an end card that let you track crucial KPIs like engagement rate, time to engage (TTE), and user dropoff.
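Those in-ad events reduce to simple funnel arithmetic. Here is a minimal sketch in plain Python - the event names and tuple layout are hypothetical, not any particular SDK’s schema - of how engagement rate, average TTE, and dropoff can be derived from a raw event stream:

```python
# Sketch: deriving playable-ad funnel KPIs from in-ad events.
# Event names ("impression", "engage", "end_card") are illustrative only.

def funnel_kpis(events):
    """events: iterable of (user_id, event_name, timestamp_seconds)."""
    # Keep the first timestamp per (user, event) pair.
    users = {}
    for user, name, ts in events:
        users.setdefault(user, {}).setdefault(name, ts)

    impressions = [u for u in users.values() if "impression" in u]
    engaged = [u for u in impressions if "engage" in u]
    finished = [u for u in engaged if "end_card" in u]

    n = len(impressions)
    return {
        # Share of viewers who interacted at all.
        "engagement_rate": len(engaged) / n if n else 0.0,
        # Share of engaged users who never reached the end card.
        "dropoff_rate": 1 - len(finished) / len(engaged) if engaged else 0.0,
        # Mean seconds from impression to first interaction (TTE).
        "avg_tte": (sum(u["engage"] - u["impression"] for u in engaged)
                    / len(engaged)) if engaged else 0.0,
    }
```

With real in-ad events, these are the numbers you would compare across creative variants.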
Then you can use these learnings to optimize your creative.

Myth 2: I need to know JavaScript or hire someone who does

There’s a common assumption that it takes expert coding skills to build playable ads - that you need to build a playable ad in JavaScript (JS), or that you need someone to build the playable using a JS game development library, like Phaser. But there are several solutions available that don’t require hiring a JS developer, like template builders and Unity. If your game is built in Unity, for example, you can build playables using the assets and code from your game. The platform uses the same engine as your game, so you don’t need to recode anything in JS.

With Elements, you don’t need any coding abilities to design a high-quality interactive creative. Instead, you use your existing game assets and choose from a selection of customizable templates to build and launch a high-impact playable ad in minutes. For example, Kakao Games used Elements to start designing interactive creatives and improve their entire UA strategy. Within 11 days, they found their new hero creative set that generated 6.8x more impressions and had a 3x higher IPM than the creatives they used when working with an external agency.

Myth 3: Playable ads are expensive to build

This cost myth traces back to the fact that many studios think outsourcing playable production is the best option. But outsourcing playables comes at a high price - anything from $3K to $10K per playable, depending on location. Since many studios end up paying an external organization for multiple projects to find the one that works, the costs for each playable quickly add up. Plus, there’s no guarantee that buying a playable from an agency or vendor means it’ll be successful.

Having an internal developer build your playables addresses the concerns around cost, time, and performance.
They’re a member of your team who’s been part of the game’s development from the start and can work closely with other teams to build and optimize playables. This usually results in a much shorter feedback cycle and more accurate, better-performing playables than outsourcing. The shorter the time spent in this feedback loop, the lower the costs.

And having a dedicated developer working with other teams can improve your organization’s internal knowledge-sharing. Building a database of best practices now can shorten the time it takes to find winning creatives later, which reduces the costs associated with testing and development.

Myth 4: If I build a playable, I’m guaranteed to get higher-quality users

The idea of “build it and they will come” doesn’t apply to almost anything in UA - and that extends to playables. You need to design playable ads intentionally and use data to back up your decisions and optimize your strategy. It’s true that playable ads can lead to higher LTV and user engagement. These creatives give users a taste of your gameplay - if they like the experience, they’ll likely download your game and keep playing. But if your playable is misleading, then users will be surprised when they download and start playing the game, which can lead to high user churn and low LTV.

If your game is a match-3, for example, and your playable only highlights simulation elements like the story or narrative, users won’t understand the mechanic. They’ll be shocked that it’s actually a match-3 game when they install it, and many are likely to leave shortly after starting to play.

Your playable doesn’t need to be a 1-to-1 depiction of actual gameplay - the goal is to reach your target audience, and showing gameplay that’s a slight departure from the original can help attract the right users.
Just be sure you’re still highlighting the core mechanic and concept so users are hooked for the right reasons and won’t be unpleasantly surprised when they start playing your game.

The quality of your playable matters, too - this is your chance to get users hooked on your game. But a low-quality playable creates a poor user experience that will likely fail to hook users and can reflect poorly on your game. For tips on building high-quality playables in-house, check out this eBook. Some key tips include:

- A/B test each part of your playable - the tutorial, gameplay, and CTA
- Tap into the user psychology of your game’s genre
- Set up custom events to track the user funnel and optimize your creative strategy

Myth 5: Any end card is a playable ad

Many studios think a GIF or one-click experience at the end of a creative is a playable ad - but this isn’t the case. A GIF as an end card is fine for some mobile creative strategies, but it doesn’t make the same impact as a standalone playable ad or playable end card. The depth of the experience has a big impact on user quality, and these two examples, often mistaken for interactive ads, aren’t as deep as the real thing.

Interactive creatives give users a taste of your game and encourage greater engagement in a way that just watching a GIF or a video - or tapping one time on the screen - does not. You should approach building, testing, and optimizing interactive end cards (IECs) differently than any other ad unit - even playables. 8SEC, for example, was relying on GIFs as end cards for their game Hero Squad. But using Luna Elements, they easily created high-quality IECs that helped their title achieve over 74 million impressions and improve overall CVR by 30%. IECs are just as important for reaching and engaging new users, which brings us back to the earlier advice of not limiting yourself to certain types of ad units.
Test IECs, playable ads, and other versions of end cards to make sure you’re not missing out on either incremental users or a hero creative set.

Expand your creative horizons

So there you have it - consider the 5 most common myths about playable ads officially dispelled. Designing playables in-house is a key way to achieve incremental growth and optimize your UA, all while saving on costs, time, and resources. Put the truth about playables into action and start building your own ads today so you can enjoy all of the advantages these interactive creatives provide.
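As a footnote to the metrics mentioned above, IPM - installs per 1,000 impressions - is one of the simplest yardsticks for comparing creatives head to head. A quick sketch, with made-up figures rather than data from any real campaign:

```python
# IPM = installs per 1,000 impressions, a common UA metric for ranking
# creatives. All numbers below are illustrative only.

def ipm(installs: int, impressions: int) -> float:
    return 1000 * installs / impressions

video_ipm = ipm(installs=400, impressions=1_000_000)       # 0.4 IPM
playable_ipm = ipm(installs=1_500, impressions=1_000_000)  # 1.5 IPM
uplift = playable_ipm / video_ipm                          # 3.75x
```

The same per-creative ratio is what the case studies above express when they report one format earning a multiple of another’s IPM.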

>access_file_
1173|blog.unity.com

Expanding the robotics toolbox: Physics changes in Unity 2022.1

Simulate sophisticated, environment-aware robots with the new inverse dynamics force sensor tools. Explore dynamics with the completely revamped Physics Debugger. Take advantage of the performance improvements in interpolation, batch queries, and more.

The Physics Debugger is an essential tool for understanding the inner workings of the physics engine, as well as for making sense of the particular behavior observed in a project. A good debugger is a critical tool for authoring convincing, modern, rich physics. With that in mind, we completely reworked the user interface (UI) and added some interesting features. To fit more information into the same space, we grouped the properties into tabs and then expanded them with the newly added properties.

Before, both Rigidbody and ArticulationBody components had a collapsible “Info” section in the Inspector that you could expand to view additional information, such as the current linear velocity. Once expanded, however, the overall performance of the Editor degraded significantly. In addition, it was previously complicated to compare parameters of different bodies, as you needed to open two Inspector panels. To address these issues, we moved all of the properties to the “Info” tab of the Physics Debugger window, where the properties are displayed for each of the selected objects, so you can easily compare them side by side. Contact points can now be visualized, alongside the contact normal and the separation distance.

Physics queries, such as Physics.Raycast or Physics.SphereCast, are normally part of some custom physics behavior, such as custom character controllers or vehicle controllers. They’re invisible and tricky to debug. To help with that, this release offers optional visualization of the physics queries.

Until now, Unity had tools that supported only what is called forward dynamics: given a set of objects and the forces applied to them, calculate their trajectories.
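In a one-degree-of-freedom sketch - plain Python rather than Unity’s C# API, with a single point mass instead of an articulated chain - the forward problem and its inverse look like this:

```python
# Forward dynamics: forces in, motion out (a = F/m + g).
# Inverse dynamics: desired motion in, required force out.
# A single point mass under gravity; Unity's real API works on
# ArticulationBody joint spaces, which this sketch deliberately ignores.

GRAVITY = -9.81  # m/s^2

def forward_dynamics(mass: float, applied_force: float) -> float:
    """Acceleration resulting from an applied force plus gravity."""
    return applied_force / mass + GRAVITY

def inverse_dynamics(mass: float, desired_acceleration: float) -> float:
    """Force needed so the simulated acceleration matches the target
    (the gravity term is the component that must be counteracted)."""
    return mass * (desired_acceleration - GRAVITY)
```

The two are inverses of each other: feeding the computed force back into forward dynamics reproduces the desired acceleration.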
While this is incredibly useful, we wanted to expand our robotics toolbox. So, Unity 2022.1 adds support for inverse dynamics: given an object and a desired trajectory, calculate the forces that cause that trajectory when simulated. This effort will span multiple releases, as we build it out iteratively.

In Unity 2022.1, we’re exposing a set of functions to calculate the components of the current total force applied to ArticulationBodies that should be counteracted before applying the external force to drive them along the desired trajectory. Further interesting concepts will be exposed in later releases, such as the joint force required to counteract the impulse applied by the solver. We invite you to try this out and let us know what you think on the forum. In particular, the new functions are:

- get the current force applied to the body by the drive. It’s an indication of how hard a drive is trying to reach the desired drive target. It depends on the stiffness and damping of the drive, as well as on the current delta target position and delta target velocity;
- get the joint forces required to counteract the gravity, Coriolis, and centrifugal forces acting on the body; and
- get the joint force required to reach the desired acceleration.

Rigidbody uses both interpolation and extrapolation to give an impression of smooth motion while simulating at a comparatively low frequency. Internally, this is implemented by calculating the transform poses every update. In the case of interpolation, the last two simulated poses are used to calculate a new transform pose for this frame. In the case of extrapolation, the last simulated pose and velocity are used instead. Since this mechanism is designed to be lightweight, however, we don’t communicate these poses back to the physics engine. The poses are only presented to the systems outside of physics (e.g., graphics and animation).
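The scheme described above can be sketched in a few lines (1D positions in plain Python for brevity; Unity applies the same idea per Rigidbody to full transforms):

```python
# Interpolation: blend the last two simulated poses. alpha is the time
# elapsed since the last physics step divided by the fixed timestep,
# so it lies in [0, 1].
def interpolate(prev_pose: float, curr_pose: float, alpha: float) -> float:
    return prev_pose + (curr_pose - prev_pose) * alpha

# Extrapolation: project the last simulated pose forward along the
# last simulated velocity.
def extrapolate(curr_pose: float, velocity: float, time_since_step: float) -> float:
    return curr_pose + velocity * time_since_step
```

Interpolation lags by at most one fixed step but never overshoots; extrapolation is current but can overshoot when the velocity changes.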
Because of that, for instance, a raycast won’t detect a body at the interpolated pose. To keep physics from noticing the transform changes, the mechanism was to have a Physics.SyncTransforms() call each update right before the pose write, followed by an internal method call to clear all transform updates for physics. That caused two classes of problems:

- If a scene had at least one interpolated body, all the transform changes to all the physics components were synced with the physics engine each update (even though they’re mostly needed only once per FixedUpdate); and
- If a change was made to a transform that had a Rigidbody component with interpolation on it, interpolation for that object broke, because the user-made transform change was propagated to the physics engine and effectively changed the last simulated pose (the pose is not stored separately – it’s just the pose that the physics engine currently uses).

To address these problems, we updated the interpolation code so that it doesn’t need to sync all transforms every frame. This change also improves performance; the new interpolation code runs faster than before (depending on the scene complexity).

A section of the forum is dedicated to discussing various experimental previews of physics tech, and some of the changes implemented in this release originated there. Many projects, especially larger ones, use many GameObject layers, so the matrix that describes the layer combinations and produces contact pairs for physics becomes quite large too. In this release, we’re highlighting the currently selected row and column so that it’s easier to use.

A joint is used to link two Rigidbodies, and it defines the constraints on their relative motion. Starting in Unity 2020.2, a joint can also be used to link a Rigidbody to an ArticulationBody. To make that possible, each Joint class received an additional property that is shown in the Inspector.
Linking to both a Rigidbody and an ArticulationBody at the same time is impossible, so displaying both options when one has already been set takes up vertical space for no reason. Now, only the property that was set is displayed.

A kinematic Rigidbody is a special type of body that can influence other bodies while not letting anything else affect it. In that regard, it’s analogous to a static collider, with the exception that it’s intended to be moved frequently. Typical use cases are character controllers, animation-driven physics, virtual reality (VR) simulation of wrists, and so on. It is controlled by setting a kinematic target that the body will reach in just one simulation frame. The main difference from the static collider here is that the kinematic target is reached not by instant teleportation (a pose change), but by calculating the linear and angular velocities required to reach the goal in one frame and passing them to the solver afterwards. That way, the movement contributes to the constraint Jacobian matrices correctly, and thus any attached joint chain will react properly (no glitches). In this Unity release, we expose a new method to set both the position and rotation of the kinematic goal in one operation.

Contact modification, introduced in Unity 2021.2, enables changing the contact point details as generated by the narrow phase, right before they’re used to create contact constraints for the solver. In this release, we’re adding new getters for body velocities in a contact pair, for advanced use cases such as this example of custom anisotropic friction.

The PhysX version was updated to 4.1.2, the latest in the 4.x line to date. It’s a minor release, so it only addresses critical bugs and crashes. Release notes are available here.

When a dynamic body overlaps a collider, the solver aims to find a corrective impulse that pulls them apart while satisfying all the constraints.
Internally, this impulse is computed for each contact point in a pair, but previously we exposed only an aggregate value: the total sum over all points. With this release, we’re exposing a new property on the ContactPoint structure that allows retrieving the impulse for each contact point.

We’re closely watching the feedback about the ArticulationBody component coming from the robotics community. To facilitate creating and tweaking the behavior of some smaller robot parts, we anchored the joint limit handles in screen space so that they no longer occlude colliders in the scene.

Physics batch queries were the result of a Unity hackweek and shipped directly to enable certain use cases, but with minimal functionality. They continue to evolve, with new functionality to enable even more use cases, such as those with more sophisticated threading patterns, and the types of queries are becoming more diverse. In this particular release, we’re enabling batch queries to run on any physics scene, and we’re adding one new batch query type (Physics.ClosestPointCommand).

For a mesh to be usable with a MeshCollider, it has to be baked first. Baking is an expensive process that produces the spatial search structures required for collision detection. Normally, it happens implicitly every time a MeshCollider’s mesh property is changed, and it runs on the main thread, blocking any further work until complete. In Unity 2019.3, we exposed a thread-safe method to perform the bake off the main thread on demand. The intent was to enable more sophisticated procedurally generated meshes, since one could now jobify content generation and mesh baking, gaining much higher thread utilization. However, one particular disadvantage of this function was that it only supported baking with the default cooking options.
In this release, we correct that by exposing a new variant of Physics.BakeMesh that supports baking with any cooking options.

We can’t wait to see what you create with the new Inverse Dynamics APIs and the revamped Physics Debugger! Download the latest Unity 2022.1 build today and join the conversation on the robotics forum and the physics previews forum.

>access_file_
1174|blog.unity.com

Unity 2022.1 Tech Stream is now available

Today, I’m happy to share that the new 2022.1 Tech Stream is available for download from our releases page. Tech Stream releases give you an opportunity to go hands-on with early features, provide feedback, and engage in dialog on how we build tools that work harder for you. Tech Streams are released twice a year and ensure that, when the LTS releases in 2023, you’re already familiar with all of the functionality and ready to incorporate it into your new project.

This first major release of our new lifecycle was informed by your feedback and suggestions on where to invest Unity’s engineering resources. Your 7,600 notes on the roadmap, over 5,000 forum threads with direct product feedback and insights, and hundreds of individual conversations with us have resulted in more than 280 feature improvements, including over 70 new features. All shaped by you. In this post, we’re sharing just a few of the most impactful highlights that cover key focus areas, including unified UI, artist usability, iteration speed, and platform enhancements. You can always get more detail in the official release notes.

Your team’s needs are unique, and we want to give you an extensible Editor that can flex to your workflows, so everyone can work faster together. UI Toolkit is a unified solution for both authoring runtime UI and extending the Editor with custom tools. In 2022.1, we’ve added even more features for tool developers looking to customize the Editor for their teams with UI widgets and custom shapes. We’ve also added the TreeView with multi-column support and new vector drawing APIs to customize the UI element appearance, and we’re progressively making Property Drawers and Property Attributes available, starting with the most commonly used.
Connect with us in the forums and let us know how we can help make UI Toolkit even better for you.

We’ve heard you tell us how important Splines are in our forums, and it’s one of the most requested features on our public roadmap. “I have been researching spline tools… but I don't know if any of them will provide exactly the functionality that I need and it would become quite costly to buy a bunch just to experiment. So a good built-in spline tool is incredibly important to my project.”

In this release, a new Spline authoring framework is available as a package. It’s designed to create and manipulate Splines in-engine, above all by letting programmers extend functionality with tools and custom components, such as instantiating geometry along a Spline or moving objects along it. It also works alongside the new edit modes, letting you edit Spline points and tangents using the standard editing tools and shortcuts. Keep letting us know what you think in our forums, and see what’s next on the roadmap.

We’ve also improved the procedural creation of materials. For creators using code to generate materials, we extended the Material API to all material properties, now supporting keyword states, HDRP’s diffusion profiles, and IES lights, enhancing procedural material usage in-Editor and at runtime. Finally, we’ve added a new API for the Unity File System, enabling you to create tools for Asset Bundle visualization and analysis that help your team optimize performance.

Rapid iteration is a key element of any creative work – it’s what makes game development so much fun. We’re optimizing the core of the Unity Editor so that you can iterate quickly through the entire lifetime of your productions, from importing assets, through working in the Editor, to building and deploying a playable game. At the same time, we’ve heard through our graphics forum that technical artists are looking for additional Editor tools and APIs to help them bring their vision to life more quickly.
So, based on that feedback, we’ve added new options that will help any creative team get more done in less time. As the HDRP and URP renderers mature, we’ve heard that you’re looking for even more ways to achieve your visual fidelity goals at a faster pace.

One of the most highly requested feature cards on our Rendering & Visual Effects public roadmap was Material Variants. We’ve heard that you often reuse base materials numerous times across different projects, scenes, or locations in an environment, which can lead to authoring issues when materials are changed out of the context of their implicit hierarchy. “This is a critical feature for any bigger project if we want to control all shader/material for the game. Been waiting for years for this.”

Material Variants offer an integrated and powerful workflow to reduce iteration and authoring mistakes when reusing materials in teams where artists manage large amounts of assets. Now available in both HDRP and URP, Material Variants allow you to create material hierarchies, where children can share common properties with the parent material and override only the properties that differ. Changes to common and non-overridden properties in the template material will automatically be reflected in the variant material, saving you time and making material changes that much easier.

You’ve shared that finding the right items in your project can be time-consuming, particularly as you scale. That’s why we’ve introduced visual search queries to help you find what you’re looking for faster. Additionally, you can build more complex queries and leverage the Editor object picker for more precise selections in object fields.

For 2D creators, there are plenty of productivity improvements.
In this release, we’ve focused on speed-improving enhancements to foundations, import, animation, and physics. For starters, Sprite Atlas v2 is now the default for all new projects, bringing support for Accelerator and for folders as packable objects – a productivity boost that is much loved by 2D creators. Working with Photoshop for 2D is enhanced by support for importing files with the PSD extension. Alongside this, we’ve added layer management in the 2D PSD Importer to give you more control over which layers get imported. The Sprite Swap feature now has streamlined keyframing and previews, making sprite swapping for 2D animation more intuitive.

To help with 2D physics, we’re introducing Delaunay tessellation. Often, polygons can be too thin or small and are filtered out by the physics engine. Delaunay tessellation not only stops producing polygons that are too thin or small, but also produces fewer polygons to cover the same area. Check out some of the samples and our roadmap to learn more.

We’re also continuing to improve the Package Manager to help you get working on your project faster. In this release, you’ll find the ability to select multiple packages at once so you can manage them in bulk, along with the option to control the location of Package Manager caches. To further boost productivity in another part of your workflow, the IL2CPP scripting backend will now always generate fully shared versions of all generic methods.
This allows programmers to use generic-type combinations that are not present at compile time, avoiding a whole class of difficult-to-detect errors that could previously occur only at runtime.

There are so many quality-of-life improvements to the Editor that we can’t list them all here, but a few highlights include:

- Faster entry to and exit from Play mode, faster texture and small-file imports (by up to 60%), and faster builds
- Better UI for undo and redo operations
- A cancel button for the project-open progress window
- Shortcut Manager improvements

We know that profiling your games and projects to get insights about their performance is critical to your success. So, in 2022, we’ve continued to enhance our profiling tools and analytics to give you comprehensive information that you can act on. In this release, we’re delivering the Frame Timing Manager for capturing and accessing GPU and CPU frame timing data and timestamps at a granular level. The Frame Timing Manager is available in-Editor and lets you target and adjust performance bottlenecks in your project, regardless of platform, with more information than ever before about how individual frames are performing. Together, these features let you build tools to profile and report on your projects on any platform. Connect with the performance team or get even more detail on the forums.

When building up or modifying a scene, or when improving or optimizing content, it’s important to understand how the frame budget is spent. We added a Frame Stats Profiler to the Rendering Debugger, available both in-Editor (Play mode only) and in a built Player, for all Scriptable Render Pipelines. This tool isn’t just intended for developers; it’s for anyone who wants to identify whether a scene is CPU- or GPU-bound and get a breakdown of the frames’ timings.

Finally, let’s talk about the breadth of platforms that you deploy to each and every day.
It’s one of the primary reasons many of you choose to develop in Unity, and it’s why we continue to optimize platform support for new features and the latest APIs to power your creativity.

For those looking to push Android performance even further on Samsung devices, you can now take advantage of Adaptive Performance 4.0. With that, you get four more scalers that cover physics, decals, custom, and layer culling – many of which include samples. One major benefit is the support for visual scripting, which further simplifies scripting with Adaptive Performance.

For Android games targeting devices with Arm chipsets, we’ve heard that you want to optimize even more. With Unity 2022.1, you can access low-level performance data with the System Metrics Mali package, exposing metrics that provide insight into the impact your changes produce at the hardware level. Install the Read GPU Metric sample that ships with this package to see how GPU metrics can be accessed at runtime.

On the iOS platform, we’ve enabled the latest incremental build pipeline, which ensures that you only rebuild the parts of the application where there have been changes since the previous build. Continued improvement of the console development experience includes enhanced overall stability, as well as added support for the incremental build pipeline for Xbox.

Check out the release notes and Unity Manual for details about what’s new. You can download Unity 2022.1 from the Unity Hub. If you’re curious about what’s coming or want to share your feature ideas with us, visit the Unity Platform Roadmap page.

Each Tech Stream release is supported with weekly updates until the next one, but there is no guaranteed long-term support for new features. We recommend using the more stable, better-supported Unity LTS release for projects in production. Remember to always back up your work before upgrading it to a new Unity version.
See the Upgrade Guide for advice on bringing your project to Unity 2022.1.

We’ve just begun the Unity 2022 journey, and we’re excited to keep collaborating with you to make the Editor and tools as productive as possible. Your feedback is essential, so download the new release, use the new features, and tell us what we’re getting right and where we should go next.

You can share any general feedback about the new release in the announcement forum post, while specific insights about key features are always welcome in dedicated forum groups for different areas, such as render pipelines, UI Toolkit, or the Frame Timing Manager – you can find the full list of these groups here.

This release is just the first stage in our 2022 development cycle. Building on these improvements, we’ll also deliver on several other key areas, including improved rendering pipelines, artist usability, and netcode. Check out our roadmap overview from GDC for more details. Thank you for partnering with us – we can’t wait to see what you create.

1175|blog.unity.com

Upgrading from Legacy Analytics to Unity Gaming Services Analytics

In October 2021, we launched Unity Gaming Services (UGS) – an end-to-end platform with flexible solutions to help you build, manage, and grow your game. Alongside this launch, we also announced that we had rebuilt and re-envisioned our standalone Legacy Analytics tool to be integrated within the platform.

Our new Analytics (beta) solution is a data visualization and dashboard tool that helps you understand and analyze all of your game data in one place. It sits within the Unity Gaming Services Dashboard and works as a standalone tool (similar to the Legacy version), or you can use it with other Unity Gaming Services offerings to make it more powerful.

Today, our new Analytics offering is free while in beta. Upon full release, it’ll have a free tier and a pay-as-you-go pricing model that scales with your usage – learn more about Analytics pricing here.

Analytics has been redesigned to take the best of both of Unity’s pre-existing solutions – Legacy Analytics and deltaDNA – to deliver a powerful, intuitive end-to-end analytics solution. It delivers all the value of our Legacy Analytics plus more functionality. See how we’ve upgraded the experience below:

- Fresh data: One-hour data processing time, down from eight hours on Legacy Analytics.
- Unlimited events: We’ve removed the strict event and parameter limits of Legacy Analytics, so you can track what you need with no limits.
- Designed to work out of the box: Pre-built dashboards, Standard Events, and Audiences give you quick insights so you can get started right away.
- Built with flexibility and depth: We’ve added incremental tools to give you the power to customize what you need for your studio. Custom Events, Custom Audiences, Custom Dashboards (coming soon), and SQL Data Explorer let you capture and analyze all the data you need.
- Built to work with other UGS tools: We’ve designed Analytics to operate within the platform, so there are use cases supported across our individual tools. Use Remote Config and Analytics together to run A/B tests: change variants within your game using Remote Config, then understand which variant performs best with Analytics.
- A dynamic platform for all: We’ve prioritized a simple UI and built-in tools for non-technical users, with robust, deep functionality for the most technical members of your team. We’ve taken learnings from both of our platforms to improve the user interface, increase data transparency, and build new use cases and workflows that weren’t previously possible.
- Better quality control and oversight of your data: A new event validation step has been introduced during event ingestion to prevent invalid events from polluting your dataset. You can also now use Environments to separate development and production data and config.
- Funnels: Our Funnels feature has been improved to ingest historical data, so you can see and analyze your game progression immediately.

Our new tool is already an upgraded version of Legacy Analytics, but we’re continuing to invest in more functionality:

- SQL Data Explorer (new): Now you can write your own queries using SQL, so you can aggregate and visualize any data needed for your game.
- Acquisition Channel Tracking: We know how important it is to see your monetization data alongside the rest of your game data. Now, you can bring data from your attribution provider into Analytics. Use this to identify your best-performing campaigns or start to build an LTV model.
- Custom Dashboards (coming soon): Soon you’ll be able to create custom dashboards that let you view the data and charts most important for your studio. This also allows you to create side-by-side comparisons.

To put more resources into our upgraded Analytics platform, we will stop investing in Legacy Analytics, with the aim of fully replacing the product with Unity Gaming Services Analytics. We advise creating new projects with the new Unity Gaming Services Analytics platform, and we recommend using this guide to migrate existing projects over.

Transitioning a live game to a new Analytics solution can be difficult, so we’ve designed a data pipeline that lets you run Analytics and Legacy Analytics in parallel. The Core Events data (from July 2021 onwards) from your Legacy Analytics integration will be automatically imported into the new Analytics solution. Metrics such as DAU, MAU, session length, and revenue will be populated in the new Analytics solution, giving you a trial of the product ahead of implementing the SDK.

Please note that this does not cover any Custom Events that you have defined; these will need to be redefined both in your game code and in the dashboard – see the tech docs guide for more information. Note that no duplication or double counting will occur: standard events triggered by the Analytics package take precedence over imported data for each individual player.

Want to learn more about Unity Gaming Services Analytics? Register here for our free UGS Analytics bootcamp on May 17 at 12 PM EST/9 AM PST and get a live overview of everything you need to know for your next project. If you have any concerns or questions, please contact our support team here or reach out to your client partner – we’d be happy to help you through this transition.
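As a rough illustration of the Custom Event redefinition step mentioned above, here is a hedged sketch using the UGS Analytics SDK (the com.unity.services.analytics package). The event name and parameters are hypothetical, and the exact API surface depends on the SDK version you install – always check the package documentation for your version.

```csharp
// Hedged sketch only: "missionCompleted" and its parameters are illustrative.
// The event must also be defined in the Event Manager on the UGS Dashboard,
// or it will be rejected by the new event validation step.
using System.Collections.Generic;
using Unity.Services.Analytics;
using Unity.Services.Core;
using UnityEngine;

public class AnalyticsBootstrap : MonoBehaviour
{
    async void Start()
    {
        // UGS services must be initialized before any events are recorded.
        await UnityServices.InitializeAsync();

        // Record a custom event with its parameters.
        AnalyticsService.Instance.CustomData("missionCompleted",
            new Dictionary<string, object>
            {
                { "missionName", "tutorial_01" },
                { "timeTaken", 42.5f },
            });
    }
}
```

The same event definition (name, parameter names, and types) has to exist in both places – game code and dashboard – which is why the migration guide treats Custom Events as a manual step.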

1177|blog.unity.com

Our biggest e-book yet: 2D game art, animation, and lighting for artists

Our most comprehensive 2D game development guide is now available to download for free. Over 120 pages long, it covers all aspects of 2D game development for artists. This includes roundtripping between Unity and your digital content creation (DCC) software, sprite creation, layer sorting for level design, camera setup, animation, lights, plus many optimization tips along the way.

2D games don’t have the same constraints as 3D, which empowers artists to produce cartoonish and fantastical art that looks great on any device. Let’s learn how.

Unity’s suite of 2D tools provides creators with endless possibilities. This guide unpacks key decisions to tackle at the start of your project, as well as best practices for leveraging Unity’s 2D toolset. It’s specifically tailored to developers and artists with intermediate Unity experience who want to make high-end 2D games, independently or collaboratively with a team.

The e-book was written by Jarek Majewski, a professional 2D artist, Unity developer, and creative director of our 2D demo, Dragon Crashers, together with input from several Unity 2D experts, such as Rus Scammell and Andy Touch.

If you’re unfamiliar with Unity’s tools for 2D creation, see our 2D page to get an idea of what’s supported by Unity. There’s native support for games of all styles and genres, ranging from pixel art to immersive animated experiences. Then get inspired by our latest 2D sizzle reel, which showcases stunning 2D games made with Unity.

Get the full range of expert tips in our free e-book. 
Topics covered in this guide include:

- The art of your game and technical considerations before beginning
- How to choose the perspective and resolution of your assets
- In-depth explanations and guidance on level design, 2D animation, and 2D lights
- Technical breakdowns of advanced visual effects and post-processing

For further inspiration and pointers around creating 2D games with Unity, check out the Level up your 2D skills reading list that our team put together. Seeking additional support? We recommend that you visit the forum thread to follow up and provide any other feedback you might have.

1178|blog.unity.com

Inside Sweatcoin's ad monetization strategy

There's a growing trend toward implementing ad monetization into apps beyond games - but Sweatcoin's been there from the start. Having built a robust rewarded video strategy back in 2017, Sweatcoin's all but perfected their ad monetization. ironSource sat down with their team to learn a bit more about how they got started in ad monetization, their top tips for apps looking to do the same, and their experience with ironSource LevelPlay as a mediation partner.

Tell us a bit about Sweatcoin and what your app business is all about.

Sweatcoin is a free app which rewards your daily steps with a new-generation currency you can spend on cool products, donate to charity, or convert into crypto. Why? Because when you look after your health, you benefit society. You are more productive. You help save billions of dollars in healthcare. Your movement has real value, and you deserve a share in it.

Sweatcoin relies heavily on ad monetization as a revenue stream - at what point did you start implementing ads into your app and why?

Sweatcoin originally implemented rewarded video ads in 2017, with the introduction of a Daily Rewards feature. This placement allows each user to watch up to three rewarded video ads every day in exchange for a randomized amount of Sweatcoin currency. The aim was to increase engagement by giving users a tangible reason to return to the app on a daily basis, and given the popularity of the Daily Rewards placement in the present day, that was definitely a success!

You recently switched to ironSource LevelPlay mediation. What qualities do you look for in a mediation partner and why?

The considerations around choosing a mediation partner are pretty unique, and the wrong choice presents a fair amount of risk compared to, say, the decision to choose an additional network partner. With that in mind, we have tried to keep the principle of de-risking prominent in our decision making. 
That meant looking at each major provider and assessing their duration and track record of serving publishers well in the marketplace, but also the strength of our relationships and where we felt valued as partners. We also needed to be sure that we would be choosing a product feature set that facilitates reaching our potential in ad monetization. For Sweatcoin, that means access to the strongest possible selection of potential network partners, a quick and easy way to A/B test new setups, and an analytics package that is fast and easy to use on a day-to-day basis.

What type of internal processes do you have in place to support ad monetization? Team setup? Optimizations? Automation?

Our product team works closely alongside a consultant to manage ad monetization. Weekly calls and monthly performance reviews allow us to assess our current trajectory and plan appropriate development time to adapt to industry developments. We periodically review and test new network partners, as well as conduct experiments on the user experience that are designed to create greater user engagement with the Daily Rewards feature.

What are some tips you have for other app developers looking to get started with ad monetization?

If there is currently no obvious way to introduce rewarded video in your app, consider whether forging a path towards it is worth the resources. Sweatcoin has found that introducing rewarded video has not only created another hugely significant revenue stream for the app, but also provided a sustained uplift to our other key performance metrics.

Also, spend more time choosing your mediator than any single network partner. 
The risk/reward relationship associated with your decision to choose a mediator is very different because it will affect the utility of ALL your network partners.

Finally, develop an A/B testing process that continually searches for incremental performance improvements. A single A/B test may yield only a minor improvement, but repeating it over time will have a massive compounding effect.

What are the biggest challenges you’ve faced when it comes to ad monetization? How did you overcome them?

A big part of what we think makes Sweatcoin unique is our brand. We want to have complete awareness of what campaigns our users are being exposed to, and we look to avoid being affiliated with anything that we feel is not aligned with our brand. This can be a challenge when you are serving ads at scale to a global audience, but we have found that being swift and proactive when it comes to shutting off inappropriate campaigns has been a good defense.

Another ongoing challenge is balancing the desire to increase our revenue yield with protecting the user experience inside the app. This means that we do not consider showing our users system-initiated ads. Instead, we continue to overcome this challenge by finding creative ways to increase engagement with our existing opt-in placement. We can be sure that when users interact more with this placement, it does not harm other key performance metrics, but actually improves them.

Which ironSource LevelPlay mediation products are you eager to start using?

AdQuality by ironSource is something that we are really excited about using. We are hoping the insights we get from the tool can be factored into our future product development cycles to further improve how we provide value to our audience. The Direct Deals functionality that ironSource LevelPlay offers is also very attractive to us. 
Sweatcoin is in somewhat of a unique position amongst mobile publishers, in that we already possess the infrastructure to serve some advertisers directly; the Marketplace area of the app facilitates this. The possibility of also offering these advertisers access to our audience through the rewarded video placement lets us increase the value that we can offer them as partners.

What’s your favorite part about working with ironSource?

ironSource has been a great partner for Sweatcoin, and we are delighted to be expanding that partnership to work with their mediation platform. What we enjoy about working with ironSource is their consistency; they always deliver what they promise. We really feel that they have valued the relationship that they have with us from day one.

1179|blog.unity.com

Customizing performance metrics in the Unity Profiler

Optimizing your application requires being able to accurately measure how your project consumes hardware resources. Extending the Unity Profiler with your own performance metrics enables you to better understand your application’s unique performance story. In this post, we will cover new Profiler extensibility features in Unity 2021 LTS.

Our new Profiler counters provide a lightweight mechanism for adding your own performance metrics to your Unity applications and packages. You can also now visualize your metrics directly in the Profiler window by adding your own Profiler modules. Read on for more details on how to use these features to improve the performance of your Unity project.

The Unity Profiler is a tool you can use to get detailed performance information about your application. It tracks a large number of performance metrics across a range of categories, such as memory, audio, and rendering. These metrics are visualized in the Profiler window, and in some cases they may also be queried from script. Using this information, you can gain insights into how your Unity application uses the available hardware resources of the target platform, which can help you pinpoint where optimizations might be made.

Profiler counters track, measure, or count metrics in your Unity application that are useful for performance analysis. For example, Unity defines a built-in Profiler counter, called “Total Used Memory”, that tracks the total number of bytes of memory used by your application. This is a useful statistic for gauging your app’s memory footprint on the target device. You can see this value displayed over time, alongside other memory-related metrics, in the Profiler window’s Memory module.

By adding your own Profiler counters, you can expose performance metrics unique to your own systems and applications. These metrics can be displayed in the Profiler window alongside other performance data, including built-in counters. 
This enables you to view the performance characteristics unique to your application, in context, directly in the Profiler window. For example, a custom Profiler counter that tracks the number of active creatures in a scene can be shown alongside the built-in Profiler counter Batches Count, which tracks the number of rendering batches processed each frame. This lets you easily correlate the two metrics and see the impact of the creature count on the number of batches the renderer must process.

Additionally, all of your Profiler counters are available in Release Builds, as are some of the built-in ones where specified. Release Builds are more representative of the real-world performance of your application than Development Builds. However, the Unity Profiler cannot be attached to Release Builds, for optimization reasons. Instead, you can selectively monitor important metrics in Release Builds by using Profiler counters and querying them from script. For example, you might do this in Continuous Integration testing to detect performance regressions, or you might display key performance metrics via an in-game debug user interface built with one of Unity’s UI systems. Please see the Profiler Recorder documentation for more information.

If you want your Profiler counters to be present only in Development Builds, you can exclude them from Release Builds by wrapping them in the DEVELOPMENT_BUILD scripting symbol, as described in the Conditional Compilation documentation.

Our Profiler counters are available in the Profiling.Core package, released in Unity 2021 LTS. 
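As a hedged sketch of what declaring and updating such a counter can look like with the Profiling.Core package (the "Creature Count" counter, category choice, and surrounding classes are illustrative, not from the original post):

```csharp
// Requires the com.unity.profiling.core package.
using Unity.Profiling;
using UnityEngine;

public class CreatureSystem : MonoBehaviour
{
    // Declared once; flushed to the Profiler at the end of every frame,
    // then reset to zero for the next frame.
    static readonly ProfilerCounterValue<int> k_CreatureCount =
        new ProfilerCounterValue<int>(
            ProfilerCategory.Scripts,
            "Creature Count",
            ProfilerMarkerDataUnit.Count,
            ProfilerCounterOptions.FlushOnEndOfFrame | ProfilerCounterOptions.ResetToZeroOnFlush);

    void Update()
    {
        // Update the counter each frame; the value then appears in the
        // Profiler window alongside built-in counters.
        k_CreatureCount.Value = FindObjectsOfType<Creature>().Length;
    }
}

// Illustrative component representing a creature in the scene.
public class Creature : MonoBehaviour { }
```

A counter defined this way can also be read back from script with ProfilerRecorder, which is what makes the Release Build monitoring described above possible.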
The package is bundled with Unity 2021 LTS but not installed by default, so please follow the installation instructions in the package documentation to add it from the Package Manager by name. Once you have the package, you can create a Profiler counter – referred to in the API as a ProfilerCounter or ProfilerCounterValue – and update it from your scripts. Please see the Profiling.Core package documentation for more information.

A Profiler module presents performance information for a specific area or workflow in the Profiler window, such as memory, audio, or rendering. Each Profiler module aims to provide insight into the performance profile of its domain. For example, the Memory Profiler module displays seven key metrics relating to memory usage over time, with a detailed section below the chart showing the distribution of memory in the selected frame.

In Unity 2021 LTS, the Profiler window can be customized with your own Profiler modules. This enables you to present the performance metrics of your own systems directly in the Profiler window. It also provides the opportunity for custom visualization in your module’s detailed view, allowing you to present your system’s performance data however you wish.

Profiler Module Editor

The Profiler Module Editor is recommended for quickly creating temporary Profiler modules for your own use. For example, you might use it to quickly create a Profiler module that verifies a new Profiler counter. Profiler modules created via the Profiler Module Editor are local to your Editor and are not available to other users of the project. For more information, please see the Module Editor documentation.

Profiler module API

The Profiler module API allows you to add your own Profiler module to the Profiler window for all users of a project or package. 
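A minimal module definition under this API might look like the following sketch (the module name and charted counter are illustrative; the counter must match one emitted by your game code):

```csharp
// A hedged sketch of a custom Profiler module. Because it uses the
// Unity.Profiling.Editor namespace, this script belongs in an Editor
// folder or an editor-only assembly.
using System;
using Unity.Profiling;
using Unity.Profiling.Editor;

[Serializable]
[ProfilerModuleMetadata("Creatures")] // Display name in the Profiler window.
public class CreaturesProfilerModule : ProfilerModule
{
    // Counters charted by this module; names are illustrative.
    static readonly ProfilerCounterDescriptor[] k_ChartCounters =
    {
        new ProfilerCounterDescriptor("Creature Count", ProfilerCategory.Scripts),
    };

    public CreaturesProfilerModule() : base(k_ChartCounters) { }
}
```

With a class like this present, no further registration is needed; the Profiler window discovers the module automatically.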
When a Profiler module is defined in a project or package using this API, it automatically becomes available in the Profiler window for all users of that project or package. If you are an Asset Store publisher or a package developer, you can now distribute custom Profiler modules with your package: when a user installs it, your Profiler modules automatically become available in their Profiler window. This enables you to expose your package’s performance metrics directly in the Profiler window. Several teams within Unity have already been using this API to distribute custom Profiler modules with their packages, including the Netcode for GameObjects, Adaptive Performance, and Mali System Metrics packages.

How to use it

To add a Profiler module using the Profiler module API, create a ProfilerModule script in your project or package. The module will then be displayed automatically in the Profiler window for all users of the project or package. For more information on using the Profiler module API, please see the Manual and API documentation.

The Profiler module API includes the ability to draw a custom visualization of your performance data for the selected frame using one of Unity’s Editor UI systems, such as UI Toolkit. For example, the Adaptive Performance package’s Profiler module uses this API to present detailed performance information for the selected frame, as well as contextual information such as Bottleneck and Thermal Warning indicators. These indicators help users of the Adaptive Performance package see clearly when they may be encountering thermal throttling, for example. You might use this API to present a bespoke visualization of your Profiler module’s performance data. For more information, please see the Manual and API documentation on creating a custom Profiler Module Details view.

You may wish to visualize additional, more complex data in your Profiler modules alongside your Profiler counters. 
For example, you might wish to display a screen capture of the current frame in your Profiler module’s detailed view to give more context to the performance data. To send additional frame data to the Profiler, such as an image, and subsequently retrieve it from the Profiler capture, you can use the Frame Metadata APIs, Profiler.EmitFrameMetaData and FrameDataView.GetFrameMetaData. If you have additional data that only needs to be sent once per profiling session, such as configuration data, you can use the Session Metadata APIs, Profiler.EmitSessionMetaData and FrameDataView.GetSessionMetaData. Please see the documentation linked above for examples of how and when to use these features.

In this post, we covered how to extend the Unity Profiler with your own performance metrics. We looked at adding custom metrics with our new Profiler counters, available in the Profiling.Core package. Then, we explored adding custom Profiler modules to the Profiler window with both the Profiler Module Editor and the Profiler module API. Finally, we covered sending additional complex data, such as an image, to the Profiler to provide more contextual information.

We hope these Profiler extensibility features added in Unity 2021 LTS enable you to better measure and understand your application’s unique performance story. Please feel free to reach out to the team via our forum page. We would love to hear your feedback and learn how we can improve Unity’s performance tooling for you.

1180|blog.unity.com

The secret to more efficient playable production

Bringing playable production in-house can result in greater control, speed, and affordability - so finding ways to improve efficiency and enjoy all of these advantages should be a priority. Unlike video or static creative production, the workflow for designing playable ads includes handing off responsibility to a developer and incorporating them into the feedback loop. With this in mind, it’s important to know the best practices for communicating with your playable developers to make production more efficient, accurate, and easy.

Here, Ori Ben-Moshe, Creative Manager at Luna, sheds some light on what you should be sharing with developers before starting to build your playable ads. Even if you’re running a lean team (or just a team of one), this detailed walk-through can help keep you organized and take your playable concept to the next level.

1. Create a storyboard

Once you’ve got your playable concept, design a storyboard of the creative that explains the idea at a high level. This will help you visualize the flow of the playable and address questions before they arise. In one storyboard example, users can upgrade their character to run faster by pressing an on-screen button before being taken to an actual race against others. Including a specific explanation and visual mockup of this upgrade in the storyboard clarifies the feature for developers - they can then build an accurate version from the start.

2. Write a game design document

Game design documents (GDDs) are the foundation of the entire design and development process - they describe playable production and expectations at each step in a clear and concise way. Whether you’re part of a large or small studio - or even a one-person show - at least one part of a GDD can work for you. 
Having to explain your playable often leads to useful questions, can help improve the production process for current and future creatives, and leads to a better-performing final product. Now let’s talk through each part of the GDD so you know what’s important to highlight at each stage.

Highlight the hook of the tutorial

The tutorial of a playable is the crucial moment where you can catch the attention of users and encourage them to engage. From this introduction, they need to know how to play and why - what’s the hook that compels them to interact? The GDD is your chance to explain clearly and simply what should be included in the tutorial and how to show the hook. For example, playables for hyper-casual games often use a “ghost” of the action in the tutorial that shows users what’ll happen when they start interacting. Explain this in your GDD by showing a quick mockup or example of a ghost in a tutorial and describing the action and instructions to appear on-screen.

Looking at a playable from Join Numbers as an example, here’s how you should - and shouldn’t - describe the tutorial in your GDD:

Don’t: Only say “show the numbers 0 and 5 on the screen”
Do: Describe the tutorial screen as “show the number 0 in yellow in the center of the track and a gray ghost number further up the screen. Have a gray ghost of the number 0 slide up to meet the number 5 so it creates the number 50 and highlights it in yellow”

Alternatively, some teams that have an animator prefer to enlist them to create a video that shows - instead of tells - developers how they want their playable to look.

Be specific in describing the visuals and mechanics of gameplay

Providing exact descriptions of each part of gameplay can help both one-person teams and studios with dedicated developers build a playable more quickly and accurately. 
We suggest covering the following details in this part of your GDD:

- Number of interactions
- Distances
- Speed and timing

Decide on in-ad events and A/B testing setup

While describing gameplay in the GDD, also mention the in-ad events you want to track, like clicking on the end card or completing the tutorial. In-ad events are snippets of code developers can set up that give you unique insights into the user journey and how users interact with your playable, so you can make data-driven optimizations. They’re a crucial tool for improving your entire UA strategy, so consider which are likely to provide the most valuable insights, then inform your developers.

You don’t need to have every single detail figured out as you work on the GDD, but make note of what you haven’t thought through or want input on during the kickoff meeting. The Luna team built this template for you to use as a starting point - adapt it for your needs by using only some of the slides or opting for a more visual approach.

Also use this section to describe the A/B tests you want to run for each part of your playable’s gameplay, like trying a new object for a match-3 game or increasing the speed of play in a runner game.

Confirm how users win or lose your playable

Describing the different scenarios you want to try in your playable ahead of production encourages you to think about user motivations and streamlines the process of building the playable. Each game genre attracts different audiences that tend to prefer either easier or more challenging gameplay. 
Gathering information by testing different versions, following user behavior in your game, and reviewing previous marketing efforts can tell you the type of gameplay your users prefer - then you can apply this to your playable. Maybe you run test campaigns and discover that CTR is higher when users win your playable - so you make gameplay easier to encourage more win scenarios. As with the other parts of the GDD, the more specifically and clearly you can describe the criteria for win/lose scenarios, the better for your developers:

Don’t: Be vague and say “users need to reach the finish line first”
Do: Describe the scenario as “to win 1st place, users must reach the finish line while getting past all obstacles, like swimming and climbing, before all other players”

3. Put your playable into production

After you’ve completed the GDD, it’s time to put it to the test and start building the actual playable. To improve speed and efficiency at this stage, there are three actions we recommend taking:

- Schedule a kickoff with the developer
- Get a version from the developer halfway through production
- Improve operations with best practices, tools, and resources

Get aligned in a kickoff meeting

Scheduling a kickoff with team leaders like the UA manager, creative manager, and lead developer ensures your entire organization is aligned on expectations for the project, and it’s the best opportunity to confirm milestones together. Even if you’re a lean operation, this is the chance to take a step back and consider the timeline and project as a whole. In the meeting, review the GDD and discuss any details that you didn’t include - maybe you weren’t sure of the animation to show in the tutorial, or you were choosing between a few A/B tests and wanted feedback from other team members. 
Other people in the kickoff could offer new ideas and suggestions that fill in the blanks or improve upon your concept. As you make decisions, update the GDD - this is a working document that you can (and should) adapt as production moves forward.

Check in halfway

We suggest requesting a version of the playable in the middle of production - what’s known as a mid-version - to review progress and provide feedback before the final result. This helps improve accuracy, ensures production is on the right track, and can be handled as easily as a short meeting with a shared screen. At this stage you’re able to make tweaks that could improve the final product and correct mistakes before they get bigger. For teams of all sizes, once you have a working version of your playable, hand it off to a test group of users to play through it and collect their feedback. We’ve found that getting an objective perspective can reveal parts of your playable that should be improved - addressing these earlier in the production process saves time and resources later.

Optimize the workflow

Reviewing your operations and introducing new tools and practices can help optimize the production process so it’s more efficient:

- Open a dedicated channel: We’ve seen that using a tool to communicate with the stakeholders involved in the playable saves time compared to coordinating calls or sending endless email threads
- Stay on track with deadlines: When we ask for deadline estimates for each milestone, we’re able to keep production on track and hold each other accountable
- Keep updating the GDD: As you make changes, like altering the game flow or changing due dates, update the GDD

Creating a process for playable production

A game design document (GDD) is like a product requirement document (PRD) for product managers - you’d never design a successful product without one. Similarly, a GDD goes into the details and explanations needed to end up with a high-performing playable. 
Developing playables using a GDD and the tips above lets you build a knowledge base of best practices that turns production into a standardized and efficient process - this is key for building more creatives faster and improving performance. Standardizing playable production is a win-win - you get high-impact playables, and your developers get clear instructions and understand expectations.

The goal is making playables more accurately, efficiently, and easily. The best way to do this is aligning expectations at every step with everyone involved, in whatever way works best for you.

Get the GDD template: Click here to download
