// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 68 of 85

[ 2021 ]

13 entries
1341|blog.unity.com

The app marketer’s guide to working with a mobile ad network

There are many advantages to incorporating ad networks into your media plan, which you can learn more about here. In addition to gaining a competitive edge over other apps in your category, running user acquisition campaigns on multiple mobile ad networks such as ironSource is an excellent way to increase ROAS and scale.

There are three steps to running a user acquisition campaign on a mobile ad network: getting your media plan together, setting up the campaign itself, and then optimizing the campaign. Read on to learn more about each step of the process.

1. Getting your media plan together

The first step to working with an ad network is setting a media plan. That means preparing the research and creating an initial plan for your app's marketing strategy. Before going into the nitty-gritty, it's important to take note of what your competitors have already done and accomplished in this space.

With mobile ad networks arriving on the scene more than a decade ago, it's likely that some of your competitors have already run UA campaigns on ad networks. You can find these competitors through sources like App Annie, Sensor Tower, the app stores, and other intelligence services. Be sure to talk to your ad network partner to analyze the geos they're targeting, the media sources they're running on, and the creatives they're relying on.

Once you've chosen the best media sources and ad networks for your app's UA strategy and analyzed the competitors in your space, there are a few last preliminary actions to complete on your checklist. This includes setting a ROAS goal, which is the amount of ad spend you want to recoup (check out our blog post or webinar on how to calculate a ROAS goal for your app). From there, you can determine your base bid. Once your checklist is complete, the next step is to actually set up your campaign.

2. Setting up your campaign

Before you can actually launch your campaign, you should get to know the mobile measurement platform you'll be using, in which you'll need to fill in everything regarding your app's campaign and attribution window - the number of hours or days that an ad can be credited for an install after a user sees it. If you've only run campaigns on social and search channels, attribution platforms may be new to you, since social and search are typically self-attributing. However, mobile ad networks need third-party platforms to credit publishers and networks for the installs they successfully drove.

Make sure to activate view-through attribution, which, in contrast to click-through attribution, ensures you are measuring your UA campaign by counting how many users install your app from the store after viewing your ad. This way you can make sure you are crediting the right ad network for the install. It's also vital to keep track of who to show your ad to, based on who hasn't downloaded your app, with a dynamic suppression list to avoid wasted ad spend. Lastly, make sure your post-install events are making it back to your ad network, so that you can get a deeper analysis of your campaigns' quality performance.

To optimize your ad performance from the start, have a set number of creatives that you are confident in and prepared to launch. Ultimately, you want to begin advertising with your best-performing ads, and it's important to decide which ones before the launch date. You can test the performance of these ads on platforms like Facebook, and use the winners on the ad network.

Once all of this is set up and ready to go, you're ready to launch your campaign on the mobile ad network. The real optimization, however, will come after the first ad is shown.

3. Optimizing the campaign

As soon as your campaign is launched, you'll begin seeing how well your creatives are increasing your IPM (installs per thousand ad impressions) and whether your bids are optimized towards your ROAS goal. From there, you will be better equipped to make decisions.

Let's start with bid optimization. To better optimize your bids, utilize automated optimizers, such as ironSource's ROAS optimizer. You can feed these tools your ROAS goals, and they calculate the right bid for each of your apps, then update the bids automatically in real time. Not only does this save you time and manual effort, but the optimizers generally deliver better results than most humans can manage - since they update thousands of campaigns at a high degree of granularity.

In terms of creatives, each ad network has its own benchmarks and best practices to optimize your IPM, so it's vital to constantly A/B test and refresh your creatives. Use in-ad data such as funnel analysis, time to engage, time to complete, and more to understand where users are dropping off, then iterate on your creatives to improve performance. For more about optimizing creatives, check out this eBook.

Ultimately, you want to put yourself in the best position to drive quality traffic from your ad network campaigns. If you're considering starting to work with an ad network, follow these three steps to break down the process, and get in touch with us to learn more.
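To make the metrics above concrete, here is a minimal sketch of how IPM, ROAS, and a bid adjustment might be computed. All the numbers, and the clamped adjustment rule at the end, are illustrative assumptions - not ironSource's actual optimizer logic.

```python
# Sketch of the campaign metrics discussed above. Values are made up.

def ipm(installs: int, impressions: int) -> float:
    """Installs per thousand ad impressions."""
    return installs / impressions * 1000

def roas(revenue: float, ad_spend: float) -> float:
    """Return on ad spend: the fraction of spend recouped."""
    return revenue / ad_spend

# Example campaign snapshot
impressions = 250_000
installs = 2_000
ad_spend = 5_000.00     # USD spent on the campaign
revenue_d7 = 3_500.00   # revenue attributed within a 7-day window

print(f"IPM: {ipm(installs, impressions):.1f}")      # IPM: 8.0
print(f"D7 ROAS: {roas(revenue_d7, ad_spend):.0%}")  # D7 ROAS: 70%

# Hypothetical bid adjustment: nudge the bid toward the ROAS goal,
# clamped to +/-20% per update to avoid overshooting.
roas_goal = 0.50    # recoup 50% of spend by day 7
current_bid = 2.50
ratio = roas(revenue_d7, ad_spend) / roas_goal
new_bid = current_bid * min(max(ratio, 0.8), 1.2)
print(f"Adjusted bid: ${new_bid:.2f}")  # Adjusted bid: $3.00
```

In practice these numbers come from your MMP and ad network dashboards; the point of the sketch is only the arithmetic behind the terms.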

>access_file_
1342|blog.unity.com

A lightning round of great tips for 2D games

Are you planning to make a 2D game with Unity? Then take a look at these handy tips from our 2D Technical Product Manager Rus Scammell and 2D Product Marketing Manager Eduardo Oriz that will help you get started quickly and work efficiently throughout your entire project.

Use the 2D Template to get started fast with a new project. The template is available from the Unity Hub. Unique settings include:

- A default scene that uses a 2D view and comes with a camera set to orthographic projection that clears to a solid color
- The Editor set to 2D Mode by default so that new textures are imported as Sprites
- Real-time Global Illumination disabled
- Installation of 2D packages, including 2D Animation, 2D Pixel Perfect, 2D PSD Importer, and 2D SpriteShape, as well as the necessary dependencies

From 2020.2 onwards, the 2D menu items are displayed as top-level menus for GameObject and Asset creation. These include a set of primitive 2D Sprites for quick prototyping. Menus are also added for newer features such as SpriteShape and the Pixel Perfect Camera.

Pixels Per Unit (PPU) is an important concept in 2D development. A Sprite's PPU determines how many pixels of width or height in a Sprite image correspond to one unit of distance in world space. Consider the PPU of your Sprites as early as possible. Apart from controlling pixel density, the PPU also affects how Sprites are used by the Sprite Renderer draw modes as well as other systems like Tilemap. Choose a pixel density that suits the design of your game and target platform, and avoid unnecessarily large textures.

The 2D PSD Importer imports layered Adobe Photoshop PSB files into Unity. It enables you to use features such as Mosaic to automatically generate a Sprite Sheet from the imported layers and character rig. Unity then reassembles the Sprites of a character as they were arranged in their source files.
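The PPU relationship described above boils down to a division; here is a quick sketch with made-up sprite sizes:

```python
# Pixels Per Unit: how many sprite pixels map to one world-space unit.
# Sprite sizes and PPU values here are made-up examples.

def world_size(pixels: int, ppu: int) -> float:
    """Width or height in world units of a sprite dimension at a given PPU."""
    return pixels / ppu

# A 64 x 32 px sprite at 32 PPU spans 2 x 1 world units:
print(world_size(64, 32), world_size(32, 32))  # 2.0 1.0

# The same sprite left at Unity's default of 100 PPU appears much smaller:
print(world_size(64, 100))  # 0.64
```

This is why mixing assets authored at different pixel densities makes them look mismatched in the scene.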
The Importer is designed to work optimally with the 2D Animation system and multilayered character workflows.

Aim to reduce the number of passes required to draw the final color to the screen. When pixels are semi-transparent, each overlapping pixel must be considered when calculating the final color. Overdraw of overlapping transparent pixels slows down GPU performance, especially on less powerful devices or when higher frame rates are required. All of the Sprites in your scene will be considered transparent, but there are ways to reduce the overdraw.

Unity will overdraw the pixels only inside your Sprite's mesh. With the Mesh Type set to Full Rect, the complete rectangle will overdraw, but if the Mesh Type is set to Tight, the area to overdraw is reduced, since the generated mesh follows only the outline of your Sprite, ignoring the empty areas. If you want more control over the outline of the mesh, you can change it from the Sprite Editor.

You can also reduce overdraw by merging overlapping static Sprites. If the Sprites don't have to move and you don't need them for a parallax effect, consider merging them once your level design is finalized.

When you have many Sprites with static 2D Colliders, combine them with the Composite Collider 2D component. This provides better performance and smoother collision with the entire surface. The Composite Collider can be used with Tilemaps; in the image above, the purple tiles on the screen are used as Colliders. You enable it by adding the component and checking the Used by Composite box.

A Composite Collider 2D can also be used with parent GameObjects. Make sure the child Colliders are enabled by checking the Used by Composite box. Another way to optimize Colliders is to manually draw a simple polygon that works well with your GameObject.

Every Renderer attached to a GameObject, like a Sprite, has some overhead. To display many static Sprites efficiently, use the Tilemap Renderer for a more performant and convenient way of designing your game.
You can render hundreds of Sprites with just one Renderer. When you have many Sprites to render but you don't require sorting, use the Tilemap's Chunk Mode to reduce CPU and memory usage, which is important if you're targeting low-end devices. To use it, put all the static tiles in the same Tilemap and enable Chunk Mode. Make sure that all tiles are also in the same Sprite Atlas. The tiles might not be displayed correctly in the Scene view, but they will be sorted correctly when you enter Play Mode. One approach is to design your level using Individual Mode and change to Chunk Mode when you are ready to publish.

If you need Sprites to be sorted correctly - for example, to move a character in front of and behind individual tiles - use Individual Mode. For Sprites that need to be moved or have gameplay functionality, use a regular Sprite Renderer.

Sprite Shape is the 2D tool for designing organic shapes and levels. By default, the Sprite Shape API allows you to change the nodes of the spline at runtime, which can impact performance. If you don't require that runtime change, you can bake or cache the geometry of the spline for better performance. Select the Sprite Shape Controller, enable Edit Spline, and then Cache Geometry to bake the mesh. If you use Unity 2019 LTS or a later version and are modifying the spline at runtime, you can get a big performance improvement by installing the Burst package. Go to the Package Manager and install Burst version 1.3 or later.

You can use Sprite Shape Profiles to design environments similar to how you would with vector drawing software. The Profiles can be used for both gameplay and decorative elements. They work well for different art styles and for filling large areas fast, all while using fewer assets.

The Draw Modes of the Sprite Renderer help you save on the size of your assets and make the process of designing levels more fun. With tileable Sprites, you can have different GameObjects using the same Sprite in different sizes without stretching it.
Go to the Sprite Editor and move the yellow handles to frame the part of the Sprite that will be tileable. Apply it, then go to the GameObject and change the Sprite Renderer's Draw Mode to Tiled.

You can also design your level with 9-sliced Sprites, using blocks that look sharp and proportionate while you scale and arrange them. In the Sprite Editor, select the area that should be tileable, make sure that the corners fit in the corner areas that won't be repeated, and then set the Draw Mode to Sliced.

When you are designing 2D games, there will be situations where some of your GameObjects are in the same sorting layer and order. You could create many layers and some logic behind your moving Sprites to show them in the right order, but this isn't efficient. The way to tell Unity the right order to render Sprites in those situations is with the Transparency Sort Mode under Project Settings. Unity sorts Sprites in the order described by the direction of a Vector2 (defined by the X and Y axes in these settings). See how Unity's Eduardo Oriz, product marketing manager, and Rus Scammell, product manager, set up this optimization at around the 9:15 mark in their Unite Now talk.

Creating a frame-by-frame animation clip in Unity is simple. Select a sequence of Sprites in the Project window and drag them into the Scene view or the Hierarchy window. An animation clip and an Animator will be created for you automatically. If you want to add the animation clips to an existing Animator, simply drag those animation frames onto the Animator's GameObject, and the clip will be added to the Animator. Hold down the Alt or Option key while dragging the animation frames into the Hierarchy if you want to create a different GameObject for each Sprite.

Unity supports both frame-by-frame and bone-based animation.
Here are the key differences between the two approaches:

Frame-by-frame:

- Each frame uses a unique Sprite
- Additional animation clips require more Sprites
- Frame rate and speed are usually constant
- Transition animations require additional Sprites
- Does not work with 2D Inverse Kinematics (2D IK)

Bone-based:

- Each animation frame uses the same Sprites
- Each animation clip uses the same Sprites
- Works well for variable animation speed
- The animation system can create transitions for clips
- Works with 2D IK

Install the Burst and Collections packages to optimize 2D animation performance for high bone counts and Sprite meshes with high vertex counts. Animated Sprite deformation at runtime will also get a performance boost. This works by allowing the 2D Animation package to use Burst compilation and low-level array utilities to speed up Unity's processing of the Sprite mesh deformation. Make sure you are using Unity 2020.2 as well as 2D Animation 5.0.x. When you install the Burst package 1.3.3 or newer, you should also install Collections, which is a Preview package.

When preparing a Sprite for rigging, you have complete control over the mesh. To start quickly, the Skinning Editor provides an automatic tessellation option. However, you can customize this by using the mesh tools to add and remove vertices to craft the mesh that you need.

Add 2D Colliders and Rigidbody 2D components to bone-based characters to drive motion via the 2D physics system. In the example image above from Eduardo's and Rus's talk, Capsule Colliders were added to each bone along with Rigidbody 2D components. They were then connected with Hinge Joints. You can use angle limits on each joint to limit the range of motion, and you can also toggle whether the Rigidbody components connected via a joint collide with each other.

Keep a consistent pixel density across all of your assets.
If your project has 32 PPU, for example, you can expect one unit to contain 32 pixels, like the tile on the left side of the above image. The default 2D Camera expresses its size in vertical units from the center of the camera. In the example image here, we can fit 10 tiles vertically.

If your assets appear too small when you add them to the Scene view, compared to your original reference image, it's most likely due to a mismatch of resolutions. Unity's default PPU for Sprites is set to 100. When we change the Sprite Asset configuration to our project's 32 PPU, the Sprite will be the correct size.

By default, Bilinear filtering is enabled for assets to smooth out hard edges, but if you want a retro-style pixelated look, change the Filter Mode to Point to get sharp-looking visuals that take up the screen space defined at the start.

When you have many assets to import to your project, changing the settings for each individual Sprite to match your project settings is time-consuming. Instead, create an import template, or Preset, so that every imported asset will have the predefined configuration that you want. To do this:

1. Select an asset with the configuration that you want
2. Click on the Settings icon next to the Sprite Asset to create a template from it
3. Save it with a name that helps you remember what it does
4. Make it the default so that all newly imported assets will have that configuration (you will see the Preset Asset in your Project view)

If you are working with pixel art, make sure that the screen always shows the same number of pixels and that all those pixels stay the same size. First, go to your Camera and add the Pixel Perfect Camera component. Set the Asset PPU to the one that you use in your pixel art. Once you add the Pixel Perfect Camera component, it will take control of the Camera in Play Mode, but you can also see the changes in the Editor if you enable Run in Edit Mode.

In your project, characters and objects can change their position, rotation, or size.
By default, they will have a smooth interpolation, but if you want to stay true to the pixel art limitations indicated by the PPU of your project, enable Upscale Texture. The GameObjects will then move in increments of one pixel, and rotation and scale changes will also respect the visual constraints. Learn more about tips to create retro-looking 8-bit and 16-bit games.

Sorting Groups: Sorting Groups allow you to group 2D Renderers together. You might be using Sorting Layers and Order in Layer to make sure the rendering order is correct for each character, but this won't work if there is overlap and the individual parts become interlaced. You can place a Sorting Group on the root GameObject of each character to group the Renderers and sort them as one. With Sorting Groups, the characters no longer interlace and instead sort as a group. Sorting Groups work with all 2D Renderers as well as Particle Systems.

Sprite Atlas: This Asset packs several Sprites into a single combined texture. Unity can then use this single texture to increase performance by issuing a single draw call instead of one draw call per Sprite. By adding a folder to a Sprite Atlas, you can freely add and remove Sprites from your Sprite Atlas over the course of your production by simply changing the contents of the packed folders. It is also possible to create variants of Sprite Atlases that are at different sizes or have different compression settings, to target a range of platforms with the same source assets. You should consider an atlasing strategy that works for your game. Sprite Atlas has been designed so that you can find a balance between productivity, flexibility, and performance.

See how we're empowering 2D artists; learn about how Odd Bug used 2D lights to create mood in their game Tails of Iron; download the 2D sample project The Lost Crypt; and read this in-depth guide on choosing the best resolution for your 2D assets.
Finally, you can watch Eduardo’s and Rus’ Unite Now presentation here.
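As a footnote to the pixel-density tips above, the orthographic camera arithmetic can be sketched in a few lines; the tile size and counts are example values, not Unity defaults:

```python
# The default 2D camera's orthographic size is half the vertical view
# height in world units. Example values only; adjust for your project.

def orthographic_size_for_tiles(tiles_vertical: int, tile_pixels: int, ppu: int) -> float:
    """Orthographic size needed to fit a given number of tiles vertically."""
    world_height = tiles_vertical * tile_pixels / ppu  # height in world units
    return world_height / 2

# To fit 10 tiles of 32 px at 32 PPU vertically (one unit per tile):
print(orthographic_size_for_tiles(10, 32, 32))  # 5.0
```

The same relation run in reverse tells you how many tiles a given camera size will show, which is handy when choosing a resolution for your assets.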

>access_file_
1344|blog.unity.com

Hyper-casual games: What are they & how do you monetize them?

Since the App Store arrived 10 years ago, we have seen mid-core, casual, and (debatably) hard-core games take off in the mobile space, yet today a fourth category is dominating the app charts - hyper-casual.

In the past few years, hyper-casual games have seen unprecedented growth, and their momentum is only getting stronger. So what exactly makes these games so successful? And where do we see hyper-casual going? Here's our complete guide on the fast ascendance of hyper-casual games, and a look at how best to monetize them.

What are hyper-casual games?

As the name suggests, hyper-casual games are lightweight games with simple mechanics that offer instant gameplay. Literally "tap to play." Because of their fundamental 'simplicity,' hyper-casual games are not only instantly playable but infinitely replayable, making them highly addictive and engaging. Think - how many hours have you spent playing Join Clash this week? (Be honest.)

The combination of simple mechanics with minimalistic UI provides a very accessible and incredibly engaging user experience, so no tutorial is necessary. Even more so than casual games, players can instantly jump into gameplay and get hooked on their goal. Unlike other genres, which have very specific audiences, hyper-casual games are built for the masses. "If you can show your hyper-casual game in an ad, and people instantly understand what the game's about, you've succeeded - that's a hyper-casual game. It needs to appeal to a wide audience," Paul Woodbridge, Director of Design at MAG Interactive, shared on an episode of LevelUp.

The ascendance of hyper-casual games: Where did they come from?

Hyper-casual games aren't exactly "new", especially in the sense that they are, in some ways, a revival of 70s arcade games.
However, they only recently grew into the sensation they are today, grabbing 10 out of the top 15 spots in the top downloaded charts, versus the 'mere' 3 spots they held on the top charts just a few years ago.

The ascendance of hyper-casual games can be credited to three main factors: hyper-casual remains a global phenomenon, IPM for hyper-casual grew 70% in the last year (ultimately leading to more installs), and there are simply more hyper-casual developers on the scene. In fact, the low barriers to entry have helped several previously unknown developers make it to the top of the charts.

"As casual games implement deeper, mid-core features they become more engaging, but also more complicated. This, in turn, opens up a segment for hyper-casual games to dominate - games that are easy to start and fun to play," said Deconstructor of Fun founder and Rovio's Director of Product Management, Mishka Katkoff, in his interview with ironSource LevelUp.

IAP revenue vs. scale across game genres

Although hyper-casual games are dominating the download charts, most hyper-casual games still can't compete with Clash of Clans for highest-grossing games - and nor do they need to. While mid-core and casual games make the bulk of their revenue from IAPs, hyper-casual games mainly monetize through ads, a business model which is sustainable for them due to the huge scale they see in terms of downloads. "While the in-app revenue of their games is relatively low, the scale is great, and through video ads and cross-promotional work, the companies are able to operate a massive volume of business," said Katkoff.

With such great scale, developers creating multiple hyper-casual titles don't necessarily need to rely on revenue from IAPs, and for this reason some hyper-casual games don't even bother to include them. "The business model is different. You're not interested in building this long-term relationship with a player where they feel the need to spend money.
You have a much more short-term relationship, where you want them to quickly die and watch an ad to continue," explained Tom Kinniburgh, founder of Mobile Free to Play, on LevelUp.

Hyper-casual game monetization

As we've pointed out, due to their relatively unsophisticated in-app economies, hyper-casual games rely on ads as their primary form of monetization, so picking the correct ad units and monetization technologies is hyper important. Both are key for reaching hyper-casual's ultimate monetization goal - increasing ARPU.

Beyond evaluating specific ad units, developers should consider factors like session length and number of sessions per day when designing their ad implementation. Both elements present opportunities to serve more impressions to users, which in turn drives more revenue. Some top hyper-casual titles with great monetization strategies include Supersonic's Join Clash and Samurai Flash.

Hyper-casual game revenue and ideas

Below are a few different tactics, ideas, and strategies to boost revenue for hyper-casual games.

Rewarded video

When it comes to monetizing users, in many ways rewarded video is the most rewarding ad format (see what we did there?). In addition to generating more revenue for developers, incorporating rewarded video also increases retention and session length (unlike interstitial and banner ads, which can in some cases have the opposite effect).

While it may be more challenging to incorporate rewarded video in hyper-casual games lacking strong virtual economies, it's well worth your while to think about creative ways to incorporate the ad format. Developers looking to incorporate more rewarded video can try adding different layers to their games, or providing various valuable rewards such as gems, extra time, extra lives, etc. in order to drive more engagement.
A 40%-plus engagement rate and 4 impressions per user per day are good goals to aim for. (For a more in-depth look at which metrics are important for your in-app ad monetization strategy, you can check out our presentation on that very topic on Slideshare.)

Banners and interstitials

Even when developers have optimized their rewarded video strategy, they may still see that more than half of their users don't engage with the ad unit (and even fewer make purchases in the virtual store). This makes interstitials and banner ads a hyper-casual game developer's best friend, helping them successfully monetize all their users.

For interstitials there are two main factors to look at. The first is the number of ads shown per session, and the second is the type of interstitial - static, video, or playable. It's also critical to decide whether the ad will have a skip button, and if so, after how many seconds it will appear. While taking a more aggressive approach (more impressions per session, including playable and video ads, a 'skip ad' button that pops up only after 5 seconds) will generate higher ARPDAU, it may also simultaneously hurt retention and session length and alienate your users - so approach with caution.

Ultimately it's all about finding the 'sweet spot' that balances user experience and monetization to maximize LTV and ARPU. The best way to find the setup that generates the highest LTV is by A/B testing. For a benchmark, we recommend aiming for 3 impressions per session.

For developers looking to implement an even more sophisticated monetization strategy, it's also worthwhile looking at building tailored ad implementations for different segments. For example, for users that engage with rewarded video and make purchases in the store, taking a softer - i.e. less aggressive - approach to ads shown makes more sense, since these are high-engagement, likely also high-retention users.
For users that generate zero revenue through IAPs or rewarded ads, taking a more aggressive approach can work in order to yield more hyper-casual game revenue.

Impression-level revenue

While the vast majority of hyper-casual revenue comes from ads, getting a look at device-level data on ad revenue is just as important as it is for IAP-focused games. The more important a revenue stream ad monetization becomes, the more important this information will be for both user acquisition and overall monetization activities. In a way, ads are actually very similar to IAPs, in the sense that just as you have 'whales' who spend a lot in-app, you can also have 'ad whales' - i.e. users who generate significant hyper-casual revenue from engaging heavily with ads. As such, it's important for app developers to adjust their implementation and strategies based on device-level data on ad engagement and revenue. For example, ironSource's impression-level revenue solution enables developers to see which user segments are not engaging with rewarded video at all, and can therefore be shown more interstitial and banner ads.

In-app bidding

In addition to implementing the ad units, the monetization stack that powers those ad units is just as important for hyper-casual developers to focus on. By functioning as an auction among ad networks, in-app bidding, which in the past year has gained significant traction, lets app developers effectively automate monetization while still garnering the highest value for each impression.

Because hyper-casual games rely so heavily on ad monetization, and it's common for hyper-casual developers to have large portfolios, activating in-app bidding can help reduce the operational overheads that come with monetization optimization.
Essentially, instead of spending hours manually optimizing several waterfalls to maximize eCPM, in-app bidding technology automatically serves the impression to the highest-paying ad network. ironSource's in-app bidding solution, for example, is the first on the market to grant developers immediate access to top in-app bidding networks like Facebook Audience Network.

Hyper-casual game mechanics

- Idle mechanics
- Puzzle mechanics
- Tidying mechanics
- Resizing mechanics
- Stacking mechanics
- Rising and falling mechanics
- Swerving mechanics
- Agility and dexterity mechanics
- Color-matching mechanics
- Turning mechanics
- Capture-the-territory mechanics
- Drawing mechanics
- Aiming mechanics
- Tap and timing mechanics
- Pushing mechanics
- Direction mechanics

The best and top hyper-casual games of 2021

According to Sensor Tower, these are the top hyper-casual games worldwide by downloads in 2021:

- Join Clash by Supersonic Studios
- DOP 2 by Say Games
- Sushi Roll 3D by Say Games
- Project Makeover by Bubblegum Games
- Roof Rails by Voodoo
- Phone Case DIY by Crazy Labs
- Oh God by Alictus
- Hit Master 3D by Azur
- Stacky Dash by Supersonic Studios

The future of the hyper-casual market

As Katkoff points out, "hyper-casual and ad monetization are proving to go hand in hand" - changing the landscape of the monetization game. With so many hyper-casual titles popping up every week, developers may have to begin implementing deeper features in order to stand out from their competition, and it will be interesting to see how they do so within the category's natural parameters. Hyper-casual games have opened up a realm of new possibilities in the market and it will be exciting to track the category's progress over the next year.

Listen to the first episode of our hyper-casual podcast mini-series. Want to learn what goes into making the world's most popular games? Subscribe to LevelUp to never miss an update.
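To make the monetization arithmetic in this post concrete, here is a small sketch of an ad ARPDAU estimate and of how a bidding auction picks a winner. Every rate, eCPM, and network name below is an invented example, not an ironSource benchmark or API.

```python
# Illustrative sketch: estimating ad ARPDAU from impression counts, and
# the core idea of in-app bidding. All numbers are made up.

def ad_arpdau(impressions_per_dau: float, ecpm: float) -> float:
    """Average ad revenue per daily active user; eCPM is revenue per 1,000 impressions."""
    return impressions_per_dau * ecpm / 1000

# e.g. 3 interstitials per session * 2 sessions/day at a $10 eCPM,
# plus 4 rewarded-video impressions/day at a $25 eCPM:
revenue = ad_arpdau(3 * 2, 10.0) + ad_arpdau(4, 25.0)
print(f"Estimated ad ARPDAU: ${revenue:.3f}")  # Estimated ad ARPDAU: $0.160

# In-app bidding: each network bids in a real-time auction and the
# impression simply goes to the highest bidder - no manually ordered
# waterfall to maintain.
bids = {"network_a": 12.5, "network_b": 18.0, "network_c": 15.2}  # eCPM bids
winner = max(bids, key=bids.get)
print(winner)  # network_b
```

A waterfall, by contrast, would try the networks in a fixed, hand-tuned order and stop at the first one willing to fill the impression.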

>access_file_
1345|blog.unity.com

Augment assembly work planning with digital human simulations

Learn how Unity is supporting the MOSIM research project to simulate and analyze complex, realistic human motions for a range of manufacturing use cases. With the ability to simulate assembly worker tasks in minutes rather than weeks, companies can improve production planning, increase worker productivity and safety, and reduce risk and costs.

The MOSIM project is a consortium championed by auto manufacturer Daimler in partnership with Unity and more than 20 other partners. It seeks to provide an open standard for digital human simulations for industrial applications.

Download MOSIM on GitHub

Even in today's age of automation, the assembly of vehicles and other manufactured products still requires significant manual work. Much about the production planning process can be improved, as workers' tasks are typically not visualized in 3D but described in text. Additionally, validation for these tasks occurs on hardware prototypes. Simulation can make this process more efficient, but generating a process simulation has historically been time-consuming and required expert tools.

While it's possible to simulate specific, individual actions (e.g., affixing a part), simulating a joint sequence of actions - for instance, walking to a part, picking it up, walking to the vehicle, and affixing the part - requires extensive manual effort and physical motion capture of the actual location and setup. Aside from very specific, high-risk scenarios that might justify the complexity and time investment, most companies are deterred from making this effort.

"Manual assembly for a car or bus requires a lot of different actions, and currently there is no existing tool that can simulate those scenarios comprehensively.
The challenge is bringing these individual sequences together, so we wanted to create a framework for digital human simulations that could help us do that,” says Felix Gaisbauer, PhD Researcher at Daimler Buses - EvoBus GmbH and Technical Coordinator for MOSIM.As the production of cars, trucks, buses, and more become increasingly more complex and competitive, the need to maximize efficiency is paramount. Reliance on physical testing and optimization not only hinders productivity and inflates costs, but often leaves manufacturers lacking confidence in their operational efficiency.“While these efforts have improved over the last decade, dynamic simulation of humans is more or less not being done. Right now is really a prime moment for simulation in production,” says Thomas Bär, Manager at Daimler Buses - EvoBus GmbH and Project Leader for MOSIM.With MOSIM’s modular, Unity-based human simulations, manufacturers can easily piece together a series of character animations into a comprehensive simulation. They can:Visualize and verify the assembly steps in real-time 3DSimulate hypothetical scenarios to determine the feasibility and optimal sequence of tasksConduct metric-based assessments (e.g., ergonomics, buildability)Optimize and improve processes before completing physical setups or training teams, which greatly reduces costs and inefficienciesSimulations can be created in mere minutes, as compared to the roughly two weeks it would take to create a comparable simulation manually. From identifying ergonomic opportunities during the assembly process to improving worker productivity, safety and training, MOSIM has enormous potential to impact numerous stages of production.“This opens up a lot of new possibilities. When you can simulate humans moving realistically through assembly sequences, it provides additional opportunities to investigate optimization potential in the factory, such as adjusting shelf positions in order to shorten walk paths. 
It can even be used to provide immersive training for workers on how to complete their tasks,” says Gaisbauer.Leveraging Unity’s real-time 3D platform at the core of MOSIM’s framework opens up new possibilities for visualization and simulation, enabling digital avatars and animations that elevate the entire experience. Unity allows for fast prototyping as well as easy deployment to devices like virtual reality headsets.“I really like the way Unity’s programming works, the large Asset Store, and that most of our project partners have more experience with Unity than other engines. The comprehensive package Unity provides is what makes the difference for us,” says Gaisbauer.MOSIM can be accessed via a web-based application as well as in Unity. It is capable of automatically generating worker assembly simulations in 3D using standardized text descriptions, such as “Pick Up & Put Down.”Inspired by the Functional Mock-up Interface (FMI) approach, the industry open-source standard for computer simulations, MOSIM offers Motion Model Interfaces (MMIs) and their implementations called Motion Model Units (MMUs), which range from simple animations like walking to complex tasks like climbing a ladder.MOSIM includes a library of predefined human motions in its open-source repository on GitHub and encourages the Unity community to contribute to the creation of more MMUs. A Unity-based “MMU Generator” enables the creation of custom MMUs from FBX and other file formats without requiring programming knowledge.“This is something special. We don’t have this for any other gaming engine – it is unique to Unity,” says Gaisbauer. “It simplifies development and we hope it will attract more animation artists and developers to provide more MMUs.”Check out the MOSIM repository on GitHub and bring the power of digital human simulation to your application.
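The core idea above, composing individual motion units into a joint sequence of actions, can be illustrated with a toy sketch. Note that the unit names, durations, and the sequencer itself are hypothetical and for illustration only; they are not the MOSIM MMU API:

```python
# Toy sketch: composing individual motion units into an assembly sequence.
# The names and durations here are invented, not MOSIM's real MMUs.
from dataclasses import dataclass

@dataclass
class MotionUnit:
    name: str          # e.g., "walk" or "pick up"
    duration_s: float  # simulated duration in seconds

def simulate(sequence):
    """Play the units back to back and return the total task time."""
    total = 0.0
    for unit in sequence:
        total += unit.duration_s
    return total

# The article's example sequence: walk to a part, pick it up,
# walk to the vehicle, affix the part.
task = [
    MotionUnit("walk to part", 4.0),
    MotionUnit("pick up part", 1.5),
    MotionUnit("walk to vehicle", 5.0),
    MotionUnit("affix part", 3.0),
]
print(simulate(task))  # total simulated task time in seconds
```

The value MOSIM adds over a trivial sequencer like this is that each unit carries real animation and metrics (walk paths, ergonomics), so chaining them yields an assessable 3D simulation rather than just a duration.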

>access_file_
1346|blog.unity.com

The 4 KPIs you should be monitoring to optimize in-app bidding performance

In the past year, we've seen a surge in the number of game and app developers who have adopted in-app bidding into their monetization strategy. But as with any new technology, and especially one that automates manual operations, it's important to retain as much control as possible and get full transparency into its performance. By relying on hard data, you'll be better equipped to optimize and control the automated setup, make the right decisions for your business strategy, and manage the conversation with your ad network partners.

Here are 4 KPIs you should be monitoring through your mediation platform to make sure your in-app bidding setup is performing at maximum efficiency.

1. ARPDAU

In a traditional waterfall, developers have complete control to set price floors or CPMs for each network, and use eCPM to measure the impact on network performance. However, in an in-app bidding stack, the networks' algorithms determine CPM in a real-time auction, and there's less developers can do themselves to increase a specific network's performance. Accordingly, eCPM is no longer a relevant or actionable KPI for measuring monetization performance. That's why ARPDAU, or average revenue per daily active user, has become the go-to metric for measuring in-app bidding performance.

ARPDAU measures the revenue from in-app purchases, ads, or both on any given day, by summing the revenue on that day and dividing it by the number of unique active users. It provides a holistic view of overall app revenue, and can indicate the health of your monetization stack.

2. Bid rate

Each time an app calls to load an ad, the in-app bidding auction sends a bid request to each bidding network, asking whether it would like to bid and how much. The bidding network can choose whether or not to provide a bid.
Here, bid request represents the number of times a bidding network is asked to participate in the auction, and bid response represents the number of times a bidding network responds to the request with a valid bid.

Putting this together, the bid rate KPI is the percentage of times that a bidding network chooses to participate in the auction, and is measured by dividing bid responses by bid requests. Think of it as the equivalent of fill rate in the traditional waterfall.

It's important to keep in mind that bid rate can vary significantly between networks due to differences in strategy, technology, and demand, meaning bid rate is not the best indicator of individual network performance. Rather, you can use bid rate to compare a network's performance across your app portfolio and identify server tech issues in each. For example, if on average you see the ironSource network has a 99% bid rate, but on one app it's at 70%, that's a signal something is wrong. Meanwhile, a 0% bid rate most likely indicates a setup issue.

3. Win rate

Win rate is the percentage of times a bidding network wins the in-app bidding auction and gets to load the ad, and is calculated by dividing bid wins by bid responses.

Analyzing win rate can help you determine the strength of a bidding network and pinpoint opportunities to optimize your monetization setup. For example, a decline in a network's win rate may be due to competition that's improving performance, or a change in the network's demand. Meanwhile, an increase in a network's win rate may encourage you to use that network across other games in your portfolio, or other segments or countries in that game.
Additionally, a drop in win rate across all bidding networks at the same time may signal setup issues: perhaps bids aren't sorted by eCPM, or rates are set incorrectly.

Note, however, that networks with lower bid rates, which are more selective about the bid requests they choose to respond to, tend to have higher win rates. Ultimately, win rate as a KPI is most valuable when compared over time, and when used alongside render rate, which we'll dive into next.

4. Render rate

Render rate is the percentage of times a bidding network wins the auction and then serves an impression. It's measured by dividing the number of served impressions by bid wins.

Comparing render rate against win rate can alert you to operational issues with your in-app bidding stack. For example, large discrepancies between win rate and render rate may suggest that the network is bidding on less engaged users, that there's a poorly placed traffic driver, or that you didn't call to show the ad. Take note that when measuring these in-app bidding KPIs, it's critical to analyze them by network, geo, app, or ad unit, rather than overall. Doing so will lessen the load and make the numbers easier to decipher.

Render rate is useful for more than just troubleshooting technical issues. This metric can be used to understand the level of competition within a network, by showing you how many impression opportunities manifested into a real impression. That's because though the auction takes place on the impression level, not every opportunity for an impression manifests into an ad served.

These 4 KPIs should be top of mind when looking to optimize your monetization bidding stack. Be sure to leverage the reporting transparency your platform offers, in order to better control your business strategy and increase your app's revenue.
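Taken together, all four KPIs reduce to simple ratios over per-network daily counters. A minimal sketch (the revenue and count figures are invented for illustration, and render rate is computed here as impressions over wins, following the common convention):

```python
# The four in-app bidding KPIs from raw daily counters.
# All numbers below are invented; real values come from your mediation reports.
def bidding_kpis(revenue: float, dau: int, requests: int,
                 responses: int, wins: int, impressions: int) -> dict:
    return {
        "arpdau": revenue / dau,            # revenue per daily active user
        "bid_rate": responses / requests,   # how often the network bids
        "win_rate": wins / responses,       # how often its bids win
        "render_rate": impressions / wins,  # how often wins render an ad
    }

kpis = bidding_kpis(revenue=2_000.0, dau=50_000, requests=10_000,
                    responses=7_000, wins=1_400, impressions=1_260)
print(kpis["arpdau"])  # 0.04
print(f"{kpis['bid_rate']:.0%} bid, {kpis['win_rate']:.0%} win, "
      f"{kpis['render_rate']:.0%} render")
```

Running the same calculation per network, geo, and ad unit, as the article recommends, is just a matter of feeding in segmented counters instead of totals.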

>access_file_
1347|blog.unity.com

5 innovative projects tackling pandemic challenges with real-time 3D

Learn how enterprises are overcoming COVID-19 challenges by turning to immersive technologies, and how these tools will continue to affect the way we work in 2021.

The COVID-19 pandemic took the world by storm, introducing a myriad of unprecedented challenges. From adapting to remote work to the difficulties of social distancing, companies faced workplace and facility interruptions, canceled events, and more.

With the physical world upended, teams turned to virtual tools to stay on track: A Unity survey found that 63% of surveyed companies used immersive technologies like real-time 3D, augmented reality (AR), and virtual reality (VR) to navigate the challenges of COVID-19*.

On top of a considerable pivot in working conditions during the pandemic, industry experts anticipate that the wide range of ways companies have embraced these tools will continue to shape work in 2021. Learn more about how experts at Unity, Microsoft, Oculus, and NVIDIA predict enterprise adoption of AR, VR, and more will be affected in our 2021 Immersive Technology Trends report.

Watch this video to see a snapshot of these trends.

Although many of these trends have been on the rise for years, some are just beginning to emerge in response to the pandemic, and innovators have been quick to adopt them. Even with the sudden shift to remote work, organizations were able to tackle business challenges with real-time 3D, enabling human connection and cross-functional collaboration regardless of geographic location.

Read on to discover how 5 organizations are overcoming hurdles with the help of Unity:

Automotive design firm Pininfarina has been using virtual reality for a number of years; however, it did not realize how useful the technology would be as it transitioned to remote work. Its 900 employees were able to continue collaborating and conducting design reviews seamlessly across three continents despite lockdowns around the globe.

Siemens continues to connect employees and customers with a virtual simulation lab that it created in two weeks. The lab is a digital twin of the physical Siemens location in Munich and allows the company to continue demonstrating projects virtually, such as its dust remover demonstrator. To top it all off, it was a winner of the Unity Awards in 2020.

As physical events began to be canceled, Stratasys had to change its plans surrounding the global launch of its new Stratasys J55 3D printer. Working in partnership with Visionaries 777, the company created an AR mobile application to help customers across the world visualize the new 3D printer in any physical space at 1:1 scale, from the safety of their own home or office.

With the cancellation of the Geneva International Motor Show, a key event for Volkswagen, the automaker made history by hosting its very first virtual motor show. Working with Endava, VW showcased 36 vehicles in great detail with a 360°, real-time 3D experience and allowed visitors to interact with its new models from their web browser anywhere in the world.

Proper use of personal protective equipment (PPE) is critical to staff and patient safety, and effective training materials are essential for healthcare workers. At the onset of the COVID-19 pandemic, the U.S. Department of Veterans Affairs (VA) worked with the studio Immersion to provide healthcare workers and service members an interactive, web-based safety training experience for putting on and removing PPE.

Download our 2021 Immersive Technology Trends Report

*Unity Research study "2020 Immersive Tech Trends survey," 130 Unity industrial customers, conducted October 27 – November 2, 2020.

>access_file_
1348|blog.unity.com

Create state-of-the-art car cockpits with Elektrobit’s EB GUIDE and Unity

The growing prevalence of high-performance computing platforms in cars, combined with the increasing sizes and numbers of displays, is driving demand for more immersive in-vehicle human-machine interface (HMI) experiences. To make these advanced user interfaces (UIs) possible, Unity is collaborating with Elektrobit (EB), whose EB GUIDE platform powers in-car UIs for more than 50 million vehicles from manufacturers such as Audi, GM, and Volkswagen.

Learn more about the Unity-EB HMI collaboration at our upcoming session during CES 2021. Join us on Tuesday, January 12 at 9 a.m. PST / 5 p.m. GMT to get your questions answered and see a demo of an automotive cockpit UI that brings together EB GUIDE and the Unity Editor.

Register now

Traditional HMI processes for going from design to device are not only time-consuming but require the key collaborators – automotive manufacturers, design partners, Tier 1 HMI system suppliers, and key system-on-chip (SoC) vendors – to make many trade-offs and compromises. When design visions are not reflected as originally imagined in the production UI and HMI implementation, a major reason is that teams use different software, different terminology, and different workflows during the design and development stages. In the end, a lot of work needs to be repeated and recreated.

An integrated HMI toolchain that carries the design process all the way through to implementation can drive major efficiencies, reducing the design-to-development time, effort, and complexity required to bring these experiences to vehicles.

Proof-of-concept HMI demo created using Unity and EB GUIDE, running on an NXP i.MX 8QM-based platform and Yocto Linux

Unity believes real-time 3D technology brings significant improvements to this process. It paves the way to transform HMI workflows from a disconnected series of related activities into a streamlined, efficient, and cost-effective delivery system for superior user experiences.

Because Unity's real-time 3D enables what-you-see-is-what-you-get development, all content can be developed in the same context, previewed, and iterated on together. Teams can preview designs much earlier and deploy them directly to embedded targets (e.g., automotive-grade chipsets) – not just during development. This is a game-changer that can help eliminate the compromises and complexities that typically occur as projects transition from design to development.

To bring the power of real-time 3D to this field, Unity is committed to working with the broader HMI ecosystem and market leaders like EB. EB offers one of the industry's most widely used HMI platforms, EB GUIDE, and is one of the leading companies addressing functional safety.

Our collaboration with EB enables Unity to help power the creation of automotive-grade HMIs that integrate 2D, 3D, and safety-relevant content into a single experience. Teams can design and develop state-of-the-art HMI experiences more easily and quickly while ensuring they meet the safety needs and legal regulations of the automotive industry.

Learn more about the Unity-EB collaboration from Thomas Moder, product manager of EB's HMI solutions

The Unity-EB collaboration introduces a new end-to-end workflow to get from UX design to embedded design seamlessly. It integrates the Unity Editor and EB GUIDE, facilitating a direct co-development experience between the platforms. Teams can create content in each application – for instance, the UI in EB GUIDE and 3D assets and animations in Unity – then preview the combined content in EB GUIDE Studio. To show what's possible, EB and Unity created a joint proof-of-concept demo running on an NXP i.MX 8QM-based platform and Yocto Linux.

Content from the Unity Editor can run simultaneously in EB GUIDE Studio, enabling an instant preview of how the HMI will appear on target hardware

The instrument cluster and safety-relevant content are created in EB GUIDE Studio, while contemporary 3D navigation and 3D in-car games are made with Unity. This offers a glimpse at how EB GUIDE customers can diversify the content placed in the automotive cockpit by leveraging Unity's real-time 3D rendering and graphics capabilities.

For a deeper dive into the Unity-EB collaboration and to see a demo, make sure to attend our upcoming webinar on January 12.

Learn more about Unity for HMI

>access_file_
1349|blog.unity.com

Explore, learn, and create with the new HDRP Scene template

We are excited to share our brand-new template for the High Definition Render Pipeline (HDRP), which helps beginners get started with multi-room lighting setups, physically based lighting intensities, and much more.

The scene was created by a small group of game industry veterans composed of 3D environment artists, VFX artists, lighting artists, and technical artists. They previously worked on world-renowned game licenses such as Assassin's Creed, Batman: Arkham, Crysis, FIFA, Grand Theft Auto, Need for Speed, Red Dead Redemption, and Watch Dogs.

You can run the HDRP template on your machine by downloading Unity 2020.2 and starting an HDRP project in the Unity Hub: create a New Project, select the High Definition Render Pipeline template, and hit the Create button. I also encourage you to stream the template from the cloud using Unity's Furioos platform – you only need a web browser! This template requires mouse and keyboard input, and your session time will be limited to 5 minutes.

HDRP has a feature set tailored for high-fidelity graphics on high-end hardware (desktop PCs and game consoles). Some techniques and concepts used in HDRP can be difficult to understand for newcomers or anyone who isn't familiar with industry standards and photographic concepts. This is why we created this new HDRP template as a learning tool.

For the past couple of years, you might have used the previous template, which features a tiny construction site. That template lacks a physically correct lighting setup, which is a major drawback when it comes to understanding HDRP's features. To be more precise, the sun intensity in the older template was set at ten thousand lux, 10 times lower than its real-world counterpart. This has dramatic implications for the entire lighting setup, notably the exposure and the calibration of other light sources.
This incorrect setup created a lot of confusion for artists and designers who wanted to adopt a physically based workflow, and beginners were bound to be confused by the apparent randomness of the selected values.

Listening to user feedback, we heard that you want more examples of interior and exterior transitions. These scenarios can be very difficult to handle due to the immense exposure variations between a brightly lit outdoor area and a darker, artificially lit interior. To help you understand how the lighting is set up, I prepared a cheat sheet with important values. You will find color temperatures and intensities of common light sources, as well as exposure values, set via the Exposure Volume Component, for several types of scenes.

Finally, HDRP's Volume System, although common in many AAA engines, can be daunting for beginners who aren't familiar with the hierarchical concepts of global volumes and local overrides that are necessary to handle rendering settings on a per-location basis. As a consequence, the former template, made of one area only, could not showcase the great potential of the volume system.

The new template is set up in a physically based way, with a realistic sun intensity of 100,000 lux and correct exposures for each location. Beginners now have a good setup to start lighting their scenes, and they can experiment confidently with this template, knowing that the lighting is already correctly tuned.

Once you open the HDRP template, you will find three interconnected rooms with distinctive vibes and lighting setups. Each area has its own set of local volumes to handle the exposure using the brand-new Automatic Histogram mode. They also include various other HDRP settings, such as Volumetric Fog and White Balance, to simulate the natural adaptation of your brain or the auto white balance of a camera.

Feel free to explore the environment by jumping into Play mode to get a sense of the scale and appreciate the environment from a human perspective.
Press the WASD keys to move and use the mouse to look around.

The first room consists of a circular sunlit arena with a large concrete platform, perfect for testing your assets in a low-noise environment. The area makes extensive use of HDRP's Decal Projectors to simulate grime and water puddles, which also provide more visual variation by breaking up the apparent tiling of the concrete materials.

Taking the stairs down to the second room, you'll discover a naturally lit interior featuring a windowed tree cage with advanced materials showcasing transparency, subsurface scattering, and tessellation. The area also offers a couple of GPU-based special effects, in the form of floating dust and butterflies inside the tree cage. A custom Density Volume simulates the higher humidity in the vicinity of the cage and creates beautiful rays of light. The room is a perfect showcase for the capabilities of the GPU Lightmapper, since the majority of the indirect lighting is provided by the sole opening in the ceiling. This lets the sun and sky lighting bounce around to produce beautifully smooth gradients of light, emphasized by the vibrant paint used on specific walls.

If you follow the ramp leading to the third and last room, you will discover an ultra-minimalist living space featuring real-time spotlights, a set of three ceiling lamps, and a long, emissive strip light that once again takes advantage of the GPU Lightmapper. The majority of the illumination in this space is generated by artificial lights, making extensive use of soft shadows, light cookies (projection textures), and carefully placed Reflection Probes. As you approach the lamps, subtle specks of dust light up as they traverse the beams of light.

First of all, the main constraint for this project was a 100-megabyte limit! By today's standards, this is a very small amount of data, especially when it comes to texture budgets, notably for albedo, normals, lightmaps, and reflection probes.
However, a small template size allows users to download and import the HDRP package very quickly, no matter where they are in the world, and it complied with the internal size limitations for Scriptable Render Pipeline (SRP) templates at the time of its production.

To make the best use of our data budget, we decided to adopt a brutalist architectural style with strong shapes, simple materials, and a small number of reusable props. Nevertheless, we weren't shy about using complex geometrical shapes for this environment, particularly curved and tilted ones, which can cause numerous problems when it comes to light baking and the placement of box-shaped rendering GameObjects, such as Reflection Probes and Volumes.

To handle the skybox, we decided not to include a custom (and memory-intensive) HDRI texture, as it would have radically increased the size of the template. Instead, we rely on the built-in, low-resolution HDRI from HDRP. The main drawback is that it doesn't include a sun disk.

Finally, one of the main victims of the 100 MB limit was Reflection Probes: a maximum resolution of 256 pixels had to be used to minimize the memory footprint of the 18 Reflection Probes in the scene. This kind of resolution is common in games. Nevertheless, if you require pin-sharp reflections for your mirror-like assets, nothing prevents you from increasing the Reflection Probe resolution in an HDRP asset and then rebaking the Reflection Probes on your local machine. Obviously, the memory impact will grow with the resolution and the size of the Probe Cache.

The template offers multiple quality levels to perform well on a wide variety of hardware. Head to Edit > Project Settings > Quality and choose between the Low, Medium, and High settings. For instance, in the Low quality mode, Volumetric Fog is disabled to maximize the framerate.
On the other end of the scale, the High setting offers soft shadows with penumbra approximation and effects with a higher sample count across the board.

The project uses a mixture of real-time lights, lightmaps, and light probes. The entire structure and the largest assets take advantage of the GPU Lightmapper. Most lights are therefore set to Mixed, and the light bake is generated using the Baked Indirect mode. This provides a soft light bounce as well as beautifully smooth occlusion shadows, while ensuring the direct lighting and shadowing remain entirely real-time. Small objects, by contrast, rely on a network of Light Probes distributed across the entire scene, rather than on the slower lightmapping approach. I always recommend spending some time minimizing baking times by forcing smaller objects to only Receive GI from Light Probes, and/or preventing them from Contributing GI altogether, in the Mesh Renderer's Inspector window.

Additionally, to let you experiment with different levels of indirect lighting quality, I provide several light baking presets for the GPU Lightmapper, accessible via the Lighting Settings asset under Window > Rendering > Lighting. For example, on a reasonable GeForce RTX 2070 Super, the Draft preset should produce an extremely quick yet blotchy result in 10 seconds, ideal for quick iterations, whereas the Ultra preset will yield extremely clean, production-quality lightmaps in just four minutes. Nevertheless, depending on your GPU, your mileage may vary, so I recommend experimenting with the parameters that can greatly influence both quality and baking time, especially the Indirect Sample count and the Texel density.

As new HDRP features emerge, the template will be updated to showcase them, so keep an eye on future releases of Unity and HDRP. Stay tuned for my upcoming Unite Now talk, where I will explain in more detail how I set up the lighting in this new template, as well as the volumes, exposure, lightmapper, post-processing effects, and many other HDRP features. Also, register for the upcoming NVIDIA webinar where I will show you how to enable and tune the ray tracing effects in this template. When you register and attend the entire webinar, you have a chance to win an NVIDIA Quadro RTX 5000!

In the meantime, have a look at my previous Unite Now session, titled Achieving high fidelity graphics for games with the HDRP, where I present important HDRP features and physical concepts such as exposure and lighting intensities. You can also find more HDRP learning resources in the blog article Create jaw-dropping graphics with these High Definition Render Pipeline resources.

We hope you will find this new template educational, and we look forward to seeing how you experiment with it.

Pierre Yves Donzallaz (Technical Art Manager, R&D, Graphics) is an experienced lighting artist with over a decade of AAA experience in the field of real-time rendering. He has a strong technical and artistic background and specializes in lighting, level beautification, UX, tools design, and workflow improvements. He has worked on award-winning games and large AAA productions including the Crysis series, Ryse: Son of Rome, Grand Theft Auto V, and Red Dead Redemption 2. He is currently a member of Unity's R&D Graphics team, where he leads fellow technical artists whose mission is to improve artists' efficiency, educate users globally, and develop new tools, workflows, and graphical features alongside engineers and designers.

>access_file_
1350|blog.unity.com

Become a better Unity developer with these tips from the community

Every Tuesday, Unity users share their best tips on Twitter with the #UnityTips tag. We’ve put together a list of the top tips from the past four months to help you enhance your visuals and improve your workflows. Get ready to take your Unity skills to the next level!

Even if you’re not an artist, these tips will up your graphics game to help your project stand out from the pack and get noticed.

- Use Shader Graph to create a stylized waterfall shader.
- If you’re using the built-in renderer and you prefer writing shader code, this tip shows you how to create a stylized water shader.
- MudBun is an impressive package for building real-time volumetric effects. See what’s possible through these volumetric water particles.
- For more technical users, check out this advanced approach to building your own fluid particle rendering system.
- Tired of that default Unity lighting look? Follow this guide to see how you can remove it, and learn a little more about Unity lighting along the way.
- Here’s a simple way to fake background volumetric lighting using line renderers.
- Learn about fur shells for a simple hair and fur effect.
- Try this easy way to add simple outlines to your mesh with Shader Graph.
- Learn how to add color to your shadows using raytracing in Unity.
- You can animate vertices in Shader Graph using the new Master Stack.
- If you’re interested in raytracing, why not check out this 60-second raytraced reflections tutorial?
- See here how you can make your objects disintegrate.
- Learn how to outline selected objects using command buffers.

In case you missed it, the Unity Editor has had some great improvements in 2020, and you can further customize and tweak it using these tips to do your work faster and smarter than ever before.

- You can define your automatic naming scheme for duplicate objects.
- Scripting defines are now displayed as an Inspector array, which is much nicer to use than the old comma-separated list.
- Presets are a convenient new feature that you can use to quickly copy default parameters between objects.
- Check out the Device Simulator to more accurately test your UI across multiple devices.
- Did you know you can use these built-in tags to automatically remove objects from your build?
- Did you know that the Particle Editor has handy keyboard shortcuts?
- Try this free custom hierarchy package to keep your game object hierarchy organized.
- Missing a project’s Unity version? Rather than looking through the download archives, try this Unity Hub feature to install Unity versions with one click.
- There’s an experimental project setting that allows you to jump into Play Mode instantly by disabling the automatic domain reload.

Programmers will appreciate these handy little shortcuts and code snippets. Tuck these away in your code arsenal so you’ll be ready the next time you encounter one of these situations.

- Use these snippets to calculate a random point on a circle or sphere.
- Use this code to convert between radians and degrees.
- Did you know you can add functionality to enums with extension methods?
- You can use nameof to reference your fields in Inspector field scripts for easier code maintenance.
- If you like to keep your serialized fields private, you might try these different ways to fix that pesky CS0649 warning.
- If you use interfaces, you’ve probably struggled with a reference still appearing valid even when the underlying object is null. Here's a nice way to fix that problem.
- Did you know you can accurately predict physics trajectories by separately simulating the physics?
- Did you know Unity has a physics debugger? You can use it to find invisible triggers or solve confusing collision issues.
- This is a clever way to automatically distribute an expensive function over multiple instances of the same object.

We hope you learn something new from these helpful tips. For more, search for the #UnityTips hashtag on Twitter, and you can also get involved with the community by sharing your own tips and best practices every Tuesday. Don't forget to follow @Unity3d for a weekly #UnityTips Tuesday reminder!
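The last programmer tip, distributing an expensive function over multiple instances, boils down to offsetting each instance's update schedule so only a fraction of them pay the cost on any given frame. A minimal sketch of the idea (Python for illustration, not Unity C#; the instance count, interval, and frame loop are made-up numbers):

```python
# The pattern: instead of every instance running an expensive function on
# the same frame, offset each instance so the cost is spread over frames.

class Instance:
    def __init__(self, index, interval):
        self.index = index        # per-instance offset into the schedule
        self.interval = interval  # run the expensive work every N frames
        self.runs = 0

    def update(self, frame):
        # Each instance fires on a different frame modulo the interval.
        if (frame + self.index) % self.interval == 0:
            self.expensive_work()

    def expensive_work(self):
        self.runs += 1  # stand-in for pathfinding, visibility checks, etc.

interval = 4
instances = [Instance(i, interval) for i in range(8)]

per_frame_load = []
for frame in range(16):
    before = sum(inst.runs for inst in instances)
    for inst in instances:
        inst.update(frame)
    per_frame_load.append(sum(inst.runs for inst in instances) - before)

# With 8 instances and an interval of 4, exactly 2 instances do the
# expensive work each frame instead of a spike of all 8 at once.
print(per_frame_load)
```

Each instance still runs its expensive work at the same overall rate; only the timing is staggered, which is why this smooths frame times without changing behavior.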

>access_file_

[ 2020 ]

7 entries
1355|blog.unity.com

The best tutorials of 2020

Planning to scale up your skillset over the holidays? This collection of our most-watched tutorials from 2020 is a great place to start.

As the year comes to an end, there’s no better time to reminisce about what we’ve been up to in the last 12 months. So we’ve compiled a list of some of our favorite and most-watched tutorials of 2020.

In January, we released a tutorial showing how to use machine learning to make a kart racing game with the ML-Agents Toolkit and our Karting Microgame template. It demonstrates how to train an AI agent to travel around the track autonomously with reinforcement learning, simulating sight with raycasts and steering to avoid obstacles. You should definitely check this one out if you’re interested in machine learning.

Some of our favorite content this year arrived as part of the Prototype Series, where André Cardoso walks through his process for creating amazing game logic and level design using different Unity features. He kicked this series off in June with a video showing us how to implement an ability system similar to the ones we see in many games.

We know many of you love creating shaders, with or without Shader Graph, which is why we made this step-by-step video showing you how to create a tropical water shader using the Universal Render Pipeline. We used Shader Graph to produce this effect as part of our Boat Attack demo’s tropical island feel.

André also blessed us with a Prototype Series episode about creating a procedural boss. He uses animation rigging to generate boss movement, and the outcome is amazing. Definitely check out this video if you’re interested in procedural generation or animation rigging – and if you’re wondering about the assets he uses, you can find a link to the project materials in the video description.

Last but certainly not least, we want to include Creating a Third-Person Camera using Cinemachine. This easy-to-follow guide aims to make your life – or at least filming your 3D sequences – a little bit easier. The video uses assets from the Unity Royale project.

These are the tutorials we were most excited about this year, but we’d love to hear from you. What were your favorite tutorials of 2020? And what topics would you like us to cover in 2021?

>access_file_
1356|blog.unity.com

Improve your mobile game creative strategy with these 4 expert tips

As automation becomes part of almost every piece of a user acquisition strategy, creatives are one of the few remaining areas where a human touch matters. Devising an innovative creative strategy that makes an impact by attracting and engaging users is essential for an effective campaign.

To give your creative strategy a boost, we spoke with Elad Gabison, Creative Lead and Game Designer at ironSource Playworks, about how to improve mobile game creatives and hook users into next year.

1. Adopt a squad mentality

Balancing excellent ideas for game creatives with great execution can be tricky. Waiting too long to go from ideation to execution could prevent your creatives from ever getting off the ground, or hurt the success of the overall strategy as you rush the design. The ideal situation is to execute your good idea quickly while maintaining quality. To do this, you should adopt a squad mentality.

Rather than keeping teams separate and moving the creatives from one group to the next, a squad mentality means involving the performance, operations, and creative teams together in the creative strategy. If you’re a lean operation and one person represents all three, the goal is to make sure the creative strategy is aligned across all parts of the business. When all parts of the team execute on creatives together, it improves the efficiency and quality of the creatives while ensuring the designs remain true to the idea of the game. Most players aren’t game designers, so comments from people outside your creative circle - like a developer from the performance team - can reveal important findings about the creatives. And getting feedback from different people in your organization can help inspire new and unique creative ideas.

2. Hyper-casualize your designs

Hyper-casual creatives are easy for users to understand, quick, and engaging - which attracts a much wider audience. Thinking like a hyper-casual creative designer when you optimize your own creative strategy can help you simplify your design or tap into player emotions to boost engagement and conversions.

For example, we’ve seen success testing creatives that frustrate players, such as a playable or video showing a deeper level or boss level of the game. Hyper-casual games are defined by their simplicity and increasing difficulty as players progress - as they get deeper into the game, frustration is often a theme that emerges and encourages people to keep playing, since they want to win. Showing failure or frustration in your creatives inspires a feeling in users that they could do better themselves, which can encourage conversions.

3. Use data to inspire your creativity

Creative decisions backed by performance data are more likely to succeed, so be sure to check on the creatives from other apps that are shown in your game, the apps that you’re advertising on, and the KPIs of your competitors’ apps. Taking the data from all of these sources into account can help provide a more complete view of the performance of your creatives and affirm your findings.

Before launching your creative set, look at data from apps that are performing well and incorporate lookalike visuals or mechanics to try to attract traffic. You already know that users enjoy playing this type of game because supply-side, demand-side, and/or competitor data proves it - so you can design your creatives to appeal to that game’s players.

Analyze data after a creative launch, too, so you can use this information to balance user quality and scale. If your KPI is retention, and you notice initial IPM (installs-per-mille) was high and then users dropped off, it could indicate that the quality of the users was low, as they didn’t stay to keep playing your game - something you could only know by checking your creative’s performance after launching. To fix that, look at previous creatives that met your quality benchmarks and see what parts you could apply to your current creative set. For example, it can help to expand the number of levels in a creative, so users get a fuller game experience and feel more emotional buy-in.

4. Hook users with mini-game playables

Mini-games are trending across game genres because they increase engagement and encourage users to play for longer - which makes them effective as creatives, too. Even if you don’t have mini-games implemented, you can still use the concept to inspire your creative strategy. Try out playables that highlight different types of mini-games, mix up the mechanics, or highlight various characters, to see what meets your KPIs. Just make sure the mini-game relates to your game’s theme and mechanics - you want users to understand your game’s concept and still be excited to play once they go to the app store to download it. And depending on how well the creatives perform, results could indicate that mini-games would be an engaging addition to your game. This is one of many trends, so keep an eye out for others to use and adapt in your creative strategy, and always be testing to find the balance between quality and scale.

Gearing up your creative strategy

If you’ve read this far, we should tell you - don’t take these 4 mobile creative tips as the be-all and end-all. A great creative strategy is all about reinventing yourself, finding what’s relevant for your game, and adapting every tip we’ve given you here to your game design. Try out one of our suggestions - or all of them - and see what sticks.
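The IPM metric used above is installs per mille: how many installs a creative generates per thousand ad impressions. A quick sketch of the calculation (Python for illustration; the numbers are invented):

```python
def ipm(installs, impressions):
    """IPM (installs per mille): installs generated per 1,000 ad impressions."""
    return installs / impressions * 1000

# An illustrative creative: 45 installs from 30,000 impressions.
print(ipm(45, 30_000))  # 1.5
```

A high initial IPM that later drops alongside retention is the pattern described in tip 3: the creative converts well but attracts low-quality users.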

>access_file_
1357|blog.unity.com

The road to 2021: The many dimensions of the 2D team

We recently shared our roadmap plans for 2021. Now we invite you inside Unity to meet some of the teams working towards these goals. In this third post of the series, we introduce you to the 2D team.

Our Unity 2021 roadmap explains some of our focus areas for next year. We’re committed to updating production-ready features and delivering key new features based on what you have told us you’re missing from Unity. But we’re equally determined to improve workflows and your overall quality of life. This post is the third in a series that aims to give you a glimpse behind the scenes. Today we are meeting with Paul Tham, who leads the team behind 2D and tools, and Rus Scammell, the Product Manager of 2D.

The dedicated 2D team provides creators of 2D experiences in Unity with a feature set that covers foundations, world-building, animation, graphics, and physics in 2D (including tools like the Sprite Renderer, 2D Tilemap Editor, and 2D Lights). The core team today is 15 people strong, including software engineers, software quality engineers, a designer, a technical writer, and a product manager, but several other individuals contribute their expertise to evolving 2D.

“It’s a team built for multifunctional conversations,” says Rus.

“The team is also diverse. They are Singaporean, American, Swedish, British, Indian, Malaysian, and Chinese. We share our cultures with each other and it makes our lunchtime discussions very lively,” adds Paul.

Paul and Rus began their careers developing games together for the PlayStation 2. Paul later met some of the other current team members at Ubisoft. The team consists of passionate game developers involved in various areas of game production and tool development in support of both indie and large-scale projects.

The team likes to engage with users (in the 2D forum and Beta forum) and keeps a close eye on the conversation happening on Twitter, where, for example, they connected with Odd Bug Studio, which was using 2D Lights in its upcoming title, Tails of Iron.

“At the community level,” says Rus, “we make announcements when a prerelease feature is available so that creators can try it out and give us their feedback. We are also doing more to make sure that performance is tested on a range of mobile devices. At a strategic level, our features are really driven by users.”

The team closely observes how studios are using the 2D tools and listens to their feedback. This was the case with B2tGame during the making of Lost Crypt, and with Glu Mobile for the development of Isometric Tilemaps.

When users report problems or dissatisfaction, or suggest solutions or ideas, the cross-functional team strives to understand what the user is ultimately trying to build. By examining a user’s problem space, the team can determine if there’s a larger need for a solution. For example, if a user reports “pixel art” as a problem space, the team evaluates common workflows and desired outcomes, and makes sure that the resulting solution (features and resources) – in this case the 2D Pixel Perfect package – can support those outcomes.

“Design has also become a critical part of how a feature comes together,” states Rus, “and it is championed by a dedicated design team that spans the organization.” The team has a new UX Designer who frequently connects with the community in the forum to assess the user experience.

While 2D features will remain familiar for creators who use them in their current workflows, 2D tools are evolving along with the underlying technology and will benefit from overall improvements. In Unity 2020.2, the team offered a more intuitive experience for new users: working on 2D projects got faster with streamlined menus and better default assets. The team plans to keep integrating 2D menus and settings consistently across the Editor in future releases of Unity.

In 2020, Unity established performance as a major area of focus, which will be evident in Unity 2021. The 2D team is carrying that forward, “to improve the performance of 2D projects made with Unity and streamline the workflows for creators of 2D games,” Rus says. Two current focus areas for the 2D team are 2D Animation, with Sprite Swap workflows, and 2D graphics. Rus and Eduardo (Product Marketing, 2D) shared some productivity and performance tips in a recent Unite Now session.

2D Animation received improvements such as Burst compatibility in Unity 2020.1 and integrated 2D Inverse Kinematics (IK) in Unity 2020.2. For Unity 2021, the team has been hard at work on 2D Animation, refining the Sprite Swap workflows and making it easier to share animation clips across multiple characters. The team’s current mission, according to Paul, is to enable workflows for large projects. “If you have characters with many parts or games with lots of downloadable content (DLC), the characters should be able to share rigs, animations and parts.”

Thanks to feedback from studios that are already using the new features, the team is further refining the user experience (UX). With these improvements, technical animators can set up the rig – a mannequin or reference – so that a standard asset can be shared across the team. Then artists can make variants of the reference skeleton, and animators can animate them. These inherited skeletons also help ensure that variants with missing parts or additional parts (like a tail or wings) aren’t problematic – they just work. The user needs to remove unneeded Sprites or add new Sprites or animated parts as child GameObjects of a bone, in accordance with the typical Unity workflow for creating variants, but can also keep the flow of 2D Animation for animated parts.

The aim is to ensure that the Sprite Swap workflow is part of the general Sprite workflow, not just for skinning within the Sprite Editor for 2D Animation. We also want to make sure that 2D is present alongside 3D across various areas of the Editor. This is consistent with Unity’s broader vision of improved quality of life for all of our users, including 2D developers. The team anticipates that these changes and other minor updates will be available to try in the Unity 2021 cycle.

Unity is in constant evolution, and 2D tooling can benefit from new technology as it becomes available. Paul describes how they work with other teams: “We are a big consumer of internal tech such as Burst and SRP. Whenever something is released internally, we will take the tech and adapt one of our features to it. By prototyping it on one feature, we will have a fairly clear idea if the new tech is a good fit for 2D. After a few rounds of testing and prototyping, we will put it on the internal roadmap.”

One example of that collaboration is the Burst performance improvements in 2D Animation in Unity 2020.1, for challenging scenarios like animating a large number of skinned vertices. Another example is the boost for Sprite Shape mesh calculations, which is especially beneficial at runtime. The team also worked with the Cinemachine team to enable compatibility with 2D Pixel Perfect, and with the graphics team to refine the 2D Renderer.

The team’s goal for 2021 is to make 2D graphics perform faster in the Universal Render Pipeline. They are also striving to improve the user experience for lights and shadows in 2D. The Universal Render Pipeline (URP) will eventually become the Unity default renderer, and that includes the 2D Renderer becoming the default renderer for 2D projects. The team is working very hard on optimizing light render textures, paying close attention to frame rate, memory bandwidth, draw calls, and general memory usage with 2D Lights. According to Paul, the work consists of low-level optimization code. “We are working closely with the team behind URP, using the same code base and review process, to ensure the renderer meets the needs of 2D projects.”

As part of the 2D Renderer work, the team is also further improving the rendering pipeline of Secondary Textures (normal and mask maps for 2D Lights). Unity’s performance is continually benchmarked across a wide range of devices. To test the performance of all the 2D features combined in a single scene, the team used Lost Crypt, among other projects and measures. Overall improvements include enhancements to the user experience (UX) and making features easier to access – for example, managing texture size from the Inspector window in the 2021 cycle.

Rus explains that they look at the desired outcomes holistically. “We do that by talking with studios and learning how they use the tools during production.”

The team is hard at work to support the maturation of URP in 2021. The 2D Renderer is set to power the next generation of 2D graphics and workflow improvements that will keep enabling anyone – from artists to game designers – to animate characters, design levels, or create great visuals directly in the Editor. We look forward to sharing the latest developments with you next year to make sure Unity remains the best solution for your 2D project, regardless of what platform you’re building for.

This is the third episode of the dev diaries introducing you to some of the people working on future versions of Unity. If you missed them, you can check out the first two installments for some insight into what’s driving our Performance Optimization and Quality of Life teams into 2021. Tell us in the comments below if you would like us to cover any specific teams and feature areas in future blog posts.

>access_file_
1358|blog.unity.com

Automate your playtesting: Create virtual players for game simulation

It’s easy to automate playtesting by creating a Virtual Player (a game-playing agent), then using Game Simulation to run automated playtests at scale. Read on for three case studies describing how iLLOGIKA, Furyion, and Ritz Deli created Virtual Players – offloading nearly 40,000 hours (~4.5 years) of automated playtesting to Game Simulation.

Games are challenging to test for the same reason that they’re fun – players have the freedom to shape their own experience. Thus, games have a huge surface area for bugs and design flaws to appear. Developers must test frequently and with extensive coverage to resolve issues and reliably meet deadlines. In the past, developers chose between low-coverage tests with high frequency (unit tests in a CI pipeline) and high-coverage tests with low frequency (playtests before a major release).

We built Unity Game Simulation to help developers test with the coverage of playtests and the frequency of unit tests. Unity Game Simulation enables developers to run automated playtests in the cloud. To use Unity Game Simulation:

1. Create a Virtual Player (a game-playing agent).
2. Use the Game Simulation package to instrument your game for simulation.
3. Use the Game Simulation package to create and upload a build of your game to our servers.
4. Run your game thousands of times from the Game Simulation user interface (UI).

This blog post focuses on step 1 – creating a Virtual Player for automated testing. Steps 2–4 are straightforward and covered in the Game Simulation documentation. You can try Game Simulation for free now.

A Virtual Player emulates the input of a real player to test some aspect of your game. For simple tests, such as validating that your game can run for 60 minutes without triggering an exception, a Virtual Player can be as simple as a C# script with a few lines of code that launches a scene and performs random actions. For more complex tests, such as verifying that all weapons are roughly equal in strength or that each level can be completed, a Virtual Player can be created with the same methods commonly used to create non-player characters (NPCs). These include:

- Heuristic scripts: a script with a very simple rule or algorithm
- Behavior trees: a visual representation of a plan consisting of conditions and tasks
- Finite state machines: a script with a few states that the Virtual Player alternates between, for example, seek and attack
- Unity AI Planner: a visual planning framework with an intuitive Unity Editor UI
- Reinforcement learning and imitation learning using the Unity ML-Agents Toolkit: check out how we created a Virtual Player for Jam City’s Snoopy Pop using ML-Agents.

Below we highlight how three studios created Virtual Players, collectively offloading nearly 40,000 hours of playtesting to Unity Game Simulation. What’s particularly noteworthy is that all three studios were able to gain immense value with Game Simulation while relying on relatively simple approaches to creating their Virtual Player.

Indie studio Ritz Deli developed Eraser Blast, a linker-style puzzle game featuring over 50 characters, each with unique gameplay characteristics. Ritz Deli used Unity Game Simulation to run hundreds of simulations to ensure that each character generates increasing scores and coin counts as their XP level increases. Ritz Deli’s CTO and tech lead Eric Jordan needed to create a Virtual Player capable of solving linker-style puzzles. He implemented a Virtual Player with a C# script based on a simple greedy heuristic algorithm. For Eraser Blast, the algorithm matches the longest possible chain of bubbles of the same type:

1. Create the set of all possible single bubble selections.
2. Select the bubble from the set of valid bubbles that has the greatest number of available matches.
3. Repeat steps 1 and 2 until no available bubbles are valid matches.
4. Update the metrics for the total score and coins rewarded.

iLLOGIKA is the studio behind Rogue Racers, a player-versus-player (PvP) runner. Players create decks of cards that contain powerups that the player uses during a race. iLLOGIKA used Game Simulation to test every combination of cards to ensure that no card or deck is too powerful. The developers at iLLOGIKA created a Virtual Player using a C# script:

1. Enable the Virtual Player to successfully navigate to the end of the race by performing raycasts to find upcoming obstacles and then avoiding them by switching lanes, ducking, or jumping.
2. Add a series of rules describing when to use cards based on the state of the game, including the player’s current health, the relative positions of the other players, their card abilities, etc.
3. For each action described in steps 1 and 2, occasionally choose an incorrect but possible action to account for the unpredictability of a real player.

Furyion is the developer of Death Carnival, a top-down shooter with a unique weapon socket system that lets the player choose from over one hundred thousand possible combinations of weapon, ammo, and weapon module – each combination defining a unique gameplay experience. Herbert Yung, founder and director at Furyion, used a behavior tree creation tool called Behavior Designer to create a Virtual Player to estimate the average time of level completion for each combination of weapon, ammo, and weapon module. Herbert then ran thousands of simulations with Unity Game Simulation to test each weapon socket combination, saving more than 600 hours of playthrough. Herbert leveraged the intuitive Behavior Designer UI and many off-the-shelf tasks in Behavior Designer to create a Virtual Player:

1. If an enemy is in range, attack that enemy.
2. If no enemy is in range, move towards the exit until an enemy appears within range.
3. Repeat steps 1 and 2 until no enemies remain.
4. Navigate to the gate at the end of the level and, once the level officially ends, call Application.Quit().

For more information on how to create a bot with Behavior Designer, see the documentation on Behavior Designer’s Asset Store page.

The Unity Game Simulation team is committed to helping you create Virtual Players for automated testing, starting with Virtual Players for QA testing. Reach out to us if you’d like to be among the first to try our new tooling and features for creating Virtual Players for QA testing. Find out more about getting started with Unity Game Simulation – you can even try it for free. And please reach out to the Game Simulation team with any questions.
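Ritz Deli's greedy heuristic above, repeatedly taking the largest available same-type match until none remain, can be sketched in a few lines. This is an illustrative Python sketch, not the studio's C# script: the board is simplified to a flat list of bubble types, and the quadratic scoring rule is an assumption.

```python
from collections import Counter

def greedy_clear(bubbles, min_chain=2):
    """Greedily clear the largest same-type chain until no chain of at
    least min_chain bubbles remains. Returns (total_score, chains)."""
    counts = Counter(bubbles)
    score, chains = 0, []
    while counts:
        # Steps 1-2: pick the bubble type with the most available matches.
        bubble_type, size = max(counts.items(), key=lambda kv: kv[1])
        if size < min_chain:
            break  # Step 3: no remaining selection forms a valid chain.
        # Step 4: update metrics (assumed scoring: longer chains score
        # quadratically more, rewarding the longest possible match).
        chains.append((bubble_type, size))
        score += size * size
        del counts[bubble_type]
    return score, chains

print(greedy_clear(list("aaabbc")))  # (13, [('a', 3), ('b', 2)])
```

The same greedy loop, driven by game state instead of a list, is what makes such a Virtual Player cheap to write yet good enough to verify score and coin progression across many simulations.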

>access_file_
1359|blog.unity.com

Introducing Unity Forma: Reimagine marketing with real-time 3D

We believe the world is a better place with more creators in it. That’s why we’re now making the power of real-time 3D accessible to marketing professionals with Unity Forma. It empowers these creators to produce interactive 3D product configurators and digital media from 3D product data in record time and without any coding skills.

Marketing is one of the most compelling applications of real-time 3D. We’ve seen this technology used for groundbreaking mixed reality experiences, pushing the limits of photorealism, immersive product storytelling, product configurators, and much more.

Real-time 3D offers the best of both worlds. It supports advanced media formats – interactive 3D, augmented reality (AR), virtual reality (VR), and mixed reality – that connect and engage consumers in a highly impactful, emotional, and interactive way, ultimately helping drive results like greater engagement and higher sell-through. It also supports marketers’ needs for traditional media like images and videos, while providing significant speed and cost advantages over common approaches like offline rendering, photoshoots, and productions.

For these reasons and more, brands have been eager to embrace real-time 3D in their production pipelines, particularly those who need to work with complex 3D product data to market high-value items that come in multiple variations, like cars, aircraft, appliances, furniture, and luxury goods.

Typically, brands use a combination of in-house and agency resources to deliver digital marketing content. With an ever-increasing number of marketing channels demanding content, companies are finding that their marketing pipelines are becoming fragmented and inefficient. This inefficiency is often outsourced to agencies, but it is real nonetheless. The effort required to maintain product correctness and deliver fresh content to web, social, mobile apps, and other channels drives up costs and consumes effort that could be put to better use. As one example, few companies have the ability to deliver personalized marketing content, or to deliver more deeply interactive experiences that engage consumers more thoroughly.

Unity Forma is here to offer a new, better way to bring real-time 3D to your marketing. Unity Forma empowers marketing professionals to quickly create and publish marketing content and interactive experiences from 3D product data – without needing to learn the intricacies of Unity or how to code. It unlocks major efficiencies in marketing content production, enabling teams to rapidly import 3D product data, visualize models and all their variants in real-time 3D, and bring their vision to life with creative tools. You can easily showcase products in realistic visual quality, in any configuration, and in a variety of formats, including interactive 3D product configurators, images, and more. Once the creative process is complete, you can optimize content for your target platform, whether you’re using Unity’s cloud streaming service, Furioos, for web browsers, or publishing directly to mobile devices, the web, and more.

Because it is built on Unity and offers an open API, Unity Forma is extensible and can be customized to fit the exact needs of your team. You can create new capabilities that improve productivity and differentiate you from the competition, and implement custom tools and interfaces to tailor it to your workflows and deliver brand-specific experiences. At the same time, it’s easy to divide and conquer work with creative agencies and other service providers in a cost-effective way – without compromising your ability to create the marketing content you need, whenever you want.

Get a walkthrough of Unity Forma from Oliver Schnabel, the technical product manager for Unity Forma.

In case you missed it, we debuted Unity Forma at a virtual event hosted by Unity’s chief marketing officer Clive Downie and featuring special guests from Volkswagen, including its head of global digital marketing, Candido Peterlini. Volkswagen is one of the early pioneers using Unity in the automotive industry. The company leverages real-time 3D across its business, from R&D to marketing. It has turned to Unity to deliver a range of marketing experiences, from rendering millions of images in real time for its website car configurators globally to creating mobile apps showcasing its vehicles in AR. Unity Forma will help take its efforts to the next level.

To highlight what is achievable with Unity Forma, Volkswagen collaborated with Katana Studio and Unity to produce a new campaign video for the 2020 Volkswagen ID.4 EV, the automaker’s first fully electric SUV. The teams used Unity Forma along with the Unity Editor to create a virtual production leveraging the ID.4’s manufacturing data and author a 45-second spot. The advertisement fuses rendered car interiors, exteriors, and environments with shot patterns and angles that would have been impossible to achieve using a live camera production.

“Volkswagen constantly seeks new paths to delight the user when experiencing our cars,” said Candido Peterlini, Head of Global Digital Marketing, Volkswagen. “With Unity, we found the right partner to enhance the product experience on the online configurator. Unity Forma comes with features that will help us provide faster and higher-quality real-time content, like configurable product visualization.”

“We will be able to design innovative and more immersive product experiences for our customers that give them a deeper understanding of our cars and the added value they get with highlight features like, for example, the innovative ID Light concept or our driving assistance features (IQ.DRIVE),” added Peterlini. “By this, the customer will get an even more realistic perception of the product, already virtually at home.”

We want to see what you can create with Unity Forma. That’s why we’re getting ready to launch the Unity Forma Challenge. It’s your chance to win a Volkswagen ID.4 EV.* Sign up here to get notified when the contest kicks off.

*Unity may substitute prizes for cash or other equivalents as solely determined by Unity.

To use Unity Forma, you need a subscription to the Unity Industrial Collection or to both Unity Pro and Pixyz Plugin. Try or buy Unity Forma, and stay tuned for more updates about Unity Forma in the months to come.

>access_file_
1360|blog.unity.com

Learn the Input System with updated tutorials and our sample project, Warriors

With the Input System, you can quickly set up controls for multiple platforms, from mobile to VR. Get started with our example projects and new video tutorials for beginners and intermediate users.Input is at the heart of what makes your real-time projects interactive. Unity’s system for input standardizes the way you implement controls and provides new advanced functionality. It’s verified for Unity 2019 LTS and newer versions (see the documentation for a full list of supported input devices).Our new tutorial content can help you to get started quickly, even if you’re completely new to developing for multiple platforms. If you’re already familiar with the workflows, learn more about how to use the Input System to drive (other) Unity tools like Cinemachine or Unity UI with Warriors, our main example project.This Meet the Devs session from March explains why we created this new system and includes a demo that outlines workflows for setting up local multiplayer, quickly adding gamepad controls, spawning new players, and implementing mobile controls. Rene Damm, the lead developer of the Input System, also answers questions from the audience about tooling and the team’s roadmap.If you’re a beginner who just wants to get to know the basic input workflows for Unity, check the updated Roll-a-Ball project on Unity Learn. It walks you through first steps like installing the Input System package, adding a Player Input component, and applying input data to the Player.Our Prototype Series also uses the Input System, as you can see in this video and example project on creating a boss with procedural animation. The spaceship’s input action asset uses input for movement controls, shooting laser projectiles, and boosting.Warriors is a project that demonstrates more intermediate tooling and APIs with the Input System in a typical third-person local-multiplayer cross-platform game setup. 
Warriors is available for Unity 2019 LTS and will be updated for the 2020 LTS version when it's released next year. You can download the project from GitHub, where we have three branches available:

- V1 captures the state of the project at the time of the Meet the Devs session mentioned above
- V2 is what we've used for the most recent Input System Unite Now session (see below)
- We're continuing our work in the Master branch

The project is built around characters that can be controlled in both Single Player and Local Multiplayer modes. When the Game Manager is set to Local Multiplayer, it instantiates several instances of the Warrior prefab. Because the Warrior prefab is set up to use the Input System's Player Input component, each instance of the Warrior automatically has a connected input device paired to it.

The control scheme for this Warrior is set up for cross-platform play and auto-switches between the keyboard and different gamepads. This setup provides an example of applying smoothing to the raw runtime input data so that the character's directional movement is even no matter which input axes (keyboard keys or gamepad joystick) the player uses. The control scheme also uses a button press for a basic attack animation.

The Warrior's Input Action asset is also set up with two different Action Maps: one for controlling the character (Running and Attacking), and the other for controlling input for the menu. A game or any other interactive project usually has different states with different control requirements that share the same input bindings. For example, pushing a joystick in one direction will cause the Warrior to run along that axis.
However, when the game is paused, the player can use the joystick to navigate between the selectable buttons and options in the pause menu. Using the Input System, you can easily set up different Action Maps for various control scheme scenarios and switch between them, per PlayerInput component, at runtime with a simple API:

playerInput.SwitchCurrentActionMap(stringForMapName);

Once you have set up an Action Map for interacting with the UI, its actions can be assigned to the Event System's UI Input Module.

In the Warriors project, each player can pause the game and interact with the pause menu. The overlay UI consists of a few option buttons and a context-changing panel on the right. The Input System UI Input Module applies the assigned actions to any Unity UI component with Interactable enabled (such as Buttons and Sliders).

This UI interaction is isolated to the specific player that pressed pause. For example, if Player 3 pauses the game, Players 1 and 2 will not be able to interact with the menu; only Player 3 will be in control. When Player 3 unpauses the game, all player controls are resumed. We do this by calling the following methods on a PlayerInput component:

playerInput.ActivateInput();
playerInput.DeactivateInput();

In the UI menu, there is a context panel for runtime rebinding of controls. This allows you to perform an Interactive Rebind for a specific Input Action and set a new input binding for it. For example, when using a PlayStation controller, you could rebind the Attack action from the Cross button to another input button such as Triangle or R2.

The rebind applies as an override to a specific PlayerInput component. For example, Player 2 can rebind their Attack button to a different button without affecting Player 3's Attack Input Action. The API for setting up an interactive rebind also contains a callback for when the rebind has completed.
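Putting the two APIs together, a pause flow like the one described could be sketched as follows. This is a minimal illustration, not code from the Warriors project: the class name and the "UI" and "Player" action map names are assumptions.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Hypothetical pause handler: the pausing player switches to a "UI"
// action map for menu navigation, while all other players have their
// input deactivated entirely until the game is unpaused.
public class PauseManager : MonoBehaviour
{
    public void Pause(PlayerInput pausingPlayer)
    {
        foreach (var player in PlayerInput.all)
        {
            if (player == pausingPlayer)
                player.SwitchCurrentActionMap("UI");   // menu bindings only
            else
                player.DeactivateInput();              // freeze other players
        }
    }

    public void Unpause()
    {
        foreach (var player in PlayerInput.all)
        {
            player.ActivateInput();                    // restore input
            player.SwitchCurrentActionMap("Player");   // back to gameplay bindings
        }
    }
}
```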
This allows you to run your own logic when the rebind completes, such as updating the UI, saving the game state, or changing any other gameplay element based on this event.

Besides the Input System's integration with the Event System and Unity UI, the Warriors project also demonstrates how it can be set up to work in concert with Cinemachine. A typical camera setup for a third-person game features an orbiting follow camera where the view direction is set by the mouse position or the direction of a joystick push. The Cinemachine Input Provider component can be added to a Cinemachine Free Look camera rig, which allows an Input Action to send Vector 2 axis data directly into Cinemachine's axis control values. In the Warriors project, the CinemachineInputProvider uses the input data from the Look Input Action.

The Input System also contains a variety of APIs that let other systems access data at runtime, which can be displayed in the UI or other visuals. The Warriors example project uses data from Player Input to display information such as the current player device and player input ID, and even to update the UI-based rebind controls with the current binding. We created a custom system that takes the device and binding strings and returns a specific icon. For example, if the player is using an Xbox controller and the Attack action is bound to the northern context button, this will display a Y button icon. If the player is using a keyboard instead, it will display a string for the currently bound keyboard key. You are welcome to extract and use this Device Display Configurator tool in your own projects.

The Input System package also contains several extra samples that you can explore in the Package Manager. You can use these as a reference and extend or adapt them for your own projects. You can also find the official documentation there, including the Quick Start guide.

Input System 1.1 is currently in preview, and it brings lots of fixes and some improvements.
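An interactive rebind with a completion callback, as described above, might look like the following sketch. The class name and the mouse-position exclusion are illustrative choices; the `PerformInteractiveRebinding`, `OnComplete`, and `Start` calls are the Input System's rebinding API.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Illustrative rebind trigger for a single action such as "Attack".
// Wire StartRebind to a UI Button's onClick event.
public class RebindButton : MonoBehaviour
{
    public InputActionReference action;   // assign the action in the Inspector

    public void StartRebind()
    {
        action.action.Disable();          // actions must be disabled while rebinding
        action.action.PerformInteractiveRebinding()
            .WithControlsExcluding("<Mouse>/position")  // ignore pointer movement
            .OnComplete(op =>
            {
                op.Dispose();             // release the rebind operation
                action.action.Enable();
                // update the binding icon/label in the UI here
            })
            .Start();
    }
}
```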
You can now save and load rebinds as JSON (see the RebindSaveLoad.cs sample script), and device layouts can now be precompiled, greatly reducing instantiation time and garbage collection (GC) heap cost for those devices.

We hope that the sample project and the other learning materials will help you get started with using the Input System to bring your ideas to life. Please share what you want to learn next in the comments section below. Also, keep an eye on the Input System page for more tutorials and other resources.
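Persisting rebinds with the 1.1 JSON APIs could be wrapped roughly like this; the class name and PlayerPrefs key are assumptions, and the RebindSaveLoad.cs sample mentioned above is the authoritative reference.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Illustrative wrapper around the Input System 1.1 binding-override
// serialization, storing the JSON in PlayerPrefs.
public static class RebindStorage
{
    const string Key = "bindingOverrides";   // arbitrary storage key

    public static void Save(InputActionAsset actions) =>
        PlayerPrefs.SetString(Key, actions.SaveBindingOverridesAsJson());

    public static void Load(InputActionAsset actions)
    {
        var json = PlayerPrefs.GetString(Key, string.Empty);
        if (!string.IsNullOrEmpty(json))
            actions.LoadBindingOverridesFromJson(json);
    }
}
```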
