// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1690 transmissions indexed — page 66 of 85

[ 2021 ]

20 entries
1301|blog.unity.com

ML-Agents plays DodgeBall

In the latest ML-Agents blog post, we announced new features for authoring cooperative behaviors with reinforcement learning. Today, we are excited to share a new environment to further demonstrate what ML-Agents can do. DodgeBall is a competitive team vs team shooter-like environment where agents compete in rounds of Elimination or Capture the Flag. The environment is open-source, so be sure to check out the repo.

The recent addition of the MA-POCA algorithm in ML-Agents allows anyone to train cooperative behaviors for groups of agents. This novel algorithm is an implementation of centralized learning with decentralized execution. A centralized critic (neural network) processes the states of all agents in the group to estimate how well the agents are doing, while several decentralized actors (one per agent) control the agents. This allows each agent to make decisions based only on what it perceives locally, and simultaneously evaluate how good its behavior is in the context of the whole group. The diagram below illustrates MA-POCA’s centralized learning and decentralized execution.

One of the novelties of the MA-POCA algorithm is that it uses a special type of neural network architecture called attention networks, which can process a non-fixed number of inputs. This means that the centralized critic can process any number of agents, which is why MA-POCA is particularly well-suited for cooperative behaviors in games. It allows agents to be added to or removed from a group at any point – just as video game characters can be eliminated or spawn in the middle of a team fight. MA-POCA is also designed so that agents can make decisions for the benefit of the team, even when those decisions are to their own detriment. This altruistic behavior is difficult to achieve with hand-coded behavior, but it can be learned based on how useful the last action of an agent was to the overall success of the group.
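As a rough intuition for how an attention layer lets the critic accept a variable number of agents, here is a toy numeric sketch in plain Python. It is an illustration of attention pooling only, not MA-POCA's actual network:

```python
import math

def attention_pool(query, agent_states):
    """Pool a variable-length list of agent state vectors into one
    fixed-size vector, softmax-weighted by similarity to a query.
    A toy illustration of attention, not MA-POCA's real architecture."""
    scores = [sum(q * s for q, s in zip(query, state)) for state in agent_states]
    peak = max(scores)
    exps = [math.exp(v - peak) for v in scores]  # stable softmax over the scores
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(agent_states[0])
    return [sum(w * state[i] for w, state in zip(weights, agent_states))
            for i in range(dim)]

# The critic input stays the same size whether 4 agents or only 2 are alive:
query = [1.0, 0.5]
four_agents = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [0.2, 0.8]]
two_agents = four_agents[:2]
assert len(attention_pool(query, four_agents)) == len(attention_pool(query, two_agents)) == 2
```

Because the same scoring weights are applied per agent and the results are pooled, the output size is fixed no matter how many teammates remain in the game.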
Finally, many multi-agent reinforcement learning algorithms assume that all agents choose their next action at the same time, but in real games with numerous agents, it is usually better to have them make decisions at different times to avoid frame drops. That’s why MA-POCA does not make this assumption, and will still work even if the decisions of the agents in a single group are not in sync. In order to show you how well MA-POCA works in games, we created the DodgeBall environment – a fun team vs team game with an AI fully trained using ML-Agents.

The DodgeBall environment is a third-person shooter where players try to pick up as many balls as they can, then throw them at their opponents. It comprises two game modes: Elimination and Capture the Flag. In Elimination, each group tries to eliminate all members of the other group – two hits, and they’re out. In Capture the Flag, players try to steal the other team’s flag and bring it back to their base (they can only score while their own flag is still safe at their base). In this mode, getting hit by a ball means dropping the flag and being stunned for ten seconds before returning to base. In both modes, players can hold up to four balls, and dash to dodge incoming balls and go through hedges.

In reinforcement learning, agents observe the environment and take actions to maximize a reward. The observations, actions, and rewards for training agents to play DodgeBall are described below.

In DodgeBall, the agents observe their environment through the following three sources of information:

Raycasts: With raycasts, the agents can sense how the world around them looks. Agents use this information to detect and grab the balls, avoid walls, and target their opponents.
Different sets of raycasts – represented by different colors below – are used to detect different object classes.

State data: This information includes the position of the agent, the number of balls it currently holds, the number of hits it can take before being eliminated, as well as information about the flags in Capture the Flag mode. Agents use this information to strategize and determine their chances of winning.

Other agents’ state data: This information includes the position and health of the agent’s teammates, and whether any of them are holding a flag. Note that, since the number of agents is not fixed (agents can be eliminated at any time), we use a Buffer Sensor so that agents can process a variable number of observations. Here, the number of observations refers to the number of teammates still in the game.

The DodgeBall environment also makes use of hybrid actions, which are a mix of continuous and discrete actions. The agent has three continuous actions for movement: one to move forward, one to move sideways, and one to rotate. At the same time, there are two discrete actions: one to throw a ball and another to dash. This action space corresponds to the actions that a human player can perform in both the Capture the Flag and Elimination scenarios.

Meanwhile, we intentionally keep the rewards given to the agents rather simple.
We give a large, final reward for winning or losing, and a few intermediate rewards for learning how to play the game.

For Elimination:
- Agents are given a +0.1 reward for hitting an opponent with a ball.
- The team is given +1 for winning the game (eliminating all opponents), or -1 for losing.
- The winning team is also awarded a time bonus for winning quickly, equal to (remaining time) / (maximum time).

For Capture the Flag:
- Agents are given a +0.02 reward for hitting an opponent with a ball.
- The team is given +2 for winning the game (returning the opponent’s flag to base), or -1 for losing.

While it is tempting to give agents many small rewards to encourage desirable behaviors, we must avoid overprescribing the strategy that agents should pursue. For instance, if we gave a reward for picking up balls in Elimination, agents might focus solely on picking up balls rather than hitting their opponents. By keeping our rewards as “sparse” as possible, the agents are free to discover their own strategies in the game, even if it prolongs the training period.

Because there are so many different possible winning strategies that can earn agents these rewards, we had to determine what optimal behaviors would look like. For instance, would the best strategy be to hoard the balls, or to move them around so they are convenient to grab later? Would it be wise to stick together as a team, or to spread out to find the enemy faster? The answers to these questions depended on game design choices that we made: if balls were scarce, agents would hold on to them longer to prevent the enemies from getting them. If agents were allowed to know where the enemy was at all times, they would stay together as a group as much as possible. That said, when we wanted to make changes to the game, we did not have to make any code changes to the AI.
We simply retrained a new behavior that would adapt to the new environment.

Compared to training a single agent to solve a task, it is more complex to train a group of agents to cooperate. To help manage a group of agents, we created the DodgeBallGameController.cs script. This script initializes and resets the playground (this includes spawning the balls and resetting the agents’ positions). It assigns agents to their SimpleMultiAgentGroup and manages the rewards that each group receives. For example, when an agent hits another with a ball, the script gives the ball thrower a small reward for hitting an opponent – but only once the last opponent is eliminated will the whole group be rewarded for their collective effort.

MA-POCA handles agents in a SimpleMultiAgentGroup differently than it does individual agents. MA-POCA pools their observations together to train in a centralized manner. It also handles the rewards given to the whole group, in addition to the individual rewards – no matter how many agents join or leave the group. You can monitor the cumulative rewards that agents receive as a group in TensorBoard.

Since both Elimination and Capture the Flag are adversarial games, we combined MA-POCA with self-play to pit agents against older versions of themselves and learn how to beat them. As with any self-play run in ML-Agents, we can monitor the agents’ learning progress by making sure the ELO continues to increase. After tens of millions of steps, the agents can play as well as any of us.

This video shows how the agents progress over time when learning to play Elimination. You can see that, early in the training, the agents learn to shoot but have poor aim and tend to shoot at random. After 40 million timesteps, the agents’ aim improves, though they still wander somewhat randomly in hopes of running into an enemy. When they do meet an opponent, they typically engage them one-on-one.
Finally, after another 120 million timesteps of training, the agents become much more aggressive and confident, and develop sophisticated strategies such as charging into enemy territory as a group.

And here are the agents learning how to play Capture the Flag: early in the training, at 14 million steps, the agents learn to shoot each other without actually capturing the flag. At 30 million, the agents learn how to pick up the enemy flag and return to base, but other than the flag-carrying agent, it’s not clear how the other agents contribute. By 80 million timesteps, however, the agents exhibit interesting strategies. Agents who aren’t holding the enemy flag will sometimes guard their own base, chase down an enemy who has their flag, or wait in the enemy’s base for the flag-bearer to return and pummel them with balls. An agent holding a flag might wait at their own base until their teammates can retrieve their team’s flag so they can score. The following video highlights some of the interesting emergent strategies that the agents have learned. Note that we never explicitly specified these behaviors – they were learned over the course of hundreds of iterations of self-play.

The DodgeBall environment is open source and available to download here. We’d love for you to try it out. If you’d like to work on this exciting intersection of machine learning and games, we are hiring for several positions and encourage you to apply here.

Finally, we would love to hear your feedback. For any feedback regarding the Unity ML-Agents Toolkit, please fill out the following survey or email us directly at ml-agents@unity3d.com. If you encounter any issues, do not hesitate to reach out to us on the ML-Agents GitHub issues page. For any other general comments or questions, please let us know on the Unity ML-Agents forums.
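The Elimination reward flow this post describes (a small individual reward per hit, plus a shared group reward and time bonus on a win) can be sketched in a few lines of Python. The class below is a toy stand-in for ML-Agents' SimpleMultiAgentGroup, not the actual DodgeBallGameController.cs logic:

```python
class ToyAgentGroup:
    """Minimal stand-in for a multi-agent group: tracks per-agent rewards
    plus one reward shared by the whole team."""
    def __init__(self, agents):
        self.agent_rewards = {a: 0.0 for a in agents}
        self.group_reward = 0.0

    def add_agent_reward(self, agent, amount):
        self.agent_rewards[agent] += amount

    def add_group_reward(self, amount):
        self.group_reward += amount

def on_ball_hit(group, thrower, opponents_remaining, time_left, max_time):
    """Elimination rules from the post: +0.1 to the thrower per hit; when the
    last opponent falls, +1 to the team plus a bonus for winning quickly."""
    group.add_agent_reward(thrower, 0.1)
    if opponents_remaining == 0:
        group.add_group_reward(1.0 + time_left / max_time)

blue_team = ToyAgentGroup(["agent_a", "agent_b"])
on_ball_hit(blue_team, "agent_a", opponents_remaining=1, time_left=60.0, max_time=120.0)
on_ball_hit(blue_team, "agent_b", opponents_remaining=0, time_left=50.0, max_time=120.0)
# Each thrower earned 0.1 individually; the shared reward includes the time bonus.
```

The split mirrors the design above: individual credit for hits, but the big payoff only arrives as a group reward when the team wins.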

>access_file_
1303|blog.unity.com

How to level up your rewarded video monetization

Rewarded videos are the king of ads - implemented smartly, they can be an essential part of a mobile game’s core loop. For users, rewarded videos provide access to a variety of rewards that offer in-game currency or progression boosts - often at times when they’re very much needed. Because they’re opt-in, meaning the user chooses to interact with the ad, the user experience is typically a very positive one. As a result, 32% of gamers see rewarded video as being twice as useful as all other advertising formats, according to Marc Milowski, Strategic Partner Manager at Facebook Audience Network.

For game developers, rewarded videos serve as a meaningful revenue stream that also increases retention rates by enabling users to progress faster and enjoy premium content for free. Rewarded videos are so beneficial to everyone involved that 79% of developers running both in-app ads and in-app purchases said rewarded video was their most successful monetization format.

However, succeeding with rewarded video requires a well-thought-out plan and partnering with the right mediation platform - one that has the strongest tech tools to support a high-growth rewarded video strategy. Below, we share our best practices to ensure you maximize revenue and keep your users as happy as possible.

1. Perfect the UX by removing latency

The positive experience rewarded video creates for users strengthens game retention and lays the foundation for strong engagement and usage rates - and in turn ARPDAU. Preserving this positive user experience is key to ensuring a successful rewarded video strategy over the long term, and latency is a key factor in dictating that UX - in other words, whether the ad is served instantly after a user clicks on the traffic driver (the button for the ad), or whether a delay in filling the ad space leaves the user staring at a blank screen.

Developers generally choose from three options when managing latency’s impact on the user experience.
The first approach is to show users a rewarded video traffic driver only when there’s an ad ready to display. The second option is to always display the traffic driver, and if the ad inventory isn’t filled, users are left staring at a blank screen until they exit out. The third option is the same as the second, just with a slightly better user experience: if an ad isn’t available, a pop-up appears that says so.

The drawbacks of these options are clear, harming ARPDAU, the user experience, or both. That’s why, in recent years, developers have dedicated considerable time to optimizing their monetization strategy - typically through hybrid bidding and waterfall setups - to find the balance between sophisticated auctions and reduced latency. However, even the most optimized setup can’t deliver zero latency.

That’s why you need to find a mediation platform that has dedicated technology to remove latency entirely. ironSource, for instance, developed an industry-first mechanism known as progressive loading, which ensures that there is always a rewarded video ad available, with no waiting. Not only does this ensure a perfect user experience by enabling users to watch multiple rewarded videos without a delay in between, it also means developers don’t need to deal with the time-consuming burden of latency management.

Progressive loading in action

Since progressive loading was implemented in ironSource’s mediation, we've seen revenue shoot up by 3% to 20% per app. This range is mainly affected by genre: the deeper the game, such as those in the casual or midcore categories, the more significant the impact of progressive loading. That's because these games by design provide more opportunities for users to watch rewarded videos.
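One way to picture the "always an ad ready" guarantee is a small prefetch queue: ads are loaded ahead of demand, so showing one never waits on a fetch. This is a hypothetical sketch of the idea only, not ironSource's actual implementation:

```python
from collections import deque

def load_ad():
    """Stand-in for a network fetch of a rewarded video ad."""
    load_ad.count += 1
    return f"ad-{load_ad.count}"
load_ad.count = 0

class ProgressiveAdBuffer:
    """Keep the next ads preloaded so showing one never blocks on a fetch."""
    def __init__(self, loader, depth=2):
        self.loader = loader
        self.ready = deque(loader() for _ in range(depth))  # prefetch up front

    def show_next(self):
        ad = self.ready.popleft()         # already loaded: zero latency
        self.ready.append(self.loader())  # refill (asynchronously, in practice)
        return ad

buffer = ProgressiveAdBuffer(load_ad)
first, second = buffer.show_next(), buffer.show_next()  # no waiting between ads
```

In a real SDK the refill would happen asynchronously off the UI thread; the point is only that showing the next ad never blocks.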
In addition, games in these categories have more complex in-game economies, to which the ads' rewards provide real value.

Design tips to leverage progressive loading

To maximize usage rates of your rewarded video ads and leverage the power of progressive loading, we recommend using stacked multiplier rewards for a strip of rewarded videos. For example, the user watches one video and unlocks a "double income for 4 hours" reward. If they watch another video in the strip, 4 more hours are added to the boost. Typically, developers add a limit to this cycle, for example up to 12 hours’ worth of boosts.

Another option is encouraging users to watch several rewarded videos in a row to get a mystery box or a specific reward. We’ve also seen developers offer users the ability to open a treasure chest by watching a rewarded video, and as soon as they open it, offer them the possibility of doubling their chest loot by watching another rewarded video.

That’s not all - a user may decide to open a standard rewarded video placement on the game’s home screen multiple times in a row - especially in idle games that revolve around currency accumulation or scaling production. Each rewarded video ad could offer a multiplier that doubles or triples the earnings from the previous ad. Whatever the scenario and session depth, users can experience multiple ads with zero latency between them thanks to progressive loading.

Make sure you A/B test different types of rewards to see which ones have the biggest impact on engagement and usage rates for your rewarded video ad placements. Maximizing the value you provide your players and encouraging them to watch multiple ads should be your focus - progressive loading technology will take care of the zero-latency part to ensure a perfect user experience.

2. Use in-app bidding

The right in-app bidding partner will help you get the most out of your rewarded video ad placement strategy by maximizing your ARPDAU. There are four ways it does this.

First, bidding strengthens competition for every impression, which means ad networks will bid higher than they usually would in order to outbid their competitors and fill your rewarded video inventory with an ad.

Second, because all ad networks have the opportunity to bid to fill your ad request - not just the networks at the top of the waterfall - you never leave potential money on the table for your rewarded video placements.

Third, with in-app bidding, bids for impressions are received in real time, which is more accurate than the flat eCPM or historical data used in waterfalls. This ensures that you, as the developer, never undersell your impressions.

Finally, ad networks are sometimes willing to pay top dollar for specific ad impressions - we’ve seen CPMs exceed $200 in some cases. You might not have accounted for such a high bid in your waterfall instance setup - if you’ve set your top instance at $50, you’d miss out on this potential revenue. In-app bidding ensures you’re always able to maximize your earnings thanks to its auction methodology.

3. A/B test a variety of placements

One of the best attributes of rewarded video ads is their potential to be implemented in very creative and innovative ways that complement the game experience. This brings us to the world of placement strategy.

There are two reasons why your placement strategy is key. First, with good placements your rewarded videos will be highly visible and accessible: this will maximize engagement and usage rates, and in turn meet your ad revenue goals. Second, a good placement strategy is also key to providing the best user experience.
It’s not enough to make your rewarded video ads visible - you need to make them visible in the right situations, when the rewards will be most valuable to users. That’s why we recommend adding different types of rewarded videos that offer users progression- or currency-based rewards at multiple spots throughout your game. Below, we unpack some of the most popular placements we’ve come across.

Extra currency in home screen or shop

Extra currency in the home screen or in-game shop is a common type of rewarded video placement. The traffic driver in the home screen can be left there for a limited amount of time or indefinitely, giving users the option to earn rewards whenever they need to - just keep a close eye on IAP cannibalization. When users are in the game’s store, they’re showing some kind of intent to access premium content. Most users won’t be willing to spend real money, so giving them the option to watch an ad in return for earning gems - which can then be used to unlock premium items - can be very effective.

Extra life after failing

When a user fails a level, you can use a progression-based reward - like an extra life - that lets them keep playing in exchange for watching a rewarded video. In some games, developers let users automatically gain an extra life without watching an ad - but they have to wait a minute or more to receive it. As a time-saving alternative, users can open the rewarded video ad in order to gain the extra life and resume their play session.

Double or triple rewards at the end of a level

A popular rewarded video placement is to offer a multiplier at the end of a level. Here, the rewarded videos can provide monetary or progression-based value to users. What does that mean exactly?
When a user completes a level and earns a prize, you can offer them the opportunity to double or triple its value by watching a rewarded video ad.

Surprise chest boxes

There are various ways to use chest boxes with rewarded videos: the traffic driver for the mystery box could appear only after specific achievements, like collecting a certain number of bombs; it could appear on a timer basis, every few hours; or it could appear based on progress, such as every two or three levels.

Daily bonus

Daily bonuses are a type of rewarded video placement that developers use as a retention-boosting mechanism. The placement can be used in different ways: for example, you could give users a daily bonus for free and use the rewarded video to multiply the offer, or you could provide rewarded videos to users as a way to unlock daily rewards that ordinarily cost real money.

Reward your users, reward your business

An optimized rewarded video strategy can be the difference between a game business struggling to make a profit and one meeting all its KPIs. Progressive loading is a game-changer that unlocks what all developers strive for: a perfect user experience, no time-consuming latency management, and increased ad revenue. In an industry where the finest of margins can make all the difference, this winning trifecta provides a competitive advantage to scale game growth.

Combining the power of progressive loading technology - available only in ironSource’s mediation - with in-app bidding and a mix of classic and innovative ad placements will lay the foundation for success. Specifically, these best practices will keep users happy and engaged in your game, keep them coming back to your rewarded video placements, and as a result drive up your ARPDAU over the long run.
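The revenue argument for bidding over a fixed waterfall can be illustrated with a toy auction. The numbers below are hypothetical, echoing the $200-bid-against-a-$50-instance example from this post:

```python
def waterfall_fill(floors, bids):
    """Fixed waterfall: walk preset eCPM floors from the top; the first
    network whose bid clears a floor wins, but pays only that floor."""
    for floor in floors:
        for network, bid in bids.items():
            if bid >= floor:
                return network, floor
    return None, 0.0

def in_app_bidding(bids):
    """Real-time auction: every network bids on the impression and the
    highest bid wins at its full value."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

# A $200 bid against a waterfall whose top instance is set at $50:
bids = {"network_a": 200.0, "network_b": 35.0}
sold_via_waterfall = waterfall_fill([50.0, 25.0, 10.0], bids)  # ("network_a", 50.0)
sold_via_bidding = in_app_bidding(bids)                        # ("network_a", 200.0)
```

The waterfall sells the impression at its preset $50 floor, while the real-time auction captures the full $200 bid - which is the "never undersell your impressions" point made above.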

>access_file_
1304|blog.unity.com

Indie GameDev Spotlight: Q&A with NimbleBit

ironSource sat down with our mediation partner Ian Marsh, co-founder of NimbleBit, to learn how he and his team made it big as an indie game developer. NimbleBit started developing games back in 2008, when the iPhone was just born, and has since amassed millions of downloads across its portfolio - many people still play their games from 10+ years ago. Keep reading for a transcript of the conversation and more of Ian's insights on how to build a game that lasts.

How and why did you first get started in gaming?

My twin brother David and I were big fans of video games growing up, and in high school I started to learn how to program really basic stuff like HTML and Flash. Meanwhile, my brother started making levels for one of our favorite games at the time, Counter-Strike, and some of its mods. In 2008, David, with another friend of ours, started NimbleBit and their first game, a kart racing game on Steam - an ambitious first project that included multiplayer real-time physics.

2008! So you were really early to the space. What was it like to develop games back then?

At the time, I put out a game called Hanoi - a really simple puzzle game where you had to move different-size disks across the screen on different-size pegs. It's a classic computer science thought experiment. I put it up on the App Store for free because I thought eventually my family members and friends might get iPhones and I could show them what I made. Hanoi ended up reaching #1 on the App Store’s free download charts and getting millions of downloads.

That was the biggest thing I'd ever made. It was the first software I'd even published myself, so I was kind of blown away. We then put up an extended version of the game for $0.99, which ended up generating more money than my day job.
That’s when I quit and started publishing apps under my own name.

Being so early, how did you deal with the increase in market competition over the years?

To get a leg up, we decided to try something creative and start doing these “free weekends,” where we would put our $0.99 or $2.00 games up for free for the weekend. They would get a ton of downloads, and then we'd flip them back to paid. The word of mouth from all the free players would lead to more sales during the week - more than the money we lost by making them free for the weekend.

We already had a sense that free could be a big deal when Apple started rolling out free-to-play functionality in iOS. Once they officially did that, we designed our first fully free-to-play game, Pocket Frogs. That ended up being the very first Editor's Choice on iOS.

What’s been the most exciting part of your gaming career so far?

Releasing Tiny Tower was the game-changer. It was Editor's Choice, and the game just blew up. It was kind of a phenomenon in 2011 when it came out, and even the New York Times and Wall Street Journal covered the game. Then we got a call from Apple and found out that it was going to be chosen as the iPhone game of the year for that year's roundup!

We released a bunch of new games, and a lot of them got Editor’s Choice again, but they never really reached the same level as Tiny Tower. In retrospect, we should have just doubled down on Tiny Tower and ridden that thing into the sunset. Though now we’re concentrating on adding some real big, exciting new features to the game for our 10th-anniversary update.

Some of our games have already hit 10-year anniversaries, like Pocket Frogs. I've talked to some people in the industry and they just can't believe that we have decade-old games that still have sizable active audiences and are still profitable 10 years later. We've been really lucky to grow the community of players we've had over the years.
I've found huge player-run Discords for most of our games, joined them, and become part of them.

What is your favorite part about developing games?

My favorite part has always been the enjoyment that our games bring to others, and the potential to find such large audiences, especially on mobile and especially with free-to-play now. If you're going to spend your time making games, why would you make a game for thousands of people when you can make one for millions of people?

It's also just really humbling. We get emails to our contact address that say, “I've been playing this game since I was in middle school, and I’m a parent now, and it's got me through a lot of rough times,” or “My mother played Pocket Frogs through chemotherapy and it was the one thing that just kept her focused and relaxed.” We get some incredible stories, and I know that they're few and far between in the real world, but they really help keep you motivated.

What do you think is the most challenging part about developing games?

I'd say the most challenging thing is really just keeping up to date with how the industry changes. It's gone through a lot of changes since I started 10 years ago, and advertising is becoming more important. When you're trying to navigate the market, especially when you don't have a lot of resources, you need to be really smart about who you partner with and really try to maximize the resources you have.

It’s also been difficult to figure out how we can profitably grow our games. The challenge is getting each game to the level where it's profitable enough to still grow with advertising, but doesn't turn away all the players who've come to expect a certain type of game from us.

You mentioned people are still playing your games 10 years later. What advice would you give to other indie developers trying to make a game that lasts?

Developers starting at this point in time definitely have a lot more challenges than we did 10 years ago.
One of the best things you can do is make sure you grow a community. People will try your game, but if you don't listen to what they have to say, you won't know what needs improvement or how to change it. Connecting with the players of your game and really listening to what they have to say is incredibly valuable, and can help you turn your game into something that appeals to an even wider audience.

>access_file_
1305|blog.unity.com

The Unity Academic Alliance Arrives in Thailand

The Unity Academic Alliance, or UAA, is a program for postsecondary institutions that features competency-based curricular frameworks, discounts on Unity products, support for instructor and student certifications, and much more. With a strong dedication to using technology in its curriculum, and seeing great value in Unity’s broad range of applications, the International Academy of Aviation Industry (IAAI) led the initiative for King Mongkut’s Institute of Technology Ladkrabang (KMITL) to become the first Unity Academic Alliance member in Thailand.

Located near Suvarnabhumi Airport in Bangkok, KMITL-IAAI utilizes the Unity Academic Alliance resources to prepare students for competency in career-related areas of virtual reality (VR), augmented reality (AR), mixed reality (MR), engineering, artificial intelligence (AI), machine learning, simulation, serious games, and much more.

The IAAI faculty, headed by Assistant Professor Dr. Soemsak Yooyen (Dean), currently offers a 4-year Bachelor of Engineering in Aeronautical Engineering and Commercial Pilot program, as well as a Bachelor of Science in Logistics Management program. All programs at IAAI are taught in English, with a focus on innovative technologies. IAAI’s Unity Academic Alliance membership works in coordination with the IAAI Virtual research and development lab, and is now expanding to support the wider KMITL campus, which offers a robust selection of degree programs and certifications.

With Unity at the top of the interactive content game and KMITL-IAAI’s emerging technology curriculum taking off, the Unity Academic Alliance is offering benefits to everyone involved. In fact, since its launch in 2018, the UAA has reached faculty and students in more than 100 postsecondary institutions and has helped them on the path to learning in-demand, Industry 4.0 job skills.

The Unity education team is dedicated to providing the tools and resources that postsecondary institutions need.
The team works closely with member organizations, collecting feedback and adapting the program to assure success. As members of the program, KMITL-IAAI (as all member organizations) receives the opportunity for instructors to upskill and gain additional industry-recognized Unity certifications.Unity offers certifications appropriate for all learners, from the User certification for true beginners, to the Expert certification for industry professionals with over 5 years of experience. As the focus of the UAA is post-secondary education, UAA members can choose to offer students Unity Certified Associate or Unity Certified Professional courseware + exam bundles.UAA member institution representatives must maintain a Unity Certified Professional level certification (programmer or artist). With these Unity certifications and a CompTIA CTT+ Classroom Trainer Certification (or equivalent), one can become an official Unity Certified Instructor. UAA members have access to courseware, practice tests, and certification exams potentially worth over $33,000 USD to aid in a variety of learning and development paths. With all the options available for instructors and students to level-up their skills, the decision to join the UAA has been a clear win for KMITL-IAAI.Dr. Nuchjarin Intalar, a lecturer at KMITL-IAAI, is currently utilizing a Unity Certified Associate courseware voucher that was included with the UAA package. She says the online course is “well-organized” and “easy to follow at your own pace”. She is also impressed with how accessible it is, saying “It’s very easy to understand even if you have zero background in this field”. Dr. Intalar sums up a key takeaway by stating that “At the end of the course, you can create a real game that you and your friends can enjoy!” Now, students at KMITL-IAAI are joining Dr. Intalar in UCA courseware study.Dr. 
Intalar is also interested in virtual reality and says “Unity makes your imagination come true” and that it’s “user-friendly and full of features to help you enjoy the world of virtual reality.” She and her colleagues recently completed training for the new KMITL-Eon Reality Interactive Digital Center, learning how to create interactive lessons with Eon-XR and Unity’s XR Interaction Toolkit.

KMITL-IAAI continues to discover new opportunities to integrate Unity Academic Alliance tools and resources into course curriculum, events, and competitions. From utilizing Unity for Humanity case studies in business ethics to Unity Monetization resources in marketing, the possibilities are endless. Equipped with the evolving Unity Curricular Framework, developed with input from Unity and other industry experts, KMITL-IAAI is well-positioned to help students master key learning objectives and enter the workforce with essential skills.

KMITL-IAAI welcomes faculty, staff, and students who are interested in learning more about Unity or becoming a Unity Certified Instructor to contact us via our faculty website or our Facebook page. With KMITL-IAAI and the Unity Academic Alliance working together to bring you quality education and professional skills, the sky's the limit!

Are you an innovative post-secondary institution looking to join the Unity Academic Alliance? Learn more about the benefits offered and explore frequently asked questions. Interested in becoming a Unity Certified Instructor? Learn how you can make an impact on Unity creators.

>access_file_
1306|blog.unity.com

Unity 2021.2 beta is available for feedback

Get a first look at what Unity is offering in this release cycle. These new features and improvements are available for you to try today. Recent releases have focused on stability, performance, and workflow optimizations. We’ve continued to emphasize these priorities in Unity 2021.2, but this release also lands many highly anticipated features, while numerous others are available in early testing. This blog post dives into some of these new features.

These are some of the coding and general Editor highlights in 2021.2:

- Editor: Scene View tool overlays, quality-of-life improvements, Editor performance optimizations, a beta of the Apple Silicon Editor, the AI Navigation experimental package, feature sets
- Scripting: Performance improvements, Asset Import and Scriptable Build Pipeline optimizations
- Profiling: Better connectivity, platform, and Scriptable Render Pipeline (SRP) support in the Profiler, Experimental System Metrics Mali package
- Platforms: Chrome OS support, Android AAB support, Android and WebGL improvements, Adaptive Performance updates, UDP improvements

Unity 2021.2 also offers a plethora of new features and tools that are ready for feedback from artists and teams:

- High Definition Render Pipeline (HDRP): Volumetric clouds, Terrain details, Streaming Virtual Texturing, NVIDIA DLSS, improvements to Path tracing and Decals UX
- Universal Render Pipeline (URP): Scene Debug View Modes, Reflection Probe blending and Box Projection support, URP Deferred Renderer, decal system, Depth prepass, Light Layers, Light Cookies, SSAO performance improvements, new samples, and more
- SRP: Lens Flare system, Light Anchor, GPU Lightmapper Lightmap Space Tiling, Enlighten Realtime GI, SRP settings improvements
- Authoring tools: Terrain tools updates, SpeedTree 8 vegetation, Shader Graph improvements, UI Toolkit runtime, better VFX Graph and Shader Graph integration, improved URP support, and more
- 2D tools: 2D Renderer improvements, 2D Lights in Light Explorer, Custom Lighting node for Shader Graph, VFX support, new 2D URP template, Sprite Atlas v2 new APIs and folders, updates to 2D Animation, 2D Tilemaps, and 2D Physics
- Cinematics: Experimental package Sequences; updates to Recorder, Alembic, and Python; Cinemachine simplified impulse; Unity Virtual Camera and Face Capture in beta

You can get the latest beta from the Unity Hub or on our download page. As of today, it includes over 3,000 fixes and more than 720 features and changes. Remember, the beta is not intended for use in production-stage projects. As always, make sure to back up your existing projects if you plan to use them with the beta.

Your feedback is the most important part of the beta release. That’s why we’ve partnered with NVIDIA to offer you a chance to win one of two GeForce RTX™ 3090 graphics cards when you are the first to report an unknown bug that you encounter during testing. Find the details at the end of this blog post.

Testing the latest release

Unity teams look forward to seeing you explore and test new features. You can report any issues you find by using the Bug Reporter. By clicking Help > Report a Bug…, you can help us to efficiently investigate your issue and assign it a ticket in our system for faster resolution by our development teams. If you’re discussing the issue in the forum or in Unity Answers, sharing the Case ID is helpful to the team. Before you submit a bug report, you can make sure that your issue isn’t already known by checking our public Issue Tracker for any similar cases.

There are benefits to actively reporting issues during betas. Besides helping us to iron out the remaining kinks and improving the release for everyone, each original and reproducible submission boosts your chances of winning one of our Beta sweepstakes prizes. Just make sure to add #Beta2021Win_NVIDIA to your bug report.

Where you can help most

The Beta and Experimental Forums are where the Unity community and staff connect to discuss pre-release technology and the beta.
Participating in the forums helps Unity teams to evaluate the state of the beta, plan the product roadmap, and better understand developers’ needs and experience to fuel the evolution of Unity tooling. Please share any feedback that you have about the beta in the 2021.2 beta forum.

If you’re interested in giving us feedback about your Unity experience in general and want to influence the future of Unity, you should join Unity Pulse. This is our new research platform and community where we conduct surveys, polls, roundtables, interviews, and group discussions that fuel how we prioritize our resources. You can learn more about it in this blog post.

Now, let’s take a look at what you will find available for testing in this release. In Unity 2021.2, we continue to focus on quality-of-life improvements with significant Editor performance speedups and useful new workflow options.

In this release, we overhauled the Scene View UX by adding overlays for artist-driven, context-based tools, as well as customizable floating toolbars. We’re starting with Scene Tools (move, rotate, scale, etc.), Component tools, Orientation, and Search. This system is extensible, so you can add custom tools and toolbars as overlays as well.

You will also find many improvements across the Editor to improve your efficiency, including:

- The Transform component can now constrain scale proportions.
- Project assets have copy/cut/paste support. Dragging multiple objects from the Hierarchy into the Project window now produces multiple Prefabs.
- You can preview complex Prefabs more quickly in the Inspector, and “revert to Prefab” works with multiselection.
- The Game view “maximize on play” includes new options.
- We improved math expressions in the Inspector’s number fields. For example, sqrt(9) evaluates to 3, and *=2 makes the value 2x larger across an entire selection.
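Conceptually, these number-field expressions behave like the sketch below. This is an illustrative Python sketch of the idea, not Unity’s actual evaluator; the function name and evaluation strategy are assumptions.

```python
import math

def apply_field_expression(expr, current_value):
    """Evaluate an Inspector-style number-field expression (illustrative only).

    Plain expressions like "sqrt(9)" replace the value; operator
    assignments like "*=2" transform the current value.
    """
    expr = expr.strip()
    for op in ("+=", "-=", "*=", "/="):
        if expr.startswith(op):
            operand = float(expr[len(op):])
            return {"+=": current_value + operand,
                    "-=": current_value - operand,
                    "*=": current_value * operand,
                    "/=": current_value / operand}[op]
    # Otherwise evaluate the expression with math functions available.
    return float(eval(expr, {"__builtins__": {}}, vars(math)))

# Applying "*=2" across a multi-selection doubles every selected value.
print([apply_field_expression("*=2", v) for v in [1.0, 2.5]])  # [2.0, 5.0]
print(apply_field_expression("sqrt(9)", 0.0))                  # 3.0
```

The operator forms are what make multi-selection editing useful: each object’s field is transformed relative to its own current value rather than overwritten with one number.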
ToString() on various C# math types (for example, Vector3) now prints two decimal digits by default instead of one. Clicking on a material slot in the Renderer component now highlights that material part in the Scene view.

This release also includes many quality-of-life improvements for visual scripting. Opening an empty graph editor window now displays guidance on how to create or load graphs. Icons were adjusted for greater consistency with the Unity Editor. “Unit” has been renamed “Node,” and “Super Unit” is now “Sub-Graph.” We’ve reduced the amount of time it takes to import assets from a project using visual scripting. New nodes are available to simplify access to Script graphs and State graphs.

We’ve also improved the workflows around Search. Use the new Table View to compare search results across multiple properties and sort items by name or description. Search can now be used to provide more relevant items when selecting a reference via the Asset Picker.

In this release, the Package Manager includes feature sets, a new concept that groups together packages required for specific outcomes, like 2D game development or creating for mobile. They’re designed to work well together, and you can also access learning resources to help you get started quickly, straight from the Package Manager.

Additionally, we recently released the beta of our new Apple Silicon Editor, which provides M1 Mac users with a native Unity Editor experience. We’re looking for feedback during the beta period so we can make any necessary improvements prior to our full release in 2021.2. Learn more about how to access this beta and provide feedback in the forum.

This release also brings a slate of improvements to asset workflows that will help you speed up your iteration process throughout the development lifecycle in the new beta.
The new Import Activity window helps you uncover what’s happening during the import process – which assets were imported or reimported, when it happened, how long it took, and why it happened. This release also includes asset import speedups across the board thanks to accelerated texture imports, mesh import optimizations, and new importing options. See this forum post for more details on the improvements.

Lastly, we’ve looked at optimizing the build process with Scriptable Build Pipeline optimizations and Build Cache performance improvements. We’ve also upgraded our player code build pipeline for Windows, macOS, Android, and WebGL with a solution that supports incremental C# script compilation. As a result, when you’re making small changes to your projects, the player build time will better correlate with the size of the changes you’ve made. We’re working on adding this improvement to the remaining platforms in future versions of Unity.

A new IL2CPP Code Generation option in the Build Settings menu generates much less code (up to 50% less). This allows for both faster IL2CPP build times and smaller executable files. There may be a small runtime performance impact due to the different code generation methods, so this option is best suited for improving team iteration times. Let us know how this impacts your project speed in this forum thread.

You can also find the AI Navigation Experimental release package, which offers additional controls for building and using NavMeshes at runtime and in the Unity Editor.
Learn more in the documentation section and forum.

We’ve included new performance improvements that will benefit coders, including:

- C# math performance improved by more aggressive inlining of functions
- The Async Read Manager API can be called from Burst jobs, including APIs for async open, close, and canceling
- Asset garbage collection code is now multithreaded
- 6x faster GUID hash generation for common data patterns

This release features many improvements for the profiling toolset:

- Improved Profiler connectivity with tethered Android devices
- The connection drop-down menu has been revamped into a tree view that groups player connections into local, remote, and direct connection categories
- Improved platform support for obtaining GPU timings of URP/HDRP code
- New APIs to pass arbitrary data to the Profiler and visualize it as a custom profiler module; this enables exposing performance metrics relevant to a game or any other system in the Profiler Window, as well as alternative visualizations of Profiler data to facilitate additional analysis
- Improved Memory module view in the Profiler Window (see the forum discussion)

An Experimental release of the new System Metrics Mali package allows you to access low-level system or hardware performance metrics on mobile devices with Mali architecture for profiling or runtime performance adjustments. Learn more about it in the documentation and in the forum thread. You can add it in the Package Manager using the “Add package by name” feature and entering com.unity.profiling.systemmetrics.mali.

We added four new Screen APIs that provide greater control over display settings in games, enabling players with multiple monitors to select which monitor the game window should appear on. These APIs are: Screen.mainWindowPosition, Screen.mainWindowDisplayInfo, Screen.GetDisplayLayout(), and Screen.MoveMainWindowTo().

The release includes support for Chrome OS within the Android development environment.
Unity will support x86, x86-64, and Arm architectures for Chrome OS devices. In addition, developers can build their own input controls to take full advantage of keyboard and mouse setups, or use built-in emulation. Since Chrome OS support lives within Unity’s Android ecosystem, this means less platform maintenance and an easier process for publishing to the Google Play Store. Read more in the documentation and our forum discussion.

In 2021.2, Unity provides direct support for Android App Bundle (AAB), Android’s new app publishing format, for asset building. Using AAB, developers can meet the Google Play asset delivery requirements to publish any new apps to Google Play.

Adaptive Performance 3.0 is available starting with 2021.2. This new version adds Startup Boost mode, which allows Adaptive Performance to prioritize CPU/GPU resources to help launch games more quickly. It also adds integration with the Unity Profiler to let you profile Adaptive Performance more efficiently in regular workflows. See the documentation and forum discussion for more information.

Creators building for Android devices can now take advantage of new Android thread configuration improvements, including options that allow you to choose whether to optimize your apps to be more energy-efficient or more highly performant. While the default settings should be fine for most users, this feature gives more advanced users fine-grained control over how their apps run to maximize their performance on hardware.

WebGL improvements include Emscripten 2.0.19, which gives faster build times and a smaller WebAssembly output for the WebGL target. This release also includes features for future support of the WebGL Player in mobile web browsers, including gyroscope, accelerometer, gravity sensor, and attitude sensor values (iOS and Android browsers).
Other enhancements include forward- and rear-facing web cameras and the ability to allow full-screen projects to lock their screen orientation on Android browsers. Compressed audio support reduces the amount of memory used by the WebGL player in the browser for long-running background music and large audio files. You can now choose ASTC or ETC/ETC2 compressed texture formats to target mobile web browsers, as well as BC4/5/6/7 texture formats for higher-quality compressed textures on desktop browsers.

Unity Distribution Portal (UDP) improvements include support for the Editor’s Play Mode. Additionally, the game will fetch the IAP products defined in your project, and purchases and consumptions will always be successful so that you can test your fulfilment in Play Mode without any disruption from UDP methods waiting for their callbacks. We’re also adding a guide to help you through your UDP implementation. Once it knows how you intend to implement UDP (directly, or via Unity IAP), it will provide you with step-by-step instructions, as well as code samples. It’s accessible through the menu structure, where you should look for the Implementation Guide.

2021.2 includes many improvements to our cinematic tools, as well as new packages. The new Experimental package Sequences (com.unity.sequences) offers a new workflow tool for cinematic creation that keeps a film’s editorial content organized, collaborative, and flexible. Check out the documentation for more information.

The latest release of Recorder integrates Arbitrary Output Variable (AOV) recording, which is useful for creating separations in VFX and compositing.
We’ve also integrated Path Tracing and Accumulation Motion Blur for more realistic rendering effects. The latest release of Alembic format support includes the ability to stream an Alembic file from an arbitrary location, effectively bypassing import, as well as improved material handling. The Cinemachine simplified impulse greatly reduces the complexity of setting up how cameras react to in-game events such as explosions.

Python for Unity facilitates Unity’s interaction with various media and entertainment applications to ensure that you can seamlessly integrate Unity into a broader production pipeline. In version 4.0, you no longer need to install Python. It also adds support for Python 3.7, and in-process Python is no longer reinitialized on domain reload. The PySide sample is much simpler and runs in-process, and there’s limited virtual environment support. Check out the documentation and forum discussion for more information.

In 2021.2, new Experimental packages take aim at improving how you use advanced cinematics. Unity Virtual Camera is an iOS app that leverages Apple’s ARKit to drive the movement of a camera in the Unity Editor using real-world AR-tracked motion from your device. Unity Face Capture allows you to use your Face ID-enabled iPhone or iPad to capture, preview, and record performances, then bind them to a model in iOS. To gain access to Unity Virtual Camera and Face Capture, sign up for the Cinematics Open Beta.

Artists can add procedural Volumetric Clouds in HDRP. It’s easy to quickly tweak the default parameters to achieve different kinds of realistic clouds, while advanced users can access more settings and import their own maps for finer artistic control.

NVIDIA Deep Learning Super Sampling (DLSS) is a rendering technology available for HDRP that uses artificial intelligence to increase graphics performance and quality. It allows you to run real-time ray-traced worlds at high frame rates and resolutions.
It also provides a substantial performance and quality boost for rasterized graphics and improves the performance of VR applications so they run at higher frame rates. This helps to alleviate disorientation, nausea, and other negative effects that can occur at lower frame rates. To celebrate this powerful technology coming to Unity, we’ve partnered with NVIDIA to offer beta participants a chance to win one of two GeForce RTX™ 3090 graphics cards, along with an exclusive, limited edition Unity x LEGOⓇ Minifigure. Find the details at the end of this blog post.

HDRP Path tracer improvements include added support for volumetric scattering in path-traced scenes (only linear fog was previously supported). This feature also offers hair, fabric, StackLit, and AxF materials, as well as better HDRI sampling for enhanced visual quality when lighting a scene with an HDRI.

Volumetric density volume format and blending improvements include the ability to use a Render Texture or Custom Render Texture as a volume mask in the Density Volume component. Other new additions in this release include colored volume masks, higher-resolution volume masks (up to 256×256×256, configured in the HDRP settings), and a falloff mode for density volume blend distance (linear or exponential). The 3D Texture atlas was improved to support different 3D texture resolutions and RGBA 3D textures.

Based on artists’ feedback, we have improved the UX for HDRP Decals placement, including the Pivot point tool, improved UV manipulation, scale transform support, Prefab support, editing of gizmo colors, and multi-selection editing.

Streaming Virtual Texturing (SVT) is a texture streaming feature that reduces GPU memory usage and texture loading times when you have many high-resolution textures in your scene. It works by splitting textures into tiles, then progressively uploading these tiles to GPU memory when they are needed. SVT is an experimental feature and is only supported in HDRP.
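The tile-splitting idea behind SVT can be sketched conceptually. This is an illustrative Python sketch with a made-up tile size, not the HDRP implementation: only the tiles covering a requested texture region are uploaded to (simulated) GPU memory, and tiles already resident are never re-uploaded.

```python
TILE = 128  # tile edge in texels (illustrative choice, not HDRP's actual size)

def tiles_for_region(x0, y0, x1, y1):
    """Tile coordinates covering the texel region [x0, x1) x [y0, y1)."""
    return {(tx, ty)
            for tx in range(x0 // TILE, (x1 - 1) // TILE + 1)
            for ty in range(y0 // TILE, (y1 - 1) // TILE + 1)}

class TileStreamer:
    """Progressively uploads only the tiles a frame actually samples."""
    def __init__(self):
        self.resident = set()  # tiles currently in (simulated) GPU memory

    def request(self, x0, y0, x1, y1):
        needed = tiles_for_region(x0, y0, x1, y1)
        uploads = needed - self.resident  # only missing tiles are streamed
        self.resident |= uploads
        return sorted(uploads)

s = TileStreamer()
print(s.request(0, 0, 256, 128))  # first frame uploads [(0, 0), (1, 0)]
print(s.request(0, 0, 256, 128))  # same region again: nothing new, []
```

GPU memory then scales with what is visible rather than with the total size of the textures in the scene, which is the core benefit the feature description above points at.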
This release brings further improvements, including PS5 platform support.

Improvements in this release bring URP’s Scene Debug View Modes closer to parity with the options available in the Built-in Render Pipeline. The Render Pipeline Debug Window is also included as a new debugging workflow for URP in this release. You can use the Debug Window to inspect the properties of materials being rendered, how light interacts with these materials, and how shadows and LOD operations are performed to produce the final frame.

Reflection Probe blending and box projection support has been added to allow for better reflection quality using probes, bringing URP closer to feature parity with the Built-in Render Pipeline.

The URP Deferred Renderer uses a rendering technique where light shading is performed in screen space on a separate rendering pass after all the vertex and pixel shaders have been rendered. Deferred shading decouples scene geometry from lighting calculations, so the shading of each light is only computed for the visible pixels that it actually affects. This approach makes it possible to render a large number of lights in a scene without incurring the significant performance hit that affects forward rendering techniques.

The new decal system enables you to project decal materials onto the surfaces of a Scene. Decals projected onto a scene will wrap around meshes and interact with the Scene’s lighting. Decals are useful for adding extra textural detail to a Scene, especially to break up materials’ repetitiveness and detail patterns.

This release adds support for depth prepass, a rendering pass in which all visible opaque meshes are rendered to populate the depth buffer (without incurring fragment shading cost), which can then be reused by subsequent passes. A depth prepass eliminates or significantly reduces geometry rendering overdraw.
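The two-pass idea can be sketched in a few lines of conceptual Python (an illustration of the technique, not URP code): the prepass fills the depth buffer without shading, and the color pass then shades only the front-most fragment at each pixel.

```python
# Each fragment: (pixel, depth), depth in [0, 1] with smaller = closer.
# Overlapping geometry produces several fragments per pixel; without a
# prepass, each of them would run the (expensive) fragment shader.
fragments = [((0, 0), 0.9), ((0, 0), 0.2), ((0, 0), 0.5), ((1, 0), 0.7)]

# Pass 1 -- depth prepass: rasterize depth only (no fragment shading cost).
depth_buffer = {}
for pixel, depth in fragments:
    depth_buffer[pixel] = min(depth, depth_buffer.get(pixel, 1.0))

# Pass 2 -- color pass: an EQUAL depth test now rejects occluded fragments,
# so expensive shading runs once per visible pixel.
shaded = [(pixel, depth) for pixel, depth in fragments
          if depth == depth_buffer[pixel]]

print(len(fragments), "fragments rasterized,", len(shaded), "shaded")
# 4 fragments rasterized, 2 shaded
```

Here four fragments were rasterized but only two reached the fragment shader; the difference is exactly the overdraw the prepass eliminates.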
In other words, any subsequent color pass can reuse this depth buffer to produce one fragment shader invocation per pixel.

Light Layers are dedicated rendering layers that let you mask which lights in a scene affect which meshes. In other words, much like Layer Masks, the lights assigned to a specific layer will only affect meshes assigned to the same layer.

URP Light Cookies enable a technique for masking or filtering an outgoing light’s intensity to produce patterned illumination. This feature can be used to change the appearance, shape, and intensity of cast light for artistic effects, or to simulate complex lighting scenarios with minimal runtime performance impact.

Ambient Occlusion is used to approximate how bright (or dark) a specific surface should be, based on the geometry around it. This release brings several SSAO improvements, including enhanced mobile platform performance and support for deferred rendering, normal maps in the depth/normal buffer, unlit surfaces, and particles.

A new converter framework from the Built-in Render Pipeline to URP makes the upgrade tooling more robust and supports more than material conversion. Motion Vectors support provides a velocity buffer that captures and stores the per-pixel, screen-space motion of objects from one frame to another. URP Volume System Update Frequency allows you to optimize the performance of your Volumes framework according to your content and target platform requirements.

Discover new samples in the Package Manager for URP that provide use cases of features by showcasing their configuration and practical use in one or more scenes. These samples are provided to help facilitate teams’ onboarding and learning.

The following features are compatible with both URP and HDRP. This version introduces a new Lens Flare system. Lens Flares simulate the effect of lights refracting inside a camera lens.
They are used to represent really bright lights or, more subtly, to add a bit more atmosphere to your Scene. The new system, similar to the one present in the Built-in Render Pipeline, allows stacking flares, has an improved user interface, and adds many more options.

Light Anchor makes lighting for cinematics easier and more efficient by providing a dedicated tool to manipulate lights around a pivot point instead of in world space. Various presets allow lighting artists to quickly place lights around a character or any center of interest. This feature is also available for the Built-in Render Pipeline.

GPU Lightmapper Lightmap Space Tiling: this tiled baking technique helps to reduce GPU memory requirements by breaking the baking process into manageable chunks that can fit in the available GPU memory at any time. As a result, you can use the GPU Progressive Lightmapper for faster bakes, even when larger lightmap resolutions are involved.

Enlighten Realtime GI enables you to enrich your projects with more dynamic lighting effects by, for example, having moving lights that affect global illumination in scenes. Additionally, we’ve extended the platform reach of Enlighten Realtime GI to Apple Silicon, Sony PlayStation® 5, and Microsoft Xbox Series X|S platforms.

The SRP settings workflow improvements are a series of UI/UX improvements intended to streamline workflows and provide consistency between the SRP render pipelines. For this iteration, the focus was mainly on aligning the light and camera components between URP and HDRP. The changes consist of aligning header design, sub-header designs, expanders, settings order, naming, and the indentation of dependent fields.
While these are mostly cosmetic changes, they have a high impact.

In this release, the following features are now available in the Terrain tools:

- New Terrain sculpting brushes to bridge, clone, noise, terrace, and twist terrain
- Erosion heightmap-based tools (Hydraulic, Wind, and Thermal)
- Improved material painting controls with noise- and layer-based filters
- General quality-of-life interface improvements to streamline Terrain authoring workflows with the Terrain Toolbox

SpeedTree 8 vegetation has been added to HDRP and URP, including support for animated vegetation using the SpeedTree wind system, created with Shader Graph.

In 2021.2, the Visual Effect Graph includes the following changes:

Refactored Shader Graph integration allows you to use any HDRP shader made with Shader Graph (unlit, lit, hair, fabric, and so on) to render primitives in the Visual Effect Graph. This change replaces the Visual Effect target in Shader Graph, which is consequently deprecated (but still supported) for HDRP. It also allows you to modify particles at the vertex level, enabling effects like birds with shader-animated flapping wings or wobbling particles like soap bubbles.

Signed Distance Field Baker is a new tool to directly and quickly bake static geometry into a 3D texture as a signed distance field in the Editor.

We’re adding functionality to Bounds helpers that will help you set up your particles’ bounds to improve culling performance, or prevent particle systems from being culled due to incorrect bounds.

Structured/graphics buffer support adds a new possibility of passing data to the Visual Effect Graph using structured/graphics buffers in addition to textures. This feature is oriented at programmers who want to add complex simulations like hair or fluid movement, or programmatically assign dynamic data such as multiple enemies’ positions, using the Visual Effect Graph.

Improved URP support enhances the Visual Effect Graph’s stability and compatibility with URP on compute-capable devices.
We’re adding support to render lit particles on URP and the 2D Unlit Sprite shader.

Shader Graph in 2021.2 includes the following changes. Shader keyword limits have effectively been removed. We added a more efficient API to work with keywords and made a very clear separation between global and local shader keywords. Learn more in the forum discussion.

We’ve updated the ShaderLab Package Dependency syntax. Previously, there was no way to express dependencies between shaders and packages in tools and assets aiming to work with multiple render pipelines, which impacted both Asset Store and in-Editor developers. Tool authors would work around this limitation by shipping separate packages, one for each rendering pipeline supported. The ShaderLab Package Dependency feature removes this limitation by extending ShaderLab syntax and giving shader authors a way to explicitly express the dependencies of shaders on packages.

In 2021.2, UI Toolkit can now be used as an alternative to create runtime UI for games and applications. It provides dedicated tools for visually authoring and debugging UI, renders beautiful and scalable text with TextMesh Pro, provides crisp-looking textureless rendering, and can be used alongside Unity UI (UGUI). Learn more in the documentation, or join the forum discussion.

Several URP/2D Renderer improvements can be found in this release. New SceneView Debug Modes in URP are relevant for 2D developers using the 2D Renderer, who can now access the views: Mask, Alpha channel, Overdraw, or Mipmaps. The Sprite Mask feature has been adjusted to work correctly in SRP. You can access it by going to Window > Analysis > Rendering Debugger > Material Override.

The 2D Renderer can now be customised with Renderer Features, which allow you to add custom passes. 2D Lights are now integrated in the Light Explorer window, and they are no longer labeled as Experimental.
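The Renderer Feature idea – user code injecting extra passes into the renderer’s frame loop – can be sketched conceptually. This is illustrative Python, not Unity’s actual ScriptableRendererFeature API; the class names and pass names are hypothetical.

```python
class RendererFeature:
    """A user extension that enqueues extra passes into the frame."""
    def add_passes(self, passes):
        raise NotImplementedError

class OutlineFeature(RendererFeature):
    # Hypothetical example feature: draws an outline pass after the sprites.
    def add_passes(self, passes):
        passes.append("outline")

class Renderer2D:
    def __init__(self, features=()):
        self.features = list(features)

    def render_frame(self):
        # Built-in passes of the (simplified) 2D renderer.
        passes = ["opaque sprites", "2D lights", "transparent sprites"]
        for feature in self.features:  # injection point for custom passes
            feature.add_passes(passes)
        return passes  # executed in order by the backend

print(Renderer2D([OutlineFeature()]).render_frame())
# ['opaque sprites', '2D lights', 'transparent sprites', 'outline']
```

The point of the pattern is that the renderer’s built-in passes stay untouched; extensions only declare what to add and where, which keeps custom effects composable.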
2D Shadows are being optimized; some of these improvements are implemented in this release, including refactoring work, rendering shadows to a single channel, and per-light shadow culling. 2D Light textures produced by the 2D Lights are now accessible via the 2D Light Texture node in Shader Graph. One application of this is the creation of emissive materials for Sprites.

VFX Graph now supports 2D Unlit shaders. In this first iteration, the Visual Effect renderer will not be affected by 2D lights. We look forward to hearing about your experience in this forum thread.

A new 2D URP default template has been added. It includes all verified 2D tools, precompiled, so new projects load faster with the entire 2D toolset at your disposal, including URP and the configured 2D Renderer. The template also includes packages and default settings that are optimal for a 2D project.

Other 2D improvements include Sprite Atlas v2 with folder support and new APIs to find duplicated sprites in several atlases for a single sprite and to query for MasterAtlas and IsInBuild. 2D Pixel Perfect’s Inspector UI has a more intuitive settings display. 2D PSD Importer has new UX improvements, better control over the Photoshop layers, and Sprite name mapping. There’s a new option to flatten layer groups in Unity, and the tool can now autogenerate physics shapes, which can be convenient when you import scene elements that are not characters.

2D Animation updates include bone colors, which can now be set in the visibility panel; this can help you better differentiate and organize bones. UX improvements include shortcuts visible in the tooltips of the skinning editor’s tools, and a new tool to see sprite influences over bones.

2D Tilemaps added the ability to override existing tile palette buttons or add new functionality to help you create custom tooling for tilemaps.
API changes include the addition of the TileChangeData struct, which allows you to set a Tile at a position with a color and transform all at once, instead of invoking several calls. New APIs allow you to get information about animated Tiles and to get a range of tiles. We’ve improved performance when using APIs for setting multiple Tiles at once, such as SetTiles (Tile array and TileChangeData) and SetTilesBlock.

In 2D Physics, you can now read and write primitive physics shapes (Circles, Capsules, Polygons, and Edges) using a new unified shape group feature. This new API provides the ability to add primitive shapes to a physics shape group, or retrieve them from any Collider2D or all Collider2D attached to any Rigidbody2D. Additionally, a new CustomCollider2D provides the ability to write a shape group directly to it, providing fast and direct access to the Collider2D internals. The CustomCollider2D allows you to reproduce all existing Collider2D behavior, or to create new simple or complex procedural Collider2Ds. In the future, the physics shape group will form the basis of new features, including new physics queries and interaction with Sprite physics shapes.

To celebrate the release of DLSS in preview, our partners at NVIDIA have provided two GeForce RTX™ 3090 GPUs for our Beta Sweepstakes so that our lucky winners can add all the power and efficiency of ray tracing and DLSS to their next HDRP project!

To enter the draw, identify and report at least one original bug in a 2021.2 version while the submission period is open. The sweepstakes begin Monday, June 21, 2021 at 9am PST, and the submission period ends Sunday, October 2, 2021 at 5pm PST. An original bug is one that has not yet been reported at the time of submission and has been reproduced and acknowledged by Unity as a bug. Make sure to add #Beta2021Win_NVIDIA in your bug report submission. Every additional valid submission increases your odds of winning, but no participant can win more than one prize.

No purchase necessary.
Void where prohibited. See the full rules here. We will contact the winners directly. Access ray tracing tools and training using NVIDIA’s technology platforms by joining the NVIDIA Developer Program here.

Want to provide feedback directly to Unity teams? Sign up for Unity Pulse, our new product feedback and research community. We created this community because we believe your experience and insights are vital to Unity’s evolution. By joining, you’ll have the opportunity to connect with Unity’s product teams, gain access to new product concepts, and give feedback on beta products, which will help us make the best products and experiences for you. We’re interested in your feedback. Sign up or log in with your Unity ID.

Unity is looking for game creators to join us in revealing the next Unity platform release. Partner with us on marketing and PR efforts and help educate other developers on what’s new and available in the platform. If you’d like to be considered, we’d love to hear about your game! Tell us about your project here.

The following is intended for informational purposes only, and may not be incorporated into any contract. No purchasing decisions should be made based on the following materials. Unity is not committing to deliver any functionality, features, or code. The development, timing, and release of all products, functionality, and features are at the sole discretion of Unity, and are subject to change.

>access_file_
1307|blog.unity.com

Making robots more accessible with Forge/OS and Unity

We’ve seen our robotics customers do some really amazing things using Unity, from testing and training a robot in simulation to operating a real-life robot. But we love use cases that even we didn’t think of, like using Unity to train the human operators of the robots. That’s the approach that READY Robotics is taking with their latest Forge/OS robot software, with the goal of making robots more accessible to end users. After all, the robot revolution won’t happen if everyone needs a PhD to operate them!

Unity’s core belief is that the world is a better place with more creators in it. Learn how READY Robotics is using Unity with Forge/OS to enable more robotics creators in this guest post by READY’s Co-Founder and Chief Innovation Officer, Kel Guerin, and their VP of Marketing, Erik Bjørnard.

Robots have always captured the imagination. Because they represent a human creation that can interact with the physical world in the same way that people do, it's no wonder that we see them constantly represented in movies and TV. More recently, with devices like the Roomba, robots have entered our daily lives, but we often forget the millions of robots that help to make the things we use every day. Commercially, these industrial robots have been around since the 1960s, sharing that rough birthday with the first mass-produced computers. This is ironic, since computers have become a completely pervasive technology in the world, while there are comparatively few robots.

The relative lack of robots deployed in the world is problematic. As we have seen poignantly in the last year, a manufacturing layer built almost entirely on human labor is very brittle, leading to shortages of critical medical components, microprocessors, and even lumber. Anyone in manufacturing will tell you that they would like to be using more automation, but they can’t. Why? Because robots are hard. They take a huge amount of knowledge to program and install, requiring advanced degrees or months of training.
To compound the problem, every brand of robot is completely different, so those months of training only apply to the robot brand you originally learned, and switching to another brand means doing it all over again. This would be like buying a new laptop and having to learn a new operating system, which is again ironic, because this is exactly the problem that computers faced during the late 70s. Every manufacturer released different computer hardware and software that required specific expertise. They were not accessible and, like robots today, there weren’t that many of them.

What solved this problem for computers is the same thing that can solve it for robots. In the 80s, computers were revolutionized by two things: a focus on usability (Apple set the trend, with others following) and common platforms (Microsoft DOS and Windows). When computers were accessible, like those from Apple, people immediately found applications for them. When there was a common platform like Windows, each computer ran the same software, so people could pick the right computer for the job without having to relearn everything. It's this lesson, and these two transformative ideas, that inspired us at READY Robotics to provide a software platform that runs on any robot and actually makes robots easy to use.

Forge/OS was built by READY as the first end-user-focused operating system for robots. Forge does for robots what Windows did for computers (and Android did for phones) by providing a common set of interfaces so the same software “app” can work on any robot. To increase robot accessibility for everyone, we have started by building our own easy-to-use apps on Forge, just like the apps on your phone or computer. One such app is a robot-programming app called Task Canvas, which lets users program robots using simple building blocks in a flow chart. Task Canvas lets anyone easily learn how to program a robot in minutes, and begin working on serious tasks in less than a day.
This is a pretty extreme advancement, considering the average industrial robot normally takes 70+ hours to learn. And since Forge runs on any robot, a person only needs to spend that short time learning Task Canvas once; then, just like using Excel on any computer, the user will be able to control any robot running Forge/OS via Task Canvas.

One of the key remaining limitations to learning robots on any level, however, is access to hardware. Even Forge/OS and Task Canvas, which reduce the training time for using a robot from weeks to just a few hours, require that you have a physical robot to work with. This is a huge issue, because while industrial robots are coming down in price, they still cost thousands of dollars, and are thus not accessible to everyone who wants to learn how to use them. Since READY’s core vision is to make robots accessible to anyone, we started looking at widely used simulation software. The idea was that any person with a computer could learn Forge/OS and Task Canvas by programming a simulated robot on their PC, instead of a physical robot in the real world.

Our search led us to Unity and its game engine. Unity is used extensively by video game developers, but is also being adopted by professionals in other industries like manufacturing. This is because Unity has built a premier set of accessible tools for creating hyper-realistic simulated environments, with realistic textures, physics, and lighting: a simulation tool that has gotten so good that it is often confused for reality. On top of that, Unity recently released a specific set of tools for simulating robots, called Unity Robotics, including a new ArticulationBody component, so that a robot in Unity behaves realistically when compared with its physical counterpart.

For these reasons, it was a natural move to build a robot simulator for Forge/OS in Unity, which we showed off in May at our Forge/OS 5 launch event.
The Forge Robot Simulator connects the easy programming of Task Canvas with a simulated robot in Unity, which can be controlled just as you would control a real robot. Moreover, because of the powerful tools built into Unity, we were able to create incredibly realistic environments in which to use those robots: everything from simple environments where you can learn the basics of robot motion, all the way to complete industrial workcells. Additionally, because you need a complete robot system to work with, we simulated items such as grippers and machine tools in Unity, enabling these devices to be programmed and to work alongside robots to complete a task.

The result is a realistic robot experience, where you can create a robot program to grab objects, trigger other devices, and generally perform industrial-like tasks, all on your PC, without physical hardware. And once you have learned Forge in simulation and are ready to take the plunge with a real industrial robot, everything you have learned in the simulator will directly apply to a real-world system, because it runs Forge/OS too.

We are so excited for the day when anyone, whether they are a student learning about robots or a professional preparing for a career in robotic automation, can boot up a computer and learn how to program a real robot. We believe Forge/OS has the power to unlock robots for everyone by making them accessible in the same way that Windows and Apple made computers accessible. And we believe the Forge Robot Simulator is the most accessible way for anyone to get started with Forge/OS in a compelling, realistic simulation powered by the Unity engine.

Forge/OS is now available on the READY Robotics website. Look for the Forge Robot Simulator later this summer.

Unity’s Digital Developer Day – Attend Unity’s upcoming virtual event, where READY’s Co-Founder and Chief Innovation Officer Kel Guerin will be presenting.
Register for free.

Unity Robotics GitHub – Get started with Unity Robotics by going through some of our examples and tutorials today! Learn more.

Unity Robotics mailing list – Want to stay up to date on the latest features and updates from Unity Robotics? Join our mailing list!

>access_file_
1309|blog.unity.com

Say hello to the new Starter Asset packages

As part of our dedication to empowering creators, our dev team has released the new Starter Asset packages. Find out what this means for your workflow below.

What are Starter Assets?

Starter Assets are free, lightweight, Unity-created first- and third-person character base controllers for Unity 2020 LTS and above, using the Cinemachine and Input System packages. Older versions of Unity may also work with Starter Assets, in some cases with slight modifications. They are designed to give you a quick start into prototyping and building character controllers for various game genres, through systems and methods that let you easily build and expand on just about any type of project.

The Starter Assets are split into two separate Asset Store packages: a first-person character controller and a third-person character controller. You can quickly download and import the controller you need for your project directly from the Asset Store. Both controllers are built in the same modular way and based on the same scripts and logic. The project uses the built-in render pipeline, but the default materials can be upgraded to either URP or HDRP through the standard upgrade paths.

What's inside the new Starter Asset packages?

Character controllers: At the core of the new Starter Assets is a set of two lightweight character base controllers, adapted for third-person and first-person control.

Input System: Starter Assets utilize the Input System. This way, you can modify controller input for various input methods, such as a gamepad, a mouse and keyboard setup, or a touchscreen mobile device (supporting both joysticks and touch zones). If your project is using the old Input Manager, you can enable simultaneous support for both systems in the Project Settings.

Cinemachine: With the help of Cinemachine, the camera settings are customizable so they can be tailored to your project's needs.
The character prefabs have several exposed values that are easy to adjust to your liking, so you can speedily customize the character’s movement and Cinemachine camera settings to suit your project.

Visual assets: The Starter Asset packages are supplied with a humanoid-rigged armature that can support various animations, as well as an environment for testing character movement, and a basic joystick set for touchscreen controls.

Compatibility: Both packages are separate and highly modular, which makes them adaptable to a large variety of use cases. The packages also include ready-to-use prefabs for more efficient integration.

How are the packages used?

Whether you’re completely new to Unity and want to add a player character to your scene, or you’re an experienced game developer looking to quickly test your own features and functionality, the Starter Assets can get you up and running before you know it! The Third Person Controller package includes an original armature, fully rigged with custom animations that use a humanoid character rig. This simplifies replacing the armature with another character that uses a humanoid avatar. Even better, animations can be changed and adjusted to create an original character for your own project.

How does this change improve the user experience?

The Starter Asset packages allow you to jump right into Unity 2020 LTS, no matter your previous experience or expertise. These starter packages provide a solid base to practice using Cinemachine and the Input System in a character controller built on the built-in CharacterController component. Be sure to check out the package documentation located in the download package to learn how to leverage Starter Assets for your next project. We aim to ensure that these assets remain readily available and up to date for your use.
Stay tuned for more announcements as we expand our support for Starter Assets.

First Person Character Controller
Third Person Character Controller

We hope that you find value in the new Starter Asset packages and look forward to hearing your feedback. Head on over to our forum discussion with any questions or comments that you want to share with us!

>access_file_
1310|blog.unity.com

5 A/B tests to increase revenue and users for your hyper-casual game

At the latest Hyper Games Conference, Lior Shekel, Strategic Partnership Team Leader at ironSource, presented the top 5 A/B tests every hyper-casual game developer should run to increase revenue and drive more users. Read on for a summary of Lior’s presentation, or watch it below.

Test 1: Creatives that boost your game metrics

As a hyper-casual developer, you likely already test many concepts and iterate on them, A/B test videos with interactive end cards to find the best combo, and continue challenging winning creatives to achieve low CPI, high ARPU, and scale. But that’s where the process usually ends: once you find a “killer creative” that explodes on social and SDK networks and drives great engagement, what should you do next? Lior suggests integrating it into the game and A/B testing the change. For example, you can integrate it as an “epic challenge level” like Supersonic did: once every few levels, users were given the choice to “go epic” or continue, which led to a 10% ARPU uplift. If you make any changes to that killer creative, be sure to reflect those changes in your game, for example, different backgrounds, new colors, characters, and more. Always A/B test the changes and see how they impact your game.

Test 2: In-app bidding

In-app bidding has many benefits: it helps maximize your ad revenue, saves precious time through automation, and enables full data transparency. User acquisition strategies are impacted as well; with a chance to bid for every single impression, you can increase reach and scale much quicker. Make sure you’re constantly testing the impact of in-app bidding on your games. If you find one month of in-app bidding doesn’t bring the ARPDAU lift you’re looking for, try again the next month.
Mediation platforms are constantly adding new bidding networks and updating logic that could positively influence your revenue. ironSource data shows that in-app bidding impressively outperformed the traditional waterfall in 94% of A/B tests, resulting in a 10% ARPDAU uplift on average.

Test 3: Interstitials

Interstitials are the basis of any hyper-casual monetization strategy, and making sure they’re performing at maximum capacity is key to success. Lior reviews two critical tests every hyper-casual game developer should run. First, pacing, which is the frequency at which you show an interstitial in your game. In an A/B test of different pacing periods (25, 35, and 40 seconds), results showed that 25 seconds between interstitials performed best (in 80% of tests), with a 10% uplift in ARPU. The second interstitial test centers around the stage of the game at which the interstitial is shown. Here, the test group shows the interstitial between the level end page and the win page, while the control group shows the interstitial at the end, after the user chooses to skip the rewarded video. In this test, the outcome showed a 20% increase in interstitial engagement rate and a 150% increase in impressions per DAU. Although only 55% of the test groups won, you’ll want to test this with your game to maximize your ARPU (increased by 5% in this case).

Test 4: Follow the ROAS

ROAS is the most important KPI for user acquisition, especially for hyper-casual games, as this is where most of your profit comes from. To improve ROAS, try A/B testing a blanket bid vs. bidding differently per source. For example, in a test in which the same bid was made on two sources with different LTVs, results showed a negative margin on source A and a positive margin on source B. Both sources showed the same results for install scale. Meanwhile, bidding granularly per source showed different results.
By decreasing the bid on source A and increasing the bid on source B, margins were optimized and the scale of installs on source B doubled. These results show the importance of challenging your current UA strategies and testing per-source bidding to optimize and increase your ROAS.

Test 5: Keeping your users within your portfolio

Because hyper-casual developers ship games often and quickly, it’s likely you have a pretty hefty portfolio. Keeping users within your portfolio is crucial, and the best way to do that is implementing a cross-promotion strategy. Lior explains a test a publisher made comparing user acquisition with cross promotion and user acquisition without it. The test was run through the ironSource cross promotion tool, available on the ironSource platform, which allows you to split and separate cross promotion activities between different users and groups. In the test, group A ran cross promotion and group B didn’t. Cross promotion results showed zero impact on retention and, interestingly, ARPU slightly decreased (approximately a $0.01 difference). This outcome reflects the opportunity cost of cross promotion: showing an ad you are paying for means not showing a different, revenue-generating ad. Looking at the whole portfolio, it’s interesting to see the difference in LTV of users who start engaging with cross promotion on different days. In this case, cross promotion engagement starting on Day 2 drove the highest LTV. “With more than $0.30 difference between groups, this test shows how important it is to segment your users and run cross promotion within your game, until you find the sweet spot," Lior says.

Key insights

Remember to continue challenging your setup with A/B testing, whether it’s user acquisition, monetization strategies, or interstitial timing. Define your KPIs in advance and stick to them, all while exploring new growth opportunities. It’s always possible to improve your game.
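The uplift figures quoted throughout these tests reduce to simple per-group arithmetic. As an illustration only, here is a minimal sketch, with made-up revenue and user counts (not figures from the talk), of how an ARPU comparison between a control and a test group might be computed:

```python
def arpu(total_revenue: float, users: int) -> float:
    """Average revenue per user for one test group."""
    return total_revenue / users

def uplift(test: float, control: float) -> float:
    """Relative uplift of the test group over the control group."""
    return (test - control) / control

# Hypothetical A/B test: control vs. an "epic challenge level" variant.
control_arpu = arpu(total_revenue=500.0, users=10_000)
test_arpu = arpu(total_revenue=550.0, users=10_000)
print(f"ARPU uplift: {uplift(test_arpu, control_arpu):.0%}")  # prints "ARPU uplift: 10%"
```

In practice you would also check statistical significance before acting on an uplift; the arithmetic above only describes the point estimate.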

>access_file_
1311|blog.unity.com

UPM Dependency Confusion & AssetBundle Security in the Editor

Security is an important aspect of development for all kinds of software, and Unity projects are no different. As part of Unity’s Responsible Disclosure policy, we work closely with external researchers on possible vulnerabilities or issues that arise within the Unity environment. Recently we have been in contact with security researchers at IncludeSecurity. Working with them in the model of coordinated disclosure, we want to share information about insecure development practices that Unity developers may encounter. The two topics covered in this blog post are dependency confusion and AssetBundle security.

Supply chain security vulnerabilities are a serious issue that all Unity developers need to take into consideration when creating their games. One of these vulnerabilities is known as dependency confusion. Dependency confusion occurs when an attacker is able to influence a developer's environment and tools to download a malicious package. This attack leverages unsafe default behavior within some package managers and private repositories.

The Unity Editor has its own package manager, Unity Package Manager, which supports fetching packages from NPM registries. This means a Unity developer using a private NPM registry could face the same risks from the dependency confusion vulnerability described above. Take, for example, a Unity project that pulls from a private registry and a public registry like npm.io. A developer can upload Package A to the private registry, as a standard development practice. If the package manager scopes are too broad, or if the private registry proxies a public registry, then a malicious attacker can upload a malicious Package A with a higher version number to the public repository.
Due to its higher version number, the malicious package may be downloaded to the Unity project, which would result in code execution on the developer’s machine at dependency load time, or on any machines running the Unity project.

NOTE: Unity does not recommend using public registries, such as the NPM public registry. Many packages in these public repositories are not supported in the Unity Editor, and some features are not supported by the Unity Package Manager.

By using private packages in the Unity Editor with a private registry that proxies a public repository, a developer may leave themselves vulnerable to a dependency confusion attack. This attack is what IncludeSecurity describes in their article here.

IMPORTANT: The default configuration of the Unity Package Manager is not vulnerable to a dependency confusion vulnerability. The developer must modify their manifest.json file, as detailed below, to become vulnerable. See the ‘Mitigations’ section below to understand whether you’ve modified your local package configuration in a vulnerable manner, and how to update your configuration to mitigate the vulnerability. The vast majority of developers should not be concerned, but should familiarize themselves with this content nonetheless, in order to understand how they can continue to keep their codebase and customers protected.

In the example vulnerable configuration below, a proxied registry is used as the only scoped registry: one that pulls from the private registry and from the public NPM registry. Since the packages defined in the scope share the same registry, a malicious attacker can upload “com.private.vulnerable.package” with a higher version to the NPM registry, which can result in the malicious package being downloaded to the user’s environment when the developer updates their packages.

Example Vulnerable Configuration:

The typical mitigations, such as scoped packages for NPM, are not supported in Unity Package Manager.
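The configuration listings from the original post did not survive in this text. As a hedged sketch only, a vulnerable manifest.json might look something like the following; the registry URLs and the "com.public" package name are illustrative, not taken from the article. A single proxied registry covers the whole "com.private" scope, so the package's source is not pinned:

```json
{
  "scopedRegistries": [
    {
      "name": "Proxied registry (hypothetical)",
      "url": "https://proxy.example.com",
      "scopes": ["com.private"]
    }
  ],
  "dependencies": {
    "com.private.vulnerable.package": "1.0.0"
  }
}
```

The mitigation described in this section splits the registries into separate scoped registry blocks and lists the exact packages each one serves, roughly along these lines:

```json
{
  "scopedRegistries": [
    {
      "name": "Private registry (hypothetical)",
      "url": "https://private.example.com",
      "scopes": ["com.private.vulnerable.package"]
    },
    {
      "name": "Public registry (hypothetical)",
      "url": "https://registry.npmjs.org",
      "scopes": ["com.public.some.package"]
    }
  ],
  "dependencies": {
    "com.private.vulnerable.package": "1.0.0"
  }
}
```

With the package name pinned to the private registry's scope, a higher-versioned package of the same name on the public registry is never considered.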
Instead, using Scoped Registries can prevent this type of issue. By explicitly defining packages in the scopes of a scoped registry, the source of each package is strictly locked down to what is defined in the scoped registry configuration file. Note that if a private registry also proxies NPM, then the Unity Package Manager cannot differentiate between a privately published package and a publicly published one. In that case, the mitigation needs to be applied at the registry level, which IncludeSecurity describes in their article.

The following configuration fixes the vulnerable configuration by splitting the registries into their own scoped registry blocks, and explicitly defines which packages are used by each registry. This means a malicious package will not be downloaded into the user’s environment, even if the package has a higher version on a public registry.

Example Secure Configuration:

AssetBundles are a commonly used feature of the Unity engine that deliver assets in a standardized format. They are commonly used to download additional assets to a user’s game when the asset is needed, cutting down on the amount of data that must be initially downloaded to play a game. In some cases, AssetBundles can even be used to distribute custom content for modding or customization in a Unity project.

When using AssetBundles, it is important that they are downloaded from a trusted source. Even though AssetBundles cannot contain executable code, IncludeSecurity found that a specially crafted AssetBundle file could allow an attacker to exploit a vulnerability in the game code or the Unity runtime. This is the issue described by IncludeSecurity. Objects like AnimationEvent, UnityEvent, and SendMessage all have a similar attack surface, in that they allow arbitrary methods to be called on components.
If untrusted AssetBundles are going to be used, consider disabling these objects altogether or sanitizing them. As such, AssetBundles downloaded off the internet should be treated no differently than any other piece of software downloaded from the internet:

Only use AssetBundles that are from trusted sources.

Ensure that AssetBundles are transmitted through secure communication channels, such as TLS.

If untrusted AssetBundles must be used, consider the following mitigation strategies:

Use an explicit list of allowed OS-level C# methods that can be called on objects.

Instead of an allowlist, another, less effective, option is to create a blocklist of all dangerous methods, for example, methods that access URLs or manipulate the local file system. Blocklists must be maintained as malicious actors discover new methods that bypass the list.

For AnimationEvents specifically:

Do not accept scripts. Consider removing AnimationEvents entirely when loading an untrusted object.

Disable event handling altogether on Animators (see our documentation).

As content creators, we rely on tools and services to help create amazing experiences, and those dependencies carry risk. Taking the time to ensure that your own developer toolchain is secured and configured properly will benefit both you and your users.

Further reading:

3 Ways to Mitigate Risk When Using Private Package Feeds, Microsoft

Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Other Companies, Alex Birsan

Find out more about Unity Security and all Unity security advisories.
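The allowlist mitigation above is not specific to Unity. As an illustration only (sketched in Python with invented class and method names, not the Unity API), the core idea is to route any data-driven method invocation, such as an event name embedded in a downloaded asset, through an explicit set of known-safe names:

```python
# Only methods named here may be invoked from untrusted, data-driven input.
ALLOWED_METHODS = {"play_sound", "set_color"}

class Character:
    def play_sound(self, name: str) -> str:
        return f"playing {name}"

    def set_color(self, color: str) -> str:
        return f"color set to {color}"

    def delete_save_file(self) -> str:  # dangerous: deliberately not allowlisted
        return "save file deleted"

def invoke_from_untrusted(obj, method_name: str, *args):
    """Call method_name on obj only if it is explicitly allowlisted."""
    if method_name not in ALLOWED_METHODS:
        raise ValueError(f"method {method_name!r} is not allowlisted")
    return getattr(obj, method_name)(*args)
```

Here `invoke_from_untrusted(Character(), "play_sound", "jump")` succeeds, while any attempt to invoke `delete_save_file` through the same path is rejected, which is the key property a blocklist cannot guarantee as new dangerous methods appear.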

>access_file_
1312|blog.unity.com

Analyzing cohort reports: The what, why, when, and how

What are cohort reports?

In today’s market, analytics is a prerequisite to turning a game into a profitable business. Analytics lets you track how various changes you make, such as introducing a new ad unit, impact KPIs and overall game health; it’s a way to reduce uncertainty, make smart decisions, and optimize performance. To gain a holistic understanding of your game’s health, and to accurately measure and track two of the most important metrics, retention rates and LTV, cohort reports are key. A cohort is a group of users who performed an action on a specific day, for example, the day on which they installed your game. Cohort reports provide insight into the behavior of user cohorts by analyzing retention and LTV metrics.

Why cohort reports are key

Cohort reports are a crucial tool for unearthing the real picture of your game’s performance. Because they are based on the install date of user groups, cohort reports can show the impact on user LTV and retention of changes in your UA or monetization strategy.

Nothing is ever as good as it seems...

Cohort reports allow you to go a layer deeper than the surface-level metrics offered by user activity and performance reports. For example, perhaps these two reports show a rise in ARPDAU and DAU in a specific time frame; on the surface, it would seem like your app is hitting your monetization and UA goals. However, if you run a cohort analysis of the same time frame, you might see a reduction in user retention and LTV, despite the positive uplift in ARPDAU and DAU. What could possibly explain this? Maybe you’ve added a new interstitial ad placement and invested a bigger budget into UA to acquire users more aggressively, but the quality of these users is poor, and the new ad placement is disruptive to the user experience.
As a result, despite the rise in ARPDAU and DAU these changes triggered, retention rates are decreasing, as is user LTV, which does not reflect a healthy growth strategy.

...Or as bad as it seems

Conversely, you might see a drop in ARPDAU and DAU in your user activity and performance reports, and as a result try to make a change to your strategy, for instance increasing the frequency of interstitial ad placements. However, it’s feasible that retention and LTV are steadily increasing for user cohorts during the same period, and therefore making a change is unnecessary and potentially detrimental. Ultimately, the ability to analyze cohort reports on a regular basis is key to optimizing how you monetize your content, and it increases the likelihood of creating a profitable game over the long run. Let’s look into how to actually analyze cohort reports, and when you should do it.

How to analyze cohort reports

Understanding what you’re looking at

A cohort report tracks the retention rate and revenue generated by different groups of users. Each row in the far-left column represents a different user group, and the row at the very top shows the number of days since each user group installed your game. For every cohort, day 0 has a 100% retention rate, and as the days progress this number drops. The shading visually signals the strength of the retention rate or revenue value, allowing you to easily spot anomalies. For example, the April 29 cohort has a much higher Day 3 retention rate than the other cohorts and can be easily identified by its darker blue shade. This could be due to a change in monetization strategy, for instance adding a new rewarded video ad placement that helps users progress through a difficult level. This example shows a 14-day period, but you can choose different time frames.
Looking at cohorts from a few months ago can be useful as a point of comparison with more recent cohorts, and can shed light on the effectiveness of your current monetization strategy and game design. Aside from customizing the time period, depending on the mediation or analytics platform you’re using, you can also select specific geos and, if you have multiple games, specific titles from within your portfolio. The same goes for revenue: as you can see below, this game’s users drive more value the longer they play the game.

When to analyze cohort reports

On a daily basis

You should aim to look at your cohort report every day. First check your high-level metrics in your performance and user activity reports, and then go to the cohort report to see the impact of your monetization and UA strategy on retention and LTV. By checking daily, you can stay agile with your strategy and make speedy, informed decisions to maximize the value of your users.

After changing your growth strategy

In addition to checking in every day, it’s important to analyze your cohort report soon after making a change to your growth strategy. On the monetization side, for example, perhaps you remove an ad network from your waterfall, or roll out a new ad unit in your game. User activity and performance reports will show relevant metrics like eCPM, fill rate, ad engagement rates, and revenue, but the only way to understand the impact on retention and LTV is through your cohort report. This is also true for your UA strategy: if you increase your bid and scale your installs as a result, it’s vital to check the impact on retention and LTV. Even though boosting your UA budget might drive an increase in DAU, if it’s bringing in poor-quality users, and in turn lowering retention rates and LTV, then it's paramount you identify this issue early on.
Otherwise, you risk wasting your budget.

After changing your app

Changes to your app, such as a new version update, should also be closely monitored using cohort reports. Perhaps the update triggers a short-term drop in ARPDAU, but because you fixed certain bugs and improved the user experience, retention and LTV are increasing. Analyzing cohort reports will contextualize such changes and give you a holistic view of their impact on your game’s business performance. The most successful game developers today are the ones who constantly optimize their game design and growth strategies. Cohort reports are a means to this end: they help you contextualize your other KPIs, measure the impact of changes you make, and ensure you’re never blinded by potentially misleading surface-layer metrics. With cohort reports, you can always keep a close eye on the most important metrics for long-term success: retention and LTV.
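The day-offset arithmetic behind such a report is simple to reproduce from raw install and activity data. Here is a minimal, self-contained sketch - the event format and function name are illustrative, not any particular analytics platform’s API:

```python
from collections import defaultdict

def cohort_retention(events):
    """Build a retention table from (user_id, install_day, active_day) events.

    Returns {install_day: {day_offset: retention_fraction}}. Day 0 is always
    1.0, because every user is by definition active on their install day.
    """
    cohort_users = defaultdict(set)   # install_day -> users in that cohort
    active = defaultdict(set)         # (install_day, offset) -> active users
    for user, install_day, active_day in events:
        offset = active_day - install_day
        if offset < 0:
            continue                  # ignore activity logged before install
        cohort_users[install_day].add(user)
        active[(install_day, offset)].add(user)

    table = {}
    for day, users in cohort_users.items():
        table[day] = {
            offset: len(active[(day, offset)]) / len(users)
            for (d, offset) in active
            if d == day
        }
    return table

# Two users install on day 0; only one of them returns on day 1.
events = [("a", 0, 0), ("b", 0, 0), ("a", 0, 1)]
print(cohort_retention(events))   # {0: {0: 1.0, 1: 0.5}}
```

A revenue cohort table works the same way, summing per-user revenue per day offset instead of counting active users.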

>access_file_
1314|blog.unity.com

Announcing the Unity for Humanity Rare Impact Challenge winners

We partnered with Rare Beauty’s Rare Impact in October of 2020 to open a call for immersive experiences that address mental health and well-being. This challenge was designed to underscore the powerful role that real-time 3D can play in supporting the world’s mental health landscape. Today, we are proud to introduce the winning projects: Apart of Me and What It’s Like to Be Me.

Creators from around the world submitted to the Unity for Humanity Rare Impact Challenge, sharing powerful stories and experiences using RT3D to support mental health and well-being. Fueled by their personal experiences and connections to underserved populations, the submissions powerfully illustrated how RT3D can raise awareness of mental health, normalize mental health conversations, reduce barriers to treatment, and increase access to mental health support. We were floored by all of the incredible work creators are doing, and are proud to introduce the two Rare Impact Challenge winners, Apart of Me and What It’s Like to Be Me. Read on to learn how these projects are supporting the mental health of critically underserved communities: tackling the unprecedented grief in adolescents exacerbated by the pandemic, and the unique stressors that make members of the LGBTQ+ community three to four times more likely to suffer from anxiety and depression.

Designed by experts in grief, Apart of Me is a mobile game that helps young people and their families navigate the heartbreak and confusion of grief. The game guides users through a beautiful, calming 3D world built to provide a safe space for making sense of loss, remembering loved ones, and connecting to wisdom shared by other youth who have experienced the loss of a loved one. By introducing powerful practices, including self-compassion, creativity, and community, young people discover new ways to live fully despite their losses. The global pandemic has brought on immeasurable loss.
Apart of Me Co-Founder Louis Weinstock noted that “when we launched this app a couple of years ago, little did we know how badly our app would be needed today. Sadly, this pandemic has left over 4 million children and young people around the world grieving for loved ones.” With the Unity for Humanity Rare Impact Challenge award, this team will be able to expand their reach and meet the growing demand for the app. Apart of Me works in partnership with child bereavement organizations in the UK, including Child Bereavement UK, Grief Encounter, and Winston’s Wish. Download the free Apart of Me app from the Google Play and Apple App stores here.

What It’s Like to Be Me is a VR experience that invites users to understand what it can be like for members of the LGBTQ+ community confronting stress in their everyday life brought on by negative attitudes about their sexual and/or gender identities. What It’s Like to Be Me is about witnessing and engaging with the stories of LGBTQ+ individuals as they navigate the world - recognizing the mental, emotional, and physical strain, as well as the resilience and strength developed through their experiences. This project validates common stories in the LGBTQ+ community to provide a source of comfort in knowing community members are not alone, and that happiness and joy are still possible.

This project was created by two social scientists, Marc Svensson and Kate Luxion, who are completing their PhDs focused on LGBTQ+ mental health and minority stress at University College London (UCL). The LGBTQ+-identifying team behind the project believe that “the most efficient way to improve mental health in our community and help LGBTQ+ people live happier lives is to educate the general population about the issues we are facing, as a community and as individuals. With education and improved understanding comes acceptance, inclusion, and empathy.” Kate Luxion explains that the experience is “built around five true-life stories.
Users are immersed in their experiences in the hope that more people understand what it’s like to experience and work through prejudice, discrimination, and stigma as an LGBTQ+ person.” With the grant funding, the team will complete the project, sharing the first five VR stories. Ultimately, What It’s Like to Be Me will become an integral part of the training workshops run by Helsa, the company Marc Svensson founded in July 2019 as a digital platform for LGBTQ+ mental health support and training. Helsa offers extensive resources for the LGBTQ+ community, including research, training, and support.

Created with Rare Beauty’s Rare Impact, this challenge is part of the existing Unity for Humanity Program within Unity Social Impact, which celebrates and empowers creators who are using real-time 3D to inspire change. If you are using real-time 3D to build environment-focused content, consider submitting to the Environment & Sustainability grant by June 3. Join the Social Impact Mailing List to stay up to date on these projects, upcoming grant opportunities, and more.

>access_file_
1316|blog.unity.com

Why this college is teaching real-time 3D to the next generation of automotive designers

The College for Creative Studies (CCS) in Detroit, Michigan has been at the forefront of innovation for over 110 years. We’ve partnered with this leading art and design college to create a series of hands-on courses to better equip post-secondary students with real-time skills for the automotive industry.

At Unity, we believe that the world is a better place with more creators in it. This is why we partnered with CCS to inspire and educate students and ensure they have the skills they need to enter ever-changing industries. As a leading design institution, CCS typically partners with automotive original equipment manufacturers (OEMs) on curriculum projects to develop new ideas for vehicles. After graduation, their students and alumni go on to work at some of the world’s leading automotive manufacturers, design firms, and technology companies like Unity.

Anuja Dharkar, head of academic and non-profit solutions at Unity, recently sat down with Paul Snyder, co-chair of the Transportation Design program, and David Gazdowicz, associate professor for the Entertainment Arts program at CCS, to chat about design at CCS, the future of automotive design, and why the college is teaching with Unity. OEMs have been utilizing the power of real-time 3D across their workflows for years. Leading manufacturers such as Volvo, Honda, Toyota, Lexus, and BMW have seen great success using Unity in a variety of ways. As the next generation of automotive designers enters the industry, they bring new ways of thinking and are equipped with real-time tools, such as Unity, to excel within the industry.

Since partnering with Unity, Transportation Design and Entertainment Arts students have been given the opportunity to delve deeper into out-of-the-box ideas and explore forward-thinking concepts using real-time 3D for their semester project – Vehicles of the Future.
Students were split into five teams for the 15-week course, composed of automotive designers working on the interior and exterior of the vehicles and entertainment arts designers working on the futuristic environments in which to place them.

“It’s a very difficult project but it’s also very imaginative. When you open projects up to the extent that you’re asking students ‘what will the future look like, what will people be like, and what sort of vehicles will they need’, then they really have to push their imaginations,” says Snyder.

Students were encouraged to conduct cultural and environmental research to explore unusual topics that stimulate iteration – a foundational principle of design. Through multiple iterations and feedback from faculty and Unity staff, the students further developed their vehicle designs to improve functionality, propose integrated technologies – such as in-car experiences – and strengthen the overall ingenuity of their concepts. This iterative process is what Snyder and his colleagues at CCS believe to be a fundamental part of the students’ education. Gazdowicz agrees, noting that the Entertainment Arts students follow a similar process through traditional methods such as sketching prior to moving to three-dimensional modeling.

“The awesome thing about using Unity is that students can get to the [3D modeling] process faster and ideate quicker. So the more they are able to get into the software, into the game engine, and rebuild, design, and develop – they can see the end product really quickly. Then students go through the ideation process all over again to see if something doesn’t click and if the idea is working towards the desired outcome,” says Gazdowicz.

Along with this iterative process, Snyder says these skills are truly setting the students up for real-life success. “Without really knowing it, the automotive industry had been preparing itself for COVID-19 for several years.
Then when it hit, pivoting into digital and real-time 3D for surface evaluation, theme evaluation, or real-time virtual reality presentations became almost seamless,” says Snyder.

Unity and CCS are now working together on a new project to engage students in developing concepts for in-vehicle human-machine interfaces (HMI). This project focuses on the use of real-time 3D to encourage future thinking and collaboration.

***

Learn more about the College for Creative Studies and check out how you can prepare students for in-demand jobs. Want to teach Unity? Learn more.

>access_file_
1317|blog.unity.com

ML-Agents v2.0 release: Now supports training complex cooperative behaviors

About one year ago, we announced the release of the ML-Agents v1.0 Unity package, which was verified for the 2020.2 Editor release. Today, we’re delighted to announce the v2.0 release of the ML-Agents Unity package, currently on track to be verified for the 2021.2 Editor release. Over this past year, we’ve made more than fifteen key updates to the ML-Agents GitHub project, including improvements to the user workflow, new training algorithms and features, and a significant performance boost. In this blog post, we will highlight three core developments: the ability to train cooperative behaviors, to let agents observe a varying number of entities in their environment, and to harness task parameterization to support training multiple tasks. Combined, these advancements move ML-Agents closer to fully supporting complex cooperative environments.

In our 2020 end-of-year blog post, we provided a brief summary of all the progress we had made from our v1.0 release in May 2020 through December of that same year. We also unpacked three main algorithmic improvements that we had planned to focus on for the first half of 2021: cooperative multi-agent behavior, the capacity of an agent to observe a varying number of entities, and establishing a single model to solve several tasks. We can now proudly say that all three major improvements are available in ML-Agents.

In addition to these three features, we made the following changes to the main ML-Agents package:

- Added a number of capabilities that were previously part of our companion extensions package – i.e., the Grid Sensors component and Match-3 game boards.
- Reduced memory allocation during inference. In some of our demo scenes, we observed up to a 98% reduction.
- Removed deprecated APIs and reduced our API footprint. These API-breaking adjustments necessitated our version upgrade from 1.x to 2.0.
See our Release notes and Migration guide for further details on upgrading with ease. In the remainder of this blog post, we will expand on cooperative behaviors, variable-length observations, and task parameterization, along with two incremental improvements: the promotion of features from the extensions package, and overall performance. We will also provide an update on our ML-Agents Cloud offering and share a preview of our exciting new game environment that will highlight complex cooperative behaviors, ahead of its release in just a few short weeks.

In many environments, such as multiplayer games like Among Us, the players in the game must collaborate to solve the tasks at hand. While it was previously possible to train ML-Agents with multiple agents in the scene, you could not define specific agent groups with mutual goals until Release 15 (March 2021). ML-Agents now explicitly supports training cooperative behaviors, so that groups of agents can work toward a common goal, with the success of each individual tied to the success of the whole group.

In such scenarios, agents typically receive rewards as a group. If a team of agents wins a game against an opposing team, everyone is rewarded - even the agents who did not directly contribute to the win - which makes it difficult for each agent to learn what to do independently. This is why we developed a novel multi-agent trainer (dubbed MA-POCA, for Multi-Agent POsthumous Credit Assignment; full arXiv paper coming soon) to train a centralized critic – a neural network that acts as a “coach” for the whole group of agents. With this addition, you can continue to give rewards to the team as a whole, but the agents will also learn how to best contribute to their shared achievement. Agents can even receive individual rewards, so that they stay motivated and help each other attain their goals. During an episode, you can add or remove agents from the group, such as when agents spawn or die in a game.
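The bookkeeping this implies - every participant in an episode is credited with group rewards, even after being removed - can be sketched in a few lines of plain Python. This is a conceptual toy illustrating the posthumous-credit idea, not the actual ML-Agents C# API:

```python
class AgentGroup:
    """Toy bookkeeping mirroring the idea behind MA-POCA: every agent that
    took part in the episode - including agents removed before it ended -
    is credited with the group's later rewards."""

    def __init__(self):
        self.active = set()
        self.credited = {}   # agent -> accumulated group reward

    def register(self, agent):
        self.active.add(agent)
        self.credited.setdefault(agent, 0.0)

    def remove(self, agent):
        # The agent dies or despawns mid-episode but stays in self.credited,
        # so subsequent group rewards still reach its trajectory.
        self.active.discard(agent)

    def add_group_reward(self, reward):
        for agent in self.credited:   # all participants, not just active ones
            self.credited[agent] += reward

group = AgentGroup()
for name in ("scout", "tank", "healer"):
    group.register(name)
group.remove("scout")            # self-sacrifice mid-episode
group.add_group_reward(1.0)      # the team wins later
print(group.credited["scout"])   # 1.0 - the removed agent still gets credit
```

In ML-Agents itself, the equivalent workflow registers agents with a multi-agent group and assigns rewards at the group level rather than per agent.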
If agents are removed mid-episode, they will still be able to understand whether or not their actions contributed to the team winning later on. This empowers them to put the group first in their actions, even if they end up removing themselves from the game through self-sacrifice or other gameplay decision-making. By combining MA-POCA with self-play, you can also train teams of agents to play against each other.

In addition, we developed two new sample environments: Cooperative Push Block and Dungeon Escape. Cooperative Push Block showcases a task that requires multiple agents to complete. The video below exhibits Dungeon Escape, in which one agent must slay the dragon - causing it to be removed mid-episode - so that its teammates can pick up the key and escape the dungeon. Read through our documentation for details on how to implement cooperative agents in your project.

One of the most commonly requested features for the toolkit has been enabling game characters to react to varying numbers of entities. In video games, characters often need to deal with several enemies or items at once. To meet this demand, Release 15 (March 2021) makes it possible to specify an arbitrary-length array of observations called the “observation buffer.” Agents can learn how to utilize an arbitrarily sized buffer through an Attention Module that encodes and processes a varying number of observations. The Attention Module is a great solution in situations where a game character must learn to avoid projectiles, for example, but the number of projectiles in the scene is not fixed. In this video, each projectile is represented by four values: two for position and two for velocity. For each projectile in the scene, these four values are appended to a buffer of projectile observations.
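The key property of such a buffer is that it can be pooled into a fixed-size vector no matter how many entries it holds. Below is a rough, self-contained sketch of attention-style pooling; the scoring function and values are illustrative, not the toolkit’s actual Attention Module:

```python
import math

def attention_pool(query, buffer):
    """Pool a variable-length observation buffer into one fixed-size vector.

    query:  the agent's own state (list of floats)
    buffer: list of entity observations, each the same length as query;
            the buffer may hold any number of entries.
    """
    # Score each entity against the query (dot product), then softmax.
    scores = [sum(q * x for q, x in zip(query, obs)) for obs in buffer]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted sum: high-scoring (more relevant) entities dominate the output.
    return [sum(w * obs[i] for w, obs in zip(weights, buffer))
            for i in range(len(query))]

# Each projectile: [x, y, vx, vy]. The pooled vector has a fixed size
# regardless of how many projectiles are in the scene.
agent = [0.0, 0.0, 1.0, 0.0]
pooled = attention_pool(agent, [[1, 0, 2, 0], [0, 1, 0, 0], [3, 0, 1, 0]])
print(len(pooled))   # 4
```

In a trained network the query and per-entity embeddings are learned, so the weighting reflects which entities actually matter to the agent.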
The agent can then learn to ignore the projectiles that are not on a collision trajectory, and instead pay extra attention to the more dangerous ones. What’s more, agents can learn the importance of entities based on the relations across entities in the scene. For instance, if agents must learn how to sort tiles in ascending order, they will be able to figure out which tile is the next correct one based on the information of all the other tiles. This new environment, dubbed Sorter, is now available as part of our example environments that you can download and use to get started. Read through our documentation for details on how to implement variable-length observations in your project.

Video game characters often encounter several tasks across different game modes. One way to approach this challenge is to train multiple behaviors separately and then swap between them. However, it is preferable to train a single model that can complete multiple tasks: a single model lowers the memory footprint in the final game, and can shorten overall training time since the model can reuse parts of the neural network across multiple tasks. To this end, we added the ability for a single model to encode multiple behaviors using HyperNetworks in our latest release (Release 17). In practice, we use a new type of observation called a “goal signal,” along with a small neural network called a “HyperNetwork,” to generate some of the weights of another, bigger neural network. This bigger network is the one that drives the agent’s behavior; the HyperNetwork lets that network take on different weights depending on the goal of the agent, while maintaining some shared pieces across goals when necessary.

The following video shows an agent solving two tasks from the ML-Agents examples (WallJump and PushBlock) at the same time. If the bottom color is green, the agent must push the block into the green zone.
But if the top-right square is green, the agent must jump over the wall onto the green square. Read through our documentation for details on how to implement task parameterization using goal signals in your project.

In November 2020, we wrote about how Eidos developed a new type of sensor in ML-Agents called the Grid Sensor. This Grid Sensor implementation was added to our extensions package at the time, before we iterated on the implementation and promoted it to this latest release of the main ML-Agents package. In Release 10 (November 2020) of ML-Agents, we introduced a new Match-3 environment and added utilities to our extensions package to enable the training of Match-3 games. We’ve since partnered with Code Monkey to release a tutorial video. As with the Grid Sensor, we made our utilities for training Match-3 games part of the core ML-Agents package in our latest release.

Our goal is to keep improving ML-Agents. After hearing your feedback on the amount of memory allocated during inference, we promptly made significant allocation reductions. The table below compares the garbage collection metrics (kilobytes per Academy step) in two of our example scenes between versions 1.0 (released May 2020) and 2.0 (released April 2021). These metrics exclude the memory used by Barracuda, the Unity Inference Engine that ML-Agents relies on for cross-platform inference.

In our v1.0 blog post, we first shared some details on ML-Agents Cloud. Our ML-Agents Cloud service lets you kick off multiple training sessions that run on our cloud infrastructure in parallel, so you can complete your experimentation in a timely manner. Today, ML-Agents Cloud’s core functionality gives you the ability to:

- Upload your game builds with ML-Agents implemented (C#).
- Start, pause, resume, and stop training experiments. You can launch multiple experiments at the same time and leverage high-end machines to spawn many concurrent Unity instances for each training experiment – all with faster completion times.
- Download results from multiple training experiments.

Throughout the rest of 2021, we plan to accelerate the development of ML-Agents Cloud based on Alpha customer feedback. Additional functionality will focus on the ability to visualize your results, manage your experiments from a web UI, and harness hyperparameter tuning. In fact, we are still accepting applicants to the Alpha program today. If you are interested in signing up, please register here.

In this post, we outlined three core improvements that move ML-Agents closer toward supporting complex cooperative games. We demonstrated each of these improvements in isolation, and also discussed the sample environments recently added to the toolkit. What we did not yet reveal is another upcoming showcase environment called DodgeBall - a team-versus-team game that highlights the way all three features work together. Agents must reason in complex environments to solve multiple modes, cooperate with teammates, and observe varying entities in a scene. We plan to release this environment in the coming weeks, alongside another dedicated blog post. Until then, check out this sneak peek of our agents training to play DodgeBall.

On behalf of the entire Unity ML-Agents team, we want to thank you all for joining us on this journey with ongoing support over the years. If you’d like to work on this exciting intersection of machine learning and games, we are hiring for several positions and encourage you to apply here. Finally, we would love to hear your feedback! For any feedback regarding the Unity ML-Agents Toolkit, please fill out the following survey or email us directly at ml-agents@unity3d.com. If you encounter any issues, do not hesitate to reach out to us on the ML-Agents GitHub issues page.
For any other general comments or questions, please let us know on the Unity ML-Agents forums.

>access_file_
1320|blog.unity.com

Mortenson releases Unity on Climate Pledge Arena

Mortenson used Unity to create a virtual reality (VR) walkthrough of Climate Pledge Arena, the new home of the Seattle Kraken. The immersive and collaborative virtual environment enables project, sales and marketing, and ownership teams to review designs and give tours before the arena is built.

U.S.-based Mortenson is a top-20 builder, developer, and provider of energy and engineering services. Its Seattle office has been building complex projects in the Northwest for over 36 years. In 2018, Mortenson was named the new general contractor on a KeyArena redevelopment project for the Seattle Kraken, the National Hockey League’s (NHL) 32nd franchise, set to play its inaugural season in 2021. Recently renamed Climate Pledge Arena, this extensive $1 billion project will reposition the venue as the premier sports and entertainment destination in the Pacific Northwest. The arena is aiming to be the first International Living Future Institute net Zero Carbon certified professional sports venue in the world. The institute’s net Zero Carbon certification requires that 100% of the energy used to operate the building be offset by new renewable energy. In addition, all of the embodied carbon emissions associated with the construction and materials of the project must be disclosed and offset.

Mortenson’s experience as one of the nation’s largest sports and entertainment builders - recently completing the Las Vegas Raiders’ Allegiant Stadium and the Golden State Warriors’ Chase Center - taught them the importance of ensuring ownership groups develop clear conceptual knowledge of the physical realities of the design direction.
When Oak View Group, the majority owner of Climate Pledge Arena, asked Mortenson to provide a walkthrough of premium spaces near the end of conceptual design, they turned to Unity to create an immersive VR experience.

Mortenson saw the value of VR early and created a Virtual Insights team to integrate visualization technologies into its design and customer experience. This made them experts in using Unity to deliver interactive VR and 360 video experiences for a wide variety of use cases, such as design reviews and sales and marketing initiatives for construction and healthcare clients.

“The AEC industry has been experimenting with the use of VR for well over 20 years. It has successfully provided value and ROI. However, for many projects, the form factor, technical requirements, and user friction of delivering walkthroughs prevented it from realizing its potential as an accessible and broadly used communication and collaboration tool,” says Will Adams, VR Developer at Mortenson. “Now, the new generation of real-time 3D development platforms and standalone, six-degrees-of-freedom (6DOF) headsets are ushering in a new era of VR utility in the AEC industry.”

This new era signals a paradigm shift for the AEC industry’s use of immersive VR. It’s no longer a technical tool that companies are curious about from an individual standpoint. It’s an invaluable tool that empowers customers and development teams to collaborate in real-time 3D and speed up design reviews through interactive, at-scale VR experiences.

“Various hardware and software improvements are reducing friction for users and facilitating comfort for longer periods of use,” says Adams. “These improvements have resulted in groups of people remaining immersed together for longer than previously possible. Users are experiencing environments intuitively, as if they were walking together, discussing issues and features naturally.
It played a pivotal role in creating Climate Pledge Arena.”

The renovation has been likened to “building a ship in a bottle,” as crews lifted and suspended a 44-million-pound roof over the project site while the arena’s footprint was expanded underground and the walls were rebuilt. The complexity of the project meant Mortenson had to rely on intense team collaboration, top-down construction, and digital tools to simulate the built environment. When bringing the Climate Pledge Arena model into VR, Mortenson focused on the club level, suites, press bridge, arena bowl, structure, and atrium. Mortenson needed to port different design models from partners, along with building information modeling (BIM) data, to build the arena in Unity. The structural, precast stadia, walls, and other data were ported from Autodesk Revit. The renowned architectural firm Rockwell Group provided Mortenson with assets from Rhino and Autodesk 3ds Max for the high-design spaces, including the club level and suites.

After creating the model in Unity and assessing project needs, Mortenson decided to develop the environment for the Oculus Quest headset. “We chose the Oculus Quest for greater usability and flexibility. Because it’s easy to set up and maintain each headset, we can smoothly support 10 or more people in the environment simultaneously,” says Adams.

Mortenson’s research has shown that the key to a successful VR environment is to reduce friction. An overly complicated, hard-to-learn environment breaks immersion and the willingness of the user to engage. It’s important to create an inviting and accessible experience for all users. Mortenson implemented a wide variety of features to create the best virtual environment possible. “We found that the use of ‘teleportation’ for primary navigation creates a psychological disconnect between users and the environment.
It can reduce their ability to interact with other users socially and cause vertigo or motion sickness,” says Adams.

To combat this, Mortenson pushed the device’s limits for positional tracking by leveraging a large 40-foot by 100-foot open space at its project office. This created a virtual environment where walking and moving naturally in the space is the primary mode of navigation. Mortenson took it a step further by aligning the virtual environment with the physical walkthrough space. Through code development, networking, and a simple alignment process, Mortenson could ensure that users’ avatars are in the same place as their physical bodies. “It’s extremely important for users to interact normally in conversation, as it allows them to feel each other’s presence through an accurate virtual representation of their position with respect to their voice,” says Adams.

The Unity VR walkthrough has been a valuable tool for both the project team and the ownership group. Given the customers’ focus on creating world-class live sports and entertainment experiences, it was critical for Mortenson to support a smooth and successful arena operation. The immersive VR environment enabled the arena’s operations personnel to become familiar with the space before it even opens, helping ensure a spectacular experience for fans. “We facilitated VR walkthroughs for more than 100 people with an average time-in-environment of well over one hour per person,” says Adams. “Ken Johnson, Project Executive at Oak View Group, has spent well over 10 hours in the environment. He’s become our best arena tour guide and has led tours for more than 30 people.”

After experiencing the power and realism of the virtual environment, Oak View Group wanted a version of the environment for the sales team. Additional modeling was done in 3ds Max and Unity to create a high-fidelity version that the Seattle Kraken sales center could use to show customers and fans.
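The physical-to-virtual alignment described above - keeping each avatar where its user’s body actually is - amounts to a rigid transform between the tracked floor space and the model. A hypothetical 2D sketch from two matched reference points (this is not Mortenson’s actual pipeline, just an illustration of the idea):

```python
import math

def rigid_align(p_phys, q_phys, p_virt, q_virt):
    """Return a function mapping physical floor coordinates onto the virtual
    model: a rotation plus translation fitted from two matched reference
    points (2D, no scaling). Illustrative only."""
    # Angle between the physical and virtual reference directions.
    ang = (math.atan2(q_virt[1] - p_virt[1], q_virt[0] - p_virt[0])
           - math.atan2(q_phys[1] - p_phys[1], q_phys[0] - p_phys[0]))
    c, s = math.cos(ang), math.sin(ang)

    def to_virtual(pt):
        # Rotate about the physical anchor, then translate onto the virtual one.
        dx, dy = pt[0] - p_phys[0], pt[1] - p_phys[1]
        return (p_virt[0] + c * dx - s * dy,
                p_virt[1] + s * dx + c * dy)
    return to_virtual

# Physical room axis runs along +x; the virtual model's axis runs along +y.
f = rigid_align((0, 0), (1, 0), (10, 10), (10, 11))
x, y = f((2, 0))
print(round(x, 6), round(y, 6))   # 10.0 12.0
```

With headset tracking providing the physical coordinates each frame, the same transform keeps every avatar co-located with its user.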
Mortenson provided the virtual environment and six Oculus Quest headsets, and trained the team on how to run VR walkthroughs. It has been highly effective and is the highlight of their sales pitch.

---

Learn more about Unity for AEC and see why industry leaders are embracing real-time 3D technology to change the way buildings are designed, created, and operated.

>access_file_