// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 71 of 85

[ 2020 ]

20 entries
1403|blog.unity.com

Unity 2020 events update

In our Unite Now blog post a few weeks ago, we announced that Unity would not be hosting a physical event this year. Instead, Unite 2020 will be completely digital. Since that post, we’ve received a lot of questions, so we wanted to take a moment to bring everyone up to speed on the details we know today.

We are working through various creative solutions to deliver the content our Unity community loves, like speaker sessions and hands-on learning. We’ll also maintain some of the key ingredients of in-person Unite events, including our Expert Bar, “meet the devs,” and networking, and new experiences will be added. In true Unity fashion, we will collaborate with our partners and the community to help deliver this content.

Unity will still sponsor industry events in 2020 and support sponsorships with digital content and experiences, including our Unite Now platform. This includes things like Developer Days, sales hospitality meetings, VIP dinners, and thought-leadership events. We will continue to add smaller virtual events to meet these needs until we get back to our normal state of event programming.

We know there is no perfect replacement for in-person meetings, events, or experiences. We believe that by focusing on digital direct channels and engagement, we’ll be able to continue supporting communities and building rapport with industry events and organizations, our customers, and the community.

Check back often for updates. We look forward to staying connected with all of you.

>access_file_
1406|blog.unity.com

Learn to save memory usage by improving the way you use AssetBundles

Whether your application streams assets from a content delivery network (CDN) or packs them all into one big binary, you’ve probably heard of AssetBundles. An AssetBundle is a file that contains one or more serialized assets (Textures, Meshes, AudioClips, Shaders, etc.) and is loadable at runtime.

AssetBundles can be used directly or through systems like the Unity Addressable Asset System (aka Addressables). The Addressables system is a package that provides a more accessible and supported way to manage Assets within your projects. It is an abstraction on top of AssetBundles. While Addressables minimizes the direct interactions developers have with AssetBundles, it is helpful to understand how the usage of AssetBundles can affect memory usage. For an overview of the Addressables system, please refer to this blog post and this session from Unite Copenhagen 2019.

Developers working on new projects should consider using Addressables rather than working with AssetBundles directly. If you are working on a project with an already established AssetBundles approach, the information here about how AssetBundles affect runtime memory will help you get the best possible results.

When Unity downloads an LZMA AssetBundle using the WWW class (now deprecated) or UnityWebRequestAssetBundle (UWR), Unity optimizes the fetching, recompressing, and versioning of AssetBundles using two caches: the memory cache and the disk cache.

AssetBundles loaded into the memory cache consume a large amount of memory. Unless you specifically want to frequently and rapidly access the contents of an AssetBundle, the memory cache is probably not worth the memory cost. Instead, use the disk cache.

If you provide a version or a hash argument to the UnityWebRequestAssetBundle API, Unity stores your AssetBundle data in the disk cache. If you do not provide these arguments, Unity uses the memory cache. Note that Addressables uses the disk cache by default.
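As a concrete illustration of the disk-cache behavior described above, here is a minimal Unity C# sketch. The URL and version number are placeholder values, not from the article, and API names follow recent Unity scripting references:

```csharp
// Sketch: downloading an AssetBundle so Unity caches it on disk.
// The URL and version are hypothetical placeholders.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class CachedBundleLoader : MonoBehaviour
{
    IEnumerator Start()
    {
        const string url = "https://example.com/bundles/environment";

        // Supplying a version (or Hash128) argument routes the download
        // through the disk cache; omitting it would use the memory cache.
        using (UnityWebRequest uwr =
                   UnityWebRequestAssetBundle.GetAssetBundle(url, version: 1, crc: 0))
        {
            yield return uwr.SendWebRequest();

            if (uwr.result == UnityWebRequest.Result.Success)
            {
                AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(uwr);
                // ...instantiate assets from the bundle here...
                bundle.Unload(false); // release the bundle, keep loaded assets
            }
        }
    }
}
```

On subsequent runs with the same version number, Unity can serve the bundle from the disk cache instead of re-downloading it.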
Whether Addressables uses the disk cache can be controlled via its UseAssetBundleCache field.

AssetBundle.LoadFromFile() and AssetBundle.LoadFromFileAsync() always use the memory cache for LZMA AssetBundles. We therefore recommend using the UnityWebRequestAssetBundle API instead. If it is not feasible to use the UnityWebRequestAssetBundle API, you may use AssetBundle.RecompressAssetBundleAsync() to rewrite an LZMA AssetBundle on disk.

Internal testing shows that there is at least an order of magnitude difference in RAM between using the disk cache and using the memory cache. You must weigh the tradeoff in memory cost versus added disk space requirements and Asset instantiation time for your application.

To determine what effect the AssetBundle memory cache may have on your application’s memory usage, use a native profiler (our tool of choice is Xcode’s Allocations Instrument) to examine allocations from the ArchiveStorageConverter class. If this class is using more than 10 MB of RAM, you’re probably using the memory cache.

When building AssetBundles for large projects, do not assume that Unity will minimize the amount of duplicated information across them by default. To identify instances of duplicated data in the generated AssetBundles, you can use the handy AssetBundle Analyzer, written in Python by one of our colleagues in the Consulting & Development group. Used via the command line, the tool extracts information from generated AssetBundles and stores it in an SQLite database that features several useful views. You can then query the database using tools such as DB Browser for SQLite. This tool can help you find and resolve any inefficiencies in your build pipeline, whether you created bundles manually or via Addressables.

Alternatively, check out the AssetBundle Browser tool, which you can download and integrate into your project straight away.
Note that this tool provides similar functionality to Addressables, so if you are using Addressables, this tool is not relevant. The AssetBundle Browser tool allows you to view and edit the configuration of AssetBundles in a given Unity project and provides build functionality. It also provides some neat features, such as informing users about duplicated Assets, such as textures, that are being pulled in due to dependencies.

When deciding how to organize your Assets into AssetBundles, you need to be careful about dependencies. Regardless of your AssetBundle topology, Unity makes a distinction between Assets that live inside the application binary (in or involving a Resources folder) and those that you need to load from AssetBundles. You can think of these two types of Assets as living in different worlds. It is impossible to create an AssetBundle that has a hard reference to the instance of an Asset inside the Resources folder world. To reference those Assets, Unity instead makes a copy of the Assets that it uses in the AssetBundle world.

Take, for example, a game’s logo. The logo may be displayed in the UI of a loading scene while the game starts up. Because this loading screen must be shown before remote Assets are streamed to disk, you might include the logo Asset in the build so that it can be used immediately.

This same logo is also used on an options panel in the UI, where users can select their language, sound preferences, and other settings. If this UI panel is loaded from an AssetBundle, then that AssetBundle will create its own copy of the logo Asset. If both the loading screen and the options panel are loaded at the same time, both copies of the logo Asset will be loaded, a duplication that costs memory.

The solution to this problem is to break the hard link between one or both screens. If the logo lives in an AssetBundle, then some amount of streaming needs to occur before you can get a reference to the Asset.
If the logo lives in the binary (inside a Resources folder, for example), then the UI panel will need to hold a weak reference to the logo Asset and load it via an API such as Resources.Load. User scripting will then use the string to load the image at runtime and assign it to the proper component.

A happy middle ground may be to include the AssetBundle containing the logo Asset inside the application’s StreamingAssets directory. You will still load the Asset from the AssetBundle, but since you are hosting the bundle locally, you will not pay the cost in time required to download the content.

It is not intuitive to use strings, paths, or GUIDs to reference Assets, so you may want to create custom inspectors that enable Unity’s default drag-and-drop reference functionality on your weakly referenced fields. And don’t forget to use Unity’s Memory Profiler package to identify Assets that are duplicated in memory. Note that the Addressables system has its own mechanism for checking for duplicates in dependencies (for more information, see the documentation).

Even though the Addressables system provides an abstraction on top of AssetBundles, knowing how things work under the hood can help you avoid costly performance problems like the ones described in this article.

If you’re currently using Addressables, we want to hear from you via this short survey. We’re planning a roadmap for future entries of this series. Is there any area you’d like us to focus on? Leave a comment to let us know!
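To make the weak-reference pattern described above concrete, here is a hedged Unity C# sketch. The Resources path and component names are invented for illustration, not taken from the article:

```csharp
// Sketch of the weak-reference pattern: the panel stores a path string
// instead of a direct Sprite reference, so a bundle containing this panel
// never hard-links (and duplicates) the logo Asset. Names are placeholders.
using UnityEngine;
using UnityEngine.UI;

public class LogoPanel : MonoBehaviour
{
    // Weak reference: a path under a Resources folder, not a Sprite field.
    // A direct Sprite field here would pull a copy of the logo into the bundle.
    [SerializeField] private string logoResourcePath = "UI/CompanyLogo";

    [SerializeField] private Image logoImage;

    void OnEnable()
    {
        // Resolved at runtime from the binary's Resources data, so the
        // AssetBundle containing this panel carries no copy of the logo.
        var logo = Resources.Load<Sprite>(logoResourcePath);
        if (logo != null)
            logoImage.sprite = logo;
        else
            Debug.LogWarning($"Logo not found at Resources/{logoResourcePath}");
    }
}
```

A custom inspector, as suggested above, could let designers drag a Sprite onto the field while serializing only its Resources path.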

>access_file_
1408|blog.unity.com

How to build a winning creative team in your mobile game studio

As creative has taken up an increasingly central position in the gaming industry, I’m frequently asked by studios of all sizes and genres, “How would you approach building a great in-house creative team?” This is no easy task for studios trying to instill a strong creative marketing force in their ranks.

Having overseen the growth of an in-house creative team, I’ve slowly but surely consolidated my understanding of what environment, operations, and types of people can foster the production of winning creatives. Below, I’ll share my advice on how to hire the right people, and how to create the optimal culture and operational structure to get the results you want.

Non-negotiable #1: Curiosity

Operating in a fast-moving and aggressive market where a competitive edge is increasingly hard to find, speedy and constant ideation is particularly important for a gaming creative team. In other words, the more ideas you can test, the higher the chance of finding a winning creative.

To achieve this, our lead creative, Elad Gabison, says that “your creative team needs to be a sponge”, constantly researching creatives and absorbing concepts from other games, fuelling an ever-ticking ‘ideas radar’. Ultimately, this comes down to hiring curious people.

When hiring, a good barometer of a candidate’s curiosity is the number of questions they ask about your creative operation. If they simply accept what you say as it is and focus only on questions about their own position, like, “What will my day look like?”, rather than things like, “So why do playables perform better for casual games?”, that is a reliable sign that the candidate lacks the level of curiosity needed to thrive in a creative team.

Within the team, curiosity also means always asking “why”: why did this creative succeed, and why did this one fail?
In addition to thinking up new ideas, a team full of curious individuals will ask these questions frequently and seek to pinpoint the underlying reasons behind the performance of creatives.

Non-negotiable #2: Data-driven

Often creative teams, which generally consist of designers and animators, aren’t interested in the data, or don’t have access to the data-analysis tools that are usually owned by UA teams. However, in a market where tiny changes can bring huge gains in performance, paying close attention to the data is paramount, and it should drive every thinking process — including those of the designers and animators. In short, for a creative team to be successful, it needs to be made up of people with a love and understanding of numbers.

At Playworks, we analyze different dimensions of data, from standard high-level metrics like click-through rates and conversion rates, to in-ad data for interactive creatives that gives us an idea of which levels users like, which audiences preferred what, and which parts of the ad need to be optimized. This is the outcome of switching from a “luck”-based process of producing creatives and simply moving on to the next one if it fails, to a data-driven, analytical approach that guarantees constant learning. Achieving this relies on making the data accessible to your studio and providing the tools, support, and training to the team.

This isn’t to say you shouldn’t hire a designer who doesn’t live and breathe numbers. But you should look for someone who can appreciate numbers (or learn to appreciate them, as I have), and other sides of the business, which brings us to our next non-negotiable.

Non-negotiable #3: Multi-disciplinary

It’s important to hire individuals who either possess or have the potential to learn different skills on top of their existing area of expertise.
They don’t need to be experts in multiple fields, but having at least a solid grasp of their colleagues’ roles and being able to put themselves in others’ shoes can be highly beneficial. There are a few key reasons why this is a critical part of being at the forefront of a creative team today.

First, it increases the pace of production: each person can do more without relying on colleagues for everything outside their remit. Second, it fosters empathy, as team members will be more understanding of the challenges each other faces. This not only helps teams bond but leads to a culture of collaboration — after all, once a team member understands what obstacles a colleague is up against, they can begin to think of ways to assist.

Most importantly, hiring multi-disciplinary people helps create better products: team members are able to craft richer, more well-rounded experiences because they have a more holistic understanding of the process. At Playworks, for example, all of our developers learn game design so they aren’t just writing code, but also thinking about how to make the product as exciting an experience as possible for the end user.

Challenge each other, respectfully

I said earlier that ideas are the lifeblood of a creative team, but equally important is building an environment in which team members aren’t afraid to critique others’ ideas. It all comes down to creating trust and mutual respect within the team, which encourages everyone to avoid beating around the bush and always provide their peers with constructive criticism. An important part of this is creating a sense of personal responsibility and leadership: if someone thinks something can be done better, they will actually help execute this idea rather than just dumping it on the colleague in question.
Professional feedback should always come from a place of caring and a desire to help others succeed. In our experience, these methods lay the foundations for “creative ping pong”, a constant and free-flowing back and forth — and optimization — of ideas. Luckily for us, in the end we always have the data to show which was the better call, so a challenge is always welcome and encouraged.

Instill creativity at every level

Creativity is a mindset, not a tool or profession, and it must be practiced at every level. That’s why we instill an element of creativity throughout our operations, which can serve as a catalyst for the creative output of your team. There are a few ways we foster this environment.

First, in every meeting and presentation, we have “inside terms” to describe ads based on how they performed, like ‘zombie’ (a dead creative that suddenly performs better) and ‘boss’ (a high-performing one), which keeps even the heaviest of conversations a bit lighter.

We also run team-building initiatives every month that encourage communication and listening. Our art director, Giacomo, initiated a game of Blind Drawing during a daily meeting, where an employee draws the contour of a colleague without looking at the paper. And no birthday goes by without an embarrassing photoshopped photo of the birthday victim.

Build squads, not teams

So, you’ve assembled a group of superstars. Now, how do you organize them to make sure your creative output is as efficient and productive as possible? Here’s how we do things at Playworks.

Because we’re a fast-growing team, we believe operating as small, multi-functional squads is key to scaling and attacking multiple projects at the same time efficiently. In its first year, Playworks was organized into teams based on profession: developers, QA, game designers, analysts, and so on.
In our never-ending process of optimizing production, we decided to reshuffle the team into a squad-based organization. This structure is based on the “agile methodology” that helped Spotify become a giant in the music-streaming industry. After Spotify established itself as a successful company, it saw agile squads as the best approach to scale while maintaining high levels of creativity, productivity, and innovation — eliminating handovers, which cause delays, miscommunication, and, especially, loss of ownership.

Our squads at Playworks consist of a squad leader, a game designer, two developers, and one graphic designer. Each squad is independent, organizing and managing itself as it sees fit in order to reach solutions efficiently. This structure has enabled us to reduce the average time it takes to create a playable ad from 21 days to 5.7 days. Going back to my earlier point, having multi-disciplinary employees in squads is important to ensure strong collaboration, creativity, and support among members.

Nevertheless, while there is a squad leader, all members need to be leaders. Business consultant Les McKeown writes in his book Do / Lead, “it’s important to see that even when a group or team has formally designated ‘leaders’, those recognized leaders don’t have a monopoly over acts of leadership”. I try to instill this idea throughout Playworks’ team — the culture of collective responsibility is one way.

“We’ve lost the ability to see true leadership for what it really is: an almost always un-glorious, headline-free, mundane activity that takes place in every minute of every day” – Les McKeown

In an industry where automation is becoming more prevalent and differentiation more difficult, creative innovation — and the human element of this process — has taken a central position. In this context, building a thriving creative team is all the more important, and it is one of the last available levers for gaining an edge when it comes to game growth.
While there is no one-size-fits-all approach, I hope these tips will help your game company crack this challenge.

>access_file_
1409|blog.unity.com

2020 Call for Code: How creators can help tackle climate change and COVID-19

Unity is fortunate to be a part of a global community of creators and developers who have the skills and expertise to drive real change in our world with new technologies.

The Call for Code Global Challenge is the largest annual tech competition focused on open-source collaboration. It brings together developers from around the world to create cutting-edge, practical technology to help solve some of the toughest issues facing humanity and our environment. The challenge was launched in 2018 by David Clark Cause and IBM, in partnership with the United Nations and the Linux Foundation. Past winners created solar-powered mesh network devices that build connectivity where there is none, and a health monitoring platform for firefighters. This year’s challenge began on March 22 and is open to all developers to begin designing and developing their solutions.

When IBM asked our Unity for Humanity team to help with the 2020 challenge, we wholeheartedly agreed, because the initiative combines innovation with effective real-world solutions, part of the mandate of Unity for Humanity and part of Unity’s DNA. Our role was to work with other partners to develop starter kits for participants. These valuable tools allow developers to get a head start creating solutions on one of two fronts: climate change and COVID-19 mitigation.

In February, Scott Sewell, a software developer with Unity’s Innovation Group – Unity Labs, attended the Call for Code kickoff in Geneva to collaborate on the templates. Scott worked with programmers, data scientists, designers, and others from companies around the world. In teams of four, they worked alongside subject-matter experts from the United Nations to create starter kits for each of this year’s focus areas. The kits provide examples of what possible solutions might look like on a programming level.
This includes code that demonstrates how to integrate IBM cloud technologies like data analytics, artificial intelligence (AI), and the internet of things (IoT) into potential solutions.

For the first track, Call for Code is asking developers, data scientists, and other subject-matter experts to create solutions that help halt and reverse the effects of climate change in our world. This includes solutions that concentrate on water sustainability, energy sustainability, and disaster resiliency.

For the COVID-19 track, they’re looking for solutions that will arm developers, visionaries, and problem solvers with resources to build open-source technology that addresses three main areas: crisis communication during an emergency, ways to improve remote learning, and how to inspire cooperative local communities.

As always, innovation and community are key Unity pillars, and this challenge brings them together in a way that could create positive change for our planet. We hope that as participants design and produce what could be groundbreaking solutions for these two global issues, they will consider how our low-barrier-to-entry tools and wide-ranging technologies for XR, simulation, machine learning, and other capabilities can help them.

To get involved in this year’s challenge, please visit the 2020 Call for Code Global Challenge.

>access_file_
1410|blog.unity.com

How Daimler uses Unity across its automotive lifecycle

We invited Daniel Keßelheim and Sebastian Rigling of Daimler Protics to share the experiences they create with Unity as their development tool of choice. Learn how they implemented mixed reality across key stages of the lifecycle of Mercedes-Benz vehicles.

Daimler Protics shapes digital reality for Daimler AG, one of the world’s largest automakers. Its mixed reality team develops everything from proofs of concept to ready-to-use applications for Mercedes-Benz and other brands. Watch the Unite Copenhagen talk below to learn how Daimler Protics uses Unity to create a mixed reality pipeline connected to systems and Product Lifecycle Management (PLM) data, then deploy applications to multiple platforms, including Microsoft HoloLens, Oculus devices, and smartphones.

In their talk, Keßelheim and Rigling shared how Unity has provided a flexible development platform for everything from R&D to after-sales service. “For every problem we were confronted with, related to mobile mixed realities in automotive, we found a solution with Unity,” said Keßelheim. Let’s cover a few of the many ways they use Unity to create and deploy HoloLens applications at various stages of the automotive lifecycle.

Daimler Protics uses Unity for a variety of use cases in the production phase, from planning factory layouts (e.g., previsualizing machinery and architecture) to assembly training (e.g., training workers on how to assemble the cars). This section walks through a safety inspection use case: Daimler’s HoloLens application enables a safety inspection of a robotic laser welding system.

Automakers often use robotic laser welding to precisely and efficiently fuse various parts of the vehicle together. When Daimler’s robot cell is in operation, however, the space is closed off to prevent anyone from looking inside and losing their sight, making safety inspections difficult. The team developed an application that replays each robot’s logged movements on the HoloLens once a session is complete.
This application displays predefined safety spaces, so it’s easy to verify whether the robot’s movements have adhered to safety protocols.

Mercedes-Benz formed the EQ brand for its new fleet of electric vehicles. For the Mercedes-Benz EQC, the automaker’s first fully electric compact luxury SUV, the Daimler Protics team created a HoloLens experience to help drivers better understand the inner workings of an electric vehicle compared to the gas-powered versions they’re accustomed to. Designed for auto shows and dealership showrooms, the self-serve application guides users – the vast majority of whom have never used a mixed reality headset – showing them where to look and identifying various points of interest as they walk around the vehicle. Daimler uses Unity and the HoloLens to tell a rich, interactive story about the Mercedes-Benz EQC, including where the battery powering the vehicle is located and how it works and charges.

Daimler’s HoloLens application also trains technicians on a nine-gear transmission. Traditional training programs use cut-section models to instruct technicians on how to service an automotive transmission. While working on a full-scale physical model is helpful for understanding, the value of a cutaway version disconnected from the car as an educational tool is limited. Daimler Protics solved this dilemma using mixed reality. The application not only surfaces the transmission’s various hard-to-see components, it also makes it easy to replicate the experience of the running engine and visualize how it changes when shifting gears or braking.

---

Unity is the leading platform for creating content for AR and VR applications. Learn how to get started developing AR applications with Unity and try Unity Industrial Collection today. For more information about how Unity is used across the automotive lifecycle, check out this blog post and read our whitepaper.

>access_file_
1411|blog.unity.com

Creating safer construction projects with virtual reality

Laing O’Rourke, one of the largest privately owned construction companies in the UK, harnessed the power of Unity’s real-time 3D development platform to create an immersive virtual reality (VR) crane simulator to train its lift team before they ever set foot on a project.

Safety management and training is a top priority for the construction industry. Involving workers and providing ongoing access to safety training are top aspects of a world-class safety program, according to a Dodge Data & Analytics report. VR is one of the top technologies expected to improve safety in the next three years, and there are already companies leading the charge now.

At Unite Copenhagen, we invited Graham Brierley, Head of Digital Engineering at Laing O’Rourke, to share his firm’s experience with Unity and how VR is transforming the architecture, engineering, and construction (AEC) industry. Read on to learn how Laing O’Rourke used Unity to create an interactive VR training program to improve project safety and make the construction industry safer, more productive, and more sustainable.

Laing O’Rourke has a long reputation of investing in breakthrough technologies and harnessing innovation as a positive force for change. Several years ago, Laing O’Rourke realized it wasn’t getting what it needed out of its existing software. To solve this problem, the company tasked Brierley with finding the right solution. It was clear Brierley wasn’t going to find what he needed in construction, so he started searching outside of traditional software. That’s when Brierley found Unity.

To harness Unity’s capabilities, he hired two game developers with no construction experience. It was a risky move, but it was also the right one. “They brought a skill set that we were otherwise struggling with.
As engineers and technicians, we don’t necessarily have a background in developing, coding, and data,” said Brierley. Brierley’s team is now using Unity to develop VR and augmented reality (AR) applications to simplify, de-risk, and transform some of the most complex on-site construction activities.

Operating a crane on a project is a dangerous endeavor that requires constant verbal communication. The crane operator usually works in isolation and is unable to see the lift team on the project. Teams can also look different from one project to the next, and there is often more than one crane moving at a time. The members of the lift team must understand each other’s constraints and perspectives so that the lift is done safely.

Laing O’Rourke turned to Unity and its newly hired game developers to solve these safety issues. With Unity, Laing O’Rourke used VR to create an immersive environment to simulate crane operation and communication before workers ever set foot on a project. The crane simulator was developed from the crane operator’s perspective, placing them high in the air. The training connects multiple VR headsets to improve communication between the crane driver and the banksman, the person who directs the operation of a crane. In the training, crane drivers must follow the banksman’s instructions to control a virtual crane.

Laing O’Rourke took it a step further and incorporated the technology into its two-day training course. Delivered by certified trainers, the course is designed to equip the lift team with enhanced practical and theoretical knowledge to deal with more complex products and lift operations on busy construction sites. The training moved beyond just crane operator and banksman communication to simulate environments, challenges, and perspectives across the entire lift team.
With the VR crane simulator, Laing O’Rourke was able to improve communication and promote collaboration and shared learning across lift teams. Laing O’Rourke’s measure of success was getting positive feedback from its workers and making sure they were focusing on the right training points.

Creating the VR crane simulator in Unity was just the beginning. Laing O’Rourke has continued to create new custom applications with Unity to solve business problems and cement its status as a leader in innovation and excellence in the construction industry. Recently, Laing O’Rourke used VR to create temporary cofferdam inspection training for the Thames Tideway Tunnel. The tool uses a non-tethered VR headset and a marker tool on PC to let engineers host and run training exercises, engagements, and briefings. Driving a deeper understanding of risk improved the retention of important safety information. The Unity-made VR tool was shortlisted for a TechFest award for “Best Use of Technology: Health, Safety & Wellbeing Award.”

For its VR crane simulator, Laing O’Rourke also won best AEC project in the Unity Awards 2019. The company is currently working on implementing Unity Reflect into its workflow to create real-time BIM applications.

---

Learn more about the power of Unity for AEC.

>access_file_
1412|blog.unity.com

Best practices for bringing AR applications to the field

Learn how industrial giant ABB is using Unity and augmented reality to transform field maintenance procedures into a completely paperless process.

We recently invited members of ABB’s IS Innovation and Digital Scouting team, Maciej Włodarczyk and Rafał Kielar, to walk us through how they used Unity to develop a new digital field operator system. This multiplatform application runs on mobile devices and the HoloLens, and it’s designed to improve the efficiency and safety of the field operators maintaining and servicing equipment on industrial sites.

Learn more in their presentation on a recent Unity webinar, including their migration process to the HoloLens 2. In the webinar, Microsoft’s Mixed Reality Academy lead engineer Nick Klingensmith also shares how Microsoft’s new device will take AR-enabled training, guidance, and maintenance to the next level.

Watch the webinar

Let’s explore the problems ABB sought to solve for its clients and some of their key learnings from the development process.

For ABB’s customers, two key personnel are involved in this process: the field operators responsible for maintaining and servicing equipment, and the control room operators who supervise the process and are located in another part of the facility. Due to the dangerous nature of the tasks involved, field operators have traditionally undergone time-consuming, expensive training programs before working on-site. Once they are in the field, however, it is difficult to assign and track performed service procedures, which are done on paper. This leads to further communication issues between field and control room operators, who often need to exchange information in real time.

ABB used the Unity Editor and Microsoft’s Mixed Reality Toolkit (MRTK) to test prototypes quickly and eventually build a production-ready software application called ABB Ability™ Augmented Field Procedures.
This multiplatform application completely digitizes the field operator experience with remote-enabled augmented reality technologies. The application provides several advantages over the traditional, paper-based workflow. This system:

- Allows any field operator to follow digitized procedures and become an expert without costly training
- Integrates the field and distributed control systems to enable real-time data capture (versus processing paper-based forms afterward)
- Ensures that the latest version of a procedure is always followed (rather than using an outdated document)
- Connects field and control room operators for real-time communication using Microsoft Remote Assist

Based on their experiences, Włodarczyk and Kielar from ABB shared numerous best practices for those developing similar applications for training, guidance, and maintenance use cases. In this post, we focus on several best practices for the HoloLens application, centered on the user interface (UI) and the ergonomics of interactions. Check out the webinar for a complete list of ABB’s recommendations, including location/device recognition, hologram positions, and more.

As seen in the image above, the UI should not obscure the user’s view. UI elements that block the real-life objects the user needs to interact with can pose a safety hazard. To minimize clutter in the user’s field of view, allow navigation menus to be accessed on request rather than being omnipresent. In the video above, notice how the user controls the visibility of the menu with a gesture. In order to avoid blocking the field of view, some may think smaller menus and buttons make sense.
On the contrary, these should be large enough to be easily targeted by gaze and selected by gestures.

Włodarczyk and Kielar needed to make their app easier and more convenient to use than the paper-based Standard Operating Procedures (SOP) their clients were used to. That led them to automate as much of the experience as possible in order to limit the number of interactions the user needs to perform (e.g., having a window automatically appear following a gesture, as shown in the video above). It’s also important to provide clear instructions (e.g., the “tap to dock” message shown in the video) to ensure that the next step is always clear. Users should also be given the flexibility to select the interaction mode of their choice, such as voice commands, gaze, and gestures.

Since most field operators have limited experience with AR but will be the end users of these applications, it’s critical to share the app with a test group made up of these individuals. They will be a source of valuable feedback that will help you reduce complexity and streamline your app to its core components.

---

For more best practices from ABB, sign up to watch our on-demand webinar. You can also check out ABB’s presentation at Unite Copenhagen. Check out Unity Industry and learn how to get started developing XR applications with Unity.
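The “menus on request” guidance above can be sketched in a few lines of Unity C#. This is a simplified illustration, not ABB’s implementation: the `OnRequestMenu` class and `menuRoot` field are hypothetical names, and a plain tap/click check stands in for MRTK gesture recognition.

```csharp
using UnityEngine;

// Hypothetical sketch: show the navigation menu only when the user asks for it,
// rather than keeping it omnipresent in the field of view.
public class OnRequestMenu : MonoBehaviour
{
    [SerializeField] private GameObject menuRoot; // assign the menu's root object in the Inspector

    private void Start()
    {
        // Hidden by default so the UI does not obscure real equipment.
        menuRoot.SetActive(false);
    }

    private void Update()
    {
        // Stand-in for an MRTK gesture: any tap/click toggles the menu's visibility.
        if (Input.GetMouseButtonDown(0))
        {
            menuRoot.SetActive(!menuRoot.activeSelf);
        }
    }
}
```

In a real HoloLens app, the tap check would be replaced by an MRTK gesture or voice-command handler, but the on/off pattern is the same.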

>access_file_
1413|blog.unity.com

Training intelligent adversaries using self-play with ML-Agents

In the latest release of the ML-Agents Toolkit (v0.14), we have added a self-play feature that provides the capability to train competitive agents in adversarial games (as in zero-sum games, where one agent’s gain is exactly the other agent’s loss). In this blog post, we provide an overview of self-play and demonstrate how it enables stable and effective training on the Soccer demo environment in the ML-Agents Toolkit.

The Tennis and Soccer example environments of the Unity ML-Agents Toolkit pit agents against one another as adversaries. Training agents in this type of adversarial scenario can be quite challenging. In fact, in previous releases of the ML-Agents Toolkit, reliably training agents in these environments required significant reward engineering. In version 0.14, we have enabled users to train agents in games via reinforcement learning (RL) from self-play, a mechanism fundamental to a number of the highest-profile results in RL, such as OpenAI Five and DeepMind’s AlphaStar. Self-play uses the agent’s current and past ‘selves’ as opponents. This provides a naturally improving adversary against which our agent can gradually improve using traditional RL algorithms. The fully trained agent can be used as competition for advanced human players.

Self-play provides a learning environment analogous to how humans structure competition. For example, a human learning to play tennis would train against opponents of similar skill level, because an opponent that is too strong or too weak is not as conducive to learning the game. From the standpoint of improving one’s skills, it would be far more valuable for a beginner-level tennis player to compete against other beginners than, say, against a newborn child or Novak Djokovic. The former couldn’t return the ball, and the latter wouldn’t serve them a ball they could return.
When the beginner has achieved sufficient strength, they move on to the next tier of tournament play to compete with stronger opponents. In this blog post, we give some technical insight into the dynamics of self-play, as well as an overview of our Tennis and Soccer example environments, which have been refactored to showcase self-play.

The notion of self-play has a long history in the practice of building artificial agents to solve and compete with humans in games. One of the earliest uses of this mechanism was Arthur Samuel’s checker-playing system, which was developed in the ’50s and published in 1959. This system was a precursor to the seminal result in RL, Gerald Tesauro’s TD-Gammon, published in 1995. TD-Gammon used the temporal difference learning algorithm TD(λ) with self-play to train a backgammon agent that nearly rivaled human experts. In some cases, it was observed that TD-Gammon had a superior positional understanding to world-class players.

Self-play has been instrumental in a number of contemporary landmark results in RL. Notably, it facilitated the learning of superhuman Chess and Go agents, elite Dota 2 agents, as well as complex strategies and counter-strategies in games like wrestling and hide-and-seek. In results using self-play, the researchers often point out that the agents discover strategies that surprise human experts. Self-play in games imbues agents with a certain creativity, independent of that of the programmers. The agent is given just the rules of the game and told when it wins or loses. From these first principles, it is up to the agent to discover competent behavior.
In the words of the creator of TD-Gammon, this framework for learning is liberating “...in the sense that the program is not hindered by human biases or prejudices that may be erroneous or unreliable.” This freedom has led agents to uncover brilliant strategies that have changed the way human experts view certain games.

In a traditional RL problem, an agent tries to learn a behavior policy that maximizes some accumulated reward. The reward signal encodes an agent’s task, such as navigating to a goal state or collecting items. The agent’s behavior is subject to the constraints of the environment. For example, gravity, the presence of obstacles, and the relative influence of the agent’s own actions, such as applying force to move itself, are all environmental constraints. These limit the viable agent behaviors and are the environmental forces the agent must learn to deal with to obtain a high reward. That is, the agent contends with the dynamics of the environment so that it may visit the most rewarding sequences of states.

On the left is the typical RL scenario: an agent acts in the environment and receives the next state and a reward. On the right is the learning scenario wherein the agent competes with an adversary who, from the agent’s perspective, is effectively part of the environment.

In the case of adversarial games, the agent contends not only with the environment dynamics, but also with another (possibly intelligent) agent. You can think of the adversary as being embedded in the environment, since its actions directly influence the next state the agent observes as well as the reward it receives. Let’s consider the ML-Agents Tennis demo. The blue racquet (left) is the learning agent, and the purple racquet (right) is the adversary. To hit the ball over the net, the agent must consider the trajectory of the incoming ball and adjust its angle and speed accordingly to contend with gravity (the environment).
However, just getting the ball over the net is only half the battle when there is an adversary. A strong adversary may return a winning shot, causing the agent to lose. A weak adversary may hit the ball into the net. An equal adversary may return the ball, thereby continuing the game. In any case, the next state and reward are determined by both the environment and the adversary, even though in all three situations the agent hit the same shot. This makes learning in adversarial games and training competitive agent behaviors a difficult problem.

The considerations around an appropriate opponent are not trivial. As the preceding discussion demonstrates, the relative strength of the opponent has a significant impact on the outcome of an individual game. If an opponent is too strong, it may be too difficult for an agent starting from scratch to improve. On the other hand, if an opponent is too weak, an agent may learn to win, but the learned behavior may not be useful against a different or stronger opponent. Therefore, we need an opponent that is roughly equal in skill (challenging but not too challenging). Additionally, since our agent is improving with each new game, we need an opponent whose strength increases accordingly.

In self-play, a past snapshot or the current agent is the adversary embedded in the environment.

Self-play to the rescue! The agent itself satisfies both requirements for a fitting opponent. It is certainly roughly equal in skill (to itself) and also improves over time. In this case, it is the agent’s own policy that is embedded in the environment (see figure). For those familiar with curriculum learning, you can think of this as a naturally evolving curriculum (also referred to as an auto-curriculum) for training our agent against opponents of increasing strength.
Thus, self-play allows us to bootstrap an environment to train competitive agents for adversarial games!

In the following two subsections, we consider more technical aspects of training competitive agents, as well as some details surrounding the usage and implementation of self-play in the ML-Agents Toolkit. These two subsections may be skipped without loss to the main point of this blog post.

Some practical issues arise from the self-play framework: specifically, overfitting to defeat a particular playstyle, and instability in the training process that can arise from the non-stationarity of the transition function (i.e., the constantly shifting opponent). The former is an issue because we want our agents to be general competitors and robust to different types of opponents. To illustrate the latter, in the Tennis environment, a different opponent will return the ball at a different angle and speed. From the perspective of the learning agent, this means the same decisions will lead to different next states as training progresses. Traditional RL algorithms assume stationary transition functions. Unfortunately, by supplying the agent with a diverse set of opponents to address the former, we may exacerbate the latter if we are not careful.

To address this, we maintain a buffer of the agent’s past policies, from which we sample opponents against which the learner competes for a longer duration. By sampling from the agent’s past policies, the agent sees a diverse set of opponents. Furthermore, letting the agent train against a fixed opponent for a longer duration stabilizes the transition function and creates a more consistent learning environment. These algorithmic aspects can be managed with the hyperparameters discussed in the next section.

With self-play hyperparameter selection, the main consideration is the tradeoff between the skill level and generality of the final policy on the one hand, and the stability of learning on the other.
Training against a set of slowly changing or unchanging adversaries with low diversity results in a more stable learning process than training against a set of quickly changing adversaries with high diversity. The available hyperparameters control how often an agent’s current policy is saved to be used later as a sampled adversary, how often a new adversary is sampled, the number of opponents saved, and the probability of playing against the agent’s current self versus an opponent sampled from the pool. For usage guidelines for the available self-play hyperparameters, please see the self-play documentation in the ML-Agents GitHub repository.

In adversarial games, the cumulative environment reward may not be a meaningful metric by which to track learning progress, because the cumulative reward is entirely dependent on the skill of the opponent. An agent at a particular skill level will get more or less reward against a worse or better agent, respectively. We provide an implementation of the Elo rating system, a method for calculating the relative skill level between two players from a given population in a zero-sum game. In a given training run, this value should steadily increase. You can track it using TensorBoard along with other training metrics, e.g., cumulative reward.

In recent releases, we have not included an agent policy for our Soccer example environment because it could not be reliably trained. However, with self-play and some refactoring, we are now able to train non-trivial agent behaviors. The most significant change is the removal of “player positions” from the agents. Previously, there was an explicit goalie and striker, which we used to make the gameplay look reasonable. In the video below of the new environment, we actually notice role-like, cooperative behavior along these same lines of goalie and striker emerge. Now the agents learn to play these positions on their own!
The reward function for all four agents is defined as +1.0 for scoring a goal and -1.0 for getting scored on, with an additional per-timestep penalty of -0.0003 to encourage agents to score.

We emphasize that training agents in the Soccer environment led to cooperative behavior without an explicit multi-agent algorithm or assigned roles. This result shows that we can train complicated agent behaviors with simple algorithms as long as we take care in formulating our problem. The key to achieving this is that agents can observe their teammates: that is, agents receive information about their teammate’s relative position as observations. By making an aggressive play toward the ball, the agent implicitly communicates to its teammate that it should drop back on defense. Alternatively, by dropping back on defense, it signals to its teammate that it can move forward on offense. The video above shows the agents picking up on these cues as well as demonstrating general offensive and defensive positioning!

The self-play feature will enable you to train new and interesting adversarial behaviors in your game. If you do use the self-play feature, please let us know how it goes!

If you’d like to work on this exciting intersection of machine learning and games, we are hiring for several positions; please apply! If you use any of the features provided in this release, we’d love to hear from you. For any feedback regarding the Unity ML-Agents Toolkit, please fill out the following survey, and feel free to email us directly. If you encounter any bugs, please reach out to us on the ML-Agents GitHub issues page. For any general issues and questions, please reach out to us on the Unity ML-Agents forums.
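As a concrete illustration of the Elo rating tracked during training, here is a minimal sketch of a single rating update. This is not the ML-Agents implementation; the K-factor of 16 and the starting ratings of 1200 are assumptions for the example.

```csharp
using System;

// Hypothetical sketch of one Elo rating update between two players.
public static class EloExample
{
    const double K = 16.0; // assumed K-factor; controls how fast ratings move

    // Expected score of player A against player B (probability-like value in [0, 1]).
    static double Expected(double ratingA, double ratingB) =>
        1.0 / (1.0 + Math.Pow(10.0, (ratingB - ratingA) / 400.0));

    // result: 1.0 = A wins, 0.0 = A loses, 0.5 = draw.
    static double Update(double ratingA, double ratingB, double result) =>
        ratingA + K * (result - Expected(ratingA, ratingB));

    public static void Main()
    {
        double a = 1200.0, b = 1200.0;
        // A beats an equally rated B: expected score is 0.5, so A gains K * 0.5 = 8 points.
        Console.WriteLine(Update(a, b, 1.0)); // 1208
    }
}
```

Because a win against an equal opponent always moves the rating up, a steadily increasing Elo value indicates genuine improvement even when the cumulative reward against the (also improving) opponent stays flat.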

>access_file_
1414|blog.unity.com

50 Megatons of real-time collaboration: This student film made with Unity is getting noticed

How did a student film on a minimal budget find itself sharing the nominee spotlight with blockbuster productions like The Lion King and The Mandalorian at the VES Awards? By using an innovative, real-time workflow, just like the big studios.

For their diploma project, students from Film Academy Baden-Württemberg produced a live-action short film called Love & 50 Megatons. The final short was nominated for a Visual Effects Society Award for outstanding VFX in a student project. The love story explores themes of separation, nuclear destruction, and propaganda in a production showcasing a unique combination of retro and cutting-edge techniques that blends miniature models with live action. The crew used virtual production methods reminiscent of those used in The Lion King and other recent hits to immerse real actors in their 3D environment, working with Unity as their real-time production platform. In addition to the technical challenges Unity helped the students overcome under short delivery deadlines, they found that working with the real-time platform helped their team collaborate and iterate faster.

Denis Krez, the VFX supervisor and compositor on the film, says the team first built a highly detailed miniature set, then photo-scanned the miniatures and converted them into 3D models within the Unity real-time platform. Once on set, they projected the scanned sets on an array of large displays surrounding the stage and used live camera tracking to film the actors in front of the projections. According to Denis, “Our technical director, Paulo, developed an app in Unity to control the light, with color and focus, so we could adjust the digital background as needed in real-time to get as many shots done in-camera as possible.”

Since the on-set crew could see the world around them, they were immediately immersed in the world of Love & 50 Megatons.
Without green screens, and without having to imagine the backgrounds, the process was much smoother and more focused. The students found that the most important consideration in filmmaking is collaboration, and the courses of study at Film Academy ensured they got this experience first-hand.

“Filmmaking is a team effort,” says student Josephine Ross, VFX producer and producer on the film. “For projects of the scale of Love & 50 Megatons, the project depends on many different team members contributing their professional knowledge. Nobody is a specialist in everything, and to get the best out of a film, a team has to come together with people who not only are technically great but also work well together.”

The tools they choose can make a big difference in collaboration, too. “Working with a real-time production platform like Unity on set makes a lot of things easier – especially communication,” says Denis. “We can instantly present our thoughts to everybody involved and easily play around with new ideas.” The quick collaboration and communication extend beyond the concept stage and well into production, he says. “Occasionally, the actors approach me and ask for a quick view of the 3D scene to get a better feeling of the environment they’re supposed to be in.” In particular, both the director and the director of photography can work more effectively than on a green screen set, because they can see the final results in real time and make better decisions based on that instantaneous feedback.

“Unity was essentially our real-time viewport for virtual production,” says Paulo Scatena, the technical director. “It rendered our digitally reconstructed assets – the miniature set – by tracking the position of the camera, so the display would always act as a ‘window’ into the virtual environment. It was the engine that essentially allowed us to extend the set, real-time, in-camera.”

Access to instant feedback allowed the crew to expand its creativity in the moment.
“You instantly see what you get, and that’s especially valuable on a film set,” says Denis. And when you need to make changes, it’s crucial. According to Paulo, “Good filmmaking is often about making a lot of mistakes, fast, until you strike gold. So the rapid revision opportunity that comes with real-time is a big selling point for a real-time production platform like Unity.”

Paulo also says the team put a lot of effort into making iteration smooth and instant. “I built a remote control in Unity with TouchOSC for Denis to operate color grading and focus settings for the projection screens. On a traditional set, you’d need to call for the gaffer to come out and change the lights. We could do it with a slide of one finger!”

In fact, they never wrestled with whether to use traditional workflows instead of a real-time platform – their deadlines decided for them. “The workflow had to be real-time,” says Denis. “We never had a discussion about whether or not to use Unity.” As students starting out in the world of film, the crew had about as much familiarity with real-time tools as with traditional ones, so they felt comfortable jumping into Unity. “It doesn’t matter if you just need a quick previs of a scene you’re about to shoot or if you need a photorealistic background.” Paulo encourages every filmmaker to get their hands on a real-time production platform: “Any type of production would benefit. It’s easy, it works in the budget, it’s the future of filmmaking.”

Watch the Making Of Love & 50 Megatons, made with Unity, and learn more about the entire production process on the Love & 50 Megatons website.

>access_file_
1415|blog.unity.com

How on-demand rendering can improve mobile performance

It’s not always desirable to render a project at the highest frame rate possible, for a variety of reasons, especially on mobile platforms. Historically, Unity developers have used Application.targetFrameRate or the Vsync count to throttle the rendering speed of Unity. This approach impacts not just rendering but the frequency at which every part of Unity runs. The new on-demand rendering API allows you to decouple the rendering frequency from the player loop frequency.

On-demand rendering allows you to skip rendering frames while still running the rest of the player loop at a high frequency. This can be especially useful on mobile: bypassing rendering can bring significant performance and power savings while still allowing the application to respond to touch events.

Here are some example scenarios where you may want to lower the frame rate:

- Menus (e.g., the application entry point or a pause menu): Menus tend to be relatively simple Scenes and as such do not need to render at full speed. If you render menus at a lower frame rate, you will still receive input during frames that are not rendered, allowing you to reduce power consumption and keep the device temperature from rising to a point where the CPU frequency may be throttled, while keeping UI interaction smooth.
- Turn-based games (e.g., chess): Turn-based games have periods of low activity when users think about their next move or wait for other users to make their move. During such times, you can lower the frame rate to prevent unnecessary power usage and prolong battery life.
- Static content: You can lower the frame rate in applications where the content is static for much of the time, such as an automotive user interface (UI).
- Performance management: If you want to manage power usage and device thermals to maximize battery life and prevent CPU throttling, particularly if you are using the Adaptive Performance package, you can adjust the rendering speed.
- Machine learning or AI applications: Reducing the amount of work the CPU devotes to rendering may give you a performance boost for the heavy processing that is the central focus of your application.

Everywhere! On-demand rendering works on Unity 2019.3 with every supported platform (see the system requirements) and rendering API (built-in render pipeline, Universal Render Pipeline, and High Definition Render Pipeline).

The on-demand rendering API consists of only three properties in the namespace UnityEngine.Rendering:

1. OnDemandRendering.renderFrameInterval: This is the most important part. It allows you to get or set the render frame interval, which is a dividing factor of Application.targetFrameRate or QualitySettings.vSyncCount, to define the new frame rate. For example, if you set Application.targetFrameRate to 60 and OnDemandRendering.renderFrameInterval to 2, only every other frame will render, yielding a frame rate of 30 fps.

2. OnDemandRendering.effectiveFrameRate: This property gives you an estimate of the frame rate that your application will render at. The estimate is determined using the values of OnDemandRendering.renderFrameInterval, Application.targetFrameRate, QualitySettings.vSyncCount, and the display refresh rate. Bear in mind that this is an estimate, not a guarantee; your application may render more slowly if the CPU is bogged down by other work such as scripts, physics, or networking.

3. OnDemandRendering.willThisFrameRender: This simply tells you whether the current frame will be rendered to the screen. You can use non-rendered frames to do additional CPU-intensive work, such as heavy math operations, loading assets, or spawning prefabs.

Even though frames will not be rendered as often, events will be sent to scripts at the normal pace. This means that you may receive input during a frame that is not rendered. To prevent the appearance of input lag, we recommend that you set OnDemandRendering.renderFrameInterval = 1 for the duration of the input to keep buttons, movement, and so on responsive.

Situations that are very heavy on scripting, physics, animation, etc., but not on rendering, will not benefit from using on-demand rendering. The results may appear choppy, with a negligible reduction in CPU and power usage.

Here is a simple example showing how on-demand rendering could be used in a menu to render at 20 fps unless there is input. There is also an example project demonstrating how on-demand rendering can be used in a variety of situations.

Let us know in the forums how on-demand rendering is working for you. We’ve tested it on Windows, macOS, WebGL, iOS, and Android, both in the Unity Editor and with Standalone players, but we’re always open to more feedback.
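A minimal sketch of the menu scenario described in this post, assuming a target frame rate of 60: a renderFrameInterval of 3 yields the 20 fps mentioned, and the interval returns to 1 while input is detected. The MenuFrameThrottle class name is hypothetical; the properties used are the ones described above.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Illustrative sketch: render a menu Scene at 20 fps, but restore the full
// frame rate whenever the user is providing input, so the UI stays responsive.
public class MenuFrameThrottle : MonoBehaviour
{
    void Start()
    {
        Application.targetFrameRate = 60;
    }

    void Update()
    {
        if (Input.anyKey || Input.touchCount > 0)
        {
            // Render every frame while there is input to avoid perceived lag.
            OnDemandRendering.renderFrameInterval = 1;
        }
        else
        {
            // Render every third frame: 60 / 3 = 20 fps.
            OnDemandRendering.renderFrameInterval = 3;
        }
    }
}
```

Note that Update still runs at the full player-loop frequency on the skipped frames, which is why the input check above keeps working even while rendering is throttled.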

>access_file_
1416|blog.unity.com

Mobile game KPIs, analytics & metrics every developer should be paying attention to

We’re living in a data-driven world, and today, measuring the right metrics is vital to a game developer’s success. Mobile game metrics provide game developers with key insights into their users’ behavior, offering a look into how users interact with the game. Simply put, in-game metrics let developers track and assess the success of their game. With access to usage, engagement, and business metrics, game developers have the information they need to improve their app or game and optimize it accordingly.

Top 5 mobile game KPIs

1. Retention rate is the percentage of users who continue engaging with your game over time and is typically measured 1 day, 7 days, and 30 days after users first install the game. This metric helps developers understand exactly where within the game’s life cycle users begin dropping off.
2. Lifetime value (LTV) estimates the revenue a single user generates through their entire lifecycle within a game. It’s a prediction of a user’s monetary value over time. LTV tells developers how good a job they’re doing monetizing and retaining users.
3. Daily active users (DAU) is the total number of users who visit a game on a given day.
4. Average revenue per daily active user (ARPDAU) helps game developers understand how well their monetization strategy is working, whether it’s from ads, IAPs, or a mix of the two. ARPDAU shows how changes or events affect game revenue.
5. Effective cost per mille (eCPM) is the ad revenue generated per 1,000 ad impressions. While most game developers think eCPM is strictly for the monetization side, this figure also represents buying power when it comes to acquiring new users.

User and usage metrics

User and usage metrics tell developers crucial information about their audiences, ultimately illustrating which sort of users your game appeals to. Developers can use this information to optimize and localize their game for a better user experience, as well as their user acquisition campaigns.
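To make the five KPIs above concrete, here is a small worked sketch of how they are typically computed. All figures are made-up sample numbers, not benchmarks, and the LTV line uses the common simplification of ARPDAU multiplied by an assumed average user lifetime.

```csharp
using System;

// Hypothetical one-day snapshot for a mobile game, used to compute the KPIs above.
public static class KpiExample
{
    public static void Main()
    {
        double dau = 50_000;            // daily active users
        double dailyRevenue = 1_250.0;  // total revenue (ads + IAP), USD
        double adRevenue = 800.0;       // ad revenue only, USD
        double adImpressions = 400_000; // ads served today
        double day7Installs = 10_000;   // users who installed 7 days ago
        double day7Active = 1_800;      // of those, still active today
        double avgLifetimeDays = 14.0;  // assumed average user lifetime

        double retentionD7 = day7Active / day7Installs;   // share of an install cohort still playing
        double arpdau = dailyRevenue / dau;               // revenue per daily active user
        double ecpm = adRevenue / adImpressions * 1000.0; // ad revenue per 1,000 impressions
        double ltv = arpdau * avgLifetimeDays;            // rough per-user lifetime value estimate

        Console.WriteLine($"D7 retention: {retentionD7:P1}"); // 18.0%
        Console.WriteLine($"ARPDAU: {arpdau:F4}");            // 0.0250
        Console.WriteLine($"eCPM: {ecpm:F2}");                // 2.00
        Console.WriteLine($"LTV: {ltv:F3}");                  // 0.350
    }
}
```

The LTV estimate is what makes eCPM double as "buying power": as long as the cost to acquire a user stays below their predicted LTV, user acquisition spend is profitable.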
Some examples include DAU, MAU, device/OS, and segmentation.

Engagement metrics

Engagement metrics give developers insight into user behavior and how users are engaging and interacting within a game. Once developers understand users’ interaction with their game, they’ll be able to improve functionality, retain a loyal user base, and optimize their monetization strategy. Session duration, retention rate, churn rate, and app rating are some of the metrics developers can use to determine engagement levels.

Business metrics

Business metrics ensure that developers are able to keep track of their monetization and user acquisition strategies, and provide granular data that informs business growth. The first step to examining business health is marketability. Marketability measures product-market fit, answering the question of whether or not there is a user base that will actually download your game. Once you’ve assessed marketability, consider other metrics like eCPM, ARPU, and LTV.

Looking for more tips on boosting monetization and user acquisition efforts? Check out our guide to in-game advertising.

Changing the game with ironSource reports

Below we walk through some of the most important and helpful reporting features ironSource offers mobile game developers.

Cohort reporting

A cohort is a group of users who perform a certain sequence of events within a particular time frame. Cohorts allow developers to analyze the behavior of groups of users as time progresses, giving them a complete picture of the lifecycle of their game. With cohort reporting, developers can easily track metrics like retention, revenue per user, and user engagement from the moment the user downloaded and opened the app within the defined date range.
Cohort reporting is equally as useful for both monetization and marketing efforts.How we helped Homa Games with cohort reportingHyper-casual publisher Homa Games didn’t have the technological resources to build their own reporting dashboard and were on the lookout for a partner that could provide robust and actionable data that would give them insight into their monetization performance.Once Homa Games began utilizing ironSource’s cohort reports, they were able to instantly see the impact of the changes made to game design or ad monetization strategy. ironSource’s cohort reports immediately alerted Homa Games about a sudden drop in ARPU. Homa Games realized that some levels they had recently integrated into their game were too difficult, causing users from new cohorts to leave the game after only a few minutes. Thanks to the cohort report, they caught the problem quickly and implemented a solution.Homa Games also uses the ARPU calculation within the cohort reports to calculate the perfect bid in order to improve their UA efforts. The result? A 9% increase in ROI.User activity reportsUnderstanding user behavior is key to optimizing your monetization and marketing strategies. By utilizing user activity reports, you get access to valuable information and analytics that breakdown user activity and ad engagement through advanced metrics. In one report, you can see DAU, ARPDAU, and sessions/DAU for each game, ad unit, country, and ad source. Learn more about user activity reports here. Mobile game KPIs, analytics & metrics every developer should be paying attention toWe’re living in a data-driven world, and today, measuring the right metrics are vital to a game developer’s success. Mobile game metrics provide game developers with key insights into their users’ behavior, offering a look into how users interact with the game. Simply put, in-game metrics let developers track and assess the success of their game. 
With access to usage, engagement, and business metrics, game developers have the information they need to improve their app or game and optimize it accordingly.

Top 5 mobile game KPIs

1. Retention rate is the percentage of users who continue engaging with your game over time and is typically measured at 30 days, 7 days, and 1 day after users first install the game. This metric helps developers understand exactly where within the game's life cycle users begin dropping off.
2. Lifetime Value (LTV) estimates the revenue a single user generates through their entire lifecycle within a game. It's a prediction of a user's monetary value over time. LTV tells developers how good a job they're doing monetizing and retaining users.
3. Daily Active Users (DAU) reveals the total number of users who visit a game on a daily basis.
4. Average revenue per daily active user (ARPDAU) helps game developers understand how well their monetization strategy is working, whether it's from ads, IAPs, or a mix of the two. ARPDAU shows how changes or events affect game revenue.
5. Effective cost per mille (eCPM) is the ad revenue generated per 1,000 ad impressions. While most game developers think of eCPM as strictly a monetization-side figure, it also represents buying power when it comes to acquiring new users.

User and usage metrics

User and usage metrics tell developers crucial information about their audiences, ultimately illustrating which sort of users your game appeals to. Developers can use this information to optimize and localize their game for a better user experience, as well as their user acquisition campaigns. Some examples include: DAU, MAU, Device/OS, Segmentation.

Engagement metrics

Engagement metrics give developers insight into user behavior and how they're engaging and interacting within a game. 
Once developers understand users' interaction with their game, they'll be able to improve functionality, retain a loyal user base, and optimize the monetization strategy. Session duration, retention rate, churn rate, and app rating are some of the metrics developers can use to determine engagement levels.

Business metrics

Business metrics ensure that developers are able to keep track of their monetization and user acquisition strategies, and provide granular data that informs business growth. The first step to examining business health is marketability. Marketability measures product-market fit, answering the question of whether or not there is a user base to actually download your game. Once you've assessed marketability, consider other metrics like eCPM, ARPU, and LTV. Looking for more tips on boosting monetization and user acquisition efforts? Check out our guide to in-game advertising.

Changing the game with ironSource reports

Below we walk through some of the most important and helpful reporting features ironSource offers mobile game developers.

Cohort reporting

A cohort is a group of users that perform a certain sequence of events within a particular time frame. Cohorts allow developers to analyze the behavior of groups of users as time progresses, giving them a complete picture of the lifecycle of their game. With cohort reporting, developers can easily track metrics like retention, revenue per user, and user engagement from the moment the user downloaded and opened the app within the defined date range. Cohort reporting is equally useful for both monetization and marketing efforts. Looking to implement cohort reporting? 
Head to our developer center for more info.

How we helped Homa Games with cohort reporting

Hyper-casual publisher Homa Games didn't have the technological resources to build their own reporting dashboard and were on the lookout for a partner that could provide robust, actionable data offering insight into their monetization performance. Once Homa Games began using ironSource's cohort reports, they were able to instantly see the impact of changes made to their game design or ad monetization strategy. ironSource's cohort reports immediately alerted Homa Games to a sudden drop in ARPU. Homa Games realized that some levels they had recently integrated into their game were too difficult, causing users from new cohorts to leave the game after only a few minutes. Thanks to the cohort report, they caught the problem quickly and implemented a solution. Homa Games also uses the ARPU calculation within the cohort reports to calculate the optimal bid for their UA efforts. The result? A 9% increase in ROI.

User activity reports

Understanding user behavior is key to optimizing your monetization and marketing strategies. User activity reports give you access to valuable information and analytics that break down user activity and ad engagement through advanced metrics. In one report, you can see DAU, ARPDAU, and sessions/DAU for each game, ad unit, country, and ad source. Learn more about user activity reports here. Mobile game publisher Random Logic utilized user activity reports to analyze ARPDAU and engagement rates, remarking, "it's a great report to know if your current strategy or testing is effective."

We're living in a data-driven world

Without access to data, developers today would be unable to track and understand their user base. From engagement metrics to business metrics, and just about everything in between, developers today have access to a wealth of data that will ensure they make smarter business decisions.
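The metrics this article keeps returning to — retention rate, ARPDAU, eCPM, and LTV — all reduce to simple ratios over event data. The sketch below is a minimal illustration under assumed inputs; the function names, field names, and sample figures are hypothetical and are not taken from ironSource's reporting API:

```python
# Hypothetical illustration of the KPIs discussed above; all names and
# sample figures are made up, not taken from any real reporting API.

def retention_rate(installs: int, active_on_day_n: int) -> float:
    """Share of an install cohort still active N days after install."""
    return active_on_day_n / installs

def arpdau(daily_revenue: float, dau: int) -> float:
    """Average revenue per daily active user."""
    return daily_revenue / dau

def ecpm(ad_revenue: float, impressions: int) -> float:
    """Effective cost per mille: ad revenue per 1,000 impressions."""
    return ad_revenue / impressions * 1000

def ltv_estimate(arpdau_value: float, avg_lifetime_days: float) -> float:
    """Naive LTV: daily revenue per user times expected lifetime.
    Real LTV models are far more sophisticated (retention curves, etc.)."""
    return arpdau_value * avg_lifetime_days

# Toy example: 10,000 installs, 3,500 of them still active on day 7.
print(f"D7 retention: {retention_rate(10_000, 3_500):.0%}")   # 35%
print(f"ARPDAU: ${arpdau(1_200.0, 8_000):.3f}")               # $0.150
print(f"eCPM: ${ecpm(450.0, 30_000):.2f}")                    # $15.00
print(f"LTV estimate: ${ltv_estimate(0.15, 14):.2f}")         # $2.10
```

The same per-cohort bookkeeping — grouping these ratios by install date and watching them as each cohort ages — is what a cohort report automates.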

>access_file_
1417|blog.unity.com

Digging into Terrain Paint Holes in Unity 2019.3

Unity 2019.3 continues to bring more exciting updates to Unity's Terrain system, including – by popular demand – the ability to create holes in your Terrain! Using the new Paint Holes brush tool, you can mask out areas in the mesh of your Terrain Tiles, and even manipulate these masks through your code. This makes it easier than ever to add terrain characteristics like holes, portals, or even caves by taking advantage of in-Editor tools like ProBuilder, ProGrids, and Polybrush. Let's take a look at how we can create a simple cave using this process.

Place a new Terrain Tile in your Scene and create a rough mountain shape. If you haven't tried our latest Terrain Tools preview package, check out this excellent primer along with this guide on Terrain material painting. From the Terrain Tools drop-down menu, select the Paint Holes brush. With your Terrain Tile selected, pick your brush shape in the Inspector and make sure the opacity of your brush is set to 100. Paint a round shape where you plan to place the entrance of your cave.

ProBuilder and Polybrush are in-Editor tools for simple 3D modeling that can be used to create a basic cave. You can easily add both to your Project via the Package Manager. Once both are installed, you can start creating your cave with ProBuilder. Open the tool by navigating to Tools > ProBuilder > ProBuilder Window. Using the ProBuilder menu, create a new ProBuilder shape and select the Pipe preset. Identify which end of the pipe you'll use for the cave's entrance. Create a new Plane shape that's slightly larger than the pipe's radius, and use it to seal the other end of the cave. Select both objects in ProBuilder and merge them to create a single GameObject. Using the ProBuilder face selection tool, delete any extra faces on the plane that are outside of your sealed cave. 
Scale your object to match the radius of your Terrain hole, and move it into position. Using Polybrush, push/pull the vertices along your cave entrance until they align nicely with your Terrain hole. You'll also want to use Polybrush along the length of your cave to add variation and make it look more like a natural environment.

Congrats, you now know how to add a bunch of fun details to your Terrain! If you'd rather do your modeling externally, you can still use your favorite 3D modeling program to create a cave mesh and import it using Unity's DCC integration tools. Don't forget to further decorate your cave with rocks or lighting!

--

To learn more about how to create rich Terrain, check out our Paint Holes documentation and Terrain workshop from SIGGRAPH 2019. Happy "terraining!"
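Conceptually, a painted hole is just a per-tile boolean mask over the Terrain surface, and the article notes that you can manipulate these masks from code (in Unity C#, a bool[,] mask of this shape is what the TerrainData.SetHoles API consumes). The following language-neutral sketch is illustrative only, not Unity code, and simply shows what carving a circular hole into such a mask looks like:

```python
# Illustrative sketch (not Unity code): model a Terrain hole mask as a
# 2D boolean grid. Here True = surface present, False = hole; in Unity
# you would build an equivalent bool[,] and hand it to the Terrain API.

def carve_circular_hole(size, cx, cy, radius):
    """Return a size x size mask with a circular hole centred at (cx, cy)."""
    mask = [[True] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                mask[y][x] = False  # punch the hole
    return mask

mask = carve_circular_hole(size=8, cx=4, cy=4, radius=2)
print(sum(row.count(False) for row in mask))  # number of hole cells: 13
```

Scripted masks like this are how you would open and close holes at runtime, for example revealing a cave entrance after a trigger fires.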

>access_file_
1420|blog.unity.com

Unity 2019.3 is now available

This release features a brand-new Editor interface, a new Input System, faster in-Editor iteration, and lots of other improvements. The High Definition Render Pipeline and many 2D packages are now verified for 2019.3.

Regardless of whether you work in games, entertainment, automotive, architecture, or any other industry, the Unity 2019.3 TECH stream release has something for you. Read this post for the highlights and then visit the 2019.3 webpage for details on each feature area. The website collects the related technical talks from Unite Copenhagen, the latest tutorials, documentation on how to get started, and much more.

If you are in pre-production or simply want to get your hands on all the latest features now, you can download the full release from our update page. For those of you who have projects in production or want to update live projects, we highly recommend waiting for the 2019.4 Long-Term Support (LTS) release. Unity 2019.4 LTS will ship this spring. It will have the same feature set as Unity 2019.3. The difference is that while the TECH stream offers you the latest features and improvements, in the LTS releases we focus entirely on stability and quality. We only add fixes that address crashes, regressions, and issues that affect the wider community: Integrated Success Services customer issues, console SDK/XDK issues, or any major changes that would prevent many of you from shipping your game. 
The LTS release is supported for two years, with biweekly updates providing further fixes, and is intended for projects beyond pre-production. Check out some of the Unity 2019.3 highlights in this video.

You can now create holes, caves, or trenches with ease in Unity 2019.3 thanks to the newest Terrain updates. Preview your animation rigging and keyframing in Timeline for faster iteration and to take advantage of Timeline tools. With Presets, you can customize the default state of just about anything in Unity – components, import settings, even custom assets – without coding. Presets can benefit development teams of all sizes, from streamlining repetitive tasks and validating design decisions to enforcing standards and project templating. Unity now supports third-party renderer materials, enabling you to import specific materials like Autodesk Arnold Standard Surface shaders and display their properties correctly. With Scene Picking, you can now lock certain parts of your Scene so you can focus on what you actually want to update and not worry about making unintended changes. Unity 2019.3 also features several new additions to DOTS-powered artist tooling that make it easier for artists and designers to collaborate on DOTS-based projects and to take advantage of improved iteration speed and on-device performance.

The new suite of 2D tools makes high-end 2D creation more accessible by bringing new and improved workflows to all creators, from individual artists to large teams. The following packages are verified to work with Unity 2019.3: The 2D PSD Importer allows you to import layered Photoshop images directly into Unity, conserving the layer information and Sprites, which is particularly useful if you plan to use the 2D Animation package. 2D Animation provides all the tooling (Sprite rigging, tessellation, bone creation, etc.) 
you need to create skeletal animations right in the Sprite Editor. Unity now also includes two powerful tools for 2D worldbuilding: 2D Tilemap Editor makes it easy to create square, hexagonal, and isometric tilemaps, and 2D Sprite Shape allows you to create organic spline-based 2D terrains and objects. The 2D Pixel Perfect feature ensures that your pixel art remains crisp and stable in motion at different resolutions, and Cinemachine now includes a Pixel Perfect Virtual Camera extension to improve compatibility with 2D Pixel Perfect.

We continue improving our 2D tools, and so this release also contains a preview of new 2D features: The new 2D Lights and 2D Shadows are included in the Universal Render Pipeline as part of the 2D Renderer. Secondary Textures allow you to add Normal Maps and Mask Maps to Sprites in the Sprite Editor to make these GameObjects react more realistically to 2D lighting conditions. With 2D Animation's Sprite Swap, you can quickly change a character's appearance while keeping the same rigging and animation. Lost Crypt, a new sample project that showcases this 2D evolution, is available to download.

This release features a number of serialization improvements. The new SerializeReference attribute provides an alternative to ScriptableObjects for expressing relations between objects (e.g., graphs) and polymorphic containers (e.g., List). That means you can have regular C# objects referencing each other, which simplifies your code. And the transition to our new optimized UnityYAML library speeds up text serialization, including loading and saving Scenes.

We added Configurable Enter Play Mode as an Experimental feature. 
By disabling domain and/or Scene reloading from the Enter Play Mode process (when there are no code changes), you will speed up iteration times significantly. We also upgraded the PhysX library from v3.4 to v4.1, which includes a new API and faster MeshCollider instantiation, as well as a number of improvements for cloth.

Profiling improvements include a configurable frame count, allowing you to inspect performance data through a larger window of frames. Deep Profile now lets you instrument C# code in all Players, and call stack support for managed allocations lets you identify when a C# function is triggering the Garbage Collector in all Players.

This release also introduces a number of efficiency improvements to the DOTS game code that allow you to achieve more with fewer lines of code. (See also the Data-Oriented Technology Stack (DOTS) section below.) In other news for programmers interested in DOTS, Havok Physics for Unity is now available via the Unity Package Manager, with subscription plans for Unity Pro users now available in the Unity Asset Store. This integration is written using the same C# DOTS framework as Unity Physics, and brings the features, performance, stability, and functionality of the closed-source, proprietary Havok Physics engine, written in native C++, to developers who have more complex physics needs.

The High Definition Render Pipeline (HDRP) is now a verified package for 2019.3 and is recommended for delivering performant, high-fidelity graphics and photorealism on high-end hardware. HDRP assets scale in quality, taking advantage of the available hardware resources. Unity 2019.3 updates to HDRP include Custom Render Pass, Custom Post-processing, and Physically Based Sky. HDRP now also works for VR, and includes real-time ray tracing as a preview feature. 
Ray tracing takes into account the objects in your Scene and simulates true light, shadows, and reflections, which in the offline world would require long render times and/or big budgets.

The Universal Render Pipeline, formerly known as the Lightweight Render Pipeline, lets you reach the widest number of Unity-supported platforms with best-in-class visual quality and performance. It comprises a full suite of artist tools for content creation, so whether you're building a 2D, 3D, VR, or AR project, you only need to develop once to deploy everywhere. The Universal Render Pipeline now comes with a completely revamped, integrated Post-Processing Stack for greater performance. And you can update your projects from Unity's Built-in Render Pipeline to benefit from better performance and scaling.

The Visual Effect Graph package is verified for Unity 2019.3 and integrated with Shader Graph, which allows you to easily create high-fidelity visual effects. We also added motion vectors and Particle Strips to the Visual Effect Graph, providing you with even more control over your particle effects. In Shader Graph you can now add Shader Keywords to create static branches in your graph, which can be used for building your own Shader LOD system. We've also added support for vertex skinning for DOTS Animation, and sticky notes, which let you leave comments and explanations for anyone working on the project.

This release also includes multiple lighting updates. For example, you can now merge Light Probes in additively loaded Scenes, making it easier to handle lighting for large Scenes that are broken up into smaller chunks. We've also added many performance improvements and updates to the Progressive Lightmapper.

The Heretic is a short film by Unity's award-winning Demo team, now available on YouTube in its entirety. 
The first part of the project was revealed at GDC 2019, and we shared a preview of the second part at Unite Copenhagen 2019. The Heretic project runs on Unity 2019.3, using a broad range of out-of-the-box graphics features, including every possible aspect of HDRP and the Visual Effect Graph. Watch the whole film to see an entirely VFX-based character that we introduce at the end of the short.

We've revamped the Editor UI with new icons, a new font, visual feedback, and much more to improve usability, legibility, and performance, and to support high-DPI display resolutions. With the new Quick Search feature, you can easily find anything in the Editor, including assets, GameObjects, settings, and even menu items. UIElements includes several new features that add useful functionality to the USS stylesheet. The new UI Builder is a visual authoring environment that lets users access the underlying framework of UIElements. We've improved the Package Manager, including giving you the option to install packages from a Git repository via a URL. Additionally, you can now manage your Asset Store collection directly through the Package Manager. The new Unity Accelerator provides a local network proxy and cache service that speeds up iteration times for Collaborate source code downloads and Asset pipeline importing.

The new Addressable Asset System (i.e., Addressables) gives you and your team an efficient way to manage complex live content by loading assets by an address that can be called from anywhere. We've also updated the AssetDatabase Pipeline to Version 2, which provides asset dependency tracking and many other improvements that together lay the foundation for a more reliable, performant, and scalable pipeline. It also greatly improves platform switching and swapping between previously imported versions of assets.

The Input System is the new standard to integrate device controls in your projects. 
The new workflow is designed around Input Actions, an interface that lets you separate control bindings from code logic. The new system is consistent across platforms, extensible and customizable, and is available in Preview.

The Incremental Garbage Collector is now production-ready (no longer experimental). This feature can significantly reduce the problem of Garbage Collector interruptions by distributing the workload over multiple frames. It supports all target platforms except WebGL. Unity's platform-abstraction layer, Baselib, unifies base functionality for the most common platform-dependent operations. In this release, Baselib updates improve the stability and performance of parallel data structures and synchronization primitives.

Are you interested in publishing your game on Stadia? We now offer support for everything that approved developers need to create and ship their first game on Google's new cloud gaming platform. Interested developers should start the process with an application for resources on Google's Stadia developer website.

AR Foundation, the framework that enables you to build your application once and deploy it across ARKit- and ARCore-enabled devices, now extends to Magic Leap and HoloLens devices. The XR Interaction Toolkit enables you to add interactivity to your AR and VR experiences, across our supported platforms, without having to code the interactions from scratch. 
It provides a set of MonoBehaviours/scripts that implement common object and UI interaction scenarios for both AR and VR devices. Ensure your AR and VR experiences reach the widest possible audience with our modularized XR plugin architecture workflow. To achieve highly realistic graphics and lighting effects that let you push the boundaries of high-fidelity VR, check out HDRP for VR.

The Device Simulator (Preview) allows you to simulate how your content will look, as well as preview its behaviors and some physical characteristics, on a broad range of devices. With Unity as a Library, you can now insert features powered by Unity directly into your native mobile applications. These features include, but aren't limited to, 3D or 2D real-time rendering functions for augmented reality, 2D mini-games, or 3D models. On-demand rendering lets you control the rendering loop independently from the rest of Unity's subsystems, giving you more control to lower power consumption and prevent thermal CPU throttling.

Finally, we have moved the system requirements for Unity 2019.3 to the Unity Manual (they were formerly here). We have also added the details for using the Unity Editor and Player on all supported platforms so you can clearly see what's required and supported. Note that the minimum supported OS versions are now Android 4.4 (API 19) and iOS 10, and that OpenGL ES is deprecated on iOS.

At Unite Copenhagen 2019 we revealed the DOTS Sample project. It showcases how all the DOTS-powered components, including Physics, Animation, NetCode, and Conversion Workflow, work in Unity 2019.3. While we designed it to be an internal test project, feel free to download it and experiment with it. It's available on GitHub and includes all source code and assets. Here are some of the DOTS features available in this release: DOTS game code updates, which let you achieve more with less boilerplate code. The first iteration of our upcoming new Animation system for DOTS. 
It offers all the core animation functionality, such as animation blending, runtime IK, root motion, layers, and masking. The FPS NetCode used in the DOTS Sample is built on top of DOTS and makes it easy to create a networked game with a similar architecture; it provides client-side prediction, an authoritative server, and interpolation. Unity Physics leverages the Burst Compiler and the C# Job System and provides functionality such as collision detection and the raycasts used for shooting-game mechanics in the project. The Conversion Workflow enables you to convert your GameObjects to entities with one click, so you can harness the power of DOTS while using the workflows you already know. With Unity Live Link, you can make changes in the Editor and push them in real time to your target device, giving you instant feedback on how changes look, feel, and perform on the actual device.

As with all releases, 2019.3 also includes a large number of minor improvements and bug fixes. Find the full list in the 2019.3 release notes. You can also use the Issue Tracker to find specific information on individual bugs.

We are happy to announce the four lucky winners of our Unity 2019.3 beta sweepstakes! To celebrate the release of real-time ray tracing in Preview, NVIDIA supplied us with four brand-new NVIDIA GeForce RTX™ 2080 GPUs, which beta participants were eligible to win by helping identify bugs during the 2019.3 beta cycle. Congratulations to Antonios, Dwayne, Kevin, and Tom! Make sure to look out for our upcoming 2020.1 beta sweepstakes and stay updated with beta news by signing up for our newsletter. You can provide feedback on the new features and updates in our forums as well.

Are you curious about what's going to be in Unity 2020.1? You can get access to the alpha version now or wait for the beta. If you're interested in knowing more about our Preview packages, check out the overview here. We are excited to announce our release plans for this year. 
With more and more features distributed as packages and continuously updated, we're reducing the number of TECH stream releases from three to two per year. The 2019 Long-Term Support (LTS) release will be available in spring 2020. Also, remember that since we support each LTS release for two years, Unity 2017 LTS will reach end of life in March 2020. The 2020.1 TECH stream release is scheduled for spring 2020 and the 2020.2 release for fall 2020. The cadence for updates with bug fixes and regressions remains unchanged.

>access_file_