// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 72 of 85

[ 2020 ]

1 entry
1421|blog.unity.com

Lexus opens the door for virtual production in Unity

Car buyers increasingly expect rich and unique online experiences during their purchase journeys. In this post, learn how Lexus is using Unity to reimagine the production process for advertising/marketing content – ultimately paving the way for the automaker and its creative partners to create high-fidelity imagery and videos faster, more cost-effectively, and at an unprecedented scale. We recently collaborated with Lexus and its agency Team One to create a real-time configurator showcasing the Lexus ES MY19. While you may be familiar with consumer-focused car configurators – commonly seen in web-based “build and price” tools – this new process provides richer capabilities for industry professionals. First, it was purpose-built for developing virtual productions that are too difficult or too expensive to create in the real world. Additionally, instead of providing a generic user experience, it opens up a full-scale vehicle in real-time – with interactive components – letting Lexus and Team One tailor the car’s appearance, setting, perspective, time of day and more. With this new process, professionals can move through configurations, switching trims and paint colors on the fly. View the car at night or from any angle. Ditto for swapping out interior options. When they’ve tailored and staged their ideal vehicle, they can capture the perfect shot through a virtual single-lens reflex (SLR) camera. And that spells innovation for Lexus. Listen as Lexus and Team One explain how Unity is transforming their virtual productions. So how did this innovation at Lexus come about? It started simply, with the brand's philosophy to always put the customer first. And one of the ways they try to accomplish that is by practicing omotenashi, the Japanese concept of treating each customer as if he or she is a valued guest in your home and truly anticipating their needs. This plays out in interesting ways for virtual productions. Traditional methods of creating and delivering car imagery and content produce static, pre-rendered visuals that aren’t truly interactive. As a result, they don’t cater to people’s expectations or take advantage of the changing digital and social media landscape. Since the emergence of highly visual platforms such as Instagram, the behavior and expectations of consumers, especially millennials, have changed radically. Potential car buyers now want to see their “fantasy” ride in exciting or dramatic locations, igniting their imaginations. Virtual productions made with Unity pave the way for Lexus to offer these hyper-personalized experiences. Just before Unite Copenhagen in September, the Team One crew selected an appealing location in Denmark as a setting for their in-development app. They collaborated with experts from British image library Domeble, who specialize in CGI, 3D and VR production. Working at Amalienborg Palace, home to the Danish royal family, the team got down to work capturing high-resolution shots – at various times of day – of this marvel completed in 1760. This reality capture data was then used as the basis for photogrammetric scans, which helped the studio elite3d model the courtyard entirely in 3D. “One of the great things about creating volumetric assets for virtual production is the car doesn’t have to be there at all. This is a huge advantage for companies and their agencies,” said Edward Martin, Unity’s Senior Product Manager for Automotive, Transportation & Manufacturing. 
“First, virtual productions don’t require months of planning or incur the logistics costs of transporting the vehicle, including lighting and props, nor do they need a large crew, crane or set on-site.”Another typical issue for traditional photoshoots is that there’s a fair amount of secrecy around new models, making it almost impossible to pose the real car near urban landmarks or popular vistas. Virtual productions say goodbye to such concerns. And if the weather doesn’t cooperate, there can be costly delays.“For the Amalienborg shoot, we simply picked a day with nice weather,” explains Martin. “There were no special setups required. After a couple of hours – including golden hour – we had all the data we needed to create the volumetric scene, and Team One was able to load the best shots into their app and begin staging the car virtually.”As a proof of concept, Team One assembled their virtual production assets in Unity, adding a UI and virtual-camera options to create the real-time configurator. The app’s interactive controls for the car include the ability to configure the trim level, interior and exterior colors and add animations, while the Environment controls include Time of Day, Position, and Rotation. Once the creative team has tailored and staged the car exactly as they want, they can view it through the virtual SLR camera and access familiar photography controls. For example, they can choose from five different lens types, as well as f-stops and ISO settings, to get the perfect photo-realistic look – wide-angle or tightly cropped, narrow or wide depth of field, and so on.“We had fun on this project and it all came together in time for Unite Copenhagen,” Martin says. “Team One tapped a number of new Unity features, including the High Definition Render Pipeline (HDRP) and Shader Graph.”Other benefits for carmakers and agencies considering creating a real-time app include vastly simplified post-production workflows (e.g., with a virtual car, no smudges or other cosmetic blemishes to worry about), ease of rendering lifelike textures such as leather, and ability to integrate inexpensive digital props from the Asset Store rather than managing physical props.“I’m really excited about the possibilities of real-time and engines like Unity,” said Alastair Green, Executive Creative Director at Team One. “Creating at the speed of conception gives us the ability to have an idea in the morning, execute by the afternoon, and post on social media by the evening. We can take omotenashi to the next level by delivering personalized content at scale and in near real-time to the Lexus community.”“People’s expectations are set by the experiences they have online,” said Gabe Munch, Digital and Social Media Manager at Lexus USA. “When you can make something more interactive and personalized, you’re always going to see more engagement.”To learn more about how Team One developed their Lexus ES MY19 virtual production and real-time configurator in Unity, check out Part 2 of this post. Pierre Yves Donzallaz, from Unity’s Spotlight team, goes behind the scenes and explains how the High Definition Render Pipeline and other features were key for optimizing the visual quality for high-end visualizations.For more general information about the many use cases for Unity and real-time 3D, see Unity for Automotive. Get started on your real-time 3D journey by trying or buying Unity Industrial Collection today.

>access_file_

[ 2019 ]

19 entries
1422|blog.unity.com

Download our new 2D sample project: Lost Crypt

We put our new 2D tools through their paces to create a 2D side-scroller demo. In this post we show how these integrated 2D tools can help you create high-end visuals with Unity.Highly skilled teams have been making gorgeous 2D games with Unity for years, but we wanted to enable everyone, from individual artists to large teams, to have even more 2D tools available to create great-looking games. And many of them will be production-ready as part of Unity 2019.3, which is currently available in beta.We created Lost Crypt using the complete suite of 2D tools. This lively scene features animation, light effects, organic terrain, shaders, and post-processing, all made natively in 2D. It shows how teams and projects of all sizes, targeting any platform, can now get more engaging and beautiful results faster.Lost Crypt should run well on any desktop computer and we have also implemented on-screen controls with the new Input System in case you want to run it on an iOS or Android device. In our tests it ran at 30 fps on common devices like an iPhone 6S.Download from Asset StoreOnce you’ve downloaded Lost Crypt from the Asset Store, we recommend that you start with a blank New Project and select 2D, then import the project from My Assets in the Package Manager, or by clicking My Assets on your Asset Store page. The project includes all the 2D packages you need. It will then overwrite the project settings, changing rendering settings to the 2D Renderer within the Universal Render Pipeline.Once you import it, you will see the Main Scene. When you click Play you should be able to play normally using the keyboard arrows and spacebar to jump.The scripts and game logic are fairly simple, as the main focus of the demo was to make use of the 2D tools to materialize the demo’s concept art.We’ve broken down the demo into several tasks and picked which 2D tools to leverage for that visual challenge.The character was designed in Photoshop and imported directly with the 2D PSD Importer. Open the Sara.psb file with the Sprite Editor to see the character setup and rig. If you open the file with Photoshop, you will see how we kept the different parts and layer names intact.One of the features available with the Universal Render Pipeline is the new Sprite-Lit material. Compared to the usual Sprite-Default material, this one allows Sprites to react to 2D lighting conditions.We imported the character Normal maps in the Sprite Editor, using the Secondary Textures drop-down menu. You can add Normal and Mask maps to 2D animated characters, regular Sprites, tilemaps, and Sprite shapes.The character has 2D IK solvers in the legs to help the animator focus on positioning the ankle and foot tips correctly, then the legs will follow realistically.Once we rigged the character, we made the different animations with the Animation tool and Animator to control those states. You can see how the tool works in this talk from GDC 2019.The character’s ponytail is a different child GameObject and is controlled by 2D Physics. It reacts realistically to movement because every bone of the ponytail has a Hinge Joint 2D component with some restrictions. That allows her hair to move freely without curling too much or overreacting to the character movement.One of the most powerful possibilities of having dynamic lighting on Sprites is to alter the appearance of the environment. 
Using the same Sprites, you can change the mood, time of day or darkness of an area, which opens up a wealth of gameplay possibilities, from stealth mechanics to lively, rich worlds. We controlled the lights in the Scene with simple scripts that hold a Color gradient value (light color from daytime to nighttime); the lights and materials change color following the Time parameter in the parent GameObject. With this kind of setup, you can visually control how different lights blend with each other. One of the challenges that developers of 2D games had in the early days was to efficiently create organic terrain like hills, slopes or irregular ground, which was only achievable through carefully crafted tile sheets. Years later, this was easier to achieve using multiple Sprites representing parts of the terrain, but the workflow or performance could be better. With 2D Sprite Shape you can generate terrain and colliders similarly to how you would do it in a vector-based drawing application. You can adjust the brushes (Sprite Shape Profiles) and start creating without worrying about having to adjust many Sprites or colliders as you iterate on the environment. Lost Crypt also uses some of the Sprite Shape extras, like the NodeAttach script to attach elements to the spline so that they follow it. In this demo, the rocks use this script, and the flowers layer uses ConformingSpline to follow the shape of the grass spline. You can use this feature for gameplay or for decorative elements, like we did in the foreground grass layer. Tilemaps are probably one of the most essential 2D tools, not only because they save memory space with small chunks of graphics that are “tile-able” and repeatable, but also because they’re crucial for level design. In Lost Crypt, we used the 2D Tilemap Editor (available via Package Manager) to recreate the crypt interior, along with some additional Tilemap extras (available on GitHub) that make the level-design process more efficient. For example, we used Rule Tile, a tile type that allows you to paint tiles like they were brushes. It automatically selects the right tile based on the neighbouring tiles or ends. Fire effects are a common element in games, so we added one in Lost Crypt. We started by creating some GameObject torches using the Particle System and Shader Graph for 2D, and used the Sprite-Lit Master node for the output shader. We made the fire animation in a traditional Sprite sheet that the Particle System uses to play the animation. The shader we made for the flame utilizes an HDR tint color to increase the intensity of the glow around the object via Volume post-processing. The parent GameObject contains some spark particles and some lights that illuminate the alcove. Another common use case for shaders is reflections and refractions (e.g., water, ice, mirrors or monitors that display another area of the level). We achieved the water effect in the crypt entirely with Shader Graph. We exposed several parameters (like water color, wave speed, distortion, ripple effect, etc.), which allowed us to adjust the final appearance in the Scene. In order to project a mirrored image of the environment, we added an additional camera that outputs the image to a texture to be read by the Shader Graph. Additionally, we added a pass of post-processing bloom to make the water caustics glow, which gives the water surface a nice effect. One way we animated the environment was to make the tree branches sway in the wind. 
To achieve this effect, we decided to have just one shader for each tree’s foliage Prefab – that would create variety and avoid repetition. On the Vegetation Wind-Lit graph, you can see how we created two effects. One is the animation or sway effect, which we created by displacing the Vertex position following a few parameters that modify a noise pattern. The second effect uses the G or green channel to adjust the tone of the rim light around the foliage. Light Blend Styles are a collection of properties in the 2D Renderer that describe how lights should affect Sprites. For example, you can create a blend style that will only affect a particular channel. When you add a light in the scene that uses that blend style, it will only affect the areas of the Sprite that the Mask map channel information dictates. In the example below, the parametric light uses our Direct Lighting blend style, which only affects the areas indicated in the R channel of that Sprite’s Mask map. Lost Crypt has a short cinematic when our adventurer grabs the magic wand in the crypt. To make the moment a bit more special, we changed the mood of the environment and spawned particles at the right time with Timeline, since we wanted to observe the change to nighttime. To follow the particles flying into the woods, we switched Cinemachine cameras, using an animation track to blend between them. The glowing ring that appears when you grab the wand uses Sprite-type lights. The ring graphic simply expands and fades, creating an aura that lights up the environment. We achieved the particle glow effect mostly with the Bloom effect in Volume post-processing. Also, the material/shader for the particles and trail uses an HDR color to define how much intensity the post-processing effect should apply to this object. Look closely at the woods – you can see some spectral creatures in there. To create them, we made a shader that could also be used for other ghosts. They are transparent, but a fresnel effect adds some shine to the edges of the Sprite, making them wobble like floating creatures. One interesting effect in the shader is the tracking of the wand GameObject’s transform position. For example, when the wand is close to the ghouls they become brighter. In order to achieve that, we attached a small script to the wand that updates its position in the material shader. The ghouls also have a small animation that swaps from one graphic to another. In order to do that, we created a Mask map with different graphics in each channel: R with one visual, G with the alternate visual, and B for the fresnel effect. For a final layer of polish, we added some post-processing effects included in the Universal Render Pipeline. For example, we created an empty GameObject and attached the Volume component to it. In Lost Crypt we use bloom, white balance, and vignette, but there are many other effects that can be used in 2D projects, like motion blur, color correction, or film grain effects. We hope the Lost Crypt demo will help you understand how you can use our integrated suite of 2D tools for similar use cases. If you have any questions about Lost Crypt, you can reach us on the forum. If you have specific questions about our 2D tools, check out the dedicated threads in the 2D section of the forum and under Beta & Experimental features. Want to go behind the scenes with Lost Crypt in real-time? Sign up for our live webinar, where Global Content Evangelist Andy Touch will explain how we used 2D lights, shaders, and post-processing in Lost Crypt. 
The 2D team from R&D will also join us to answer your questions about the suite of 2D tools or the project itself. There’s a limited number of spaces, so make sure you register soon and add a reminder in your calendar.
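As a rough illustration of the day/night light scripting described earlier in this entry – a minimal sketch under our own assumptions, with hypothetical names rather than the actual Lost Crypt scripts – the idea is to hold a Gradient and sample it with a normalized time-of-day parameter set on the parent GameObject:

    using UnityEngine;

    // Illustrative only: evaluates a day-to-night Gradient by a time parameter.
    // The same pattern can drive the color of the scene's 2D lights so they blend together.
    public class DayNightTint : MonoBehaviour
    {
        [Range(0f, 1f)]
        public float timeOfDay;        // 0 = daytime, 1 = nighttime

        public Gradient colorOverDay;  // light/material color from daytime to nighttime

        SpriteRenderer spriteRenderer;

        void Awake()
        {
            spriteRenderer = GetComponent<SpriteRenderer>();
        }

        void Update()
        {
            // Sample the gradient by the time parameter and tint the Sprite accordingly.
            spriteRenderer.color = colorOverDay.Evaluate(timeOfDay);
        }
    }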

>access_file_
1424|blog.unity.com

3 ways to utilize impression level revenue to grow your game business

Developers in the gaming space are approaching growth in a way they’ve never done before - merging their monetization and user acquisition efforts and creating one virtuous cycle of growth. Previously, monetization and user acquisition teams were siloed, which hindered their own ability to grow. How can UA managers buy efficiently if they don’t know how much advertising revenue the monetization team is bringing in and how? With the two interacting with and informing each other, both sides gain a truly holistic view of the entirety of their business.Impression level revenue brings these two teams even closer. How? By giving monetization teams insight into users’ true ARPU, UA teams get the ability to accurately measure ad revenue on the device level, combine it with IAP revenue, and link that data to the marketing channel or creative that brought in the specific user.Here are three ways app developers can utilize impression level revenue:1. Buy users more efficiently across all marketing channelsOver the past year, we’ve witnessed the rise of 100% ad-based games (ie, hyper-casual), and even IAP developers implementing more ads to monetize their user base. However, without impression level revenue, these games (no matter the proportion of ads they use) couldn’t properly identify ad ARPU - preventing them from learning which marketing channels drove their highest-valued ad engaged users.Likely, these developers were either underbidding on strong marketing channels or overbidding on weak channels just because they thought that the users they were acquiring weren’t generating significant revenue. But even though these users weren’t making IAPs, perhaps they were generating a large amount of ad revenue.With access to impression level revenue, app developers can pinpoint the marketing channels that bring in high-value ad users, and bid higher and lower where necessary. Ultimately, when developers don’t have access to or ignore impression level revenue data, they don’t know the exact value of their users, and are unable to bid effectively.2. Understanding your users and optimizing your monetization strategyBy giving monetization managers a complete picture on a user’s true ARPU, which combines both ad and IAP revenue, impression level revenue makes it easier to segment based on monetization behavior. Ensuring that placements are both relevant and personalized results in a higher engagement rate and more revenue.For example, developers can use impression level revenue to identify ad whales, create segments around these players, and monetize them accordingly. It’s important not to cap the number of rewarded ad impressions for this segment. Since more often than not these users will engage with rewarded ads every time they’re exposed to them, it’s best to show them as many opt-in ads as possible. This is guaranteed revenue and guaranteed engagement.Conversely, developers can segment users who aren’t generating any ad revenue at all, and show only system-initiated ads. By covering every ad revenue segment, you’re better able to maximize your revenue.3. Utilize the ROAS optimizerIf developers are already utilizing ad revenue data, then there’s nothing stopping them from using ironSource’s ROAS optimizer. The algorithm, which optimizes for ROAS, considers both IAP and device level ad revenue data in order to work out the optimal bid and reach desired ROAS goals. Once the optimizer has determined the optimal bid, it’s able to automatically update thousands of bids at a time. 
This solution ensures that UA managers maximize both scale and performance - easing the level of manual work through automation.In fact, when hyper-casual developer Kwalee utilized the ROAS optimizer, they reduced their eCPI by 10% and raised ROAS by 30% on Android and 40% on iOS. This tool allows for UA managers to bid on actual user quality.With impression level revenue, developers have a full picture of where the revenue they’re generating is coming from. Now, the industry has a data set at their disposal that can help them determine their overall app business health, optimize in-app ad monetization strategies, and most importantly, optimize UA campaign spend. Impression level revenue perfectly bridges together the monetization and user acquisition side of app businesses, and now the gaming space has access to data that is actionable for both monetization and marketing teams.
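To make the “true ARPU” idea concrete, here is a minimal sketch of how the two revenue streams might be combined per user and used for segmentation. The class, fields, and thresholds are purely illustrative assumptions, not part of any ironSource or Unity API:

    // Illustrative only: combine ad and IAP revenue per user to get "true ARPU",
    // then segment players for monetization decisions. Threshold values are made up.
    public class UserRevenue
    {
        public double AdRevenue;   // summed from impression level revenue data
        public double IapRevenue;  // summed from in-app purchases

        public double TrueArpu => AdRevenue + IapRevenue;

        public string Segment()
        {
            if (AdRevenue > 1.00) return "ad whale";      // don't cap their rewarded ad impressions
            if (AdRevenue == 0)   return "no ad revenue"; // show only system-initiated ads
            return "standard";
        }
    }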

>access_file_
1426|blog.unity.com

Unity opens new possibilities for the anime industry

Japanese animation studio Craftar took to the stage at Unite Tokyo 2019 to talk about the new possibilities that Unity brings to the Japanese animation industry.Few Japanese studios have explored the potential of Unity as a production tool for anime because they don’t know if its real-time improvements to their workflow can deliver the extremely high-quality animation that audiences demand. But as visionary studios like Craftar have found, Unity streamlines their production pipeline and delivers incredible results, while also creating new opportunities.The many achievements of Craftar, which is the consulting arm of major Japanese PR company Hakuhodo Inc., include world-class content such as the Netflix anime INGRESS and 2019 animated film The Relative Worlds. The latter, produced by its subsidiary animation company, Craftar Studios, implemented Unity in several difficult scenes. (You can see how Craftar worked with Unity’s NavMesh feature here.)This fall, global auto parts supplier Denso Corporation reached out to Craftar to create an animated promotional video showcasing Denso’s vision for a near-future smart city, highlighting the ways that VR and AR content could be integrated into self-driving cars. The entire animation was rendered in real-time with Unity and can be viewed as either a VR or a standard film experience.Using Unity, Craftar was able to seamlessly create the animation and VR content simultaneously. The result is an experience that immerses viewers inside an anime world that’s as captivating as a standard animated film, thanks to Unity’s real-time rendering capabilities.During the Keynote at Unite Tokyo 2019, Shoichi Furuta, CEO and creative director of Craftar, explained Craftar’s philosophy towards animation: “Our company doesn’t just make animation, we use animation to tackle issues in the industry and in society.” The company is driven by “smart CG animation,” its vision to push the industry forward using the latest technologies like real-time engines and AI – which ultimately led them to choose Unity to create the beloved, richly expressive Japanese anime style using cel shaders.“We at Craftar have only just begun bringing our wealth of expertise into Unity, which will soon become one of the core engines of life and society,” said Furuta. “We’re in an age where everything from smartphones to cars is going digital, which is massively expanding the UX/UI market. It’s essential for every interface to have excellent motion design, and Japanese animation expertise is invaluable when it comes to delivering abundant information in a short time with limited resources.”The innovative Denso project blurred the line between the entertainment and automotive industries, Furuta explained. “Thanks to Unity helping to bridge the gap between the anime industry and the automotive industry, we’ve even been able to smoothly overcome the barriers between businesses and between devices, and bring two previously unrelated industries in Japan closer together.” Using Unity, Craftar intends to continue not only to push the boundaries of games and anime but to also break through the walls between other industries.

>access_file_
1428|blog.unity.com

Training your agents 7 times faster with ML-Agents

In v0.9 and v0.10 of ML-Agents, we introduced a series of features aimed at decreasing training time, namely Asynchronous Environments, Generative Adversarial Imitation Learning (GAIL), and Soft Actor-Critic. With our partner JamCity, we previously showed that the parallel Unity instance feature introduced in v0.8 of ML-Agents enabled us to train agents for their bubble shooter game, Snoopy Pop, 7.5x faster than with a single instance. In this blog post, we will explain how v0.9 and v0.10 build on those results and show that we can decrease Snoopy Pop training time by an additional 7x, enabling more performant agents to be trained in a reasonable time.The purpose of the Unity ML-Agents Toolkit is to enable game developers to create complex and interesting behaviors for both playable and non-playable characters using Deep Reinforcement Learning (DRL). DRL is a powerful and general tool that can be used to learn a variety of behaviors, from physics-based characters to puzzle game solvers. However, DRL requires a large volume of gameplay data to learn effective behaviors– a problem for real games that are typically constrained in how much they can be sped up.Several months ago, with the release of ML-Agents v0.8, we introduced the ability for ML-Agents to run multiple Unity instances of a game on a single machine, dramatically increasing the throughput of training samples (i.e., the agent’s observations, actions, and rewards) that we can collect during training. We partnered with JamCity to train an agent to play levels of their Snoopy Pop puzzle game. Using the parallel environment feature of v0.8, we were able to achieve up to 7.5x training speed up on harder levels of Snoopy Pop.But parallel environments will only go so far—there is a limit to how many concurrent Unity instances can be run on a single machine. To improve training time on resource-constrained machines, we had to find another way. In general, there are two ways to improve training time: increase the number of samples gathered per second (sample throughput), or reduce the number of samples required to learn good behavior (sample efficiency). Consequently, in v0.9, we improved our parallel trainer to gather samples asynchronously, thereby increasing sample throughput.Furthermore, we added Generative Adversarial Imitation Learning (GAIL), which enables the use of human demonstrations to guide the learning process, thus improving sample efficiency. Finally, in v0.10, we introduced Soft Actor-Critic (SAC), a trainer that has substantially higher sample efficiency than the Proximal Policy Optimization trainer in v0.8. These changes together improved training time by another 7 times on a single machine. For Snoopy Pop, this meant that we were not only able to create agents that solve levels but agents that solved them in the same # of steps as a human player. With the increased sample throughput and efficiency, we were able to train multiple levels of Snoopy Pop on a single machine, which previously required multiple days of training on a cluster of machines. This blog post will detail the improvements made in each subsequent version of ML-Agents, and how they affected the results in Snoopy Pop.We first introduced our integration of ML-Agents with Snoopy Pop in our ML-Agents v0.8 blog post. The figure below summarizes what the agent can see, what it can do, and the rewards that it received. 
Note that compared to our previous experiments with Snoopy Pop, we decreased the magnitude of the positive reward and increased the penalty for using a bubble, forcing the agent to focus its attention less on simply finishing the level and more on clearing bubbles in the fewest number of steps possible, just as a human player would do. This is a much harder problem than just barely winning the level, and takes significantly longer to learn a good policy.In ML-Agents v0.8 , we introduced the ability to train multiple Unity instances at the same time. While we are limited in how much we can speed up a single instance of Snoopy Pop, multi-core processors allow us to run multiple instances on a single machine. Since each play-through of the game is independent, we can trivially parallelize the collection of our training data.Each simulation environment feeds data into a common training buffer, which is then used by the trainer to update its policy in order to play the game better. This new paradigm allows us to collect much more data without having to change the timescale or any other game parameters which may have a negative effect on the gameplay mechanics.In ML-Agents v0.9, we introduced two improvements to sample efficiency and sample throughput, respectively.Asynchronous EnvironmentsIn the v0.8 implementation of parallel environments, each Unity instance takes a step in sync with the others, and the trainer receives all observations and sends all actions at the same time. For some environments, such as those provided with the ML-Agents toolkit, the agents take decisions at roughly the same constant frequency, and executing them in lock-step is not a problem. However, for real games, certain actions may take longer than others. For instance, in Snoopy Pop, clearing a large number of bubbles incurs a longer animation than clearing none, and winning the game and resetting the level takes longer than taking a shot. This means that if even one of the parallel environments takes one of these longer actions, the others must wait.In ML-Agents v0.9, we enabled asynchronous parallel environments. As long as at least one of the environments have finished taking its action, the trainer can send a new action and take the next step. For environments with varying step times, this can significantly improve sample throughput.Generative Adversarial Imitation Learning (GAIL)In a typical DRL training process, the agent is initialized with a random behavior, performs random actions in the environment, and may happen upon some rewards. It then reinforces behaviors that produce higher rewards, and, over time, the behavior tends towards one that maximizes the reward in the environment and becomes less random.However, not all optimal behavior is easy to find through random behavior. For example, the reward may be sparse, i.e. the agent must take many correct actions before receiving a reward. Or, the environment may have many local optima, i.e. places where the agent could go that appear to be leading it towards the maximum reward but is actually an incorrect path. Both of these issues may be possible to solve using brute-force random searching but will require many, many samples to do so. They contribute to the millions of samples required to train Snoopy Pop. In some cases, it may never find the optimal behavior.But what if we could do a bit better by guiding the agent towards a good behavior by providing it with human demonstrations of the game? 
This area of research is called Imitation Learning and was added to ML-Agents in v0.3. One of the drawbacks of Imitation Learning in ML-Agents was that it could only be used independently of reinforcement learning, training an agent purely on demonstrations but without rewards from the environment.In v0.9, we introduced GAIL, which addresses both of these issues, based on research by Jonathan Ho and his colleagues. You can read more about the algorithm in their paper.To use Imitation Learning with ML-Agents, you first have a human player (or a bot) play through the game several times, saving the observations and actions to a demonstration file. During training, the agent is allowed to act in the environment as usual and gather observations of its own. At a high level, GAIL works by training a second learning algorithm (the discriminator, implemented with a neural network) to classify whether a particular observation (and action, if desired) came from the agent, or the demonstrations. Then, for each observation the agent gathers, it is rewarded by how close its observations and actions are to those in the demonstrations. The agent learns how to maximize this reward. The discriminator is updated with the agent’s new observations, and gets better at discriminating. In this iterative fashion, the discriminator gets tougher and tougher---but the agent gets better and better at “tricking” the discriminator and mimicking the demonstrations.Because GAIL simply gives the agent a reward, leaving the learning process unchanged, we can combine GAIL with reward-based DRL by simply weighting and summing the GAIL reward with those given by the game itself. If we ensure the magnitude of the game’s reward is greater than that of the GAIL reward, the agent will be incentivized to follow the human player’s path through the game until it is able to find a large environment reward.Since its initial release, the ML-Agents Toolkit has used Proximal Policy Optimization (PPO) – a stable, flexible DRL algorithm. In v0.10, in the interest of speeding up your training on real games, we released a second DRL algorithm, SAC, based on work by Tuomas Haarnoja and his colleagues. One of the critical features of SAC, which was originally created to learn on real robots, is sample-efficiency. For games, this means we don’t need to run the games as long to learn a good policy.DRL algorithms fall into one of two categories–on-policy and off-policy. An on-policy algorithm such as PPO collects some number of samples, learns how to improve its policy based on them, then updates its policy accordingly. By collecting samples using its current policy, it learns how to improve itself, increasing the probability of taking rewarding actions and decreasing those that are not rewarding. Most modern on-policy algorithms, such as PPO, learn a form of evaluation function as well, such as a value estimate (the expected discounted sum of rewards to the end of the episode given the agent is in a particular state) or a Q-function (the expected discounted sum of rewards if a given action is taken at a particular state). In an on-policy algorithm, these evaluators estimate the series of rewards assuming the current policy is taken. Without going into much detail, this estimate helps the algorithm train more stably.Off-policy algorithms, such as SAC, work a bit differently. 
Assuming the environment has fixed dynamics and reward function, there exists some optimal relationship between taking a particular action at a given state, and getting some cumulative reward (i.e., what would the best possible policy be able to get?). If we knew this relationship, learning an effective policy would be really easy! Rather than learning how good the current policy is, off-policy algorithms learn this optimal evaluation function across all policies. This is a harder learning problem than in the on-policy case – the real function could be very complex. But because you’re learning a global function, you can use all the samples that you’ve collected from the beginning of time to help learn your evaluator, making off-policy algorithms much more sample-efficient than on-policy ones. This re-use of old samples is called experience replay, and all samples are stored in a large experience replay buffer that can store hundreds (if not thousands) of games’ worth of data. For our toolkit, we’ve adapted the original SAC algorithm, which was designed for continuous-action locomotion tasks, to support all of the features you’re used to in ML-Agents – Recurrent Neural Networks (memory), branched discrete actions, curiosity, GAIL, and more. In our previous experiments, we demonstrated that for a complex level of Snoopy Pop (Level 25), we saw a 7.5x decrease in training time going from a single environment (i.e., v0.7 of ML-Agents) to 16 parallel environments on a single machine. This meant that a single machine could be used to find a basic solution to Level 25 in under 9 hours. Using this capability, we trained our agents to go further and master Level 25 – i.e., solve Level 25 to human performance. Note this takes a considerably longer time than simply solving the level – an average of about 33 hours. Here, we declare an agent to have “mastered” a level if it reaches average human performance (solves the level at or under the number of bubbles a human uses) over 1000 steps. For Level 25, this corresponds to 25.14 steps/bubbles shot, averaged from 21 human plays of the same level. We then tested each improvement from v0.9 and v0.10 incrementally, measuring the time it takes to exceed human performance at the level. All in all, they add up to an additional 7x speedup to mastering the level! Each value shown is an average over three runs, as training times may vary between runs. Sometimes, the agent gets lucky and finds a good solution quickly. All runs were done on a 16-core machine with training accelerated by a K80 GPU. 16 instances were run in parallel during training. For the GAIL experiments, we used the 21 human playthroughs of Snoopy Pop as demonstrations for training. Note that the bubble colors in Level 25 are randomly generated, so in no way do the 21 playthroughs cover all possible board configurations of the level. If they did, the agent would learn very fast by memorizing and copying the player behavior. We then mixed a GAIL reward signal with the one provided by the Snoopy Pop game, so that GAIL can guide the agent’s learning early in the process but allow it to find its own solution later.

    Configuration                      Time to Reach Human Performance (hours)   Sample Throughput (samples/second)
    Parallel Environments (v0.8)       34:03                                      10.83
    Asynchronous Environments (v0.9)   31:08                                      14.81
    GAIL with PPO (v0.9)               23:18                                      14.51
    SAC (v0.10)                        5:58                                       15.04
    GAIL with SAC (v0.10)              4:44                                       15.28

Let’s visualize the speedup in graph format below. 
We see that the increase in sample throughput by using asynchronous environments results in a reduction of training time without any changes to the algorithm. The bigger reductions in training time, however, come from improving the sample efficiency of training. Note that sample throughput did not change substantially between ML-Agents v0.9 and v0.10. Adding demonstrations and using GAIL to guide training meant that the agent used 26% fewer samples to reach the same training behavior, and we see a corresponding drop in training time. Switching to Soft Actor-Critic, an off-policy algorithm, meant that the agent solved the level with 81% fewer samples than vanilla PPO, and additional improvement is seen by adding GAIL to SAC.These improvements aren’t unique to the new reward function and goal of reaching human performance. If we task SAC+GAIL with simply solving the level, as we had done in our previous experiments, we are able to do so in 1 hour, 11 minutes, vs. 8 hours, 24 minutes.If you’d like to work on this exciting intersection of Machine Learning and Games, we are hiring for several positions, please apply!If you use any of the features provided in this release, we’d love to hear from you. For any feedback regarding the Unity ML-Agents Toolkit, please fill out the following survey and feel free to email us directly. If you encounter any issues or have questions, please reach out to us on the ML-Agents GitHub issues page.
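As a rough sketch of the reward mixing described above – the actual ML-Agents trainers implement this on the Python side, and the weight values here are hypothetical – the GAIL reward is simply weighted and summed with the game’s own reward, with the game’s reward weighted more heavily so the agent can eventually outgrow the demonstrations:

    static class RewardMixing
    {
        // Illustrative only: blend the environment (game) reward with the GAIL imitation reward.
        // Keeping envWeight larger than gailWeight lets the demonstrations guide early learning
        // while the agent is still incentivized to find the game's own reward later on.
        public static float CombinedReward(float envReward, float gailReward)
        {
            const float envWeight = 1.0f;   // hypothetical strength values
            const float gailWeight = 0.1f;
            return envWeight * envReward + gailWeight * gailReward;
        }
    }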

>access_file_
1430|blog.unity.com

The new Asset Import Pipeline: Solid foundation for speeding up asset imports

As of 2019.3, the new Asset Import Pipeline is the default for new projects, aiming to save you time with faster platform switching and laying the foundation for faster imports. We’re also making the asset pipeline scale better for very large projects. Read on to find out more about the new improvements, and our motivation and considerations. Whenever you put a new asset into your project, it doesn’t actually become a part of the project until the Asset Import Pipeline discovers and imports it. The correct detection of the project state is what the Asset Import Pipeline is responsible for while allowing you to query for this state through various APIs. In 2017, the Asset Import Pipeline rewrite work began to pave the way towards a more robust and scalable approach, while also addressing a number of pain points reported by you in your daily workflows. With Unity 2019.3 (now available in beta), the new Asset Import Pipeline (also known as Asset Import Pipeline V2) will be the default implementation for new projects. Older projects can choose to upgrade to the new Asset Import Pipeline to get the benefits of this new system. Now is a good time to share some of the thinking behind the new pipeline. Specifically, we want to share the considerations taken to make sure the new system is compatible with the existing APIs so that scripts will not have to be re-written when upgrading to the new Asset Import Pipeline.There are numerous workflows that are part of a daily development cycle. We have identified the most time-consuming issues and have implemented solutions for them. Importing assets can take a long time. Converting the source data to a format that the Unity Editor, or a platform, is ready to utilize is not a trivial process. For example, importing a complex 3D model requires a large number of computations, and when combined with animation this time can quickly grow. To address this there are 3 key concepts which need to be addressed as part of the solution:With most types of assets, Unity needs to convert the data from the source file, depending on the target platform for your projects. The result will vary between compatible GPU formats such as PVRTC, ASTC or ETC. This is because most file formats are optimized to save storage, whereas in a game or any other real-time application, the asset data needs to be in a format ready for immediate use by hardware, such as the CPU, graphics, or audio hardware. For example, when Unity imports a PNG image file as a texture, it doesn’t use the original PNG-formatted data at runtime. Instead, when the texture is imported, Unity creates a new representation of the image in a different format which is stored in the project's Library folder. This imported version is what is used by the Texture class in the engine, and uploaded to the GPU for real-time display. This is referred to as the Import Result.We need to know that both you and I get the same Import Results in the same exact format when we import, even when we’re using different hardware. The principle of getting the same output for a given input is what we call determinism.The Asset Import Pipeline keeps track of all the dependencies for each asset and keeps a cache of the imported versions of all the assets. An asset's import dependencies are all the data that could influence the import result. 
This means that if any of your asset's import dependencies are changed, the cached version of the imported asset becomes outdated, and the asset needs to be re-imported to reflect those changes.There are different situations where importing can take a long time. We’ve identified the following two workflows and implemented two solutions to address the above issues: Fresh Project Import and Fast Platform Switching.When you set up a project for the first time, it’s essentially the same as when the Library folder is deleted. This means that every asset in the assets folder needs to be enumerated and imported by the Asset Import Pipeline. This is naturally an expensive operation. However, by ensuring that our import process is deterministic and stable across machines, the time it takes to retrieve import results can be reduced by many orders of magnitude, depending on the size of the Source Asset and the size of the Import Result. We achieve this by using the new Unity Accelerator which caches import results on the cloud from anyone who is connected to it, thus allowing you to directly download the import results from a server rather than having to go through the heavy processing which importing an asset would entail.Up until Unity 2019.2 (with the original Asset Import Pipeline), the Library folder was comprised of the GUIDs of Assets being their filename. Thus, switching from a platform to another platform would invalidate the Import Result in the Library folder, causing it to be re-imported every time you switch platforms.If you had to switch back and forth between platforms multiple times per day, this could easily take up hours, depending on your project size.Some of you have figured out workarounds for this, such as having a copy of your project per platform on your machine, but that doesn’t scale very well.With the new Asset Import Pipeline, we’ve removed the GUID to File Name mapping. Since dependencies for a particular Asset are tracked, we are able to Hash them all together to create a revision for the Import Result of an Asset. This allows us to have multiple revisions per Asset, which then means that we are no longer bound by a GUID to File Name mapping. Not having this requirement allows us to have Import Results which work across different configurations. For Fast Platform Switching, we could have an Import Result per platform, so that when you switch platforms back and forth the Import Result is already there, thus making the platform switch many orders of magnitude faster than with the Asset Import Pipeline V1.As you make changes to assets, Unity generates a number of new files that will get generated. This will take up more storage space on your disk. However, the way we have decided to approach this issue is to remove unused Import Results when Unity restarts. We keep track of the latest import result per platform so that Fast Platform Switching can still take place while older Import Results are removed, thus helping you free up some of your disk space.The new Asset Import Pipeline is available with Unity 2019.3 beta. If you have an existing project, you can upgrade to the new Asset Import Pipeline using the Project Settings Window in the Editor:Selecting Version 2 will tell the editor you now want to use the new Asset Import Pipeline together with this project, and restarting your project will re-import it using the new Asset Import Pipeline code. This essentially has the same effect as deleting your Library folder, but without deleting it. 
When switching to use Asset Import Pipeline V2, the Import Results from the original Asset Import Pipeline are not deleted as V2 creates its own folder structure to store its Import Results.Projects that you’ve created in Unity 2019.2 or older will still use the original Asset Import Pipeline by default. When opening such a project in Unity 2019.3 for the first time, you'll get an option to upgrade to the new Asset Import Pipeline. If you decline, your project will continue using the original Asset Import Pipeline. Furthermore, the selected version is stored in the EditorSettings.asset file of your project, so it can be version controlled.When creating a new Project with Unity 2019.3 or newer, the new Asset Import Pipeline has become the new default way to work. All new projects you create will be using it.At Unite Copenhagen 2019, our team presented two talks. My talk is a general introduction to the topics covered in this blog post and can guide your decision-making for your own Asset Management strategies. My colleague Jonas Drewsen talked about the upcoming features directed towards making the asset pipeline more extensible and ensuring project stability:Get the Unity 2019.3 beta and try out the new Asset Import Pipeline. We’re looking forward to hearing what you think on the forum! You can also get in touch with me on Twitter if you have further questions.
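Conceptually, the revision-per-Import-Result idea described above boils down to hashing everything that can influence an import. The sketch below is our own simplified illustration of that idea, not Unity’s actual implementation:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class ImportResultCache
    {
        // Conceptual sketch only: derive a cache key from everything that can influence an
        // import, so each platform/dependency combination gets its own revision of the
        // Import Result instead of overwriting a single file per GUID.
        public static string RevisionKey(string sourceFileHash, string[] dependencyHashes,
                                         string importerVersion, string targetPlatform)
        {
            var builder = new StringBuilder();
            builder.Append(sourceFileHash);
            foreach (var dep in dependencyHashes) builder.Append(dep);
            builder.Append(importerVersion);
            builder.Append(targetPlatform);

            using (var sha1 = SHA1.Create())
            {
                byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(builder.ToString()));
                return BitConverter.ToString(hash).Replace("-", "");
            }
        }
    }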

>access_file_
1431|blog.unity.com

How to use URL handlers and OpenURL safely in your Unity app

The Unity Security team focuses on helping Unity creators build more trustworthy games and applications. Tune in to this blog series for tips, techniques, and recommendations for creating more secure games and apps with Unity. Today we are launching an ongoing blog series about developing securely with Unity. This series will provide content that Unity developers can apply directly in their games and applications. We hope to cover a variety of topics ranging from basic to advanced knowledge, focused on best practices when using Unity products and services. If there’s a subject you’d like to read about, let us know. We look forward to your feedback. The primary focus of this blog is an overview of URL handlers. URL and file handlers associate file types with the installed program that can open the specified file, but they come with risks. For example, when you’re on your local machine and double-click to open a PDF file from your local drive, your operating system refers to its list of file handlers and selects the program assigned for that file type, so your PDF is opened by a program that can display it correctly. File handlers commonly use the file extension (e.g., .pdf – the suffix at the end of the filename) to decide how to handle the file. A similar mechanism, the URL handler, decides how to open URLs based on the protocol prefix. An example of this would be the ubiquitous https:// protocol, which opens your default browser. Another example of a common URL scheme would be file://c:/windows/system32/drivers/gmreadme.txt; entering this URL in the Run dialog will cause Windows to open this license file in Notepad. URL handlers are a useful feature of your operating system that save users time when launching applications. However, this convenient mechanism may occasionally be unsafe. The Unity Editor and Unity Runtime support programmatic use of URL handlers, both through their use of the .NET Framework and through a specific Unity scripting API, namely Application.OpenURL. Game developers often use OpenURL so that when a player clicks a link in the game, it launches the local system’s web browser. However, if the game developer does not properly sanitize what is passed into Application.OpenURL, their player could be at risk. This scripting API is not inherently unsafe, but in any case where untrusted input is used as part of the URL that’s passed in, you need to take care. Untrusted input/data is any data that does not come from a trusted source. So then, what is a trusted source? Within the context of this article, only your endpoints with strict HTTPS enabled should be considered trusted. There are many examples of untrusted input. If you are designing an anti-cheat system, the player’s local file system should be considered untrusted. If you are developing a multiplayer game, all the players should be considered untrusted. There are other ways to protect data/input by leveraging things like public-private key encryption, but those are beyond the scope of this article. (Leave a comment if you’re interested in learning more about this.) While these handlers provide great convenience to users, they carry inherent risks. Here’s an example of unsafe use of Application.OpenURL:

    using UnityEngine;
    using System.Collections;

    public class VulnerableBrowserClass : MonoBehaviour
    {
        // Pass in URL from a link a player clicked on from our game forums
        void OpenBrowser(string url_from_chat)
        {
            Application.OpenURL(url_from_chat); // <-- Badness here; value isn't sanitized
        }
    }
Figure 1. Example of unsafe use of Application.OpenURL. In this example, the in-game commenting system allows users to share links; when a user clicks a link, the VulnerableBrowserClass.OpenBrowser function is called. Figure 2. Sample scenario with a potentially dangerous link. You can see how easy it is to send an unsuspecting user a link to a potentially dangerous application (Figure 2). If that URL is passed directly to Application.OpenURL, as shown in Figure 1, the victim's machine will immediately run the application at that link, potentially allowing an attacker to take control of the victim’s system. In the image above, the attacker could format the link to show up as https://SuperLeetCheats.com/VulnTheGame in the chat window, but have the actual link go to their malware at: file://leethaxorz.net/super_malware.exe. The problem here isn’t that users can send each other links; the problem lies in taking the links sent by a user (potentially the attacker) and passing them directly to Application.OpenURL without any validation or sanitization, as seen in the code sample above (Figure 1). Without that sanitization, clicking the link above would cause the Unity Editor to hand the file directly to the target player’s OS, likely resulting in execution of the attacker’s malware. The safest way to use Application.OpenURL is to never use it with any untrusted data. Use it only to open URLs that come from your developers or servers, and over a trusted transport (i.e., HTTPS). If you use remote configurations (e.g., you host a list of content URLs for new updates), then ensure this data is retrieved only via HTTPS, with strict enforcement. Always retrieve remote content in this manner. Note: HTTPS won’t fix any vulnerabilities in your app due to untrusted/unsanitized input as described in the attack above. It will, however, ensure the data you send to your player hasn’t been tampered with during transport. If you’ve decided that you absolutely need to use OpenURL with data from untrusted sources, then you must do your best to sanitize the input you receive from the untrusted source. There are a few ways to do this, such as with regex pattern matching, building URLs via .NET libraries, or leveraging external sanitization libraries. However, none of these mitigations will work 100% of the time, and no matter what solution you choose, some potential risk is assumed if Application.OpenURL (and similar functions) is used with untrusted data. Further, as shown in Figure 2 above, there’s no way for users to know what URL is behind that link. At a minimum, give users a prompt with the full URL they’re about to visit. But you should not consider this a robust solution, as users are known to blindly click through any prompt put in front of them. Using OpenURL and file handlers is very common for developers, particularly with rich media applications and social media-like features, such as in-game chat, reviews, and comments, where users typically want to share content that resides outside the game on the internet. Further, there are common productivity scenarios, such as editing a config file, where you may want to pass a link to the OS, opening the user’s preferred code editing application as a convenience to the user. Application.OpenURL is a platform-independent API to support file handlers, saving Unity developers from having to write their own handlers for every platform. This issue is not unique to Unity. As described above, this is common functionality in most operating systems and is supported by many languages and frameworks. 
Be mindful of the use of the Windows API Windows.System.Launcher.LaunchUriAsync (for Universal Windows Platform [UWP] apps), or the dreaded System.Diagnostics.Process.Start; both of these native .NET APIs provide the same functionality as Application.OpenURL. LaunchUriAsync allows for launching applications from within Windows’ secure application sandbox, and Process.Start can be used to launch any executable on the local system. Further, some native OS calls provide the same functionality, such as Apple’s open(_:options:completionHandler:). All of these types of APIs can be easily abused if untrusted, unsanitized inputs are passed into them. We will be posting articles here periodically on topics critical to practicing and maintaining security best practices when developing with Unity. Upcoming topics include secure transport for game data and democratizing the secure software development lifecycle (SSDLC). We’re also working to open source some of our internal guidance and security tooling. Is there a security topic you’d like to know more about in a future article? Drop us a line! Find out more about Unity Security, including security advisories.
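As a counterpart to the unsafe code in Figure 1, here is a minimal sketch of one possible validation approach: an allow-list of trusted HTTPS hosts. The host names are hypothetical, and as noted above, no sanitization scheme is foolproof:

    using System;
    using UnityEngine;

    public class SaferBrowserClass : MonoBehaviour
    {
        // Hypothetical allow-list: only links to hosts we control are ever opened.
        static readonly string[] trustedHosts = { "example.com", "forums.example.com" };

        void OpenBrowser(string urlFromChat)
        {
            // Only absolute, well-formed HTTPS URLs on trusted hosts are passed to the OS.
            if (Uri.TryCreate(urlFromChat, UriKind.Absolute, out Uri uri)
                && uri.Scheme == Uri.UriSchemeHttps
                && Array.IndexOf(trustedHosts, uri.Host) >= 0)
            {
                Application.OpenURL(uri.AbsoluteUri);
            }
            else
            {
                Debug.LogWarning("Blocked untrusted URL: " + urlFromChat);
            }
        }
    }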

>access_file_
1433|blog.unity.com

Enter Play Mode faster in Unity 2019.3

Play Mode is at the core of what makes Unity fun to work with. But as your projects get more complex, it can take a while to get started. The faster you can enter and exit Play Mode, the faster you can make and test changes. That's why we're introducing Configurable Enter Play Mode in Unity 2019.3 beta as an experimental feature.

Currently, when you enter Play Mode in the Editor, Unity does two things: it resets the scripting states (Domain Reload) and reloads the Scene. This takes time, and the more complex your project gets, the longer you need to wait to test out new changes in Play Mode. Starting with Unity 2019.3 beta, however, you'll have an option to disable either, or both, of the "Domain Reload" and "Scene Reload" actions. Based on our test results, this can save you up to 50–90% of waiting time, depending on your project.

When you enable Enter Play Mode Options in File > Project Settings > Editor, you'll see that the options to reload Domain and reload Scene become available. Check out How to configure Play Mode in the documentation for more details. These options allow you to disable Domain and/or Scene reloading from the Enter Play Mode process when there are no code changes. You can also access this feature through an API and a callback if you want to reset the game state before entering Play Mode (a minimal editor-script sketch appears at the end of this post). The diagram below shows the Enter Play Mode process before and after you disable Reload Domain and Reload Scene. See more details on the processes that Unity goes through when entering Play Mode in the documentation.

Note that this feature is currently experimental and not all Unity packages are validated to work with disabled Domain and Scene Reloading. Please let us know on the forum if you run into any issues!

As you can see, avoiding Domain Reload is very simple, but it comes at a cost. You need to make adjustments to static fields and static event handlers in your scripts to ensure your scripting states reset correctly when Play Mode is entered. The following code example has a counter which goes up when the player presses the Jump button. When Domain Reloading is enabled, the counter automatically resets to zero when entering Play Mode. After you disable Domain Reloading, the counter doesn't reset; it keeps its value in and out of Play Mode. This means that on the second run of your Project in the Editor, the counter might not be at zero if it changed in the previous run.

using UnityEngine;

public class StaticCounterExample : MonoBehaviour
{
    // This counter will not reset to zero when Domain Reloading is disabled.
    static int counter = 0;

    // Update is called once per frame
    void Update()
    {
        if (Input.GetButtonDown("Jump"))
        {
            counter++;
            Debug.Log("Counter: " + counter);
        }
    }
}

Use the [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.SubsystemRegistration)] attribute, and reset the value explicitly to make sure the counter resets correctly when Domain Reloading is disabled. Example:

using UnityEngine;

public class StaticCounterExampleFixed : MonoBehaviour
{
    static int counter = 0;

    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.SubsystemRegistration)]
    static void Init()
    {
        Debug.Log("Counter reset.");
        counter = 0;
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetButtonDown("Jump"))
        {
            counter++;
            Debug.Log("Counter: " + counter);
        }
    }
}

After you've disabled Domain Reloading, Unity won't unregister methods from static event handlers when you exit Play Mode. This can lead to complications if you have code that registers methods with static event handlers.
For example, on the first Play of your project in the Editor, methods would be registered as normal. However, on the second Play of your project, those methods would be registered a second time in addition to the first, and would therefore be called twice when the event occurs. The following code registers a method with the static event handler Application.quitting:

using UnityEngine;

public class StaticEventExample : MonoBehaviour
{
    void Start()
    {
        Debug.Log("Registering quit function");
        Application.quitting += Quit;
    }

    static void Quit()
    {
        Debug.Log("Quitting!");
    }
}

When Domain Reloading is disabled, the above example adds the `Quit` method again each time you enter Play Mode. This results in an additional "Quitting" message each time you exit Play Mode. Use the [RuntimeInitializeOnLoadMethod] attribute, and unregister the method explicitly so that it isn't added twice:

using UnityEngine;

public class StaticEventExampleFixed : MonoBehaviour
{
    [RuntimeInitializeOnLoadMethod]
    static void RunOnStart()
    {
        Debug.Log("Unregistering quit function");
        Application.quitting -= Quit;
    }

    void Start()
    {
        Debug.Log("Registering quit function");
        Application.quitting += Quit;
    }

    static void Quit()
    {
        Debug.Log("Quitting the Player");
    }
}

See more details on modifying your scripts to perform correctly when Domain Reload is disabled in our documentation. We would like to make sure that popular Asset Store packages work with disabled Domain and Scene Reloading. You can help us by reporting any problems you encounter in your projects to the publishers of your asset packages. We believe that if your project is currently slow to enter Play Mode, this feature will speed things up significantly. Join Unity 2019.3 beta and try it out – we're looking forward to hearing what you think on the forum! Since this feature is experimental, you can still help us shape it so that it fits your needs. We're especially looking forward to hearing about any issues that you come across. Huge thanks to forum users @Sini, @chrisk, @Peter77, and @Baste, who have already helped the whole community out by testing this feature and providing invaluable feedback.
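As mentioned above, the Enter Play Mode Options can also be driven from script. The following is a minimal editor-script sketch of that idea; the EnterPlayModeSetup class name is ours, and it assumes the 2019.3 EditorSettings.enterPlayModeOptions API together with the EditorApplication.playModeStateChanged callback.

using UnityEditor;
using UnityEngine;

// Editor-only sketch: place in an Editor folder. Enables the experimental
// Enter Play Mode Options and resets custom state right before playing.
[InitializeOnLoad]
public static class EnterPlayModeSetup
{
    static EnterPlayModeSetup()
    {
        // Skip both Domain Reload and Scene Reload when entering Play Mode.
        EditorSettings.enterPlayModeOptionsEnabled = true;
        EditorSettings.enterPlayModeOptions =
            EnterPlayModeOptions.DisableDomainReload |
            EnterPlayModeOptions.DisableSceneReload;

        // Reset game state ourselves, since the domain is no longer reloaded.
        EditorApplication.playModeStateChanged += OnPlayModeStateChanged;
    }

    static void OnPlayModeStateChanged(PlayModeStateChange change)
    {
        if (change == PlayModeStateChange.ExitingEditMode)
        {
            // Put any custom state reset here (static fields, caches, etc.).
            Debug.Log("About to enter Play Mode - resetting state.");
        }
    }
}

This mirrors what the Enter Play Mode Options checkbox in Project Settings does, just from code, which can be handy for sharing a consistent setup across a team.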

>access_file_
1436|blog.unity.com

Customizing Snaps Prototype assets with ProBuilder

We recently introduced Snaps, asset packs designed to help bring your projects to life. Read on to learn how you can benefit from using Snaps to lay out your levels quickly and easily and how to modify the assets to use them in your game.Snaps Prototypes are modular prototyping assets created entirely with Unity’s ProBuilder 3D modeling package. They are designed to snap to a grid using the ProGrids system. Built to real-world scale, they make it easy for both novice and intermediate-level designers to lay out game environments. You can substitute the prototype assets with high-detail art assets later.Several Snaps asset packs, with different themes, are currently available on the Asset Store. Check back for more soon. We want you to be able to prototype anything.These low-polygon assets are designed to simplify level prototyping by making it modular. You have full control over how you use them. This can save you a lot of time as you no longer need to create your own 3D assets or use external digital content-creation tools to modify them.The assets are lightweight and do not contain any textures. Instead, parts of the meshes have different materials assigned to the topology. However, they are UV unwrapped, making it easy for you to texture them if you wish to do so. Alternatively, after getting your level layout in place, you can replace Snaps Prototype assets with highly detailed meshes of your own.Before we begin working with our assets, let’s take a brief look at how everything is organized within a typical Snaps package.Each Snaps asset pack comes with a script that will automatically download the ProBuilder and ProGrids packages if they are not already included in your project.All of the prototyping assets are within the AssetStoreOriginals folder, under _SNAPS_PrototypingAssets. Here, you will find an About folder containing some necessary information about Snaps, as well as all of your assets. Your assets are categorized according to their respective package names (e.g., ModernOfficeInterior or SciFi_Industrial).Each Snaps Prototype package contains a Prefabs folder, which has all of the 3D meshes with their assigned materials ready for you to place around the scene. There is a Materials folder with material files assigned to different areas of each model, as well as a SampleScenes folder. The SampleScenes folder contains examples of how the assets can be laid out in an environment.Without further ado, let’s see how we can quickly put something together from scratch.It’s time to create a new scene and see how we can arrange some Snaps assets into a level!We’ll use two of the Snaps packs currently available on the Asset Store – the Sci-Fi/Industrial and Office packs – to prototype some futuristic living quarters. This will also serve as inspiration to develop our own prototyping props down the line.Once you have imported Snaps into a project, along with ProGrids and ProBuilder, you will notice some new UI elements appear in the top left corner of your Scene view. This set of icons is part of the ProGrids package – it contains all the tools you need to start snapping your models together quickly. Using the Grid Visibility icon in the toolbar, you can turn on the grid display. Also, with the X, Y, Z, and 3D icons below, you’re able to choose the axis on which you want to render the grid. Just below it is the button that controls grid snapping. With it enabled, if you try to move objects in your scene, they will move within the specified snapping interval. 
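As mentioned earlier in this post, each Snaps pack ships with a script that automatically downloads ProBuilder and ProGrids if they're missing. That exact script isn't reproduced here, but a rough sketch of such a bootstrap, built on the Package Manager scripting API, could look like this (the SnapsDependencyBootstrap class name and the com.unity.probuilder / com.unity.progrids package identifiers are our assumptions):

using System.Collections.Generic;
using UnityEditor;
using UnityEditor.PackageManager;
using UnityEditor.PackageManager.Requests;
using UnityEngine;

// Editor-only sketch: requests the ProBuilder and ProGrids packages one at a
// time. Place in an Editor folder. The package identifiers are assumptions.
public static class SnapsDependencyBootstrap
{
    static readonly Queue<string> Pending = new Queue<string>(new[]
    {
        "com.unity.probuilder",
        "com.unity.progrids"
    });

    static AddRequest current;

    [InitializeOnLoadMethod]
    static void EnsurePackages()
    {
        EditorApplication.update += Poll;
    }

    static void Poll()
    {
        // Start the next request once the previous one has finished.
        if (current == null)
        {
            if (Pending.Count == 0)
            {
                EditorApplication.update -= Poll;
                return;
            }
            current = Client.Add(Pending.Dequeue());
            return;
        }

        if (!current.IsCompleted)
            return;

        if (current.Status == StatusCode.Success)
            Debug.Log("Package ready: " + current.Result.packageId);
        else
            Debug.LogWarning("Package install failed: " + current.Error?.message);

        current = null;
    }
}

Requests are queued one at a time here to avoid overlapping Package Manager operations.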
You can optionally enable scale and angle snapping too. Check out the ProGrids documentation if you want to learn more.Let’s put our grid to use. To do that, we can grab one of the floor tiles from the Sci-Fi/Industrial pack and drag it into the scene. Then use the Push to Grid button to match our new tile to our grid, with the corresponding snapping increment (1 by default). Make sure that Snapping is turned on. Now, if we attempt to move our tile around the scene, it will always snap in increments of one unit. You will also notice that the tile’s pivot is set to one of its corners, making sure it will always snap with other tiles. Let’s add some more tiles, laying out the base floor of our level.In much the same manner, we can place walls, doors, and stairs. Once we get to positioning our props, we might need to change the snapping increment from 1 to something smaller, to give us more freedom over their placement. Alternatively, we could go ahead and place the props by hand, using Unity’s default snapping tools to snap them to surfaces where needed.And that’s how simple it is to start using Snaps. Have fun laying out your new game environments! Next, we’ll learn about editing Snaps assets and creating new ones using ProBuilder.Once you have your level layout in place, you might want to replace the low-poly meshes with some high-resolution ones, or vice versa, for faster iteration. You can do so very easily with this downloadable tool.You can access the script under Snaps → Snaps Swap Tool.In the script options, you can pass in a Prefabs folder containing your Snaps Prototype assets. In the case of this script, the Prefab filenames need to match the filenames of the Snaps assets. Once you point the script to the Prefabs, you can simply replace the objects you selected, or all the objects in the scene, with a single click.You can also create customized Nested Prefabs, using Snaps Prototype assets. The script can help you automatically generate high-resolution Prefabs that correspond to them.So, what happens if we want to alter one of the props inside the pack slightly? Perhaps give it a different material, or change the topology? Typically, you’d need experience with an external 3D modeling package to do that. However, since Snaps assets are made with ProBuilder, we can use it to edit any of our existing props – or create new ones – without leaving the Editor.ProBuilder is a completely free 3D modeling and level editing package, available directly within the Unity Editor. The installation steps for ProBuilder can be found at the beginning of this blog post. ProBuilder offers a vast array of 3D modeling tools and is especially useful for prototyping. It is also compatible with ProGrids, making it easy to create precisely positioned geometry.We won’t be covering the entirety of the ProBuilder toolkit here, but you can explore detailed documentation and many tutorial videos on the topic. If you want to get into 3D modeling or find an easier way to create level layouts in Unity, that’s the perfect place to start.To edit any of our existing meshes, we will need to convert it back to the ProBuilder editable format first. To do so, we need to select our mesh in the scene while we are in Object mode.Then, go to Tools → ProBuilder → Object → ProBuilderize. This will open a new ProBuilder toolbar at the top of the Scene view and will enable us to edit the mesh.We will also need to open the ProBuilder window to access the rest of the 3D modeling toolkit. 
To do so, go to Tools → ProBuilder → ProBuilder Window, and dock it somewhere handy.One of the features of ProBuilder is the Material Editor. This is separate from the main ProBuilder toolkit; you can find it under Tools → ProBuilder → Editors → Open Material Editor. ProBuilder has many other useful editors, such as the UV and smoothing editors, which you would typically find in a 3D modeling package. For now, let’s focus on changing the materials for our object.Since we’re going for a futuristic look, the color scheme of the assets from the office pack doesn’t fit very well. Let’s change it. First, ProBuilderize one of the props – I chose the desk. Now, let’s take a look at our Material Editor. Here, we can select an existing material and assign it to either the entire object or only the selected polygons. We can also assign some materials to hotkeys.Let’s quickly create a matte white material for the desk and a black metallic material for the legs. I can now assign both of these to different hotkeys. After this, let’s go into ProBuilder’s face mode by selecting the corresponding icon in the top toolbar of our Scene view.Let’s select all faces of the desk by going around the desk and selecting with Shift. After that, press either Assign Material or your designated key combination. You can then do the same with your metallic material for all the faces of the desk’s feet.You can also add some emissive materials for the computer screens, or just change the materials initially assigned to the computer screens.So now we know it’s super easy to reassign materials for Snaps with ProBuilder. You can also learn from other Unity users; this video shows you how to use the existing UVs and the UV editor to map a texture to your mesh, walking you through the complete creation process for two simple game props – a crate and a barrel.What if we want to change the geometry of one of the meshes in the level? There is currently an office chair in my level, but it’s not very futuristic. Let’s see how we can turn it into a glorious gaming chair instead.After looking at some references of what those look like, we get an idea of the general shape of our object. Let’s start simple: the back of our chair needs to be a lot taller, as it will have a rounded headrest. There will also be two holes in the backrest, near where the headrest pillow would be. The sides of the backrest are wing-shaped, and the bottom of the back needs to go down all the way to the seat.Let’s start by working on bigger shapes by going into face mode and selecting the top and bottom polygons of the backrest. Then, we can switch to the scale tool and scale-up on the y-axis. This will pull our top and bottom polygons further apart from each other. After that, we can use the move tool to do some micro-adjustments on the faces individually.Tip: The axis in which your scale, move, and rotate tools operate in the context of ProBuilder depends on whether you are in Local or Global transform mode, and whether your handle is at the Center of your selection or the Pivot of the last selected element, like a face or an edge. Generally, it is best to stay in Local transform mode, so that the axes are local to the normals of your selected objects, and to use the Center of your selection as your tool handle placement, to ensure that all of your selected objects are affected by the tool symmetrically.One thing that we can notice is that we don’t have enough detail on the top part of the chair to shape our headrest properly. 
To fix that, we can go into Edge mode, and use the Insert Edge Loop tool to add two new edge loops along the top of our chair. We can then grab the two new edges at the top of the back and move them up, which gives us a more rounded shape.Let’s get to work on the sides of the chair. We can grab two of our outside edges and scale them apart; however, we will need to add some new edge loops to round off our chair. We should also pull the wings slightly forward.Now it’s time to make the holes in the back. For this, we will need a few new tools. We can split some of our existing quad geometry into triangles, then delete those triangulated faces, and bridge the formed gap outline with new polygons.To begin, we will need to pull some of the geometry in the middle of our backrest upwards. After that, select four diagonally placed faces on both the front and the back of the chair, and press Triangulate Faces in our ProBuilder panel to see the result.As you can see, some of our new triangles are facing the wrong way. We can fix this by undoing our division and deselecting the faces which end up forming improperly oriented triangles. Then we triangulate again, on only one side of the mesh. After that, we can use the Flip Face Edge tool on the other set of polygons, and if we triangulate those faces now, you will see the edge has been flipped correctly.Now, select our new triangles and use Delete Faces to form a gap. Once complete, we can go around the newly formed mesh hole in edge mode, selecting pairs of faces from the front and the back and using the Bridge Edges tool.Earlier, we talked about ProBuilder’s smoothing editor. We can use it to make the corner edges of our chair appear slightly more rounded. Let’s bring up the Smoothing Editor by going to Tools → ProBuilder → Editors → Open Smoothing Editor.We can get rid of any existing smoothing groups on our chair by dragging a selection box around the object in Face mode (make sure Select Hidden is set to On) and pressing the Clear Smoothing Groups button in the Smoothing Editor window.Let’s start smoothing the parts of our chair that we wish to round off. First, we can select all the faces of our backrest, and add it to the same smoothing group by pressing on any of the numbers in our editor. There are no hard edges on the back of our chair, so we can safely do this, but make sure not to select the metal part of the chair base at the back.We can also do the same for the seat, and we can use the same smoothing group since the faces of the back and the seat are not adjacent.It also makes sense to add smoothing groups to other parts of the model that should appear rounded, like the hydraulic cylinder and the feet. Be sure to not assign the same smoothing groups in places where you want to keep a hard edge while smoothing both sides.And there we have it. Here’s the office chair we started with, next to the glorious gaming chair that we have in the end.You now have a good knowledge of using Snaps and editing them with ProBuilder. You might also want to explore how you can make your own props using these packages.Let’s do so by making a small alien-looking prop, which would function as a sort of control panel for our doors.We can begin by making a primitive in ProBuilder. To do so, let’s select the New Shape tool and open its configuration window by pressing the plus icon next to the tool’s name. 
This will open a dialogue box that lets us set dimensions and specify the type of our new shape.When deciding this, remember that all Snaps assets are made with real-world scale in mind. That means it can be helpful to think about how an object’s size will be relevant to, for example, the height of an average person. Here are the dimensions we went with.Now, the first thing we can do is make this object snappable. To do so, we will plant our pivot point at the base of the object and in the middle. This will mean that it always snaps to our grid. Alternatively, if you were making a floor or wall tile, you might want to put your pivot in the location of one of the corner vertices.For doing this, ProBuilder includes a handy Set Pivot tool, which will place the pivot at the center point of our current selection. Knowing this, we can select the bottom face of our new mesh, which will place our selection handle in the middle of it, and press Set Pivot to place it at the center-base of our object.We can start shaping our object. For now, let’s assign it a material and start working by scaling the top face of the mesh inwards since our prop will converge towards the top. We can then use the Extrude Faces tool to add some new geometry at the top and scale it inwards again.Tip: Did you know you can still use ProGrids snapping functionality when modeling? We can turn it on to place our newly extruded faces on the same height level as the top of our base mesh.After that, we can select the outer top edge loop of our base, and lower it slightly with the Move tool.Let’s extrude the top once more, and we will have the base shape of our object.It would be nice to add a glowing inlay to our prop. Let’s do that now. First, begin by adding two edge loops around either side of our mesh. Then add four more, one on each side of both of our new loops, as shown below.We can now select the two outer edge loops and scale them inwards to bring them towards the center; then do the same on the other side. Now, select all of the faces on the inner side of our edge loop, and in the Extrude menu, we can set them to extrude by Face Normal with a thickness of -0.01. This will give us a nice inlay on our mesh. Let’s make a new emissive material, and assign it to the inlay, for the result below.Currently, the top part of our prop is a bit boring, so let’s fix that. We can lower the outer vertices of the top surface. Before then, we can add some support edges with the Connect Vertices tool to ensure that the quads triangulate properly when we lower the vertices. After that, select the four outer vertices, and bring them down with the Move tool.Neat! One other thing we can do is add a floating element above our mesh. Let’s choose the four triangular faces currently at the top of our mesh, and press Detach Faces. This will give us a new ProBuilder mesh but will form a hole in our old one. To fix that, go back to the old mesh and use the Fill Hole tool by selecting the edges around our missing faces.Next, we can move our new detached mesh upwards and rotate it 90 degrees. Even though this element is floating, we want to keep it as part of the same mesh. To do so, we can merge our meshes by selecting them with Shift and using the Merge Objects tool.Let’s add some final touches by extruding from our new mesh twice more, and assigning an emissive material to it as well.And there we have the result, next to our door.We hope you found this guide useful.Learn more about Snaps Prototype and the other Snaps packs.
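One last aside, going back to the Snaps Swap Tool mentioned earlier: its source isn't shown here, but a name-matching replacement pass over the current selection could be sketched roughly like this (the menu path, the SnapsSwapSketch class, and the Assets/HighResPrefabs folder are hypothetical choices of ours):

using UnityEditor;
using UnityEngine;

// Editor-only sketch: replaces each selected object with a prefab of the same
// name found under a given folder (e.g., a folder of high-res replacements).
// Menu path and class name are hypothetical; place in an Editor folder.
public static class SnapsSwapSketch
{
    const string ReplacementFolder = "Assets/HighResPrefabs"; // assumption

    [MenuItem("Tools/Snaps Swap Sketch/Replace Selection")]
    static void ReplaceSelection()
    {
        foreach (GameObject original in Selection.gameObjects)
        {
            // Find a prefab whose file name matches the selected object's name.
            string[] guids = AssetDatabase.FindAssets(
                original.name + " t:Prefab", new[] { ReplacementFolder });
            if (guids.Length == 0)
            {
                Debug.LogWarning("No matching prefab for " + original.name);
                continue;
            }

            string path = AssetDatabase.GUIDToAssetPath(guids[0]);
            var prefab = AssetDatabase.LoadAssetAtPath<GameObject>(path);

            // Instantiate the replacement with the same parent and transform.
            var replacement = (GameObject)PrefabUtility.InstantiatePrefab(prefab);
            replacement.transform.SetParent(original.transform.parent, false);
            replacement.transform.SetPositionAndRotation(
                original.transform.position, original.transform.rotation);
            replacement.transform.localScale = original.transform.localScale;

            Undo.RegisterCreatedObjectUndo(replacement, "Swap Snaps asset");
            Undo.DestroyObjectImmediate(original);
        }
    }
}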

>access_file_
1439|blog.unity.com

Showcasing the world’s first photorealistic mixed reality demo by Varjo and Volvo

Get a behind-the-scenes look at an ambitious automotive project made with Unity from Varjo, the maker of industrial VR/XR headsets known for their superior visual quality in VR. The team at Varjo is behind some of the most innovative projects in the world of mixed reality. They previously shared a photogrammetry-based environment they created in VR with us, and today they will share how they, together with Unity and Volvo, broke new ground on a demo that brings the real and virtual worlds together like never before. Get a first-hand look at this project in person at Unite Copenhagen. Volvo and Varjo will be on-site showcasing this experience, as well as speaking in multiple sessions: "How Volvo embraced real-time 3D and shook up the auto industry," "Creating next-gen VR and MR experiences using Varjo VR-1 and XR-1," and "Future mobility, smart cars, and autonomous driving: Preparing for a new era at Volvo." The content below is courtesy of Varjo.

Mixed reality means blending virtual content with the real world. So far, mixed reality has been accomplished with optical see-through, where the user sees digital objects augmented on top of reality through a pair of glasses. This is fine for portraying infographics or playing games, but for realistic scenes, it offers little value. Optical see-through devices can't display black or opaque content on top of the real world. Everything appears hazy and holographic. We at Varjo wanted to get rid of this limitation and be able to render photorealistic, opaque content – where it is impossible to distinguish between what is real and what is virtual. Our mission was to make photorealistic mixed reality possible with video pass-through. Video pass-through means using cameras to digitize the world in real time, and then showing the combined result of real mixed with virtual to the user. Before we could achieve this, we first needed a VR headset capable of displaying the real world in human-eye resolution. That is why we released our first human-eye resolution product, VR-1, targeted at professional users, to the market in February 2019. And at Augmented World Expo 2019 in Santa Clara, we showed a glimpse of the magic that can be accomplished with video pass-through. We publicly demonstrated our new headset, the XR-1 Developer Edition, for the first time with a joint demo with Volvo, made with Unity. With XR-1, you can blend virtual content seamlessly with reality with extremely low latency and integrated eye-tracking, in superior resolution. Here's how the world's first photorealistic mixed reality demo was created. This video is unmodified material shot through the Varjo XR-1 Developer Edition. With XR-1, you can see photorealistic virtual content blending with reality in a full field of view. You can also switch seamlessly from XR to full VR.

Varjo began working on a video pass-through mixed reality headset in early 2018. The collaboration between Varjo and Volvo also started in spring 2018, as Volvo outlined the need for an XR headset that would allow them to test various elements of future cars – such as heads-up displays, new materials, and UI for infotainment systems – inside a real car while driving on a real test track. The high requirements on readability and low latency needed to drive a car on a test track pushed Varjo to succeed in product development. Given how well Unity already worked for the VR-1, it was a natural choice to try out how the virtual objects would appear in mixed reality.
The fact that Unity is easy to integrate and extend with C++ libraries, such as our own Varjo plug-in, made it possible for us to extend our plug-in to support mixed reality. By simply defining the empty background in a VR scene to be replaced by the video pass-through signal, we were quickly able to see virtual objects in a real environment. The close collaboration and fast iterations were made possible by Unity's ease of use, as our team was developing and improving pass-through while working hand in hand with our customer. A year later, the first public demonstration of XR-1 brought to life the capabilities of our technology, combined with Volvo's superior models and photorealistic Unity graphics. The demo illustrates the power of video pass-through mixed reality as opposed to optical see-through. The demo has the following steps:

1. Experience real reality
You see the real world around you through the XR-1 headset. The real world is streamed with <10 ms latency via the high-res cameras in the front plate. You see the world in a full field of view and at a high resolution with a 90 Hz framerate, which gives a sensation of not wearing any headset at all (i.e., seeing the real world with your own eyes). You can walk around and explore the real world freely.

2. Enter photorealistic mixed reality
A beautiful Volvo XC60 builds up in front of you. It first appears as a stylized transparent blue wireframe. The virtual car is anchored to the real floor in the room around you and oriented so that the chair in the booth is aligned with the driver's seat of the virtual car. The viewer can take a seat in the real chair and is still able to see the real surroundings through the wireframe. The car now turns into a solid model, and the surfaces go from transparent to opaque. The virtual car casts shadows on the floor of the real world, and looking at the car's surfaces, it is possible to see that the real world is reflected in them. The reflections come from an HDR cube map that was captured during setup on the exact spot of the car. The same cube map is also used for ambient lighting. This is the first time the viewer sees opaque mixed reality, and the effect is stunning. You can still see the real world and your colleagues through the windscreens. How it was accomplished: The car model was provided to Varjo by Volvo. Because the resolution of the headset and the detail of the car model were so high, we needed to do as much pre-processing as possible. The lighting was baked in the DCC tool to textures and multiplied in custom shaders. The baked textures only dealt with occlusion, and the shading is still affected by the skybox. Mattias Wilkenmalm from Volvo handled asset creation and wrote custom car paint shaders that delivered superior results. We simply modified them to get the look and transitions we needed. The final model is around 7 million polygons and has around 150 4K textures.

3. Switch seamlessly into virtual reality – and back
The viewer is then asked to step outside the car, and we transition to Venice. The last pieces of the real world around the viewer now disappear in a unique transition as the reality transforms into a virtual scene of Venice, where the car is parked in one of the alleys. The reflections in the car are now those of Venice, and the shadows of the car now land on the streets of Venice. After a while, we transition back from virtual to the real world. The user can now go around the virtual car and see all the details and reflections.
This shows that the XR-1 offers the ability to still interact with others and to select only the parts you want to virtualize. How it was accomplished: To make the transitions visually pleasing, Volvo's Timotei Ghiurau, Lead of Virtual Experiences and XR Research, used world-space 3D noise with alpha cutouts to bring in the car and the environment. This is fast to do in the fragment shader and it looks very cool – a perfect combination when dealing with tight deadlines. Noise functions can be fetched from the repositories of Unity's Keijiro Takahashi. To get a smooth transition for the car reflections, the Venice environment was added to a separate layer so that the real-time reflection probe was only rendering the minimal amount of geometry. The reflection probe was rendered at 30 frames per second while the scene renders at a much higher frame rate. Having the transition visible in the reflection probe added much more immersion to the scene. The fact that the XR-1 is the first headset of its kind to offer the ability to switch seamlessly from real reality into mixed reality, onwards into full VR, and back to reality again makes for a very impressive demo. It is a Matrix-like moment to see the surrounding reality disappear and be replaced by a virtual scene – and then to travel back.

---

Our thanks to Varjo for sharing this post with us. Join us at Unite Copenhagen to see the Varjo VR-1 and XR-1 Developer Edition firsthand. Get started developing XR applications today with the Unity Industrial Collection.
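A closing technical aside on the reflection-probe detail above: Varjo and Volvo's exact setup isn't shown in the post, but throttling a real-time reflection probe to roughly 30 updates per second via scripting could look something like this sketch (the ThrottledReflectionProbe class name and the 30 Hz rate are our assumptions):

using UnityEngine;
using UnityEngine.Rendering;

// Sketch: re-renders a real-time reflection probe at a fixed, lower rate than
// the main camera, so expensive probe updates don't run every frame.
[RequireComponent(typeof(ReflectionProbe))]
public class ThrottledReflectionProbe : MonoBehaviour
{
    [SerializeField] float updatesPerSecond = 30f; // assumed target rate

    ReflectionProbe probe;
    float nextUpdateTime;

    void Awake()
    {
        probe = GetComponent<ReflectionProbe>();
        // Take manual control of when the probe refreshes.
        probe.mode = ReflectionProbeMode.Realtime;
        probe.refreshMode = ReflectionProbeRefreshMode.ViaScripting;
        // Restricting probe.cullingMask to a dedicated layer (as described in
        // the post) keeps each probe render cheap.
    }

    void Update()
    {
        if (Time.time < nextUpdateTime)
            return;

        nextUpdateTime = Time.time + 1f / updatesPerSecond;
        probe.RenderProbe(); // queues a re-render of the probe's cubemap
    }
}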

>access_file_