// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 77 of 85

[ 2018 ]

20 entries
1524|blog.unity.com

Cinemachine for 2D: Tips and tricks

Have you been working on a camera system for your 2D game for ages and wished there was something like Cinemachine for 2D? Not many people know about it, but there already is! This blog post gives you some tips for getting the best out of Cinemachine, and shows how this tool can significantly speed up the development of your 2D game. Keep reading to find out more about Cinemachine Virtual Cameras, Confiners and more, specifically for use in 2D games.

You can get Cinemachine from the Package Manager within Unity if you are using any version greater than 2018.1.0b7. Go to Window > Package Manager > All, then select Cinemachine.

With Cinemachine, it's relatively easy to start creating your camera system for a 2D environment. Let's take a look at creating a Virtual Camera for 2D.

1. Create a 2D Virtual Camera by going to the menu bar and selecting Cinemachine > Create 2D Camera. This will create a Virtual Camera set up for a 2D environment. If it's your first Virtual Camera in the scene, it will also add a Cinemachine Brain component to your Main Camera.
2. Drag your player from the Hierarchy to the Follow target.
3. Make sure that nothing is in the LookAt target. If something is there, select it and press Backspace or Delete to remove the reference.
4. Adjust the Orthographic Size and Body settings to suit your needs.

The main difference you will notice between a 2D Virtual Camera and a 3D Virtual Camera is that the 2D camera uses a Framing Transposer. This special transposer follows a target on the camera's X-Y plane and stops the camera from rotating. For the Framing Transposer to work correctly, the Virtual Camera's LookAt target must be null. Another thing to note is that 2D games use an Orthographic view: when first creating your Virtual Camera, you will need to change the projection to Orthographic on the camera holding your scene's Cinemachine Brain.

An important thing to remember is that with Cinemachine you shouldn't try to make one camera do everything. Instead, you can have different Virtual Cameras around your scene and blend between them using the Cinemachine Brain. This blending can occur when the player is at low health, has entered a certain area, or in any other scenario you can imagine that requires a change of camera framing or post-processing. You can adjust the blend settings on the Cinemachine Brain depending on how you want the visuals to look. Check out the video below to see how you can adjust your Virtual Camera during Play Mode.

Another cool thing we can do with Cinemachine is use a boundary box to confine the Virtual Camera to a certain area. This feature is available in the extension section of the Virtual Camera. Below are the steps to create this effect.

1. Set up a boundary box, which will be used to confine the level:
   - Create an empty GameObject.
   - Add a CompositeCollider2D to the GameObject.
   - Set the CompositeCollider2D's Geometry Type to Polygons.
   - Set the Rigidbody on the GameObject to Static.
   - Add a BoxCollider2D to the GameObject.
   - Adjust the BoxCollider2D to fit your level.
   - Set the BoxCollider2D to be "Used by Composite".
2. Add the Confiner extension to your Virtual Camera.
3. Drag the boundary box we created earlier into the Bounding Shape 2D field.
4. Decide if you want the camera to confine to the screen edges; this can be adjusted with the "Confine Screen Edges" checkbox.
5. Finally, decide if you want the Confiner to have damping. This allows the camera to smoothly overlap with the edge of the confining collider. If you don't want this effect, set the Damping Time to 0.

See the results of this below (video example).
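For those who prefer to wire this up from a script, here is a minimal sketch of the same Virtual Camera and Confiner configuration, assuming the Cinemachine 2.x C# API (the menu workflow above is all you actually need):

```csharp
using Cinemachine;
using UnityEngine;

public class Create2DVcam : MonoBehaviour
{
    public Transform player;             // the Follow target
    public CompositeCollider2D boundary; // the boundary box built above

    void Start()
    {
        // The Main Camera needs a CinemachineBrain (the menu adds one for you).
        var vcam = new GameObject("CM vcam 2D").AddComponent<CinemachineVirtualCamera>();
        vcam.Follow = player;
        vcam.LookAt = null; // must stay null for the Framing Transposer

        // The Framing Transposer follows the target on the camera's X-Y plane.
        vcam.AddCinemachineComponent<CinemachineFramingTransposer>();

        // Confiner extension: keep the camera inside the composite collider.
        var confiner = vcam.gameObject.AddComponent<CinemachineConfiner>();
        confiner.m_ConfineMode = CinemachineConfiner.Mode.Confine2D;
        confiner.m_BoundingShape2D = boundary;
        confiner.m_ConfineScreenEdges = true;
        confiner.m_Damping = 0f; // raise for a smoother settle at the edges
    }
}
```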
Another cool feature from Cinemachine that we can use in 2D is the Group Camera. This camera allows us to create a target group for our camera to look at. A Group Camera is useful if you're doing a cutscene where you want to show something of importance, you want to keep more than one object in frame, or you want to create a local multiplayer game in 2D. In Cinemachine we can do this the following way:

1. Create a new 2D Virtual Camera.
2. Create a new GameObject in the scene.
3. Select Add Component > Cinemachine > Cinemachine Target Group.
4. Add the GameObjects that you want to focus on to the Target Group.
5. Drag the Target Group to the Virtual Camera's Follow Target property in the Inspector.
6. Adjust the minimum and maximum Orthographic Size (which in camera terms defines the 'zoom' of the camera) to your preferred setting.
7. Determine the type of Group Framing you want. 'Horizontal' only considers the horizontal frame dimension when framing the camera. 'Vertical' only considers the vertical frame dimension. 'Horizontal and Vertical' takes both dimensions into account.

A Target Group is a Cinemachine component that allows you to view multiple targets with the same Virtual Camera. It adjusts the camera's size to ensure that the targets are visible, depending on their weights. For example, if all of the targets have a weight of 1, the Target Group will ensure that all targets are visible. Here's an example of a Group Camera that focuses on the player and a chest.

With Cinemachine v2.2, we've added a new Impulse extension. This extension allows users to create camera shake effects without writing any code. Impulse can be added as an extension to your Virtual Camera. I'm going to walk you through how to add this to your scene:

1. Create a 2D Virtual Camera using the Cinemachine menu option and set it up to follow our target.
2. Click Add Extension > Cinemachine Impulse Listener.
3. On the Impulse Listener, ensure the checkbox named 'Use 2D Distance' is checked. The Channel Mask allows you to filter the impulses you want to listen to; in this example, we're going to use the default channel.
4. Select the GameObject to send the impulse from. In this example, we're going to use a ball that bounces: every time it hits the floor, it will broadcast an impulse signal on the default channel, and our Impulse Listener will pick it up.
5. Click Add Component and search for 'Cinemachine Collision Impulse Source'.
6. Under the 'Signal Shape' heading, select a signal for the Raw Signal variable. This is a 'NoiseSettings' profile; we can either use the default ones or create our own. Create a new 'NoiseSettings' profile by clicking on the gear icon and selecting 'New Noise Setting', then save it to your project. We can choose to have the noise affect Position and Rotation, or just one of them. Because it's for a 2D game, we'll affect the X and Y positions and the Z rotation. A good noise profile is unpredictable: all we want to do is replicate some of nature's randomness, which we can do by having multiple layers with different levels of detail. If we wanted to, we could add a gain to the Frequency and Amplitude that the noise profile applies.
7. Under the 'Spatial Range' heading, adjust the Dissipation Distance. This determines the range beyond which we no longer feel the impulse. Set it to 25 for now. If we wanted to, we could adjust the Dissipation Mode to change the decay type. There are other settings we can adjust, but for now, that's all we need.

You can see the result below.
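The Collision Impulse Source covers the no-code path described above. If you ever need to fire an impulse from a script instead, a minimal sketch using the Impulse API from Cinemachine 2.2 could look like this:

```csharp
using Cinemachine;
using UnityEngine;

// Attach to the bouncing ball, alongside a CinemachineImpulseSource
// whose Raw Signal is a NoiseSettings profile, as set up above.
[RequireComponent(typeof(CinemachineImpulseSource))]
public class BounceImpulse : MonoBehaviour
{
    CinemachineImpulseSource source;

    void Awake()
    {
        source = GetComponent<CinemachineImpulseSource>();
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        // Broadcast the impulse on the source's channel; any Impulse
        // Listener in range (with 'Use 2D Distance' checked) reacts,
        // scaled here by how hard the ball hit the floor.
        source.GenerateImpulse(collision.relativeVelocity);
    }
}
```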
So, to summarize, we now know how to:

- Get Cinemachine with the Package Manager
- Create a 2D Virtual Camera
- Confine a Virtual Camera to a 2D space
- Compose a Virtual Camera to follow multiple targets
- Set up an Impulse module to add camera shake

I hope you enjoyed this blog post and now feel ready to use Cinemachine in your next 2D project! Want to keep up to date with Cinemachine? Join the discussion on our forum.

>access_file_
1527|blog.unity.com

Unity Hackweek 2018: Creating X Together

When do you do your best creative work? At Unity, we know that when you're around people you trust, in a relaxed, friendly environment, and you have a chance to concentrate deeply, interesting things happen. Add a time limit and a sense of shared purpose, and you might just witness something pretty magical. That's why we gather our engineers together every year for a week of experimentation, collaboration and overall good times that we call Unity Hackweek.

The principle is simple: think of a project you want to do, find teammates, work on it for a week, present the result. What is special about the way we do Hackweek is the spirit of freedom, openness and collaboration. There's no central planning; all of the projects that people want to work on are listed in a simple Google Sheet.

To turn that wish list of projects into reality, we gathered in a small town in Denmark, around 90 minutes from Unity's original hometown of Copenhagen. The area faces the open sea and the huge bridge between the islands of Sjælland and Fyn. It really felt like the sky was the limit.

So what did everyone actually work on? Most of this year's projects revolved around learning new things, like ECS, AR, filmmaking or machine learning, or helping fellow developers, both our own engineers and all of you creators. Some great Unity features, like IL2CPP, the Progressive Lightmapper and the Profiler, started long ago as Hackweek projects. The vast majority of Hackweek experiments don't make it to the Unity roadmap, though. The point of Unity Hackweek is to try new approaches, free from the usual quality and workflow constraints we place on Unity code.

For Hackweek 2018, we mixed things up a bit and invited more than 50 external guests, mostly from partners such as Google Cloud, Nordeus and Zynga, but also some of our most enthusiastic and vocal community members. The majority of our guests were part of our Women in Gaming initiative. All of them were free to join any team, listen in on internal tech talks, network and share feedback.

"I've always wanted to go to Unity Hackweek! Compared to going to a conference it's been a lot more relaxing, very creative place to be," says Lotte May of LotteMakesStuff. She's been part of our ECS alpha group for a while and says that it's been invaluable to talk to the team face to face, instead of just over the usual Slack channel. She was part of the "low hanging fruit" group that focused on those tiny practical improvements that we know a lot of people need but, for some reason, we haven't implemented yet. "Touching Unity source code felt pretty magical! Even if what I made is just a proof of concept," she says. You can read about her ListDrawerAttributes project on Twitter.

Mark Mandel and Joseph Holley came to Unity Hackweek as guests from Google Cloud. You might remember that we just announced our strategic alliance with Google at Unite Berlin. Using Unity, Google Cloud Platform and Multiplay hosting, their team was able to turn the Hover Racer game from last year's Unite Austin Training Day into a multiplayer game with matchmaking in just two days. "It really helped that everyone who could answer our questions was in the same room, so we could move extremely quickly. But hopefully, this will soon be easy for anyone, thanks to our continuing collaboration!" says Mark Mandel, Developer Advocate for Google Cloud Platform.
You can learn more about what we're working on together in Mark's interview with Brett Bibby, our Vice President for Engineering, and Micah Baker, Product Manager for Gaming on Google Cloud Platform.

Their project was also one of the many explorations of our new model for writing high-performance code by default, the Entity Component System (ECS). Another was "ECSCraft", a small game with mining, crafting, and lots of data, designed to test how ECS can make a game like that run more efficiently. "Most of the team started with no knowledge of ECS, but in the end, we put together a prototype in just a few days," says Fabrice Lété from our core engineering team, who also gave a presentation on ECS for everyone at the start of the week.

Tove Brantberg from Ubisoft RedLynx, a UI programmer in her daily work, coded the procedural generation of the environments in the project. She was a first-time guest at Hackweek, coming from Finland. "Everyone here is interested in the same thing. So even though there's a lot of people, you can talk to anyone and you'll have something in common. That's such a really great feeling."

Morgan Paul (NaturalMotion / Zynga) also got their introduction to ECS from Fabrice's talk: "That absolutely helped. ECS represents a whole new way of thinking, so I had to move away from how I normally go about structuring code." They worked together with the developers of our upcoming small runtime to explore Unity for small things and ECS. The resulting game was just 330KB!

Morgan has a 1.5-year-old daughter, and going away for seven days would normally present a logistical challenge for their family. For the first time this year, however, we offered daycare at Hackweek. "The standard of care here is great! This option really brings down the stress of attending a professional event when having kids," says Morgan. The daycare also meant a lot to a couple who both work at Unity: they didn't have to pick who would get to go to Hackweek and who would stay home with the children. Taking part in Hackweek is such a big part of being in Unity R&D that it was only natural that we got some proper professionals to look after the little ones while their parents hacked away.

The daycare was also one of the things that set Unity Hackweek apart from what some might imagine a hackathon looks like. Yes, a large part of the event consisted of developers furiously drawing diagrams on whiteboards or intently staring at screens until very late in the evening. But the overall atmosphere was relaxed, and people took breaks to recharge. The weather turned out to be amazing, so swimming in the Baltic Sea was an option, as well as walking on the beach or just sitting on the grass and enjoying the view. The goal wasn't to compete against one another; there were no winners and losers. "Well, my team is done, so I'm happy to help," was a common sentiment on the last day.

Richard Fine, from our Build team, is a veteran of four Hackweeks. "My first Hackweek, my project completely failed! I felt good about it though - Hackweek is a time for testing out risky and ambitious ideas, and if nobody fails, it means we're not being risky enough." This time he joined a team adding dynamic content to one of our upcoming example games. "We all learned a lot, but also have a huge list of feedback and code that the game team and the ECS team can take apart."

The basic idea of Unity Hackweek is that we all have a lot to learn from each other and can do amazing things when we get the right people together.
Watching the results of all those clever experiments during one long presentation on Friday, with everyone cheering and clapping, is incredibly inspiring. And inspired, motivated people make great game engines! In that sense, Hackweek is also our long-term investment in solving your real-world problems.

If you would like to know more about working at Unity and see open positions, have a look at our Careers page.

>access_file_
1528|blog.unity.com

Book of the Dead: Quixel, wind, scene building, and content optimization tricks

In this blog series, we will go over every aspect of the creation of our demo Book of the Dead. Today, we will focus on our partnership with Quixel, our wind system, scene building, and content optimization tricks. This is the fourth blog in our 'Making Of' series; in case you missed them, take a look back at the last three posts, which go through the creative process for characters, concept art, and photogrammetry assets, trees, and VFX in Book of the Dead.

Hi! My name is Julien Heijmans, and I work as an Environment Artist on the Unity Demo team. I only joined Unity last year, but I have around 7 years of experience in the video game industry. This blog post will give you some insight into the production of Book of the Dead from my perspective: the perspective of a content creator and environment artist.

I am kind of new to the world of photogrammetry assets, but I clearly remember the day Quixel announced the creation of Megascans several years ago. Ever since, I've been eager to get an opportunity to work with their assets. Joining Unity's Demo team made that happen, as I started to work on Book of the Dead.

If you want to start experimenting with the tools discussed in this blog, you can download the Book of the Dead: Environment project now.

Download the project

When I joined the project, I realized that we were not only using assets from Quixel's Megascans library, but that Unity and Quixel were partnering on the creation of this project. During production, the Demo team created a list of the assets they would need, and Quixel would capture new assets if there was no appropriate match in their existing library. Many of those assets were vegetation, such as grass, plants, and bushes, which require proper equipment and setup to scan. Quixel not only provided us with texture sheets for those assets; they also created the geometry, with LODs and the vertex color setup to support our wind shader. Between the released Book of the Dead: Environment project and the unreleased assets used in the teaser, we received over 50 assets of a quality and complexity that we would have struggled to produce ourselves and still make our deadlines, with the few artists we have on the team.

During production, we could get the assets into the engine, and looking good, pretty quickly. We would often tweak the textures (mostly the albedo: adjusting the brightness/levels/curve and often tweaking the colors to unify them across the scene), repack them properly, tweak the LODs a bit to the level we wanted, assign the textures to a new HDRP Lit material, and be done with it. Luckily, Quixel has recently released a tool, Megascans Bridge, that does most of the importing work we did manually. It saves time in repacking textures for HDRP and the like.

For those who are interested in more Megascans assets, here's a reminder that there are several Megascans collections on the Unity Asset Store. All the assets are ready to be imported into a project set up with the High Definition Render Pipeline or the Lightweight Render Pipeline.
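If you find yourself repeating that material-assignment step, it can be scripted. Here is a minimal sketch, assuming the 2018-era HDRP Lit shader and texture property names (both have changed in later versions, so treat the strings as placeholders):

```csharp
using UnityEngine;

public static class MegascansMaterialHelper
{
    // Builds an HDRP Lit material from repacked Megascans textures.
    // "HDRenderPipeline/Lit" and the property names below match the
    // 2018.1 preview of HDRP; newer versions use "HDRP/Lit".
    public static Material CreateLitMaterial(Texture2D baseColor,
                                             Texture2D maskMap,
                                             Texture2D normalMap)
    {
        var mat = new Material(Shader.Find("HDRenderPipeline/Lit"));
        mat.SetTexture("_BaseColorMap", baseColor);
        mat.SetTexture("_MaskMap", maskMap);   // packed metallic/AO/detail/smoothness
        mat.SetTexture("_NormalMap", normalMap);
        return mat;
    }
}
```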
The creation of a wind system for vegetation assets, and its whole pipeline, is always a tricky process. There are many different kinds of vegetation assets that need to be animated in different ways; two different trees might require completely different setups and different shader complexity. For this reason, our team decided to create a custom vertex-shader-based procedural animation for the wind effect on our vegetation assets. We made it tailored to work with our specific project and the trees and bushes it contains, giving us complete control over it.

Torbjorn Laedre, our Tech Lead, built a shader that supports several different types of vegetation, using 3 different techniques:

- Hierarchy Pivot, for our trees and some plants with a very defined structure/hierarchy.
- Single Pivot, for grass, small plants, and large bushes with an undefined structure/hierarchy.
- Procedural Animation, for vegetation assets where pivots cannot be predicted.

The trees were the most complex assets to prepare on the content side. They use the Hierarchy Pivot type of animation and rely on 3 distinct levels of hierarchy:

- Trunk, which rests on the ground.
- Branches Level A, which are connected to the trunk.
- Branches Level B, which are connected to the branches of Level A.

The shader needs to know the level of hierarchy and the pivot of every single vertex of the tree. I first had to author the geometry of the tree itself, and then assign the level of hierarchy for every polygon of the tree using the green vertex color channel (see the sketch after this section):

- A value of 0 for the green channel of the vertex color signifies the trunk.
- A value between 0 and 1 signifies Branches Level A.
- A value of 1 signifies Branches Level B.

I did this using Autodesk Maya; with some small scripts, I was able to set up all of the LODs of an asset in 10-15 minutes.

In addition to this, we also used what we called a 'Flutter Mask'. These are texture masks that help determine where in the geometry the pivot of each branch should be. We used them for the branches whose geometry comes from hard alpha textures. Here is an illustration of this mask.

With all this information prepared, I could use the C# script that takes my tree prefab as input and generates a new prefab with the pivot information of every vertex baked in. After adding a WindControl object to my scene, I can import my tree into the scene and start playing with the material properties. You can see that each hierarchy level has a range property (basically the length of the trunk or branches) and an elasticity property. There are also some properties to set up the wind flutter animation; they add a bit of procedural noise to the vertex positions, to imitate the vibration of the branches when the wind blows on them.

Last, but not least, we had to make the wind sound FX influence the wind animation. The volume of the sound drives the wind strength of the animation. It is really surprising how much a simple idea like this can add to the project. If you have not done it already, you should open the project and walk around: you will notice the trees and all the grass around you shaking when you hear large gusts of wind hit your surroundings.
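A minimal Unity-side sketch of that vertex-color convention, using a hypothetical helper component (the actual data was authored in Maya and read by the custom wind shader):

```csharp
using UnityEngine;

// Hypothetical helper illustrating the data layout the wind shader reads:
// the green vertex color channel encodes the hierarchy level of each vertex.
public class WindHierarchyPainter : MonoBehaviour
{
    public enum Level { Trunk, BranchA, BranchB }
    public Level level = Level.Trunk;

    void Awake()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh; // instance copy of the mesh
        var colors = new Color[mesh.vertexCount];

        // 0 = trunk, values between 0 and 1 = Branches Level A, 1 = Level B.
        float g = level == Level.Trunk ? 0f : level == Level.BranchA ? 0.5f : 1f;

        for (int i = 0; i < colors.Length; i++)
            colors[i] = new Color(0f, g, 0f, 1f);
        mesh.colors = colors;
    }
}
```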
When targeting the level of detail and density of a project like Book of the Dead, it was important for me to think about how I was going to structure the level, to avoid performance issues later in production. One of the things I tried to be careful about was limiting long view distances in the scene. You can do that by placing 'corridors' and 'bottlenecks' in the layout of the scene. Those layouts, together with assets correctly set up with the 'Occluder Static' and 'Occludee Static' flags, make Unity's occlusion culling more efficient.

This video shows the Occlusion Culling visualization, and you can easily guess where the camera is looking from the top view. Around the end of the video, I toggle the occlusion culling on and off, so you can see which objects are being culled. You will also see that some objects are not culled; those are mostly the really tall trees, some over 25 meters tall, which have a very large bounding box and are therefore hard to cull behind the cliffs.

When the trailer was released, we saw comments saying there's no way we used the legacy terrain system. But that's exactly what we use, and we modified the HD Render Pipeline's Layered Lit shader to support it. The HDRP Layered shader allows blending of layers using their heightmap texture, so the result is better than the linear blend that comes with the legacy terrain shader.

This is, of course, a temporary solution, and not properly integrated in the UI. To change the terrain textures, you will need to edit the material applied to the terrain, instead of using the 'Edit Texture' button in the Paint Texture tab of the terrain object. If you want to create a new terrain and apply different textures to it, you will need to duplicate this TerrainLayeredLit material and assign it to your new terrain. You will also need to create the 4 texture sets in the Paint Texture tab. The textures assigned there won't be used for rendering the terrain, but they will allow you to paint the different layers on your terrain; it is also there that you can change the tiling properties of the different layers. Also, to be able to fully use the LODGroup feature, all of the assets placed through the terrain are set up as Trees, not as detail assets.

This project actually has a really high amount of assets scattered on the ground: grass, bushes, plants, wooden twigs, rocks, etc. With all of this, the terrain itself can be fairly simple; you can see below that in this particular shot the terrain is just a simple tiling material.

When you walk around the level, you will notice in places a very large amount of small twigs and pinecones scattered on the ground. They are not really that obvious when you simply walk around, but they really raise the level of detail of the scene when you start looking at the ground. There are sometimes hundreds of tiny twigs on the ground, between rocks and dead trunks, just as they would eventually rest if they fell down from the trees. Placing these by hand would be simply impossible; for this reason, Torbjorn Laedre made a tool to help us scatter those small details in the level.

The twigs are simple cutout planes with an alpha material, to which we added physics capsule colliders. The script first spawns the desired quantity of those scatter objects around a transform position, and then simulates physics for them to fall down on the ground, colliding with the terrain and all the other assets (rocks, dead trunks, etc.). Then, by pressing the 'Bake' button, they are stripped of their colliders, merged into a single object, and assigned a LODGroup with a specific distance at which they should be culled. This script is used by objects called 'UberTreeSpawner' in the scene, and you are free to use it as you wish.

A side note about this tool: for the twigs and other scattered objects to fall properly on the ground and other assets, you will need quite high-density mesh colliders on all the assets in the scene. At the same time, you don't want those heavy colliders to be used when the game is running. For this reason, most of the assets in the scene have two different colliders: a light one, used at runtime in Play Mode by the PlayerController, with the Default layer assigned; and one used exclusively for the physics simulation of those twigs, with the 'GroundScatter' layer assigned.
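The scatter-simulate-bake flow can be sketched with Unity's manual physics stepping. This is a hypothetical reconstruction of the idea, not the actual UberTreeSpawner code (which ships with the project):

```csharp
using UnityEngine;

// Hypothetical sketch: spawn pieces around a point, step the physics
// simulation until they settle, then strip the physics components.
public class GroundScatterSketch : MonoBehaviour
{
    public GameObject twigPrefab; // cutout plane with a capsule collider + rigidbody
    public int count = 200;
    public float radius = 5f;

    public void ScatterAndSettle()
    {
        var pieces = new GameObject[count];
        for (int i = 0; i < count; i++)
        {
            Vector2 p = Random.insideUnitCircle * radius;
            pieces[i] = Instantiate(twigPrefab,
                transform.position + new Vector3(p.x, 2f, p.y),
                Random.rotation);
        }

        // Step physics manually so the pieces fall onto the dense
        // 'GroundScatter' colliders described above.
        Physics.autoSimulation = false;
        for (int step = 0; step < 500; step++)
            Physics.Simulate(Time.fixedDeltaTime);
        Physics.autoSimulation = true;

        // 'Bake': freeze the result and drop the physics components.
        foreach (var piece in pieces)
        {
            Destroy(piece.GetComponent<Rigidbody>());
            Destroy(piece.GetComponent<Collider>());
        }
    }
}
```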
The Book of the Dead: Environment project uses baked indirect global illumination with real-time direct lighting. Both the indirect lighting from the sun and the direct plus indirect lighting from the sky are baked into lightmaps and light probes. Reflection probes, occlusion probes and other sources of occlusion are baked as well. The direct sun contribution, on the other hand, is real-time lighting. Shading in the HD Render Pipeline looks best when using real-time direct light, and it also gives us some freedom to animate the rotation, intensity and color temperature of the directional light at runtime. Since the indirect lighting is baked, we cannot change the intensity and color of the directional light too much, or it will no longer match the baked lighting. We wouldn't be able to get away with a full day/night cycle in this setup, even though a forest is quite a forgiving environment when it comes to hiding mismatched indirect lighting.

Baked lightmaps are used mostly for the terrain and a few other assets, but we preferred to use a combination of light probes and occlusion probes for all the rocks and cliffs in the project, as they provide better results for objects with sharp angles and crisp normal maps.

Lighting a dense forest is tricky to achieve in real-time. Trees, with all their leaves and branches, have a huge surface area and complex geometry, so it's not practical to cover them with lightmaps. Using a single light probe per tree would give it uniform lighting from the bottom to the top. Light Probe Proxy Volumes are closer to what we would want, but it's not practical to crank up the grid resolution enough to capture fine details. For that reason, our Senior Graphics Programmer, Robert Cupisz, developed the occlusion probes.

From an artist's point of view, it's a really nice and easy feature to use: you simply add the object to the scene, and it displays a volume gizmo that you scale to cover the area you want; then you set up its resolution parameters in X, Y, and Z. It also allows you to create 'Detail' occlusion probes if you want some areas of the scene to have a higher density of probes. Once it is set up, you bake the lighting of the whole scene; the occlusion probes are baked during that process.

Each probe in the 3D grid samples sky visibility by shooting rays into the upper hemisphere, and stores the result as an 8-bit value going from fully occluded (0) to fully visible (1). This gives us darker areas wherever there's a higher concentration of leaves and branches, even more so when a few trees are clustered together. Probes unlucky enough to have landed inside trunks or rocks will be fully black; to avoid that darkness leaking out, they are marked as invalid and overwritten by neighboring valid probes.

Since the probes sample how much of the sky is visible, they should only attenuate the direct sky contribution. For this reason, the lightmapper is set up to exclude the direct light contribution from regular light probes, and probe lighting is then composed as light probe plus direct sky probe, occluded by the occlusion probes. This way we can have tons of cheap occlusion probes sampling the small details of how foliage occludes the sky, bringing depth to the image, and far fewer, more expensive light probes sampling the slower-changing indirect light.
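As a toy illustration of that sampling step, here is what sky visibility looks like expressed with scene raycasts; the real probes are computed by the lightmapper during the bake, not with PhysX queries:

```csharp
using UnityEngine;

public static class SkyVisibilitySketch
{
    // Estimates how much of the upper hemisphere is open sky at a probe
    // position: 0 = fully occluded, 1 = fully visible.
    public static float Sample(Vector3 probePosition, int rayCount = 64)
    {
        int open = 0;
        for (int i = 0; i < rayCount; i++)
        {
            Vector3 dir = Random.onUnitSphere;
            if (dir.y < 0f) dir.y = -dir.y; // fold into the upper hemisphere
            if (!Physics.Raycast(probePosition, dir, 1000f))
                open++;                     // this ray reached the sky
        }
        // Stored as an 8-bit value in the real implementation.
        return (float)open / rayCount;
    }
}
```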
If you want a clearer picture of how they affect the scene, you can also use the SkyOcclusion debug view. The occlusion probe API for baking occlusion probes and excluding the direct sky contribution from light probes was added in Unity 2018.1, and all the scripts and shaders are available in the project.

We ported and re-used the Atmospheric Scattering solution that we originally developed for The Blacksmith demo. Our Senior Programmer Lasse Jon Fuglsang Pedersen has extended it to make use of temporal supersampling, resulting in a much smoother look.

The HD Render Pipeline's default Lit shader supports several types of diffusion. It allows you to have materials with subsurface scattering or, as used for all our vegetation in this project, a simpler translucent material with only light transmission. This effect is set up in two different locations. On the material, you need to choose the 'Translucent' material type, input a Thickness map, and choose a diffusion profile. The diffusion profile settings are the second location: that's where you can edit all the other parameters of your transmission effect. Note: our team added additional sliders to control the direct and the indirect transmission separately, for more control over the final result. But this change does not respect the PBR rules, and thus will not make it into the HD Render Pipeline.

The Area Volumes are built on the core volume system offered by SRP and are very similar to the Post Process Volumes. Their function is to drive object properties depending on the position of the Main Camera. Several objects, including the Directional Light, the Atmospheric Scattering, the Auto Focus and the WindControl, have their properties driven by Area Volumes, so if you want to change the current lighting setup, for example, you will need to do that in the corresponding Area Volume. Those Area Volume objects are located in the main scene, under _SceneSettings > _AREASETTINGS, and have the suffix '_AV'.

For those who have not used the HD Render Pipeline much, there is now a specific SRP debug window that you can open through the menu Window > General > Render Pipeline Debug. With it, you will be able to see individual GBuffer layers, lighting components or specific texture maps from your materials, or even override albedo/smoothness/normal. It is a really useful tool when you have objects that are not rendering correctly, or any other visual bug; it will help you pinpoint the source of the issue a lot faster. The best part is that those debug views are generated automatically from your shaders, and coders can create new debug views quite easily. I even used those debug views to create the tree billboards that are used in the background of the scene: I just placed my assets in an empty scene, took screenshots with the albedo, roughness and normal GBuffer layers visible, and used those to create my texture maps.

While a big part of the optimization resides on the code side, it is also important that your assets and scenes are set up properly if you want a decent framerate.
Here are some of the ways the content was optimized for this project:

- All our materials use GPU Instancing.
- We use LODs for most of the assets in this scene; this is a must-have.
- The LOD cross-fade feature is great: it gives you nice, smooth blending between the different LODs of your objects. But this feature is quite heavy and can really increase the draw call count in your project, so we disabled it on as many assets as possible.
- To avoid noticeable transitions between LODs, we started using object space normal maps on many of our large rock and cliff assets. Note: using an object space normal map instead of a tangent space normal map reduces the precision of the normal map. It is actually not very noticeable on our assets, which are very rough and noisy, but you probably don't want to use it for hard-surface assets.
- While it is important to limit view distance through the way the scene is built, and by using occlusion culling, it is also worth knowing that many of the draw calls used to render your scene actually come from rendering each cascade of your shadow maps (specifically from the directional light, in our project).
- We had a lot of draw calls coming from the small vegetation assets scattered on the terrain, hundreds and hundreds of them in some locations. We achieved a nice reduction in draw calls by creating larger patches of those grass and plant assets: instead of having hundreds of them, we would then have only 15-20. Note that this has an impact on visual quality; with such large assets, it becomes really hard to avoid the grass clipping into rocks and other assets placed on the ground.
- We use layer culling, a feature that already exists in Unity but has no UI. It allows you to cull objects assigned to a specific layer depending on their distance from the camera. Torbjorn extended this feature to also cull the shadow casting of those objects at a different distance. For example, most of our small vegetation assets stop casting shadows at a distance of around 15 meters, which is not very noticeable given the amount of noise in the grass and other plants on the ground, and are then completely culled at around 25 meters, no matter how their LODGroups are set up.

---

Stay tuned for the next blog post in the series, where we'll be exploring the work that went into creating the shading, lighting, post-processing, and more for Book of the Dead. If you couldn't make it to Unite Berlin, we'll soon be releasing Julien Heijmans's presentation about environment art in the demo. You can follow our YouTube channel to keep up to date on when that video is released.

More information on Book of the Dead

>access_file_
1529|blog.unity.com

New Best Practice Guide - Memory Management in Unity

Here in Enterprise Support, we get to help out on many projects, with all kinds of combinations of Unity features. What we see is that 10 out of 10 games can improve their memory usage. That's why we put together our newest best practice guide: Memory Management in Unity.

When we go on-site, profiling is always the first order of the day. Whether we're uncovering coding patterns that add small but unnecessary burdens to the CPU, or substantial issues that cause memory fragmentation and Asset duplication, profiling your game early and often is the best way to keep tabs on application health. The most successful teams profile their projects' memory.

Memory is an exceptionally scarce resource (particularly on mobile devices with up to 1GB of memory, which represent 30% of the market today), so it is absolutely essential that you know where your memory is going and why. With memory being managed differently across platforms, it's not always trivial to understand where memory is being consumed and what influence it has on CPU and GPU performance.

But fear not! We've created a new best practice guide: Memory Management in Unity. This guide introduces the wide variety of tools available for memory profiling and dives into the details of how to use them effectively. By using the techniques in this guide in conjunction with the best practices for minimizing memory usage, you will be able to effectively identify and fix problem areas.

So you've read all of the above and are still itching to dive into more juicy best practices? You're in luck! While Memory Management in Unity is the latest installment, you can also check out all the other Best Practice Guides we've put together, each containing a number of tips and strategies to win back performance and make your project the best it can be:

- Understanding Optimization in Unity
- Asset Bundles + Resources
- Optimizing Unity UI
- Memory Management in Unity
- Lighting in Unity
- Making believable visuals in Unity

We update and add to these guides regularly, so be sure to check back once in a while to see what has changed!

>access_file_
1530|blog.unity.com

Solving sparse-reward tasks with Curiosity

We just released the new version of the ML-Agents toolkit (v0.4), and one of the new features we are excited to share with everyone is the ability to train agents with an additional curiosity-based intrinsic reward. Since there is a lot to unpack in this feature, I wanted to write an additional blog post on it. In essence, there is now an easy way to encourage agents to explore the environment more effectively when the rewards are infrequent and sparsely distributed. These agents can do this using a reward they give themselves based on how surprised they are by the outcome of their actions. In this post, I will explain how this new system works, and then show how we can use it to help our agent solve a task that would otherwise be much more difficult for a vanilla Reinforcement Learning (RL) algorithm to solve.

When it comes to Reinforcement Learning, the primary learning signal comes in the form of the reward: a scalar value provided to the agent after every decision it makes. This reward is typically provided by the environment itself and specified by the creator of the environment. These rewards often correspond to things like +1.0 for reaching the goal, -1.0 for dying, etc. We can think of this kind of reward as extrinsic, because it comes from outside the agent. If there are extrinsic rewards, then that means there must be intrinsic ones too. Rather than being provided by the environment, intrinsic rewards are generated by the agent itself based on some criteria. Of course, not just any intrinsic reward would do. We want intrinsic rewards which ultimately serve some purpose, such as changing the agent's behavior so that it gets even greater extrinsic rewards in the future, or making the agent explore the world more than it otherwise would. In humans and other mammals, the pursuit of these intrinsic rewards is often referred to as intrinsic motivation, and is tied closely to our feelings of agency.

Researchers in the field of Reinforcement Learning have put a lot of thought into developing good systems for providing intrinsic rewards to agents, endowing them with motivation similar to what we find in nature's agents. One popular approach is to give the agent a sense of curiosity, and to reward it based on how surprised it is by the world around it. If you think about how a young baby learns about the world, it isn't pursuing any specific goal, but rather playing and exploring for the novelty of the experience. You could say that the child is curious. The idea behind curiosity-driven exploration is to instill this kind of motivation into our agents. If the agent is rewarded for reaching states which are surprising to it, then it will learn strategies to explore the environment and find more and more surprising states. Along the way, the agent will hopefully also discover the extrinsic reward, such as a distant goal position in a maze, or a sparse resource on a landscape.

We chose to implement one specific such approach, from a paper released last year by Deepak Pathak and his colleagues at Berkeley. It is called Curiosity-driven Exploration by Self-supervised Prediction, and you can read the paper here if you are interested in the full details. In the paper, the authors formulate the idea of curiosity in a clever and generalizable way. They propose to train two separate neural networks: a forward and an inverse model.
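For reference, the paper's formulation can be sketched as follows, where $\phi$ is the shared encoder, $s_t$ the observation, $a_t$ the action, and $\eta$ a scaling factor:

$$\hat{a}_t = g\big(\phi(s_t),\, \phi(s_{t+1})\big) \quad \text{(inverse model: predict the action taken)}$$

$$\hat{\phi}(s_{t+1}) = f\big(\phi(s_t),\, a_t\big) \quad \text{(forward model: predict the next encoding)}$$

$$r^{i}_{t} = \frac{\eta}{2}\,\left\lVert \hat{\phi}(s_{t+1}) - \phi(s_{t+1}) \right\rVert_2^2 \quad \text{(intrinsic reward: the surprise)}$$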
The inverse model is trained to take the current and next observation received by the agent, encode them both using a single encoder, and use the result to predict the action that was taken between the occurrence of the two observations. The forward model is then trained to take the encoded current observation and the action, and predict the encoded next observation. The difference between the predicted and real encodings is then used as the intrinsic reward and fed to the agent: a bigger difference means bigger surprise, which in turn means a bigger intrinsic reward. By using these two models together, the reward captures not just surprising things, but specifically surprising things that the agent has control over through its actions. Their approach allows an agent trained without any extrinsic rewards to make progress in Super Mario Bros simply based on its intrinsic reward. See below for a diagram from the paper outlining the process.

In order to test out curiosity, no ordinary environment will do. Most of the example environments we've released through v0.3 of the ML-Agents toolkit contain rewards which are relatively dense and would not benefit much from curiosity or other exploration enhancement methods. So to put our agent's newfound curiosity to the test, we created a new sparse-reward environment called Pyramids. In it, there is only a single reward, and random exploration will rarely allow the agent to encounter it. In this environment, our agent takes the form of the familiar blue cube from some of our previous environments. The agent can move forward or backward and turn left or right, and it has access to a view of the surrounding world via a series of ray-casts from the front of the cube.

This agent is dropped into an enclosed space containing nine rooms. One of these rooms contains a randomly positioned switch, while the others contain randomly placed, immovable stone pyramids. When the agent interacts with the switch by colliding with it, the switch turns from red to green. Along with this change of color, a pyramid of movable sand bricks is spawned randomly in one of the many rooms of the environment. On top of this pyramid is a single golden brick. When the agent collides with this brick, it receives a +2 extrinsic reward. The trick is that there are no intermediate rewards for moving to new rooms, flipping the switch, or knocking over the tower. The agent has to learn to perform this sequence without any intermediate help.

Agents trained on this task using vanilla Proximal Policy Optimization (PPO, our default RL algorithm in ML-Agents) do poorly, often failing to do better than chance (average -1 reward), even after 200,000 steps. In contrast, agents trained with PPO plus the curiosity-driven intrinsic reward consistently solve it within 200,000 episodes, and often in half that time. We also looked at agents trained with the intrinsic reward signal only: while they don't learn to solve the task, they learn a qualitatively more interesting policy which enables them to move between multiple rooms, compared to the extrinsic-only policy, which has the agent moving in small circles within a single room.

If you'd like to use curiosity to help train agents in your environments, enabling it is easy. First, grab the latest ML-Agents toolkit release, then add the following line to the hyperparameter file of the brain you are interested in training: `use_curiosity: true`. From there, you can start the training process as usual.
If you use TensorBoard, you will notice that there are now a few new metrics being tracked. These include the forward and inverse model losses, along with the cumulative intrinsic reward per episode.

Giving your agent curiosity won't help in all situations. In particular, if your environment already contains a dense reward function, such as our Crawler and Walker environments, where a non-zero reward is received after most actions, you may not see much improvement. If your environment contains only sparse rewards, however, then adding intrinsic rewards has the potential to turn these tasks from unsolvable to easily solvable using Reinforcement Learning. This is particularly applicable to tasks where only simple rewards, such as win/lose or completed/failed, make sense.

---

If you do use the Curiosity feature, I'd love to hear about your experience. Feel free to reach out to us on our GitHub issues page, or email us directly at ml-agents@unity3d.com. Happy training!

>access_file_
1534|blog.unity.com

Book of the Dead: Photogrammetry assets, trees, VFX

In this blog series, we will go over every aspect of the creation of our demo “Book of the Dead”. Today, we focus on photogrammetry assets, trees and VFX. This is the fourth blog in the series; take a look back at the last two posts, which go through creating characters and concept art for “Book of the Dead”.

Hello, my name is Zdravko Pavlov, and I am a CG and VFX artist with a background in VFX, video compositing, editing, and graphic design. I've been working with Unity's Demo team since 2014 and contributed various particle, rigid-body and cloth simulations to the demos “Viking Village”, “The Blacksmith” and “Adam”.

The “Book of the Dead” demo was a little bit different, and completely new territory for me, since my role for this project was to create various environment assets using a photogrammetry approach. Outdoor photography is my hobby, so I was more than happy to handle such a task. Creating trees? I mean, how hard can it be, right? In the following blog post I'll try to describe everything that I learned during the pre-production and development phases of the project.

Fortunately, at this point the Internet is full of valuable info regarding that process, so that's where my learning began. What most of the articles will tell you is that what you need is any DSLR camera with a 50mm prime lens. I didn't have any of those at my disposal at the time, so I decided to make my initial tests with my 24MP mirrorless Sony a7II with a 16-35mm zoom lens instead. And let me tell you right away: it works just fine! The wider lens gives you more distortion, but you can always fix that, in Lightroom for example; in fact, it is better if you don't, because the photogrammetry software handles it gracefully. Prime lenses are more rigid and, in theory, should give you a sharper image. They are really great if you scan in a controlled studio environment, and I highly recommend them in such scenarios. Out in the field, however, being able to properly frame the desired object with a quality-built zoom lens gives you an advantage.

I tried out most of the more popular photogrammetry software out there, and some of it worked quite well. I chose RealityCapture because of its significantly better performance and its ability to process a high number of photos without running out of RAM. The amount of detail it manages to reconstruct from the photos is amazing! I managed to get models of sometimes up to 185 million triangles and successfully export the geometry in PLY format. That, of course, is more than enough, and also a little bit extreme; most of my reconstructions ended up at roughly 50 to 90 million triangles. At first I was using a GeForce GTX 980 Ti, but later upgraded to a GTX 1080, which gave me a slight performance boost.

At some point, I also upgraded my camera to the 42MP Sony a7R II with a Planar T* FE 50mm f/1.4 ZA lens. However, doubling the resolution and using the superior, super-sharp prime lens didn't give me the “WOW” results I was expecting. For one thing, the longer (and narrower) prime lens means that you have to take a few steps back in order to get the image overlap you need for a successful reconstruction. That's not always possible when you are in the middle of the forest, with all the other trees, shrubs and everything. It also means that you have to manage, store and process twice as many gigabytes of image data. But that doesn't necessarily lead to higher-definition scans: having more images is what gets you there, and having them in 24MP is more manageable.
That may sound obvious, but it didn't occur to me until I actually tried it first-hand.

As I mentioned, I used the PLY format to export the insanely dense geometry. I prefer it over FBX, even though RealityCapture's PLY exporter didn't have scale and axis orientation controls, so unlike the FBXs, the PLYs were out of scale and rotated. I chose to deal with that because I was getting some errors when baking textures using the FBX; also, the binary FBX export was implemented later.

Not a lot of software can handle that number of polygons, so I just stored the file and used RealityCapture's decimation features to make a low-poly version of the same model, usually around 1M triangles. That one can be opened in ZBrush, MeshLab or any other modeling software, where it can be retopologized and unwrapped. Depending on the model, I used different techniques for retopology: often ZRemesher, and sometimes by hand.

Then I used xNormal to bake textures. xNormal doesn't seem to be bothered by the hundreds of millions of triangles and handles them with ease. I baked the diffuse texture using the vertex color info; the vertex density in the high-poly was more than enough to produce a clean and sharp texture without any interpolation between vertices. I never used RealityCapture's integrated unwrapping and texturing features. That being said, if for some reason your dense cloud is not dense enough, or there are some areas missing (like in the image below), projecting a texture from your photos can bring additional detail to those areas.

What most of the photogrammetry tutorials will teach you is that it is best to avoid direct, harsh lighting and shadows when scanning an object. If it is a small rock that you are about to capture, you can bring it into the shade, or even into the studio, and use softboxes and turntables. You can't really do that with trees though, so I was watching the forecast and hoping for cloudy weather. However, even in overcast conditions, there were some shadows and ambient occlusion. This is solved with Unity's De-Lighting tool: all it takes is a normal map, a bent normal map and baked AO. It keeps the diffuse values intact while removing the shadows. The resulting assets were then imported into Unity to test the dynamic lighting and shaders.

There are times when it is just not possible to capture every single part of your model. Either there's an obstacle and you can't get all the angles, or you are in a hurry, or your battery is dying, so you miss something and don't realize it until you get home and start processing the data. I made a lot of mistakes like that, but I was able to salvage some of my work by using Substance Painter to clone-stamp over and fix the missing data.

For most of the duration of the Book of the Dead production, the Demo team didn't have an environment artist on staff, and we were looking to find one. Some work was contracted out to an external environment artist, Tihomir Nyagolov, who did the initial explorations and white-boxed the environment, but the main load of the work fell on the Creative and Art Director, Veselin Efremov, and myself. Each of us would go out to our nearby forests to capture photogrammetry data, and the work naturally transitioned into producing the final game assets that were needed. I don't have a background in environment art, and I had zero experience in dealing with game optimizations, LODs, etc.
At that point, there were some placeholder trees already created by Tihomir with the help of GrowFX, so I took over from there, learning as I went.

GrowFX proved to be a really powerful and versatile tool for creating all kinds of vegetation. It interacts with other objects in your scene, so you can achieve all kinds of unique and natural-looking results. It isn't exactly built with game asset creation in mind, but it is controllable enough and can be used for the task. It is a 3ds Max plugin, and since I've been a 3ds Max user for 20+ years, I really feel at home there. Unfortunately, GrowFX relies on some outdated 3ds Max components, like the curve editing dialogs, which aren't very convenient, but it was still a good tool for the task at hand, so I just had to deal with it.

The forest in Book of the Dead was intended to be primarily conifer. There are some beautiful forests and parks near my home, so I went on a “hunt” and scanned some of those. Then I proceeded with stitching my GrowFX creations onto the scanned models. The final tree trunk was composed of scanned geometry with a unique texture for the lower part, stitched to a procedurally generated trunk with a tileable texture for the rest of it, all the way to the top. A small patch of the bottom was clone-stamped to the top of the texture to make it tileable.

It is one thing to do photogrammetry on rocks and tree trunks, but scanning pine needles is a whole new deal. This is where Quixel stepped in and provided us with their beautifully scanned atlases. They collaborated with the Demo team and made numerous small assets like grass, shrubs, debris, etc., created especially for “Book of the Dead”.

As I mentioned in the beginning, my background is in CG productions, and I've made large forests before, using MultiScatter or Forest Pack Pro and rendering in V-Ray. In such tasks, you can use the Quixel Megascans atlases as they are, but for a real-time project like Book of the Dead we needed to do some optimization. That included building larger elements (branches, treetops, etc.) and arranging those into new textures, transferring the initial scanned data for the normal maps, displacement, transmission and so on. The existing Megascans normal data was slightly modified to give a fake impression of overall volume.

I used different normal-editing techniques, such as Normal Thief and other custom-built 3ds Max scripts, to blend the branches with the trunk, altering the vertex normals so that they blend with it. Using this approach, I was able to produce different types of pine trees.

We wanted the forest to feel “alive”, and the wind was a crucial element for us. The trees were set up for our vertex-shader-based wind animation solution by our Environment Artist Julien Heijmans.

There are many different ways of creating a vector field, and I looked at several different options. Being familiar with Chaos Group's fluid solver, PhoenixFD, I decided to see what kind of usable data I could get out of it and bring into Unity. I was able to export the scene geometry, bring it into 3ds Max as an FBX, and run some fluid through it that swirls around the vegetation and creates the turbulent wind effect.
The bigger trees shielded the smaller vegetation, so the effect there was less prominent. I looped the simulated sequence using the integrated PhoenixFD playback controls. The vector information was then read through a PhoenixFD TexMap, normalized, and plugged in as a diffuse texture over the procedurally created isosurface.

The rendered image sequence was then imported back into Unity, where the final texture atlas was assembled. I used to do that in After Effects in the past, but now Unity has a very convenient Image Sequencer tool that can do it pretty much automatically. It is one of the new VFX tools being developed by Unity's GFX team in Paris. The created texture atlas was placed in the scene; I made a simple box to define my simulation boundaries and used that as a position reference.

To be clear, this was an experiment that allowed us to push the visuals of some of the shots in the cinematic teaser that we showed. It's a method that I can recommend if you are using Unity for film production. It plugs into the main procedural vertex-shader-based wind animation solution, which was developed for the project by our Tech Lead Torbjorn Laedre and was used in most of the scenes of the teaser, as well as for the console version of the project that we showed at GDC. In an upcoming blog post, Julien and Torbjorn will explain more about how we handled the wind, and the final solution we adopted.

I started to block out some of the ideas for the Hive early on. After the initial design, I started building various game-ready elements in order to build the Unity assets.

For the crowd of screwies, I did some exploration of body variations. Again I used Chaos Group's PhoenixFD and ran a fluid smoke simulation, then cut out the screwie shape and created an isosurface based on the fluid temperature.

Some shape exploration made with PhoenixFD.

This method allowed us to quickly preview different shapes, and it was used as a general reference. The final screwie character model was created by Plamen (Paco) Tamnev, and you can read all about it in his incredibly detailed blog post.

To achieve the dripping sap on the screwie's face, I used PhoenixFD again. I started by making a little proof of concept showing the capabilities and what we could achieve with a dense, viscous liquid. I was quite happy with the overall result and the fluid motion, so I proceeded with setting up the real model. The goal was to prevent the simulation from forming too many separated pieces and droplets. That allowed me to take a single frame from the generated geometry sequence, retopologize it, make UVs, and use WRAP3 to project it over the rest of the shapes in the sequence. As a result, I got a series of blend shapes that use the same topology.

I also tried running a sap simulation over some of the tree trunks. We didn't end up using those in the final project; however, I still find it a nice way to add some detail over the scanned models.

---

Stay tuned for the next blog post in the series, where we'll be diving further into the environment art created for Book of the Dead with Julien Heijmans. Meet us at Unite Berlin on June 19 to walk through the Book of the Dead environment on a console yourself, and attend Julien Heijmans's presentation about environment art in the demo. See the full schedule here.

More information on Book of the Dead

>access_file_
1536|blog.unity.com

Book of the Dead: Character and hero assets

In this blog series, we will go over every aspect of the creation of our demo “Book of the Dead”. If you haven’t already, make sure to check our previous posts on the Book of the Dead creation process: Introduction to Unity’s Demo team and Book of the Dead: Concept Art.

My name is Plamen ‘Paco’ Tamnev, and I’ve been working as a Character/Environment Artist on the Unity Demo team for the last couple of years. Some of the previous projects I worked on were Adam, The Blacksmith, Viking Village and a few other smaller projects. In this blog post I will go through some of the work I did for Book of the Dead: my process and the pipeline for the creation of the characters and hero assets in the demo.

After our concept art director Georgi Simeonov explored lots of quick ideas in 2D, we decided to try to blend the different elements that we liked in those sketches. To do that, I started with some rough sculpts in ZBrush to flesh out some of those ideas. Since we worked in parallel from such an early stage, we didn’t intend for these sculpts to end up as the final art; we were rather looking for the opportunity to try things that we otherwise wouldn’t. I also wanted the chance to play around with different types of surface treatment and some material explorations. Some of the design elements found in those draft sculpts stayed persistent through the entire character creation process. The proportions, on the other hand, changed quite a bit, because it was an important part of the narrative that the Screwie characters had to be realistic, as opposed to the earlier, more stylized designs.

I took the opportunity to do some material exploration as early as possible, to hopefully give better context to the draft sculpts. Those tests helped explore the dripping sap that made it into the final design.

For the final design, I started from a male scan from an online scan library. Starting with a scan gave me the base proportions and general features, and I then cleaned up the mesh in ZBrush. The final sculpt of the screwie was challenging in the sense that it had to be easily recognizable as a normal human at first glance, especially from the back. We had a slow approaching shot from the back, and seeing all of the decay right away would have ruined the buildup. I started by blocking out the bigger cavities that were needed for the front of the character; for the back, I stayed closer to the original silhouette and muscle flow. Then I added some bits of bark to break up the silhouette in certain places, while being careful not to break the outline of the body too much. Most of the low- to mid-level detail was hand-sculpted, and then I used some scanned alphas to add breakup. It was also a matter of balance not to go overboard in detailing the sculpt in ZBrush, since I add quite a bit of detail in Substance Painter later.

The backstory of the Screwies required that their look incorporate a combination of solid resin for the dried-up bits and a more liquid resin for the fresh leaks. This complex material was going to be a challenge. We had the very talented Yibing Jiang do some R&D testing for the shader setup of the amber. She came up with the approach of using two layers of geometry: a base layer of amber acting as the core, and a slightly offset version of the same geometry as the top layer. The top layer had a dithered alpha as well as a different set of detail maps, in order to give breakup and variety to the core layer.
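To illustrate the geometric half of that two-layer setup, one straightforward way to build the offset top layer is to duplicate the mesh and inflate the copy along its vertex normals. This is my own minimal sketch, not the demo’s actual tooling:

using UnityEngine;

//Sketch: clone a mesh and push its vertices slightly outward along their normals,
//so the copy can carry a dithered-alpha crust material over the amber core.
public static class ShellMeshUtility
{
    public static Mesh CreateOffsetShell(Mesh source, float offset)
    {
        Mesh shell = Object.Instantiate(source);
        Vector3[] vertices = shell.vertices;
        Vector3[] normals = shell.normals;

        for (int i = 0; i < vertices.Length; i++)
        {
            vertices[i] += normals[i] * offset; //inflate along the normal
        }

        shell.vertices = vertices;
        shell.RecalculateBounds();
        return shell;
    }
}

The shell copy then gets its own material with the dithered alpha and detail maps, while the original geometry keeps the core amber shader.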
That two-layer setup, in combination with the subsurface scattering (SSS) profile applied to the core layer, gave us a nice-looking material with some depth to it. The animated sap dripping from the head was made by Zdravko Pavlov; he will talk more about it in his upcoming blog post.

We wanted a quick way to have a crowd of characters with some variety to them, without actually having to build a whole new set of characters. We knew the crowd would mainly be visible in the mid to far distance, so we pushed for them to be good enough without spending too much time on them. I started by using the clone brush in Substance Painter to create a few fully bark-covered versions. I repeated the same process to create the amber layer that was underneath the bark.

At this point, we still had no height maps in place. They helped give a more organic feeling to the final material by adding parallax and breakup to the bark bits. After it was all set up and working, it was easy to alter the decay of the crowd members just by editing the masks.

The picture above is one of the first tests of the two-layer approach, with a few random masks to test the breakup and decay. Shader-wise, there is a height map with tessellation to help with the offset and overall material definition. For the final look, we added the broken hollow bits by using both opacity and height maps. This workflow made it easy to add things like missing limbs.

A shot of the final crowd with all of the effects applied.

Karen’s hands and bracelet are seen for a very short time in the teaser, but they are important to the entire experience. As such, we had to treat them properly and give them enough care, even compared to their short on-screen presence. For the hands of the player character, I started with a scan from the online library Ten24, which I retopologized and cleaned up. Then I separated the nails and brought the hands into Substance Painter for the texture pass. Some basic weathering was needed as well, but without them being covered in mud or too distracting when on screen.

The Bishop went through quite a few blockouts in Unity. This gave us the chance to try out interesting ideas, such as playing with scale and pose, and how they affected what the character had to convey. Once the final design of the character was finished and approved by the whole team, we started to build the actual asset.

We worked with freelance 3D artist Alex Ponomarev to create the high-poly sculpt in ZBrush. Once the sculpt was finalized, I began building the game-resolution mesh and used the rigging tools in ZBrush to help make the final pose. These tools are a great way to get a quick static pose and tweak the model without having to build a complex rig, which leaves more room for exploration. I then brought it into Substance Painter and made a few quick explorations of the materials and weathering. Given the size of the Bishop, we had to use several tiled detail maps to help with the resolution and sense of scale.

The first thing I did after I got the high-poly sculpt from Alex was to build a quick ZSphere rig and pose the character. We didn’t need him to do any gesturing other than turning his torso slightly, so it didn’t make sense to build a complex rig for him. In this case, the ZBrush posing worked just fine. As far as materials go, we decided to go with the more traditional type of look you would see on a large monument.
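As a rough illustration of how tiled detail maps help at that scale: Unity’s built-in Standard shader already exposes secondary detail textures that can tile far more densely than the base maps. The sketch below uses that stock mechanism with illustrative values; it is not the demo’s actual material setup:

using UnityEngine;

//Sketch: assign high-frequency detail maps on a Standard shader material.
//_DetailAlbedoMap, _DetailNormalMap and the _DETAIL_MULX2 keyword are the
//Standard shader's own detail-map properties.
public class ApplyDetailMaps : MonoBehaviour
{
    public Material material;      //a Standard shader material
    public Texture2D detailAlbedo;
    public Texture2D detailNormal;
    public float tiling = 12f;     //dense tiling helps sell the sense of scale

    void Start()
    {
        material.SetTexture("_DetailAlbedoMap", detailAlbedo);
        material.SetTexture("_DetailNormalMap", detailNormal);
        material.SetTextureScale("_DetailAlbedoMap", new Vector2(tiling, tiling));
        material.EnableKeyword("_DETAIL_MULX2"); //enable the Standard shader's detail pass
    }
}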
I created several custom materials, and for the final Bishop we used detail maps to complement the sense of scale. Before we settled on the final design for the Bishop, I tried a few of Georgi’s earlier designs and brought them into Unity. It was important to try them out and see how they feel when you approach them from the POV of the game character.

We approached Environment and Character Artist Tinko Wiezorrek for the modeling and texturing of the shells, cars, and hive. He started by making several different sculpts of the shells based on Georgi’s concepts and notes. His approach was to use the sculpt as a base, add a few texture sets of the different scanned bark that we already had, and combine them with the sculpted details.

In the teaser, you encounter cars and other mundane objects found in the world. Since those had to have a handcrafted look to them, we used custom-baked geometry, with opacity and sap added here and there. When creating the hive entrance, Tinko used the same approach he had for the cars. To help add visual interest and scale, the hive was built with even more variety and custom objects.

There was a lot more work and many more assets done by the team, but I think for the purposes of this blog post we have covered most of our more interesting assets and how we approached their creation.

You can follow my work on Artstation & Instagram

---

Stay tuned for our next posts in the series. Next week, Zdravko Pavlov explores the themes and process of asset creation in Book of the Dead with his approach to the trees. After that, Julien Heijmans brings you through the tools and tricks used to make the environment believable. More to come after that. Meet us at Unite Berlin on June 19 to walk through the Book of the Dead environment on a console yourself, and attend Julien Heijmans’s presentation about environment art in the demo. See the full schedule here. More information on Book of the Dead

>access_file_
1537|blog.unity.com

Procedural patterns to use with Tilemaps, part 2

In part 1, we looked at some of the ways we can create top layers procedurally, using various methods like Perlin Noise and Random Walk. In this post, we are going to look at some of the ways to create caves with procedural generation, which should give you an idea of the possible variations available. Everything we are going to talk about in this blog post is available within this project. Feel free to download the assets and try out the procedural algorithms.

This blog post follows the same rules as part 1. To remind you, those rules are:

- The way we distinguish between being a tile or not is by using binary: 1 being on and 0 being off.
- We store all of our maps in a 2D integer array, which is returned to the user at the end of each function (except for when we render).
- I use the array function GetUpperBound() to get the height and width of the map. This means that we have fewer variables going into each function, allowing for cleaner code.
- I often use Mathf.FloorToInt(). This is because the Tilemap coordinate system starts at the bottom left, and Mathf.FloorToInt() allows us to round the numbers to an integer.
- All of the code provided in this blog post is in C#.

In the previous blog post, we looked at some ways of using Perlin Noise to create top layers. Luckily, we can also use Perlin Noise to create a cave. We do this by getting a new Perlin noise value, which takes as its parameters our current position multiplied by a modifier. The modifier is a value between 0 and 1: the larger the modifier value, the messier the Perlin generation. We then round this value to a whole number of either 0 or 1, which we store in the map array. Have a look at the sketch of the implementation at the end of this section.

The reason we use a modifier instead of a seed is that the results of the Perlin generation look better when we multiply the values by a number between 0 and 0.5. The lower the value, the more blocky the result. Have a look at some of the results: the gif below starts with a modifier value of 0.01 and works its way up to 0.25 in small increments. From this gif, you can see that the Perlin generation is essentially just enlarging the pattern with each tick.

In the previous blog post, we saw that we can use a coin flip to determine whether a platform goes up or down. In this post, we are going to use the same idea, but with two additional options for left and right. This variation of the Random Walk algorithm allows us to create caves. We do this by picking a random direction, moving our position, and removing the tile there, continuing until we have destroyed the required amount of floor. At the moment we are only using four directions: up, down, left, right.

We start the function by:

- Finding our start position
- Calculating the number of floor tiles we need to remove
- Removing the tile at the start position
- Adding one to our floor count

Next, we move on to the while loop, which creates the cave for us. First, we decide which direction to move in, using a random number. Next, we check the new position with a switch case statement: if the tile there is still a wall, we remove it from the array and count it toward our floor amount. We continue doing this until we reach the required floor amount. The end result is shown below.
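Here are minimal sketches of both cave generators described above. They follow the conventions listed earlier, but treat them as illustrative rather than the project’s exact listings; Mathf comes from UnityEngine.

public static int[,] PerlinNoiseCave(int[,] map, float modifier)
{
    int newPoint;
    for (int x = 0; x <= map.GetUpperBound(0); x++)
    {
        for (int y = 0; y <= map.GetUpperBound(1); y++)
        {
            //Sample noise at our position scaled by the modifier, then round to 0 or 1
            newPoint = Mathf.RoundToInt(Mathf.PerlinNoise(x * modifier, y * modifier));
            map[x, y] = newPoint;
        }
    }
    return map;
}

And the four-direction Random Walk cave:

public static int[,] RandomWalkCave(int[,] map, float seed, int requiredFloorPercent)
{
    //Seed our random
    System.Random rand = new System.Random(seed.GetHashCode());

    //Define our start position
    int floorX = rand.Next(1, map.GetUpperBound(0) - 1);
    int floorY = rand.Next(1, map.GetUpperBound(1) - 1);

    //Calculate the number of floor tiles we need to remove
    int reqFloorAmount = (map.GetUpperBound(0) * map.GetUpperBound(1)) * requiredFloorPercent / 100;
    int floorCount = 0;

    //Remove the tile at the start position and count it
    map[floorX, floorY] = 0;
    floorCount++;

    while (floorCount < reqFloorAmount)
    {
        //Decide which direction to move in
        int randDir = rand.Next(4);
        switch (randDir)
        {
            case 0: //Up
                if (floorY + 1 < map.GetUpperBound(1)) floorY++;
                break;
            case 1: //Down
                if (floorY - 1 > 1) floorY--;
                break;
            case 2: //Right
                if (floorX + 1 < map.GetUpperBound(0)) floorX++;
                break;
            case 3: //Left
                if (floorX - 1 > 1) floorX--;
                break;
        }
        //If the tile is still a wall, carve it out and count it
        if (map[floorX, floorY] == 1)
        {
            map[floorX, floorY] = 0;
            floorCount++;
        }
    }
    //Return the modified map
    return map;
}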
I also created a custom version of this function, which includes diagonal directions as well. The code for that version is a bit long, so if you would like to look at it, please check out the link to the project at the beginning of this blog post.

A directional tunnel starts at one end of the map and then tunnels to the opposite end. We can control the curve and roughness of the tunnel by passing them into the function. We can also determine the minimum and maximum width of the tunnel parts. Let’s take a look at the implementation below:

public static int[,] DirectionalTunnel(int[,] map, int minPathWidth, int maxPathWidth, int maxPathChange, int roughness, int curvyness)
{
    //This value goes from its minus counterpart to its positive value, in this case with a width value of 1, the width of the tunnel is 3
    int tunnelWidth = 1;

    //Set the start X position to the center of the tunnel
    int x = map.GetUpperBound(0) / 2;

    //Set up our random with the seed
    System.Random rand = new System.Random(Time.time.GetHashCode());

    //Create the first part of the tunnel
    for (int i = -tunnelWidth; i <= tunnelWidth; i++)
    {
        map[x + i, 0] = 0;
    }

    //Cycle through the rest of the map, carving upwards
    for (int y = 1; y <= map.GetUpperBound(1); y++)
    {
        //Higher roughness values mean the width changes less often
        if (rand.Next(0, 100) > roughness)
        {
            //Change the width, clamped within our limits
            tunnelWidth += rand.Next(-maxPathWidth, maxPathWidth);
            tunnelWidth = Mathf.Clamp(tunnelWidth, minPathWidth, maxPathWidth);
        }

        //Higher curvyness values mean the centre moves less often
        if (rand.Next(0, 100) > curvyness)
        {
            //Move the centre, keeping it away from the map edges
            x += rand.Next(-maxPathChange, maxPathChange);
            x = Mathf.Clamp(x, maxPathWidth, map.GetUpperBound(0) - maxPathWidth);
        }

        //Carve out the tunnel at its current width
        for (int i = -tunnelWidth; i <= tunnelWidth; i++)
        {
            map[x + i, y] = 0;
        }
    }

    //Return the modified map
    return map;
}

Cellular automata work on a grid of cells, with each cell changing its state based on its neighbours. We can use this to smooth our caves: for every tile, we count how many of its neighbours are active, then turn the tile into a wall or a floor depending on that count. What counts as a neighbour depends on the neighbourhood we choose.

The Moore neighbourhood consists of all eight tiles surrounding the current tile, including the diagonals:

public static int[,] SmoothMooreCellularAutomata(int[,] map, bool edgesAreWalls, int smoothCount)
{
    for (int i = 0; i < smoothCount; i++)
    {
        for (int x = 0; x <= map.GetUpperBound(0); x++)
        {
            for (int y = 0; y <= map.GetUpperBound(1); y++)
            {
                int surroundingTiles = GetMooreSurroundingTiles(map, x, y, edgesAreWalls);

                if (edgesAreWalls && (x == 0 || x == map.GetUpperBound(0) || y == 0 || y == map.GetUpperBound(1)))
                {
                    map[x, y] = 1; //Keep the edges as walls
                }
                else if (surroundingTiles > 4)
                {
                    map[x, y] = 1; //More than four neighbours: become a wall
                }
                else if (surroundingTiles < 4)
                {
                    map[x, y] = 0; //Fewer than four neighbours: become a floor
                }
                //Exactly four neighbours: the tile stays as it is
            }
        }
    }
    //Return the modified map
    return map;
}

static int GetMooreSurroundingTiles(int[,] map, int x, int y, bool edgesAreWalls)
{
    int tileCount = 0;

    for (int neighbourX = x - 1; neighbourX <= x + 1; neighbourX++)
    {
        for (int neighbourY = y - 1; neighbourY <= y + 1; neighbourY++)
        {
            //Make sure we are within the bounds of the map
            if (neighbourX >= 0 && neighbourX <= map.GetUpperBound(0) && neighbourY >= 0 && neighbourY <= map.GetUpperBound(1))
            {
                //We don't want to count the tile we are checking
                if (neighbourX != x || neighbourY != y)
                {
                    tileCount += map[neighbourX, neighbourY];
                }
            }
            else if (edgesAreWalls)
            {
                tileCount++; //Out-of-bounds neighbours count as walls
            }
        }
    }
    return tileCount;
}

Running a few smoothing passes gives the cave nicely rounded, natural-looking areas. We could also have an additional script run on top of the generation to provide better connections between areas of the map.

The von Neumann neighbourhood is simpler: it only counts the four tiles directly above, below, left, and right of the current tile.

static int GetVNSurroundingTiles(int[,] map, int x, int y)
{
    int tileCount = 0;

    //Ensure we aren't touching the left side of the map
    if (x - 1 > 0)
    {
        tileCount += map[x - 1, y];
    }
    //Ensure we aren't touching the bottom of the map
    if (y - 1 > 0)
    {
        tileCount += map[x, y - 1];
    }
    //Ensure we aren't touching the right side of the map
    if (x + 1 < map.GetUpperBound(0))
    {
        tileCount += map[x + 1, y];
    }
    //Ensure we aren't touching the top of the map
    if (y + 1 < map.GetUpperBound(1))
    {
        tileCount += map[x, y + 1];
    }
    return tileCount;
}

public static int[,] SmoothVNCellularAutomata(int[,] map, bool edgesAreWalls, int smoothCount)
{
    for (int i = 0; i < smoothCount; i++)
    {
        for (int x = 0; x <= map.GetUpperBound(0); x++)
        {
            for (int y = 0; y <= map.GetUpperBound(1); y++)
            {
                int surroundingTiles = GetVNSurroundingTiles(map, x, y);

                if (edgesAreWalls && (x == 0 || x == map.GetUpperBound(0) || y == 0 || y == map.GetUpperBound(1)))
                {
                    map[x, y] = 1; //Keep the edges as walls
                }
                else if (surroundingTiles > 2)
                {
                    map[x, y] = 1;
                }
                else if (surroundingTiles < 2)
                {
                    map[x, y] = 0;
                }
            }
        }
    }
    //Return the modified map
    return map;
}

The end result looks a lot more blocky than the Moore neighbourhood, as can be seen below. Again, as with the Moore neighbourhood, we could proceed to have an additional script run on top of the generation to provide better connections between areas of the map.

I hope I’ve inspired you to start using some form of procedural generation within your projects. If you haven’t already downloaded the project, you can get it from here. If you want to learn more about procedurally generated maps, check out the Procedural Generation Wiki or Roguebasin.com, as they are both great resources.

If you make something cool using procedural generation, feel free to leave me a message on Twitter or leave a comment below! Want to hear more about it and get a live demo? I’m also talking about Procedural Patterns to use with Tilemaps at Unite Berlin, in the expo hall mini-theater on June 20th. I’ll be around after the talk if you’d like to have a chat in person!

>access_file_
1538|blog.unity.com

Pulling the strings: How Puppet3D brings your games to life

If you’ve ever had the thought, “Wouldn’t it be great if I could do all my skinning, rigging and animation directly in Unity?”, you’re not alone. In fact, that’s exactly what was going through the mind of Asset Store publisher Jamie Niman (AKA Puppetman) when he developed the insanely powerful Puppet2D and, more recently, the Puppet3D animation tools.

As a technical animator, Jamie had plenty of experience working with tools like Maya on films such as World War Z, Pirates of the Caribbean, and Harry Potter. For the last six years, however, as Senior Technical Artist at Preloaded in London, his focus has been: games, games, games.

“Making games for clients means short turnaround times, so Unity suits us perfectly from that point of view,” Jamie says. “And with the introduction of Timeline and other tools, animating directly in Unity is beginning to feel a lot like it does in other professional 3D animation software.”

But Jamie wanted to take that one step further. Having previously created rigging tools in Maya, he wanted to develop a tool that would enable artists to do all their skinning, rigging and animation directly in Unity.

As Brad Bird once said, animation is about creating the illusion of life. If that’s true, it fits well with Jamie’s goal of creating tools that let animators focus on their art. “I’ve always looked at rigging as being like a puppet maker. You want to make controls that are user-friendly, so the animator can concentrate on the performance part. I try to make it as easy as possible for animators to add those little sparks of character and animation that polish the game and bring it to life,” he says.

Puppet3D opens up everything in your game to be rigged and animated without having to go back and forth between Maya (or another program) and Unity. Naturally, that covers quite a lot of ground, and the possibilities of what you can do with Puppet3D are quite varied. Here are three examples:

Modify existing animations: You could download non-animated characters (or ones that don’t have the exact animation you had in mind), and the one-click “ModRig” feature will automatically rig them for you with controls that make animating easy.

Do some quick skinning: Perhaps you’ve got an unskinned character. Using the Autorig feature, you can quickly get it skinned to bones and rigged, complete with an idle and walk cycle. Control animations made with one autorigged character will work on another autorigged character, and you can also get these animations onto any Unity humanoid with the “ModRig” feature.

Add more life to a cutscene: Puppet3D is not just for characters. You could, for example, rig a vine so it swings in the wind, or add bones to a chair to make it run off before someone sits on it.

Puppet3D has only been out for a few months, but based on his past experience as an Asset Store publisher, Jamie has every reason to believe that it will be embraced by Unity artists. His first asset was Puppet2D, a 2D version of the current 3D title, which followed the introduction of Unity features like sprite support and the 2D view, and really took off in the Unity community. In fact, its success took Jamie by surprise.

“It dawned on me that I’d made something really well known when I saw a tweet linking to a book about it, which someone had published in Japan. I ordered a copy right away; it’s sort of a souvenir for me,” he says.
“The Unity Asset Store has allowed me to make tools for a much larger audience. It’s so exciting to think of all the games that have been made with my tools, like the award-winning puzzle game Hue and the popular arcade platformer Damsel, to name just a couple of them.”

Jamie plans to continue making new rigging features for Puppet3D and, of course, supporting all of his assets, which will keep him busy for a while. However, he doesn’t rule out creating new assets in the future. “My head is always running through new asset ideas, and eventually it’ll be hard to resist making one of them,” he says.

If you spend at least $99 during the month of June, you can get Puppet3D plus InControl for free and start animating directly in Unity to give your characters that extra spark of life. Just be sure to activate the promotion using your Unity ID first.

>access_file_
1539|blog.unity.com

Book of the Dead: Concept art

In this blog series, we will go over every aspect of the creation of our demo “Book of the Dead”. After we introduced the Unity Demo team last week, it’s now time for Georgi Simeonov to share his work on the concept art in the teaser we’ve revealed. “Book of the Dead” is a living project with a longer story which we haven’t yet told, so we’ll try not to spoil too much...

We call these characters “the Screwies”: people stuck between two worlds. These characters had to be recognizably human while bearing visible signs of disintegration. The challenge was to do that while avoiding horror and decay, and at the same time keeping them from looking too ethereal (making them ghosts) or magical.

Dead Matter to Semblance of Life

Early designs used bark strips, which proved a quite convenient material for augmenting the human form and shifting the silhouette toward something else. A lot of those designs ended up resembling various animals, but also moved too close to feeling like fantasy/horror beasts.

Mud, Sticks, and Stones

I continued to iterate on the connection to the forests and the Screwies’ attempts to rebuild themselves with what’s around: sticks, strips of bark, mud, resin. I then briefly went through the idea of them starting to resemble various memory shapes, maybe associated with fading memories. In a lot of the cases where elements were added to or otherwise augmented the base human figure, the designs quickly started to resemble something tribal, reminiscent of a fictional primitive culture.

Subtractive / Shattered - Pale remains of what they once were

They had to be less: a figment of what was once human or a soul, a frame barely standing. I experimented with various ways to subtract from or erode the silhouette.

Misremembered Shells

Natural, yet semi-abstract and sculptural: an unreplenishable amber core. A quick color study of the first Screwie variant that hit some of the key points I aimed to cover with the design, experimenting with pale ash, charred wood, and a variation of waxy and clearer resin.

Three variants for the Screwies that were quite cool in and of themselves, but weren’t right for the story and so were discarded:

A - This variant quickly devolved into dark demonic minions and was abandoned.
B - The stacked-stone and woven-grass Screwies were too reminiscent of fairytale creatures like trolls and elves.
C - The candle people, from which our final design evolved in many ways.

The idea that stuck was the one about the interplay between the outside shell and the emptiness inside. Most silhouette alterations were made by juxtaposing areas of darker tree-bark material with the fresh golden resin they used to fill their empty shells. Being quite restricted in expressing design through silhouette meant that I had to rely on textures and interesting erosion patterns. We went for a mix of charred tree bark for the base and the shell, with a filling of resin bearing marks reminiscent of the grooves left by fingers in soft clay.

The final design for the first Screwie the viewer or player encounters, incorporating a mixture of the most successful erosion patterns.

Some early thumbnail ideas for the location where the protagonist (and the player) sees a Screwie for the first time.

The bracelet is the only bit of visual design for Karen’s character that is seen in the teaser. The dialog, supplemented by the bracelet, is the primary way you discover some of her personality and backstory.
The design is based on some elements common to medical bracelets, but reinterpreted and hidden within what would be perceived as a purely decorative design.

The teaser shows only a short glimpse of the Bishop, but he’s an important character in this world and plays a role in the evolving story. I managed to hit on something quite striking early on with his design and went through far fewer cycles of exploration. The sarcophagus/boat is inspired by both Charon, from Greek mythology, and ancient Egyptian sarcophagi: Charon being the chaperone who helps souls cross the river Styx, and the Egyptian sarcophagi bringing an evocative element of demi-gods and vessels for travel. We explored various poses and angles in which you can encounter him, sometimes as a massive aerostat-like object floating in space, sometimes as an unusual part of the landscape.

The pattern on his chest and parts of his face is formed out of keyhole-like ornaments and other details found on locks, and his earrings resemble keys. The inside of the boat was inspired by old broken typewriters and their tightly stacked letter hammers, in our case used as a support for his body and/or little archival plaques.

The hive was based on weaver bird nests and layered wasp hives, built on a much larger scale using strips of tree bark bonded with mud or resin, or roughly woven together. A lot of the smaller structures, like the cars and lampposts, drew inspiration from cargo cults, in the way that they mimic barely remembered or understood fragments of reality.

This image is one of the early attempts to put our story progression and sequence of story beats into locations on a map. The cyan line depicts a potential branching path across the environment.

Various props embodying the physical manifestation of memories: objects from their past life, rebuilt from the materials available in the forest - tree bark, mud, sticks.

Cars made from forest materials

This is an overpaint of an early screenshot showing the approach to the Hive over a fallen-tree bridge up a hill. This image was one of the exploration paintings for the entrance of the hive. One of the main challenges with the design of the Hive was to keep it from looking too much like a traditional manmade fort or some other fantasy fortress. We wanted it to bear signs of intent and hint at purpose, but only barely, straddling the line between accidental and constructed.

Stay tuned for our next posts in the series. Next week, Plamen ‘Paco’ Tamnev dives into character art, and the week after, Zdravko Pavlov explores the themes and process of asset creation in Book of the Dead with his approach to the trees. More to come after that. Meet us at Unite Berlin on June 19 to walk through the Book of the Dead environment on a console yourself, and attend Julien Heijmans’s presentation about environment art in the demo. See the full schedule here. More information on Book of the Dead

>access_file_