// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 80 of 85

[ 2017 ]

20 entries
1582|blog.unity.com

Unity 2017.2 is now available

Unity 2017.2 introduces new 2D world-building tools, support for new XR platforms, and, thanks to an exclusive collaboration between Unity and Autodesk, faster importing and exporting between Maya/3DS Max and Unity. Unity 2017.2 includes updates to the robust storytelling tools Timeline and Cinemachine, as well as support for ARCore, ARKit, Vuforia, and Windows Mixed Reality immersive headsets. Unity 2017.2 is now available for download. This blog post gives you an overview of some of the highlights, followed by more detailed information about new features and improvements.

2D
Unity 2017.2 puts the power in the hands of 2D creators with a complete suite of 2D tools, including the new 2D Tilemap feature for fast creation and iteration cycles, and Cinemachine 2D for intelligent and automatic composition and tracking. Cinemachine's dynamic, procedural cameras come to 2D game design, making it easy to automate composition and tracking to enhance 2D gameplay, characters, and environments. Tilemap makes it fast and easy to create and iterate on level designs right in Unity, so artists and designers can rapidly prototype when building 2D game worlds.

XR
Unity 2017.2 significantly increases the level of support for new XR (augmented and virtual reality) platforms. You can now count on native support for Windows Mixed Reality, Vuforia, and OpenVR on macOS. This means you can reach a wider audience, as well as take advantage of performance optimizations and enjoy a more efficient development workflow.

Platform support: Our mission to help streamline AR development continues with support for Google's ARCore SDK and for Apple's ARKit through Unity's ARKit plugin. We are already inspired by the experimentation and innovation we're seeing from creators around the world, and we can't wait to see all types of augmented reality experiences come to life as we continue to build up an expanded, customized AR development workflow for the latest and greatest AR platforms. We are also unlocking access to numerous immersive headsets with new support for Windows Mixed Reality, which will enable virtual reality developers to reach the widest audience.

Performance: 2017.2 also brings VR creators more features that dramatically boost and optimize performance. Stereo Instancing (the next iteration of single-pass rendering) is now available for all integrated PC platforms using DX11. This rendering advancement will help optimize use of hardware, allowing developers to build better games and experiences. Another new feature is Video Asynchronous Reprojection for Google VR, offering a much higher quality video experience on Daydream View.

These new platforms and improvements make cross-platform VR and AR development easier and faster. Combined with our existing feature set, developers can continue to push the boundaries of immersive storytelling on the largest number of XR platforms.

Digital Content Creation tools workflow – FBX support
Unity and Autodesk have been working together to dramatically improve FBX support. This collaboration has enabled Unity to work directly on FBX SDK source code, speeding up the development of a smooth and non-destructive round-trip workflow between the tools. Now all users, including artists and designers, can easily send scenes back and forth between Maya/Max and Unity with high fidelity. The new 2017.2 FBX Importer/Exporter package includes a custom Maya plug-in, and provides the following features: support for GameObject hierarchies, materials, textures, Stingray PBS shaders, and animated custom properties.

There's more!
Those are just the highlights of Unity 2017.2; read on to get the juicy details!

Tilemap
The new Tilemap feature enables you to build complex, grid-based worlds in 2D games right in Unity. You can quickly and easily create tilemap-based levels without using a third-party solution. Among other things, you can create your own palettes of tiles and smart brushes and then easily access them to paint on a grid-based system directly in the scene.
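Tiles can also be placed from script on top of the hand-painting workflow. A minimal sketch (the Tilemap reference and the `groundTile` asset are placeholders you would assign in your own project):

```csharp
using UnityEngine;
using UnityEngine.Tilemaps;

// Minimal sketch: stamp a rectangle of tiles onto a Tilemap at runtime.
// "tilemap" and "groundTile" are placeholder references assigned in the Inspector.
public class TilemapStamper : MonoBehaviour
{
    public Tilemap tilemap;     // Tilemap component under a Grid GameObject
    public TileBase groundTile; // any Tile (or custom rule tile) asset

    void Start()
    {
        for (int x = 0; x < 10; x++)
            for (int y = 0; y < 3; y++)
                tilemap.SetTile(new Vector3Int(x, y, 0), groundTile);
    }
}
```

Hand-painted levels and procedurally generated ones can share the same tile assets this way.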
Cinemachine comes to 2D
The power of Cinemachine's dynamic, procedural cameras has come to 2D game design. Now, you can easily enhance and automate composition and tracking for 2D gameplay, characters, and environments to improve the player experience and save hours of programming. The Cinemachine feature is available via the Asset Store; add it to your project now. While Cinemachine already had a broad feature set in which many of the modules functioned perfectly well for 2D, we've now added some 2D-specific functionality, including the following.

Framing Transposer: Move the camera to track and follow objects.
Group Target: Track the center of a group of objects, and adjust the weight and influence of each one.
Group Composer: Have the camera zoom and/or dolly to keep a group of targets on screen (more for 2.5D or "3D" 2D games).
Orthographic projection rendering: Set the main Unity camera to Orthographic projection for a pure 2D game (works in 3D for those 2.5D games where you want to use actual parallax and perspective).
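These modules can also be wired up from code. A rough sketch, assuming the Cinemachine 2.x scripting API (component and field names such as `CinemachineFramingTransposer` and `m_Follow` may differ between versions); the player target is a placeholder:

```csharp
using UnityEngine;
using Cinemachine;

// Rough sketch: create a virtual camera that follows a 2D player using the
// Framing Transposer. Assumes the Cinemachine 2.x scripting API; "player"
// is a placeholder reference assigned in the Inspector.
public class Setup2DCamera : MonoBehaviour
{
    public Transform player;

    void Start()
    {
        var vcam = new GameObject("CM vcam 2D").AddComponent<CinemachineVirtualCamera>();
        vcam.m_Follow = player;

        // The Framing Transposer keeps the target framed in screen space,
        // which suits an orthographic 2D camera.
        var framing = vcam.AddCinemachineComponent<CinemachineFramingTransposer>();
        framing.m_XDamping = 1f;
        framing.m_YDamping = 1f;
    }
}
```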
Other 2D improvements
Improvements also include sprite atlas packing, which now spends less time reconciling sprites from the cached atlas. We've also updated BoxCollider2D Sprite Tiling generation to produce cleaner shapes. Finally, we've changed how 9-sliced Sprites with Sprite Tiling behavior are rendered in SpriteRenderer for negative width/height values, to produce a more polished result.

2D Extras
2D-extras is a repository containing helpful custom Tiles and Brushes, like the custom rule-based Tilemap shown at the Unite keynote with assets from the 2D game Phased (by Epichouse Studios). These reusable scripts will help you create custom tiles & brushes for your games. Feel free to customize the behavior of the scripts to create new tools for your own specific use cases!

Timeline visualization of audio clips
You can now see a visualization of audio clips in the form of audio waveforms when using Timeline. This is useful when timing actions and events to audio cues, just like in a non-linear video-editing system. You can easily drag any timeline clip to match audio, or move audio to match actions in the scene.

Interactive Tutorials
In-Editor Tutorials offer a new, interactive way of learning how to get started in Unity. The new Tutorial panel in Unity 2017.2 instructs you and responds to your actions as it leads you through a series of tutorials for absolute beginners. Each tutorial gives you the opportunity to interact with Unity to fix parts of a ready-made game as you try to get the player character to the goal. At the same time, you learn the interface and basic Unity concepts. We will be creating more interactive tutorials and plan to open up the creative tools that power this to the community, so that you too can create interactive tutorials for Unity or for your Asset Store tools. We hope you like this new tool and can't wait to hear your feedback, so sound off in the comments below or on the dedicated forum thread and let us know!

Visualization of NavMesh in real time for debugging
Debug data from the process of building a NavMesh with the NavMeshBuilder API can now be selectively collected and visualized in the Editor using `NavMeshEditorHelpers.DrawBuildDebug()`.

Workflow with Digital Content Creation (DCC) tools
FBX Importer/Exporter: Unity and Autodesk are directly collaborating to bring you the best FBX support in the industry. We are working to make your content creation and interactive workflow pipelines as efficient and effective as possible. This collaboration has enabled Unity to work directly on FBX SDK source code, making improvements to the Unity FBX importer and exporter and a custom Unity plugin for Maya. All in all, it has resulted in a powerful round-trip workflow.

The new Unity FBX Exporter adds the ability to export FBX geometry for use outside of Unity. In particular, the improvements in FBX support allow you to send your work to Maya/Max and then non-destructively merge changes back into your Unity Asset. The exporter also provides support for materials and textures, and GameObject components including colliders, rigid bodies, scripts, and audio. Exporting from Maya to Unity is now simpler and more complete than ever thanks to the Unity custom Maya exporter plugin. With one click in Maya, you can export FBX files, including materials, textures, and Stingray Physically-Based Shaders, with maximum fidelity for use in Unity. The improved Unity FBX importer provides support for hierarchies, materials, textures, Stingray PBS shaders, and animated custom properties (when present in the FBX file). With these improvements, work done in Unity is preserved, and updated Maya assets slot right back into your Unity scene, so you can simply pick up where you left off and continue your work. The FBX Exporter package (in beta) is available from the Unity Asset Store, and includes the custom Maya plug-in.

Embedded materials on import
Now you can create materials inside the imported prefab instead of in an external "Materials" folder. FBX files may contain embedded textures and materials. Until recently, the first import always created additional assets; subsequent imports did not create additional assets unless the generated materials had been moved or deleted, and textures were overwritten on every import. We have added the option of making embedded materials appear inside the FBX in the project, and made them read-only. You can also manually extract textures via a button in the import inspector. Finally, extracting an FBX file into the project creates an editable clone. This clone is explicitly associated with the original FBX meshes via the importer's metadata.

Animated Custom Properties
Various DCCs (e.g. Maya and 3DSMax) support adding custom properties (or attributes) to objects. Unity can now import animation curves on custom properties from FBX files (disabled by default). These will appear in the Animation Window as Animator properties, just like additional curves created from imported clips. You can then use a MonoBehaviour to drive other Component properties, or use an AssetPostprocessor to bind your curves directly to any Component.
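To make the MonoBehaviour route concrete, here is a minimal sketch (the `glowAmount` property and the Light target are made-up names for illustration; the float field is simply the one your imported custom-property curve animates):

```csharp
using UnityEngine;

// Minimal sketch: an imported FBX custom-property curve animates "glowAmount"
// on this component, and LateUpdate forwards the value to another component,
// here a Light's intensity. All names are illustrative.
public class CustomPropertyDriver : MonoBehaviour
{
    public float glowAmount;  // target this field with the imported animation curve
    public Light targetLight;

    void LateUpdate()
    {
        if (targetLight != null)
            targetLight.intensity = glowAmount;
    }
}
```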
Maya/3DSMax Stingray physically based shader
Importing FBX files containing models that use the Stingray PBS shader is now supported.

Notes: The Stingray PBS shader has slightly different properties than the Unity Standard Shader, so we have created a shader variant called "Standard (Roughness setup)". This shader has a separate roughness map, and thus consumes more graphics resources than the Unity Standard Shader; for this reason, it requires shader model 3.5 in forward rendering. We recommend using the Standard shader where possible. The Stingray PBS and Unity Standard shaders have similar looks and responses to light, but do not use the same code, so there will be differences between what you see in Maya or 3DSMax and what you see in Unity. Any changes made to the underlying ShaderFX graph in Maya or Max are not exported in the FBX file and thus will not be reflected in Unity; we do not recommend modifying the ShaderFX graph.

Improvements to Avatar Masks
In 2017.2, the workflow and UI for mask updates have been improved by displaying the updated hierarchy, with invalid mask paths shown in red. Mask checkboxes for invalid paths are disabled.

New AssetBundle API for more control
AssetBundles let you segment your app into multiple files and call upon them when you need them, whether they're local or remote. This is great for optimizing performance or managing how your app files are distributed. Shipping with 2017.2 is a brand new API for AssetBundles. If you use AssetBundles to pull in content securely from a CDN, this could be for you. Previously, your options for loading AssetBundles were either from file or from memory. Sometimes, however, an intermediary step needs to be taken before the data is usable. For example, we wanted to solve the issue where a client pulls data from a secure CDN: decrypted content would be served as a stream, and you'd have to write extra steps to convert the data to a usable format. Now, you can read the data directly as a managed Stream object with the new API, AssetBundle.LoadFromStream, and less code is required from you!
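A minimal sketch of the stream-based path (the bundle path and asset name are placeholders; a real client would substitute its decrypting CDN stream for the plain FileStream used here):

```csharp
using System.IO;
using UnityEngine;

// Minimal sketch: load an AssetBundle from a managed Stream instead of a file
// path or byte[]. "mybundle" and "ExamplePrefab" are placeholder names; in
// practice the stream might be a decrypting wrapper around a CDN download.
public class StreamBundleLoader : MonoBehaviour
{
    void Start()
    {
        string path = Path.Combine(Application.streamingAssetsPath, "mybundle");
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            AssetBundle bundle = AssetBundle.LoadFromStream(stream);
            var prefab = bundle.LoadAsset<GameObject>("ExamplePrefab");
            if (prefab != null)
                Instantiate(prefab);
            bundle.Unload(false); // unload before the stream is disposed
        }
    }
}
```

Note that the stream generally has to stay open (and seekable) for as long as the bundle loaded from it is in use, which is why the bundle is unloaded before the `using` block closes.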
Platform: Built-in Vuforia Support
We have introduced integrated support for the development of Vuforia-enabled apps in the Unity Editor. With Vuforia, you can now create cross-platform augmented reality experiences using everyday objects. Vuforia enables you to attach digital content to images and physical 3D objects, identify and track objects using custom-designed marker icons, and much more. Vuforia support can be installed through the Unity Download Assistant and enabled under Project Settings > Player Settings > XR Settings. You can learn more about Vuforia here and download their Core Sample Assets here for free.

About Vuforia: Vuforia is a software platform for augmented reality applications on handheld and headworn devices. It delivers a cross-platform solution for attaching digital content to physical objects and environments. Vuforia is supported by a global ecosystem of more than 375,000 registered developers and more than 45,000 published applications.

Windows Mixed Reality
Unity now has brand new native support for Windows Mixed Reality immersive headsets, enabling creators to publish VR content to the Microsoft Store. Unity's support also includes workflow enhancements, such as being able to preview the HMD view on-device through the Editor. Whether you're creating a tailored experience just for this platform or porting an existing VR game, Unity has unlocked access to an entirely new range of VR devices. For more detailed information, head to the getting started guide.

OpenVR support for macOS
Unity has worked closely with Apple and Valve to optimize Metal 2 to run against Unity's current VR rendering paths: Multi-Pass and variants of Single-Pass. For the final release, developers will be able to improve performance by using the new Metal 2 features announced at WWDC and combining them with instancing, which will cut the number of draw calls required in half. For more information on virtual reality development in Unity, head to our manual page.

Google ARCore (plugin)
We have added support for Google ARCore augmented reality technology when targeting Android 7.0 and above. The ARCore API provides accurate device position and orientation information as well as feature-point detection, which identifies the physical space of the user's surroundings. Unity's support for ARCore makes it easy for you to drive a standard Unity camera using your device's real-world position and orientation. This enables you to create planes representing surfaces of the device's surroundings and to render the color camera's image as the background for an augmented reality experience. The SDK currently supports development for the Google Pixel and Pixel XL, and the Samsung Galaxy S8, running Android 7.0 Nougat and above; it requires Android API level 24 or later.

To set up the SDK, download and import the ARCore SDK for Unity, then configure Unity for ARCore development following the steps in the Unity Forum. We can't wait to see the amazing experiences that you produce with ARCore. Join the discussion on our forum to be a part of this exciting new frontier in AR!

Apple ARKit (Plug-in)
Since the original announcement of ARKit at WWDC and the launch of our Unity ARKit plugin back in June, we've seen an incredible response from the community. We have worked side-by-side with developers and made continual improvements to our plugin. The Unity ARKit plugin provides you with friendly access to ARKit's features: motion tracking, live video rendering, plane finding and hit-testing, ambient light estimation, raw point cloud data, and more. There are also Unity components that make it easier for you to create new AR apps, or to easily integrate AR features into existing Unity projects. Unity's ARKit plugin has a unique capability that will save you hours of development time: the Unity ARKit Remote. This tool speeds up iteration by allowing you to make changes to the scene and debug scripts in the Unity Editor, in real time, without having to build to the device.
Our plugin now supports access to the following new functionalities:
Light estimation for ambient color temperature, in addition to the ambient intensity
Specifying your own User Anchors
Receiving events when the tracking state has changed
Event notifications for when an AR session is interrupted or resumes

We have included many new examples to help you get up and running on your AR project fast, including:
Scaled content (imagine a city in your living room)
Focus Square (a UI element to show where to place objects)
Occlusion (a shader and material that will hide virtual objects behind real ones)
Shadow (a shader and material to ground your virtual object in the real world)

The Unity ARKit Plugin is available now as a package from the Asset Store. It is also available as an open-source repository on BitBucket, where you can join us in making it even better. Head over to the forums to learn how to get started or if you have any questions.

Stereo Instancing (PC)
Stereo Instancing (also known as single-pass instanced rendering) is an evolution of Unity's single-pass rendering and is now supported when building with DX11, allowing developers to greatly optimize performance for Vive, Oculus Rift, and Windows Mixed Reality immersive headsets. The biggest impact of using this technique is that you can dramatically reduce (at times, halve) the number of draw calls generated on the API side, saving a good chunk of CPU time. Additionally, the GPU itself is able to process the draws more efficiently (even though the same amount of work is being generated). Note: Stereo Instancing is only supported with forward rendering. To enable this feature, open Player Settings (menu: Edit > Project Settings > Player), navigate to XR Settings, ensure the Virtual Reality Supported checkbox is ticked, then tick the Single-Pass Stereo Rendering checkbox. Note that Stereo Instancing only works with Windows 10; you can find more information here.

Tracked Pose Driver
The Tracked Pose Driver is a new cross-platform component that makes it simpler and more intuitive to drive GameObjects in the scene from tracked XR devices. More info can be found here.

Editor Simulation for Vive HMD
This new feature allows certain aspects of the Vive HMD to be simulated in the Editor, without the need for a physical HMD. It is enabled by adding "Mock HMD - Vive" to the Virtual Reality SDKs list in Player Settings > XR Settings. The mock HMD will use the same asymmetric projection matrix, hidden occlusion mesh, field of view, aspect ratio, and eye texture size as the Vive. Mock HMD can be used with both multi-pass and single-pass rendering paths, and it will render as a split-screen stereo display in the Editor.

Native Rendering Plugin support for Nintendo Switch
Other improvements include Native Rendering Plugin support for Nintendo Switch, which enables you to implement low-level rendering and work with Unity's multi-threaded rendering.

macOS player Retina support
We've added support for targeting Retina resolutions on macOS devices that support it.

Windows Player Launcher
We've moved the majority of the Windows Standalone Player into a separate signed DLL ("UnityPlayer.dll"), leaving the executable as a thin wrapper that just calls into it.

Support for Samsung Tizen & SmartTV
Unity 2017.2 will be the last version to support Samsung Tizen and SmartTV. Following this release, Unity will provide 12 months of support, including patches and security updates. To get the latest info on Tizen and SmartTV for Unity, visit the partner page for Samsung.
GI Profiler
The addition of the GI profiler shows relevant statistics, including how much CPU time is consumed by the Realtime Global Illumination subsystem, to help you optimize GI in your scenes.

HDR Emission
Global Illumination emission now uses a 16-bit floating point format for both realtime and baked GI. The HDR color picker limit has increased from 99 to 64k to unlock the full range. This makes it possible to emit much stronger light from emissive surfaces.

Lightmap background (push-pull dilation)
This feature fills the empty areas in the lightmaps with content from lower MIP levels (push-pull dilation). It fixes the cases where dark pixels around geometry edges are visible when rendering with lightmaps, which is caused by dark background texels bleeding in when lower MIPs are accessed.

Progressive Lightmapper Improvements
We now support double-sided materials in the Progressive Lightmapper. We've added a new material setting that causes lighting to interact with backfaces. When enabled, both sides of the geometry are accounted for when calculating Global Illumination.

Tree Lightmap Baking
Terrain trees can now cast shadows into the baked lightmap for a lightmap-static terrain. The trees themselves will be lit using a light probe that is automatically placed above the tree.

Lightmap stitching
Lightmap seam stitching makes it easy to get rid of those annoying edge seams in lightmaps.

A-Trous filtering
For Unity 2017.2, we have added advanced filtering options with a new A-Trous kernel. The new filter preserves shadow edges and contact shapes much better, while at the same time smoothing out noisy areas.

Linear Rendering with WebGL 2.0
You can now be sure that linear rendering inputs, outputs, and computation are in the correct color space. The brightness of the final image will be adjusted linearly according to the amount of light in the scene. That means more consistent lighting across your scenes and assets. You can get more details here. Linear rendering is now supported on:
Windows, Mac OS X and Linux (Standalone)
Xbox One
PlayStation 4
Android with OpenGL ES 3.x and Vulkan
iOS with Metal
WebGL 2.0

Linear rendering is particularly interesting because it allows you to use the Unity Post Processing Stack, including temporal anti-aliasing, and also achieves excellent results with WebGL. Linear rendering in the Unity WebGL player works on any web browser that supports WebGL 2.0.

Editable Custom Data Module Labels
The Custom Data module allows you to specify data that can be used for many different purposes. We've made the labels on the curves and gradients editable to allow you to describe what each bit of custom data is used for.

Inherit Lifetime for sub-emitter particles
There is a new option in the Inherit dropdown for sub-emitters, allowing them to base their own lifetime on the remaining lifetime of their parent system. This can be useful for creating effects that are guaranteed to only last a specific amount of time, even if they collide and create new particles, for example.

Linear Drag
A new option in the Limit Velocity over Lifetime module allows you to apply linear drag to your particles. Add it to effects with variously sized particles in order to have the smaller ones travel further and faster than the larger particles. A great use case for this is when creating explosion debris effects.

Auto-Destruct/Disable
It's now possible to destroy or disable a Particle System when it has finished playback. Destroying is great for one-shot effects and allows you to avoid having to write your own cleanup code. Disabling, on the other hand, can be useful when you are managing your own pool of Particle System GameObjects.
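Both of the particle options above are also exposed on the scripting API. A small sketch, assuming the 2017.2 module properties (`stopAction` on the Main module and `drag` on Limit Velocity over Lifetime); the drag values are arbitrary:

```csharp
using UnityEngine;

// Small sketch: configure an effect to apply linear drag and destroy itself
// when playback finishes, instead of relying on manual cleanup code.
[RequireComponent(typeof(ParticleSystem))]
public class OneShotDebris : MonoBehaviour
{
    void Awake()
    {
        var ps = GetComponent<ParticleSystem>();

        // Auto-destruct the GameObject once the system has finished playing.
        var main = ps.main;
        main.stopAction = ParticleSystemStopAction.Destroy;

        // Linear drag: larger particles slow down more than small ones.
        var limit = ps.limitVelocityOverLifetime;
        limit.enabled = true;
        limit.drag = 0.5f;
        limit.multiplyDragByParticleSize = true;
    }
}
```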
Burst Emission
Burst counts can now be configured to use the same curve options as many other Particle System properties.

Restart Button
We've added a Restart button to the Scene View overlay in order to spare you the trouble of having to press Stop and Play to restart an effect.

Running a game as a live service means you can adapt your game to your players' needs, keep your game fresh, and make the experience more enjoyable. In 2017.2, our Remote Settings feature is officially out of beta.

Remote Settings
Remote Settings is easy to use: it's native to the Unity engine and employs an API similar to PlayerPrefs, which most Unity developers are familiar with. Recently, we made a significant update to this feature: Remote Settings now supports segments, so you can act directly and immediately on player segments and tailor your game to suit specific groups of players, all without shipping a new binary. A demo of the upcoming Final Fantasy XV Pocket Edition, a remaster of the original title in the form of an all-new mobile adventure, shows how Remote Settings works in a AAA title. Remote Settings is now officially live and available on your Analytics dashboard.
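The PlayerPrefs-style access pattern looks roughly like this; the keys and default values below are made up for illustration:

```csharp
using UnityEngine;

// Rough sketch: read tuned values with the PlayerPrefs-style Remote Settings
// getters. "enemyDifficulty" and "welcomeMessage" are made-up keys; the second
// argument is the default used until fresh settings arrive from the service.
public class DifficultyTuning : MonoBehaviour
{
    public static float EnemyDifficulty = 1.0f;

    void Start()
    {
        RemoteSettings.Updated += ApplyRemoteSettings;
        ApplyRemoteSettings(); // apply whatever was cached from the last session
    }

    void ApplyRemoteSettings()
    {
        EnemyDifficulty = RemoteSettings.GetFloat("enemyDifficulty", 1.0f);
        Debug.Log(RemoteSettings.GetString("welcomeMessage", "Hello!"));
    }

    void OnDestroy()
    {
        RemoteSettings.Updated -= ApplyRemoteSettings;
    }
}
```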
Performance Reporting: Android Native Crash
Now, when you use the Performance Reporting service, native crash reports will automatically get sent from your players' Android devices to the Performance Reporting service. In the developer dashboard, you'll be able to see these crash reports alongside your unhandled managed exceptions and native iOS crash reports. You can enable the Performance Reporting service for your project in the Services window in the Editor.

Recorder (experimental)
The Recorder captures frames during gameplay and produces image sequences (JPG, PNG, GIF, OpenEXR) and video files (WebM, H.264/Windows only). You can add this feature to the Unity Editor by downloading the Recorder from the Asset Store. This first experimental release includes a dedicated recorder window to select recording options. You can also trigger recording sessions directly from Timeline with a Frame Recorder Track.

Package Manager
Although Unity users won't see any change in 2017.2, we wanted to give you a heads-up. In 2017.2, we are introducing a Package Manager which will enable a more flexible and modular approach to managing all the components and subsystems that ultimately make up Unity. For this first release, we are exposing an API for enabling internal components to be updated more frequently than the Editor (the first pillar of the system). Stay tuned for more details.

As always, refer to the release notes for the full list of new features, improvements and fixes. We also want to send a big thanks to everyone who helped beta test 2017.2, making it possible to release 2017.2 today.

Info on the Asset Store sweepstakes winners
We are currently reviewing all the 2017.2 beta sweepstakes winners and will send gift vouchers by email to those of you who have qualified in the weeks to come.

Be part of the 2017.3 beta
If you are not already a beta tester, perhaps you'd like to consider becoming one. You'll get early access to the latest new features, and you can test whether your project is compatible with the new beta. You can get access simply by downloading our open beta. By joining our open beta, you won't just get access to all the new features, you'll also help us find bugs, ensuring the highest quality software. As a starting point, have a look at this guide to being an effective beta tester to get an overview. If you would like to receive occasional emails with beta news, updates, tips and tricks, please sign up for the beta newsletter.

>access_file_
1585|blog.unity.com

Neon

Neon is a small environment created by our Demo team's creative director Veselin Efremov in just a couple of days, with Unity 2017.1 and models from the Asset Store. In this blog post, he shares some thoughts on the project and how it came about.

Starting with a bit of background: we were on the lookout for an environment artist, and I'd invited a candidate to an on-site interview in our Stockholm office. My idea was, rather than asking questions, to just build an actual small scene together over the two days we had. I'd been working on a larger project for a while, so it was a welcome break for me. The "art jam" experience felt refreshingly creative, so after we were done, I decided to do another one from scratch, with the help of our tech lead Torbjorn Laedre, and it became Neon.

I started by downloading the latest public release of Unity from the website, and brought in the Post-Processing Stack v1 and the Realtime area lights / Volumetric fog from GitHub. These are the first things I usually put in a new project, because I like to use the high-quality effects and lighting to guide my creative decisions later on. I can't bear to look at ugly stuff even if it's just a box in an empty scene.

Torbjorn had suggested going for a cyberpunk vibe, so I added some emissive boxes and cranked up the atmosphere density. Having made this rough layout, I liked the mood that the lighting and the fog were creating, and decided to develop it a bit further. I decided to use the Asset Store to achieve a finished look as quickly as possible. There's a huge choice of assets, so I'd just grab whatever looked like it would fit the overall idea, and used these to detail the scene in passes. Many of the Asset Store assets were very cheap and some were even free, so I had a lot of choice and was able to advance from the first generic sketch to a fairly finished look in less than two days. I take many screenshots as I work; here they are in a sequence which shows how the look evolved.

I quickly realised that I could go for a relatively polished look with these assets, so I decided to turn it into a deliberate challenge: not to move a single vertex, but to rely entirely on the stock models. Instead of creating custom props, I would experiment with what was available and often used the assets in unlikely ways. For instance, the mechanical underside of this building overhang is actually a part of a spaceship. And I got an entire factory set only to use the cables. This is obviously a very unoptimised way of working, but its value is in how quickly it allows me to build a space which, to the "naked eye", has the look and feel of something complete and visually appealing.

Early prototyping to final quality is essential to the way I work, and applies to larger projects as well, for instance the cinematic demos we do at the Demo team. If we were making a Blade Runner-inspired demo, this small vignette would serve as our starting point for the breakdown of the project. On a small team, having a prototype on day 2 from project start means that everyone gets a very good idea of what needs to be done. It's especially important that the artists get to discover what potential the engine tech can give them, when it is early enough to change course creatively and decide to make use of some opportunities they wouldn't otherwise foresee. It is even more important that the programmers get a clear picture of the art- and rendering-related tasks on the project, and can identify pitfalls early on.
Game designers can dive right in and start experimenting with the experience. Needless to say, it also helps guide the decisions about where to put the most effort in asset production. You don't always need to design and create your own custom set of pipes and cables when some existing ones might do a perfect job, while the art team focuses on the most important assets that are unique and style-defining for the game.

At the end stretch, we pulled in some help from the others on our team in order to make the scene feel more complete. The rain was an important part of achieving the look, and Torbjorn had been working on it in the meantime. Our VFX artist Zdravko Pavlov rendered the rain texture. The characters were static, as I was using them to populate the space and for scale reference, so it was great that our Animation Director Krasimir Nechevski could pitch in with some animations he had lying around from another project and make the humans walk around and idle. Aleksander Karshikoff made the audio.

By now many others from Unity are using this small scene to try something out, stretch a feature, or have some creative fun; and I've gone back to my main project. Hope you enjoyed this glimpse into my work process. Here is the full list of the stock assets I used:

Asset Store Assets
Package Name: Umbrella Pro | Publisher: Indie_G | Price: 4.02 (EUR)
Package Name: Assorted Prop Pack | Publisher: Volumetric Games | Price: 10.71
Package Name: Dragon Statue | Publisher: Viacheslav Titenko | Price: Free
Package Name: Fire Escape | Publisher: Undead Studios | Price: 4.47
Package Name: SciFi Industrial Level Kit | Publisher: Ximo Catala | Price: 62.53
Package Name: The Probe | Publisher: MACHIN3 | Price: 13.40
Package Name: The Starfighter | Publisher: MACHIN3 | Price: 44.67
Package Name: Pipes Kit | Publisher: Mojo Structure | Price: Free
Package Name: Neon Signs Asset Pack | Publisher: Zymo Entertainment | Price: 8.93
Package Name: Street Props Pack | Publisher: Coded Soul | Price: 8.93
Package Name: RainDrop Effect 2 | Publisher: Globe Games | Price: Free
Package Name: Retro Cartoon Cars - Cicada - Free | Publisher: Retro Valorem | Price: Free
Package Name: SF Buildings (RTS, FPS) | Publisher: GESO Media | Price: 17.87
Package Name: Unity Particle Pack | Publisher: Unity Technologies | Price: Free
Package Name: Sci-Fi Top Down Pack 1 | Publisher: Velkin Labs | Price: 35.73
Package Name: Street Lights | Publisher: AI Games | Price: 2.23
Total: 213.49 (EUR)

Other Assets
Four surfaces | Quixel Megascans | Subscription
Sky HDRI | NoEmotionHDRs.net | Free (CC 4.0)
Human Character - Male Écorché | 3dscanstore.com | 45.00

>access_file_
1587|blog.unity.com

Introducing Unity Machine Learning Agents Toolkit

Our two previous blog entries implied that there is a role games can play in driving the development of Reinforcement Learning algorithms. As the world's most popular creation engine, Unity is at the crossroads between machine learning and gaming. It is critical to our mission to enable machine learning researchers with the most powerful training scenarios, and for us to give back to the gaming community by enabling them to utilize the latest machine learning technologies. As the first step in this endeavor, we are excited to introduce the Unity Machine Learning Agents Toolkit.

Machine learning is changing the way we expect to get intelligent behavior out of autonomous agents. Whereas in the past the behavior was coded by hand, it is increasingly taught to the agent (either a robot or virtual avatar) through interaction in a training environment. This method is used to learn behavior for everything from industrial robots, drones, and autonomous vehicles, to game characters and opponents. The quality of this training environment is critical to the kinds of behaviors that can be learned, and there are often trade-offs of one kind or another that need to be made. The typical scenario for training agents in virtual environments is to have a single environment and agent which are tightly coupled. The actions of the agent change the state of the environment and provide the agent with rewards.

The typical Reinforcement Learning training cycle.

At Unity, we wanted to design a system that provides greater flexibility and ease of use to the growing groups interested in applying machine learning to developing intelligent agents. Moreover, we wanted to do this while taking advantage of the high-quality physics and graphics, and the simple yet powerful developer control, provided by the Unity Engine and Editor. We think that this combination can benefit the following groups in ways that other solutions might not:

Academic researchers interested in studying complex multi-agent behavior in realistic competitive and cooperative scenarios.
Industry researchers interested in large-scale parallel training regimes for robotics, autonomous vehicles, and other industrial applications.
Game developers interested in filling virtual worlds with intelligent agents, each acting with dynamic and engaging behavior.

We call our solution the Unity Machine Learning Agents Toolkit (ML-Agents toolkit for short), and are happy to be releasing an open beta version of our SDK today! The ML-Agents SDK allows researchers and developers to transform games and simulations created using the Unity Editor into environments where intelligent agents can be trained using Deep Reinforcement Learning, Evolutionary Strategies, or other machine learning methods through a simple-to-use Python API. We are releasing this beta version of the Unity ML-Agents toolkit as open-source software, with a set of example projects and baseline algorithms to get you started. As this is an initial beta release, we are actively looking for feedback, and encourage anyone interested to contribute on our GitHub page. For more information on the Unity ML-Agents toolkit, continue reading below! For more detailed documentation, see our GitHub Wiki.

A visual depiction of how a Learning Environment might be configured within the Unity ML-Agents Toolkit.

The three main kinds of objects within any Learning Environment are:

Agent - Each Agent can have a unique set of states and observations, take unique actions within the environment, and receive unique rewards for events within the environment. An agent's actions are decided by the brain it is linked to.
Brain - Each Brain defines a specific state and action space, and is responsible for deciding which actions each of its linked agents will take. The current release supports Brains being set to one of four modes:
External - Action decisions are made using TensorFlow (or your ML library of choice) through communication over an open socket with our Python API.
Internal (Experimental) - Action decisions are made using a trained model embedded into the project via TensorFlowSharp.
Player - Action decisions are made using player input.
Heuristic - Action decisions are made using hand-coded behavior.

Academy - The Academy object within a scene also contains all Brains within the environment as children. Each environment contains a single Academy, which defines the scope of the environment in terms of:
Engine Configuration - The speed and rendering quality of the game engine in both training and inference modes.
Frameskip - How many engine steps to skip between each agent making a new decision.
Global episode length - How long the episode will last. When this length is reached, all agents are set to done.

The states and observations of all agents with brains set to External are collected by the External Communicator and communicated to our Python API for processing using your ML library of choice. By setting multiple agents to a single brain, actions can be decided in a batch fashion, opening the possibility of getting the advantages of parallel computation, when supported. For more information on how these objects work together within a scene, see our wiki page.

With the Unity ML-Agents toolkit, a variety of training scenarios are possible, depending on how agents, brains, and rewards are connected. We are excited to see what kinds of novel and fun environments the community creates. For those new to training intelligent agents, below are a few examples that can serve as inspiration. Each is a prototypical environment configuration with a description of how it can be created using the ML-Agents SDK.

Single-Agent - A single agent linked to a single brain. The traditional way of training an agent. An example is any single-player game, such as Chicken. (Demo project included - "GridWorld")

Simultaneous Single-Agent - Multiple independent agents with independent reward functions linked to a single brain. A parallelized version of the traditional training scenario, which can speed up and stabilize the training process. An example might be training a dozen robot arms to each open a door simultaneously. (Demo project included - "3DBall")

Adversarial Self-Play - Two interacting agents with inverse reward functions linked to a single brain. In two-player games, adversarial self-play can allow an agent to become increasingly more skilled, while always having the perfectly matched opponent: itself. This was the strategy employed when training AlphaGo, and more recently used by OpenAI to train a human-beating 1v1 Dota 2 agent. (Demo project included - "Tennis")

Cooperative Multi-Agent - Multiple interacting agents with a shared reward function linked to either a single or multiple different brains. In this scenario, all agents must work together to accomplish a task that couldn't be done alone. Examples include environments where each agent only has access to partial information, which needs to be shared in order to accomplish the task or collaboratively solve a puzzle. (Demo project coming soon)
Competitive Multi-Agent - Multiple interacting agents with inverse reward functions linked to either a single or multiple different brains. In this scenario, agents must compete with one another to either win a competition or obtain some limited set of resources. All team sports would fall into this scenario. (Demo project coming soon)

Ecosystem - Multiple interacting agents with independent reward functions linked to either a single or multiple different brains. This scenario can be thought of as creating a small world in which animals with different goals all interact, such as a savanna in which there might be zebras, elephants, and giraffes, or an autonomous driving simulation within an urban environment. (Demo project coming soon)

Beyond the flexible training scenarios made possible by the Academy/Brain/Agent system, the Unity ML-Agents toolkit also includes other features which improve the flexibility and interpretability of the training process.

Monitoring Agent's Decision Making - Since communication in the Unity ML-Agents toolkit is a two-way street, we provide an Agent Monitor class in Unity which can display aspects of the trained agent, such as policy and value output, within the Unity environment itself. By providing these outputs in real time, researchers and developers can more easily debug an agent's behavior. Above each agent is a value estimate, corresponding to how much future reward the agent expects. When the right agent misses the ball, the value estimate drops to zero, since it expects the episode to end soon, resulting in no additional reward.

Curriculum Learning - It is often difficult for agents to learn a complex task at the beginning of the training process. Curriculum learning is the process of gradually increasing the difficulty of a task to allow more efficient learning. The Unity ML-Agents toolkit supports setting custom environment parameters every time the environment is reset. This allows elements of the environment related to difficulty or complexity to be dynamically adjusted based on training progress, as in the different possible configurations of the GridWorld environment with increasing complexity.

Complex Visual Observations - Unlike other platforms, where the agent's observation might be limited to a single vector or image, the Unity ML-Agents toolkit allows multiple cameras to be used for observations per agent. This enables agents to learn to integrate information from multiple visual streams, as would be the case when training a self-driving car which requires multiple cameras with different viewpoints, a navigational agent which might need to integrate aerial and first-person visuals, or an agent which takes both a raw visual input and a depth map or object-segmented image. With two different camera views on the same environment provided to an agent, it can learn to utilize both first-person and map-like information about the task to defeat the opponent.

Imitation Learning (Coming Soon) - It is often more intuitive to simply demonstrate the behavior we want an agent to perform, rather than attempting to have it learn via trial-and-error methods. In a future release, the Unity ML-Agents toolkit will provide the ability to record all state/action/reward information for use in supervised learning scenarios, such as imitation learning. By utilizing imitation learning, a player can provide demonstrations of how an agent should behave in an environment, and then utilize those demonstrations to train an agent in either a standalone fashion, or as a first step in a reinforcement learning process.
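To make the Agent/Brain split concrete, here is a rough sketch of what an Agent subclass looks like on the C# side. It is written against the early open-beta API as we understand it (method names such as CollectState and AgentStep changed in later releases, which use CollectObservations and AgentAction instead), so treat it as illustrative rather than definitive; the reward shaping and target object are made up:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Rough sketch of an Agent subclass from the early ML-Agents open beta.
// Assumes the ML-Agents SDK scripts (the Agent base class) are in the project.
// The linked Brain decides the action; the agent only reports state, applies
// actions, and assigns rewards. Names are illustrative and may not match
// your installed ML-Agents version exactly.
public class RollerAgent : Agent
{
    public Transform target;   // hypothetical goal object
    public float moveSpeed = 3f;

    public override List<float> CollectState()
    {
        // Observation vector handed to the Brain (and on to Python when External).
        return new List<float>
        {
            transform.position.x, transform.position.z,
            target.position.x, target.position.z
        };
    }

    public override void AgentStep(float[] act)
    {
        // Apply the Brain's chosen action (here: a 2D move vector).
        var move = new Vector3(act[0], 0f, act[1]);
        transform.position += move * moveSpeed * Time.deltaTime;

        float distance = Vector3.Distance(transform.position, target.position);
        reward = -0.01f;          // small time penalty each step
        if (distance < 1f)
        {
            reward = 1f;          // reached the goal
            done = true;          // end the episode
        }
    }

    public override void AgentReset()
    {
        transform.position = Vector3.zero;
    }
}
```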
As mentioned above, we are excited to be releasing this open beta version of the Unity Machine Learning Agents Toolkit today, which can be downloaded from our GitHub page. This release is only the beginning, and we plan to iterate quickly and provide additional features both for those of you who are interested in Unity as a platform for machine learning research, and for those of you who are focused on the potential of machine learning in game development. While this beta release is more focused on the former group, we will be increasingly providing support for the latter use case. We are especially interested in hearing about use cases and features you would like to see included in future releases of the Unity ML-Agents Toolkit, and we will be welcoming Pull Requests made to the GitHub repository. Please feel free to reach out to us at ml-agents@unity3d.com to share feedback and thoughts. If the project sparks your interest, come join the Unity Machine Learning team!

Happy training!

>access_file_