// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1695 transmissions indexed — page 50 of 85

[ 2023 ]

20 entries
981|blog.unity.com

Context is everything: Cross-domain data mapping for augmented reality

AI is enjoying a heyday, although the path has yet to be paved in terms of how it can be leveraged for immersive experiences. AI is not the solution to every problem, but it does provide the missing link needed to create next-generation augmented experiences.

Considering this AI boom, I’d like to talk about the concept of “context” as it relates to creating augmented reality (AR) experiences. A context is a collection of parameters describing a domain that can be encoded into machine-processable data. A domain can be literally anything, but in the AR space, three critical domains exist: physical, virtual, and human.

The physical domain is the world we live in. The context for a specific location may include world coordinates (that is, a geographical position), environment scan data, object positions, weather, or images of the surroundings – whatever real-world parameters are relevant to support the generation of a solution to a specific need.

The virtual domain contains any data that has a useful correlation with a location in the real world. This is a broad definition, but that’s the idea: AR experiences don’t need complex 3D assets or models to provide value. Any kind of location metadata can be the basis for an experience – for example, rainfall data or the location of stock in a store.

Finally, the human domain is the body of human requirements, expressed in terms a machine can understand. This is the jumping-off point for AI, where natural language processing (NLP) and generative pre-trained transformer (GPT) models play a key role in converting the human context into machine language. The human domain also encompasses how machine-generated data is communicated.

Generating a domain context is a relatively straightforward task. Where it gets tricky is ensuring the relationships between components are usable: physical and virtual coordinate systems must align, digital twins must be up to date with the physical world, human descriptions must be mappable to trainable behaviors, and so on.

Both software and hardware related to emerging technologies, including AI, robotics, and the Internet of Things, are evolving rapidly. Until standards (for interoperability, for example) and best practices are in place governing their implementation, effective usage and compatibility rely on the skilled design of networked components. But once this system is designed, you have a generalized foundation for creating augmented reality experiences for any application, be it industry, retail, or general productivity improvement.

An example of how AI enables cross-domain mapping would be an individual pointing at an object in the distance. This is a physical context that many technologies can provide, but the gesture in itself doesn’t have intrinsic meaning and is not sufficient for defining the problem that needs solving. It could be a reference to a direction of travel, or an inquiry about an object. When correlated with language like “How do I get there?”, the context forms a complete query. Thus, AI can process the physical data, guided by human context data, to “understand” the natural interactions we all perform daily without thinking about them and generate an appropriate response. That transparency of request/response is elevating all forms of AR experience, with the ultimate goal of making our lives easier.

Let’s explore some examples of how context can be defined for different applications.
AI’s primary role is in the human domain: processing user requests, anticipating user needs, drawing on relevant data, and facilitating communication between people and devices.

Room painting: A user wants to paint a space and would like to know the quantity of materials needed. Specifics: They have a device that can measure the space and issue a voice command asking how much paint is needed.
Physical: Lidar scan of the physical space
Virtual: Digital twin of the space, created on the fly from the physical scan, including windows, doors, and walls to accurately determine the wall surface area
Human: The correlation between square footage and how much surface area a can of paint covers

Fitness routing: A user requests a variation on the route usually taken for a run. Specifics: The user has a headset that can determine the user’s location, has a record of previous routes, and can project visual information.
Physical: User location and recordings of previous runs
Virtual: Maps of the area that provide information on trails and sidewalks
Human: Understanding of what constitutes a route, to allow calculation of a new one

Airport optimization: Situational awareness and automation to improve operations management. A user needs just-in-time prompts for conducting airfield activities safely. Specifics: The user has a wrist wearable that can determine the user’s location and has a data connection to a central operational digital twin.
Physical: Locations of users, aircraft, assets, and physical-world objects
Virtual: Digital twin of the airport enabling simulation prediction, navigation, locations of points of interest, and geospatial processing
Human: Understanding of the mission, challenges, and key safety objectives

As these examples show, the value of real-time 3D extends well beyond the generation of impressive visuals. It’s the core engine for processing cross-domain contexts to generate spatial solutions to problems. Given that we live in a 3D world, it’s not surprising that real-time 3D plays a central role.

Unity as a core data engine has massive traction in the gaming market, so its applicability to non-game use cases is often overlooked. As wearables, devices, and AI models advance technologically, capturing more and better data, ever richer contexts will be defined, generating more precise solutions. Unity will be the primary tool for collecting this data and creating the experiences that make our lives better at work and play.

We’re excited by what our developers will create, and hopefully this blog post provided some ideas on how to structure your next-generation experiences. For more inspiration on how AI can drive immersive experiences, check out the potential of digital twins. And don’t forget to sign up for the Unity AI Beta Program.

>access_file_
982|blog.unity.com

Unity runtime on Arm-based Windows devices

With the launch of Unity 2023.1, developers using Unity can now target Arm-based Windows devices and achieve native performance on devices that use ARM64 processors, such as the Surface Pro 9 and the Lenovo ThinkPad X13s. This opens up new possibilities for developers to create high-performance, immersive experiences on a wider range of devices. This blog will dive into what is required to build games for Windows on Arm and offer a glimpse into the future of Unity Editor support for the platform.

The requirements for building your project for Windows on Arm are the same as for any other architecture Unity supports on Windows. If you’re using the Mono scripting backend, there are no other system requirements aside from downloading and installing the Unity Editor itself. If you’re using the IL2CPP scripting backend, you will need the Unity Editor, Visual Studio 2019 or newer with the C++ compiler for ARM64 component, and the Windows SDK installed.

Setting the build target to Windows on Arm can be done from the Build Settings window by setting the Architecture to “ARM 64-bit”. Alternatively, if you have set up your own build scripts, you can use the UnityEditor.WindowsStandalone.UserBuildSettings.architecture property to set the targeted architecture to ARM64 and produce an Arm build of your project.

In addition to Windows on Arm platform support, Unity 2023.1 includes improved features and render quality for both the High Definition Render Pipeline (HDRP) and Universal Render Pipeline (URP). It also features platform graphics improvements, additional connectivity types for multiplayer solutions, and more. Get started with Unity 2023.1 by visiting our download page or through the Unity Hub.

First showcased at GDC 2023, the URP 3D Sample Scene shows Unity’s scalability on a wide range of platforms. The Garden scene in particular shows how you can use Unity’s URP features to create beautiful, immersive environments on any device players choose to run it on. Unity running natively on Arm-based Windows devices can fully utilize the power of Arm processors to render the Garden scene in gorgeous detail, at a steady frame rate.

The Garden scene was showcased during Microsoft Build on May 24 during the breakout session, “Learn how to build the best Arm apps for Windows.” In this segment, you can see how native runtime support for ARM64 substantially reduces CPU usage when compared to running via an Arm emulation layer.

Alongside the launch of the Windows Dev Kit 2023 (Project Volterra), Unity announced that it is working on making the Unity Editor itself run natively on Windows on Arm devices to take advantage of Arm-based hardware capabilities. We’ll share more information about the Unity Editor for Arm-based Windows devices soon. The Windows Dev Kit 2023 (previously known as Project Volterra) is now available for testing your games on Arm-based Windows devices. You can read about it here.

To learn more about the announcements made at Microsoft Build, check out Panos Panay’s blog post that covers highlights from the show.

To learn more about the URP 3D sample scene, watch this talk from GDC 2023. In this recorded session, Jonas Mortensen, a technical artist at Unity, walks through how to build beautiful cross-platform games in URP and scale game graphics. You can also see technical rundowns of select graphics features like custom post-processing, custom lighting, and shaders, and find tips on how to apply them in your own projects.

Q: How did this partnership come about?
A: In August of 2022, Unity partnered with Microsoft Azure to bring our Create Solutions to the cloud and develop our cloud infrastructure to better meet your needs and to enhance your games and other experiences. Microsoft and Unity are also working together to make it easier to build and distribute your games on Windows and Xbox platforms.

Q: How will this help my title?
A: Multiplatform development helps enhance the reach of your title, getting it into the hands of players wherever they are.

Q: Where can I access the Windows on Arm platform support?
A: Unity 2023.1 Tech Stream and newer supports the Windows on Arm runtime.

Q: Where can I publish my Windows on Arm games?
A: Developers creating games targeting the Windows Store will continue to require either UWP or the Microsoft GDK for publishing. Since the GDK does not support ARM64 at this time, publishing ARM64 games to the Windows Store is not possible. Check with other third-party stores for specific ARM64 support.

Q: What is the Microsoft Game Development Kit (GDK)?
A: The Microsoft Game Development Kit (GDK) contains the common tools, libraries, and documentation needed to build games for Xbox Game Pass for PC on Windows 10/11, Xbox consoles (Xbox Series X|S, Xbox One), and cloud gaming with Xbox Game Pass Ultimate.
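For teams that script their builds, here is a minimal Editor-only sketch of the UserBuildSettings.architecture route mentioned above. It is an illustration rather than official sample code: the OSArchitecture enum name, menu path, scene path, and output path are assumptions to adapt to your project and your Unity 2023.1+ API.

using UnityEditor;
using UnityEditor.Build.Reporting;
using UnityEngine;

public static class WindowsArmBuild
{
    [MenuItem("Build/Windows on Arm (ARM64)")]
    public static void Build()
    {
        // Equivalent to choosing "ARM 64-bit" in the Build Settings window.
        // The enum type/value name is assumed; check your Editor version's API.
        UnityEditor.WindowsStandalone.UserBuildSettings.architecture = OSArchitecture.ARM64;

        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main.unity" }, // hypothetical scene path
            locationPathName = "Builds/WindowsArm64/MyGame.exe",
            target = BuildTarget.StandaloneWindows64,
            options = BuildOptions.None
        };

        BuildReport report = BuildPipeline.BuildPlayer(options);
        Debug.Log("Build result: " + report.summary.result);
    }
}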

>access_file_
984|blog.unity.com

Made with Unity Monthly: May 2023 roundup

Unite 2023 is coming to Amsterdam, the new LTS is launching, and generative AI continues its buzz. Read on to discover what Unity creators are doing to advance development in the interim, including the latest game releases made with Unity.May saw plenty of exciting games created with Unity share the spotlight.To start, Riot Forge and Double Stallion Games released CONV/RGENCE: A League of Legends Story™, taking us through the streets of Zaun, while tha ltd.’s Humanity put the fate of humankind in our paws. The month also saw Tuatara Games’s hilarious Bare Butt Boxing finally become available in early access, then Plot Twist Games sent us looking for clues in The Last Case of Benedict Fox (below).Rounding out creator milestones for the month were Wishfully’s highly anticipated release of Planet of Lana and Bossa Games’s early look at Lost Skies.We share new game releases and milestone spotlights every Monday on the @UnityGames Twitter and @unitytechnologies Instagram. Be sure to give us a follow and support your fellow creators.Like every month, we were lucky to have another developer take over our Twitter channel to share their best #UnityTips. For May, @samyam_youtube shared a variety of tricks – from pixel art in Unity to simple keyboard shortcuts. Some highlights include:A thread to import your pixel art with the best settingsWhy you should use TryGetComponent instead of GetComponentA guide for the new Unity Input SystemSome quick productivity tipsGreat learning resources and advice for learning UnityOther members of our community also added great tips to the conversation, including @MirzaBeig’s hack for calculating FPS and @kronnect’s useful trick for multiple object positioning.Keep tagging us and using the #UnityTips hashtag to share your expertise with the community.We continue to be stunned by what Unity creators make week to week, and you certainly kept the amazing projects coming in May. If we missed something that you meant to tag us in, be sure to use the #MadeWithUnity hashtag next time.On Twitter, @EvaBalikova was busy creating some beautiful embroidered art (see above), and @DevFatigued’s little caterpillar friend was on a jumping journey.Meanwhile on Instagram, @ShimpleShrimp was in full focus underwater, and Mr. Mustard Games’s (@MrMustardGames on Twitter) robot couldn’t resist smashing some boxes. Finally, @kng_ghidra traveled through different dimensions, and we ended the month with chill skateboarding vibes from @stokedslothinteractive.Finally, for a bit of bonus content, Project Ferocious dev Leo Saalfrank spoke with Shacknews on YouTube about using Unity to the fullest. (For a Ferocious throwback, head to our GDC 2021 showcase for a peek at the WIP.)We’re always here to continue the #MadeWithUnity love. Keep adding the hashtag to your posts to show us what you’ve been up to.On May 25, we hosted a Graphics Dev Blitz Day, covering topics like global illumination, shaders, SRP, URP, HDRP, GfxDevice, texturing, and more. The event was held in both the forums and on the Discord server. Throughout the day, we had more than 150 threads with 49 experts answering questions, and we’d like to thank everyone who participated.Keep an eye on Discord and our forums for future Dev Blitz Day announcements, and don’t forget to bookmark the archive of past Dev Blitz Days.May really took things up a notch on Twitch with the continuation of our Scope Check Let’s Dev series, releasing parts four, five, and six. 
We also took time to stream a Creator Spotlight showcasing Thomas Waterzooi’s Please, Touch The Artwork (watch above).To close out the month, we hosted Lana Lux on the channel to hear her Unity Tales and held a Nordic Game Jam Let’s Play session.If you don’t already, follow us on Twitch today and hit the notification bell so you never miss a stream.Are you interested in becoming an Asset Store publisher? Maybe you’re a publisher looking to enhance your marketing, community building, or customer support skills? Check out our freshly updated Publisher Resources page for tips on how to turbocharge your publishing journey.Taking things to social media, here’s a roundup of some of our favorite creator showcases from Twitter in May:TOON Farm Pack (coming soon!) | @steve_sicsFast Food Heaven Pack | @NekoboltTeamModern Studio Apartment 3 | NextLevel3DOn YouTube, we shared videos with Renaud Forestié about More Mountains‘s Feel and Freya Holmér about Shapes – two extremely popular assets.Looking to be noticed by the Asset Store team? Tag the @AssetStore Twitter account and use the #AssetStore hashtag when posting your latest creations.For our final update, here’s a non-exhaustive list of games made with Unity that released in May. Do you see any on the list that have already become favorites or think we missed a title? Share your thoughts in the forums.World Turtles, Re: cOg Mission (May 1 – early access)KILLBUG, Samurai Punk and Nicholas McDonnell (May 3)Tape to Tape, Excellent Rectangle (May 3 – early access)Toasterball, Les Crafteurs (May 3)Bare Butt Boxing, Tuatara Games (May 4)Darkest Dungeon® II, Red Hook Studios (May 8)Pan’orama, Chicken Launcher (May 9)Blobi Sprint, ChOuette (May 12)Humanity, tha ltd. (May 15)Tin Hearts, Rogue Sun (May 16)Greedventory, Black Tower Basement (May 17)Inkbound, Shiny Shoe (May 22)Planet of Lana, Wishfully (May 23)CONV/RGENCE: A League of Legends Story™, Double Stallion (May 23)Sunshine Shuffle, Strange Scaffold (May 24)Diluvian Winds, Alambik Studio (May 25 – early access)Evil Wizard, Rubber Duck Games (May 25)Friends vs Friends, Brainwash Gang (May 30)Everdream Valley, Mooneaters (May 30)Doomblade, Muro Studios (May 31)If you’re creating with Unity and haven’t seen your projects in any of our monthly roundups, submit here for the chance to be featured.That’s a wrap for May. For more community news as it happens, follow us on social media: Twitter, Facebook, LinkedIn, Instagram, YouTube, or Twitch.

>access_file_
985|blog.unity.com

Back-to-school shopping trends for your app or brand in 2023

The back-to-school shopping season is around the corner and consumers are writing up their shopping lists of pencils, textbooks, and first-day outfits. This season, 72% of consumers say they’re planning on going back-to-school shopping - making it a key opportunity for your business. To get a clearer picture of school shopping behaviors, spending habits, and consumer preferences, we surveyed 7,315 US shoppers. Here’s what we learned:

1. 54% of consumers use a mobile app for their back-to-school shopping

The majority of US consumers are likely purchasing their back-to-school supplies from their devices - 54% of those surveyed say they’ll be doing their shopping from their mobile apps. Many consumers already rely on their phones for their shopping needs, from researching deals to finding a store’s location. It makes sense, then, that they’d rather download an app than drive to their nearest store.

Takeaway: An easy way to win customers is to make sure they’re aware you have an app they can shop from directly, making their shopping more efficient during this busy period. Feature your app in your advertising campaigns to remind users that your brand has an app touchpoint they can use to browse or even make purchases.

2. 60% of consumers plan to do their back-to-school shopping between July and August

29% of consumers surveyed expect to do their back-to-school shopping in July and 31% in August. Since 60% of consumers plan to get their shopping done in these two critical months, you only have a narrow window to maximize the impact of your back-to-school campaigns. The numbers peak in August and then drop to their lowest in September (3.6%) - so don’t miss the deadline or you could land in the seasonal dip. Though it’s a limited timeframe, it’s a great opportunity to drive significant ROI and create impactful engagements.

Takeaway: It’s critical to be intentional about when and how you start to run your campaigns. Start running your back-to-school test campaigns as soon as possible to ensure you have time to analyze and optimize before the July/August window closes.
3. 76% of respondents shop at 2-5 retailers for their back-to-school supplies

A majority of consumers surveyed had one thing in common - they would be shopping at multiple retailers for their back-to-school supplies. 50% of those surveyed said they would shop at 2-3 retailers, and a further 26% said they would be visiting 4-5 retailers. Only 13% of respondents said they would be shopping at a single retailer.

Takeaway: Smaller retailers should feature high-demand products in their campaigns so users know they can find exactly what they need with your brand. For larger retailers, focus messaging and creatives on your wide product range so users know they can get more of their back-to-school shopping done with you - saving them time.

4. 78% of consumers plan to spend more than or the same as last year

While other retail sectors have seen a drop in spending this year, most consumers plan on spending the same as they did last year on their back-to-school shopping. In fact, 29% of surveyed shoppers plan to spend more. While budgets are tightening in other places, back-to-school shopping is a necessity for most - making it one of the few areas where customers are willing to increase their spending and a prime opportunity for advertisers to drive more business.

Takeaway: Turn your browsers into buyers by using the seasonal shopping boost to promote your inventory as a whole. With consumers increasing their spending on back-to-school supplies, it’s a great time to push related school and children’s products - like sporting goods and toys.

5. 52% of respondents said they are likely to be influenced by rewarded ads

Over half of all respondents (52%) said they are likely to be influenced by rewarded ads (in-app ad units that offer users a reward in exchange for opting in to interact with an ad) during the back-to-school shopping season - making them a great way to incentivize users to engage with your brand and choose your products. While rewarded ads are typically associated with games, it’s not only stereotypical ‘gamers’ seeing them: the majority of hypercasual gamers are women - the same group most likely to go back-to-school shopping (59%). Also of note, shoppers report that the best predictor of where they’ll do their back-to-school shopping is a coupon or deal on their goods - 57% of respondents say they’re more likely to use a brand or retailer if they’re offered a promotion to shop there.

Takeaway: Don’t underestimate the value of rewarded ads in your advertising campaigns. Use them to offer users deals on their purchases, giving them a good reason to choose your brand or app for their back-to-school supplies.

With students getting ready to return to school, now is the perfect time to tailor your ad experiences for audiences shopping for the classroom. Use these findings to help you optimize your back-to-school strategy and meet customers how and where they’ll be shopping.

>access_file_
987|blog.unity.com

3 reasons editors love Parsec for Teams

When 72 Films, a Fremantle company, was choosing a remote desktop solution for its hybrid post-production team, one need was paramount: Keep the editors happy. Editors can’t work with added lag. They need a smooth user experience that mimics on-premises workstations — not COVID-induced stopgap measures. They want the flexibility of working wherever they need to be on a given day while still staying on track with deadlines.

To give editors what they want, 72 Films chose Parsec for Teams. Parsec won out over competitors by delivering superior speed and seamless integration, so editorial workflows aren’t interrupted, and IT isn’t fielding frustrated calls about a bad connection or a login that doesn’t work.

We wanted to hear from the post-production team themselves what they liked about working with Parsec. Here’s what lead editor Dan Gulley, editor Matt White, senior tech operator Joe Meekel, and machine room manager Frank Webb had to say. Their answers covered a range of subjects, but these are the themes we heard again and again.

The consensus is clear: Parsec just performs better. Matt White told us, “I’ve used a variety of systems before, and Parsec is the best system I have used. It’s fast, simple, and it works on your existing kit. With another solution, they had to send me a specific laptop, and I prefer working on my own equipment. Parsec works particularly well with different home monitor setups. The full-screen playback works really well, which is great when you are at home and playing back sequences. Parsec also works extremely smoothly. I didn’t have problems with lag or speed.”

Other Parsec benefits the 72 Films team mentioned include working effortlessly across platforms (Mac/PC), Parsec’s intuitive multiple display settings, and how simple adjustments can fine-tune the user experience to compensate for a poor internet connection.

Ease of use and responsiveness also separate Parsec from the pack, which makes a real difference in how quickly and efficiently work gets done. “Before Parsec, other remote desktops would do in a pinch, but I was limited as to how I could operate my machine,” said Joe Meekel. “Parsec has drastically improved my workload. I can trust Parsec for speed, accuracy, and a user-friendly experience.”

Dan Gulley agreed, saying, “We used to use another solution with a VPN, which we found lengthy to set up and not as responsive. Parsec gave us secure and fast logins and a quick and easy setup with very little lag. The responsiveness is really good – much better than other platforms.”

Editors told us that Parsec allows 72 Films teams to work smarter and more collaboratively regardless of location, meaning team members don’t need to be together in one place. Despite going hybrid while facing transport issues and travel restrictions, Parsec enabled them to continue working and meet tight deadlines. They especially like how easy it is to set up, log in, and get going. “Setup time, both for me as a support operator and for our editor clients, has been reduced significantly. Everybody is able to start work faster,” said Frank Webb.

Parsec also stands out for enabling a smooth, steady workflow, which improves user experience and saves loads of time. Meekel said, “The first time I used Parsec, I was astonished by how snappy it was. It’s frame responsive, and the playback is incredible. Not even terrible home internet would ruin the experience for me.
Also, starting up with only a two-way authenticator and a catalog of machines at the ready is much faster than launching into a VPN with a list of IPs and then having to move in and out of software.”The flexibility to work where you are and when you’re available makes work easier, faster, and more fun. Editors talked about how a better work-life balance allowed them to be more efficient and productive. Doctor appointments, trips to the gym, and kids’ school events become routine to daily life rather than problematic disruptions.Likewise, an unexpected development at work doesn't require dropping everything and heading onsite for evenings and weekends. Being able to jump on remotely from wherever you happen to be means editors can quickly answer a question or resolve a problem, so the production schedule isn’t held up.Webb said, “Hybrid working is all about having a genuine and viable option to work from home if necessary, no matter the nature of the work. Parsec enabled that for 72 Films. It means I have no qualms with logging in from home to do complex NLE work, which would have caused me endless stress previously!”Hybrid work also allows both creatives and organizations to be selective. Editors are free to go where their interests and relationships lead them, and companies like 72 Films can draw from a deeper talent pool to choose the right people for the job.According to Gulley, “Parsec has allowed us to work with the best talent no matter where they are in the world, and this has transformed our business.”We can’t say it any better than Meekel: “Parsec is a testament to how far the remote editing experience has changed. I can’t wait to see more of it!”Discover what you can do with Parsec for Teams.

>access_file_
988|blog.unity.com

P&O fundamentals: busting the myths surrounding player ownership

Play and Own (P&O) gives your users ownership over their digital assets, turning those assets into collectibles and enabling them to get even more value through secondary markets. For you, this could mean a new way to unlock revenue streams, acquire the right users, and build a community around your games.Recently, we conducted a survey to find out what gamers think about P&O. The answers we got back were clear - a majority of users we surveyed want player ownership and say they are even willing to pay more to have it in their games. Yet, there remain some sticky myths surrounding P&O that are holding many developers back from mass adoption.To bust these myths and create a more accurate picture of what player ownership is, below are the most common misconceptions we hear, and the truth for each.Myth: the tech requirements needed to enable P&O are too demanding for most developersBehind this myth is the idea that to integrate player ownership into your titles, you need to have a technical team that’s proficient and experienced in blockchain coding (the tech, in part, at the foundation of P&O). In other words, you need to know how to build a decentralized app from scratch.The truth: you can integrate player ownership into your games with no technical blockchain know-howIt may have been true in the past that you needed a highly-skilled technical team to integrate decentralized assets into your games, and support their management. But these days, there are solutions - like Astra - that do the heavy lifting for you. They can take care of the smart contracts - and handle wallet creation, minting your assets, publishing, and balancing the economy of your game. It’s a single, easy-access entry point into P&O without the need for a technical team.Myth: player ownership won’t work on mobileMost users who interact with decentralized apps and player ownership have historically done so through PC - not on mobile. So, the thinking goes, the infrastructure hasn’t been built to accommodate mobile users. On top of that, due to the decentralized nature of the ecosystem, it's harder to regulate - making it difficult to offer titles through mobile app stores.The truth: player ownership isn’t just mobile-friendly, it can be mobile-firstWhile it may be true that in the past decentralized apps were kept mostly to PCs, that’s no longer the case. Thanks to new innovations enabling studios to give users the benefits of decentralization and player ownership on their mobile devices, the P&O experience isn’t only confined to desktops. Going one step further, there are solutions that are able to leave cryptocurrencies out of the equation and pass all transactions through the IAP mechanisms of mobile app stores - meaning your game can easily start and scale with mobile audiences.Myth: P&O solutions and tech have bad UX that causes users to churnA major problem for the ubiquity of player ownership in the past was the complexity of its tech. Just to enter into the world of player ownership meant that users had to have a decent understanding of blockchain technology, access to communities through Discord, and be able to store assets in a third party wallet. And that’s before they even get started playing the games or collecting and selling assets. However, things have changed since then.The truth: player ownership can be user-friendly, familiar, and easy to navigatePlayer ownership has had its UX growing pains - but that’s changed. 
As the technology matures, many developers have found new and better ways to offer users entry into player ownership.New solutions have made it easy to integrate player ownership, mint assets, and enable trading directly in your games. And these advancements have created a better UX for users: marketplaces, apps, wallets, and platforms are now familiar (they look and feel like apps users already have experience with). Users can now get ownership over their assets, collect, and trade them as easily as browsing Instagram or shopping on Amazon.Myth: users don’t understand the benefits of player ownershipA concern we hear from some developers is that despite the clear value in player ownership, they fear that users are intimidated by and mistrustful of the technology. But, in reality, users (particularly gamers) have been finding ways to create player ownership for a long time.The truth: many users are already finding ways to trade and collect digital assetsWe know how a lot of players feel about player ownership. We asked them. But even without those insights, there’s a tremendous amount of proof that many users not only understand player ownership, but are already actively seeking it out and creating it for themselves. From Diablo, to Counterstrike, Fortnite, and much more, collectible gaming marketplaces persist and have huge followings. It makes sense - gamers are often collectors, and the same drive that compels them to catch every Pokémon translates directly into player ownership in games, too.In this context, player ownership isn’t a leap of faith - it’s the next step. P&O provides the same trading, collecting, and community-building that many users want, but makes it even easier to access and trust.

>access_file_
989|blog.unity.com

Advanced tips for character art production in Unity

In this guest post, Sakura Rabbit (@Sakura_Rabbiter) shares how she approaches art production and provides tips for creating a realistic character in Unity.

I finally got some free time as of late and it got me thinking… How about I write something about character creation? I’ve just finished creating several characters in a row, and I’m quite familiar with the entire creation process. I’m not referring to things like the art design of worldviews, character backgrounds, or character implementation techniques. There are already plenty of articles that elaborate on those topics, so I won’t touch on them here.

What else, then? After giving it some thought, I’ve decided to prepare an article about producing realistic characters in the Unity Editor. You might be thinking, “What brings Sakura Rabbit to this topic?” Alas, it’s all because I’ve gone through an uphill journey learning the skill from scratch. I’m writing this so you can learn from my mistakes and reduce errors in your work. Now, let’s get started!

Generally speaking, the implementation process of a character model involves the following steps:

1. Three-view drawing → 2. Prototype model → 3. High-precision model → 4. Low-polygon topology → 5. UV splitting → 6. Baking normal map → 7. Mapping → 8. Skin rigging → 9. Skeletal and vertex animation → 10. Shader in the engine → 11. Rendering in the engine → 12. Real-time physics in the engine → 13. Animation application and animator → 14. Character controller/AI implementation → 15. Special effects, voice, sound effects, etc.

There are 15 steps in total. The process might seem complicated, but from a character design standpoint, all these factors and details will influence how your character will ultimately be displayed in your game engine. Therefore, these numerous steps are necessary for the final product to achieve the desired effect. The entire process takes a long time, and all the steps must be done in a specific sequence – every step is crucial. If one isn’t done properly or if you try to cut corners, the final product will be directly affected.

Let’s start by looking at the preliminary preparation work of art production. The 15 steps previously mentioned can be summarized into four main phases:

Original drawing → modeling → animation → rendering

Isn’t this much simpler? Now, let’s get straight to the point. Through my hands-on experience, I’ve learned some things – hopefully you find them useful in your own project!

First of all, you should set up some checkpoints before you start. I’m going to skip the usual ones, such as the vertex counts, the size of the map, the number of bones, etc. Instead, I’m going to focus on the following:

I’d like this character to have a human skeleton, since this will affect the subsequent AI implementation. The human skeleton is advantageous because it enables you to use a motion capture device or interval animation library to quickly create a set of high-quality animations that can be used on the controller or AI.

In addition, you also need to plan ahead for the material effects you want for your character. To produce the desired effects, preliminary steps such as the UV, edge distribution, and mapping are indispensable. If you only think about them after completing the model and animation, you will most likely end up reworking your design. It’s best to think about effects ahead of time to avoid doing more work later.

For some physics effects of the character, physical processing is required for certain components and must be done independently.
This is another criterion you need to consider beforehand.With these checkpoints in place, the next step is implementation. Here’s how to get started.To ensure your character creation process runs smoothly, it’s important that the first step, namely the original drawing, is done carefully. Failure to do this properly beforehand may affect the structure or effects in the subsequent steps. Keep the following in mind when drawing to facilitate what you need to do next.Model: You need to make the drawing suitable for modeling. For example, will the structure of what you draw be difficult to implement during modeling? Will it be challenging to distribute the edges for certain structures when making low-polygon topology?Animation: Likewise, you need to make the drawing suitable for animation. For example, will rigging be difficult for certain parts of the animation? Which structure does not conform to the human skeleton?Shader: Next, you need to take into account shader implementation. Ask yourself: Will the shader of the material effect I draw be difficult to implement? How about the performance? How about the classification of materials? Does it come with special effects? Can it be implemented using one pass or multiple passes?Physics: Which structure requires simulated computation? How is the motion executed?By keeping all these in mind when drawing, you can streamline your work in the subsequent steps.Tip: When drawing a human body, you can use a 3D modeling software to assist you with the process. Not only will this improve your efficiency, but also ensure structural and perspectival relationships are correct.For modeling, the same rules apply – that is, take into consideration the steps that follow. Modeling must be done properly, and factors such as UV mapping, edge distribution, and material classifications must also be planned in advance. Modeling is the most critical part of the process since it needs to go through the animation process before it gets to rendering. If there is an issue in rendering, then the modeling and animation processes must be reworked.Mapping: You need to make the model suitable for mapping as well. Which structures can share the UV? Can you maximize the use of pixels of the map? Which components require Alpha?Animation: You need to consider how facial expressions are created in blend shape and how the model should be divided for UV. Also, you need to identify the body structures that require animation and determine how the edges should be distributed to make the rigging of the model more natural.Shader: Now it’s time to think about how the UV should be arranged so that it can deliver special effects for the implementation of shaders, as well as identifying which materials need to be separated when classifying modeling materials.Physics: Similarly, you need to distribute the edges properly to make the simulated effects appear more natural.When creating a model, the best way to avoid reworking is to take into account the subsequent steps and make plans in advance.Tip: When drawing high-polygon models in ZBrush or other software, it isn’t necessary to include minor detailed textures. Due to the resolution limit, the effect of the details will be very poor after being made into a map through direct baking. These details should be separated using Mask ID in the shader and added through Detail Map. 
Remember not to include them in the main map!Adding details in the shader directly is the way to go.During model rigging, it’s good practice to export files one by one in .obj format and then import them into the animation software to preserve your model’s authenticity. Then, check the normal orientations of the model, the layers of the file, and the allocation of the shader to see if there are any issues. If everything is good, you can proceed with model rigging.Bone positions play a key role in model rigging since they will decide whether the movement at the joints is natural. Let me say this again: it is extremely important! You will find yourself in trouble if the skin weight was fine, but the bone positions were wrong.Tip: Let’s use the hip bone, which is located in the middle of the rear, as an example. If you want the movement to look natural, the positioning of the bone must be accurate. Otherwise, the animation will be deformed when using the motion capture device or when applying it to other animations.At this stage, you’re very close to the final step of your work. Still, you can’t afford to take things lightly. There are several issues you should consider during the creation process:Model: Check the model again to make sure the orientation of the normals are aligned properly, the soft and hard edges are fine, the classification of the model components and materials is done correctly, the components that require blend shape are combined, and the materials and naming are handled.Animation: Determine whether the current bone structure meets the humanoid requirement in the engine.Shader: Check again whether the structures that require the effect are split.Physics: Identify the parts of the simulation that use bones and the ones that use vertices.Now, you have completed all the preliminary work before using the engine. Next, we need to import the entire set of the model map into Unity and merge all our preliminary work.Tip: When working on the skin weight, you can switch between the skinning software and the engine to test the effect. When the character is animated, it’s easier to identify problems. See the image below as an example. When the character is moving, you can see there’s a glitch when her scapula reaches a certain angle. This is due to the vertex weight not being smooth enough.Thanks to the checkpoints you set previously, the implementation process should be a walk in the park.For the shader, all you need to do is set or create the material for the separated components independently, as you will have already classified the materials during the model-making process. For animation adaptation, you can use the humanoid of Unity directly since you will have set the human skeleton standard beforehand. This way, you can save a lot of time on the animation work.In addition, you can also apply motion capture to further reduce our workload. If the blend shape you have made fulfills ARKit naming conventions, you can directly perform a facial motion capture to produce the animation of the facial blend shape.Tip: If you use Advanced Skeleton to do your rigging, the alignment of the character's scapula and shoulder nodes will most likely be incorrect when imported into Unity. To solve this, adjust it manually on the humanoid interface.Well, that’s it! In summary, throughout the character creation process, from original drawing to modeling, animation to rendering, I recommend a results-oriented approach and determining the steps that you should take to achieve the result you want. 
Furthermore, you should also have a thorough understanding of the entire production process so that you always know what to do next and what to take note of in the current step.Please share my post if you found it helpful!/ / / (^_^) Sakura Rabbit 樱花兔Sakura Rabbit’s character art was featured on the cover of our e-book, The definitive guide to creating advanced visual effects in Unity, which you can access for free here. See more from Sakura Rabbit on Twitter, Instagram, YouTube, and her FanBox page, where this article was originally published. Check out more blogs from Made with Unity developers here.

>access_file_
991|blog.unity.com

Accessing texture data efficiently

Learn about the benefits and trade-offs of different ways to access the underlying texture pixel data in your Unity project.

Pixel data describes the color of individual pixels in a texture. Unity provides methods that enable you to read from or write to pixel data with C# scripts. You might use these methods to duplicate or update a texture (for example, adding a detail to a player’s profile picture), or use the texture’s data in a particular way, like reading a texture that represents a world map to determine where to place an object.

There are several ways of writing code that reads from or writes to pixel data. The one you choose depends on what you plan to do with the data and the performance needs of your project. This blog and the accompanying sample project are intended to help you navigate the available API and common performance pitfalls. An understanding of both will help you write a performant solution or address performance bottlenecks as they appear.

For most types of textures, Unity stores two copies of the pixel data: one in GPU memory, which is required for rendering, and the other in CPU memory. This copy is optional and allows you to read from, write to, and manipulate pixel data on the CPU. A texture with a copy of its pixel data stored in CPU memory is called a readable texture. One detail to note is that RenderTexture exists only in GPU memory.

The memory available to the CPU differs from that of the GPU on most hardware. Some devices have a form of partially shared memory, but for this blog we will assume the classic PC configuration, where the CPU only has direct access to the RAM plugged into the motherboard and the GPU relies on its own video RAM (VRAM). Any data transferred between these different environments has to pass through the PCI bus, which is slower than transferring data within the same type of memory. Due to these costs, you should try to limit the amount of data transferred each frame.

Sampling textures in shaders is the most common GPU pixel data operation. To alter this data, you can copy between textures or render into a texture using a shader. All these operations can be performed quickly by the GPU.

In some cases, it may be preferable to manipulate your texture data on the CPU, which offers more flexibility in how data is accessed. CPU pixel data operations act only on the CPU copy of the data, so require readable textures. If you want to sample the updated pixel data in a shader, you must first copy it from the CPU to the GPU by calling Apply. Depending on the texture involved and the complexity of the operations, it may be faster and easier to stick to CPU operations (for example, when copying several 2D textures into a Texture2DArray asset).

The Unity API provides several methods to access or process texture data. Some operations act on both the GPU and CPU copy if both are present. As a result, the performance of these methods varies depending on whether the textures are readable. Different methods can be used to achieve the same results, but each method has its own performance and ease-of-use characteristics.

Answer the following questions to determine the optimal solution:

Can the GPU perform your calculations faster than the CPU?
What level of pressure is the process putting on the texture caches? (For example, sampling many high-resolution textures without using mipmaps is likely to slow down the GPU.)
Does the process require a random write texture, or can it output to a color or depth attachment? (Writing to random pixels on a texture requires frequent cache flushes that slow down the process.)
Is my project already GPU bottlenecked? Even if the GPU is able to execute a process faster than the CPU, can the GPU afford to take on more work without exceeding its frame time budget? If both the GPU and the CPU main thread are near their frame time limit, then perhaps the slow part of a process could be performed by CPU worker threads.
How much data needs to be uploaded to or downloaded from the GPU to calculate or process the results? Could a shader or C# job pack the data into a smaller format to reduce the bandwidth required? Could a RenderTexture be downsampled into a smaller resolution version that is downloaded instead?
Can the process be performed in chunks? (If a lot of data needs to be processed at once, there’s a risk of the GPU not having enough memory for it.)
How quickly are the results required? Can calculations or data transfers be performed asynchronously and handled later? (If too much work is done in a single frame, there is a risk that the GPU won’t have enough time to render the actual graphics for each frame.)

By default, texture assets that you import into your project are nonreadable, while textures created from a script are readable. Readable textures use twice as much memory as nonreadable textures because they need to have a copy of their pixel data in CPU RAM. You should only make a texture readable when you need to, and make them nonreadable when you are done working with the data on the CPU.

To see if a texture asset in your project is readable and make edits, use the Read/Write Enabled option in Texture Import Settings, or the TextureImporter.isReadable API. To make a texture nonreadable, call its Apply method with the makeNoLongerReadable parameter set to “true” (for example, Texture2D.Apply or Cubemap.Apply). A nonreadable texture can’t be made readable again.

All textures are readable to the Editor in Edit and Play modes. Calling Apply to make the texture nonreadable will update the value of isReadable, preventing you from accessing the CPU data. However, some Unity processes will function as if the texture is readable because they see that the internal CPU data is valid.

Performance differs greatly across the various ways of accessing texture data, especially on the CPU (although less so at lower resolutions). The Unity Texture Access API examples repository on GitHub contains a number of examples showing performance differences between various APIs that allow access to, or manipulation of, texture data. The UI only shows the main thread CPU timings. In some cases, DOTS features like Burst and the job system are used to maximize performance.

Here are the examples included in the GitHub repository:

SimpleCopy: Copying all pixels from one texture to another
PlasmaTexture: A plasma texture updated on the CPU per frame
TransferGPUTexture: Transferring (copying to a different size or format) all pixels on the GPU from a texture to a RenderTexture

Listed below are performance measurements taken from the examples on GitHub. These numbers are used to support the recommendations that follow. The measurements are from a player build on a system with a 3.7 GHz 8-core Xeon® W-2145 CPU and an RTX 2080.

These are the median CPU times for SimpleCopy.UpdateTestCase with a texture size of 2,048. Note that the Graphics methods complete nearly instantly on the main thread because they simply push work onto the RenderThread, which is later executed by the GPU. Their results will be ready when the next frame is being rendered.
Results:

1,326 ms – foreach(mip) for(x in width) for(y in height) SetPixel(x, y, GetPixel(x, y, mip), mip)
32.14 ms – foreach(mip) SetPixels(source.GetPixels(mip), mip)
6.96 ms – foreach(mip) SetPixels32(source.GetPixels32(mip), mip)
6.74 ms – LoadRawTextureData(source.GetRawTextureData())
3.54 ms – Graphics.CopyTexture(readableSource, readableTarget)
2.87 ms – foreach(mip) SetPixelData(mip, GetPixelData(mip))
2.87 ms – LoadRawTextureData(source.GetRawTextureData<byte>())
0.00 ms – Graphics.ConvertTexture(source, target)
0.00 ms – Graphics.CopyTexture(nonReadableSource, target)

These are the median CPU times for PlasmaTexture.UpdateTestCase with a texture size of 512. You’ll see that SetPixels32 is unexpectedly slower than SetPixels. This is due to having to take the float-based Color result from the plasma pixel calculation and convert it to the byte-based Color32 struct. SetPixels32NoConversion skips this conversion and just assigns a default value to the Color32 output array, resulting in better performance than SetPixels. In order to beat the performance of SetPixels and the underlying color conversion performed by Unity, it is necessary to rework the pixel calculation method itself to directly output a Color32 value. A simple implementation using SetPixelData is almost guaranteed to give better results than careful SetPixels and SetPixels32 approaches.

Results:

126.95 ms – SetPixel
113.16 ms – SetPixels32
88.96 ms – SetPixels
86.30 ms – SetPixels32NoConversion
16.91 ms – SetPixelDataBurst
4.27 ms – SetPixelDataBurstParallel

These are the Editor GPU times for TransferGPUTexture.UpdateTestCase with a texture size of 8,192:

Blit – 1.584 ms
CopyTexture – 0.882 ms

You can access pixel data in various ways. However, not all methods support every format, texture type, or use case, and some take longer to execute than others. This section goes over recommended methods, and the following section covers those to use with caution.

CopyTexture is the fastest way to transfer GPU data from one texture into another. It does not perform any format conversion. You can partially copy data by specifying a source and target position, in addition to the width and height of the region. If both textures are readable, the copy operation will also be performed on the CPU data, bringing the total cost of this method closer to that of a CPU-only copy using SetPixelData with the result of GetPixelData from a source texture.

Blit is a fast and powerful method of transferring GPU data into a RenderTexture using a shader. In practice, this has to set up the graphics pipeline API state to render to the target RenderTexture. It comes with a small resolution-independent setup cost compared to CopyTexture. The default Blit shader used by the method takes an input texture and renders it into the target RenderTexture. By providing a custom material or shader, you can define complex texture-to-texture rendering processes.

GetPixelData and SetPixelData (along with GetRawTextureData) are the fastest methods to use when only touching CPU data. Both methods require you to provide a struct type as a template parameter used to reinterpret the data. The methods themselves only need this struct to derive the correct size, so you can just use byte if you don’t want to define a custom struct to represent the texture’s format.

When accessing individual pixels, it’s a good idea to define a custom struct with some utility methods for ease of use. For example, an R5G5B5A1 format struct could be made up of a ushort data member and a few get/set methods to access the individual channels as bytes.
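A minimal sketch of such a struct (an illustrative reconstruction, not the article’s original code) might look like this, assuming the channels are packed from the most significant bit down:

// Illustrative sketch only: a 16-bit pixel packed as R5 G5 B5 A1.
// The bit layout is an assumption; match it to your texture's actual format.
// Property setters are omitted for brevity.
public struct PixelR5G5B5A1
{
    public ushort data;

    public byte R => (byte)((data >> 11) & 0x1F);
    public byte G => (byte)((data >> 6) & 0x1F);
    public byte B => (byte)((data >> 1) & 0x1F);
    public byte A => (byte)(data & 0x01);
}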
The sketch above shows one way to represent a pixel in the R5G5B5A1 format; the corresponding property setters are omitted for brevity.

SetPixelData can be used to copy a full mip level of data into the target texture. GetPixelData will return a NativeArray that actually points to one mip level of Unity’s internal CPU texture data. This allows you to directly read/write that data without the need for any copy operations. The catch is that the NativeArray returned by GetPixelData is only guaranteed to be valid until the user code calling GetPixelData returns control to Unity, such as when MonoBehaviour.Update returns. Instead of storing the result of GetPixelData between frames, you have to get the correct NativeArray from GetPixelData for every frame you want to access this data from.

The Apply method returns after the CPU data has been uploaded to the GPU. The makeNoLongerReadable parameter should be set to “true” where possible to free up the memory of the CPU data after the upload.

The RequestIntoNativeArray and RequestIntoNativeSlice methods asynchronously download GPU data from the specified Texture into (a slice of) a NativeArray provided by the user. Calling the methods will return a request handle that can indicate if the requested data is done downloading. Support is limited to only a handful of formats, so use SystemInfo.IsFormatSupported with FormatUsage.ReadPixels to check format support. The AsyncGPUReadback class also has a Request method, which allocates a NativeArray for you. If you need to repeat this operation, you will get better performance if you allocate a NativeArray that you reuse instead.

There are a number of methods that should be used with caution due to potentially significant performance impacts. Let’s take a look at them in more detail.

These methods perform pixel format conversions of varying complexity. The Pixels32 variants are the most performant of the bunch, but even they can still perform format conversions if the underlying format of the texture doesn’t perfectly match the Color32 struct. When using the following methods, keep in mind that their performance impact increases, by varying degrees, as the number of pixels grows:

GetPixel
GetPixelBilinear
SetPixel
GetPixels
SetPixels
GetPixels32
SetPixels32

GetRawTextureData and LoadRawTextureData are Texture2D-only methods that work with arrays containing the raw pixel data of all mip levels, one after another. The layout goes from largest to smallest mip, with each mip being “height” amount of “width” pixel values. These functions are quick to give CPU data access. GetRawTextureData does have a “gotcha” where the non-templated variant returns a copy of the data. This is a bit slower, and does not allow direct manipulation of the underlying buffer managed by Unity. GetPixelData does not have this quirk and can only return a NativeArray pointing to the underlying buffer that remains valid until user code returns control to Unity.
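To make the recommended CPU-only path concrete, here is a hedged sketch (not code from the article) that re-fetches GetPixelData every frame, copies one mip level into another readable texture with SetPixelData, and uploads the result with Apply. The Color32 element type assumes a 32-bit RGBA-style format; substitute a struct matching your texture’s layout.

using Unity.Collections;
using UnityEngine;

public class CopyMipEveryFrame : MonoBehaviour
{
    public Texture2D source; // readable, same size and format as target
    public Texture2D target; // readable

    void Update()
    {
        const int mip = 0;

        // Re-fetch every frame: the NativeArray from GetPixelData is only
        // valid until this call returns control to Unity.
        NativeArray<Color32> sourcePixels = source.GetPixelData<Color32>(mip);
        target.SetPixelData(sourcePixels, mip);

        // Upload the CPU copy to the GPU. Pass makeNoLongerReadable: true
        // instead once you are done editing the data on the CPU.
        target.Apply(updateMipmaps: false, makeNoLongerReadable: false);
    }
}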
This is the internal process:
1. Allocate a temporary RenderTexture matching the destination texture.
2. Perform a Blit from the source texture to the temporary RenderTexture.
3. Copy the Blit result from the temporary RenderTexture to the destination texture.

Answer the following questions to help determine if this method is suited to your use case:
– Do I need to perform this conversion?
– Can I make sure the source texture is created in the desired size/format for the target platform at import time?
– Can I change my processes to use the same formats, allowing the result of one process to be directly used as an input for another process?
– Can I create and use a RenderTexture as the destination instead? Doing so would reduce the conversion process to a single Blit to the destination RenderTexture.

The ReadPixels method synchronously downloads GPU data from the active RenderTexture (RenderTexture.active) into a Texture2D’s CPU data. This enables you to store or process the output from a rendering operation. Support is limited to only a handful of formats, so use SystemInfo.IsFormatSupported with FormatUsage.ReadPixels to check format support.

Downloading data back from the GPU is a slow process. Before it can begin, ReadPixels has to wait for the GPU to complete all preceding work. It’s best to avoid this method, as it will not return until the requested data is available, which will slow down performance. Usability is also a concern because you need GPU data to be in a RenderTexture, which has to be configured as the currently active one. Both usability and performance are better when using the AsyncGPUReadback methods discussed earlier.

The ImageConversion class has methods to convert between Texture2D and several image file formats. LoadImage is able to load JPG, PNG, or EXR (since 2023.1) data into a Texture2D and upload this to the GPU for you. The loaded pixel data can be compressed on the fly, depending on the Texture2D’s original format. Other methods can convert a Texture2D or pixel data array to an array of JPG, PNG, TGA, or EXR data. These methods are not particularly fast, but can be useful if your project needs to pass pixel data around through common image file formats. Typical use cases include loading a user’s avatar from disk and sharing it with other players over a network.

There are many resources available to learn more about graphics optimization, related topics, and best practices in Unity. The graphics performance and profiling section of the documentation is a good starting point. You can also check out several technical e-books for advanced users, including Ultimate guide to profiling Unity games, Optimize your mobile game performance, and Optimize your console and PC game performance. You’ll find many more advanced best practices on the Unity how-to hub.

Here’s a summary of the key points to remember: When manipulating textures, the first step is to assess which operations can be performed on the GPU for optimal performance. The existing CPU/GPU workload and size of the input/output data are key factors to consider. Using low-level functions like GetRawTextureData to implement a specific conversion path where necessary can offer improved performance over the more convenient methods that perform (often redundant) copies and conversions. More complex operations, such as large readbacks and pixel calculations, are only viable on the CPU when performed asynchronously or in parallel.
The combination of Burst and the job system allows C# to perform certain operations that would otherwise only be performant on a GPU.

Profile frequently: There are many pitfalls you can encounter during development, from unexpected and unnecessary conversions to stalls from waiting on another process. Some performance issues will only start surfacing as the game scales up and certain parts of your code see heavier usage. The example project demonstrates how seemingly small increases in texture resolution can cause certain APIs to become a performance issue.

Share your feedback on texture data with us in the Scripting or General Graphics forums. Be sure to watch for new technical blogs from other Unity developers as part of the ongoing Tech from the Trenches series.
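As a closing illustration of that Burst and job system point, here is a minimal sketch (an illustration under assumptions, not the blog’s SetPixelDataBurstParallel implementation): it fills mip 0 of a readable, RGBA32-format Texture2D through GetPixelData inside a Burst-compiled IJobParallelFor, then uploads the result with Apply. The component name, job name, and fill pattern are hypothetical.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// Assumed setup: "target" is a CPU-readable Texture2D using the RGBA32 format,
// so its mip 0 data can be reinterpreted as Color32 values.
[BurstCompile]
struct FillPixelsJob : IJobParallelFor
{
    public NativeArray<Color32> Pixels;
    public int Width;

    public void Execute(int index)
    {
        // Placeholder gradient instead of the blog's plasma calculation.
        int x = index % Width;
        int y = index / Width;
        Pixels[index] = new Color32((byte)(x & 0xFF), (byte)(y & 0xFF), 0, 255);
    }
}

public class BurstFillExample : MonoBehaviour
{
    public Texture2D target;

    void Update()
    {
        // Re-fetch the NativeArray every frame; the view returned by GetPixelData
        // is only valid until control returns to Unity.
        NativeArray<Color32> pixels = target.GetPixelData<Color32>(0);

        var job = new FillPixelsJob { Pixels = pixels, Width = target.width };
        job.Schedule(pixels.Length, 64).Complete(); // complete before uploading

        // Upload the CPU data to the GPU; keep the texture readable for next frame.
        target.Apply(updateMipmaps: false, makeNoLongerReadable: false);
    }
}
```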

>access_file_
992|blog.unity.com

AI is changing how we learn and create – here’s why you should lean in

I’m not the first to say it, but the generative AI boom is here – and it’s bringing massive opportunities for creative learning.

In fact, we’re entering the age of the AI personal tutor. Of all the learning media available, personal, one-on-one tutoring has the strongest evidence for learning impact. According to educational psychologist Benjamin Bloom’s “2 sigma” research, this kind of tutoring is actually two standard deviations better than other learning models.

Of course, generative AI isn’t a human tutor, and we have yet to see if an AI personal tutor will be able to replicate the learning motivation and efficacy that comes from human connection and empathy. But what is perhaps more promising is the potential for collaboration between AI and human tutors to increase both tutor productivity and learning impacts for students. In any case, it’s clear that we’re on the cusp of AI-boosted teaching and learning, and these advancements are coming much faster than human behavior can adapt.

So, how do we change our mindset and behavior to maximize our potential with generative AI? And, what does that mean for creators?

I had the opportunity to hear Sam Altman, co-founder and CEO of OpenAI, speak at the 2023 ASU + GSV Summit in April. Here’s how he said he would approach generative AI if he were just starting his career: “I would get as comfortable with the tech as I could. Really try to develop a native feel for it, prepare myself for a high rate of change in the world […] and a lot of resilience and ability to adapt to new things and try new things quickly.”

I think Sam is right about the importance of resilience and adapting rapidly. As I discussed in my last blog post, games are one of the best learning media to teach us “the resilience to keep trying and learning from our failures, the creativity to face a new problem and solve it with no rulebook, and the resourcefulness needed to figure out solutions.” While it may seem like generative AI is poised to make resilience and resourcefulness unnecessary, I’d argue that learners will just need enhanced critical thinking skills to deftly evaluate solutions and make the right choices.

I’m not only recommending we play more games that build our resilience and adaptability – I’m also recommending we start adapting our behavior with generative AI by gamifying how we learn to use it.

As you do when beginning a new game, get curious and start tinkering with the hundreds of AI tools that have popped up over the last few months. Choose a few of your favorites, commit to embedding them in some key parts of your day to day, and take note of what you discover about your own learning. Learn something new that you never thought you could do – some of my favorite stories are of people who couldn’t build a video game in Unity at first, but are now doing it in a few hours. Play together with your team or community and talk about what’s helping you save time, create higher-quality experiences, and expand your creative possibilities. Measure and track your learning achievements with generative AI: Where are you more productive? How many new things have you learned? How are you changing how you learn?

There’s incredible potential here to maximize human creativity.
Chris Dede, professor in Learning Technologies at the Harvard Graduate School of Education, describes the ideal state: “As we understand the complementarity between human strengths and the strengths of AI, what is ideally going to happen in the workforce is a partnership between humans and AI called IA, Intelligence Augmentation. In this relationship, the person does what he or she does well, the AI does what it does well, and the whole is more than the sum of the parts. The partnership can accomplish more than either side can on its own.”

At Unity, our hypothesis is that generative AI will empower creators to dream bigger than ever before. We believe we should be learning to create and leveraging generative AI at the same time.

Hopefully, learning to create becomes more accessible to more people, and the possibilities for innovation continue to grow as we let AI handle the time-consuming tasks it’s best-suited for. Then, over time, we’ll be expanding our collective capacity to truly make the world a better place with more creators in it.

>access_file_
993|blog.unity.com

Unity 2022 LTS is coming in June

Editor’s note: As of June 1, 2023, Unity 2022 LTS is now available.

When I talk to our partners, customers, and those of you who work day to day with Unity’s Editor and runtime, one thing that keeps popping up is the desire to be more ambitious with your game design. You want tools that enable you to create with fewer constraints, delivering player experiences that wow and inspire wonder.

Unity 2022 LTS is designed to give you that power. You’ll discover that you can rely on the new LTS release to create sophisticated DOTS-powered games, multiplayer experiences, immersive HD environments, and visuals that perform great on any platform you target. And we remain committed to ensuring you have wide platform support with amazing tools for any device in this release.

So, I couldn’t wait any longer to tell you about the Unity 2022 LTS release, which is just around the corner.

On June 22, 2023, we’ll host a multi-hour livestream diving into some of our favorite features, so be sure to join us then by signing up to get notified. But if you’re like me and are waiting with bated breath to hear about the new features, I’ve got some inside knowledge for you right here.

Let’s dive into some of the highlights from this release that I’m particularly excited about.

Yes, DOTS is here! The team, led by Laurent Gibert and Joe Valenzuela, has delivered the first fully supported-for-production version of our Entity Component System (ECS), so you can now build with the full Data-Oriented Technology Stack (DOTS) solution – Burst compiler, C# Job System, and now ECS for Unity.

Your most challenging projects require a game engine that provides power and flexibility. ECS for Unity offers the ability to gain greater control and determinism over data in memory and better runtime process scheduling. ECS is integrated into the Editor, so you can leverage your existing GameObject-based experience and use the power of Entities-based code when it’s beneficial.

We’ll have more information about how you can harness the power of DOTS when Unity 2022 LTS is released, but you can engage with the team, who actively participate in our forums here.

With Unity 2022 LTS, we’re excited to support multiplayer games with a rapidly growing end-to-end ecosystem of creation workflows and cloud services. This is made possible by tight integration between the Unity game engine and multiplayer services provided through Unity Gaming Services (UGS).

We’re shipping the Netcode for Entities package with Unity 2022 LTS, fully supported for production. Netcode for Entities is a powerful networking feature to boost a game’s performance and capabilities. You’ll be able to increase the number of players, interactable objects, and backend server-side entities in competitive action games like first-person shooters and massively multiplayer online games.

More than any other genre, multiplayer games rely on successful ongoing operations for live titles. That’s why we have a suite of multiplayer-specific services as part of UGS. Matchmaker, Friends, Leaderboards, and User-Generated Content are designed to simplify the implementation of multiplayer capabilities, while services like Relay, Lobby, Game Server Hosting (Multiplay), and Voice and Text Chat (Vivox) round out capabilities for your live game.

To see our multiplayer functionality in action, we will also be releasing our new Megacity multiplayer sample in June, which is built on ECS for Unity and features UGS tools for hosting, matchmaking, authentication, and voice chat between players.
This demo shows how you can bring our powerful tools together in a fast-paced, competitive third-person shooter that supports 64+ players.

Immerse your players in beautiful, physically based environments with the High Definition Render Pipeline (HDRP). Unity 2022 LTS includes numerous features to enhance your game worlds for an even richer player experience.

Introduce advanced procedural fog and volumetric effects using Volumetric Materials and Shader Graph to create atmospheric game environments such as haunted forests, desolate landscapes, or misty valleys.

The new Water System enables you to add oceans, rivers, and underwater effects to your game environments. You can create realistic waves, ripples, foam, and more, as well as simulate underwater environments with advanced caustics, refraction, and reflection effects.

Improvements to Cloud Layers with dynamic lighting and Volumetric clouds allow you to create even more realistic skies with clouds that change and move based on weather conditions, simply blending between different weather effects such as sunny and cloudy skies.

A new update to the Spline package helps you procedurally generate paths, roads, or fences in your environments with greater precision for more organic and varied game environments with less manual work.

The Universal Render Pipeline (URP) provides a scalable solution for high-quality graphics across platforms. Upgrades in this release allow you to create more realistic lighting in your scenes and achieve higher-quality visuals that scale across devices.

Forward+ Rendering eliminates the light count limit so you can use more lights in a scene while maintaining performance. This feature is particularly useful if you want to achieve high-quality real-time lighting in your scenes.

LOD crossfade produces smoother graphic transitions as objects move closer to or further away from the camera. Temporal Anti-aliasing (TAA) reduces aliasing problems such as pixelated and flickering edges to improve the overall visual quality of game scenes.

Decal Layers allow you to add extra texture details in your scenes with control. You can use Decal Layers to filter and configure how different objects get affected by different Decal Projectors in a Scene.

Customize your rendering experience with features like the Shader Graph Full Screen Master Node and custom screen space effects. These allow you to create unique visual effects in games such as distortion or other types of post-processing, and they’re available in both URP and HDRP.

Shader Variant Prefiltering significantly reduces build times and memory usage, improving games’ overall performance.

Finally, the Built-in Converter helps you move existing projects from the Built-in Render Pipeline to URP, making it easier for you to take advantage of URP’s performance and scalability benefits.

With Unity 2022 LTS, you can optimize your games for the latest platforms across mobile, console, desktop, and XR, with added performance and stability for key features.

Maximize platform potential with increased performance and stability using the DirectX 12 graphics API on Windows and Xbox®.
You can also experiment with the latest ray tracing support for Xbox® Series X|S and PlayStation®5, which allows for more realistic real-time lighting and reflections in game scenes.

Iterate faster with the latest incremental player build process for Xbox®, iOS, PlayStation®5, and Nintendo Switch™*, which improves deployment efficiency, reducing time to market and improving overall game quality.

Dig deeper into the performance of Android games with access to low-level data through the System Metrics Mali package from ARM. Games on Samsung devices can now take advantage of Adaptive Performance 4.0 with support for visual scripting. Better manage WebGL memory usage and gain native C++ multithreading along with support for touch controls and texture compression on web builds for mobile devices.

Unity 2022 LTS expands your reach in XR with an updated XR Interaction Toolkit (XRI) to enhance build times for PlayStation®VR2 and Meta Quest 2. Ensure your VR games run well across a variety of performant devices with late latching and motion vectors and improved graphics performance for games using Vulkan. These updated tools enable you to build immersive, high-fidelity XR experiences that can be deployed across multiple platforms.

With a minimum of two years of development and testing behind it, Unity 2022 LTS has been shaped by user feedback throughout. Many of you have joined us on the journey through beta and Tech Stream releases, and I want to personally thank you for partnering with us to deliver the most stable Unity release in the 2022 cycle. And remember, LTS releases are supported for at least two years through biweekly updates, so we have you covered.

Learn more about our 2022 LTS and the 2023.1 Tech Stream release in our GDC 2023 roadmap session.

Wherever you are in your production process, Unity 2022 LTS offers resources to help you bring your project to the finish line and beyond. Get notified when Unity 2022 LTS is available on release day by signing up for email updates.

We hope this overview of Unity 2022 LTS has given you a good idea of the powerful features and tools available in this release. We’re always eager to hear your feedback, questions, and ideas, so please keep the conversation going.

Interact with other Unity creators and experts in our forums or directly through our platform roadmap. Catch our live streams on Twitch, and be sure to follow us on Twitter to stay up to date on the latest news and announcements. We look forward to hearing what you think.

*Nintendo Switch is a registered trademark of Nintendo.

>access_file_
996|blog.unity.com

Why we’re excited about AI at Unity

We believe that the world is a better place with more creators in it. We make tools and services that help creators succeed, from individuals building their first games to professional studios working on the next great franchise.

That’s why we continue to be excited by the promise of AI- and ML-driven techniques to reduce complexity, speed up creation, and, most importantly, unlock new ideas. Simply put, we think that this technology’s accessibility will help more people to become creators.

We’ve worked for years, both internally and with partners, to explore how AI can be used in simulation, content creation, and game optimization. We see the present moment’s Cambrian explosion of generative AI as an opportunity to go even further.

Unity is uniquely positioned to help you succeed while adopting generative AI because of the Unity Editor, runtime, data, and the Unity Network.

More people use the Unity Editor to create games and other real-time 3D (RT3D) experiences than any other workflow in the world. Over the last 18 years, the Unity Editor has helped to democratize game development while contributing to a massive proliferation of new games across countless devices.

Today, we strongly believe that the power of generative AI will enable Unity creators to be much more productive while ushering in scores of new creators who will face lower barriers to building RT3D games and experiences. We think that these AI tools will complement rather than replace existing tools and workflows. They offer the promise to help creators do more for and by themselves by filling the gaps in skill sets and resources so they can achieve what scarcely seems possible today.

Just as a student might use a generative pre-trained transformer (GPT) tool to jumpstart research or even create a first draft before refining and finalizing a paper in Microsoft Word or Google Docs, Unity creators will be able to use natural-language generative tools together with deterministic, non-AI tools to create code, animations, physical effects, or other real-time content. Creators will move back and forth from rough approximations and text to fine-grained controls and code to iterate and refine the experience they envision.

What’s more, we’re building the technology in the Unity Editor to better define what AI draws from. This not only means using appropriate and licensable datasets for generating content but also integrating AI techniques that are customized to creators’ specific content (for example, by using Low-Rank Adaptation, or LoRA, language models during asset builds to deliver new content that’s trained on their existing work).

The Unity runtime powers more real-time applications than any other in the world, with billions of downloads on billions of devices every month, in well over 100 countries. This means that Unity is the predominant way that content created with AI tools will come alive for consumers and users, since the output of any generative AI creation tools made available in the Unity Editor gets delivered via the Unity runtime. The Unity runtime makes 3D content interactive and available on almost any device, ensuring that it responds to user input, as well as simulating effects like lighting or physics.

But we see an even bigger opportunity.
We believe that AI is not just the domain of creation tools, but that it offers the opportunity for new forms of interaction by moving inference – the process of feeding data through a machine learning model – to runtime.

We’ve been working on this technology – code-named “Barracuda” – for more than five years. What will it mean when designers can build game loops that rely on inference on devices from mobile to console to web and PC? What happens when that AI capability is fast, efficient, scalable, and does not require expensive cloud compute?

We have some ideas – NPCs that come to life, diffusion content as a gameplay mechanism, boundaryless user-generated content – but we know that our creators will do far more with this technology than we could ever even dream.

Most of the digital content in the world today is 2D and linear – think sprites, photos, a set of film frames, a rendering of a building floor plan, or source code. AI data models train on this information to learn and, in the case of generative AI, to create content.

Unity enables the real-time training of models based on unique datasets produced in the creation and operation of RT3D experiences. Through this training, we can build ever-richer services on top of Unity and provide extraordinary capabilities for our partners to leverage Unity as a data creation, simulation, and training engine for their own needs. Natural-language AI models incorporated into the Unity Editor and runtime train on real code and images. That real-usage training data is abstracted from its initial use (it’s not captured or recorded as-is); however, this learning enables Unity’s customers to substantially increase their productivity.

The Unity Network, which consists of our analytics tools, ad networks, publishing systems, and cloud services, reaches a combined total of more than 4B users each month. Each of these services yields data that we can use to help our customers massively improve how they attract new users, increase engagement, or drive greater revenue from that base. Unity has been using the power of neural networks to help continuously optimize systems to support user acquisition, engagement, and monetization for over three years.

Generative AI has been used in some form or another for much of the history of video games, and it has tremendous potential as a tool to help developers achieve more with fewer resources. We’ll be sharing more over the coming months about our vision for AI at Unity, what we’re working on, and how this technology can help you achieve your vision.

Stay tuned to the blog for more about Unity and AI, and, if you haven’t already, sign up for the AI Beta Program to be the first to hear about new tools and services.

>access_file_
997|blog.unity.com

3 ways mixed reality is driving change in car development

“Mixed reality has already changed the way cars are designed with gaming engines like Unity. So it’s already here.” – Jussi Mäkinen, Chief Brand Officer at Varjo

Varjo, Volvo Cars, and Unity have a well-established collaborative partnership. This partnership brings together Varjo’s human-eye resolution headsets, Unity’s leading real-time 3D technology, and Volvo’s drive for innovation.

Drawing on their joint experiences, Varjo chief brand officer Jussi Mäkinen, Volvo innovation leader Timmy Ghiurau, and Unity director Jeff Hanks participated in a panel discussion at SXSW 2023. Each firmly agrees that mixed reality (MR) is an important technology for the automotive industry.

Here, we’ll explore the top three ways that mixed reality is already proving its worth. You can check out the full panel conversation on SXSW.com.

The automotive industry has wholeheartedly embraced mixed reality, and it’s changing the way cars are sold, engineered, designed, and repaired. As Ghiurau explains, Volvo Cars was an early adopter of real-time 3D – largely driven by his understanding of the potential of gaming technology to be applied in industry.

Their virtual twin of a Volvo car allows design and engineering teams to focus on how the machine interacts with the human, which he argues is essential to build trust between human and machine.

By adopting real-time 3D technology, teams at Volvo Cars are able to incorporate the human factor early in the design process, which allows them to consider unique challenges such as driver frustration and address them early on.

“With mixed reality, we are putting the human at the center to see how people might react to various scenarios or human factors like entertainment and stress.” – Timmy Ghiurau, Innovation Leader at Volvo Cars

Varjo partners with innovative companies across industries to create virtual and mixed reality (VR/MR) products and services for advanced users. As Mäkinen says, their experience clearly shows that the automotive industry is already feeling the benefits of mixed reality tech. These range from a reduced dependence on physical mockups for creating new vehicles, to allowing faster design iteration, greater creativity, and running more efficient testing scenarios.

As early adopters of real-time 3D and immersive technology, automotive companies are already reaping the benefits in key use cases like 3D product design and visualization, human machine interface (HMI), and immersive training.

Connecting data silos is a fundamental challenge across the automotive industry. Real-time 3D is addressing that challenge by creating paths to interoperability, often through the medium of mixed reality. For example, by overlaying complex engineering data with design options in a mixed reality environment, stakeholders can envision design while retaining accurate engineering specifications. This single, visual source of truth enables design iteration to become a conversation between machine and aesthetics.

Take the recently launched Volvo EX90 for example. As Ghiurau explains, Volvo Cars used Varjo mixed reality throughout the design process of this model, and improved efficiency, faster build, and the ability to conduct real-time testing from the back seat were just a few of the benefits.

“The tools are already helping us to make design and engineering decisions more efficient and streamlined.
With that saved time, we can explore other implications such as circular economy, sustainability, and other aspects.” – Timmy Ghiurau, Innovation Leader at Volvo Cars

Mäkinen explains how mixed reality delivers further benefits by enabling the interoperability of big data that allows more people from different backgrounds to access the information. Simply put, these solutions help developers, designers, and nontechnical end users to envision, interact with, and collaborate on complex processes more intuitively. The opportunity to better understand end-user challenges results in more inclusive solutions.

What does democratization really mean, in the context of the automotive industry?

With real-time 3D, Volvo Cars can make simulators and mixed reality functionality available across multiple departments. Crucially, by using Unity, teams are able to adopt the technology for themselves – they’re not dependent on developers to build mixed reality solutions. Because of this democratization, the teams have found that their uses of mixed reality go beyond what was originally anticipated. Ghiurau cites examples of teams using the tech for simulating fluid dynamics and predicting crash test data flows.

Future success for automotive companies depends on the ability to iterate and adapt faster. Many are using Unity’s real-time 3D tech to enable their visions of the future, including testing and simulating for long-term scenarios, or powering next-level infotainment systems. Mixed reality solutions enable teams to go beyond their immediate tasks to anticipate and influence the automotive future.

With technology advances like Varjo Reality Cloud and hand-and-eye tracking for menu option selections, the barriers to adopting mixed reality are disappearing every day. Cross-team collaboration is truly possible in mixed reality.

To learn more about what role mixed reality could have for Volvo Cars’ future, check out Timmy Ghiurau’s Unity Creator Day session.

Going beyond automotive design teams, there are numerous potential advantages to extending mixed reality to the creation of consumer experiences, such as:
– Enhancing the driver’s experience: Navigation and infotainment systems could be improved by providing real-time information on the surroundings and road conditions through interactive dashboards or head-up displays (HUDs).
– Improving driver safety: Automotive companies are working on the development of full-windshield HUDs, as well as sensors that can transmit real-time information about the vehicle conditions and environmental factors.
– Supporting consumer maintenance: Interactive manuals contain simple step-by-step instructions and video tutorials that allow users to maintain some car functions.
– Interactive sales experiences: Digital car showrooms give consumers a way to engage with and customize their potential purchases.

Contact us to discuss how real-time 3D and mixed reality could work for you, or check out our software suite for Industry.

>access_file_
999|blog.unity.com

Start creating for Magic Leap 2 with new Unity Learn course

Ready for a big leap forward in augmented reality (AR) development? We’re excited to announce that the Unity Learn team has partnered with Magic Leap to develop a new, self-guided learning experience to support the release of the Magic Leap 2 headset.

Magic Leap develops immersive, enterprise AR technology that seamlessly integrates the digital into the physical world. The Magic Leap 2 is their latest innovation, designed to address barriers preventing the widespread adoption of AR technology. With an emphasis on accessibility and ease of use, this new headset is poised to play a key role across sectors including education, health care, manufacturing, and beyond.

Unity Development for Magic Leap 2 is a course designed by the Unity Learn team in collaboration with Magic Leap. The course is aimed at intermediate developers interested in learning how to create compelling AR experiences for this new device. Magic Leap has produced user guides and documentation, and Unity already supports the Magic Leap 2 through its plug-in system and XR Interaction Toolkit, so a Unity Learn course for the Magic Leap 2 was the natural next step.

Because the course is targeted at intermediate developers, it has been structured differently than beginner-level Unity Learn courses. Instead of showing learners how to write code, the course provides ready-to-use code snippets and example scenes that developers can quickly implement in their projects. The example prototype scenes include a wall-mounted media player, a model viewer with hand tracking, and a sticky note app that takes advantage of the Magic Leap 2’s space localization capabilities.

The course uses an in-Editor tutorial to guide developers through the process of configuring their projects and deploying apps to the Magic Leap 2. Then, there’s a series of additional tutorials to complete, which cover the device’s various features. Each tutorial includes an overview of the feature, a set-up guide, examples of how it can be implemented, and tips for using it effectively. Through these tutorials, course participants will learn about plane detection, global and segmented dimming, marker tracking, hand and eye tracking, meshing, and more. You can see examples of these features below.

This collaboration with Magic Leap will give Unity developers the skills to build AR experiences that make the most of the Magic Leap 2’s many capabilities. If that sounds exciting to you, dive into the Unity Learn course today and start creating.

>access_file_
1000|blog.unity.com

6 community videos to get you started with multiplayer in Unity

Are you looking to try your hand at multiplayer game development in Unity? For the past year, we’ve been launching new products and features to flesh out our Multiplayer suite of tools to support the creation of multiplayer games spanning all genres and platforms – from dedicated hosting to friends list management to in-game voice chat.

However, connecting all the dots into a tech stack that you can use to build the multiplayer game of your dreams can be confusing. That’s why we’re showcasing six recent YouTube video tutorials from community content creators that cover Unity’s multiplayer tools. From Code Monkey’s in-depth tutorials to Dapper Dino’s expert insights, each video provides a wealth of knowledge and inspiration for game developers.

Let’s dive in.

Samyam is a YouTube creator who focuses on game development tutorials for indies. In this video from March 18, Samyam introduces you to Netcode for GameObjects (NGO), one of Unity’s proprietary networking libraries, and shows how to leverage the package alongside Unity Transport to build a simple multiplayer game. Samyam’s video includes a helpful overview of high-level multiplayer terminology before diving into the tech, so this is a great place to get started.

Check out the video to see:
– An introduction to high-level multiplayer terminology
– Hands-on work with NGO 1.2.0 and Unity Transport
– A simple game setup to integrate with hosting and matchmaking services

Code Monkey is a professional indie game developer who creates YouTube content about Unity and C# gamedev. In this recent video, Code Monkey covers how to run dedicated game servers with Game Server Hosting (Multiplay) from Unity Gaming Services (UGS). He walks through:
– An introduction to Game Server Hosting
– Making a dedicated server build of your game
– Uploading your server to the cloud on the Unity Dashboard
– Getting your game servers online for players

Unity’s Matchmaker is a smart, rule-based matchmaking system that plugs easily into the Unity game engine. Dapper Dino walks you through how to integrate Game Server Hosting (Multiplay) and Matchmaker into any game powered by Netcode for GameObjects, using one of his existing projects (which you can access here) to show how to host your game and provide matchmaking for players.

Check out Dapper Dino’s full video to learn how to:
– Access the Unity dashboard
– Set up your servers
– Set up Matchmaker
– Test the services

In this follow-up to his first video in our roundup, Code Monkey covers how to add matchmaking to multiplayer games and define rules with as much or as little complexity as you want. He addresses how Matchmaker is integrated with Game Server Hosting to get your game online, as well as how to sort players into matches. A major benefit of this tutorial is Code Monkey’s explanation of the different rule sets you can explore within Matchmaker to set up skill-based, geography-based, or platform-based matchmaking (or other combinations) to optimize your player experience. Code Monkey has even made the project files available for viewers. Explore how it all works together to build a live multiplayer game by pressing play below.

Tarodev, another popular YouTuber who makes game dev tutorials, walks viewers through getting started with NGO in this video.
In the video you’ll learn:
– How to get started with Netcode
– The difference between server and client authority (and when to use each)
– How to write performant network code
– How to use NetworkVariable and INetworkSerializable
– About cheap multiplayer interpolation
– What ServerRPC and ClientRPC are
– Tricks to make your multiplayer game feel great

If you’re looking for an end-to-end video guide to multiplayer development in Unity, this is the tutorial for you. In this six-hour YouTube course, Code Monkey walks through the full experience of building a small multiplayer game in Unity – from networked gameplay to integrating live services. The video builds on Code Monkey’s previous course on making a singleplayer game in Unity, converting that tutorial project into an online-ready multiplayer experience. The video covers:
– Setting up Netcode for GameObjects
– Networking your gameplay
– Handling player joins and disconnects
– Integrating Lobby
– Hosting with Relay
– Exploring Game Server Hosting (Multiplay), Matchmaker, and Voice and Text Chat (Vivox)
– Multiplayer debugging

What else do you want to see tutorials on in the future? Let us know in the Unity multiplayer forums or on our multiplayer Discord. Happy creating!

>access_file_