// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1691 transmissions indexed — page 63 of 85

[ 2021 ]

20 entries
1242|blog.unity.com

How KO_OP uses version control to foster better teamwork

KO_OP turned to Unity Version Control for version control, cross-team collaboration, and source code management on their biggest project yet.

Let’s face it: Getting a studio of talented artists and engineers aligned on a single production process is challenging. Those moments when teams are scrambling to track accidental file duplication or overwriting are often the result of mismanaged assets and departmental silos.

Canadian studio KO_OP experienced some of these frustrations firsthand. While Git initially seemed like the right version control platform for their programmers, not everyone felt comfortable using it. This eventually resulted in a general slowdown in production, something they needed to mitigate (and fast) ahead of their massive upcoming release, Goodbye Volcano High.

While searching for a solution that would serve their whole studio, KO_OP selected Unity Version Control as their new version control system (VCS). Find out what led them there, and what’s changed since making the switch.

Rapid release cycles, large file sizes, and distributed teams can become difficult to balance in even the most well-coordinated companies. Workflows can get messy, marked by questions and confusion around who’s working on what part of a project, what changes are being made, and when. That’s how artists and engineers can end up working on the same file without the other’s knowledge, leading to inevitable merge conflicts. Although creative and technical teams tend to work independently, their lines cross more than is immediately apparent. Both are essential through all phases of production, from the initial conception and creation of a game all the way to its release, revisions, and ongoing updates.

This is the case at KO_OP’s Montreal-based studio, where all full-time team members are equal owners of the company and, as such, share in key decisions surrounding game design, development, and just about everything in between. Founded in 2012 by studio director Saleem Dabbous and programmer Bronson Zgeb, KO_OP has always valued a more democratic and experimental approach to their interactive projects. Production highlights like the Lara Croft GO expansion The Mirror of Spirits and the Apple Arcade game Winding Worlds have required a serious team effort, and in turn, equally well-rounded support for their team. As Dabbous explains in the company’s recent Vice profile, “This studio exists to support the people who are part of it, not the other way around.”

At the time, however, the team felt somewhat stifled in their mission. KO_OP used Git for version control, which they found lacked the sort of overarching, panoramic visibility that would help them collaborate more efficiently. While programmers had relied on Git for source code management throughout their careers, the creatives with less technical expertise did not intuitively grasp this seemingly mysterious system. And once the pandemic dispersed everyone, forcing them to work remotely, communication only worsened, errors proliferated, and the team at KO_OP knew that they needed a change.

While working on Goodbye Volcano High, KO_OP finally turned to Unity Version Control. As a Unity studio, it seemed like an obvious step. The system serves to refine workflows and enable smooth collaboration without compromising on performance or branching and merging capabilities. Perhaps most importantly, it ensures the team’s reliance on a single source of truth.

Migrating from Git was strikingly simple. KO_OP’s team appreciated Version Control’s detailed documentation, which provided best practices and other methods for efficiency. “It showed [us] how to set up a branch model at a much more granular and effective level than what we were used to,” says Dabbous. Its approachable and visually rich tools appealed to artists and engineers alike.

Before making the switch, KO_OP’s artists relied on programmers to bring assets into their projects safely. Now, thanks to Gluon, a user-friendly GUI and workflow, just about anyone can pick up the files and handle large binaries without much oversight or deep knowledge of branching and merging. Developer Jacob Blommestein refers to this as “a surprise for artists [who could] just add in their .psd files. The versioning was transparent.”

At the same time, writers on the narrative team gained visibility into project status, while developers were taken with Version Control’s branching visualization. “It is easy to parse and much easier to navigate than Git,” shares Dabbous. “People can jump around the project in ways that won’t be destructive.”

To better integrate Version Control with other vital communication tools (think Slack and Jira), KO_OP’s programmers have even started working on a series of DevOps tools. As Dabbous puts it: “We felt we should take the next step and improve collaboration across the board.” That drive, paired with the newfound ability to rapidly reuse code, refine it, and keep track of KO_OP’s other interdependent systems, has been a turning point for the team.

Unity’s approach to version control ultimately provided KO_OP with the perfect occasion to reboot and redefine their production line in a way that goes far beyond project planning. It offers the capacity for open communication and quick iteration to get to market fast. The team’s alignment to a unified workflow has since served them well in preparation for Goodbye Volcano High’s hotly anticipated release. Everyone is now more aware of what others are doing and how their own work fits into KO_OP’s shared vision, operating less like a series of independent contributors and more like a connected group of like-minded people.

Looking to equip your team with the tools to do your best work together? Try Unity Version Control for free. Or, read the full case study to learn more about KO_OP’s experience with Version Control.

>access_file_
1243|blog.unity.com

How to A/B test your subscription app’s monetization strategy

The apps in the top 200 on the app store earn around $82,500 a day, according to Tekrevol. By that same article, however, daily revenue drops to $3,500 for the top 800 apps. That’s why optimizing your app’s monetization strategy from the beginning is important to bring in the big bucks. But how can you be sure of what works best?

We often see developers start designing their app’s monetization strategy by imitating other successful apps from the same category. While this is valuable at first, there can be many fundamental differences between apps within the same category, such as target audience or geography, that make full imitation unsuccessful. Just because the market leader in your app category offers a three-day free trial for subscriptions doesn’t mean it’s right for your audience.

A/B testing is far more impactful than imitation. It gives you statistical confidence in knowing which features improve your revenue, eliminating the guesswork that comes with imitation. By running strong A/B tests with a clear vision of the value they bring to your app, you can optimize engagement for revenue and get a better understanding of your audience. That said, be careful not to over-test and spend time and resources on tests that won’t bring you value in the long run.

Here are some key tests to help your app bring in more revenue.

3 A/B tests you should be running to boost app revenue

When running these tests, it’s important to try to understand long-term success. But with user lifetime hard to identify in apps, these tests are a good place to start.

1. Test your content behind the subscription wall

To drive incremental revenue, it’s important to find the best balance between the amount of free content you offer users and the amount of content you place behind the subscription wall. The goal is to offer enough free content to improve subscription rates without harming retention.

To get started, look at the engagement rates of different features and test how those features perform behind the subscription wall. While you may hypothesize that putting the most popular content behind the subscription wall will improve conversions, doing so could end up pushing loyal users out of your app, increasing churn. To be sure, run an A/B test and keep an eye on retention and revenue.

Testing your premium content and which features are most effective behind the subscription wall is key to keeping users engaged in your app and converting them to subscribers.

2. Test your ad placements

Testing the types of ads you show and where in the funnel you show them is also important to optimizing your monetization strategy. When adding a new ad placement to your app, your goal is to maximize ARPU without harming retention or hurting the in-app economy. Keep in mind that a placement you hadn’t thought about before could end up significantly boosting performance.

Consider whether you’ll implement user-initiated ad units like rewarded video and offerwall, display ad units like interstitials and banners, or both. Ultimately, different ad units appeal to different types of users, so having a mix of multiple ad units is the best way to monetize the widest audience.

Looking at an app that uses the ironSource platform: a prominent lifestyle app started testing ad placements for the first time to improve LTV and drive more revenue. While they started out implementing only rewarded placements like rewarded video and offerwall, through A/B testing they soon realized that implementing interstitials increased their ARPDAU and eCPM significantly. Ultimately, while one ad unit can seem like the most effective way to increase subscribers, through A/B testing you may find that others are just as, if not more, effective.

If you do choose to include rewarded ad units, you’ll need to test the rewards you’re offering to ensure you’re improving conversions. For some apps, it’s best to offer users a taste of premium content. For other apps, it may be more impactful to offer virtual currency that users can spend however they want. Each audience responds to rewarded elements differently, and it’s important to test the reward to ensure you’re using these revenue drivers effectively.

On top of that, be sure to test how often you show ads (capping) and how much time you leave between ads (pacing). This includes testing how you show ads to different segments of users. For example, you can segment your users into ad engagement cohorts (i.e., low, medium, and high ad engagement) or into subscribers vs. non-subscribers. From there, you can test the effect of ad placements on each cohort. For example, after thorough A/B testing, the same lifestyle app decided not to show interstitial ads to subscribers.

3. Test your pricing model

To ensure you’re getting the most revenue out of each user, it’s important to put resources toward designing the best pricing model. Your ultimate goal is to convert users to subscribers, and you’re leaving money on the table if you’re not A/B testing your subscription prices and time frames.

This means finding the optimal combination of subscription period length (weekly, monthly, or yearly) and subscription cost. To measure the impact of your pricing model, look at the LTV of each subscription time frame. This way, you’ll see the long-run effect of the pricing model you’re testing.

Part of testing your pricing strategy includes testing the impact of offering a free trial, especially considering there are both pros and cons to this strategy: without a free trial option, the potential for churn is higher, but with a free trial, the subscription rate could drop. A/B testing is the only surefire way to determine how your long-term revenue will be impacted.

Testing the placement of the free trial is also incredibly important to long-term success: you can offer the free trial as part of the paywall itself, adjust the time frame of the trial, allow users to extend their free trial, or even test when to ask for credit card information. Ultimately, users respond differently to free trials from app to app, so it’s important not to make any preliminary judgments.

How to run a successful A/B test

You want to set yourself up for success from day one to see optimal results over time and improve your KPIs. Here are some tips for running a successful A/B test.

Gather your own data

Start by gathering data on your performance, such as number of downloads and where users are spending the most time. With performance being relative (a social media app is going to have many more downloads than a subscription-based photo editing app), it’s crucial that you develop your own data rather than research the average performance of your category.

With this granular data, you’ll have accurate benchmarks to measure your app’s overall monetary success against, which will help you formulate an A/B test to reach those goals. For instance, time spent is a crucial metric for social apps. Conversion rates, on the other hand, are important for e-commerce and subscription-based apps.

Be happy with a disproved hypothesis

Just as you would in a traditional research study, formulate a hypothesis to serve as the basis of your A/B test. This comes down to analyzing the data and metrics you’ve collected and making observations based on user behavior, which will help inform your prediction. While your hypothesis is likely what you’re hoping will happen, a disproved hypothesis doesn’t mean the A/B test was a failure. On the contrary, a disproved hypothesis can be just as valuable.

With subscription-based apps, your hypotheses may be focused on the cost of the subscription or the length of the subscription period. One app, for example, hypothesized that lowering the dollar amount of weekly subscriptions would increase subscription rates. Makes sense, right? While they disproved this hypothesis, the developers now know they don’t have to reduce their price to get more subscribers, saving them money and opening doors to more A/B tests.

Choose the right time frame

Imitating the time frame of other successful A/B tests isn’t going to make your current test more impactful. In fact, testing for a time frame that makes sense for the metric you’re looking at is vital to ensuring your results aren’t confusing or skewed.

For example, if you’re running a test that locks a lot more content behind the subscription wall, there are a few different metrics involved. When looking at the number of paying users as a key metric, keep the test short term. When looking at retention as a key metric, a long-term test is probably best.

A/B testing is a way to reduce uncertainty about what will work for your app and make business decisions based on data, making it a stronger method than imitation. While this guide is meant to help you get started, with all of the moving parts, it may be beneficial to use A/B testing tools like the ones by ironSource.
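The statistical confidence mentioned above usually comes down to a significance test on the two variants' conversion rates. Here is a minimal sketch using a two-proportion z-test with only the Python standard library; the user counts and conversion numbers are invented for illustration, not taken from any real app:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical test: variant B lowers the weekly subscription price
p_a, p_b, z, p = two_proportion_z_test(conv_a=120, n_a=4000,   # control: 3.0% convert
                                       conv_b=164, n_b=4000)   # variant: 4.1% convert
print(f"control={p_a:.3f} variant={p_b:.3f} z={z:.2f} p={p:.4f}")
```

A p-value below 0.05 would give reasonable confidence the variant's lift is real rather than noise. Note that the time frame you choose for the test (per the tips above) determines how many users accumulate into each group, and small groups make even large lifts statistically inconclusive.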

>access_file_
1245|blog.unity.com

Globe-Trotter takes luxury shopping to new heights

Building a connection between consumers and products has become more important than ever. Discover how luxury luggage manufacturer Globe-Trotter delivered a new digital marketing experience to help customers create one-of-a-kind memories.

Globe-Trotter has delivered high-end luggage to clients like Daniel Craig, Eddie Redmayne, and Kate Moss for over a century. Established in 1897, the luxury travel lifestyle brand produces handcrafted luggage and leather collections for in-store and online purchase. Recently, Globe-Trotter partnered with SmartPixels, a Paris-based startup specializing in 3D product configurators, to create its first online custom luggage service for its many online shoppers.

Knowing that traditional ways of selling products, like photographs or rendered images, wouldn’t be enough to turn shoppers into buyers, Globe-Trotter decided to deliver a more immersive experience that would help customers feel confident purchasing custom luggage valued at $2,700 USD sight unseen. In fact, personalized experiences improve the likelihood of repeat buyers, and 40% of consumers report having purchased something more expensive than they originally planned because their experience was personalized.

“For years Globe-Trotter has offered customers the opportunity to design and build their very own bespoke luggage. As well as being unique, it is a service that will always be in demand, so we’re delighted to be able to offer this online, making it available to a wider audience and giving our customers the freedom to customize their luggage wherever they are,” says Momiji Matsuura, Press Release Manager at Globe-Trotter.

Now, Globe-Trotter shoppers can take advantage of this interactive 3D web configurator to personalize every component of their bespoke luggage. With over a trillion possible configurations to choose from, customers can effortlessly specify the suitcase’s color, interior lining, locks, buckles, body, and more, and view their selections in real time. They can also take their personalization one step further, choosing up to three characters to be monogrammed in one of five locations.

As customers configure their suitcase, the model is rendered in real-time 3D, allowing them to see a virtual, photorealistic replica of their personalized luggage before it is even made. Customers can interact with the luggage by zooming in and rotating the suitcase using the 360-degree viewing option.

“Unity’s rendering technology allows SmartPixels to deliver hyperrealistic 3D visuals, which is essential for our clients in the luxury industry, such as Globe-Trotter,” says Marie Guilloton, Marketing Manager at SmartPixels. “Unity’s platform fits perfectly into SmartPixels’ production pipeline as it has proven to be easy and intuitive to use.”

The finalized designs are sent to Globe-Trotter’s workshop in Hertfordshire, England, where each piece is carefully made by hand and delivered weeks later, equipping customers for their next journey.

Learn more about SmartPixels and start creating your own 3D product configurator with Unity Industry and Pixyz.
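The "over a trillion" figure falls out of simple combinatorics: independent choices multiply. The option counts below are invented purely for illustration (Globe-Trotter's real catalogue will differ), but they show how a handful of per-component options plus a monogram compounds into trillions of configurations:

```python
from math import prod

# Hypothetical option counts per component (illustrative only)
options = {
    "shell colour": 20,
    "interior lining": 20,
    "leather trim": 15,
    "locks": 8,
    "buckles": 8,
    "handle": 10,
    "wheels": 6,
}
base = prod(options.values())  # independent choices multiply

# Monogram: up to three letters (A-Z) in one of five locations,
# or no monogram at all
monogram = 1 + 5 * sum(26 ** k for k in range(1, 4))

total = base * monogram
print(f"{total:,} possible configurations")
```

With these made-up counts the hardware choices alone yield about 23 million combinations, and the monogram multiplies that past two trillion; whether monogram variants "count" is a modelling choice, but the multiplicative growth is the point.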

>access_file_
1246|blog.unity.com

The on-device advertising opportunities with foldable phones

Foldable smartphones, devices built around displays that fold, represent an opportunity to reinvent what on-device advertising can be. Developed out of a desire to create the biggest screen that can still fit in your pocket, foldable phones are the latest trend in innovative technology. In fact, half of US consumers are either very (16%) or somewhat (34%) interested in buying a foldable phone as their next device, according to CNET. In the same survey, 52% of Samsung owners and 47% of iPhone owners expressed interest, a sizable portion of Apple users.

With hardware and software advances creating more advanced folds, foldable phones will only continue to evolve. For advertisers, these devices represent a valuable channel to reach a rich, growing demographic of users on prime real estate. Let’s face it: the future is folding.

Here are some reasons why foldable phones represent a major opportunity for your advertising strategy:

1. Take advantage of the screen design

Foldable phones are best known for their adjustable screen sizes, which allow for flexible usage as well as significant ad opportunities. The Samsung Galaxy Fold, for example, has a typically sized screen on one side and a camera on the other; when unfolded horizontally, the screen becomes the size of a tablet. The Galaxy Flip unfolded is the size of a traditional phone, but when folded vertically, it becomes a square. So, what does this mean for your on-device advertising strategy?

First, it means new engaging ad creatives that leverage the unique folding format. For example, you could design ads that let users reveal more ad content as they fold or unfold their devices, like origami. Or you could offer options like “unfold for more info” or “fold to see more.” This way, the fold becomes a tool to keep users engaged and entertained.

Second, you can design more detailed ad creatives thanks to the larger screen sizes, since you have more room to fill your ad with the important information that will catch users’ attention. Ultimately, the content you would have had to separate into two ads on traditional phones can now fit into one ad on foldable phones: more bang for your buck!

Last, the folding feature on the Galaxy Flip lends itself nicely to push notifications. When the phone is folded into a square, the cover screen is prime real estate for notifications or app previews. Users can seamlessly tap your app notification on the cover screen and then unfold their phone directly into the app or app store. Users also see the same notifications when they unfold their phones to the lock screen, meaning they will see your notification not once but twice.

On top of the design of the phones themselves, foldables represent a turning point in the mobile industry.

2. Meet users at the center point of their digital lives

The key to running a successful campaign is reaching users where they’re most engaged. Foldable phones are changing the way we work and interact with our phones, making it easier to complete tasks that weren’t always optimized for mobile. As a result, consumers are going to start spending more time on these devices than on others, representing a valuable opportunity for your on-device app advertising strategy.

Years ago, desktops became the center of our digital lives in terms of capabilities, replacing the TV and landline. To order an advertised product, consumers once had to call the number on the screen during a commercial. As the internet became more accessible and computers gained popularity, online ordering became the norm and computers were optimized for most day-to-day digital tasks.

Today, foldable phones are replacing the computer as the center point of our digital lives. These devices adapt to fit users’ needs at any moment, and entertainment and productivity capabilities are being optimized to work on foldables. For example, office and design products such as Word and Adobe’s apps work well on foldables because there’s more room to write and create. And while users once booked travel through desktops for the bigger screen and the ability to see more booking options at once, they can now see the same number of options on a foldable phone. Entertainment apps even have a leg up on foldables: users who need subtitles while watching a video no longer have to worry about the words taking up the whole screen as they do on traditional phones.

With foldable phones allowing users to do more, advertising on foldables is a great way to reach a powerful set of users who are likely using their phones more than ever before. It’s also a valuable opportunity to show how your app will be optimized for these devices. For example, if you’re a music streaming app, you can advertise a new “karaoke” feature where users read the lyrics on their foldable screens and sing along. After all, users will engage with your app differently on foldables than on traditional phones, and it’s important to show how.

Even more so, the users on foldable phones today are incredibly valuable.

3. Reach the early adopters of the mobile industry

Any time a device enters the market, especially a new generation of devices, you can expect consumers to flock to the stores, intrigued and excited by what’s new and innovative. But foldables are still a fairly niche technology, meaning the consumers purchasing them are early adopters with high tech awareness: the most influential demographic of users.

With 29% of users saying the tech’s cool factor drives their interest and 25% saying its uniqueness is a selling feature, according to YouGovAmerica, it’s clear that foldable phone users are highly invested in the innovation and growth of the mobile industry. As early adopters, these users are optimists, overlooking the challenges of being first to jump into a new technology, and influencers, eager to share their reviews. This demographic also tends to be well-informed, with high disposable income and higher social status, which gives them significant thought leadership.

Ultimately, the users on foldable phones today are incredibly excited to start using their new phones, discover new content, and download new apps. They’re also prepared to share their thoughts and reviews, which means you’ll be in front of industry authorities and experts ready to talk about your app. For all you know, these early adopters may mention your app as the next big thing in their next Facebook post.

Meet foldable phone users directly on their devices with Aura

Foldable phones aren’t going anywhere any time soon, and there’s no better time to start advertising directly on these unique devices. After all, getting onto foldable phones as soon as possible means reaching quality users today and mastering your strategy for the growing audience of the future.

>access_file_
1247|blog.unity.com

Welcome, SyncSketch!

Unity has acquired SyncSketch, the creator of synchronized real-time collaboration tools that allow users across the world to work together from anywhere.

We are on a mission to enable creativity anywhere and on any device. We believe that it should be effortless for people to invent, share, understand, and build on great ideas together. That’s why we’re so proud to announce that the SyncSketch team is joining Unity.

SyncSketch creates intuitive collaboration tools that allow teams to seamlessly communicate, give feedback, and contribute to creative projects in real time. With SyncSketch, creative collaboration is just a URL-share away. Like Parsec, the remote access platform recently acquired by Unity, SyncSketch makes it easier for creatives to collaborate and work from anywhere. Like Weta Digital, they are laser-focused on serving artists. With this acquisition, we are doubling down on productivity tools for the Unity creator community.

Creators expect to be able to work from anywhere on any device. We’ve gone from linear, five-day workweeks in the office to distributed, asynchronous work as the new normal. In that time, creators across industries have invented new workflows that help them design buildings, share designs, create movies, release games, and much more. SyncSketch helps artists, animators, VFX professionals, students, and creators of all types communicate in real time and collaborate around their work.

SyncSketch supports 2D imagery, video, and much more, but it also shines with 3D models. Users can easily rotate 3D assets, change the lighting, mark up models with notes, and share feedback in real time. The platform facilitates natural visual communication that lets creators focus on what they’re making rather than where they’re making it.

SyncSketch’s products were created by incredible artists and technologists turned entrepreneurs. Bernhard Haux, CEO of SyncSketch, has worn many hats over 25 years in the VFX, animation, and real-time industries. As an artist, in the last decade alone, Bernhard contributed to five Academy Award-winning movies at Pixar and several acclaimed real-time projects, two of which won Emmy Awards. His work and vision for SyncSketch have always been in the sweet spot between art and technology.

Phil Floetotto, SyncSketch’s CTO, is a veteran engineer and artist with over 16 years of experience creating pipeline tools for major visual effects studios. He devised Pixar’s internal production management tool used for planning, scheduling, and tracking every task across all of the studio’s productions. His background as a VFX artist gives him a unique understanding of the creative process and enables him to craft intuitive solutions that let creators fully focus on their art.

“Both Unity and SyncSketch have always been focused on empowering creators. It’s at the heart of what we do. By combining our technology and talent, we will continue to define the way creators will want to work and communicate, from anywhere in the world.” - Bernhard Haux, CEO of SyncSketch

Head to syncsketch.com to learn more. And if you’re interested in trying out SyncSketch for your own creative projects, get in touch with your Unity account rep.

>access_file_
1248|blog.unity.com

How To Develop A Winning Indie Title: Q&A With Evgeny Grishakov From Garden Of Dreams

ironSource sat down with Evgeny Grishakov, CEO at Garden of Dreams, an indie game studio based out of Moscow, to find out how his studio approaches game development and creating a winning title. Read on to learn some best practices for growing your app business and bringing in more users.

What kind of games does Garden of Dreams make? Can you explain your business model?

Overall, Garden of Dreams has an unusual business model. While we started out like all game studios, developing a variety of games and trying to find the niche that would let us make the most money, we eventually realized that none of our team members had any experience in game growth. In the beginning, we released five games and all of them had almost no revenue: complete failures.

I soon realized that to succeed in game development, our team needed to learn more about the industry. We started a YouTube channel to interview successful game developers, and we are still posting videos today.

We soon managed to make a game, Offline Dice, that brought in a lot of revenue. We developed it for nine months before selling it. When we talked about the sale on our YouTube channel, other developers began contacting us for help selling their games. We started bringing buyers and sellers together in the game development market. Most importantly, we buy games ourselves, improve and develop them, and increase their revenue.

You’re the CEO of Garden of Dreams. How and why did you get into developing games?

Game development started as a hobby for me. I was working during the day and developing Flash games in the evenings. I even released three games that brought in some revenue. Five years later, I sat down and began to think: what do I want to do for the next 10 years? What will make me happy and use my best qualities? On December 31, 2019, I decided: game development!

How does Garden of Dreams balance monetization between IAPs and ads? Walk us through your monetization strategy.

As a rule, we always put in-app purchases first. One of our main goals when developing a game is to find ways to integrate IAPs harmoniously so that players are likely to buy them. However, many genres focus on advertising alone; I would guess that about 75% of games in the store are focused purely on advertising. This is why we decided to learn more about implementing and monetizing with advertising, and how we started working with ironSource. By integrating ironSource’s mediation solution into our very first project, we managed to increase income per user 2x. Cool!

What channels do you use for user acquisition? Tell us about your UA strategy.

We’ve just started mastering our UA strategy. Today, we are buying US players, some Spanish-speaking, and constantly doing CPI tests for new games. Since we are newbies, we generate most of our downloads through TikTok videos; we have a mini-department of content makers who shoot, edit, and publish videos on TikTok.

Are there any game development or growth trends that you are excited about?

It seems to me that there’s a trend toward more communication between the studio and the players. In fact, players are starting to want to participate in the development of the games, share their ideas, become testers, and help the game in every way outside of just playing! This is the coolest and most valuable thing in game development: to create communities of people around the game.

What advice would you give to other indie developers trying to make a quality game?

I would recommend getting to know some colleagues around you: find some games that interest you and reach out to the developers. Gaming is a very friendly and open industry. I try to help all the developers who reach out to me, because I also needed guidance at the start, and by talking to experienced industry leaders, I managed to make a successful project right away.

Covering ironSource mediation for newcomers from A to Z

Check out this video Evgeny made about ironSource mediation.

>access_file_
1249|blog.unity.com

Announcing the EdTech Creator Challenge winners

Real-time 3D powers a vast array of immersive learning platforms and tools designed to advance the technical skills of the next generation and increase access to quality learning experiences around the world. We've spent the last few years contributing our products, technology, and expertise to support more than 400,000 students and educators learning Unity per year.

That is why, in August 2021, we partnered with GSV Ventures, a leading investment firm in the $7+ trillion education technology sector. Together we launched the EdTech Creator Challenge to empower creators to continue to change the landscape of education and support ALL learners using Unity.

Today, we are thrilled to introduce the 5 winners of the Challenge, as well as the overall top 25 projects. A team of over 60 internal and external judges reviewed and rated the 250 submissions we received. Each of the top 5 winners will receive $100K in funding from Unity and $10K of Amazon Web Services (AWS) cloud computing credits from AWS EdStart. Our top 25 finalists will receive $2K in AWS credits. Read on to learn how these projects are empowering creators to change the landscape of education and support all learners.

Blue Studios is a live and on-demand PreK-12 STEM edutainment platform that leverages automation and synthetic media to create content cheaper, faster, and better, while also enabling any creator in the passion and gig economy. Using game-design principles, they believe they can create the perfect teacher: accessible to any child around the world, available 24/7, in any language, using synthetic media. Today, Blue Studios has eclipsed 10,000 monthly subscribers in over 10 countries!

Learn more about Blue Studios.

Boddle's mission is to create interactive experiences that improve student outcomes and inspire learning, particularly for students in under-served communities. By using AI and gameplay, they help kids in underperforming schools catch up by boosting engagement and by tailoring learning to the right individual levels. Boddle Learning currently serves over 425,000 students in 28,000+ classrooms across the United States and is also used in over 50 other countries. Boddle aims to inspire the next generation of lifelong learners with interactive learning experiences and is building a metaverse where educational content on any subject and topic can be delivered to kids through an ever-expanding selection of interactive games that meet them exactly where they are.

Fun fact: The company was named after the unique bottle-headed game characters created to illustrate filling up on knowledge the way you would fill up a bottle. While Boddles are learning, their heads fill up, and then they pour back out to grow plants and perform superpowers. This teaches kids the importance of filling up on knowledge and pouring back out to help others!

Learn more about Boddle Learning.

Before founding iCivics, Sandra Day O'Connor had never opened up a computer in her life. But after a 12-year-old convinced her that educational gaming was the right approach to teaching young people what they need to be engaged citizens, she discovered that we can all learn something from kids - and iCivics was born. iCivics envisions a thriving American democracy supported by informed and civically engaged young people. They champion equitable, non-partisan civic education so that the practice of democracy is learned by each new generation, and they work to inspire lifelong civic engagement by providing high-quality and engaging civics resources to teachers and students across the nation.

iCivics has found that students who receive a comprehensive, high-quality civic education are more likely to be informed and actively engaged citizens and voters. Specifically, they are more likely to vote and discuss politics at home, complete college and develop employable skills, and volunteer and work on community issues. Their games and resources are proven - by both external validation and internal measures - to improve students' civic knowledge, civic attitudes, and core literacy skills! Over the last five years, iCivics has more than doubled its reach, from 64,000 educators and 4 million students in 2016 to 140,000 educators and 9 million students today. We're excited to support their growing impact.

Learn more about iCivics.

Shimmy Technologies' AI-powered, app-based training is designed to upskill and reskill garment manufacturing workers anywhere - supporting efficiency, spikes in demand, and Industry 4.0. While automation can sometimes substitute for human work, it also - more importantly - has the potential to create new, more valuable, and more fulfilling careers for humans. Shimmy focuses on understanding how work and automation will evolve over time. Since 2018, Shimmy has conducted pilots in Indonesia, Bangladesh, and the US. They're on track to upskill 1,600+ workers in 5 large factories in Bangladesh by the end of 2021 and are in the process of signing agreements committing to upskill and reskill 14,000 more workers across additional factories in 2022.

Did you know? The company name comes from the famous "Shimmy" dance move, which originated in the 1900s and was considered a rebellious act when performed. It is also how mechanics describe what happens when you flip the switch on a machine and it springs to life!

Learn more about Shimmy Technologies.

Social Cipher's mission is to represent and empower youth of all neurotypes, to increase their self-advocacy skills, and ultimately, to build their self-confidence. Social Cipher is a social-emotional learning (SEL) platform that connects youth of all neurotypes and their advocates (counselors, teachers, mental health professionals) in an immersive virtual world and empowers them to navigate the universe. Neither the game nor the curriculum is based on teaching kids how to emulate neurotypical behaviors and rewarding them for their ability to assimilate. Instead, it aims to develop children's social-emotional learning skills to help them foster a healthy sense of self.

40% of Social Cipher team members are on the autism spectrum, and they use the framework of the social model of disability to inform their intentions as a company. Currently in pilots with 16 different schools and therapy centers, 94% of professionals reported that their youth were more engaged and motivated to learn using Social Cipher. Above all, 100% of students in the pilot program wanted to keep playing the game series!

Learn more about Social Cipher.

We received so many incredible submissions and are inspired by all of the amazing EdTech innovations expanding the playing field and access to quality education for all. Congratulations to the 5 winning projects and the finalists. Here are the 25 finalist projects that will receive AWS credits to support their continued impact:

STEMuli
Age of Learning
Hellosaurus
ARch-ae-o Explorer
Teach the World Foundation
Scholarcade
Manoké, Inc
Buddy.ai
Xplorealms Ltd
Vidaly, Inc.
VictoryXR
XR Lab at Bellevue College
Movers & Shakers
Immersed Games
Shoonya Digital
Enduvo
Arizona State University Learning Futures
Transfr
Baltu Technologies
SimInsights Inc
TechRow
The VR Hive
XpertVR
LUDUS TECH SL
IDEA Games

Congratulations again to all of our winners, and thank you to everyone who submitted to the EdTech Creator Challenge.

We announced our latest grant opportunities, the Unity for Humanity 2022 Grant and the Imagine Grant, at the Unity for Humanity Summit on October 12. The Imagine Grant was created in partnership with award-winning artist, actor, and activist Common, and its theme is inspired by his latest single, "Imagine." The grant will be awarded to the project that best 'imagines a better world.' Applications for both grants are open through December 3, 2021, and we're awarding $500K USD in total across the grants. While a single project cannot receive both the Imagine Grant and a Unity for Humanity 2022 Grant, you can apply for both via the same application. Learn more.

>access_file_
1251|blog.unity.com

Welcome, Wētā Digital!

Today, Unity announced that it has entered into a definitive agreement to acquire Weta Digital - specifically its artist tools, core pipeline, intellectual property, and award-winning engineering talent. The Academy Award-winning VFX service teams of Weta Digital will continue as a standalone entity known as WetaFX, which will become Unity's largest customer in the media and entertainment space. By combining the industry-leading VFX tools and technical talent of the incredible team at Weta with the deep development and real-time knowledge within Unity, we aim to deliver tools that unlock the full potential of the metaverse.

I remember when the first preview of Fellowship landed in theaters - just the preview, mind you, not the actual film - and how the hair on the back of my neck stood up. It's an experience I would find myself having over and over again with Caesar, the Na'vi, King Kong, and in many films where I didn't even know Weta Digital was behind the great work. I was a fan before I fully appreciated the genius of Peter Jackson and knew the depth of the expertise and talent housed in this New Zealand-based studio.

Weta Digital's pipeline represents the most complete toolchain for 3D creation, simulation, and rendering ever created. The brilliance of Peter Jackson and the entire team at Weta Digital is incredibly inspirational to all of us at Unity. The unified tools and the incredible scientists and technologists of Weta Digital will accelerate our mission to give content creators easy-to-use, high-performance tools to bring their visions to life. This pipeline has been developed with an artist-first mentality, and the result is an incredible set of tools capable of the pinnacle of visual effects (VFX), forged within the uncompromising schedules of hundreds of film and TV productions.

Our goal is to put these world-class, exclusive VFX tools into the hands of millions of creators and artists around the world and, once they are connected with the Unity platform, to enable the next generation of RT3D creativity. Whatever the metaverse is or will be, we believe it will be built by content creators, just like you.

The list below represents just a glimpse of the breadth and depth of innovation from Weta Digital's 15 years of deep research and development. Individually, these tools are capable of spectacular results, but their real power is as part of a unified pipeline: changes made in one tool are instantly reflected in another, and groups of artists can collaborate in the pursuit of their vision. These tools have been foundational in award-winning TV shows and films like Avatar, Black Widow, Game of Thrones, Lord of the Rings, Planet of the Apes, Wonder Woman, The Suicide Squad, and more.

Manuka: Manuka is the flagship path-tracing renderer used to generate final frames, able to produce physically accurate results based upon specific spectral lighting profiles.

Gazebo: Gazebo is the core interactive renderer used for viewing scenes in real time with visual fidelity inside any pipeline-attached application. Since Gazebo's real-time rendering of the 3D viewport approaches the same results as Manuka, artists can iterate in the context of the final frame regardless of which application they use. Gazebo is also the core of the production pipeline for previsualization and virtual production workflows.

Loki: Loki provides physics-based simulation of visual effects including water, fire, smoke, hair, cloth, muscles, and plants. Physical accuracy for complex simulations is delivered through the use of cross-domain coupling and high-accuracy numerical solvers.

Physically based workflows: Tools including PhysLight, PhysCam, and HDRConvert provide the foundation for lighting and color workflows. Using these tools, artists can create spectral-based lighting and accurately replicate the effects of different lenses, sensors, and other parts of the pipeline, resulting in a physically accurate rendering workflow for both Gazebo and Manuka.

Koru: Koru is an advanced puppet rigging system optimized for speed and multi-character performance. Using Koru, technical directors and developers can create constraints, rigs, deformers, and puppets to support high-performance animation, cloth simulation, and similar applications.

Facial Tech: Facial Tech provides advanced facial capture and manipulation workflows, using machine learning to support direct manipulation of facial muscles and the transfer of actor face capture onto a target (puppet) model.

Barbershop: Barbershop is a suite of tools for hair and fur that supports the entire workflow, from growth through grooming. Artists can use a combination of procedural and artist-guided tools to grow hair and fur, adjust growth patterns, and groom the final model. Advanced procedural tools support concepts such as braided hair, and the resulting models are simulation-ready to provide realistic dynamics resulting from motion and wind.

Tissue: Tissue enables artists and animators to create biologically accurate anatomical character models that accurately represent the behaviors of muscle and skin, and to transfer the resulting characters into simulation tools.

Apteryx: Apteryx provides artists with a complete workflow for animated feathered creatures and costumes, starting with procedural generation of feathers, followed by hand sculpting and grooming.

World building: These tools, including Scenic Designer and Citybuilder, support world building, layout, and set dressing for scenes ranging from planet-scale to small-scale. With them, artists can procedurally create scenes with node graphs, place content programmatically, and manually adjust placement.

Lumberjack: Lumberjack provides the core toolset for vegetation, including modeling, editing, and deformation tools. Using Lumberjack, artists can author and edit plant topology (including animated geometry) and manage levels of detail, instancing, and variability among individual assets.

Totara: Totara is a procedural growth and simulation system for vegetation and biomes that integrates with Lumberjack to create large-scale, complex scenes procedurally. Using Totara, artists can grow individual trees and entire forest biomes, grow other vegetation such as vines, adjust growth parameters and control biomechanics, add snow cover, and reduce the complexity and size of scenes.

Eddy: Eddy is an advanced liquid, smoke, and fire compositing plug-in for refining volumetric effects. Eddy allows artists to generate new, high-quality fluid simulations and render them directly inside their compositing environment.

Production review: HiDef and ShotSub are the foundation for production review. HiDef offers note taking, version browsing, and more, integrated with a color-accurate browser and playback engine. ShotSub provides tools to prepare artist work for review with the appropriate color space, frame ranges, and settings for frame rate and resolution.

Live viewing: Live viewing tools support mixing computer-generated (CG) content in real time with on-set camera feeds. They support live mixing for on-set viewing, live compositing of CG elements onto chromakey or other CG elements, depth-based live compositing, and projection of face capture onto a motion capture puppet.

Projector: Projector is a production tool supporting scheduling, resourcing, and prediction, with controls for data access and analytics to improve production decision-making.

Another exciting element of this acquisition is the asset library we'll inherit from Weta Digital, which includes urban and natural environments, flora and fauna, humans, man-made objects, materials, textures, and more. The WetaFX team will continue their industry-leading VFX work for major film and TV productions and will feed into this asset library for years to come.

Here is one last video, featuring our graphics architect Natalya Tatarchuk and the visionary, award-winning VFX artist Joe Letteri, that speaks to the power and potential of these tools.

To achieve the full potential of these tools, we will work to unify this pipeline to deliver content across the spectrum, from cinematic realism to real-time XR on mobile devices. This includes linking these capabilities with our other content tools and services, such as SpeedTree, with proven tools for scaling vegetation from VFX to real time, and Pixyz, which provides sophisticated services for managing large, complex models. Our intent is to cloud-enable these tools and ensure they integrate easily with the workflows artists already use. It should be easy to take advantage of these advanced capabilities directly in digital content creation (DCC) tools such as Maya and Houdini, and it should be easy to move and manipulate content into the Unity engine and beyond.

The vision is simple: you will be able to use the DCC canvas you already know and love, get access to a growing set of incredibly powerful tools used in movies like Avatar and Wonder Woman, and get incredible content from our content library to fulfill your vision.

And finally, to the point of this significant acquisition - to our creators now and in the future. We believe we are just at the beginning of an enormous need for rich, interactive, compelling 3D content - in games, in movies, and far beyond. We believe we need to do more to make it easy for anyone to create, and this acquisition is one of the foundational elements we will use to deliver on that vision. We want to make these deep tools available where content creators already are - in tools like Unity, in tools like Maya and Houdini, and in many others - and to use the cloud to give content creators superpowers by making them accessible and more. It will take some time to realize this complete vision, but please see this as our first step.

And to the entire Weta team: thank you for using your imagination and vision to inspire us. We are looking forward to this future together. I am very excited about the creation of Weta Digital at Unity. The other day, somebody shared with me a very powerful Māori proverb:

Nā tō rourou, nā taku rourou ka ora ai te iwi

I believe the literal translation is "with your food basket and my food basket, our people will thrive." This thought - if we work together, we can do much more than if we are alone - captures exactly what we are trying to do.

Join the discussion on the Unity Forums.

>access_file_
1252|blog.unity.com

#unitytips Dev Takeover: VFX and shaders with Harry Alisavakis

The #unitytips Dev Takeover is an ongoing series on our @unitygames Twitter account, where the Unity team invites super users from our community to share their insights, tips, and tricks directly with our followers. We're kicking things off with Harry Alisavakis, tech artist at Jumpship Studio and VFX wizard extraordinaire.

If you don't know Harry yet, you might recognize him from his neon-green avatar floating around whenever, and wherever, there's talk of shaders. Here's a quick rundown on how Harry has become such a rockstar in the world of visual effects: Currently working as a technical artist on the upcoming game Somerville over at Jumpship, Harry spends his 'spare' time learning about VFX and shaders. He continues to inspire creators through "Technically Art," his weekly compilation of game development tweets, where he also promotes the work of other talented artists (be sure to give him a follow!). Through his related Discord server, "Technically Speaking," he leads chats about technical art, Unity creative challenges, and AMAs to answer as many user questions as possible. Check it out here.

Below are just a few stills from Harry's most recent work. You can find even more in his portfolio. Now onto the #unitytips, courtesy of Harry Alisavakis.

Let's start with a small VFX trick for you to try out. While timing particle system effects with each other can be a bit fiddly, there's actually a simple way to iterate on your visual effects using Timelines. 🧵 In Unity, Timelines have built-in support for particle systems, so you don't need any custom scripting whatsoever. Just drag and drop your particle system right on there, and you'll be able to scrub through it. Combining these tracks with animation or any other Timeline track gives you a much better idea of how to sync up all the individual animated elements to create some really juicy VFX.

There's a super fun way to get more bang for your buck when using particle systems and custom shaders, and that's through custom vertex streams. Let's take a moment to fully understand what these are, and how we can use them for more advanced particle effects. 🧵

As you know, rendered models in Unity are made of triangles that consist of vertices. The vertices hold all the essential information about the model, such as each vertex's position, UV coordinates, and vertex color. The cool thing is that we can add any sort of arbitrary data to our vertices and use it in our custom shader however we like. ✨ That's the beauty of custom vertex streams in particle systems: we can pass particle-related information to our vertices and leverage it only as needed. The option to add custom vertex streams can be found under the particle system's Renderer module. Enabling it will show you all of the vertex streams already in use, like the UV coordinates and vertex color.

Finally, let's make a simple dissolve shader for our particle system using Shader Graph. We're talking about an unlit, double-sided Universal Render Pipeline (URP) shader with alpha clipping. The interesting thing to notice here is what drives the dissolve effect: the third component of our UVs. You might be wondering why, especially since we tend to work with the x and y components of UV coordinates for texture sampling. Well, next to each stream's name, you'll see where its data is stored. Here, the new stream is stored in TEXCOORD0.z, which corresponds to the third component of the first texture coordinate channel (a.k.a. UV0.z). By adding the lifetime age percentage stream, this value starts at zero and moves toward one over each particle's lifetime. With our shader, this makes particles dissolve over time. Applying the shader to the particle system gives us this neat result:

So far so good, but what if we want even more control over the particles' lifetime? Age percentage works, but it's linear and not very useful for creating more complex effects. The solution lies in the Custom Data module: we can use Custom1.x instead of age percentage, which allows us to employ a curve that alters the value over the particles' lifetime, similar to built-in curves like Size over Lifetime. Now we can better manage how our particles dissolve over time. ✨ How great is that?

Of course, there's tons more data that you can pass to custom vertex streams, and the possibilities for using them inside your custom shaders are plenty. That said, we'd love to know about your own creative uses for custom vertex streams in the comments below. Happy VFXing! ✨

Follow our Unity for Games Twitter for weekly #unitytips on Tuesdays and monthly Dev Takeovers. Let us know in the comments who you would like to see featured in future Dev Takeovers on Twitter.
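Stripped of the Shader Graph specifics, the dissolve effect above boils down to a single threshold test per fragment: clip anything whose noise value falls below the dissolve driver streamed in TEXCOORD0.z. Here is an illustrative sketch in plain Python (not shader code; the function and variable names are ours, not Unity's):

```python
def surviving_fragments(noise, dissolve_amount):
    """Alpha-clip sketch: a fragment survives while its sampled noise
    value exceeds the dissolve driver streamed in TEXCOORD0.z.

    noise: per-fragment values sampled from a noise texture, in [0, 1]
    dissolve_amount: 0 at particle birth, 1 at end of life when driven
                     by age percentage, or any curve value via Custom1.x
    """
    return [n for n in noise if n > dissolve_amount]

noise = [0.1, 0.35, 0.6, 0.9]
print(surviving_fragments(noise, 0.0))  # newborn particle: all fragments visible
print(surviving_fragments(noise, 0.5))  # halfway: only high-noise fragments remain
print(surviving_fragments(noise, 1.0))  # end of life: fully dissolved
```

Swapping the linear age percentage for a Custom1.x curve only changes how `dissolve_amount` evolves over the lifetime; the clip test itself stays the same.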

>access_file_
1253|blog.unity.com

Expert tips on optimizing your game graphics for consoles

If you're a regular reader of the Unity blog, then you probably noticed the recent series of posts that shared many great tips for optimizing mobile games, covering graphics and assets; profiling; memory and code architecture; and physics, UI, and audio. Today we're back with more handy tips, this time for optimizing high-end graphics on consoles. Get pointers on how to reduce batch count, which shaders to avoid, rendering options, and more. These tips come from a new e-book of advanced optimization techniques for PC and console games, available for you to download for free.

Though developing for Xbox and PlayStation does resemble working with their PC counterparts, those platforms present their own challenges, and achieving smooth frame rates often means focusing on GPU optimization.

To begin, locate a frame with a high GPU load. Microsoft and Sony provide excellent tools for analyzing your project's performance on both the CPU and the GPU, so make PIX for Xbox and Razor for PlayStation part of your toolbox when it comes to optimization on these platforms. Use the respective native profiler to break down the frame cost into its specific parts. This will be your starting point for improving graphics performance.

As on other platforms, optimization on console will often mean reducing draw call batches. There are a few techniques that might help:

Use occlusion culling to remove objects hidden behind foreground objects and reduce overdraw. Be aware that this requires additional CPU processing, so use the Profiler to ensure that moving work from the GPU to the CPU is beneficial.

GPU instancing can also reduce your batches if you have many objects that share the same mesh and material. Limiting the number of models in your scene can improve performance, and if it's done artfully, you can build a complex scene without making it look repetitive.

The SRP Batcher can reduce the GPU setup between draw calls by batching Bind and Draw GPU commands. To benefit from SRP batching, use as many Materials as needed, but restrict them to a small number of compatible shaders (e.g., the Lit and Unlit shaders in URP and HDRP).

Enable Graphics Jobs (Experimental) in Player Settings > Other Settings to take advantage of the PlayStation's or Xbox's multi-core processors. This allows Unity to spread the rendering work across multiple CPU cores, removing pressure from the render thread. See the Multithreaded Rendering and Graphics Jobs tutorial for details.

Be sure to use post-processing assets that are optimized for consoles. Tools from the Asset Store that were originally authored for PC may consume more resources than necessary on Xbox or PlayStation. Profile using the native profilers to be certain.

Tessellation subdivides shapes into smaller versions of that shape, which can enhance detail through increased geometry. Though there are examples where tessellation does make sense (e.g., Book of the Dead's realistic tree bark), in general, avoid tessellation on consoles, as it can be expensive on the GPU. Like tessellation shaders, geometry and vertex shaders can run twice per frame on the GPU - once during the depth prepass, and again during the shadow pass. If you want to generate or modify vertex data on the GPU, a compute shader is often a better choice than a geometry shader. Doing the work in a compute shader means that the vertex shader that actually renders the geometry can be comparatively fast and simple.

When you send a draw call to the GPU, that work splits into many wavefronts that Unity distributes throughout the available SIMDs within the GPU. Each SIMD has a maximum number of wavefronts that can be running at one time. Wavefront occupancy refers to how many wavefronts are currently in use relative to that maximum, which measures how well you are using the GPU's potential. PIX and Razor show wavefront occupancy in great detail.

In this example from Book of the Dead, vertex shader wavefronts appear in green and pixel shader wavefronts appear in blue. On the bottom graph, many vertex shader wavefronts appear without much pixel shader activity, which shows an underutilization of the GPU's potential. If you're doing a lot of vertex shader work that doesn't result in pixels, that may indicate an inefficiency. While low wavefront occupancy is not necessarily bad, it's a useful starting metric for optimizing your shaders and checking for other bottlenecks. For example, if you have a stall due to memory or compute operations, increasing occupancy may help performance. On the other hand, too many in-flight wavefronts can cause cache thrashing and decrease performance.

If your project uses HDRP, take advantage of its built-in and custom passes, which can assist in rendering the scene. The built-in passes can help you optimize your shaders, and HDRP includes several injection points where you can add custom passes of your own. For optimizing the behavior of transparent materials, refer to the page on Renderer and Material Priority.

The High Quality setting of HDRP defaults to a 4K shadow map. Reduce the shadow map resolution and measure the impact on frame cost; just be aware that you may need to compensate for any changes in visual quality by adjusting the light's settings.

If you have intervals where you are underutilizing the GPU, Async Compute allows you to run useful compute shader work in parallel with your graphics queue, making better use of those GPU resources. For example, during shadow map generation, the GPU performs depth-only rendering. Very little pixel shader work happens at this point, and many wavefronts remain unoccupied. If you can synchronize some compute shader work with the depth-only rendering, this makes for better overall use of the GPU: the unused wavefronts could help with screen space ambient occlusion or any other task that complements the current work.

In this example from Book of the Dead, a series of optimizations shaved several milliseconds off the shadow mapping, lighting pass, and atmospherics, and the resulting frame cost allowed the application to run at 30 fps on a PS4 Pro. Watch a performance case study in Optimizing Performance for High-End Consoles, where Unity graphics developer Rob Thompson discusses porting Book of the Dead to PlayStation 4.

If you want access to the full list of tips and tricks from the team, we've also published a 92-page e-book, available here, packed with actionable insights.

DOWNLOAD E-BOOK

If you're interested in learning more about Integrated Support services and want to give your team direct access to engineers, expert advice, and best practice guidance for your projects, then check out Unity's success plans here.

Didn't find what you were looking for? We want to help you make your Unity applications as performant as they can be. If there's any optimization topic that you'd like to know more about, please keep us posted in the comments.
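The wavefront occupancy metric described above is just the ratio of in-flight wavefronts to the hardware's total capacity. A minimal sketch of that arithmetic (illustrative Python; the SIMD counts and the per-SIMD limit are made-up numbers, not real console hardware values):

```python
def wavefront_occupancy(active_per_simd, max_per_simd):
    """GPU-wide occupancy: total in-flight wavefronts divided by the
    total wavefront capacity across all SIMDs."""
    capacity = max_per_simd * len(active_per_simd)
    return sum(active_per_simd) / capacity

# Hypothetical profiler snapshot: 4 SIMDs, each able to hold 10 wavefronts.
snapshot = [10, 8, 2, 0]
print(f"{wavefront_occupancy(snapshot, 10):.0%}")  # prints "50%"
```

Tools like PIX and Razor report this per shader stage and over time, which is what lets you spot the "lots of vertex wavefronts, few pixel wavefronts" pattern mentioned above.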

>access_file_
1255|blog.unity.com

A guide to multiplayer mobile games for 2021

How do multiplayer mobile games work?As mobile devices become more and more complex and progress technologically, they also allow the capacity to support more engaging and intricate gameplay to meet a certain standard players demand - such as live multiplayer gameplay.The constant need to live up to console-level quality has caused mobile developers a great deal of stress. Engaging countless players in matches running in real time, with no lag, secure gameplay while maintaining player satisfaction on mobile is no simple feat.The phenomenon that is multiplayer mobile games is accomplished by connecting players to one another via a server that is ideally of equal distance to all the players in the game, using a sophisticated matchmaking system. This method of grouping players together also deals with other crucial factors such as opponent skill levels, optimal number of players, mid game drop-ins and drop-outs, and acceptable wait time for a match connection. These elements all play in to running an effective service-based multiplayer game in the competitive landscape that is mobile gaming.How do multiplayer mobile games monetizeAside from the technological hurdles developers have to jump through, there is the matter of how to monetize users in real-time. In-app purchases are one way to monetize multiplayer games, however they cannot rely on the tried and true method of rewarding high spenders with the most premium tools or weapons. Why? Doing so will shift the odds in favor of IAP purchasers favor, while players who don't buy as many items will be left in the dust, causing resentment among the player community. The key is to be thoughtful of the player experience, and not to force any in-game purchases, rewarding skill and determination instead - as those values have proved successful historically with skill based games. 
Take DOTA 2, for example: it made $18 million a month in part due to its IAP reward system.

Things get a little trickier when monetizing multiplayer mobile games with ads, as they don't necessarily have intermission points between levels to integrate an interstitial ad, and you definitely can't place an ad mid-gameplay. But there are a few ways around this, such as placing interstitials at the end of a match, in the game lobby, or perhaps for spectators while they wait for the next round. Interstitials are typically recommended over other ad units such as rewarded video, as they do not alter the dynamics of the game by granting more rewards to users who watch more videos. As multiplayer mobile gaming is still in its infancy, it pays to test different monetization mechanics before integrating ad units left and right.

What are cross-platform multiplayer mobile games?

Nowadays, many online video games support cross-play between mobile, handhelds, consoles, and computers to unite players on all platforms. This is achieved by sharing the same online server, connecting PS4, Xbox One, PC, and mobile together. Aside from giving players more freedom in deciding where, when, and on what device to play, cross-platform gameplay has opened the door to an expansion of the player community. It solves the common problem of engaging those stubborn friends loyal to different devices, or, worse yet, of simply finding an online match.

How cross-platform multiplayer gaming is changing the industry

Times are a-changin'. The idea of cross-platform multiplayer gaming was unheard of not so long ago, as Sony, Nintendo, and Microsoft were fierce contenders in a war for gaming-platform superiority. Massive, fully cross-play multiplayer hits like Fortnite and Minecraft have broken through platform barriers and proved successful nonetheless, awakening competitors to the potential of collaboration in supporting more online multiplayer titles.
And it seems this is only the beginning: as more and more gaming communities express tremendous desire for cross-play, we can expect to see additional games fully adopt cross-platform multiplayer and ride this new wave of cooperation into the future.

Best mobile games to play with friends

Here are some of the best multiplayer games to make their way to mobile:

Minecraft
Cost: $4.99. Available on: iOS and Android.
This highly addictive game immerses the user in a strange 3D building-block world where anything goes. The Minecraft series has been around for what feels like a century now, and has consistently been one of the most popular multiplayer games on any platform, whether PC, Xbox, or now mobile. The mobile version, Minecraft: Pocket Edition, is probably the closest you'll get to console-level quality in a mobile game today.

Pokemon GO
Cost: Free. Available on: iOS and Android.
Pokemon has been a childhood sensation for as long as we can remember. This best-selling video game franchise rocked the boat when it released the first successful augmented reality game on mobile. Instead of dropping the player into the world of Pokemon, this unique gameplay brings these fantastic virtual creatures into the player's world, thanks to GPS and AR technology, essentially letting you locate, capture, train, and battle them as you go out for a morning stroll.

Fortnite: Battle Royale
Cost: Free. Available on: iOS and Android.
The word "Fortnite" has become synonymous with gaming, as the game drew in more than 125 million players in less than a year and rose to be the most popular game in the world. The mobile version, Battle Royale, is essentially a survival shooter that transports the player onto an island with 100 or so other players ready to scavenge, build, and kill each other until only one is left standing.
By March 2019, the game had been played by over 250 million people, generating over $2 billion globally.

Words with Friends
Cost: Free. Available on: iOS and Android.
Classic crossword-style puzzle games have seen newfound interest since Words with Friends came out in 2009. The gameplay offers players the opportunity to engage their friends or random strangers in a highly competitive word game. It has been so successful that in May 2017, Words with Friends was named the most popular mobile game in the US.

Clash of Clans
Cost: Free. Available on: iOS and Android.
The Clash of Clans gameplay is set in a make-believe world where a community of online players form clans, gather resources, trade, and battle one another for honor, glory, and loot. The game has attracted a lot of attention, as Supercell has promoted it heavily on every media channel; their spot even ranked as the 5th most-watched Super Bowl commercial in 2015.

>access_file_
1256|blog.unity.com

Q&A: How Moonee boosted eCPM using ironSource in-app bidding and Vungle’s bidding network

Moonee is a hyper-casual game developer with over 200 million downloads to its name. Its portfolio includes hits like Square Bird, Idle Streamer, Makeover Run, and Monsters Gang. We asked Gabriel Oltarz, Head of Growth at Moonee, how they used ironSource in-app bidding together with Vungle's bidding network to grow their business.

What does in-game monetization look like at Moonee?

Our in-game monetization at Moonee is focused on ads, in particular interstitial, rewarded, and banner formats. That's why it was crucial we found a solution to fully maximize our ad revenue potential. We have our own engine that enables us to optimize waterfalls at scale; however, traditional instances have their limitations, making it challenging to fully maximize the potential of our demand partners.

We are still maximizing the potential of traditional instances with Moonee's engine, but we understand that in-app bidding is where the industry is heading: real-time auctions increase competition between monetization partners and give publishers that extra mile that can be the difference between a good and a bad eCPM. With such fierce competition, leveraging in-app bidding's ability to maximize revenue and grow our business is a must.

How have you leveraged in-app bidding to boost your monetization?

We began using ironSource's in-app bidding solution, and the results on our eCPMs and ARPDAU have been really promising. The jump in revenue has largely been due to the high competition within ironSource's in-app bidding solution: they have a large number of bidding networks, which means they're able to really boost demand for our ad inventory. Specifically, we saw that adding Vungle's bidding network to our mix as part of the alpha test performed especially well, helping spike competition and, in turn, our eCPMs.
Vungle is performing above our expectations, increasing its share of voice by 28% thanks to its bidding solution.

Apart from this increase in ARPDAU and eCPM, have there been any other benefits of using ironSource's bidding solution?

We've found that in addition to strengthening these KPIs, bidding has also freed up time we previously spent on waterfall optimization. In addition, the whole user experience on the ironSource platform makes everything easy and quick to manage. Thanks to the very intuitive UI, our team can focus on key tasks rather than struggling to understand the platform.

"In-app bidding allows publishers to maximize opportunities and achieve the best price out of their inventories" - Gabriel Oltarz, Head of Growth, Moonee Publishing Ltd.

Aside from the platform, how's it been working with the ironSource team?

We've been really impressed by ironSource's communication and service. When they let us know that Vungle was available to use as a bidder via ironSource mediation, we understood that our success is all that matters to them, even if it's not only through the ironSource network itself. In this high-paced industry, communication and availability are crucial to success. ironSource is very professional and provides quick responses to our inquiries. It's reassuring to know there is someone to talk to whenever we have an issue or question.

>access_file_
1257|blog.unity.com

Best practices for monetizing mobile games with banner ads

When it comes to monetizing apps with ads, diversification is key. That means using a mixture of user-initiated ads, like rewarded videos, and system-initiated ads, like interstitials and banners, to maximize your ad revenue. The majority of banner demand is filled by brands and agencies, so serving banner ads is a great way to expose your app traffic to brands and increase revenue. In this article, Rotem Weinberg, Growth Strategy Manager at ironSource, shares best practices to make sure your banner ads generate the most revenue for your app while preserving a positive user experience.

1. Set the right refresh time for your app

Mobile banner ads are typically displayed at the top or bottom of the screen, sticking to the screen for the duration of the user session. As the developer, you can set the frequency at which the banner content refreshes to show a new advertisement. There is no one-size-fits-all approach: in some apps, setting the refresh time to every 30 seconds could be effective in increasing ARPDAU by maximizing exposure, while in other apps a longer refresh time could make sense. In general, refresh times range from 25 seconds to 2 minutes. If you have an app with longer average play sessions, you have the luxury of experimenting with both short and long refresh times. The key is to A/B test frequently, measuring the impact of different refresh times on KPIs like eCPM, ARPDAU, and retention.

2. Test different banner sizes

The size of your banner ads can have a direct impact on their overall performance. The standard banner size of 320x50 is most commonly used.
This takes up minimal real estate on-screen, which is a plus for the user experience. While standard banners are great, publishers may find higher CPMs with "MREC" implementations: 300x250 Medium Rectangle banners that can fit into many menus and screens throughout the user session. There are multiple sizes to choose from, however, and it's also possible to create custom sizes. The key, as always, is to A/B test; you can do that with the A/B testing suite in ironSource's mediation, which offers analytics reporting so you can drill down into your KPIs and optimize effectively.

3. Test the singleton approach

An often effective way to increase revenue from your game's banner ads is to use the singleton approach, which refers to keeping the same banner in place throughout the app experience. For example, if you serve the user a banner ad on the game's home screen, make sure this ad follows the user even as they leave the home screen and begin navigating through new pages in the game. Keeping your banner ads "sticky" using the singleton approach helps increase revenue because it maximizes user exposure to a specific ad. Users will often scroll right past a banner or pay minimal attention to it at first, but if it follows them around the game, there's a greater likelihood of them noticing it, engaging, and ultimately generating revenue for you.

4. Use in-app bidding

To maximize the revenue your banner ad space generates, make sure you're using in-app bidding. Bidding works as an auction, with ad networks bidding in real time to serve ads in your game. This maximizes demand from ad networks to fill your banner ad space, increasing the revenue you earn per impression. In addition, bidding is a largely automated process, removing the burden of time-consuming manual waterfall optimization.
With the considerable time this saves, you can focus your efforts on perfecting your banner placement strategy and user experience, A/B testing things like refresh time, banner size, and the singleton approach.

Put these tips into practice: monetize with banners on ironSource.
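In Unity terms, the "singleton" approach from tip 3 can be sketched as a banner object that survives scene loads, so the same placement follows the user between screens. This is only an illustration of the object lifetime; the actual ad calls belong to whichever ad SDK you use:

```csharp
using UnityEngine;

// Sketch of a "sticky" banner holder: one instance persists across
// scene changes, so the same banner placement follows the user.
public class PersistentBanner : MonoBehaviour
{
    static PersistentBanner instance;

    void Awake()
    {
        if (instance != null)
        {
            // A banner already exists from a previous scene; keep that one.
            Destroy(gameObject);
            return;
        }
        instance = this;
        DontDestroyOnLoad(gameObject); // survive scene transitions
        // Load and show the banner via your ad SDK here.
    }
}
```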

>access_file_
1258|blog.unity.com

Tales from the optimization trenches: Better managed code stripping with Unity 2020 LTS

Managed code stripping is a critical step in the build process that helps decrease the size of an application's binary files by removing unused code. By removing code, you ensure that it won't be compiled or included in the final build. While this should slightly reduce memory usage for projects running with the IL2CPP backend, there's always the risk of missing types and methods at runtime (among other issues) at higher managed code stripping levels.

During the build process, some code is considered unused and is consequently stripped. Manually adding the needed assemblies to the link.xml file might not be the simplest way to preserve them from removal. During the project reviews I conduct as part of my work as a Unity Software Development Consultant, I've received questions from customers on how they can better handle managed code stripping. That's why I've gathered these tips and best practices, which may improve your workflow with support from the new managed code stripping annotation attributes.

The removal of unused code is especially important when using the IL2CPP scripting backend. The Unity linker, a version of the Mono IL linker customized to work with Unity, performs static analysis to strip managed code. Unity supports three levels of managed code stripping for IL2CPP: Low, Medium, and High. The managed code stripping manual explains how the code stripping process works, which factors cause certain code to be stripped, and how the stripping levels differ from each other. In short: the higher the code stripping level, the harder the linker tries to find and remove unused code. You can modify the Managed Stripping Level in your project's Player Settings.

Static analysis, which the linker uses to identify unused code, cannot cover cases where an object's type is only determined at runtime. These cases lead to false-positive results.
Though there might be no reference to a class or method at compile time, the class or method may still be required by some part of the code at runtime. Code that uses Reflection is a good example: it might load an assembly such as MyAssembly by name, look up a type such as MyType, and invoke a method such as MyMethod, all via strings. While this is a valid and commonly used pattern, the linker doesn't know whether MyAssembly, MyType, and MyMethod are actually used at runtime. This can cause them to be stripped and, by extension, result in a runtime error. Check out the stripping restrictions manual for more information.

Developers who use Dependency Injection frameworks like Zenject, or serialization libraries like Newtonsoft.Json, have to be aware that false-positive code stripping is a possibility and should address it accordingly. Here are some of the most common approaches:

The linker recognizes a number of attributes that let you annotate dependencies it cannot identify on its own. For example, you can add the [Preserve] attribute to assemblies, classes, and methods that should not be stripped.

A link.xml file is a per-project list that describes how to preserve assemblies, types, and other code entities. You can manually add the needed assemblies, classes, and methods to link.xml, or use the UnityEditor API GenerateAdditionalLinkXmlFile to generate the link.xml file during the build process.

Even the Addressables package harnesses the LinkXmlGenerator. The Addressables build script reviews the list of assets in the groups and adds the types used by those assets into the link.xml file. It also adds types used internally by Addressables via Reflection at runtime. Consider reviewing the default build script, BuildScriptPackedMode.cs, for more details on implementing a similar solution as a step in your build process, as with the Scriptable Build Pipeline.

Unity supports multiple link.xml files located inside the Assets folder or one of its subfolders.
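For reference, a link.xml file is plain XML; a minimal example might look like this (the assembly and type names here are placeholders):

```xml
<linker>
  <!-- Preserve an entire assembly -->
  <assembly fullname="MyAssembly" preserve="all"/>

  <!-- Or preserve a single type and all of its members -->
  <assembly fullname="Newtonsoft.Json">
    <type fullname="Newtonsoft.Json.JsonConvert" preserve="all"/>
  </assembly>
</linker>
```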
During the build process, the entries of all link.xml files are merged and considered by the linker.

Using the [Preserve] attribute might require some manual work. But if your project is already on Unity 2020 LTS, you can use a number of new managed code stripping annotation attributes to easily and precisely mark assemblies, classes, and methods that shouldn't be removed during code stripping. Here are just some of them:

RequireAttributeUsagesAttribute: When an attribute type is marked, all CustomAttributes of that type will also be marked, reducing complications when working at the High stripping level.

RequireDerivedAttribute: When a type is marked, all types derived from that type will similarly be marked.

RequiredInterfaceAttribute: When a type is marked, all interface implementations of the specified types will be marked.

RequiredMemberAttribute: When a type is marked, all of its members with [RequiredMember] will be marked. This makes code stripping more precise, as it stops the declaring type from becoming unstrippable. Note, however, that if the class itself is not used, its members will also be stripped, despite being marked with the [RequiredMember] attribute.

RequireImplementorsAttribute: When an interface type is marked, all types implementing that interface will be marked, so there's no need to mark every implementation. If the interface is not implemented anywhere in the code base, however, it will still be removed, despite being marked with the [RequireImplementors] attribute.

In Unity 2020.1 and 2020.2, the Unity linker received API updates to match the Mono IL linker.
It can now detect some simple Reflection patterns, which means that if you've upgraded to Unity 2020 LTS, you have fewer reasons to use link.xml files.

For more information on how Unity 2020 LTS can help optimize your coding workflows, check out the feature overview and the updates page in the Unity 2020 LTS manual.

As part of our 2021 goal of making it easier for you to deliver high-quality builds to your testers and players, we've stayed focused on improving code stripping workflows. More specifically, we've added a new Managed Stripping Level called Minimal in the 2021.2 release. It will become the default for the IL2CPP backend, so stay tuned.
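To tie the stripping discussion together, here is a sketch of the Reflection pattern that static analysis cannot follow, alongside the [Preserve] annotation that protects the target from being stripped. MyAssembly, MyType, and MyMethod are the placeholder names used in the article, not real identifiers:

```csharp
using System.Reflection;
using UnityEngine.Scripting;

// Nothing references MyType statically, so without an annotation the
// linker may strip it, and method.Invoke would then fail at runtime.
public static class ReflectionCaller
{
    public static void CallByName()
    {
        var assembly = Assembly.Load("MyAssembly");
        var type = assembly.GetType("MyType");
        var method = type.GetMethod("MyMethod");
        method.Invoke(null, null);
    }
}

// In MyAssembly: [Preserve] tells the Unity linker to keep this type
// and method even though no static reference to them exists.
[Preserve]
public class MyType
{
    [Preserve]
    public static void MyMethod() { }
}
```

Listing MyType in a link.xml file would achieve the same protection without touching the source code.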

>access_file_
1259|blog.unity.com

Get acquainted with HDRP settings for enhanced performance

Learn how to leverage High Definition Render Pipeline (HDRP) settings to maximize performance and achieve powerful graphics all at once.

With the release of HDRP version 10 for Unity 2020 LTS and beyond, the HDRP package has continued to prioritize a user-friendly interface, flexible features, stability, and overall performance. But to set HDRP up for optimal use, it's crucial to understand all of the main settings, how they work, and what they do. That's why we're looking at how HDRP operates from the perspective of CPU/GPU Profiler captures, the Render Pipeline Debug view, and HDRP's shader framework. From graphics debugging to profiling and optimization, this blog unpacks tips to help you customize HDRP for your project using the Custom Pass API or another local part of the package.

Before we start analyzing frames, it's important to get to know the HDRP features at hand. We recommend watching our Unite Now presentation, Achieving high-fidelity graphics for games with HDRP, the Ray tracing with Unity's High Definition Render Pipeline webinar, and the Volumetric Clouds, Lens Flare, and Light Anchor talk, which are all great guides to HDRP.

Users moving from the Built-in Render Pipeline to HDRP often find that the migration takes some adjustment time. This is because:

HDRP has a unified and physically based rendering framework, meaning that its attributes use real-world units: exposure value is used for camera light sensitivity, whereas candela is used for light intensity. Our Unite Now talk reveals how to think in a physically based way to get consistent results while lighting a scene.

There are many parameters you can control in an HDRP project, and these parameters exist in many places.
This is partly because HDRP has more integrated features, as well as deeper customization capabilities for both artists and engineers to fine-tune and optimize their work. To get familiar with these HDRP capabilities, we'll begin by looking at the Global settings.

Global settings

For the Built-in Render Pipeline, the Graphics settings cover most per-project graphics settings. There are also Player settings, which contain some general graphics settings in the context of a particular target platform, such as Windows, Linux, Mac, or Xbox. HDRP projects similarly use Graphics and Player settings, with the addition of three more sets of settings that provide access to advanced default configurations of the render pipeline.

In the Graphics settings, for instance, the Scriptable Render Pipeline (SRP) settings refer to a default HD Render Pipeline Asset. This HD Render Pipeline Asset contains settings that can be overridden at each quality level.

The HDRP Default Settings tab configures:

Default Frame settings, with default properties that can be overridden for each camera (including cameras used for Planar Reflections or Reflection Probes). Here you can decide whether cameras render transparent objects by default.

Default Volume components, which contain properties that can be overridden for each "camera position in scene." For example, you can define default post-processing effect intensities, which can be overridden to become "strong outdoors but weak indoors" using specific volumes in your scenes.

The Default Diffusion Profile Assets property, which can be overridden by a Diffusion Profile Override component in the Volume components section of the HDRP Default Settings tab.
This, in turn, can be overridden per "camera position in scene." Currently there's also a "redundant override layer" for the Diffusion Profile system, but since we're constantly looking to improve UX in HDRP, a solution for this issue is already underway.

Other properties, which are "pure global settings," cannot be overridden. Finally, some low-level settings that are less likely to require configuration are specified in the HDRP Config package. These settings are also "pure global settings." Changing them requires recompilation of the C# assembly and the HDRP shader framework, which is why they live in a different location.

Quality levels

With the Built-in Render Pipeline, you can define a number of quality levels in the Quality Settings tab. For each quality level, some graphics settings, such as anisotropic texture usage, can be specified so that fewer hardware resources are used on low-end platforms. For HDRP projects, specifically, an override HD Render Pipeline Asset can be selected for each quality level. This offers more configurability than the Built-in Render Pipeline, since the HD Render Pipeline Asset stores several parameters, such as the maximum number of directional, punctual, and area lights onscreen, the color-grading LUT size, and the light cookie atlas size, among others.

Some properties in the Quality Settings tab only apply to the Built-in Render Pipeline. In an HDRP project, these settings might disappear from their original locations and reappear elsewhere as "replacement settings." In a Built-in Render Pipeline project, for instance, the Quality Settings tab controls the Shadow Resolution property. In an HDRP project, however, the Lighting > Shadows section of the HD Render Pipeline Asset controls the resolution of shadow maps.

Camera and Frame settings

To render your scene in HDRP, you need to add Cameras, just like in the Built-in Render Pipeline.
HDRP also makes use of an extra HD Additional Camera Data component (attached to the same GameObject) to store extra per-camera parameters. Indeed, HDRP offers many more per-camera parameters for customization. There are several Physical Camera settings, and if you tick the Custom Frame Settings property of a camera, you can decide how the camera draws the frame through the Frame Settings system.

The Frame Settings system is a stack of camera property overrides. You can specify default values for Frame settings in the HDRP Default Settings tab. On top of that, each camera can override the default Frame settings. The Camera panels of the Render Pipeline Debug window help visualize the Frame Settings override stack.

Using the Camera panel

The following example demonstrates how the Camera panel of the Render Pipeline Debug window works. There is a camera called Main Camera in the Scene, and it only draws static objects. The HDRP Default Settings tab enables drawing motion vectors, whereas the Frame Settings override on Main Camera disables this function to improve overall performance. The Motion Vectors override stack displays the state of the Overridden Frame settings to the left of the Default Frame settings (see Figure 4, Highlight A).

Additionally, the Render Pipeline Debug window shows the state of the Sanitized Frame settings to the left of the Overridden Frame settings. Sanitization ensures that the Overridden Frame settings stay consistent. In the same example, Opaque Object Motion and Transparent Object Motion have not been explicitly disabled in Main Camera's Frame Settings override. But since Motion Vectors are disabled, these dependent features are also turned off by the sanitization system, as shown in Figure 4, Highlight B.

Volume system

As discussed in our Unite Now talk, HDRP supports a Volume system. Similar to the Post-processing stack in the Built-in Render Pipeline, the HDRP Volume system controls post-processing.
Even more, however, it determines the way the sky is rendered, the strength of indirect light, and some shadow settings, among other features. Simply put, the HDRP Volume system is an abstract framework that can be used to alter rendering settings as the Camera moves across the Scene.

There is a hard-coded default value for each Volume property. To see these values, use the Volume panel in the Render Pipeline Debug window (see the rightmost column in Figure 5, where the default Intensity of Lens Distortion is 0). These hard-coded default properties can be overridden by property overrides in the Volume Components section of the HDRP Default Settings tab, and those, in turn, can be overridden by Volumes in the Scene. In other words, the Camera picks up a blend of property values from the Volumes in the Scene; if there are none, it picks up the property values from the HDRP Default Settings tab; failing that, it falls back to the hard-coded default property values.

As shown in Figure 5, the Volume panel of the Render Pipeline Debug window is useful for visualizing the current Volume property override stack. It's particularly effective when debugging, as it displays the Volume properties currently in use.

Meshes and surfaces

Just like in the Built-in Render Pipeline, the geometries to be rendered are usually specified by Mesh Renderers or Skinned Mesh Renderers in the Scene. HDRP-specific data is predominantly stored in the Materials so that they can use the appropriate Renderers or Shader Graphs.

Illumination

Like in the Built-in Render Pipeline, HDRP projects have Lights, with HDRP-specific data stored for each light: HD Additional Light Data components are attached alongside the regular Light components. Keep in mind that many lighting settings come from places other than GameObjects with Light components.
Here are just a few examples:

Indirect lighting is determined by Light Probe Groups, Reflection Probes (with HD Additional Reflection Data attached), Planar Reflection Probes, and Lighting Settings. It can also be tuned by the Indirect Lighting Controller Volume component.

The Volume system determines sky lighting.

The Volume system also controls screen-space effects that act like a source of lighting or shadowing: Screen Space Reflection, Screen Space Refraction, Screen Space Global Illumination, Screen Space Ambient Occlusion, and Contact Shadows.

Subsurface Scattering also simulates "surface-to-surface lighting." Most Subsurface Scattering properties are specified by Diffusion Profiles, which are, in turn, determined by Materials. Meanwhile, you can leverage the Volume system to select the Diffusion Profile Override.

Now that we've taken a tour of the HDRP UX, let's turn to some less familiar graphics properties for your next HDRP project. Figure 7 illustrates a possible approach, starting with general settings at the top and override settings at the bottom. As you can see, the scope widens as we go from top to bottom. HDRP's graphics settings must adapt to the following:

The quality level of the program, such as the platform the program runs on

The currently active Camera

The location of the Camera in the Scene

The Materials of the rendered geometries

The Lights affecting the rendered geometries

Note that HDRP settings are particularly attuned to these settings' dimensions.

Conflicts between the settings' dimensions

There are often conflicts between the following settings' dimensions:

The quality level and the currently active Camera might try to control the same graphics parameter.
For example, if you want to reduce the Subsurface Scattering sampling count on low-end devices, you might also want to reduce the Subsurface Scattering sampling count for cameras that render to Render Textures for picture-in-picture effects.

The quality level and the Camera location in the Scene might try to control the same graphics parameter. If you want to reduce the quality of post-processing effects on platforms with limited GPU power, be aware that some Scene locations already use significant GPU time for complex lighting; at those locations, you should lower the quality of the post-processing effects further to recover some performance budget.

The quality level and the Lights in the Scene might try to control the same graphics parameter. If you want to reduce shadow map resolution on platforms with limited RAM, keep in mind that there are likely many small shadow-casting spotlights in the Scene that need even lower-resolution shadow maps.

To address these conflicts, the HD Render Pipeline Asset supports tiered settings. Instead of indicating just one value for a property, a number of values can be attributed to a number of tiers: Low, Medium, High, and in some cases, an Ultra tier. For cameras that render the picture-in-picture effect, you can specify a tier for both the volumes that control the post-processing effects and the spotlights that request the shadow maps. HDRP then looks up the property from the appropriate tier in the active HD Render Pipeline Asset, and it's this property that will be used. Of course, it's also possible for cameras, volumes, and lights to ignore the tiered settings system and directly specify their desired behavior.

Three settings' dimensions overlapping

Let's look at another example where the settings' dimensions overlap. Imagine that there are some Mesh Renderers in the Scene using a Shader Graph with complex vertex animations.
It might be too expensive to perform vertex animation on low-end devices. There's also the extra camera rendering to a Render Texture to consider for picture-in-picture effects, and you don't need that extra camera to render any vertex animation. In this case, three settings' dimensions overlap:

The Materials of geometries in the Scene

The quality level of the program

The Cameras in the Scene

To address cases like this, there is a special Material Quality keyword available in Shader Graph. Unlike regular Shader Graph keywords, which are controlled by users per Material, this is a global keyword set up internally by HDRP. In the HD Render Pipeline Asset, you can control which Material Quality Levels are available, as well as the default Material Quality Level. For each camera, you can override the default Frame settings and specify a Material Quality Level, overriding the active HD Render Pipeline Asset.

HDRP takes a systematic approach to handling settings from artists. After all, maintaining a great UX for artists is the key to inspiring high-quality content.

When starting an HDRP project with a simple setup, the project might cost a surprising amount of performance. This is because HDRP enables many features by default. Best practice is to control the HDRP settings so that you only pay for what you intend to use. To represent a minimalistic rendering workload, let's create a scene of 225 cubes using the default material, illuminated by a spotlight, a point light, a directional light, and ambient lighting.

How does this simple setup perform? Let's build a standalone player with a resolution of 2880x1620, on the IL2CPP scripting backend, with VSync turned off.
Running the player on a Windows machine with an Intel i9-10980HK CPU and NVIDIA RTX 2080 GPU, the Profiler shows that the mean frame time is 4.6 ms. Looking at the Timeline view of the Profiler, a significant amount of time is spent on the DXGI.WaitOnSwapChain marker, indicating that the player is GPU bound.

Taking a GPU capture using Nsight Graphics shows that this occurs because HDRP has several features active by default:

- There are many extra visual effects active, such as SSAO, Subsurface Scattering, Dynamic Exposure, Motion Blur, and Bloom.
- There are multiple Color Pyramid passes and an Upsample Low-res Transparent pass in action, all of which support complex transparent rendering.

You can control the HDRP Asset, override the Camera’s Frame Settings, and add Volume overrides so that only the absolute minimum features are enabled. In other words:

- Decals, Low-res Transparency, Transparent Backface, Depth Prepass, Depth Postpass, SSAO, SSR, Contact Shadows, Volumetrics, Subsurface Scattering, and Distortions are all disabled in the HDRP Asset.
- Refraction, Post-Process, After Post-Process, Transmission, Reflection Probe, Planar Reflection Probe, and Big Tile Prepass are all disabled in the Camera’s Frame Settings.
- A Volume overrides the Exposure mode to Fixed Exposure.

After these modifications, the mean frame time drops to just 2.45 ms, which is significant when compared to rendering the same scene in the Built-in Render Pipeline. In practice, you do not need to turn off so many features for the Main Camera of an actual game, although some extra cameras do benefit from this treatment.

If you’re interested in even cheaper cameras, the HDRP UI Camera Stacking package in 2021.2 allows you to stack multiple cameras rendering UI at only a fraction of the cost of a standard camera. This example not only highlights the extent of control you have over HDRP’s performance characteristics, but also the importance of tuning your HDRP project’s setup.

It begins with light: The definitive guide to the High Definition Render Pipeline

The HDRP in Unity 2020 LTS brings you improved tools for creating evocative, high-end lighting in your games. Get this new, in-depth guide to learn how to harness the power of physically based lighting in HDRP.

Get the guide
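The Frame Settings portion of the trimming described above can also be scripted for the extra cameras that need it. This is a minimal sketch, assuming recent HDRP API names; the exact FrameSettingsField members (e.g. PlanarProbe, BigTilePrepass) may differ across HDRP versions:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Sketch: disabling a set of per-camera features by overriding the
// camera's Frame Settings, mirroring the list of features turned off
// in the measurement above. (Enum member names are assumptions.)
public class MinimalCamera : MonoBehaviour
{
    static readonly FrameSettingsField[] disabledFields =
    {
        FrameSettingsField.Refraction,
        FrameSettingsField.Postprocess,
        FrameSettingsField.AfterPostprocess,
        FrameSettingsField.Transmission,
        FrameSettingsField.ReflectionProbe,
        FrameSettingsField.PlanarProbe,
        FrameSettingsField.BigTilePrepass,
    };

    void Start()
    {
        var camData = GetComponent<HDAdditionalCameraData>();
        camData.customRenderingSettings = true;

        foreach (var field in disabledFields)
        {
            // Mark the field as overridden, then turn the feature off.
            camData.renderingPathCustomFrameSettingsOverrideMask
                   .mask[(uint)field] = true;
            camData.renderingPathCustomFrameSettings.SetEnabled(field, false);
        }
    }
}
```

The HDRP Asset and Volume changes from the same experiment are asset and Scene edits rather than per-camera code, so they are best made in the Editor.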

>access_file_
1260|blog.unity.com

Entertainment innovation with Unity at Carnegie Mellon Thailand

Carnegie Mellon Thailand (CMKL) was established in 2017 as a collaboration between Carnegie Mellon University (CMU) and Unity Academic Alliance member institution King Mongkut's Institute of Technology Ladkrabang (KMITL). CMKL University provides cutting-edge education and brings world-class partnerships into a local context, making technology more accessible and driving innovation for the benefit of Thailand, the Southeast Asian region, and beyond. Through its various programs, CMKL tackles the challenges needed to power future development.

The Entertainment Innovation Center (EIC) has introduced a unique interdisciplinary MS in Technology and Creative Innovation (MSTCI), formerly known as the MS in Entertainment Innovation. The program places a strong emphasis on professional practice, bringing together the brightest scholars and experts from the fields of art, design, engineering, technology, business, and management. The curriculum is wide-ranging, including everything from improvisational acting to building virtual worlds with Unity and other extended reality (XR) technologies. Through the program, students at the EIC learn to collaborate, problem-solve, communicate, and lead.

Students typically choose a specific area of focus within the MSTCI program and spend four immersive semesters learning the vocabulary, values, and working patterns of other disciplines as well. This emphasis on acknowledging and understanding the creativity of others fuels innovation in turn. The EIC aims for a student body with the following composition:

- 40% students with technical backgrounds
- 40% students with creative backgrounds
- 20% students with business/management backgrounds

Through a rigorous curriculum that includes training in creative development and technical skills, students from diverse backgrounds work together in capstone groups to devise prototypes and working proofs-of-concept that require creativity and collaboration across disciplines.
Final deliverables can be mobile apps, transformational games, performances, exhibitions, products, XR experiences, or artifacts showcased at the annual EIC Playground.

In the MSTCI program, led by Program Director Natasha Patamapongs, students have the opportunity to create with Unity in two courses: Building Virtual Worlds, taught by creative technologist and tech entrepreneur Kamin Phakdurong, and Building Virtual Worlds II, taught by Unity Certified Instructor and XR specialist Jeremy Luisier. Both are modeled after Entertainment Technology Center cofounder Randy Pausch’s groundbreaking course of the same name, and challenge students to work quickly, creatively, and collaboratively to design functional prototypes with Unity and other software. Building Virtual Worlds students also explore productions and projects in various other entertainment media, their work culminating in public festivals with hundreds of spectators – and an incredible sense of accomplishment. In fact, many Building Virtual Worlds ideas go on to become full-time research projects, student spin-offs, and commercial successes.

Let’s take a look at what students are currently working on in Unity!

Natcha Lohasawad’s Virtual Reality Auditorium for public and improvisational speech practice was created using the Unity XR Interaction Toolkit. Natcha, a recent Faculty of Journalism and Mass Media Studies graduate, seeks to help users face their fears of public speaking through practice in a virtual environment. In the Unity VR application, users can get a true feel for transitioning from backstage to the spotlight on center stage, practice holding a microphone while speaking, and get comfortable with using hand gestures. Need a topic to practice? Natcha’s Virtual Reality Auditorium can help by providing useful prompts. The app cleverly uses color psychology in the lighting scheme, giving a unique feel to each area of the environment.
Now, we’re eagerly awaiting her next application to help us deal with our debilitating fear of clowns.

Aratchporn Chaladol (Mint) created the ThamLuang Cave experiential and interactive VR documentary to help users understand what it was like when, on June 23, 2018, twelve boys went exploring in Thailand’s Chiang Rai province with their soccer coach and ended up trapped deep inside a cave. The rescue mission was extraordinary, and Mint, a Thairath TV-Channel 32 news anchor and Faculty of Arts graduate, takes users through the dangers of the cave – with all the mud and water that Thailand’s monsoon season brings. Mint began with resources like Unity Learn’s Create with VR course and expanded upon the lessons to create a truly immersive version of this powerful story, filled with suspense, heroism, and loss. She has completed an impactful prototype of the first main scene, “The Entrance: Chamber 3,” and we look forward to seeing the upcoming Unity scenes (“Choke Point” and “The Kids”) soon.

Jakarin Sirikulthorn (X) created his Recycle VR game as a fun way to educate children about recycling using Unity and virtual reality. The game employs colorful objects that demonstrate X’s 3D modeling and texturing abilities, as well as some carefully chosen items from the Unity Asset Store. In the game, children attempt to toss items into the correct bin to receive points. Successes and failures are met with quirky and engaging sound effects that round out the experience. X, with a background in industrial product design and investment informatics, has just begun his PhD journey. As he delves into work on smart farming and automated drone technology, we’re eager to see the exciting Unity XR applications he’ll come up with next.

Ludwik Bacmaga (Ludi) is a Unity developer with CMKL and is working on learning management system gamification. For his project, Ludi created an escape environment with puzzles set in a horror-themed virtual world.
The Unity application gives users the sense of a large environment within a small space. He jokes that “there are no bugs, just features!” – including an innovative interaction system where puzzle panels come to the user, keeping the ambience of the confined space at the forefront of the experience. Ludi presents an intriguing duality, combining a sense of classic horror (with a slow reveal) and a modern stylized aesthetic. With his programming skills and XR expertise, we’ll be waiting in suspense to see where Ludi takes this project. (Just no clowns, please.)

Gunyootapong Nopakun (Barge) got his start making hip-hop music as a teenager and developed his passion into a career as a TV host and radio DJ. In his Unity VR project, Barge funneled his media creation skills into an immersive experience in 1990s nostalgia. In the virtual room, you can listen to popular songs and watch a curated selection of movie clips from the decade. The room truly evokes the ’90s sensibility, from the toys one can find around the space to the posters adorning the walls. With Barge’s many talents, we’re sure he’ll continue to be a master of the past and future, and maybe our next Unity Certified Instructor.

Witchuporn Jingjit (Volt) is an energetic violin virtuoso who started playing the instrument at the age of five and began performing a few years later. Volt brought his love of music to his Unity VR project, creating an experience that mimics a traditional jam session. In his project, virtual instruments allow users to play harmonies that Volt, a Berklee College of Music graduate, created in Logic Pro X and imported into Unity. With the harmonies in place, users can improvise solos and “jam” in VR. The future looks very bright for this friendly young violinist from the Land of Smiles – and for the development of his Unity XR projects.

Dhanadhat Trairatwongse (Marwin) graduated with an interior design degree and worked at a lighting design company before becoming a TV producer.
For his Unity VR project, Marwin decided to bring his lighting experience, production skills, and lifelong affinity for horror movies and games together. He wanted to move beyond a game that makes you jump and toward an experience that makes you doubt your reality. His virtual world is designed to give the user a sense of familiarity while exploring a dark, eerie mansion. The story unravels as you encounter the chilling spectres that haunt it, with atmospheric sounds helping to flesh out the supernatural experience. Wait, was the desk on that side of the room the last time you looked? Are you sure? Marwin isn’t telling.

Vich Sanardharn (Ray-O) is a creative director, editor, music lover, instructor, scriptwriter, and life enthusiast with a passion for experiential design and helping others. For his Unity VR project, Ray-O was interested in creating an experience that would leave an important mark on people’s lives. The application takes users to the Land of Nowhere – aptly named due to the desolate space – where the user does the creating, engaging in self-reflection on values and interpersonal relationships. The experience is rooted in sand tray therapy, where one may construct their own microcosm using miniature toys and colored sand. The created scene acts as a reflection of the user’s life and allows them an opportunity to resolve conflicts, remove obstacles, and gain self-acceptance. We’re fascinated by the concept and where Ray-O plans to take it next.

Are you interested in the MSTCI program at CMKL-EIC? Learn more about the curriculum, scholarships, frequently asked questions, and more.

Curious about becoming a Unity Academic Alliance member institution like KMITL? Learn more about the program and its benefits.

Want to become a Unity Certified Instructor? Learn how you can make an impact on Unity creators.

>access_file_