// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1690 transmissions indexed — page 65 of 85

[ 2021 ]

20 entries
1282|blog.unity.com

Delivering personalized marketing experiences for consumer electronics with Unity Forma

By relying on Unity’s product portfolio for marketing professionals, Onanoff, a global leader in safe, kid-friendly audio equipment, levelled up their digital marketing and customer experience. We partnered Onanoff with Visionaries 777, who leveraged the existing computer-aided design (CAD) data to create Onanoff’s marketing content pipeline – proving that compelling personalization technologies are accessible to all companies.

Onanoff are reaping the rewards of fully embracing real-time 3D technology with Unity Forma and Forma Render. They can now offer 400% more design capacity for customer personalization, and their marketing material gets to market 2.5 months earlier. Onanoff challenges the common belief that only big industry players can afford cutting-edge real-time 3D customization technologies. In this blog, we step through the process that Onanoff followed to get those results. Read on to find out more.

“Unity Forma gives us the capabilities of a big company with the agility of a small one.” – Pétur Hannes Ólafsson, CEO, Onanoff

Check out their webinar to hear the story directly from Onanoff and Visionaries 777.

As a small company with fewer than 30 employees, Onanoff needs to be resourceful to compete successfully with bigger companies in their industry. They saw their competitors offering product customization, and they wanted to find a way to play the same game. To offer a truly customized service to their buyers, Onanoff needed to review their production processes. From their research, they understood that without the proper tools, building a CAD to real-time 3D workflow is complex, disruptive, and costly.

Bearing that in mind, and aiming to gain a competitive edge, Onanoff decided to start with BuddyPhones, their line of easy-to-use, kid-safe headphones. They set three goals for their real-time 3D adoption:

- Speed up the design and development process.
- Offer multiple customer personalization options to meet the market need.
- Reduce the time to get their product to market.

Onanoff wanted to offer their customers a truly interactive online shopping experience with a dynamic product configurator, starting with their BuddyPhones product.

For Onanoff, the design process relied on a computer graphics (CG) pipeline, which presents some major drawbacks in the race to get a product to market. A major concern is the time required to render assets, which can range from 15 to more than 90 minutes, depending on the complexity and resolution required. Waiting for assets to render after every design iteration is essentially non-productive “dead time.” Only once rendered can each design iteration be reviewed by the various stakeholders. Additionally, the CG pipeline requires extensive software expertise and high processing power. All of these factors amount to a CG pipeline that is inflexible, time-consuming, and costly.

Onanoff already had their CAD models, so Visionaries 777 used the Pixyz Plugin to ingest that model data directly. Pixyz Plugin accepts nearly 40 3D and CAD formats, so there was no need for Onanoff to rework their CAD model, which was built in SolidWorks. With automatic tessellation and UVs, getting the Onanoff model data optimized and ready to build their configurator was straightforward.

Onanoff’s concern was that the CG pipeline image fidelity wouldn’t accurately portray the materials and fabrics used in their products. Everyone needs to see a true image of the product – not only customers, but also internal design teams and key stakeholders in the development process. They didn’t want to add hours of rendering time to their production process, and they wanted to keep prototyping costs down.

Visionaries 777 knew the best way to achieve fidelity was with Unity ArtEngine, a tool for creating ultrarealistic materials using AI-assisted artistry. By scanning samples of the materials into Unity ArtEngine, Visionaries 777 easily created realistic textures and color variations. These can be imported directly into Unity Forma, ready to use in the configurator build.

Visionaries 777 recognized that additional work might be required to ensure the 2D textures of the materials were mapped realistically to the 3D images (known as UV mapping). This is not unusual if the product has many overlapping geometry pieces in the 3D model, or if the data is not available (for example, for soft parts like cushions that do not exist in the CAD models). Third-party tools are sometimes helpful in creating realistic soft material textures. This is where Unity’s integration with third-party digital content creation (DCC) solutions, such as Autodesk Maya or 3ds Max, comes into play. Visionaries 777 finessed the 3D object UV mapping to make the textures and materials look their best using a third-party DCC tool, and then brought the materials back into Unity.

By processing the material textures through ArtEngine and Unity Pro to create custom physically based rendering (PBR) materials – a quick and easy operation – Visionaries 777 were then able to apply them to the configurator model. This ensured that the soft materials were accurately represented in Onanoff’s configurator model.

The educational technology market is growing rapidly, with many educational programs now requiring a 1:1 student-to-smart-device ratio. Onanoff allows their customers to personalize their products with their own brand colors and logos. They don’t want their configurator technology to determine (or limit) the level of personalization they offer. As such, Onanoff’s configurator had to serve the needs of hundreds of different customers. With a render for a simple personalization taking up to 60 minutes, using a traditional CG pipeline simply wasn’t efficient.

For Visionaries 777, the only choice for the Onanoff configurator was Unity Forma, because it allows them to:

- Rapidly import existing CAD model data.
- Showcase products in realistic visual quality.
- Offer endless personalization options.
- Build easily and quickly, without the need for developers.

In a matter of days, Visionaries 777 delivered a fully operational product configurator – a process that would previously have taken several weeks.

With teams in six different countries, Onanoff needed to make sure that everyone involved in product development was able to track progress and contribute to the process. Factors such as remote working, time zone differences, and increased IT security must not compromise product development or delay the launch. Relying on traditional prototyping and a CG pipeline wasn’t going to meet their requirements.

By deploying directly from Unity Forma to Furioos, Unity’s cloud streaming solution, Visionaries 777 and Onanoff were able to share an interactive 3D product model in real time. The solution seamlessly addressed the concerns of sharing across time zones and technologies. Every team member can access the live configurator from their web browser – no software installation or high-powered graphics card required. This enables true collaboration, with teams able to view design iterations as they happen in the Furioos stream.

In a competitive market, the push to get your product to market as fast as possible is ever present. But a product launch requires supporting marketing materials such as packaging, digital advertising, printed collateral, and so on. Often created on tight timelines even before the first real-life samples are ready, marketing materials need to represent the new product accurately.

Onanoff needed to get their product to market quickly, and Forma Render provided the ideal solution. Forma Render now ships with Unity Forma, and it allows users to create high-quality 2D and 360° images and video directly from the real-time 3D model already built in Unity Forma. Forma Render is:

- A virtual photo and video studio for producing 2D images and video content
- An image-on-demand render engine that can respond to requests from websites or applications and deliver personalized content
- A bulk rendering tool for mass content creation

This means Onanoff can create images of every possible product variation, from any virtual camera angle, and at multiple ratios and resolutions – eliminating the need to wait for prototypes (or time-consuming CG renders) and accelerating their marketing content production. And thanks to the accuracy of the Unity software, the Visionaries 777 team noted a 90% reduction in the need to carry out 3D asset retouching.

Frantz Lasorne, cofounder of Visionaries 777, said, “Unity Forma enables us to easily create high-quality and scalable interactive visualization in order to deliver compelling solutions to our clients.”

To get their product to market in a competitive arena, Onanoff knew they would need to make some big changes to their product development and marketing workflows. Working in partnership with Visionaries 777, they’ve addressed those challenges. By leveraging the usability and flexibility of Unity’s software for marketing experiences, Onanoff can now offer their customers a true personalization experience.

And those goals they set? See how they did:

- Speed up the design and development process: 90% less retouching of 3D assets
- Offer multiple customer personalization options to meet the market need: 400% more design capacity in the direct-to-customer model
- Reduce the time to get their product to market: 2.5 months quicker to market with marketing materials

Get the detailed story by checking out the webinar with Onanoff and Visionaries 777. Unity’s marketing solutions provide brands with the tools to build marketing content that competes with the big players. Find out more about Unity Forma, or reach out to our Unity experts to try it for free.

>access_file_
1283|blog.unity.com

New possibilities with VFX Graph in 2020 LTS and beyond

In 2020 LTS and 2021.1, VFX Graph’s updates have primarily focused on stabilization, performance optimization, better integration with gameplay using the new CPU event output, and more possibilities to spawn particles from meshes.

As we look forward to 2021.2 and beyond, our main goal is to push both platform and content reach, with refined support for the Universal Render Pipeline (URP), heightened compatibility with 2D, and extended platform reach (compute-capable mobile, Oculus Quest, Switch, etc.). We also want to incorporate more tools for building and customizing your VFX – particularly with the advanced integration of Shader Graph in VFX Graph. This provides direct access to all URP and High Definition Render Pipeline (HDRP) master nodes, like hair and fabric, as well as an in-Editor Signed Distance Field baker tool, to save you time otherwise spent going back and forth with third-party tools. We’re even adding support for graphics buffers to develop advanced simulations, like dynamic hair, without having to leave the GPU.

To accelerate project development, we’ve added a new sample to the sample library and updated the library to 2020 LTS. We’ve also upgraded the Spaceship demo, in light of its second anniversary. From visual improvements to better project compatibility with 2020 LTS, Mac, and Linux support, you can now choose between various quality settings to run the demo on a wide range of desktops.

Before you get started with VFX Graph, we recommend watching the following short videos for a quick overview of the tool:

- Creating fire, smoke, and mist effects with VFX Graph in Unity
- Rendering particles with Visual Effect Graph in Unity
- Multilayered effects with Visual Effect Graph in Unity

Once you’re ready to go, you can experiment with the HDRP Scene template in the Unity Hub as a starting point for your next project. This template contains a variety of lighting conditions to test your own particle systems, in addition to a few particle systems that have already been integrated, such as dust, butterflies, and falling leaves in the glass cage.

If you want to create an entirely new effect, you don’t need to start from scratch. Use the nodes in the system category to harness pre-configured setups for the main use cases. For complete systems, you can download the VFX Graph samples, which include effects like the bonfire, butterflies, and magic book, in addition to brand new effects that will be revealed later in this post. Once you start using nodes, you can access contextual help with the new tooltips and error messages in 2020 LTS, or get further guidance from the updated documentation.

For some in-depth content, check out this list of tutorials and VFX breakdowns created by our accomplished teams of artists working on various productions:

- Thomas’s in-depth tutorials (Thomas is our senior in-house VFX artist)
- Hardspace: Shipbreaker Tech Talk: Explosions with VFX Graph | Unite Now 2020
- Making snow with VFX Graph | Unite Now 2020
- Real-time VFX workflows in The Heretic – Unite Copenhagen 2019
- VFX Graph tutorial – Magical Library

If you’re looking for even more, take a look at these time-saving tools to ramp up your project:

- VFX Graph’s package (additional content)
- VFX Toolbox

The new Meteorite sample showcases a complete scene with interactions between diverse elements triggered by the meteorite’s impact. This sample was made using a new feature in 2020 LTS called VFX Output Event Handlers, which chains events and additional effects with a timeline. VFX Output Event Handlers are used to create the camera shake, the impulse of the planks, and the light animation.

In this example, we use the Output Event to trigger a camera shake with a script component in the Inspector for the VFX. The camera shake gets velocity information from the Spawn Event Velocity node and is spawned as a single burst, 1.15 seconds after the main FX is triggered.
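In script form, that hookup looks roughly like the minimal sketch below, which subscribes to a VisualEffect’s CPU output events. The event name “OnImpact” and the shake placeholder are illustrative; the shipped samples wrap this pattern in a VFXOutputEventAbstractHandler base class rather than subscribing directly.

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Minimal sketch: react on the CPU when a VFX Graph output event fires.
// The "OnImpact" event and "position" attribute names are hypothetical.
[RequireComponent(typeof(VisualEffect))]
public class MeteoriteImpactShake : MonoBehaviour
{
    static readonly int k_OnImpact = Shader.PropertyToID("OnImpact");
    static readonly int k_Position = Shader.PropertyToID("position");

    VisualEffect m_Vfx;

    void OnEnable()
    {
        m_Vfx = GetComponent<VisualEffect>();
        m_Vfx.outputEventReceived += OnOutputEvent;
    }

    void OnDisable() => m_Vfx.outputEventReceived -= OnOutputEvent;

    void OnOutputEvent(VFXOutputEventArgs args)
    {
        if (args.nameId != k_OnImpact) return;
        // Read attributes written by the graph (e.g., the spawn position)
        // and drive gameplay-side reactions such as a camera shake.
        Vector3 position = args.eventAttribute.GetVector3(k_Position);
        Debug.Log($"Impact at {position} – trigger camera shake here");
    }
}
```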
The reactive grass, which comprises VFX particle strips, interacts with another effect that is rendered into a render texture to trigger the blast and burning. This buffer VFX reacts because of a VFX Output Event Handler on the main meteorite. A camera is set up to record the buffer VFX (which is not rendered by the main camera) and saves it to a render texture used in the grass to implement wind blasts, dissolve shader effects, and flame effects, for example. This is how information flows from one effect to another and keeps everything in the same position.

Timeline is another great tool for VFX artists. It can be used to trigger many effects and adjust their timing as needed. Additional effects, such as the light flash, flying birds, and falling leaves, were initially linked to a button canvas with the main VFX, but were later changed to be triggered from Timeline in the demo scene. The trees were also made using VFX Graph mesh outputs, subgraphs, and Shader Graph, which enable you to randomize many of their parameters.

We’ve come a long way since the initial release of our first Spaceship demo. Over time, this demo has been polished, optimized, and upgraded to every major version of Unity, and is now nearing the end of its upgrade lifecycle. Download the demo here, then watch this video for a brief introduction to the project. This example demonstrates the importance of integrating and optimizing different features together in a real production. Please note that we made some slight changes and improvements to the demo’s content in 2020.3 to reflect many of VFX Graph and HDRP’s latest features.

Looking into space

The more we thought about it, the more we realized the importance of actually seeing your intergalactic surroundings while out exploring in space (why else would you be in a spaceship?). That’s why we finally implemented new details in the demo, so you can get a glimpse of outer space from the spacecraft – planets and all. To emphasize the turbulent trajectory of the spaceship, we also added a secondary shake to the environment cubemap, with a different noise, on top of the camera shake.

Sparkle, sound, and light sync done right

While we had previously altered the way we handled sparkles by using a script to synchronize its sound effects and light animations, we decided to make it even more accessible to VFX artists. We updated the system to use the Output Event Handlers as part of the additional VFX Graph samples (new in 2020.3). We changed the behavior of the VFX so that the random spawn occurs directly in VFX Graph, and configured the duration of the flash at that time. Then, with support from the Prefab Spawn and Play Audio helpers, we were able to perform the random spawn entirely in VFX Graph and synchronize its light and sound effects with the Spawn Context. By using the Flash Lifetime attribute as a delay, the VFX Output Event Prefab spawn was also able to disable the Prefab after that time span.

Quality settings and specific content

We implemented a Quality Settings option to run the Spaceship demo on a wide range of hardware. This allows for more detailed post-processing, volumetrics, and increased lights during the walkthrough in an Ultra Quality mode. We even added a low-quality option for low-end gaming hardware – down to a GTX 760 – that disables the most expensive options.

We aim to release the final Spaceship demo later this year, with a project upgrade to 2021.2 that brings accelerated performance, NVIDIA DLSS and AMD FSR support, and many more exciting, unannounced features – so be sure to stay tuned.

We’ve added many new features to the 2020 LTS version, leading up to the 2021.1 development cycle, which centers on bug fixes that benefit both versions. Here are a few highlights.

Import and iterate fast on flipbooks

The improvements made to our Image Sequencer, to scale with additions to the Texture Import, are also handled by VFX Graph in 2021.1. You no longer need to set the rows and columns manually when changing flipbook textures in VFX Graph, since we added new options to import flipbooks as texture arrays, where the row and column values are stored directly in the importer and handled automatically by the Image Sequencer.

With the new Skinned Mesh Sampling feature in 2021.1, you can create flames and trails, while dissolving, morphing, and implementing many other effects on both characters and objects. To explore this feature, use the Position (Skinned Mesh) operator in the Initialize, Update, or Output context – depending on the effect you want to achieve. Next, set up an exposed Skinned Mesh Renderer property with a transform in the Blackboard. After linking your Skinned Mesh to the position block in your desired mode, you need to get the transform information of your Skinned Mesh in the Scene in order to position the effect in the right place. Leverage the VFX Game Object Inspector to do this. With support from the Property Binder, you can locate the component needed to bind the hierarchy of your skeleton.

Here is another example of Skinned Mesh Sampling in the Update context. We look forward to seeing the amazing content that you’ll create using Skinned Mesh in VFX Graph.

Optimized performance using Mesh LOD

Special effects that use mesh outputs can now be optimized through the implementation of a level of detail (LOD) system, so you can manually specify simpler meshes for distant particles. When selecting a mesh output in the Inspector, you can access a new Mesh Count property that allows you to specify up to four meshes for your output. If you enable the LOD checkbox under it, your particles will select meshes based on the percentage of the screen that they occupy.

You can modify the LOD values field in your output to designate the minimum percentage of the screen that each of your meshes must occupy to be visible. For example, a value of 10 means that this particular mesh will only be visible if it occupies at least 10% of the screen. To improve performance while maintaining the visual quality of your effects, you can select simpler meshes and smaller LOD values for particles that are further away. The LOD values can also be adjusted at the same time, for all meshes, by modifying the Radius Scale value.

As shown in the planetary ring example below, implementing LODs for mesh particles can improve performance from nearly six to more than 60 frames per second (fps), without any noticeable visual impact.

In the upcoming versions, starting with 2021.2 beta, we want to push both platform and content reach further through:

- Improved URP support
- Compatibility with the 2D Renderer
- Extended platform reach (Oculus Quest, compute-capable mobile, etc.)

We also want to provide more tools for you to build and customize your VFX, such as:

- Advanced integration of Shader Graph in VFX Graph for access to all URP and HDRP master nodes, like hair and fabric
- An in-Editor Signed Distance Field baker tool to save time spent going back and forth with third-party providers when changing source assets
- Bounds and Capacity helpers to improve the culling of effects

Similarly, there’ll be extra support for graphics buffers, so you can leverage more advanced simulations without leaving the GPU during development. If you want to experiment with these updates, the majority of them are already available for you to test and provide feedback on as part of the 2021.2 beta (but please keep in mind that they’re still evolving). For more information on what’s to come in future releases, you can always visit our Graphics Product Roadmap to vote for specific features, inform us of your needs, and submit new ideas or requests.

The following is intended for informational purposes only, and may not be incorporated into any contract. No purchasing decisions should be made based on this material. Unity is not committing to deliver any functionality, features, or code. The development, timing, and release of all products, functionality, and features are at the sole discretion of Unity, and are subject to change.

>access_file_
1284|blog.unity.com

How to build the ultimate cross promotion strategy with Playtika and Supersonic

Mishka Katkoff from Deconstructor of Fun sat down with Yuval Yosefi, Media Department Lead at Playtika, and Igor Bereslavski, Director of Growth at Supersonic, to discuss how to grow titles across your portfolio and retain users in your games with cross promotion. Read on for a summary of the webinar or watch it here: https://www.youtube.com/watch?v=899h9gDP5jM

Running cross promotion campaigns is an essential strategy for growing your business and keeping users within your portfolio – after all, you want your most valuable users’ next game to be yours, not your competitors’. Considering you know your players better than anyone, you can recommend the right game to the right users, keeping them in your ecosystem and increasing DAU and overall LTV.

That said, there are many misconceptions when it comes to cross promotion. Mishka from Deconstructor of Fun asked the panelists which misconceptions they see most in the industry – the ones that cause developers to begin this strategy with the wrong expectations. Yuval from Playtika said there is an expectation that players will behave differently when crossing from one game to another. In reality, players’ habits are generally consistent within the same game genre, so it’s important to set your goals accordingly.

Igor from Supersonic brought up that hyper-casual studios, specifically, are often cautious of doing too much cross promotion because they think it sacrifices their ad revenue – because instead of selling a user to another developer outside of their portfolio and making money, they are cross promoting inside their own portfolio. Igor said that in reality, if you do it the right way – set the right KPIs, the right margins, the right tools – you can be very profitable and meet your targets even with a high share of cross promotion.

Yuval mentioned that adding cross promotion to their casual games actually increased the overall competitiveness of the whole ad stack and resulted in an incremental revenue increase. Once you get your studio on board with cross promotion and set expectations at the correct level, it’s important to also set up your cross promotion strategy effectively.

Setting up your cross promotion strategy

Next, Mishka asked about setting up your cross promotion strategy. In general, there are a few ways to do this, according to Yuval and Igor.

First, you can use native placements, which can be easy and inexpensive to apply since they are built directly into your app; however, engagement for these ads is very low, and optimization capabilities are limited since there is usually no data collection around native ads.

Another option is advertising through a mediation platform, which allows you to utilize actual ad space rather than just native placements, but it’s still limited since it doesn’t have dynamic suppression, and it requires CPM bids, which is inefficient for ad-based advertisers who are used to paying by CPI.

Lastly, you can run cross promotion through ad networks and pay by CPI, which is more efficient and allows you to optimize your strategy. But most networks don’t differentiate between your normal ads and cross promotion ads, which means your cross promotion data is still limited, and you pay the ad networks’ regular fees to advertise on your own titles.

Then there’s ironSource’s cross promotion solution, which both Playtika and Supersonic use. This solution provides the operational and data science benefits of running through an ad network without having to pay a premium on your own supply, by running all cross promotion in a stand-alone dedicated network.

Since Playtika started using the solution, Yuval explained, they “have full control over the demand side, how much we’re paying, targeting, installs, conversions, and we also have full control on the supply side.” ironSource’s tool also offers segmentation and A/B testing, which allows for an incredibly dynamic approach to optimization, according to Yuval.

With the solution’s transparency, you can get more granular about how you reach the right users and set yourself up to boost performance. But, with IAP and hyper-casual games approaching cross promotion differently, it’s important to understand what’s best for your studio. Here are some key topics Mishka asked about that are important to consider when beginning cross promotion:

1. Breaking down the first steps to segmentation

At the end of the day, two games from the same genre are still different games and provide different user experiences and ARPU. For IAP games, Yuval said that it’s key to segment according to the audience as opposed to the genre. Yuval gave an example of segmenting an audience based on their engagement with a certain feature or concept in one game, where that feature or concept is prominent in another of their titles. The most important segmentation, Yuval explained, is player churn – specifically, pushing cross promotion ads immediately before the player is likely to stop playing the game.

On the other side, Igor said segmentation in hyper-casual is not as critical, since the user is already likely to see multiple ads in their first session. For hyper-casual games with shorter lifetime value and broader audiences, user segmentation for cross promotion is not necessarily an effective strategy. That’s why cross promotion is done more broadly across the hyper-casual genre.

2. Deciding between cross promotion and ad monetization

Moving on, it’s also important to know how to strike a balance between cross promotion and ad monetization. For IAP games, it all comes down to whether you count cross promotion as an ad revenue generator. According to Yuval, if you’re counting cross promotion as ad revenue, then you treat it like another network. If you don’t count cross promotion as an ad revenue generator, it’s best to limit cross promotion instances to low positions in the waterfall. Yuval noted that with ironSource’s solution, you can track the eCPM, revenue, and impressions specifically for your cross promotion titles and compare them with the opportunity cost of having shown a non-cross promotion ad.

For hyper-casual games, a separate cross promotion network can actually boost your LTV and ARPU, according to Igor. That’s because good cross promotion conversion produces higher eCPMs on the monetization side.

3. Setting the correct aggression level

With the difference between cross promotion and ad monetization understood, Mishka asked Yuval and Igor about aggression levels. Both agreed that the key to deciding when to show an ad is rooted in A/B testing the placements, frequency, and types of videos, and seeing what players respond to best.

In IAP games, Yuval said you can start small with the segments you’re more comfortable experimenting with and push until there’s no longer added value, then make that your benchmark. You can also reward users in IAP games for cross promotion conversion, which Playtika does by providing larger in-game welcome bonuses from the game economy to converted users.

Ready to start cross promotion? Learn more about ironSource’s cross promotion solution here.

>access_file_
1285|blog.unity.com

Simulate robots with more realism: What’s new in physics for Unity 2021.2 beta

Unity 2021.2 beta contains usability improvements to the physics features that enable new use cases, while providing easier authoring and faster debugging in the field of robotics.

The ArticulationBody component is at the core of our robotics simulation, because it enables simulating kinematic chains with high accuracy, which is essential for robotic hands, manipulators, mobile robotics, and much more. We have been listening to user feedback and have made multiple changes to improve performance and usability.

The properties of the ArticulationBody component have been rearranged for better readability. Now, parameters related to mass are in one visual block, followed by the parameters related to anchors and then to drives. These changes have been backported to Unity 2021.1 and 2020.3.

The ArticulationBody editor now uses the same joint tools that the regular iterative joints do. This ensures a consistent experience across the Editor. On top of that, it is also possible to edit the limits and anchors of all joints visually. The joint tools support all of the ArticulationBody joint types, because they have been extended to allow editing of the prismatic joint, which wasn’t available before. See this forum thread for more information or to provide feedback.

ArticulationBody has a new setting for selecting the collision detection mode. All the continuous collision detection modes are supported, just like with Rigidbody. This was backported to 2021.1 and 2020.3, since it was viewed as essential to certain use cases. For example, training a machine learning model to control a humanoid character to walk required enabling continuous collision detection on the feet. Otherwise, our model learned to use the depenetration impulse from the feet overlapping the ground to its advantage: it moved forward a lot faster than it normally would, and even discovered some flight patterns.

Additional variants of ArticulationBody.AddForce have been added to match those in Rigidbody.AddForce, so you can apply a force, acceleration, or impulse directly. This eases the migration of pre-existing code from Rigidbody to ArticulationBody.
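In practice, porting becomes largely a matter of swapping the component type. A minimal sketch, assuming the new 2021.2 ForceMode overloads (the thruster setup is illustrative):

```csharp
using UnityEngine;

// Minimal sketch: the same AddForce call pattern now works on both
// Rigidbody and ArticulationBody, easing migration of existing code.
public class ThrusterController : MonoBehaviour
{
    [SerializeField] ArticulationBody body;
    [SerializeField] float thrust = 10f;

    void FixedUpdate()
    {
        // ForceMode.Force: continuous, mass-dependent force (as with Rigidbody).
        body.AddForce(transform.forward * thrust, ForceMode.Force);

        // ForceMode.Acceleration ignores mass; ForceMode.Impulse applies an
        // instantaneous momentum change - the same semantics as Rigidbody.
    }
}
```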
We improved the clarity of the documentation by explicitly stating boundary conditions and special cases. In this release, we have a new page for the ArticulationBody component. Based on user feedback, we have also included units of measurement for all of the C#-facing properties of the ArticulationBody component in the docs; see mass, for example.

We continue to invest in making the general physics-related pipelines easier to use, and in providing more flexibility to accommodate various usage patterns. We believe this makes more sophisticated simulations possible, as creators can now use the extra functionality to understand their field better and configure the simulation to achieve more accurate results.

As part of that, the Physics Debugger now supports prefabs properly – both in Isolation Mode and in Context Mode. This allows applying the divide-and-conquer principle to a larger extent, by observing the properties of prefabs in isolation while the rest of the scene is not shown.

Physics layers are an essential tool for optimizing the performance of the collision detection system. Frequently, and especially in large scenes with many layers, it’s best to disable all collisions first, and enable only the needed ones afterwards. To enable this usage pattern, new buttons were added to the physics settings to toggle collision detection between all layers. This is useful in larger projects where many layers are present, but where you can reduce interactions to a smaller subset of layer combinations to improve performance.

Additional metrics have been added to the Physics Profiler. Now there are more graphs available, and the textual pane displays more data about the current simulation. Among the new additions are the total number of physics queries, the number of articulation bodies, and the number of transforms synced over the last frame. A custom profiler module can also be created to include only the metrics needed for a particular project. Finally, memory usage is now also available as a metric.

The physics batch queries are a way to boost the performance of physics queries (like raycasts, for instance) by running them on all available cores, as opposed to the normal path, where they all run exclusively on the main thread. Ideally, the code that depends on the results of a batched query is a C# job itself, to maximize the performance boost. However, the main problem preventing this was that the collider hit was reported as a Unity Component (RaycastHit.collider), and no Unity Component is available off the main thread – which quite limited the adoption of batched queries. To address this, the instance ID of the Collider that was hit is now exposed. Instance IDs can be freely used off the main thread, so chaining query jobs is no longer a problem.
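Sketched out, the chained-jobs pattern looks something like the following. This is a minimal sketch assuming the newly exposed colliderInstanceID field on RaycastHit; the consumer job here just counts hits, and the ray fan is illustrative.

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class BatchedRaycastExample : MonoBehaviour
{
    struct CountHitsJob : IJob
    {
        [ReadOnly] public NativeArray<RaycastHit> Hits;
        public NativeArray<int> HitCount;

        public void Execute()
        {
            for (int i = 0; i < Hits.Length; i++)
            {
                // colliderInstanceID is a plain int, so unlike RaycastHit.collider
                // it is safe to read inside a job; 0 means the ray hit nothing.
                if (Hits[i].colliderInstanceID != 0)
                    HitCount[0]++;
            }
        }
    }

    void FixedUpdate()
    {
        var commands = new NativeArray<RaycastCommand>(64, Allocator.TempJob);
        var hits = new NativeArray<RaycastHit>(64, Allocator.TempJob);
        var count = new NativeArray<int>(1, Allocator.TempJob);

        // A fan of rays around the object, one command per direction.
        for (int i = 0; i < commands.Length; i++)
            commands[i] = new RaycastCommand(transform.position,
                Quaternion.Euler(0f, i * 360f / commands.Length, 0f) * Vector3.forward,
                100f);

        // Run the raycasts across worker threads, then chain the consumer job.
        JobHandle raycasts = RaycastCommand.ScheduleBatch(commands, hits, 16);
        new CountHitsJob { Hits = hits, HitCount = count }.Schedule(raycasts).Complete();

        Debug.Log($"{count[0]} rays hit a collider");
        commands.Dispose(); hits.Dispose(); count.Dispose();
    }
}
```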
The patch friction mode is the default friction simulation mode in Unity. It’s certainly a compromise towards higher performance rather than simulation accuracy, but it can still be tweaked to get reasonable results within a tight computational budget. A new, improved patch friction mode is now available in the physics settings. It addresses the problem that, when more than one friction anchor is generated in a contact pair, the friction forces can be up to two times stronger than predicted by analytical models. For example, in the following graph, cubes with different dynamic friction slide over a plane. The red cubes show the expected final positions as predicted theoretically. The blue cubes use regular patch friction, and travel only about halfway to the goal. The green cubes use the new improved patch friction, and come much closer to the expected values.

The new contact modification API is now available, and we’re collecting feedback on this forum thread. It allows for customizing the physics engine’s reaction to contacts. For any contact pair, it is possible to change contact points, limit the impulses applied by the solver, tweak target velocities, and more. Among other uses, it allows making holes in any collider, creating sticky contacts, and building various physics-powered conveyor belts. In the example below, the sphere falls through the plane because it ignores the contact points with it (this can be made area-sensitive). On the right, a cube bounces off two inclined planes without rotating – because the reaction to the contact was customized to exclude rotations.

These improvements help users generate more realistic results from their simulations in Unity. Many of them were made based on suggestions and feedback from our community, and we invite you to join the conversation. To get started with robotics in Unity, check out some of our examples and demos on the Unity Robotics Hub.

>access_file_
1286|blog.unity.com

Breaking down how different generations interact with their devices

With 6.7 billion devices connected globally, and smartphone usage having penetrated every demographic segment, advertising directly on devices through OEM and carrier partnerships is not one-size-fits-all. That’s because different users engage and interact with their devices in different ways – a 70-year-old is not going to react to advertising the same way a 24-year-old will.

It’s interesting to go deeper into the generational gaps in how different age groups interact with their devices, as well as the apps they are inclined to engage with. Let’s break down three generations and how they interact with their phones and apps, so you can better shape your on-device advertising strategy to reach your audience.

Baby Boomers (1946-1964)

Baby Boomers are known for being the least familiar with technology, but their time spent in apps has gone up 30% according to TechCrunch*, meaning they are active on their smartphones, just not at an advanced level. As an advertiser, reaching these users early in the device lifecycle, when they’re already focused on setup, is a valuable way to engage this audience.

Because they have less experience downloading apps, they’re less likely to frequent the app store, so meeting them early is the best way to get in front of these users. Ultimately, they are extra cautious about their new phones – only 26% of Baby Boomers say they are very confident using electronic devices to go online, according to Herosmyth* – which means they’re going to spend time carefully selecting apps that matter to them rather than swapping out apps they’ve already downloaded. But which apps actually matter to this generation?

According to App Annie*, Baby Boomers are likely to use productivity and local news apps more than social or photo sharing apps. Being new to technology, they are more inclined to download apps that will make their lives easier rather than apps they have to keep up with and maintain.

Though reaching users early is always beneficial to your UA strategy (95% of all users download 40% of the apps they will ever install during the first 48 hours of owning the device), Baby Boomers are the most likely to download apps early in the device lifecycle. Considering they want apps that will simplify their lives, Baby Boomers don’t want to wait to have these tools accessible on their devices.

Millennials (1981-1996)

Over half of Millennials say they use 3-5 apps a day according to Interop*, making them incredibly active on apps compared to other generations. Because of this reliance, Millennials are constantly discovering new apps to download throughout the lifecycle of the device.

Millennials were the first generation to make smartphone usage a mass phenomenon, which means downloading apps is a relatively calm and fun experience for them. On top of that, they are the most diverse generation in terms of lifestyle, according to Marketing Dive*, and their app choices reflect that diversity – they are active across a range of different app categories. Ultimately, Millennials are constantly looking for new, relevant apps to add to their collection.

As a whole, Millennials use social networking apps the most – 80% use these channels according to Marketing Dive* – followed by music streaming, games, and communication. In contrast to Baby Boomers, Millennials don’t use many weather or search apps. Millennials use apps for a wide range of purposes and don’t limit themselves to a single category.

When it comes to Millennials, it’s important to show them apps that will matter to them, especially considering users only open 18 of the average 40 apps on their phones. You should be actively reaching this audience throughout the device lifecycle at contextual touchpoints, such as notifications and device updates. If Millennials can engage with a message tailored just for them, about an app relevant to them, at the right time, they’re likely to open and use that app more often.

Generation Z (1997-present)

Generation Z is highly concerned with staying connected and informed, and a fear of missing out (FOMO) keeps them unlocking their phones nearly 80 times a day, according to Statista*. Staying up to date through their devices is a huge part of young people’s lives, which is why this generation loves notifications.

Generation Z doesn’t know a world without notifications, let alone smartphones – the first iOS and Android notifications were only released in 2009. They are often referred to as the iGeneration, Net Gen, or Digital Natives because they are the most tech-savvy age group. While Baby Boomers learned about the world through print newspapers and 24-hour news channels, Gen Z thrives on notifications to stay up to date on what is new and noteworthy. With a “ding” feeling like oxygen to Gen Z, this group sees the physical and digital worlds as one and the same.

When it comes to apps, Gen Z has successfully replaced the TV with visual social platforms such as Snapchat, TikTok, and YouTube. According to Marketing Dive*, 75% of Gen Z consider Snapchat the place to stay connected, and 71% use YouTube for long-form video content. In fact, Gen Z watches 68 videos per day on average, according to Geo Marketing*. As an advertiser, it’s important to note that because Gen Z is impressively informed about the world and brands, this group is more likely than others to turn to apps that create unique video content catered to them.

Smartphones are a third arm to Gen Z, and notifications are their primary connection to communicating, sharing, and learning. Users receive over 60 notifications a day according to the Traffic Company, and it’s likely that Gen Z is already on their phones watching videos when a notification arrives. That said, on-device notifications are a great way to reach this audience and share information about your app.

Different generations are undoubtedly going to interact differently with their devices. With nearly everyone owning a smartphone in today’s world, it’s valuable to be aware of the generational gaps when advertising directly on mobile devices. At the end of the day, it’s essential to know your audience, test the theories above, and iterate your copy and campaigns to match the generation you are trying to reach.

>access_file_
1287|blog.unity.com

Advance your robot autonomy with ROS 2 and Unity

Unity is excited to announce our official support of ROS 2, whose robust framework, coupled with simulation, will enable myriad new use cases.

The Robot Operating System (ROS) is a popular framework for developing robot applications that began in 2007. Although originally designed to accelerate robotics research, it soon found wide adoption in industrial and commercial robotics. ROS 2 builds on ROS’s reliable framework while improving support for modern applications like multi-robot systems, real-time systems, and production environments. Unity is extending its official support of the ROS ecosystem to ROS 2.

Modern robotics is shifting its focus towards “autonomy” – the study and development of algorithms capable of making decisions in the absence of strict rules defined by a human developer – and simulation supports this transition by enabling greater flexibility and faster experimentation than real-world testing. We’ve developed an example, Robotics-Nav2-SLAM, to demonstrate how to get started simulating simultaneous localization and mapping (SLAM) and navigation for an autonomous mobile robot (AMR) with Unity and ROS 2.

While ROS remains an excellent framework for robotics prototyping, it is reaching the end of its lifespan and is missing some features necessary to go beyond prototyping into full-scale production and deployment of a robotic system. ROS 2’s technical roadmap was established and is maintained by a committee of industry veterans, with explicit tenets defined to ensure ROS 2 is a suitable framework for robotics end users. ROS 2 supports more operating systems and communication protocols, and is designed to be more distributed than ROS.

Many of the emerging use cases for ROS 2 focus on autonomy. Introducing autonomy means the decisions a robot makes, and the results of those decisions, are not neatly predictable using only a state machine and a collection of mathematical formulae, as they may be in many industrial robotics use cases. Compared to an industrial robot’s, an autonomous robot’s operating environment is exponentially larger, and the permutations of inputs it encounters far surpass what can be reproduced in a controlled laboratory environment. To fully validate that an autonomous robot behaves the way you expect, you can either do it on the robot – in your own personal pocket dimension where time has no meaning and reality is everything and nothing all at the same time – or you need the next best thing: a suitably robust simulation.

If a robot is expected to sense an environment, a simulation must be capable of accurately modeling those sensors without compromising the accuracy of the environment’s simulated topology and physics. If there are other agents in that environment, i.e., people or other robots, then the simulation must be capable of modeling the agent behavior while still maintaining the accuracy of its sensor simulation, topology representation, and physics modeling. And to fully exercise a robot against all the scenarios it might encounter, this simulation needs to be run many, many, many times. This is all to say that simulation in support of autonomous robotics requires four things not often required by industrial robotics – flexibility, extensibility, scalability, and fidelity – all without sacrificing performance.
Unity sits at the intersection of all these requirements, which is why we are building more features into our platform to support the development of autonomous robots.

With Unity’s Robotics packages, you have access to the interfaces we’ve already built to make communicating with ROS or ROS 2 easy. You can import existing robot configurations directly from URDF files with our URDF Importer, and you can start exercising your robot against Unity’s high-quality, highly efficient rendering pipeline and a performant, accurate physics simulation. Through Unity’s Asset Store, you have access to a great variety of additional, premade environments and props to help you model your robot’s specific environment and task. With a few clicks, the simulation you assemble can be built and deployed to any mainstream OS, be it Windows 10, macOS, or Linux. Using C# scripting, Bolt visual scripting, or any of the many scripting and utility toolkits available in the Asset Store, you can continue to customize the functionality of your particular simulation to suit your specific use case.

Moving your Unity project to ROS 2 is simple. In the ROS-TCP-Connector package, we’ve added a dropdown menu that lets you toggle the package between ROS and ROS 2 integration. Upon changing the protocol, Unity automatically recompiles the package against the message definitions and serialization protocol you’ve selected. To test it out, simply make this change in your own project, or pull down our example repository, Robotics-Nav2-SLAM, which contains the necessary components to use Unity as the simulated source of sensor and odometry information for the Nav2 “Navigating while Mapping” tutorial.
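For a sense of what the Unity side looks like, here is a minimal sketch of a velocity-command subscriber built on the ROS-TCP-Connector API. Method and message names follow the Unity Robotics Hub tutorials and may differ slightly between package versions; the topic name, wheel math, and field values are illustrative.

```csharp
using RosMessageTypes.Geometry;
using Unity.Robotics.ROSTCPConnector;
using UnityEngine;

// Minimal sketch: receive geometry_msgs/Twist commands from Nav2 and apply
// them to a simulated differential-drive robot. The same code works against
// ROS or ROS 2, depending on the protocol selected on the connector.
public class CmdVelSubscriber : MonoBehaviour
{
    [SerializeField] ArticulationBody leftWheel, rightWheel;
    [SerializeField] float wheelRadius = 0.033f, trackWidth = 0.16f; // illustrative

    void Start()
    {
        ROSConnection.GetOrCreateInstance().Subscribe<TwistMsg>("cmd_vel", OnCmdVel);
    }

    void OnCmdVel(TwistMsg msg)
    {
        // Convert body linear/angular velocity into per-wheel angular velocities.
        float linear = (float)msg.linear.x;
        float angular = (float)msg.angular.z;
        float left = (linear - angular * trackWidth / 2f) / wheelRadius;
        float right = (linear + angular * trackWidth / 2f) / wheelRadius;

        SetWheelVelocity(leftWheel, left);
        SetWheelVelocity(rightWheel, right);
    }

    static void SetWheelVelocity(ArticulationBody wheel, float radiansPerSecond)
    {
        // ArticulationBody drives take target velocities in degrees per second.
        var drive = wheel.xDrive;
        drive.targetVelocity = radiansPerSecond * Mathf.Rad2Deg;
        wheel.xDrive = drive;
    }
}
```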
This example project demonstrates how to use Unity to simulate a navigation system running in ROS 2. The concept of navigation is straightforward and doesn’t change much in the context of autonomous robotics: navigation algorithms aim to find a path from where one is to where one wants to be. However, to get from where one is to where one is going, one must first do SLAM – simultaneous localization and mapping. SLAM describes a collection of algorithms built to answer the question, “Where am I right now, and where have I been?” Humans perform SLAM constantly, as an intrinsic part of the processing pipeline between our senses and our brain. For autonomous robots, performing accurate SLAM is still a challenging proposition in most real-world environments. What, exactly, an autonomous mobile robot requires to always know where it is, relative to everywhere it’s ever been, is still an area of active research. The only way to really answer this question for a given use case is to try a lot of different things (sensors, algorithms, etc.) and see what sticks.

In our example, you will find a simple warehouse environment, a fully articulated model of a TurtleBot 3 mobile robot with simulated LIDAR and motor controllers, and a Dockerfile used to build an image containing all of the ROS 2 dependencies necessary to exercise the Nav2 and slam_toolbox stacks against our simulation. The steps of Nav2’s tutorials will provide useful context if you’ve never used ROS 2 or worked with SLAM algorithms before. All the instructions to get you started and the project running are in our repository.

Roboticists new to Unity, and Unity developers new to robotics, are encouraged to try our ROS 2 integration and perform autonomous navigation with Robotics-Nav2-SLAM. This is just a small example of what you can build by integrating our robotics tools with the many other powerful packages available from Unity. In tandem, the Unity Robotics team continues to build and release features explicitly in support of common robotics use cases, with an emphasis on scalability and extensibility.

Unity will also be hosting a workshop at ROSCon this year that extends the Nav2-SLAM example to support multiple robots, with specialized roles, working together to accomplish a specific task.

>access_file_
1289|blog.unity.com

5 ways to speed up your workflows in the Editor

Achieve more in less time with the Shortcuts Manager, Presets, QuickSearch, and more.

We’re always working to bring greater efficiency to your day-to-day workflows, boost your productivity, and let you focus on your creative process. Even experienced Unity developers might have missed some of these improvements, so we created an e-book with more than 70 time-saving tips to accelerate your workflow in Unity 2020 LTS. This is the first in a series of three blog posts highlighting some of these tips, starting with how you can speed up your core Editor workflows.

Shortcuts Manager

The Shortcuts Manager is an interactive visual interface for managing Editor hotkeys. Here, you can assign shortcuts to different contexts and visualize the existing bindings for any of the tools you use frequently. You can bind any key or combination of keys to a Unity Editor command. For example, the R key is bound by default to the Scale tool in the Tools context.

The Binding Conflicts category also identifies whether you have a shortcut assigned to two commands that can execute at the same time. Use the interface to resolve such conflicts. Note: You can assign the same shortcut to multiple commands if they are in different contexts and cannot execute at the same time.

To access the Shortcuts Manager from Unity’s main menu:

- On Windows and Linux, select Edit > Shortcuts
- On macOS, select Unity > Shortcuts

Use the provided API in the UnityEditor.ShortcutManagement namespace to define custom shortcuts in your own scripts and packages.
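For example, here is a minimal sketch of a custom shortcut registered through that API (the ID, default binding, and body are placeholders):

```csharp
using UnityEditor;
using UnityEditor.ShortcutManagement;
using UnityEngine;

static class CustomShortcuts
{
    // Registers "My Tools/Align Selection" in the Shortcuts Manager, bound to
    // Shift+Q by default; users can rebind it like any built-in shortcut.
    [Shortcut("My Tools/Align Selection", KeyCode.Q, ShortcutModifiers.Shift)]
    static void AlignSelection()
    {
        Debug.Log($"Aligning {Selection.count} selected object(s)…");
    }
}
```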
Presets

This feature allows you to customize the default state of almost anything in your Inspector. Creating a Preset lets you copy the settings of a component or asset, save it as an asset, and apply the same settings to another item later.

Use Presets to enforce standards or to apply reasonable defaults to new assets. This ensures consistent standards across your team, so that commonly overlooked settings don’t impact your project’s performance. Click the Preset icon at the top right of the component. Click “Save current to…” to save the Preset as an asset, then click one of the available Presets to load a set of values.

Other handy ways to use Presets:

- Create a GameObject with defaults: Drag and drop a Preset asset into the Hierarchy to create a new GameObject with the corresponding component filled in with the Preset values.
- Associate a specific Type with a Preset: In the Preset Manager (Project Settings > Preset Manager), specify one or more Presets per Type. Creating a new component will then default to the specified Preset values. Pro tip: Create multiple Presets per Type, and rely on the Filter to associate the correct Preset by name.
- Save and load manager settings: Use Presets for a Manager window so the settings can be reused. For example, if you plan to reapply the same Tags and Layers or Physics settings, Presets can reduce setup time for your next project.

Scene visibility

As your Scene grows larger, you can temporarily hide specific objects so you can select and edit your GameObjects with greater ease. Instead of deactivating the GameObjects (which can lead to unintended behavior), toggle the Scene visibility controls. This allows you to hide and show objects in the Scene view without changing their in-game visibility.

Use the toolbar in the Hierarchy window to enable or disable Scene visibility for GameObjects in the viewport. Note that the status icons in the Hierarchy may change depending on whether parent or child objects are hidden.

Use Isolation View to concentrate on a specific object and its children. Select a GameObject in the Hierarchy window and press Shift + H to toggle it on and off. This overrides your other Scene visibility settings until you exit. Remember that you can always use the Shift + spacebar shortcut to maximize the viewport and hide the rest of the Editor as well.

Scene picking

You can modify the pickability state of GameObjects, similar to Scene visibility. Use the toolbar to block specific GameObjects from being selected in the Scene view. This is useful for avoiding selecting and editing an unintended GameObject in large scenes. Because you can toggle pickability for a whole branch or a single object, some GameObjects may be pickable but have children or parents that are not. Dedicated icons differentiate their status.

Searching

The Editor contains search functionality for the Scene view, the Hierarchy window, and the Project window. In addition to searching for names, you can search by type: use the dropdown to select a Type, or use the t: shorthand syntax. If you use Asset Labels, you can also use the l: shorthand to filter by label. In this example, we search the scene for all objects of type Camera:

QuickSearch

If you want to extend your search beyond the windows discussed here, you can find anything in Unity using the QuickSearch package. Unity 2021.1 incorporates this functionality into the Editor without requiring a separate package installation; look for it under Edit > Search All (Ctrl + K on Windows / Cmd + K on macOS). In earlier versions, once installed from the Package Manager, activate QuickSearch from Help > QuickSearch or use the Alt + ‘ hotkey combination.

QuickSearch enables you to search multiple areas of Unity, including assets, scene objects, menu items, packages, APIs, and settings. Here is an example of a QuickSearch for “Camera”:

Make sure you run the setup wizard to configure the search settings for the best results. See the QuickSearch guide to learn more about searching both inside and outside of Unity.

Stay tuned for upcoming blog posts with more tips to speed up your workflows – or get all the tips now by downloading the free 70+ tips to increase productivity with Unity 2020 LTS guide. You will need to fill out a short form to have the e-book sent to your inbox. Let us know in the comments what additional topics or features you’d like us to cover.

>access_file_
1290|blog.unity.com

Indie spotlight: Meet the two-person game team behind Virede

Get to know Virede, the two-person game studio based in Ukraine and the developers behind Idle Law Firm. Hear directly from Serhiy Kozachuk and Alex Kozachuk and learn all about their game developer journey – from hyper-casual to idle – and how they teamed up with ironSource to boost app revenue 100%. Check out the Q&A below.

How and why did you first get started in gaming?

We’re a small studio – just two people, me and my brother – and we’ve worked together for seven or eight years. We started as freelance developers, and eventually found the world of games. We jumped into the hyper-casual space before moving to idle games, and have worked with a lot of the biggest publishers out there on both the iOS and Android stores.

How long does it take you to build a game from concept to production?

For our last game, it took us 3-4 months of development from ideation to going live. We then spent one month testing in Canada and Great Britain. It was pretty hard for us to scale the game and our ad spend because we were new to the space. With ironSource, we used the tCPI optimizer and liked it a lot because it does the job automatically for us. We don’t need to spend hours manually managing all the campaigns and their performance.

What was the main challenge you faced in growing your game?

We worked with the App Store and Google Play store algorithms for a while, but they are pretty tricky, since you don’t always know the best strategy to implement. We don’t have much experience in marketing, so it’s pretty hard to know where to spend your budget, how to spend your budget, and whether it’s best to grow your game in the first few days or aim for small incremental growth.

Why did you partner with ironSource?

We decided to keep our studio small and instead spend our resources on marketing and monetization. We tried ironSource – and we were amazed, because we had worked with a lot of other mediation solutions and ad networks, but we got frustrated every time we tried to implement or add something new. With ironSource, we were able to make changes to our setup with just two clicks. It was so hard for us to monitor everything, and ironSource made it easy. We don’t need to worry about anything. The support is awesome, and our app revenue boosts were mind-blowing – ironSource helped us boost our app revenue 100%. We have a plan for our games for the next year or two, and we want to continue working with ironSource. It’s a crazy good platform, and we want to try everything it has to offer, because it really works.

What advice would you give other indie developers trying to make it?

If you want to develop games, give it a try – even if you start by just reading and learning online. Keep trying, because the more you try, the better.

>access_file_
1291|blog.unity.com

Made with Unity: Soccer robots with ML-Agents

Our Made with Unity: AI series showcases Unity projects made by creators for a range of purposes that involve our artificial intelligence products. In this example, ML-Agents empowered AI developers by allowing them to quickly and easily set up machine learning environments and to train an agent how to play soccer before finally transferring that agent to a real robot.Unity Machine Learning Agents Toolkit (ML-Agents) allows users to easily get started with reinforcement learning (RL) using Unity. ML-Agents gives users a variety of sample environments and model architectures that they can use to start working with RL. Users can then tune hyperparameters to experiment and improve the resulting models. All of this can happen without the user having to worry about creating a Unity environment or importing assets – and there’s no immediate need for coding. This project out of Japan by Ghelia Inc. used the ML-Agents soccer environment to train an agent to play soccer. The resulting RL model was then deployed on real Sony toio robots to play soccer. This is an exciting example of simulation-to-real-world with robotics using ML-Agents to train.We interviewed Ghelia’s Ryo Shimizu, CEO and President; Hidekazu Furukawa, Lead Programmer for Innovation and Brand Strategy Office; and Masatoshi Uchida, Manager for Innovation Section of the Innovation and Brand Strategy Office to find out what inspired them to build this project. Read on to discover how they used ML-Agents Toolkit for training a real-world robot how to play soccer and how a golf ball fits into this scenario.What inspired you to create your project? Ghelia is a company that focuses on reinforcement learning applications. The founder of Ghelia, Hiroaki Kitano, launched RobocupSoccer and developed the AIBO at Sony. Our team had previously built an air hockey demo, but since it consisted of many different components, it was not very portable. When we started to discuss creating another demo to explain to customers what reinforcement learning is, we wanted something that was going to be easier to carry around. Since ML-Agents already had a soccer environment, it made sense to use the small and portable Sony toio robots to create a soccer game, which could also lead to viral content.To apply reinforcement learning to a real robot, the robot needs to exist in a simulation environment. Luckily, toio already has a simulator called toio SDK for Unity. By adding the ML-Agents package to it, we were able to use it for training immediately. While the toio SDK provided the robot models for Unity, we still needed to create the ball. We used Unity’s physics engine to recreate the ball in the simulator and needed to find a real-world ball that would match the simulation results. It turns out that a golf ball produced real-world results that reflected the training results. The ball’s position was detected in the simulation by using Unity transform value, and in the real world by image recognition using OpenCV and a camera.We used a golf ball to represent the soccer ball, but to increase the recognition rate, we painted it red. Amazingly, we were able to use just one iPhone and its camera to detect the ball, control all eight robots (it was a four-on-four soccer game), and perform inference with the ML-Agents model.At first, there were many own goals, so we tried to provide a negative reward for an own goal. However, this resulted in the goalkeepers not defending their goal. 
When we tried giving a positive reward for moving the ball, both teams would simply go back and forth, not putting the ball in the goal, basically stalling for time. Finally, we decided to reward one point for putting the ball in the opponent’s goal and took one point away for being scored on.It was sometimes difficult to ascertain why the actual robots did not work as well as the simulation. For example, sometimes the inference didn’t work because we operated the robot on a slightly tilted floor. Other times, the ball rebounded differently from the simulation, so the robots didn’t respond as expected. The positioning of the camera was also quite sensitive, requiring millimeter-order precision, making it difficult to adjust at the event site every day. After each major set of improvements to the model, we trained for about three days. In the end, we had about six training sessions to achieve our results.In the ML-Agents demo, after a goal, the agents line up in their original position, but it’s not so simple for real robots. Some problems, such as avoiding collisions between toios, were difficult to solve through reinforcement learning alone. While we initially tried to design a reward function for this scenario, we eventually solved it heuristically.If there were demand, we would definitely consider making this project open source. You can find additional details about this project in our blog post (in Japanese).AI, especially deep learning, is fascinating, but it is not well understood. You can’t fully appreciate its beauty and complexity until you work with it firsthand, and that’s a shame, so we encourage Unity developers worldwide to try it. I want to emphasize how much fun machine learning is and that Unity ML-Agents is a system that allows you to get started with machine learning easily or incorporate it into your project.Get started with Unity ML-Agents or learn more about the Unity Robotics packages today. If your project requires you to kick off multiple training sessions in parallel, contact us to learn more about our ML-Agents Cloud offering.Hidekazu Furukawa has also published a Japanese book called Unity ML-Agents Practical Game Programming that details how to get started with reinforcement learning using ML-Agents.
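The reward scheme Ghelia settled on - plus one point for scoring, minus one for conceding - maps directly onto ML-Agents' C# reward API. Here is a minimal, hypothetical sketch (not Ghelia's code): AddReward and EndEpisode are standard Unity.MLAgents.Agent calls, while SoccerAgent and the two goal hooks are illustrative names only.

```csharp
using Unity.MLAgents;

// Hypothetical agent illustrating the final reward design described above:
// +1 when the ball enters the opponent's goal, -1 when the agent's own goal
// is scored on. A (hypothetical) match controller would call these hooks.
public class SoccerAgent : Agent
{
    public void OnScoredGoal()
    {
        AddReward(1f);   // reward putting the ball in the opponent's goal
        EndEpisode();
    }

    public void OnConcededGoal()
    {
        AddReward(-1f);  // penalize being scored on, own goals included
        EndEpisode();
    }
}
```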

>access_file_
1292|blog.unity.com

3 surprise findings about the state of in-game advertising

Games have emerged as one of the world's biggest forms of entertainment according to time spent, dollars generated, and audience size. While stereotypes and misperceptions have kept marketers from going all-in on in-game inventory in the past, advertisers are beginning to discover that mobile gaming is a lucrative supply source for high quality users that convert. To better understand the general perception of in-app mobile advertising, ironSource partnered with Digiday to survey mobile game players and advertisers.

On the player side, the survey set out to better understand what today's "mobile gamer" looks like and how they feel about in-game ads. On the advertiser side, we were interested in learning about how they leverage mobile gaming as a supply channel to fit their needs. For both segments, our curiosity lay in understanding how well advertisers are addressing diversity for their audiences. With gaming on the rise and 2.6 billion gamers worldwide as of 2020, there's no better time to learn everything you can about advertising in mobile games, and about gamers in today's ecosystem. Let's dive into three of the most surprising findings from our survey.

65% of mobile game players don't consider themselves "gamers"

In the past, the general perception has been that mobile gaming was just for men. However, the introduction of new game genres, such as casual and puzzle games, has pushed mobile gaming deeper into the market and attracted high female representation in recent years. In fact, 46% of "gamers" today identify as female. With a wide range of games available attracting users from different demographics, people are playing mobile games more than ever before. Among those we talked to, 62% play games at least once a day and 44% exclusively play games on mobile. Yet, despite how often survey respondents play games, 65% of those who play mobile games don't consider themselves "gamers" - this suggests that the term "gamer" does not typically encompass people who play mobile games, but rather just PC or console. This may be because many of the respondents surveyed say they play games to relax or get their minds off serious issues. For these users, the term "gamer" may imply that a love of games is what motivates them to play, when in fact that's not the case. With a wide range of people playing mobile games, we also found that advertisers are starting to increase their budgets for in-game spend.

69% of advertisers say they expect to increase their spend in-game in 2021

With so many users playing mobile games today, advertisers are embracing mobile games to connect their brand messages with their target audiences. In fact, 75% of brand respondents say they've already allocated digital marketing budgets to in-game advertising in the past, and the remaining 25% said they haven't yet but will allocate budget to the space in the year to come. In other words, 100% of advertisers have a plan to invest in mobile games. On top of that, 69% of advertisers will increase in-game mobile spend. In the last few years, mobile game developers have been incredibly successful at not only finding organic ways to incorporate advertising into the user experience, but also at implementing ad formats the user wants to see.
These capabilities have translated into success for advertisers - with users more engaged, advertisers see higher performance results, and therefore plan to increase their spend. In particular, advertisers are investing in mobile game advertising with the goal of building more inclusive marketing and advertising campaigns.

76% of advertisers use games to reach ethnically and racially diverse audiences

Brands are seeking to build more inclusive advertising and marketing strategies that target people across all genders, ages, incomes, races, ethnicities, and even abilities. As the survey showed, advertising in mobile games doesn't mean advertisers are only reaching the stereotypical "gamer," with advertisers beginning to realize that games can reach a truly broad base of game players across various demographics. In fact, 76% of advertisers said they've used games to reach ethnically and racially diverse audiences, with age and income diversity following behind. For more insights, download the full research report here.

>access_file_
1293|blog.unity.com

Mobile games: A premium video advertising channel

In advertising, the word "premium" is often used to describe a better class of time and attention. But how do you define what a premium video ad placement actually is? While it can be subjective, there are three key factors that objectively contribute to quality user experiences in the mobile advertising space. We've created a handy guide to help marketers decide where to assign their ad spend.

1. Format: 100% full screen vs 100% viewable

What does a premium advertising format look like? Consider that film and television still account for some of the best opportunities of any advertising channel, representing almost 32% of total media ad spending in the US. Why is that? Because television and theatrical ads are full screen, meaning audiences engage with the content without distraction, unlike mobile web where host content and even other advertisements are competing for audience attention. Even as digital advertising continues to grow, cinema advertising continues to be a major source of mind share for top-tier brands, having closed out another record year in 2018 with more than $750 million in revenue. Over time, mobile advertising formats have come to embrace the same things that continue to make television and cinema such powerful channels for marketers. When advertisers first entered the mobile space, however, they relied on ad formats that were already popular on desktop — banners and in-browser video primarily — which are typically couched in less engaging content that only serves to dilute audience attention. It was a far cry from the dedicated viewing experience of a theatre or television screen, but things have changed for the better. Today, modern mobile in-app ads have more in common with theatrical or television ads than their predecessors. They all occupy the entire screen to capture an audience's attention in a way that few other channels can match. Our mobile devices are with us in our most private moments, affording advertisers a valuable opportunity to connect. Television, cinema, and in-app mobile ads provide a premium format for reaching audiences, more so than mobile web or even native social.

2. Contextual relevance

A premium advertising experience is one that is relevant to the largest possible percentage of its audience. It's why modern advertising innovation is so heavily geared towards the reduction of waste through better audience targeting. The less money advertisers spend on the time and attention of audiences unlikely to take action, the better. This means pursuing the ideal of 1:1 overlap between advertising content and audience interest. A lofty goal, but one in-app advertising solutions are coming closer to every day. Data is king here, and mobile applications are among the most data-rich environments around when users opt in to share their information. Variables like age, gender, behavior, spending history, and more all empower mobile advertising platforms to deliver a more contextually relevant advertising experience, ensuring fewer wasted ad dollars. It's this same wealth of data and pursuit of greater contextual relevance that has already helped make mobile gaming one of the largest and most profitable markets on the planet. Advertising and in-app purchases are the only two ways for game and app publishers to make money, and modern ad technology has proven effective enough to make mobile game revenue the second most profitable modern entertainment medium, beaten out only by overall global gaming revenue.
Other mobile game and app developers have long leveraged these abilities for their own growth goals, but these same opportunities are available to brands.

3. Positive sentiment: Delight your audience

Few ads can perform as well as those users enjoy interacting with. Studies show that ads which offer in-app rewards consistently generate the highest sentiment of any mobile ad format, while mobile pop-ups typically rank last. A recent Tapjoy study found that 68% of respondents felt positively towards mobile rewarded ads that operate in this way, the highest of any format available. Mobile users are perfectly happy to watch ads frequently, provided their time is respected, and this means higher ad engagement for brands. Recent years have seen the rise of "value-exchange" or "rewarded" advertising. Mobile veterans recognize it as any integrated ad placement that prompts audiences to opt in to view an ad in exchange for virtual compensation. Rewarded advertising has proliferated in the mobile gaming space, where ads can offer in-app currency and extra lives. It's become the de facto solution for publishers, and as the practice becomes more popular, we're seeing it expand into other app categories like dating and media.

Mobile advertising is both an art and a science, but there are differentiators that distinguish a premium experience from a simply passable one. Always consider how the following can fit with your campaign:

- Is your ad both full screen and fully viewable?
- Is your ad contextually relevant to its audience?
- Does your ad offer tangible value or rewards to viewers?

If you're an advertiser looking to make the most of your media budget and you answered "no" to any of these questions, the advertising experts are here to help. With more than 10 years of experience helping connect advertisers with their ideal mobile customers, we're confident that we can deliver meaningful growth for any brand or product.

>access_file_
1294|blog.unity.com

2D art creation in Dragon Crashers

Jarek Majewski is the freelance 2D artist and coder who created the art and animations for our latest 2D sample project, Dragon Crashers. Talking with Eduardo from the Unity 2D team, Jarek opened up about his creative process, tips for creating sprites, 2D lighting and animations, and using Affinity Designer and Photo, his art and design software of choice.You can find Jarek on Twitter @mindjar and via his website.I’ve been drawing since I was a child. I wanted to use my imagination to create worlds, stories, and characters. Then I discovered video games and was mesmerized. I combined my passion for art with that for video games.There’s a simplicity to using a pencil that allows me to visualize my thoughts with minimal effort. I don’t need to prepare anything, launch any software, or choose a tool or color to paint – it’s a perfect mind-art connection.I had other concepts inspired by Journey to the Center of Earth, Castle Siege, or a pirate ship. My last-minute proposal was of a crystal mine with a dragon sleeping on a pile of gold. The demo team ultimately chose this as the concept for the project.It’s a great choice to showcase Unity’s capabilities, such as Sprite Shape, which was used to create the mine tracks, and 2D lights. We have a diverse cast of bipedal and four-legged characters that show sprite rigging capabilities. It’s a perfect scenario to tie together the story, art, and technology.I start by researching actual images of the sprites I want to create, because even stylized art needs to be believable.If you’re creating the first sprite for a new game, you can create multiple variants to eventually find the right art style. But if it’s a sprite for a game with an established style, you need the environment in which to place the sprite as a point of reference. This helps you to choose the correct proportions, color palette, and viewing angle (this is important when making a game with a camera angle pointed slightly upward and at an angle, such as a top-down isometric view).If your art uses outlines you’ll need to make sure the outline width matches that of the other objects in the environment. It’s also important for pixel art: If you make a sprite that doesn’t match the game palette you can change it, but if the pixel size is off that requires redoing it from scratch.Once you have your sketch and an environment in which to place the sprite, you can start making the sprite.I start with simple shapes or silhouettes and then fill in the details. I use mostly vector graphics because they’re flexible and easy to edit. I can edit colors and shapes, or scale my sprite without losing quality.I like to have every sprite in the most editable form, whether it’s raster- or vector-based. So I use as many layers as I can without sacrificing performance. It’s important that I can always go back to my original file to change some parts or colors to create a different sprite.I flatten my sprite layers only when exporting to PNG format. I mostly use the Export Persona feature in Affinity Designer for exporting. It allows me to have one file with every sprite and export all of them with a single click. I can also choose the Continuous mode when exporting, so the sprite will be automatically exported when I change anything on it. It’s a huge time saver.A good normal map can make or break the illusion of a sprite being 3D. Every pixel in a normal map stores data about the angles of the main texture. The red, green, and blue (RGB) channels store angle data for the X, Y and Z coordinates. 
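Before getting into Jarek's examples, here is a quick sketch of the standard convention behind those channels (an editorial addition, not from the interview): each normal component in the range -1 to 1 is remapped to a 0-255 color value. The NormalMapColors class is a hypothetical helper name.

```csharp
using UnityEngine;

// Hypothetical helper showing the usual packing of a tangent-space normal
// into a normal-map color: each component in [-1, 1] maps to [0, 255].
public static class NormalMapColors
{
    public static Color32 Encode(Vector3 normal)
    {
        Vector3 n = normal.normalized;
        return new Color32(
            (byte)Mathf.RoundToInt((n.x * 0.5f + 0.5f) * 255f), // R: 0 faces left, 255 faces right
            (byte)Mathf.RoundToInt((n.y * 0.5f + 0.5f) * 255f), // G: 0 faces down, 255 faces up
            (byte)Mathf.RoundToInt((n.z * 0.5f + 0.5f) * 255f), // B: how much the pixel faces the viewer
            255);
    }
}

// Encode(new Vector3(0f, 0f, 1f)) - a pixel facing straight out of the
// surface - gives roughly (127, 127, 255), the flat normal-map color.
```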
Let’s look at how the RGB values affect the angles of the normal map.The above image is of a flat normal map in which the pixels are facing the camera. Its RGB values are 127, 127, and 255, respectively. Each color channel can have a value from 0 to 255, so 127 is near the middle. If I want my surface to face left (-90 degrees), I need to set the R color value to 0. If I want it to face right, I set R to 255. If I want it to face straight down or up, I set the G channel to 0 or 255 respectively.One way to paint a normal map is to make drawings of your sprite lit from different angles and combine them into one texture. The sprite will be lit with one light from the right in the R channel and one light from the top in the G channel. In the B channel the sprite is lit from the front, but for the sake of simplicity you can omit this channel with 2D sprites.However, this approach can be time-consuming, as you will need to paint your shaders at least twice.Another approach is to use a normal map-generator app. You can open a sprite in a generator app, and with just a couple of clicks it generates a normal map. Generator apps do not take into account the angles of your sprite, so avoid using them on the entire sprite. Use it instead to generate normal maps of sections of a sprite that are beveled, such as chains, cables, or a dragon’s tail. Import a section into the normal map generator, tweak the values, export, and then add the necessary parts and details yourself.The technique I used to make normal maps for the sprites in Dragon Crashers was to paint the colors directly on the sprites. Before I explain this technique, I want to note something about the base-color sprite. If you plan to use 2D lighting extensively in your game and want to make the most of the normal maps, don’t paint the light and shadow onto your sprite.2D lighting doesn’t look good on a sprite that already has shadows painted on. You will end up doing double the amount of work because you’ll be painting the lighting in the normal maps. You can paint some non-directional shadows (ambient occlusion) and your sprite will look better, but it’s better to avoid any directional light, such as from the sun.To paint the normal map, you need to know which colors to use for different angles. For Dragon Crashers, I did this by referencing normal map palettes online. I then made one for myself in Blender and exported it as a .PNG file. The palette is a simple sphere; I picked the angle color I needed and painted it on. I mostly used vectors by making a shape and filling it in. You can also paint the colors with your brush of choice as you normally do on your drawings, or paint it pixel by pixel for your pixel art.Angle colors don’t need to be 100% accurate; a few degrees won’t make a difference. However, keep the overall shape of the sprite believable. If you use an angle color that doesn’t make sense in context, the whole shape will fall apart when lit.Painting normal maps can be tricky at the beginning because it requires good spatial imagination. Start with simple shapes like boxes or barrels to understand how to do it correctly, and in time you’ll master this technique.A couple of shortcuts to note: When there’s a spherical shape, you can paste the normal sphere from your palette. When you have a cylindrical shape, you can take a part of the sphere, paste and stretch it.Be aware that when you copy and paste parts of normal maps and then rotate them, it breaks the shading. But this can also be used to your advantage. 
For example, when you need a concave spherical shape, you can rotate the sphere 180 degrees to create a hole.Choose the method of generating normal maps that’s best for you. You will most likely have to make many assets for your game, so focus on the objects that will be visible most of the time and simplify other parts of the game. Choose the technique that will give you the best results for the least effort. Some tools that can help include:Normal Painter Krita’s Tangent brush Sprite Illuminator Laigter Sprite LampI always plan out my animations ahead of time to pinpoint what I want to achieve within the constraints I’m working in.For Dragon Crashers, I chose good proportions for the first characters and used this as a base for the others. I focused on three bipedal player characters and one enemy (let’s leave the dragon for now). All of these characters used the same sprite-skinning skeleton to take advantage of the Sprite Swap feature (currently in experimental mode) that comes with the Unity 2D Animation package. At the same time, each character needed its own distinct visual style to avoid looking like a simple reskin.To design the characters, I had to make sure that all of them could use the same skeleton, so I made a simple skeleton overlay in Affinity. That way, I could check whether a character’s limbs match the underlying bone structure. It turned out pretty well, and the characters are unique-looking: One has broad shoulders, one has bigger feet, and another a wolf’s head.A lot of planning went into choosing how many layers the characters needed and which bones would affect each layer because changing these elements later would cause a headache. Of course, there was some trial and error involved, but with the base character planned well, all of the other characters were easier to make.To import the characters into Unity, I used the PSD Importer because it allows me to have the same layer structure and positions as in Affinity. I designed my characters using vectors, so each layer consisted of a number of paths. To import a character into Unity, I had to rasterize each layer and export the file as PSD (and change the file extension to PSB). So I had two files for each character sprite: One was a source vector, and the other a rasterized version. This allowed me to have an editable file in case I wanted to make some tweaks to the character.After importing the PSB file into Unity, I rigged the character in the Skinning Editor. I made all the bones, auto-generated meshes for each of the layers, and used the Auto Weights feature to bind the bones.I optimized the character rig, first by cleaning up the meshes to make them use as few vertices as possible, and then cleaning the bone weights to make sure the character looks good in every pose. I double-checked the places where the joints bend, such as the ankles, knees, and elbows. I carefully placed the mesh points and their weights in these places so the bending of the limbs looks believable.After the rigging process, I made a Sprite Library Asset, which groups multiple sprites into Categories and unique Label names. This enables me to make other characters by just swapping this Sprite Library Asset for another one. I also added Sprite Swap for the eyes and mouth to create facial expressions, then I added 2D IKs to the character limbs to give me better control when animating the character.After these steps, I made my character a Prefab so the changes made to it would apply to other characters. 
I could make tweaks to IKs, change sorting layers, add some weapons or attachments, or attach some scripts to the base Prefab, and these changes then applied automatically to the other characters. This saves a huge amount of time if you have many characters.For other characters, I imported the PSB file as before, but this time I didn’t need to make the skeleton. I simply copied it from the base character and tweaked the topology and weights of the sprite meshes to fit the new character’s shape.Importing normal maps and mask maps was even easier. I copied the character into Unity by using the shortcut Ctrl + D (Cmd + D on Mac), opened it in Affinity, and replaced all the layers with their normal map (or mask map) counterparts. As the normal map isn’t a color texture, I had to uncheck the RGB option under Advanced > Sprite Import Settings. Now I could assign the normal maps and mask maps as Secondary Textures in the Sprite Editor.The characters were now animation-ready, and they could share the same set of animations. I used the same animation clips for most of the actions but gave each character its own personality by crafting for each of them unique versions of idle and attack animations.The workflow for animating the dragon was more straightforward. It didn’t need to have custom skins so there was no extensive planning involved. I could focus on design and rigging. A lot of time went into making sure that the wings, tail, and neck were rigged correctly and without visual artifacts when animating. It’s always good practice to test extreme poses when rigging, as it will save headaches later on.The process of setting the Sprite Swap, IKs, and additional maps was roughly the same as for the bipedal characters. Not counting the two extra legs.The first thing I need is a vision of the environment I want to create. The mood and general flow of the environment are clear in my mind before I start – the visuals, gameplay, and emotions. Details can always change later, but a foundational vision allows me to focus on what I want to achieve rather than placing Prefabs randomly.I start by exporting the assets for the environment to Unity to make the Prefabs. Once I have all the pieces in the Scene, I can go wild. Unity doesn’t restrict me to any particular workflow; I can start painting with Tilemap as a base, add sprite tiling on top with Sprite Shape, place sprites by hand, add lighting and effects, such as fog or particles. Again, because I already have a clear idea of the layout of the level, I can focus on visuals.There’s also the gameplay-first approach, designing the flow of the level. With this approach, it’s good to focus on the geometry of the main interactive layer by placing all the platforms, walls, and rooms first. Add interactive elements, such as enemies, obstacles, and pickups, then test the level and iterate as required.Overall, a good practice is to separate the interactive layer from the visual elements. This approach will save you a lot of time and work; figure out the core gameplay first and then add the visuals. This way you don’t need to redo all the carefully placed flourishes when you (or the level designer) want to redo the gameplay.The features’ integration with one another makes it easy to set up sprites and secondary maps, and they just work as they should with other features like normal sprites, Tilemaps, the sprite shader, and Shader Graph.2D lightingOne great workflow is 2D lighting and mask maps in conjunction with 2D rigging. 
It’s a similar workflow to what you would use to set up a 3D environment. I made a simple unlit sprite, normal map, and mask map for rim lighting, and I didn’t need to repaint the asset to match the environment and lighting conditions. The sprite is lit just like it was painted. It looks hand-painted and it fits the game environment.It’s a game changer. You can even make marketing assets with this setup. You can reuse your game environment, place the characters, set up the lights, and it looks incredible. You don’t need to make poses for different characters, paint the light and shadows, etc. And on top of that, you can add some post-processing effects to change the scene’s mood.In particular, I love the way I could use 2D lights to add shadows. When the setting Use Alpha Blend on Overlap is applied on a 2D light and the light intensity is very low, the light starts to shade the environment and acts as a shadow area. I used it to make the shadow below the dragon.Sprite ShapeI can’t imagine making a 2D game without Sprite Shape. It’s very easy to set up and edit. You can have a level in a matter of minutes, so it’s good for rapid prototyping. It’s not just for making level geometry. I used it to make mine tracks, bridges, hanging ropes, background scaffolding and foreground shapes.In Dragon Crashers, to fake the blur (which is expensive on mobile devices) I used a blurred edge texture. The use of Sprite Shape is only limited by your imagination. It takes just a few seconds to edit a shape, which is a great time-saver when you need to tweak your environment. I like how you can make sharp geometric shapes or use Continuous Mirrored Point Mode to make them more rounded. Sprite Shape also generates 2D Colliders saving you time on setting them up manually.If you haven’t used Sprite Shape, try it soon to see how it can improve your workflow.TimelineAndy Touch (a senior content developer at Unity) made almost all of the systems in Dragon Crashers with Timeline. This made the creation process seamless: I could hop on and make some small changes to any of the timelines without breaking anything. I love how modular the system is and how easy it is to edit a cutscene or any other gameplay element based on Timeline. And nesting timelines in each other made the whole process even more efficient.Affinity Designer is available for use on Mac, Windows, and the iPad. It supports vector and raster workflows and tools for 2D game artists.The Pen feature in Designer has many useful shortcuts that will help you make any shape you want without switching to another tool. Start with the shortcuts that are displayed at the bottom of the Designer window.Make your art as editable as possible. The editable Compound feature will help with this. Normally when you want to combine paths, they will become one solid path without the capability for future editing. Click on one of the geometry buttons on the toolbar and at the same time hold down Alt (or Option for Mac users) and the Compound path will form a group, but every layer will have an option on how it interacts with the other layers. You can choose between Add, Intersect, Subtract, and X modes. It’s very handy!Use the Document color palette. These are colors that are set globally for your document. Any object that uses the given color will update when you change the color in the palette. It’s handy for creating variations of objects and characters.The above image shows a blue-colored warrior character. His armor legs, arms, helmet, and weapon are all blue. 
Let’s say you want to change his color to green. By using a document color on every part you can change the color from the swatches palette and instantly have a different character.Use Symbols. Often you will have many duplicate objects placed around your canvas, such as level tiles or bricks. But what if you want to change all of the duplicated objects? You can use Symbols. Create one object and turn it into a Symbol. Then duplicate it. Whenever we change something in one of the Symbols, the others will change too.Organize with the Assets panel. Place all of your objects in this panel, and you’ll get an overview of all the things in the game in one place. You can group them by any criteria you want: the level that the object is used on, type, color, etc. Then you can drag and drop these objects to any document you have open. You can check for visual consistency, scale, how they appear in another level, and so on. You can also make mockup screens or “screenshots” of your game.Furthermore, you can store UI elements in the Assets panel like button designs, switches, and icons, and use them when designing your game’s interface.Affinity Photo is a full-suite photo editing solution available for macOS, Windows, and iOS.The suite of Affinity apps is set up to be used interchangeably: You can open your document in either Designer or Photo, no matter which app you saved it in first. You can switch between them by using the menu command File – Edit in Designer (or File – Open in Photo).Both apps share most features; Asset Panel and some vector features are available in Affinity Photo as well. The interfaces are similar making it fast to switch between the two.The most important feature in any raster app is the brush feature. Affinity Photo has a brush engine that’s very fluid and provides all the needed functions, such as table support. You can also export and import your own brushes. I love the stroke stabilization option: When you turn it on your brush lines become very clean, which is good for making outlines.In addition to the great raster graphics and brushes, a major feature I like to use is the Live Filters. They allow you to dramatically change the look of your art without losing editability. I love the perspective filter in particular because it enables you to deform layers to match the perspective which is useful for placing windows on buildings, posters on walls, or textures on surfaces. With Live filters, you also get Live Adjustments Layers and Blend Modes, features that enable you to see results instantly.Finally, I like Layer Effects, which enables you to add gradients, drop and inner shadows, outlines, 3D effects, and more. With a bit of creativity, you can achieve almost anything with them and they’re also non-destructive.Thanks to Jarek for taking the time to share his tips on 2D art, Unity, and Affinity Designer and Photo. If you are new to Unity, learn more about Unity’s 2D toolset here.

>access_file_
1296|blog.unity.com

Marketing products in hyper-realistic ways with Unity ArtEngine

How does an office furniture manufacturer leverage 3D product visualization technologies to improve its customer experience? Given the bespoke nature of its products, Flokk decided it was time to revamp its entire web platform. A key part of this project involved digitizing all of Flokk's chair materials and projecting them onto 3D models, which it did with the help of its trusted solutions partner, Forte Digital, and a scanning workflow that leveraged Unity ArtEngine.

Flokk is a market leader in the design and manufacturing of premier workplace furniture. Sold in over 80 countries, its products include those from brands such as HÅG, Offecct, Giroflex, RH, Profim, 9to5 Seating, BMA, RBM and Malmstolen. Each day, its 2,000 employees work together to realize a common vision: Inspire great work. Design is at the heart of Flokk's products. Each of its products can be customized, thanks to its highly efficient supply chain and manufacturing processes. However, design wasn't always at the center of its online customer experience. In 2019, Flokk decided to change that by making a large investment in a new e-commerce platform, with the help of its trusted partner Forte Digital. A critical piece of work involved digitizing the company's chair materials using Unity ArtEngine and integrating them into a 3D configurator developed in-house that would enable customers to design a chair in 3D within the comfort of their web browser.

The results:

- Increased web traffic by enabling online self-serve checkout for the first time
- Internal efficiencies associated with the lower requirement for physical photoshoots, since Flokk can now generate high-resolution photos with a 3D configurator
- Increased reliability of the ordering experience, resulting in fewer order errors and returns, and more satisfied dealers and customers
- Consistency across Flokk's branding and positioning; Flokk's value for design is now exhibited not only in its products, but throughout all stages of the customer journey
- A sustainable competitive advantage by differentiating itself in a traditional industry, where investment in real-time 3D technology can be lagging

Flokk's chairs are bespoke and have millions of potential configurations, which creates significant complexity in the company's supply chain and ordering process. For example, for a particular office chair SKU, a customer can choose from among dozens of fabrics, specify characteristics about the seat size, lift, foot base and wheels, and add additional accessories such as a neck rest and arm rests – an experience not so dissimilar to buying a car. Design is one of Flokk's core tenets. The company prides itself on creating aesthetic, high-quality, durable products that its customers love. (Indeed, a single office chair is priced at $700–$2,000 USD.) Customers expect Flokk's values of quality and design to manifest in all touchpoints with the company, including the online ordering experience. However, prior to 2019, Flokk did not have the most streamlined web experience. Behind the scenes, static content posed a challenge to engaging customers. Recent acquisitions further compounded the issue of consistency and control. The old website infrastructure had low scalability and could not support the new complexity brought on by the additional brands and products, resulting in a subpar customer journey.
Flokk also had no online self-serve channel. "We saw that both dealers and customers expected to find our products online, and wanted to see how they could customize them based on their needs," explained Martina Winsell, E-Commerce Manager at Flokk. "Since our products are quite complex, it was important for us to focus on usability when thinking about the future state." Indeed, a complete e-commerce platform overhaul was overdue. The project goals were ambitious, but clear: design a common infrastructure that unlocks the self-serve channel, enables the creation of tangential sales tools, scales with the company product portfolio, and facilitates the best customer experience possible. After deciding to make the investment, the next step was to find a trusted partner.

Working at the intersection of technology, design and strategy, Forte Digital is a consulting company that builds digital products and services through long-term partnerships with its clients. Its portfolio includes companies that span many industries, such as Farmasiet (Norway's largest online pharmacy), Nationaltheatret (a world-renowned theater), Gyldendal Akademisk (a large academic publisher), and NorgesGruppen (Norway's largest retailer). Forte Digital's interdisciplinary expertise has been core to its success in building solutions that create sustainable value for its customers. That expertise also made the company an obvious choice as a partner to deliver on Flokk's goals, and it was ultimately selected to do so.

At the center of the project was a common product visualization infrastructure (called the "Configurator") that could accurately depict Flokk's products and their many permutations, and be deployed across various web platforms, such as the customer-facing website, a new dealer tool called Flokk Hub, and other sales and marketing tools. Given Flokk's goals for scalability and efficiency, it made sense for the Configurator to be based in 3D, in contrast to, for instance, physically photographing every chair and its configurations at every angle, which would be incredibly time and cost intensive. After aligning on the infrastructure, the next step was to do the work that would actually allow Flokk to represent its products virtually, including digitizing its many chair materials.

Visual fidelity and accurate representation of Flokk's materials were of utmost importance, and thus it made sense to adopt a scanning workflow to create digital twins of the company's hundreds of chair materials. Other options included generating the materials procedurally (i.e., from scratch) using software, or purchasing scans from a public materials library. However, scanning Flokk's actual materials was the only way to ensure the results would be true to life. Specifically, the project team decided to use a scanning workflow called photometric stereo, a technique that allows for the capture of a subject's surface properties using several photographs taken under different light conditions. Using photometric stereo, one can extract data on not only albedo (i.e., color – just as a typical flatbed scanner can do) but also normals (i.e., a surface's relief pattern), and sometimes specularity and roughness, which are key inputs into creating a physically based rendering (PBR) material – the industry standard format. Given the number of materials to be digitized, the team needed the process to be as automated and consistent as possible.
Piotr Bieryt, a 3D artist at Forte Digital, decided to build a custom, fully automated rig, and process the scans with ArtEngine. After assembling the rig using laser-cut plywood and 3D-printed elements, Bieryt covered the prototype's interior with black velour to prevent discoloration and light reflections and installed a removable black plate on the bottom to capture transparency using illumination from below. The rig was then configured to be controlled by Arduino. "I love building things and automating processes, so I had a lot of fun!" Bieryt explained. He used a mirrorless Olympus 16 MP camera with a 60mm macro lens (Micro Four Thirds system) to capture the fine details on Flokk's fabrics, and shot in RAW to ensure accurate white balance and colors. After color correction, Bieryt began his work in ArtEngine.

Here's an overview of Bieryt's typical workflow in ArtEngine. In the example below, the sample was a 10x10 cm swatch of a semitransparent fabric from one of Flokk's chairs. When digitizing materials, transparency requires an additional transparency channel, which can create complexity. To address this, Bieryt scanned the fabric twice, once with it lit from the sides (a standard photometric stereo capture process), and once lit from the bottom (to capture transparency). After importing into ArtEngine, he plugged each of the two image sets into a Multi-Angle to Texture node to combine the 16 images into three maps: albedo, normal, and transparency. He then applied Gradient Removal (similar to the High Pass filter in Photoshop) to both the albedo and normal maps to remove visible gradients and enable better tileability. After applying Compose Material to merge the three maps into a single material, Bieryt used Pattern Unwarp to correct for natural distortions in the fabric. Bieryt notes, "It's worth spending time straightening a sample before scanning to reduce time spent post-processing, but if you can't get all the kinks out, ArtEngine has great tools for correcting after the fact." He then used Crop to frame a portion of the straightened texture with a 1:1 aspect ratio. Below is the node graph.

Next came Mutation Structure, a node used to further improve tiling by using AI to recognize and eliminate repetitive patterns, while keeping the structure of the underlying pattern intact. "Mutation Structure is ArtEngine's magic," Bieryt notes. "It was a huge game-changer for us that allowed us to focus more on the artistic side of scanning, rather than fight with software or algorithms." After adjusting several parameters, including the world scale factor and output dimensions, he arrived at a highly detailed 8K texture that was six times larger than the 10x10cm scanned sample and had no obvious tiling artifacts. Finally, Bieryt used Height Generation and Roughness / Gloss Generation to create height and roughness maps, as well as a final Compose Material to compile everything for easy export. Below is the final node graph.

Materials that differed only by their color were scanned only once. Creating multiple colors of fabrics with the same underlying structure, as well as applying the materials to the 3D chair models and doing the final rendering, was done in Blender Cycles. Since the transformation began, Flokk has already seen tangible results. As online checkout has rolled out on a country-by-country basis, the company has seen a significant increase in web traffic. Its dealers and customers are more satisfied and delighted, boosting brand loyalty and reputation.
The company also has fewer overhead costs associated with processing manual orders. The improvements are perhaps best visualized by taking a stroll through its consumer-facing website, Flokk.com. After selecting a chair of interest, users can customize nearly every aspect, view their configuration up close and from various angles, and understand the costs associated with changing certain features before deciding to place an online order. The project remains in flight as Flokk continues to expand self-service capabilities on country-specific websites and deploys new tools for its sales team and customers. For example, one current initiative aims to leverage the Configurator for a "Showroom mode," an app installed on iPads in its showrooms around the world so customers can easily explore and order Flokk's products while in-store.

More broadly, the work has shown the entire company the importance of investing in the customer experience and adapting to shifting preferences as consumers increasingly feel more comfortable online, making purchases through the web or an app. To be sure, Flokk has positioned itself exceptionally well to compete, particularly in an industry like furniture manufacturing where investment in the end-to-end user journey can be lagging. Indeed, by choosing to continuously keep the customer experience at the forefront of its investment decisions, Flokk has created a sustainable competitive advantage that will carry the company forward through all its future successes.
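As an aside on the photometric stereo technique mentioned above, here is a minimal, self-contained sketch of the underlying math (an editorial illustration, not Forte Digital's pipeline, which ran through ArtEngine's Multi-Angle to Texture node). It assumes a Lambertian surface photographed under three known, non-coplanar light directions: the three intensities of a pixel form a 3x3 linear system whose solution has the albedo as its length and the surface normal as its direction.

```csharp
using System;

// Per-pixel photometric stereo under a Lambertian assumption: three intensity
// samples of the same pixel, each lit from a known direction, satisfy
// I = L * (albedo * n), so g = L^-1 * I, albedo = |g|, n = g / |g|.
public static class PhotometricStereo
{
    // lightDirs: three unit-length light directions (the rows of L).
    // intensities: the pixel's brightness (0..1) in the corresponding photos.
    public static (double albedo, double[] normal) SolvePixel(double[][] lightDirs, double[] intensities)
    {
        double[][] inv = Invert3x3(lightDirs);

        var g = new double[3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                g[r] += inv[r][c] * intensities[c];

        double albedo = Math.Sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]);
        double[] normal = albedo > 1e-9
            ? new[] { g[0] / albedo, g[1] / albedo, g[2] / albedo }
            : new[] { 0.0, 0.0, 1.0 }; // dark pixels fall back to a flat normal

        return (albedo, normal);
    }

    // Cofactor inverse of a 3x3 matrix.
    static double[][] Invert3x3(double[][] m)
    {
        double det =
            m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
            m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
            m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);

        if (Math.Abs(det) < 1e-12)
            throw new ArgumentException("Light directions must not be coplanar.");

        return new[]
        {
            new[] { (m[1][1] * m[2][2] - m[1][2] * m[2][1]) / det, (m[0][2] * m[2][1] - m[0][1] * m[2][2]) / det, (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det },
            new[] { (m[1][2] * m[2][0] - m[1][0] * m[2][2]) / det, (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det, (m[0][2] * m[1][0] - m[0][0] * m[1][2]) / det },
            new[] { (m[1][0] * m[2][1] - m[1][1] * m[2][0]) / det, (m[0][1] * m[2][0] - m[0][0] * m[2][1]) / det, (m[0][0] * m[1][1] - m[0][1] * m[1][0]) / det }
        };
    }
}
```

Looping SolvePixel over every pixel of three aligned captures yields an albedo map and a normal map ready for PBR material authoring.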

>access_file_
1297|blog.unity.com

10 rewarded video monetization lessons from meta match-3 games

Meta match-3 games are a subcategory of casual games that combine traditional match-3 puzzle mechanics (think Candy Crush) with meta layers, such as quests, collections, and mini-games. These additional layers create a series of linked cascading goals, in which users are motivated to accumulate currency and complete meta missions. Rewarded videos are an effective way to cater to and enhance these motivations, and when implemented smartly can increase session length, retention rates, and ARPDAU. But how do you unlock this potential? Below, Anna Popereko, ironSource's Game Design Consultant, shares examples of successful rewarded video implementations from meta match-3 games to help inspire you - whether you're working in this genre or not.

The importance of a good placement strategy

With good placements your rewarded videos will be highly visible and accessible. This will enable you to maximize engagement and usage rates, and in turn meet your ad revenue goals. Making rewarded videos available in the most valuable moments to the user is key to providing the best user experience. In terms of the rewards you offer users with each rewarded video, there are various possibilities. You might offer the ability to unlock in-game content and currencies, progression rewards, surprises, or even bonus levels. In order to appeal to a larger audience, it's important to offer multiple types of rewards.

10 rewarded video placements in meta match-3 games

Let's dive into some real-life examples.

1. Extra currency in home screen or shop

Extra currency in the home screen or in-game shop is one of the most common types of rewarded video placements for meta match-3 games. Users in the game's store show intent to access premium content. Most users won't be willing to spend real money, so giving them the option to watch an ad in return for earning gems - which can then be used to unlock premium items - can be valuable to them and increase usage rates and ARPDAU for you. In the example below from Kitten Match's in-game store, we see the rewarded video traffic driver is surrounded by the IAP offerings and is blue, helping it stand out from the green buttons around it. In Property Brothers, an ad placement for extra currency sits on the home screen and offers users 10 gems in return for watching an ad. Being specific with the number often helps drive up usage rates, because users know exactly what they're getting in return.

2. Double or triple rewards at the end of a level

End-of-level rewarded video placements can provide progression-based or monetary value to users. What does that mean exactly? When a user completes a level and earns a prize, you can offer them the opportunity to double or triple its value by watching a rewarded video ad. That's what the publisher Special did in its meta match-3 title, Kitten Match. This placement seeks to tap into the positive and rewarding feeling the user has after completing a level.

3. Add moves after failing

If users don't pass the level in the number of moves the game gives them, consider placing a rewarded video that gives them more moves. Typically we see 3 to 5 moves being offered for watching the ad - we've also seen developers use a wheel of fortune that contains different quantities of additional moves. To avoid IAP cannibalization, make sure you offer fewer moves through rewarded video than what you offer with in-app purchases. For instance, if you offer 5 moves for $1.99 in the store, give up to 3 as a reward for watching your ads.
You should also limit the number of "Add moves" rewarded video placements users can watch per session - if this is unlimited, they'll have no reason to pay for extra moves in the store. The example below is from Ohana Island - note how the rewarded video traffic driver stands out in blue.

4. Extra life after running out of lives

Alternatively, when a user runs out of lives, you can use a progression-based reward - like an extra life - that lets them keep playing in exchange for watching a rewarded video. In the example below, from Kitten Match, the user is able to unlock a free life after failing the level instead of waiting for their life to automatically restore. In this case, the user has double the incentive to save time and gain a life through watching the rewarded video. To increase the engagement rates for these placements, add a countdown to give users a sense of FOMO if they don't engage. As always, you can test this out through A/B testing. You can also use more unpredictable end-of-level placements, for instance offering users a chance to spin a wheel full of prizes, or even to open a simple "surprise box".

5. Start with a bonus

This ad placement gives users extra help for the upcoming level. It can be used in different ways: the placement could offer a random power-up, a specific power-up, or additional moves. Take the example below, from Property Brothers: users can click "Play" to start the level, or they can click "Play" to start the level with 2 extra moves - which sounds more appealing. Note how the game shows the user the goal of the upcoming level, giving the user a sense of challenge - a feeling that perfectly aligns with the desire to gain extra moves via the rewarded video.

6. Surprise chest boxes

There are various ways to use chest boxes with rewarded videos: the traffic driver for the mystery box could appear only after specific achievements, like collecting a certain number of bombs; it could appear on a timer basis, every few hours; or it could appear based on progress, such as every two or three levels. Alternatively, you can enable users to unlock a chest box immediately by watching a rewarded video placed on the home screen. To increase engagement and usage rates for your chest box placements, take over the screen with the offer, rather than putting it as a small button in the corner of the screen. Check out the example below from Storyngton Hall - this screen takeover can be shown during a level or at the end. Note how the design is exciting and makes the box seem magical and valuable.

7. Multiple videos in a row

Instead of showing a one-off rewarded video placement, you can encourage users to watch multiple videos consecutively in order to win a lucrative reward. Offer a small reward for each one, and also test out adding a roulette or spinning wheel with a generous prize at the end as a bonus to the user. This can help encourage them to get to the end of the multi-video placement and ensures they end on a positive note. To help users better understand there are still more videos to watch, while giving them an incentive to watch many in a row, add a checkmark next to every video they complete. Check out this example from Storyngton Hall - it offers users the chance to watch 5 ads in a row, earning 50 coins for each video in addition to a special mystery reward after the fifth video. Make sure that the rewarded video provider you're working with can guarantee zero latency, like ironSource does.
You want to make sure the next ad is always available and ready to play - otherwise you risk damaging your game's user experience.

8. Daily bonus

Use daily bonuses as a retention-boosting mechanism. The placement can be used in different ways: for example, you could give users a daily bonus for free and use the rewarded video to multiply the offer, or you could provide rewarded videos to users as a way to unlock daily rewards that ordinarily cost real money. The example below is from Jam City's game, Vineyard Valley, which gives users 100 coins as a free daily bonus, and offers an additional, mysterious bonus through watching a rewarded video. Note that they clearly state what users need to do and what they get in return using copy. Also, they encourage users to log in again the next day for more rewards - clearly showing its objective of increasing retention.

9. In-level bonus

In-level bonuses let users unlock boosters inside the game's levels by watching rewarded videos. These placements help players pass the level - the more people win, the more they can continue playing. The longer people spend playing your game, the greater your opportunities are to monetize your content. In this regard, it's a similar logic to the "add more moves" placement. In terms of the traffic driver itself, place it in the corner of the screen from the start of the mission, and make sure it's animated enough to grab users' attention while being non-disruptive to the gameplay experience. In Hell's Kitchen, the developer places the traffic driver on the right side of the screen, using a video symbol to make it clear that it's a rewarded video placement, and a purple background to help it stand out. Once the user opens the ad, they see a wheel of fortune that contains several types of boosters.

10. Extra choice

Rewarded ads that give users extra choices are particularly effective in choice-based or narrative games, where choosing from a selection of on-screen sentences or items directly impacts the gameplay experience. For example, in a narrative-based game, one choice might be free but less appealing, while the other two options are much more appealing but require gems or in-game currency to select. Using rewarded video to give users extra currency to unlock the best options is an effective strategy for maximizing revenue while helping users get the most out of their experience.

Step into your users' shoes

Use these examples as inspiration, but be sure to A/B test everything you implement in your own game. Ultimately, putting yourself in your users' shoes will help you provide the most valuable placements with the best user experience. If you focus on that, high engagement rates, usage rates, and ARPDAU will come naturally. Finally, continue looking for inspiration from other game genres, make sure to research what your competitors are doing, and stay updated with your genre's benchmarks for engagement and usage rates. That way, you have a reference point to measure whether your placements are performing well or not.
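To make the cannibalization advice from placement #3 concrete, here is a small, hypothetical sketch of how a game might gate its "Add moves" rewarded placement. Every name in it (RewardedMovesGate, the specific limits, the adIsLoaded flag standing in for your mediation SDK's availability check) is illustrative, not taken from any SDK.

```csharp
// Hypothetical helper reflecting placement #3: offer fewer moves via rewarded
// video than via IAP, and cap the rewarded "Add moves" offers per session so
// the store bundle keeps its value.
public class RewardedMovesGate
{
    const int MovesPerRewardedAd = 3;        // the IAP bundle sells 5, so reward less
    const int MaxRewardedAdsPerSession = 2;  // unlimited ads would cannibalize IAP

    int _adsWatchedThisSession;

    // Should the "watch an ad for extra moves" button be shown right now?
    // adIsLoaded stands in for your mediation SDK's availability check.
    public bool CanOfferRewardedMoves(bool adIsLoaded) =>
        adIsLoaded && _adsWatchedThisSession < MaxRewardedAdsPerSession;

    // Call from the rewarded video's "user earned reward" callback.
    public int GrantMoves()
    {
        _adsWatchedThisSession++;
        return MovesPerRewardedAd;
    }

    // Reset the cap when a new play session starts.
    public void OnNewSession() => _adsWatchedThisSession = 0;
}
```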

>access_file_
1298|blog.unity.com

How to attract users to your rapid delivery app and hang on to them

Rapid delivery apps - apps that deliver groceries within 10-30 minutes of an order - are attracting millions of users as well as plenty of big-name investors. PitchBook estimates that investors have poured $14 billion into these services since the beginning of 2021, giving rapid delivery apps the capital to improve and optimize - and it makes perfect sense why they’re getting so much attention. Because of the pandemic, users have made a habit of ordering their groceries through apps, and receiving those groceries within 30 minutes of an order is a genuinely innovative concept.

Rapid delivery apps largely operate out of major cities across the UK, US, and Europe, with a few first movers leading the market in these areas - Getir, Flink, Gorillas, Gopuff, Weezy, Dija, Jiffy, Fancy, and Snappy are all notable names. However, with competition rising (smartphone use of grocery delivery apps rose over 40% in 2020, according to eMarketer) and so much capital being poured into these services in only a year, it’s becoming challenging for rapid delivery grocery apps to attract and hold on to users. Ultimately, everyone is chasing market dominance, but only a few will get to be the app of choice. So, how can you crush the competition, reach more users for your rapid delivery app, and ensure they stick around? Here are three strategies to get you started.

Reach users first and keep them longer

Many users are already using rapid delivery apps, but the space is still fairly new - most companies were established just last year, in 2020. There are millions of users who are unaware they can get Advil delivered in minutes without ever leaving the house, and it’s important to align your UA strategy to reach those users before the competition does. Think about the potential for growth in reaching users directly on their devices while they’re setting up a new phone. With 95% of users downloading 40% of the apps they will ever install on the device during the first 48 hours after purchasing it, device setup is a critical touchpoint for becoming the app of choice. For example, ironSource Aura uses contextual information to place your app in front of users who are likely to engage during a customized device setup experience. Especially in such a young industry, optimizing your marketing tactics toward attracting users who will be loyal to your app is vital to beating out the competition. But a solid UA strategy must accompany a solid product, which leads us to the next point.

Choose the delivery model that optimizes convenience

Ultimately, users order from rapid delivery apps because they want their items fast, and they’re even willing to pay more for it. To attract and keep users, you should do everything you can to make your service the most convenient and efficient - and that starts with the delivery model you choose. With most users ordering from rapid delivery services for urgent essentials or last-minute items, these companies fit into users’ lives for the in-between, spontaneous moments. As people continue to live hectic and demanding lives, it’s important to position your app as a convenience offering through an integrated model. Some rapid delivery apps operate a vertically integrated model, where they source and own their inventory; in these cases, the service handles the warehouse and delivery logistics.
Other apps have taken it a step further by hiring couriers as full-time employees, ensuring there are always people on standby to deliver. At the end of the day, choosing one of these models and sticking to it will give you the upper hand in achieving the fastest speeds in the long run. Once the bones are good, you can start thinking about the value proposition that will set you apart.

Differentiate your product offering

As mentioned, the number of rapid delivery apps has exploded in the past year, making it hard to stand out. To acquire users who are likely to stick around, focus on building a product that offers something unique and valuable. Ultimately, you don’t have to be the fastest to get a leg up. You can differentiate your product by the service you provide and the type of items you offer. For example, you can increase your delivery range, allowing you to reach users outside of major cities. You could also offer items from specific retailers, such as local or organic grocery stores. The Istanbul-based app Getir not only boasts about its ability to deliver over 1,000 products, but also differentiates itself by delivering around the clock.

It’s also worth differentiating your product with promotions. Since time is of the essence in the rapid delivery space, it’s important to take responsibility for underperformance - try offering some form of compensation when an order is late. You could also entice users to sign up by offering a promo with their first order, and use marketing channels such as ironSource Aura, which has a full-screen offer placement, to show these promos. As an example, Getir increases its value by not charging a delivery fee for orders over £10. By showing users that you care about quality and their needs, you’ll retain more of them for the long term - according to Nextiva, over 70% of users stick with a brand that has friendly employees and quality customer service.

In this incredibly saturated market, differentiation is a must. There are countless ways to make your product stand out, and it all comes down to evolving and innovating in a unique way. Rapid delivery apps are among the most innovative and disruptive concepts to come out of the last couple of years - it’s rare for a new idea to affect the market as much as rapid delivery has. On top of that, investors across the world, including Ophelia Brown from Blossom Capital, say they have never seen anything like the amount of capital that has gone into rapid delivery all at once, allowing these apps to grow in an unprecedented way. It’s clear that rapid delivery apps are not going anywhere, and with all this funding and attention being poured into the industry, competition is only going to increase. It’s valuable to prepare for that competition by finding new ways to attract users and keep them around. Looking at your business model, finding ways to differentiate your product, and analyzing your UA strategy are great places to start.

>access_file_
1299|blog.unity.com

Your toolkit for self-publishing a hit mobile game

Editor’s note: This article is based on Antti Hattara’s exclusive presentation at LevelUp 2021. Antti is a mobile games industry veteran based in Berlin. He’s currently the founder and CEO of indie studio Starberry Games. Check out the video from LevelUp 2021 below.

Many developers at some point ask themselves: should we launch and run the game ourselves, or should we partner with a publisher? While it may seem daunting to go it alone, it is possible to succeed down the solo route. There are two main parts to self-publishing: distributing a mobile game globally in the app stores on your own account, and operating the game as a service - which entails marketing, analytics, and support. We self-published Merge Mayor: Idle Village and learnt a lot along the way. Below, we’ll share the tools we used during the tech launch - from conceptualization through production - as well as during the soft launch phase.

Tech launch

Concept

The conceptualization phase is all about market research and gathering customer insights. We used AppAnnie for insights on market size, trends, and our category's benchmarks. We also used Geeklab to test our concept, its theme, and its art style by running small-scale marketing campaigns that directed users to a simulated app store landing page. This is a great way to check whether your ideas have market potential and strong appeal - there's no point investing time and resources in building a game that doesn't have an audience. It also helps guide your app store optimization, showing which colors, screenshots, and graphics bring the highest conversion rates and lowest CPIs. Optimizing these metrics will enable you to scale your game in the most economically efficient way, so starting as early as the concept phase is highly recommended.

Once you’ve formed a stronger idea of what your game concept will look like, it’s time to start deepening your understanding of your target audience. This brings us to the customer insights part of the concept phase. We used three tools to better understand our players: PlaytestCloud, where we ran usability tests for our concept and tested our competitors’ games; Smartlook, where we collected real recordings of users playing our game and observed how they progressed over multiple sessions (you should show the users you're observing a pop-up message that lets them opt in to being tracked - a minimal consent-gating sketch follows below); and 12 Traits, whose in-game questionnaires allowed us to build audience profiles and predict what our future audience would care about.
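On the opt-in point above: whichever recording or analytics tool you use, it is safest to gate it behind explicit consent. Below is a minimal Kotlin sketch of that gating, assuming a hypothetical `SessionRecorder` interface as a stand-in - these are not Smartlook's actual API calls.

```kotlin
// Hypothetical stand-in for a session-recording SDK; wrap the real vendor
// calls (e.g. Smartlook's) behind this interface in your own project.
interface SessionRecorder {
    fun startRecording()
    fun stopRecording()
}

// Recording only ever starts after the user taps "Allow" in the opt-in pop-up.
class RecordingConsentGate(private val recorder: SessionRecorder) {
    private var optedIn = false

    fun onOptInAccepted() {          // wire to the pop-up's "Allow" button
        optedIn = true
        recorder.startRecording()
    }

    fun onOptInDeclined() {          // wire to the pop-up's "No thanks" button
        optedIn = false
        recorder.stopRecording()
    }

    fun onSessionStart() {           // call from your app's startup path
        if (optedIn) recorder.startRecording()
    }
}
```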
Production

With the concept phase behind us, the next stage is testing a more built-up product in the market as early as possible, and there is a simple set of tools I believe everyone should be using for this. The first is Google Open Beta, which allows you to keep your game under the radar while still running UA campaigns: you can acquire paid users while your game receives no organic traffic from the app stores - it won’t appear in searches or game category lists. Users can leave feedback but aren’t able to leave public reviews, helping you improve your game design and user experience without the risk of negative public comments.

Now that your game is available to install, you want to drive some traffic. Facebook Ads are a great way to start: their campaigns are simple, effective, and suitable for small budgets. In addition, their ad creatives tool produces 30-second videos or banner carousels from content you've simply captured on your device. For Merge Mayor, for example, we used Facebook Ads to acquire quality users from the UK at $0.35 per install. Once you begin driving installs from new users, start using an analytics tool such as Facebook, Google Firebase, Unity, or GameAnalytics to measure your KPIs. At this stage, focus on essential metrics like early retention (Day 1 to Day 3) and session lengths, and track how users progress through your levels (a toy Day-N retention calculation is sketched at the end of this post).

Soft launch

Upgrading your analytics capabilities

After this testing phase, you’ll step up your game's development as you prepare for a soft launch. As you build out your game, you'll need to answer more advanced questions about user behavior and game metrics - and for that you need to upgrade your analytics tech stack. Try data warehouse tools like Google BigQuery, where you build the data engineering yourself; DeltaDNA, which handles the backend and lets you simply operate your analytics; or a full-service tool that handles both the backend and operations, like Dive. In addition, integrate an attribution partner to get data about post-install actions and where your installs are coming from. The top options are AppsFlyer, Adjust, Singular, and Tenjin.

Thinking about growth

At this stage, start thinking seriously about in-game monetization and expanding your user acquisition. For tracking in-app purchases, make sure to use in-app receipt validation, which helps keep your data clean - you can build it yourself or implement it through your attribution partner. It’s equally important to think about your ad monetization early on. For Merge Mayor, we use ironSource's mediation solution to manage our ad monetization strategy: we’re delighted with it, particularly its quick and efficient setup process, its integration with AppsFlyer, and its cohort-based ad LTV reporting. The next stage is expanding your marketing efforts across multiple UA channels like ironSource, Google AdMob, and Unity Ads, in addition to Facebook Ads. Ahead of your soft launch, experiment with different campaign types - from app install campaigns to event-focused campaigns. Also be sure to iterate on your ad creatives and A/B test many variations to find the top performers. Expanding your marketing is as important as it is time-consuming - consider hiring someone to head this effort internally or using an agency.

Laying the foundation for a big global launch

Leveraging a combination of these tools will help set you up for a successful global launch. Once you’ve shipped your game and begun scaling UA, your work is only getting started. You need to constantly optimize your marketing and monetization, and dedicate resources to increasing retention through liveops - from frequent game updates to in-game seasonal events. That’s a big topic which needs its own session - hopefully next year I’ll be back to take you through our liveops strategy with Merge Mayor.
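As referenced above, early retention is simply the share of an install cohort that comes back exactly N days after installing. A minimal Kotlin sketch follows, assuming you can export raw install and session dates from whichever analytics tool you use - the data shapes here are made up for illustration.

```kotlin
import java.time.LocalDate
import java.time.temporal.ChronoUnit

// Toy Day-N retention from raw install/session dates.
data class Player(val id: String, val installDate: LocalDate, val sessionDates: Set<LocalDate>)

fun dayNRetention(cohort: List<Player>, n: Long): Double {
    if (cohort.isEmpty()) return 0.0
    // A player is "retained on day N" if any session falls exactly N days after install.
    val returned = cohort.count { player ->
        player.sessionDates.any { ChronoUnit.DAYS.between(player.installDate, it) == n }
    }
    return returned.toDouble() / cohort.size
}

fun main() {
    val cohort = listOf(
        Player("a", LocalDate.of(2021, 9, 1), setOf(LocalDate.of(2021, 9, 2), LocalDate.of(2021, 9, 4))),
        Player("b", LocalDate.of(2021, 9, 1), setOf(LocalDate.of(2021, 9, 1)))
    )
    println("D1 retention: ${dayNRetention(cohort, 1)}")  // 0.5 - only player "a" came back the next day
    println("D3 retention: ${dayNRetention(cohort, 3)}")  // 0.5 - player "a" came back on day 3
}
```

In practice, Firebase, GameAnalytics, and the other tools listed above report these numbers out of the box; a hand-rolled version like this is mainly useful for sanity-checking your event logging.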

>access_file_
1300|blog.unity.com

How to build games at rapid speed: Q&A with Rappid Studios

ironSource sat down with our mediation partners Nikos Tourlos and Antonis Taglartzis, co-founders of the indie studio Rappid, to learn all about their gamedev journey - from developing games as university students to pursuing game development full-time. Keep reading for a transcript of the conversation and more of Rappid Studios’ insights on how to build a game at rapid speed - with just two people at the helm.

How did you first get started in gaming?

We started making games for fun 7 years ago as computer science students at the University of Athens - at the time, we never imagined we would be successful at it. After about a year of figuring things out, we started working at it on a more serious level and began producing bigger and better games. It took us about 4 years of building up our professional game development experience to come up with Epic Battle Simulator.

What’s your favorite part about developing games?

Antonis: It’s incredible and seems surreal that we are able to apply theoretical knowledge - math, physics, everything we were taught since childhood - and see it come to life in the games we make. We can see science coming to life through our games. It is extremely rewarding to know that so many people are enjoying our games and are engaged in them as we speak.

Nikos: The most exciting part of developing games, for me, has to be when one of our games goes live and comes to life, as players engage with our ideas, our game mechanics, and our graphics, and we know they are having fun.

Was it easier to build the game or grow the game?

Nikos: We generally build games fast. We pretty much know what we want to make, how to do it, and how to make it profitable. Making the game isn’t the hardest part for us. Growing the game and keeping users engaged with it requires a lot of commitment and hard work.

Antonis: For us it’s easier to make the game. Marketing the game and making it successful is trickier. You can make a great game that can just as easily turn out to be a huge success or a flop. We’ve had experience with both outcomes.

How long did it take to build the game from concept to production?

Antonis: Our average time has been 2-3 weeks per game. We are fully dedicated, focused, and efficient when making games, and we complement each other as a team. Nikos works mornings and I work nights, so it’s a 24-hour-a-day operation. Production moves quickly, and that’s why we named our studio Rappid. We like making games fast.

What is the most challenging part about being a game developer?

Nikos: In Greece there is very little in the way of a gaming industry: there are no large gaming companies based here, no gaming hubs, and not enough professional expertise at the developer level. It’s difficult to find world-class, experienced game developers to work with in order to create world-class games. We are doing our part, pitching in to help build this up, but it is a slow process.

What inspired your games?

Nikos: We create games we believe players will enjoy, as well as games that we believe are missing from the field. We carry out a lot of market research into what games we should make. There are, for example, a lot of racing games out there, and you might think that another racing game would be redundant. This is exactly where opportunities exist - filling the gaps you discover in otherwise saturated genres.

What advice would you give to other indie developers trying to make it?

Antonis: Keep at it and never give up.
It might sound cliché, but this is exactly what you need to do: make a lot of games, and a lot of different kinds of games. Trial and error. Each time you get better; each time you make better games by learning from your mistakes. It’s a learning process, and it’s an iterative one. Sticking with it and being consistent is critical. You can’t do that if you don’t love what you do, so that’s a good starting point.

Nikos: We’ve made more than 60 games. We just kept making games, always staying in discovery mode while making them - where we went wrong, how to do it better next time, learning from our mistakes. We learned to make better games through our failures, working out whether it was the concept, the mechanics, or the market. After more than 60 such iterations, we are in a very good position to know what we need to make, and we keep making it. So, I would say: never stop creating.

>access_file_