// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1689 transmissions indexed — page 79 of 85

[ 2018 ]

10 entries
1565|blog.unity.com

Introducing 2D Game Kit: Learn Unity with drag and drop

The Explorer 2D Game Kit is a collection of mechanics, tools, systems and assets to hook up gameplay without writing any code. We've also created a game example using these systems, so you can see how they work together in Unity.

Unity Brighton's Content Team - who brought you learning projects Survival Shooter, Adventure Game and Trash Dash - are now unveiling their latest creation: a 2D Game Kit for anyone who wants to learn hands-on how to build a game in Unity. This game kit includes everything you need to hook up gameplay without writing any code. Download the kit and you'll get a collection of art, gameplay elements, tools and systems, and, to show how these elements can be used, we've also created a game example using these systems. If you're an artist, designer or anything in between, this is a great way to get your creative teeth into Unity.

Meet Ellen - our Principal Engineer. She has crash-landed her ship on a mysterious planet and has to make her way through the hazardous remains of an ancient alien civilisation, fighting tiny acid-spitting creatures, deadly crystal spikes and bubbling murky pools to discover what is hidden in the deep, long-forgotten crypts of this overgrown island… sounds good, right?

With some seriously lush environments using loads of sprite assets, the Content Team have included some platformer classics in the kit, including moving platforms, pushable boxes, switches and magical glowing keys for giant alien stone doors. Plus, of course, some adorable (and some not so adorable) enemies to defeat.

Open the Unity engine and navigate to Scenes in the Project window. From there you will find the pre-made levels 1-5 as well as a Template scene. This template scene shows Ellen standing on a single platform. Add more ground and platforms using Tilemap, throw in some doors and some vegetation sprites, a few little snapping creatures to defeat and bam - you've got yourself a miniature level. Get creative with spikes, acid water, teleporters and more.

To start making your own 2D platformer, check out the Getting Started guide. If you're interested in learning about how each component works, you'll find the Reference Guide super helpful. You can also find all the supporting documentation in the project's Documentation folder. Use it as a glossary, a step-by-step or simply as a reference if you get stuck.

There are a few ways to access 2D Game Kit. Head to our Learn site or the Asset Store. You can also access the Asset Store from within the Unity engine itself and search for '2D Game Kit'.

Watch the recording of our live training session on the Game Kit featuring the Content Team's Producer Aurore Dimopoulos below. You can also discuss the project on our dedicated forum thread.

Stay tuned! The Content Team also have another trick up their sleeve. If you're excited about the 2D Game Kit, you might be pleased to know their next project is going to be a 3D Game Kit with the same theme, but all in a 3D environment.

>access_file_
1568|blog.unity.com

2D Tilemap asset workflow: From image to level

In Unity 2017.2, we introduced a new addition to the 2D Feature Set: Tilemaps! Using Tilemaps, you can quickly lay out and create 2D levels using a combination of Sprites and GameObjects, and have control over properties such as layer ordering, tilemap colliders, animated tiles and more! In this blog post, I will explain the full workflow, beginning with importing your image file into Unity all the way through to a laid-out level for a 2D platformer!

As a TL;DR overview, the workflow can be summarised like this, with each element relating to an Asset or a Component in the Unity Editor:

Sprite -> Tile -> Palette -> Brush -> Tilemap

From a non-Unity point of view, these terms could seem a little abstract. Just imagine the process for a real-life painting on a real-life canvas:

Color -> Paint -> Tile Palette -> Paint Brush -> Canvas

There is similar logic to each step of the process, and even similar names for each step!

For this post, I will use this 'GrassPlatform_TileSet' image as the main example, with the end result being a level constructed of these pieces that a 2D character can run on as a 'level'.

Importing an image into Unity can be done in a variety of ways:

- Saving the desired image file into the project's 'Assets' folder.
- From the top menu, selecting 'Assets -> Import New Asset' and then selecting the desired file.
- Dragging the image file from your file browser into the 'Project Window' in the Unity Editor (this is probably the easiest way!)

Once the image is imported into your project, its default Texture Type import settings are defined by which behaviour mode your project is currently set to: 2D or 3D. This mode is originally set when a new project is created, or it can be changed later in the Editor Settings.

As my project is already set up for 2D behaviour mode, 'GrassPlatform_TileSet' will automatically import with the Texture Type of 'Sprite (2D and UI)', which is the setting that the Tile asset will require to reference the Sprite.

As 'GrassPlatform_TileSet' is a series of sprites in one image, we will need to slice it into individual sprites; this can be done by setting the Sprite Mode from 'Single' to 'Multiple' and opening the Sprite Editor.

The Sprite Editor window allows you to 'slice' an image into multiple sprites, so you can work on one spritesheet in your desired image editing software and define which areas of the image are treated as 'individual' sprites directly in Unity. No need to juggle and manage hundreds of individual image files!

As 'GrassPlatform_TileSet' is an image composed of a series of tiles, we can use the Sprite Editor's Grid Slicing settings to automatically split the image into multiple sprites. The dimensions of each 'cell tile' in this tileset are 64 pixels by 64 pixels, so we can input these settings and let the Sprite Editor auto-generate the required sprite slices. And after the 'Slice' button is clicked, we now have a sliced set of sprites!

In the Sprite Editor window, each sliced sprite is then selectable and editable. For example, you can set names for each sprite and even manually tweak values such as position and pivots. We then need to 'Apply' the changes to the Sprite asset (by clicking the aptly named 'Apply' button near the top-right corner of the Sprite Editor), which will then allow us to reference each sliced sprite individually in the Project window.

Now that our spritesheet has been sliced into individual Sprites, we next need to 'convert' these into Tiles.
The Tile is a brand new asset added in Unity 2017.2. Its purpose is to hold data for the Tilemap to use at a specific cell on the grid. The base default Tile asset (which can be generated from 'Create -> Tile' in the Project window) allows for a Sprite to be assigned to it, along with other customisations such as the tint of the Sprite and the type of Collider that it would use on the Tilemap (which will be explained later).

Unity 2017.2 introduces a new window: the Tile Palette! This window is integral to using the new Tilemap system, as it acts as an interface to select which Tiles to use and how the Tilemap is to be edited with them.

Before we can add the 'TopGrassTile' Tile to the Tile Palette window, we first need to create a new Palette. Palettes can be used to organise your sets of Tiles instead of 'storing' all of them (could be hundreds or more!) on one workspace in the window. In the drop-down Palette menu there is an option to create a brand new Palette. It's as simple and easy as drag-and-drop to add 'TopGrassTile' to this newly created Palette!

However, in some situations we might be working with hundreds and hundreds of Sprites that build up our 2D scene. It would be very time-consuming to manually create a Tile asset for each of these Sprites and then drag-and-drop each one individually onto the Palette. Thankfully, there is a workflow that can be used to automatically generate a set of Tiles (one for each Sprite) and assign all of them to the desired Palette. And it's also as simple and easy as drag-and-drop! Instead of dragging a Tile asset onto the Palette, drag the source spritesheet that contains the sliced Sprites - in this case, 'GrassPlatform_TileSet'.

Now that our 'GrassPlatform_TileSet' spritesheet is successfully set up in the Tile Palette window, it's time to start constructing a 2D level! To begin, we need to create a brand new 'Tilemap' in our current scene; this can be done from the 'GameObject -> 2D Objects -> Tilemap' drop-down menu. However, this not only creates a 'Tilemap' GameObject (with related components) but also a 'Grid' GameObject that the Tilemap GameObject is automatically a child of.

The most similar GameObject structure to the 'Grid + Tilemap' setup is Unity's UI system, where the Canvas parent GameObject acts as a layout container for all of its child UI GameObjects (such as Text and Images). The 'Grid' GameObject uses the 'Grid' component to define the dimensions of all of its child Tilemap GameObjects, and there are options that allow for some customisation of the layout. The child Tilemap GameObject is then constructed from both the Tilemap component and the Tilemap Renderer component; the former contains the data of the Tiles painted onto it and the latter defines the visual settings of how it is rendered.

The Tilemap system has been designed so that multiple Tilemap GameObjects can be children of the same Grid, meaning that the end result of your level can easily be composited from multiple layers of different Tiles. Each Tilemap Renderer gives you control over the Material used to render its Tiles, the Sorting Layer it uses (which is the same layer system that Sprite Renderers, UI Canvases and Particle Systems use!) and also how it reacts to the Sprite Mask.
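To make that parent-child structure concrete, here is a minimal sketch (not from the original post) of building the same Grid + Tilemap hierarchy from script instead of the GameObject menu; the layer names and sorting orders are illustrative assumptions:

```csharp
using UnityEngine;
using UnityEngine.Tilemaps;

public class TilemapHierarchyExample : MonoBehaviour
{
    void Start()
    {
        // The Grid parent defines the cell layout for all of its child Tilemaps.
        var gridGO = new GameObject("Grid");
        var grid = gridGO.AddComponent<Grid>();
        grid.cellSize = new Vector3(1f, 1f, 0f); // world-space units per cell

        // Each child Tilemap acts as one compositing "layer" of the level.
        CreateLayer(gridGO.transform, "Background", sortingOrder: 0);
        CreateLayer(gridGO.transform, "Foreground", sortingOrder: 1);
    }

    static Tilemap CreateLayer(Transform parent, string name, int sortingOrder)
    {
        var go = new GameObject(name);
        go.transform.SetParent(parent, false);
        var tilemap = go.AddComponent<Tilemap>();          // holds the painted Tile data
        var tilemapRenderer = go.AddComponent<TilemapRenderer>(); // draws the Tiles
        tilemapRenderer.sortingOrder = sortingOrder;       // same sorting system Sprite Renderers use
        return tilemap;
    }
}
```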
Before Tiles can be painted onto the Tilemap, two things have to be selected: which Tilemap is currently focused and which Brush is currently in use.

The first can be chosen from the 'Active Tilemap' drop-down in the Tile Palette window, beneath the editing options. This drop-down list shows all instances of the 'Tilemap' component in the scene and allows you to select one to be painted on and edited. The above screenshot only shows one 'Tilemap' option, named after the default Tilemap GameObject, whereas a more complex scene with multiple Tilemaps could display a longer list of possible Active Tilemaps. For the 'GrassPlatform_TileSet' example, renaming the 'Tilemap' GameObject to something more descriptive will also update the name(s) in the Active Tilemap list.

The next thing to select is the current Brush. Whilst the Tile asset determines what data a cell contains (visuals, Collider Type, etc.), a Brush asset defines how a Tile (or Tiles) is placed onto a Tilemap. Currently, Unity has only one built-in Brush (named 'Default Brush'), and it has the functionality its name suggests, such as placing, erasing, moving and filling Tiles on the Tilemap. However, on the Unity Technologies GitHub there is a '2D Extras' repository that has a variety of examples of how you can script your own custom Brushes and Tiles! Once these are imported into your project, the current Brush menu (at the bottom of the Tile Palette window) will allow you to choose which Brush to use. Whilst this article doesn't dive into the use of Scriptable Brushes and Scriptable Tiles, it's a very powerful area to study and integrate into your tilemap-based level-design toolset.

With the Active Tilemap and current Brush set, we can then select a specific Tile in the Tile Palette window and paint it onto the Tilemap in the Scene! You will also need to make sure that the 'Paintbrush' icon in the Edit Tools is selected.

Success! Tiles are being painted on the Tilemap! However, you may notice that the Tiles are slightly smaller than the size of the Grid's cells. This is not a bug, but we need to step back a bit to explain why - and how you can change the default.

The Grid component's Cell Size uses Unity's world-space distance units (for example, a primitive Unity cube with the default scale of 1 on each axis will be the same size as one cell on the default Grid). Each Sprite asset has a Pixels Per Unit value in its Import Settings, with the default value being 100.
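The arithmetic explains the gap: a sprite's world size is its pixel size divided by its Pixels Per Unit, so a 64x64-pixel tile at the default 100 PPU is only 0.64 units wide and doesn't fill a 1-unit cell; one common fix is to set the sprite's Pixels Per Unit to match the tile size (64 here). As a hedged illustration of the same pieces from script (the field wiring and positions below are made up, but Tilemap.SetTile is the real API), tiles can also be placed programmatically:

```csharp
using UnityEngine;
using UnityEngine.Tilemaps;

public class TilePainterExample : MonoBehaviour
{
    public Tilemap tilemap;    // assign the Tilemap to paint on
    public TileBase grassTile; // assign a Tile asset, e.g. 'TopGrassTile'

    void Start()
    {
        // A 64x64 sprite imported at 64 Pixels Per Unit fills exactly one
        // 1x1 grid cell (worldSize = pixelSize / pixelsPerUnit).
        // Paint a 10-cell row of ground along the x axis.
        for (int x = 0; x < 10; x++)
        {
            tilemap.SetTile(new Vector3Int(x, 0, 0), grassTile);
        }
    }
}
```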

>access_file_
1570|blog.unity.com

Getting started in interactive 360 video: Download our sample project

The release of Unity 2017.3 marked another step in our commitment to empowering filmmakers to create truly interactive 360 videos. Creators can now bring a 360 2D or 3D video into Unity and play it back on the Skybox Panoramic Shader to create standalone 360 video experiences targeting VR platforms. Additionally, Unity now offers built-in support for both 180- and 360-degree videos in either an equirectangular layout (longitude and latitude) or a cubemap layout (6 frames).

With Unity you can build real-time effects, interactions and UI on top of your videos to achieve a highly immersive and interactive experience. To make this process even easier, we have just released the Interactive 360 Video Sample Project on the Asset Store. It's a free download, and we encourage creators interested in making interactive 360 videos to give it a try.

The Sample Project contains scenes, Prefabs, code, and video files that can be used by anyone who wants to learn how to build interactive 360 video experiences for mobile or desktop VR. The project shows you how to use Unity's UI system and Video Player, and how to get input data from VR controllers. In the project are two ready-to-build scenes for gaze-based interactions, which work with Oculus, OpenVR (Vive), Android (Samsung Gear VR, Google Daydream, Google Cardboard) and iOS (Cardboard). The project also includes sample scenes for Oculus+Touch and Google Daydream controller configurations.

You can start a project by importing your own 360 videos in either 2D (monoscopic) or 3D (stereoscopic) formats. The Sample Project also supports 180 videos.

To get started using the Interactive 360 Video Sample Project:

- Download Unity 2017.3 here.
- Download the project here.
- Watch the tutorial video below.
- Add your 360 masterpieces to the project and start building.

Learn more about Unity for 360 video and how esteemed creators are already using Unity for their award-winning projects. We will be hosting a live web training session on building interactive 360 videos on February 28. Mark your calendars!
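For a feel for the moving parts, here is a minimal, hedged sketch of the playback setup the post describes: a Video Player rendering into a Render Texture that a Skybox/Panoramic material samples. The clip, texture size and the '_MainTex' property name are assumptions for illustration; the sample project wires this up with its own assets:

```csharp
using UnityEngine;
using UnityEngine.Video;

public class Skybox360Player : MonoBehaviour
{
    public VideoClip clip;           // your 360 equirectangular video
    public Material panoramicSkybox; // a material using the Skybox/Panoramic shader

    void Start()
    {
        // Render the video into a texture instead of directly to a camera.
        var rt = new RenderTexture(4096, 2048, 0);
        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = rt;
        player.isLooping = true;

        // Feed the texture to the panoramic skybox so the video surrounds the viewer.
        panoramicSkybox.SetTexture("_MainTex", rt); // assumed property name
        RenderSettings.skybox = panoramicSkybox;

        player.Play();
    }
}
```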

>access_file_

[ 2017 ]

10 entries
1573|blog.unity.com

Crunch compression of ETC textures

This blog post describes the basics of Crunch compression and explains in detail how the original Crunch algorithm was modified in order to be able to compress ETC1 and ETC2 textures.

Crunch is an open source texture compression library © Richard Geldreich, Jr. and Binomial LLC, available on GitHub. The library was originally designed for compression of DXT textures. The following section describes the main ideas used in the original algorithm.

DXT is a block-based texture compression format. The image is split up into 4x4 blocks, and each block is encoded using a fixed number of bits. In the case of the DXT1 format (used for compression of RGB images), each block is encoded using 64 bits. Information about each block is stored using two 16-bit color endpoint values (color0 and color1), and 16 2-bit selector values (one selector value per pixel) which determine how the color of each pixel is computed (it can be either one of the two endpoint colors or a blend between them). According to the DXT1 compression format, there are two different ways to blend the endpoint colors, depending on which endpoint color has the higher value. However, the Crunch algorithm uses a subset of DXT1 encoding (endpoint colors are always ordered in such a way that color0 >= color1), so when using Crunch compression, endpoint colors are always blended in the following way:

DXT encoding can therefore be visually represented in the following way: each pixel can be decoded by merging together the color0 and color1 values according to the selector value. For simplicity, information about color0 and color1 can be displayed on the same image (with the upper part of every 4×4 block filled with color0 and the lower part filled with color1). Then all the information necessary for decoding the final texture can be represented in the form of the following 2 images (4×4 blocks are displayed slightly separated from each other):

For an average texture it is quite common that neighbor blocks have similar endpoints. This property can be used to improve the compression ratio. In order to achieve this, Crunch introduces the concept of "chunks". All the texture blocks are split into "chunks" of 2x2 blocks (the size of each chunk is 8x8 pixels), and each chunk is associated with one of 8 chunk types.

Blocks with identical endpoints form a "tile" within a chunk. Once the information about the chunk types has been encoded, it is sufficient to encode only one endpoint per tile. For example, in the case of the leftmost chunk type, all the blocks within a chunk have the same endpoints, so for such a chunk it is sufficient to encode only one endpoint pair. In the case of the rightmost chunk type, all the endpoint pairs are different, so it is necessary to encode all 4 of them. The following example shows texture endpoints, grouped into chunks, where each chunk is split into tiles:

Of course, the described chunk types don't cover all the possible combinations of matching endpoints, but at the same time the information about the matching endpoints can be encoded very efficiently this way. Specifically, encoding of the chunk type requires 3 bits per 4 blocks (0.75 bits per block, uncompressed).

The Crunch algorithm can force the neighbor blocks within a chunk to have identical endpoints in cases when the extra accuracy of the encoded colors isn't worth spending extra bits on encoding additional endpoints. This is achieved in the following way. First, each chunk is encoded in 8 different ways, corresponding to the described 8 chunk types (instead of running DXT1 optimization for each block, the algorithm runs DXT1 optimization for each tile). The quality of each encoding is then evaluated as the PSNR multiplied by a coefficient associated with the used chunk type, and the optimal encoding is selected. The trick here is that chunk types with a higher number of matching endpoints also have higher quality coefficients. In other words, if using the same endpoint for two neighbor blocks within a chunk doesn't reduce the PSNR much, then the algorithm will most likely select the chunk type where those neighbor blocks belong to the same tile. The described process can be referred to as "tiling".

The basic idea of Crunch compression is to perform quantization of the determined endpoints and selector blocks, in order to encode them more efficiently. This is achieved using vector quantization. The idea is similar to color quantization, where a color image is represented using a color palette and palette indices defined for each pixel.

In order to perform vector quantization, each endpoint pair should be represented with a vector. For example, it is possible to represent a tile endpoint pair with a vector (color0.r, color0.g, color0.b, color1.r, color1.g, color1.b), where color0 and color1 are obtained from DXT1 optimization. However, such a representation doesn't reflect the continuity properties of the source texture very well (for example, in the case of a solid block, a small change of the block color might result in a significant change of the optimal color0 and color1, which are used to encode this color). Instead, the Crunch algorithm uses a different representation. The source pixels of each tile, which are represented by their (r, g, b) vectors, are split into 2 clusters using vector quantization, providing two centroids for each tile: low_color and high_color. Then the endpoints of each tile are represented with a (low_color.r, low_color.g, low_color.b, high_color.r, high_color.g, high_color.b) vector. Such a representation of the tile endpoints doesn't depend on the DXT1 optimization result, but at the same time performs quite well.

Note that after quantization all the blocks within a tile will be associated with the same endpoint codebook element, so they will get assigned the same endpoint index. This means that the initially determined chunk types will still be valid after endpoint quantization.

Selectors of each 4x4 block can be represented with a vector of 16 components, corresponding to the selector values of each block pixel. In order to improve the result of the quantization, the selector values are reordered so as to better reflect the continuity of the selected color values.

The vector quantization algorithm splits all the input vectors into separate groups (clusters) in such a way that the vectors in each group appear to be more or less similar. Each group is represented by its centroid, which is computed as an average of all the vectors in the group according to the selected metric. The computed centroid vectors are then used to generate the codebook (centroid vector components are clipped and rounded to integers in order to represent valid endpoints or selectors). The original texture elements are then replaced with the elements of the computed codebooks: endpoints for each source 4×4 block are replaced with the closest endpoint pair from the generated endpoint codebook, and selectors for each source 4×4 block are replaced with the selector values of the closest selector codebook element.
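To make the quantization step concrete, here is a small, hedged sketch (not the Crunch source, which uses a more sophisticated clusterization) of k-means-style vector quantization over tile endpoint vectors; the 6-component vectors match the (low_color, high_color) representation described above:

```csharp
using System;
using System.Linq;

// Minimal k-means sketch: each tile contributes one 6-component vector
// (low r,g,b + high r,g,b); the resulting centroids become the endpoint
// codebook, and each tile stores a codebook index.
static class EndpointQuantizer
{
    public static (float[][] codebook, int[] indices) Quantize(float[][] vectors, int codebookSize, int iterations)
    {
        var rng = new Random(0);
        // Seed the centroids with randomly chosen input vectors.
        var codebook = vectors.OrderBy(_ => rng.Next()).Take(codebookSize)
                              .Select(v => (float[])v.Clone()).ToArray();
        var indices = new int[vectors.Length];

        for (int it = 0; it < iterations; it++)
        {
            // Assignment step: each vector picks its closest centroid.
            for (int i = 0; i < vectors.Length; i++)
                indices[i] = Nearest(vectors[i], codebook);

            // Update step: each centroid moves to the mean of its cluster.
            for (int c = 0; c < codebook.Length; c++)
            {
                var members = Enumerable.Range(0, vectors.Length).Where(i => indices[i] == c).ToArray();
                if (members.Length == 0) continue;
                for (int d = 0; d < 6; d++)
                    codebook[c][d] = members.Average(i => vectors[i][d]);
            }
        }
        return (codebook, indices);
    }

    static int Nearest(float[] v, float[][] codebook)
    {
        int best = 0; float bestDist = float.MaxValue;
        for (int c = 0; c < codebook.Length; c++)
        {
            float dist = 0;
            for (int d = 0; d < 6; d++) { float diff = v[d] - codebook[c][d]; dist += diff * diff; }
            if (dist < bestDist) { bestDist = dist; best = c; }
        }
        return best;
    }
}
```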
The result of vector quantization performed for both endpoints and selectors can be represented in the following way:

After quantization, it is sufficient to store the following information in order to decode the image:

- chunk types
- endpoint codebook
- selector codebook
- endpoint indices (one index per tile)
- selector indices (one index per block)

The quality parameter provided to the Crunch compressor directly controls the size of the generated endpoint and selector codebooks. The higher the quality value, the larger the endpoint and selector codebooks, the wider the range of the possible indices, and subsequently, the bigger the size of the compressed texture.

DXT encoding for the alpha channel is very similar to the DXT encoding of the color information. Information about the alpha channel of each block is stored using 64 bits: two 8-bit alpha endpoint values (alpha0 and alpha1), and 16 3-bit selector values (one selector value per pixel) which determine how the alpha of each pixel is computed (it can be either one of the two alpha values or a blend between them). As mentioned before, the Crunch algorithm uses a subset of DXT encoding, so the possible alpha values are always blended in the following way:

Vector quantization for the alpha channel is performed exactly the same way as for the color components, except that the vectors which represent the alpha endpoints of each tile consist of 2 components (low_alpha, high_alpha), and are obtained through clusterization of the alpha values of all the tile pixels.

Note that the chunk type, determined during the tiling step, is common to both the color and alpha endpoints. So in the case of textures using an alpha channel, the chunk type is determined based on the combined PSNR computed for the color and alpha components.

The main idea used in the Crunch algorithm for improving the compression ratio is based on the fact that changing the order of the elements in the codebook doesn't affect the decompression result (considering that the indices are reassigned accordingly). In other words, the elements of the generated codebooks can be reordered in such a way that the dictionary elements and indices acquire some specific properties which allow them to be compressed more efficiently. Specifically, if the neighboring encoded elements appear to be similar, then each element can be used for prediction of the following element, which significantly improves the compression ratio.

Following this scheme, the Crunch algorithm uses zero-order prediction when encoding codebook elements and indices. Instead of encoding endpoint and selector indices directly, the algorithm encodes the deltas between the indices of the neighboring encoded blocks. The codebook elements are encoded using per-component prediction. Specifically, each endpoint codebook element (which is represented by two RGB565 colors) is encoded as 6 per-component deltas from the previous dictionary element, and each selector codebook element (which is represented by 16 2-bit selector values) is encoded as 16 per-component deltas from the previous dictionary element.
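As a hedged illustration of zero-order prediction (a sketch of the idea, not the library's actual bitstream code), delta-encoding an index stream looks like this; similar indices in neighboring blocks produce many small deltas, which a Huffman coder can then pack tightly:

```csharp
// Zero-order prediction: store each index as a delta from the previous one.
// Runs of similar indices become runs of small values around zero, which
// compress well under Huffman coding.
static int[] DeltaEncode(int[] indices)
{
    var deltas = new int[indices.Length];
    int previous = 0;
    for (int i = 0; i < indices.Length; i++)
    {
        deltas[i] = indices[i] - previous;
        previous = indices[i];
    }
    return deltas;
}

static int[] DeltaDecode(int[] deltas)
{
    var indices = new int[deltas.Length];
    int previous = 0;
    for (int i = 0; i < deltas.Length; i++)
    {
        previous += deltas[i];
        indices[i] = previous;
    }
    return indices;
}
```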
On the one hand, the endpoint indices of neighboring blocks should be similar, as the encoder compresses the deltas between the indices of the neighboring blocks. On the other hand, the neighboring codebook elements should also be similar, as the encoder compresses the deltas between the components of those neighboring codebook elements. The combined optimization is based on Zeng's technique, using a weighted function which takes into account both the similarity of the indices of the neighbor blocks and the similarity of the neighbor elements in the codebook. Such reordering optimization is performed for both the endpoint and selector codebooks.

Finally, the reordered codebooks and indices, along with the chunk type information, are encoded with Huffman coding (using zero-order prediction for indices and codebook components). Each type of encoded data uses its own Huffman table, or multiple tables. For performance reasons, adaptive Huffman coding isn't used.

We performed a comprehensive analysis of the algorithms and techniques used in the original version of Crunch and introduced several modifications which allowed us to significantly improve the compression performance. The updated Crunch library, introduced in Unity 2017.3, can compress DXT textures up to 2.5 times faster, while providing about 10% better compression ratio. At the same time, decompressed textures generated by both libraries are identical bit by bit. The latest version of the library, which will reach Beta builds soon, will be able to perform Crunch compression of DXT textures about 5 times faster than the original version. The latest version of the Crunch library can be found in the following GitHub repository.

The main modifications of the original Crunch library are described below. The improvement in compressed size and compression time introduced by each modification is described as the saved portion of the compressed size and compression time spent by the original library, evaluated on the Kodak image test set. When compressing real-world textures, the improvement in compressed size should normally be higher.

1. Replace the chunk encoding scheme with a block encoding scheme (improvement in compressed size: 2.1%, improvement in compression time: 7%)

As described above, in the original version of the Crunch algorithm all the blocks are grouped into chunks of 2x2 blocks. Each chunk is associated with one of 8 different chunk types. The type of the chunk determines which blocks inside the chunk have the same endpoint indices. This scheme performs quite well, because it is often more efficient to compress information about endpoint equality than to compress duplicate endpoint indices. However, this scheme can be improved. The modified Crunch algorithm no longer uses the concept of chunks. Instead, for each block it can encode a reference to a previously processed neighbor block where the endpoint can be copied from. Considering that the texture is decompressed from left to right, top to bottom, the endpoints of each decoded block can be either decoded from the input stream, copied from the nearest block to the left (reference to the left) or copied from the nearest block above (reference to the top). The following example shows quantized texture endpoints with the references:

Note that the modified Crunch encoding is a superset of the original encoding, so all the images previously encoded with the original Crunch algorithm can be losslessly transcoded into the new format, but not vice versa. Even though the new endpoint equality encoding is more expensive (about 1.58 bits per block, uncompressed), it provides more flexibility for endpoint matching inside the previously used "chunks", and more importantly, it allows endpoints to be copied from one "chunk" to another (which isn't possible when using the original chunk encoding). The blocks are no longer grouped together and are encoded in the same order as they appear in the image, which significantly simplifies the algorithm and eliminates extra levels of indirection.
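A hedged sketch of the decoder side of this scheme (illustrative only; the real bitstream layout differs): each block carries a reference code and either copies an endpoint index from a neighbor or reads a new one from the stream:

```csharp
// Illustrative decode loop for the block-reference scheme. 'codes' and
// 'newIndices' stand in for values parsed from the real bitstream.
enum EndpointRef { FromStream = 0, CopyLeft = 1, CopyTop = 2 }

static int[,] DecodeEndpointIndices(EndpointRef[,] codes, System.Collections.Generic.Queue<int> newIndices)
{
    int height = codes.GetLength(0), width = codes.GetLength(1);
    var result = new int[height, width];
    for (int y = 0; y < height; y++)      // blocks are decoded top-to-bottom,
        for (int x = 0; x < width; x++)   // left-to-right, so referenced neighbors already exist
        {
            switch (codes[y, x])
            {
                // A valid stream never emits CopyLeft in the first column
                // or CopyTop in the first row.
                case EndpointRef.CopyLeft: result[y, x] = result[y, x - 1]; break;
                case EndpointRef.CopyTop:  result[y, x] = result[y - 1, x]; break;
                default:                   result[y, x] = newIndices.Dequeue(); break;
            }
        }
    return result;
}
```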
2. Encode selector indices without prediction (improvement in compressed size: 1.8%, improvement in compression time: 10%)

The original version of Crunch encodes the deltas between neighboring indices in order to take advantage of their similarity. The efficiency of such an approach highly depends on the continuity of the encoded data. While neighboring color and alpha endpoints are usually similar, this is often not the case for selectors. Of course, in some situations encoding the deltas for selector indices makes sense, for example when an image contains a lot of regular patterns aligned to the 4×4 block boundaries. In practice, however, such situations are relatively rare, so it usually turns out to be more efficient to encode raw selector indices without prediction. Note that when selector indices are encoded without prediction, the reordering of the selector indices no longer affects the size of the encoded selector index stream (at least when using Huffman coding). This makes the Zeng optimization of selector indices unnecessary, and it's sufficient to simply optimize the size of the packed selector codebook.

3. Remove duplicate endpoints and selectors from the codebooks (improvement in compressed size: 1.7%)

By default, the size of the endpoint and selector codebooks is calculated based on the total number of blocks in the image and the quality parameter, while the actual complexity of the image isn't evaluated and isn't taken into account. The target codebook size is selected in such a way that even complex images can be approximated well enough. At the same time, normally, the lower the complexity of the image, the higher the density of the quantized vectors. Considering that vector quantization is performed using floating-point computations, and the quantized endpoints have integer components, a high density of quantized vectors will result in a large number of duplicate endpoints. As a result, some identical endpoints end up being represented by multiple different indices, which hurts the compression ratio. Note that this isn't the case for selectors, as their corresponding vector components are rounded after quantization; instead it leads to some duplicate selectors in the codebook being unused. In the modified version of the algorithm, all the duplicate codebook entries are merged together, unused entries are removed from the codebooks, and the endpoint and selector indices are updated accordingly.
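A hedged sketch of that dedup-and-remap step (illustrative, not the library code): merge value-identical entries and rewrite the indices; dropping never-referenced entries works the same way with a second compaction pass:

```csharp
using System.Collections.Generic;
using System.Linq;

// Merge duplicate codebook entries and remap indices accordingly.
// Entries are compared by value; 'indices' is rewritten in place.
static List<int[]> DeduplicateCodebook(List<int[]> codebook, int[] indices)
{
    var merged = new List<int[]>();
    var remap = new int[codebook.Count];
    var seen = new Dictionary<string, int>();

    for (int i = 0; i < codebook.Count; i++)
    {
        string key = string.Join(",", codebook[i]); // value identity of the entry
        if (!seen.TryGetValue(key, out int target))
        {
            target = merged.Count;
            seen[key] = target;
            merged.Add(codebook[i]);
        }
        remap[i] = target;
    }

    for (int i = 0; i < indices.Length; i++)
        indices[i] = remap[indices[i]];
    // (A second pass over 'merged' could likewise drop entries never referenced.)
    return merged;
}
```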
4. Use XOR-deltas for encoding of the selector codebook (improvement in compressed size: 0.9%)

In the original version of Crunch, the selector codebook is encoded with Huffman coding applied to the raw deltas between the corresponding pixel selectors of neighboring codebook elements. However, using Huffman coding for raw deltas has a downside. Specifically, for each individual pixel selector, only about half of all the possible raw deltas are valid. Indeed, once the value of the current selector is determined, the selector delta depends only on the next selector value, so only n out of 2 * n - 1 total raw delta values are possible at any specific point (where n is the number of possible selector values). This means that on each step the impossible raw delta values are being encoded with a non-zero probability, as the probability table is calculated only once for the whole codebook. The situation can be improved by using modulo-deltas instead of raw deltas (modulo 4 for color selectors and modulo 8 for alpha selectors). This eliminates the mentioned implicit restriction on the values of the decoded selector deltas, and therefore improves the compression ratio. Interestingly, the compression ratio can be improved even further if XOR-deltas are used instead of modulo-deltas (an XOR-delta is computed by simply XOR-ing two selector values). At first it might seem counterintuitive that an XOR-delta can perform better than a modulo-delta, as it doesn't reflect the continuity properties of the data as well. The trick here is that the encoded selectors are first sorted according to the used delta operation and the corresponding metric.
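For a concrete feel for the three delta variants discussed here, a small hedged sketch: with n = 4 selector values, the raw delta can take 7 distinct values, while modulo- and XOR-deltas always fit back into 2 bits:

```csharp
// Three ways to express the difference between consecutive 2-bit selector
// values (n = 4): raw deltas span -3..+3, while modulo- and XOR-deltas
// always stay in the 0..3 range.
static int RawDelta(int prev, int next)    => next - prev;       // 7 possible values
static int ModuloDelta(int prev, int next) => (next - prev) & 3; // (next - prev) mod 4
static int XorDelta(int prev, int next)    => prev ^ next;       // reversible: next = prev ^ delta

// Decoding inverts each operation:
static int FromModulo(int prev, int delta) => (prev + delta) & 3;
static int FromXor(int prev, int delta)    => prev ^ delta;
```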
5. Improve the Zeng reordering algorithm (improvement in compressed size: 0.7%, improvement in compression time: 5%)

After the endpoint codebook has been computed, the endpoints are reordered to improve the compression ratio. As described above, the optimization is based on Zeng's technique, using a weighted function which takes into account both the similarity of the indices in neighbor blocks and the similarity of the neighbor elements in the codebook. The ordered list of endpoints is built starting from a single endpoint, and then adding one of the remaining endpoints to the beginning or the end of the list on each iteration, using a greedy strategy controlled by the optimization function. The similarity of the endpoint indices is evaluated as the combined neighborhood frequency of the candidate endpoint and all the endpoints in the ordered list. The similarity of the neighbor endpoints in the codebook is evaluated as the Euclidean distance from the candidate endpoint to the extremity of the ordered list. The original optimization function for an endpoint candidate p can be represented as:

F(p) = (endpoint_similarity(p) + 1) * (neighborhood_frequency(p) + 1)

The problem with this approach is the following. While endpoint_similarity(p) has a limited range of values, neighborhood_frequency(p) grows rapidly with the increasing size of the ordered list of endpoints. With each iteration this introduces additional imbalance into the weighted optimization function. In order to minimize this effect, it is proposed to normalize neighborhood_frequency(p) on each iteration. For computational simplicity, the normalizer is computed as the optimal neighborhood_frequency value from the previous iteration, multiplied by a constant. The modified optimization function can be represented as:

F(p) = (endpoint_similarity(p) + 1) * (neighborhood_frequency(p) + neighborhood_frequency_normalizer)

Additional improvement in compression speed has been achieved by optimizing the original algorithms, reducing the total amount of computation by caching intermediate results, and spreading the computations between threads more efficiently.

The described modifications of the Crunch algorithm don't change the result of the quantization step, which means that decompressed textures generated by both libraries will be identical bit by bit. In other words, the improvement in compression ratio has been achieved by using a different lossless encoding of the quantized images. It might therefore be interesting to compare Crunch encoding with alternative ways of compressing the quantized textures. For example, quantized textures can be stored in a raw DXT format and compressed with LZMA. The following table displays the difference in compression ratio when using different approaches:

According to the test results, it appears to be more efficient to use Crunch encoding of the computed codebooks and indices than to compress the quantized texture with LZMA - not to mention that Crunch decompression is also significantly faster than LZMA decompression.

Even though the Crunch algorithm was originally designed for compression of DXT textures, it is in fact much more powerful. With some minor adjustments it can be used to compress other texture formats. This section will describe in detail how the original Crunch algorithm was modified in order to be able to compress ETC and ETC2 textures.

ETC is a block-based texture compression format. The image is split up into 4x4 blocks, and each block is encoded using a fixed number of bits. In the case of the ETC1 format (used for compression of RGB images), each block is encoded using 64 bits. The first 32 bits contain information about the colors used within the 4x4 block. Each 4x4 block is split either vertically or horizontally into two 2x4 or 4x2 subblocks (the orientation of each block is controlled by the "flip" bit). Each subblock is assigned its own base color and its own modifier table index.

The two base colors of a 4x4 block can be encoded either individually as RGB444, or differentially (the first base color is encoded as RGB555, and the second base color is encoded as an RGB333 signed offset from the first base color). The type of the base color encoding for each block is controlled by the "diff" bit.

The modifier table index of each subblock references one of the 8 possible rows in the following modifier table:

The intensity modifier set (modifier0, modifier1, modifier2, modifier3) defined by the modifier table index, along with the base color, determines 4 possible color values for each subblock:

base_color + RGB(modifier0, modifier0, modifier0)
base_color + RGB(modifier1, modifier1, modifier1)
base_color + RGB(modifier2, modifier2, modifier2)
base_color + RGB(modifier3, modifier3, modifier3)

Note that the higher the value of the modifier table index, the more distributed the subblock colors are along the intensity axis.
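A hedged sketch of how a decoder expands one subblock's palette and resolves a pixel; the modifier row below is one plausible row of the spec's table, and the base color, selector and helper names are made up for illustration:

```csharp
using System;

static class Etc1SubblockExample
{
    // Compute the four possible colors of an ETC1 subblock from its base color
    // and intensity modifier set, then resolve one pixel via its 2-bit selector.
    // Each channel is clamped to 0..255, as the format requires.
    static (int r, int g, int b)[] SubblockPalette((int r, int g, int b) baseColor, int[] modifiers)
    {
        var palette = new (int, int, int)[4];
        for (int i = 0; i < 4; i++)
        {
            palette[i] = (Clamp(baseColor.r + modifiers[i]),
                          Clamp(baseColor.g + modifiers[i]),
                          Clamp(baseColor.b + modifiers[i]));
        }
        return palette;
    }

    static int Clamp(int v) => Math.Max(0, Math.Min(255, v));

    static void Main()
    {
        // Example values only; a real decoder reads these from the block bits.
        var modifiers = new[] { -29, -9, 9, 29 };  // one intensity modifier set
        var palette = SubblockPalette((100, 140, 180), modifiers);
        int selector = 2;                          // 2-bit selector for one pixel
        Console.WriteLine(palette[selector]);      // (109, 149, 189)
    }
}
```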
The other 32 bits of the encoded ETC1 block describe 16 2-bit selector values (each pixel in the block can take one of the 4 possible color values described above). ETC1 encoding can therefore be visually represented in the following way:

Each pixel color of an ETC1 block can be decoded by adding together the base color and the modifier color, defined by the modifier table index and selector value (the resulting color should be clamped). For simplicity, information about the base colors, block orientations and modifier table indices can be displayed on the same image. The upper or the left part of each 2×4 or 4×2 subblock (depending on the block orientation) is filled with the base color, and the rest is filled with the modifier table index color. Then all the information necessary for decoding the final texture can be represented in the form of the following 2 images (subblocks on the left image and blocks on the right image are displayed slightly separated from each other):

The detailed description of the ETC1 format can be found at this Khronos Group page.

Even though the DXT1 and ETC1 encodings seem to be quite different, they also have a lot in common. Each pixel of an ETC1 texture can take one of four possible color values, which means that ETC1 selector encoding is equivalent to DXT1 selector encoding, and therefore ETC1 selectors can be quantized exactly the same way as DXT1 selectors. The main difference between the encodings is that in the case of ETC1, each half of a 4x4 block has its own set of possible color values. But even though ETC1 subblock colors are encoded using a base color and a modifier table index, the four computed subblock colors normally lie on the same line and are more or less evenly distributed along that line, which highly resembles DXT1 block colors. The described similarities make it possible to use Crunch compression for ETC1 textures, with some modifications.

As described above, Crunch compression involves the following main steps:

- tiling
- endpoint quantization
- selector quantization
- compression of the determined codebooks and indices

When applying the Crunch algorithm to a new texture format, it is necessary to first define the codebook element. In the context of Crunch, this means that the whole image consists of smaller non-overlapping blocks, while the contents of each individual block are determined by an endpoint and a selector from the corresponding codebooks. For example, in the case of the DXT format, each endpoint and selector codebook element corresponds to a 4x4 pixel block. In general, the size of the blocks which form the encoded image depends on the texture format and quality considerations.
It's proposed to define codebook elements according to the following limitations:

- Codebook elements should be compatible with the existing Crunch algorithm, while the image blocks defined by those codebook elements should be compatible with the texture encoding format.
- It should be possible to cover a wide range of image quality and bitrates by changing the size of the endpoint and selector codebooks. If there is no limitation on the codebook size, it should be possible to achieve lossless or near-lossless compression quality (not considering the quality loss implied by the texture format itself).

In the case of ETC1, the texture format itself determines the minimal size of the image block defined by an endpoint: it can be either a 2x4 or a 4x2 rectangle, aligned to the borders of the 4x4 grid. It isn't possible to use higher granularity, because each of those rectangles can have only one base color, according to the ETC1 format. For the same reason, any image block defined by an endpoint codebook element should represent a combination of ETC1 subblocks.

At the same time, each ETC1 subblock has its own base color and modifier table index, which approximately determine the high and the low colors of the subblock (even though there are some limitations on the position of those high and low colors, implied by the ETC1 encoding). If an endpoint codebook element were defined in such a way that it contained information about more than one ETC1 base color, then such a dictionary would become incompatible with the existing tile quantization algorithm, for the following reason. The Crunch tiling algorithm first performs quantization of all the tile pixel colors, down to just 2 colors. Then it performs quantization of all the color pairs generated by different tiles. This approach works quite well for 4x4 DXT blocks, as those 2 colors approximately represent the principal component of the tile pixel colors. In the case of ETC1, however, mixing together pixels which correspond to different base colors doesn't make much sense, because each group of those pixels has its own low and high color values, independent from the other groups. If those pixels were mixed together, the information about the original principal components of each subblock would get lost.

The described limitations suggest that an ETC1 endpoint codebook element should represent the area of a single ETC1 subblock (either 2x4 or 4x2). This means that an ETC1 endpoint codebook element should contain information about the subblock base color (RGB444 or RGB555) and the modifier table index (3 bits). It is therefore proposed to encode an ETC1 "endpoint" as 3555 (3 bits for the modifier table index and 5 bits for each component of the base color).
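As an illustration of that layout (a hedged sketch; the actual bit order inside the library may differ), a 3555 endpoint fits in 18 bits and can be packed like this:

```csharp
// Pack an ETC1 "endpoint" as 3555: 3 bits of modifier table index plus
// 5 bits per base color component - 18 bits total, stored here in an int.
// The field order is arbitrary for illustration.
static int PackEndpoint3555(int modifierIndex, int r5, int g5, int b5)
{
    return ((modifierIndex & 0x7) << 15) | ((r5 & 0x1F) << 10) | ((g5 & 0x1F) << 5) | (b5 & 0x1F);
}

static (int modifierIndex, int r5, int g5, int b5) UnpackEndpoint3555(int packed)
{
    return ((packed >> 15) & 0x7, (packed >> 10) & 0x1F, (packed >> 5) & 0x1F, packed & 0x1F);
}
```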
In the case of the DXT format, both endpoint codebook elements and selector codebook elements correspond to the same size of decoded block (4x4), so it would be reasonable to try the same scheme for ETC1 encoding (i.e. to use 2x4 or 4x2 blocks for the selector codebook, matching the blocks defined by the endpoint codebook elements). Nevertheless, after additional research we made a very interesting observation. Specifically, endpoint blocks and selector blocks don't have to be the same size in order to be compatible with the existing Crunch algorithm. Indeed, the selector codebook and selector indices are defined after the endpoint optimization is complete. At this point each image pixel is already associated with a specific endpoint. At the same time, the selector computation step uses those per-pixel endpoint associations as its only input, so the size and shape of the blocks defined by selector codebook elements don't depend in any way on the size or shape of the blocks defined by endpoint codebook elements.

In other words, the endpoint space of the texture can be split into one set of blocks, defined by the endpoint codebook and endpoint indices, while the selector space of the texture can be split into a completely different set of blocks, defined by the selector codebook and selector indices. Endpoint blocks can differ in size from the selector blocks, and endpoint blocks can overlap arbitrarily with the selector blocks, and such a setup will still be fully compatible with the existing Crunch algorithm. The discovered property of the Crunch algorithm opens another dimension for optimization of the compression ratio. Specifically, the quality of the compressed selectors can now be adjusted in two ways: by changing the size of the selector codebook and by changing the size of the selector block. Note that both the DXT and ETC formats have selectors encoded as plain bits in the output format, so there is no limitation on the size or shape of the selector block (though, for performance reasons, non-power-of-two selector blocks might require some specific optimizations in the decoder). Several performance tests have been conducted using different selector block sizes, and the results suggest that 4x4 selector blocks perform quite well.

As described above, each element of an ETC1 endpoint codebook should correspond to an ETC1 subblock (i.e. to a 2x4 or a 4x2 pixel block, depending on the block orientation). In the case of DXT encoding, the size of the encoded block is 4x4 pixels, and tiling is performed in an 8x8 pixel area (covering 4 blocks). In the case of ETC1, however, tiling can be performed either in a 4x4 pixel area (covering 2 subblocks) or in an 8x8 pixel area (covering 8 subblocks), while other possibilities are either not symmetrical or too complex. For performance reasons and simplicity, it is proposed to use a 4x4 pixel area for tiling. There are therefore 3 possible block types: the block isn't split (the whole block is encoded using a single endpoint), the block is split horizontally, or the block is split vertically. The following example shows computed tiles for the texture endpoints:

At first, it might look like ETC1 block flipping could bring some complications for Crunch, as the subblock structure doesn't look like a grid. This, however, can be easily resolved by flipping all the "horizontal" ETC1 blocks across the main diagonal of the block after the tiling step, so that all the ETC1 subblocks become 2x4 and form a regular grid. Note that decoded selectors should be flipped back according to the block orientation during decompression (this can be efficiently implemented by precomputing a codebook of flipped selectors).

Endpoint references for the ETC1 format are encoded in a similar way to the DXT1 format.
There are, however, two modifications specific to the ETC1 encoding:

- In addition to the standard endpoint references (to the top and to the left blocks), it is also possible to use an endpoint reference to the top-left diagonal neighbor block.
- Endpoint references for the primary and secondary subblocks have different meanings.

The primary ETC1 subblock has the reference value of 0 if the endpoint is decoded from the input stream, the value of 1 if the endpoint is copied from the secondary subblock of the left neighbor ETC1 block, the value of 2 if the endpoint is copied from the primary subblock of the top neighbor ETC1 block, and the value of 3 if the endpoint is copied from the secondary subblock of the top-left neighbor ETC1 block.

The reference value of the secondary ETC1 subblock contains information about the block tiling and flipping. It has the reference value of 0 if the endpoint is copied from the primary subblock (note that in this case flipping doesn't need to be encoded, as the endpoints are equal), the value of 1 if the endpoint is decoded from the input stream and the corresponding ETC1 block is split horizontally, and the value of 2 if the endpoint is decoded from the input stream and the corresponding ETC1 block is split vertically. The following example shows ETC1 texture endpoints with tiles and references (considering that flipping has already been performed by the decoder):

Considering that each endpoint codebook element corresponds to a single ETC1 base color, the original endpoint quantization algorithm works almost the same way for the ETC1 encoding as for the DXT1 encoding. An endpoint of an ETC1 tile can be represented with a (low_color.r, low_color.g, low_color.b, high_color.r, high_color.g, high_color.b) vector, where low_color and high_color are generated by the tile palletizer, exactly the same way as for the DXT1 encoding.

Note that low_color and high_color, computed for a tile, implicitly contain information about the base color and the modifier table index computed for this tile. Indeed, the base color normally lies somewhere in the middle between low_color and high_color, while the modifier table index corresponds to the distance between low_color and high_color. Vectors which represent tiles with close values of low_color and high_color will most likely get into the same cluster after vector quantization. But this also means that for the tiles from the same cluster, the average values of low_color and high_color, and the distances between low_color and high_color, should also be pretty close. In other words, the original endpoint quantization algorithm will generate tile clusters with close values of the base color and the modifier table index.

Selectors of each 4x4 block can be represented with a vector of 16 components, corresponding to the selector values of each block pixel. This means that the ETC1 selector quantization step is identical to the DXT1 selector quantization step. The result of the vector quantization performed for both ETC1 endpoints and selectors can be represented in the following way:

Note that according to the ETC1 format, the base colors within an ETC1 block can be encoded either individually as RGB444 and RGB444, or differentially as RGB555 and RGB333. For simplicity, this aspect is currently not taken into account (all the quantized endpoints are encoded as 3555 in the codebook). If it turns out that the base colors in the resulting ETC1 block cannot be encoded differentially, the decoder will convert both base colors from RGB555 to RGB444 during decompression.
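To illustrate that fallback: in ETC1's differential mode each component of the second base color is stored as a 3-bit signed offset from the first, so the per-channel RGB555 difference must fit the range -4..+3. A hedged sketch of the check (the helper names are made up):

```csharp
// ETC1 differential mode stores the second base color as a 3-bit signed
// offset (-4..+3) per channel from the first RGB555 color. If any channel
// delta is out of range, the block falls back to individual RGB444 colors.
static bool CanEncodeDifferentially((int r, int g, int b) c0, (int r, int g, int b) c1)
{
    return InRange(c1.r - c0.r) && InRange(c1.g - c0.g) && InRange(c1.b - c0.b);
}

static bool InRange(int delta) => delta >= -4 && delta <= 3;

// RGB555 -> RGB444 conversion used by the fallback path (drop the low bit).
static (int r, int g, int b) ToRgb444((int r, int g, int b) c) => (c.r >> 1, c.g >> 1, c.b >> 1);
```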
The Crunch algorithm doesn't yet support the ETC2-specific modes (T, H or P), but it's capable of efficiently encoding the ETC2 alpha channel. This means that the current ETC2 + Alpha compression format is equivalent to ETC1 + Alpha. Note that ETC2 encoding is a superset of ETC1, so any texture which consists of ETC1 color blocks and ETC2 alpha blocks can be correctly decoded by an ETC2_RGBA8 decoder.

ETC2 encoding for the alpha channel is very similar to the ETC1 encoding of the color information. Information about the alpha channel of each block is stored using 64 bits: an 8-bit base alpha, a 4-bit modifier table index, a 4-bit multiplier and 16 3-bit selector values (one selector value per pixel). The modifier table index and selector value determine a modifier value for a pixel, which is selected from the ETC2 alpha modifier table. For performance reasons, the ETC2 Crunch compressor is currently using only the following subset of the modifier table:

The final alpha value for each pixel is calculated as base_alpha + modifier * multiplier, which is then clamped.

Note that unlike ETC1 color, ETC2 alpha is encoded using a single base alpha value per 4×4 pixel block. This means that each element of the alpha endpoint dictionary should correspond to a 4×4 pixel block, covering both the primary and secondary ETC1 subblocks. For this reason, the alpha channel can be ignored when performing color endpoint tiling.

The compression scheme for ETC2 alpha blocks is equivalent to the compression scheme for DXT5 alpha blocks. As has been shown before, the vector representation of alpha endpoints doesn't depend on the used encoding. This means that all the initial processing steps, including alpha endpoint quantization, will be almost identical for DXT5 and ETC2 alpha channels. The only part which is actually different for the ETC2 alpha encoding is the final alpha endpoint optimization step.

In order to perform ETC2 alpha endpoint optimization, the already existing DXT5 alpha endpoint optimization algorithm is run to obtain an initial approximate solution. Then the approximate solution is refined based on the ETC2 alpha modifier table values. Note that the ETC2 format supports 16 different alpha modifier indices, but for performance reasons only 2 are currently used: modifier index 13, which allows precise approximation on short alpha intervals, and modifier index 11, which has more or less regularly distributed values and is used for large alpha intervals.

At first it might seem that the different sizes of the color and alpha blocks could bring some complications for Crunch, as according to the original algorithm both color and alpha endpoints should share the same endpoint references. This, however, is easily resolved in the following way: each alpha block uses the endpoint reference of the corresponding primary color subblock (which allows the alpha endpoint to be copied from the left, the top, the top-left, or the input stream), while the endpoint reference of the secondary color subblock is simply ignored when decoding the alpha channel.

The performed research demonstrates that the Crunch compression algorithm is not limited to the DXT format and, with some modifications, can be used on different GPU texture formats. We see potential to expand this work to cover further texture formats in the future.

>access_file_
1574|blog.unity.com

Spectating VR

A fun game experience is something that players want to show off, record, and share. With VR, seeing what the player sees on a single, rectangular screen doesn't always convey the entire feeling. This means that spectators can often find the default 'seeing through a player's POV' experience underwhelming. What I wanted to do was set up a simple starter system for how a spectator camera should work, and to add a little more fun for those not in the VR experience themselves. Fortunately, there have been a few shipped examples that successfully designed a good spectator view. The goal of this project was to come up with a spectator system that builds on those designs, is compact and portable, and can easily be integrated into your own projects.

You can download the associated project here. It requires Unity version 2017.2 or later.

The first thing I need to do is to create a second camera specifically for the spectator. I create a second camera and place it facing my first, original camera. Then, in the Camera settings, I need to set the Target Eye to None (Main Display). Run the project in the editor, and already Unity's game view is rendered independently of what the VR headset displays. It's that easy! But don't worry, there's more fun we can have here.

If I point that spectator camera back at myself and hit play, I can't see anything! I need to create an avatar to represent me in the world. I managed to create a nice little head and hands model using Unity's built-in shapes, and can now link them up as a head and hands. I want these to move with my tracked devices in the real world. To link these up, we have a new component in 2017.2: the Tracked Pose Driver. Drop it onto a GameObject, set whether you want to use the HMD or a controller, and voila, that GameObject will be updated and can be used as an in-game proxy for any tracked part of your VR hardware. This makes it trivial to build a quick player VR rig.

My narcissistic itch satisfied, now I want to get a few more in-game angles. All I need is a few world locations, and a small script, called the Spectator Controller, to iterate over those locations. The core of this script keeps track of the transform that the camera is currently attached to. In our sample, we are tracking m_CurrentTransform. I want to be able to switch cameras both as a VR player and as a spectator, so I've linked that up to both the touchpad/stick clicks on the VR controllers and the spacebar on the keyboard. The second responsibility of this Spectator Controller is to enable and disable the color and viewfinders of the currently active camera. I'll opt to create a CameraAttachPoint MonoBehaviour in order to handle the elements that are specific to my high-tech camera and viewfinder.

Next up, I want to be able to see what the spectator sees, while still in VR. I won't know if I'm striking a good pose until I can see for myself, in real time. For this, I need a render target and an extra camera. If I render my spectator's camera to a render target, I can then redirect the output to both a texture in the world and a camera directed towards the Main Display. This part just needs a few more assets, conveniently located in the Assets/RenderTarget folder. I also need a third camera. We now have 3 cameras: the VR camera, the spectator camera, and the spectator display, which takes the spectator camera's render target and displays it to the user. I'll opt to use a Canvas UI object here so that I can then add additional UI not visible to the VR player nor any spectator render targets.
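A minimal sketch of that camera setup from script (hedged: the properties and components are Unity 2017.2-era APIs, but the texture size, object names and wiring are illustrative, and the project itself configures these in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.SpatialTracking;

public class SpectatorRigExample : MonoBehaviour
{
    public Camera spectatorCamera; // the second, non-VR camera

    void Start()
    {
        // Target Eye = None (Main Display): this camera ignores the HMD.
        spectatorCamera.stereoTargetEye = StereoTargetEyeMask.None;

        // Render the spectator view into a texture so an in-world viewfinder
        // (and a third "spectator display" camera) can show it, mirroring
        // the article's three-camera setup.
        var viewfinderTexture = new RenderTexture(1280, 720, 24);
        spectatorCamera.targetTexture = viewfinderTexture;

        // A Tracked Pose Driver makes any GameObject follow a tracked device,
        // e.g. the headset, for a simple avatar proxy.
        var head = new GameObject("HeadProxy");
        var driver = head.AddComponent<TrackedPoseDriver>();
        driver.SetPoseSource(TrackedPoseDriver.DeviceType.GenericXRDevice,
                             TrackedPoseDriver.TrackedPose.Center);
    }
}
```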
I’ll opt to use a Canvas UI object here so that I can then add additional UI that is visible neither to the VR player nor to any spectator render targets.

That’s fun, but now that I can see myself dance, I don’t just want to iterate over preset angles; I want to be able to set my own. I want to be able to grab that camera and really show myself off. For that, I need to build a small component called the Grabber. It’s a simple system: when I press the trigger, I check for any physics objects in a small radius that are on a specific layer. While the trigger is held, I continue to update the position and rotation of any found objects to match that of the grabbing hand. Simple, but it gets the job done.

An important note about moving the camera: getting the camera tossed around like a small ragdoll can be disorienting to our spectators. If you don’t have your inner ear helping you out, it can be hard to understand jittery movement. For that purpose, all camera movements (the Grabber and Spectator Controller behaviours) contain settings for smoothing. These smoothing values, which go from 0 (no smoothing) to 1 (stays at the original position indefinitely), use linear interpolation between the original and desired camera location and orientation to smooth out any sudden movements (a minimal sketch of this follows at the end of this post). I’ve found 0.1 is generally enough, but it’s a personal preference and can depend on context, so adjust as needed.

I’ve now got everything bundled up nicely: a series of toggleable spectator cameras that can be grabbed, posed with, and presented within the VR world itself. I still need a way to make sure users know what they can manipulate, without interfering with the spectator scenery. Since I’ve got separate cameras for the spectator and the player, it’s trivial to use the camera’s layer mask to create a player-only layer and place instructions there.

It’s important to note that all these cameras get expensive. We draw the whole world twice and then re-render the spectator’s view a third time. Disabling both spectator cameras when not in use would be a useful addition. To do that, turn off both the Spectator Camera and Spectator View cameras, and the system will fall back to the original ‘render from the player’s POV’ way of spectating.

And this is where I leave it up to you. There is a grabbable, movable spectator camera, with its own in-game viewfinder and a separate UI layer for both player and spectator. Take it apart, swap out the assets, change the camera-switching behaviour and UI, and turn this project into your own. I’ve tried to keep it light and easy to dissect, with environment and visual assets that are easy to exclude, and a minimal number of custom scripts. This would be an excellent place to start looking into Cinemachine to pick the right angles and maintain a good view of the action. A crafty developer could even add more to the spectator UI and inputs and design a new asymmetric style of gameplay where the spectator can be a real participant.

What would you like to see in a good VR spectator system?
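The smoothing sketch mentioned above could be as simple as a per-frame lerp following the article’s 0-to-1 convention (component and field names here are hypothetical):

```csharp
using UnityEngine;

// Blends the camera from its desired pose back toward its current pose.
// smoothing = 0 snaps straight to the target (no smoothing);
// smoothing = 1 never leaves the original position.
public class SmoothedFollow : MonoBehaviour
{
    public Transform target;          // desired camera location/orientation
    [Range(0f, 1f)]
    public float smoothing = 0.1f;    // 0.1 is usually enough

    void LateUpdate()
    {
        if (target == null) return;
        transform.position = Vector3.Lerp(target.position, transform.position, smoothing);
        transform.rotation = Quaternion.Slerp(target.rotation, transform.rotation, smoothing);
    }
}
```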

>access_file_
1576|blog.unity.com

A new way to learn Unity: Game development from A-to-Z

New to Unity? Start building with Unity Game Dev Courses and create your own dungeon crawler game, Swords and Shovels.

Learn Unity and game development from start to finish with Unity Game Dev Courses. You’ll learn all the essential concepts of game development, then focus on the areas that appeal to you, all while creating an awesome-looking game that you can play. You’ll build your foundations in core topics like C#, cameras, animation, and lighting. You’ll also begin working with non-Unity applications necessary for the full game dev pipeline, such as Maya, 3ds Max, and Photoshop.

The courses include AAA-quality assets used in Swords & Shovels, and at each step you’ll learn new skills that will make these assets come to life. By the end of the course, you will have a working dungeon-crawler-style arena battle game. And it will look amazing!

When you sign up for Unity Game Dev Courses, you’ll get access to more than 20 hours of instructional videos, created by game industry veterans with a knack for teaching. You can learn at your own pace, practice, or review as you like. We designed these courses in partnership with Pluralsight, the enterprise technology learning platform, so now you can get the best of Unity’s content expertise delivered on the easy-to-use Pluralsight learning platform.

“At Unity we are passionate about enabling the success of our developers, so it’s important that we equip them with the tools and resources they need to bring their ideas to life. Pluralsight’s reputation as a leading learning platform makes them an ideal partner for helping game developers acquire new skills and learn new techniques that will raise the quality of creations in the gaming industry.” -- Jessica Lindl, Global Head of Education at Unity Technologies

For a limited time, you can get the low introductory rate of $12/month. Your monthly subscription gives you:
- Unlimited access to a growing library of expertly-authored Unity courses.
- High-quality Swords and Shovels assets to get you started quickly.
- Easy-to-understand courses taught by game-industry veterans.
- Lessons on the basics of content-creation tools like 3ds Max and Maya.
- Skill assessments that help you validate your skills in minutes.
- Absolutely no commitment. Period. You can cancel at any time.

Sign up now and start building!

>access_file_
1580|blog.unity.com

Trends and Challenges in User Acquisition: Q&A with Playrix's Artur Grigorjan

Artur is Head of Growth Marketing and is charged with the mission of finding unique opportunities to grow and improve the marketing efforts of Playrix, one of the biggest game developers in the world. They’re the team behind the wildly popular, top-grossing titles Township, Gardenscapes and Fishdom. Artur knows a thing or two about acquiring great users and keeping them in-app, and at this year’s ChinaJoy conference in Shanghai, he spoke to the community of local developers on the latest UA trends. Yoni Eyal, ironSource’s GM of APAC, sat down with Artur to get his take on the biggest challenges app developers are facing, and how they can ace user acquisition and find great users in the face of rising costs.

Globally, what markets are the biggest opportunities for game developers today?

Artur: Asia represents the biggest opportunity for game developers to see growth. Playrix is also experiencing growth in these areas: after the US, our next biggest markets are China, Japan and Korea. Not all western game developers see growth in Asia at first, so it’s important to approach the Asian market differently than you would at ‘home’. It’s unconventional and sounds counter-intuitive, but my advice is that it’s better to differentiate your games in the Asian markets, rather than adjusting your content to look more Asian or overhauling the game to match the Asian user. Using this approach, Playrix was able to find our niche in Asia, and now Township is the most successful farm simulation game in China.

What are some of the challenges you think app marketers are facing today?

Artur: The first challenge facing app marketers today is simply the amount of change and growth in this industry. Often, processes can’t keep up with the pace of change. The marketers who will succeed are those who can adapt their games quickly to the changing industry and make the most of the opportunities, for example by working quickly to incorporate dynamic ad units into their marketing strategies.

The second challenge resulting from the rapidly growing industry is quality user acquisition. Increasing UA costs combined with reduced user quality and reduced retention mean it’s getting harder to find quality users at an affordable cost. It’s also becoming harder to target and segment quality users, and to find out how relevant they are for your app. Game marketers should approach user acquisition knowing that perhaps only a portion of their acquired users will be high-quality or have high LTV potential. The trick is to see which traffic sources are bringing in the highest volume of quality users, and then double down on them.

The third big challenge is fraud. Of course, we have seen moves in the industry to tackle fraud, and some companies are doing better than others at fighting it. Unfortunately, it’s an issue that will probably never completely disappear, so the industry needs to work together on fraud prevention tools that will benefit both app marketers and publishers.

What are some ways that developers can fight the rising costs of user acquisition and declining retention?

Artur: One way to tackle declining retention is to work hard on your product: try to make the actual game you are marketing better all the time. In-app engagement equates to revenue, so also develop and refine in-app engagement opportunities with your users. Game developers can also differentiate their portfolio to find an open niche in the market, and acquire users this way. To do this well, developers need to be able to deploy multiple games in a short space of time. That way, if you have a user in one game, you can then offer them another of your titles and still keep that user. If you don’t have these capabilities, don’t worry. At Playrix we focus on developing a small number of titles and then constantly refining them to improve retention.

Retention across the industry will continue to decrease, so the only thing that makes a difference is the potential longevity of the product. Developers should build extensive engagement and social features into the game, and always be open to communication with their users. This is a great way to understand what users want out of your game, and potentially increase retention, so don’t ignore user feedback.

What impact have you seen at Playrix from playable ads for UA?

Artur: Playable ads are the hottest thing in the industry right now because they give the user an opportunity to become familiar with the app and the basic mechanics of the game before choosing to install. We’ve seen the “try before you buy” model work perfectly well in the commerce industry: it builds up engagement before the actual decision making, so I think we’re only going to see more of their impact in the mobile space. A challenge with playable ads that app marketers should keep in mind is that A/B testing a playable is more complex than with other ad units, so it’s important for app marketers to work out a testing methodology before progressing with this new format.

What do you think is going to be the biggest UA trend this year?

Artur: First, I think a big trend on the horizon is more sophisticated ad tech tools that will allow advertisers to run even better campaigns, and to optimize them in a more intelligent way. If we look at UA just two or three years ago, there were far fewer tools. Now the technology is only getting better, and there is even more transparency. A lot of the partners Playrix works with give us a lot of transparency: they explain how their business works and where our campaign spend is going. We’re trying to be more transparent from our side too, which is equally important in a successful partnership. Increased transparency is a win-win for everyone.

Second, the rise of programmatic will continue to be a significant trend in app marketing and user acquisition.

Finally, new creative formats are going to continue to make waves, like playables and interactive rewarded video ads. Further down the road I think we will see the rise of AR in ads; it definitely has exciting potential.

Learn more about ironSource playable ads here!

>access_file_