// transmission.log

Data Feed

> Intercepted signals from across the network — tech, engineering, and dispatches from the void.

1690 transmissions indexed — page 22 of 85

[ 2025 ]

20 entries
429|blog.unity.com

How Synergiz Harbor innovates student learning through mixed reality

About Synergiz

Founded in 2011, Synergiz is a French company specializing in intuitive, connected, and interactive solutions, with strong expertise in mixed reality. With a commitment to technological excellence, Synergiz positions itself as a key player in digital transformation, collaborating with renowned partners such as Microsoft, Meta, Magic Leap, RealWear, HTC Vive, and Apple. Synergiz supports companies in their digital projects from start to finish through a comprehensive offering of hardware, software, and service solutions, as well as creation and development.

In education, innovation has consistently been a driving force behind improved student learning outcomes. One of the key advancements in recent years has been the integration of mixed reality (MR) technology into learning environments. Harbor is at the forefront of this digital transformation, providing XR experiences that enhance knowledge sharing, training, and engagement for students. Developed with Unity, Harbor is redefining how institutions like Bâtiment CFA Bretagne run training programs.

Read on to learn how:
- Harbor can be used to create custom mixed reality scenarios or projects.
- Synergiz used Unity to build the Harbor software suite.
- A training and apprenticeship center developed two hands-on MR workshops with Harbor to improve student learning outcomes.

What is Synergiz Harbor?

Harbor is an off-the-shelf, no-code software suite designed to facilitate immersive mixed reality experiences in educational and professional training settings. With Harbor, teachers can independently create their course materials in mixed reality. By combining the physical and digital worlds, the platform delivers dynamic, interactive environments where students can learn, practice, and perfect skills using a blend of virtual and real-world elements.
Harbor is compatible with a variety of devices, including Apple Vision Pro, Microsoft HoloLens 2, Meta Quest 3, and Magic Leap 2, as well as tablets and smartphones, with plans to support additional headsets.

How Synergiz used Unity to develop Harbor

When Synergiz began developing Harbor in early 2020, the development team chose Unity Industry as its foundation. The decision was simple: Unity offered all the tools needed to bring their vision to life while aligning with the team’s existing Unity expertise. Here’s why Unity was the ideal choice:

1. Fast, efficient development
Leveraging Unity enabled rapid prototyping and shortened Synergiz’s overall development timeline. The team’s prior experience with Unity tools meant they could jump right in and spend more time fine-tuning the user experience.

2. Comprehensive XR development support
Unity’s suite of XR development solutions, including the XR Interaction Toolkit, AR Foundation, and the Unity OpenXR Plugin, enabled Synergiz to create advanced MR applications. These tools allowed for seamless integration of XR features into Harbor’s software.

3. Cross-platform readiness
The initial version of Harbor was designed specifically to support the HoloLens 1. When the Synergiz team was ready to broaden their platform support, Unity made it easy to extend Harbor’s compatibility to other AR and VR devices such as the Meta Quest 3 and Apple Vision Pro.

4. Ongoing support and training
Unity’s customer support and training programs played a vital role in overcoming technical challenges. Whether it was optimizing 3D models or supporting Universal Windows Platform (UWP) features, Unity’s expert resources kept the Synergiz team ahead of the curve with the latest XR advancements and rendering pipelines.

Using Harbor to improve learning outcomes in training programs

At Bâtiment CFA Bretagne, Harbor was used to develop educational workshops for technician apprenticeships.
Here, Erwan Gry, Electricity Trainer, collaborated with the team at Synergiz to implement two mixed reality workshops, built for the Meta Quest 3 using Harbor, into the Technician of Connected Infrastructures and Equipment diploma at CFA Morbihan.

Within the platform, the two workshops trained students in motor mechanics and electrical safety. The electrical training workshop enables students to safely engage in solo training, following a guided, step-by-step 3D electrical procedure. The motor mechanics workshop involves collaborative training around a 3D model of an asynchronous motor. The professor can modify the motor model by adding 3D animations, videos, and images for the students to interact with.

These workshops resulted in several key benefits:
- Improved student outcomes: Students move beyond theoretical study, actively participating in simulations that mirror the real-world job.
- Scalability of coursework: The professor can reuse the content indefinitely, making it faster to tailor and adapt scenarios in the future.
- Risk-free practice: Students can safely make and correct mistakes without real-world consequences, building both confidence and competence.

Feedback from the CFA has been overwhelmingly positive. Students report stronger engagement and higher confidence, while professors highlight the efficiency and adaptability of the MR-based workshops. One student from Bâtiment CFA Morbihan said, “It’s rewarding to try out new digital tools as part of our training,” while another remarked, “Integrating mixed reality gives us a more complete and also different perspective on our profession.”

“For me, it makes perfect sense to incorporate mixed reality into this technical training. I’m very proud to have invested in this technology, which is now essential to our training program.”
– Erwan Gry, Electricity Trainer at Bâtiment CFA Morbihan

Unlocking MR innovation with Synergiz Harbor

As mixed reality continues to gain momentum in education, solutions like Harbor offer a glimpse into a future where learning is not only more accessible and engaging, but also more effective at building the skills required for tomorrow’s challenges. By leveraging Unity Industry, Synergiz was able not only to build a cutting-edge MR solution but also to enable its customers to develop immersive custom experiences.

Synergiz’s impact extends beyond individual classrooms or educational training. Their broader mission is to support the creation of MR experiences across industries. Whether it’s preparing students for high-stakes technical roles or optimizing the layout of an industrial site without interfering with current operations, Synergiz is accustomed to helping organizations achieve their own digital transformation.

>access_file_
431|blog.unity.com

Split-screen and GameShare networking in Survival Kids

This summer, Unity released its first game, created in close collaboration with publisher partner KONAMI. Survival Kids is a fun-filled update to the classic kids’ game that launched as a day-one Nintendo Switch™ 2 title.

The game was built entirely on Unity 6, so the dev team was working with new software toward launching the game on a new platform – a huge challenge. On top of that, the game can be enjoyed in a variety of network configurations, so the small Unity team working on the project had to build a robust multiplayer architecture that would support these options.

Check out the first instalment of the multiplayer networking story for Survival Kids, where we share how the fundamentals behind the game’s network architecture came together. This post expands on that base to show how the team built the game’s split-screen and Nintendo Switch 2 GameShare capabilities.

Nintendo Switch is a trademark of Nintendo.

After we’d solved a lot of the problems in the game’s network architecture, we started to think about how we were going to do split screen, which isn’t supplied out of the box in Netcode for Entities. This was a different challenge: with split screen, there has to be more than one player, but those players belong to a single client.

Netcode for Entities assumes that there’s one player per client – if there’s a separate game, with a separate console connecting to it, then it has one player. When that changes and there are actually two or three players, there’s no way to send the input up for each individual player. It has to be sent up as one.

We effectively created a virtual input player that nobody can see. It’s totally invisible, but it collects all the input for all the local players, up to four of them (although in the end we didn’t do four-player split screen). It manages all the input that comes in, and then it sends all that input up to the server every frame. In the game, players don’t manage their own input.
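The virtual-input approach described above can be sketched outside Unity. Here is a minimal Python illustration (not actual Netcode for Entities code; all names are made up) of an invisible aggregator that collects each local player's input and packages it into a single per-frame message:

```python
from dataclasses import dataclass, field

@dataclass
class PlayerInput:
    """One local player's input for a single frame (illustrative fields)."""
    move_x: float = 0.0
    move_y: float = 0.0
    interact: bool = False

@dataclass
class VirtualInputPlayer:
    """Invisible 'player' that owns the input for every local player (up to four)."""
    max_players: int = 4
    inputs: dict = field(default_factory=dict)  # player slot -> PlayerInput

    def collect(self, slot: int, player_input: PlayerInput) -> None:
        if not 0 <= slot < self.max_players:
            raise ValueError(f"slot {slot} out of range")
        self.inputs[slot] = player_input

    def build_frame_message(self, frame: int) -> dict:
        # Everything goes up to the server as ONE message per frame,
        # satisfying the transport's one-player-per-client assumption.
        return {
            "frame": frame,
            "inputs": {slot: vars(inp) for slot, inp in sorted(self.inputs.items())},
        }
```

On the server side, the single message would simply be unpacked back into per-player inputs before simulation.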
The imaginary virtual input player tells them what the input is for a frame. Normally, Netcode for Entities assumes that a player is responsible for gathering its own input and using that input to drive all its movement, but here there’s this other player that doesn’t do any movement yet holds all the input for everything else.

Split screen was the main challenge from a network point of view. To avoid having a multiple-cameras problem, we started by having a second player that would run around while the camera stayed with the first player. That came together pretty quickly, but then we encountered other problems: how do you set up a second camera? How do you keep one camera on the left of the screen and the other on the right? We had to solve UI problems, too, because there’s quite a bit of UI that only one player should see. For example, if one player is in front of a log, they would see a little prompt button that says, “Hey, press X to pick up this log,” but of course you don’t want the other player to see that.

We had to figure out how to hide the UI so that if the other player is nearby, they won’t see it. We used layers for that, but that fix related more to UI than to the network. We decided that we ultimately wanted to lock the game to two split-screen players for a better gameplay experience – even on a big screen, there can only be two local players. We could do four-player split screen internally, and we kept that going for quite a while because it was a great way of stress-testing performance, since every player adds a bit more processing, a bit more rendering, another player to simulate.

One of the features we supported during development for Nintendo Switch 2 is GameShare.
You’re effectively sending a video feed to another console – really, it’s just split screen from a network perspective – except the system sends one camera’s view to another console instead of rendering it on the same screen.

Our four-player split screen was the basis for how we approached GameShare mode. We could connect as many players as we wanted, as long as the performance was okay and we could stream video to the other console. The main reason we didn’t want to do four-player split screen was screen size, really. Unless you have a massive TV, it’s really hard to see the windows – but if you have your own console, the video can stream over to that.

We pushed hard to go beyond our two-player split-screen mode so we could support an extra third player in GameShare. You can have a host and two guests while still offering players a great experience and smooth performance. We weren’t willing to lower our standards on that, but we were still able to use the split-screen architecture to power GameShare.

One really helpful feature that we added was a debug command. We have a dev menu, so you can press a button, call up the menu, and then type commands into it. This was handy because it let us run loads of debug stuff – it’s all compiled out of the final game, so nobody can do that in the version people buy and play. One of the modes that we had in split screen let you duplicate the main player, so one controller runs both players. It was a great way to test split screen without needing loads of controllers around.

The split-screen setup also effectively ran all the normal networking code. Since the players were separate from each other, the server would send information just as it does in an online game.
But it’s also possible to test whether code works in multiplayer mode without connecting to another client: just fire up split-screen mode with another controller in the Editor and play there. There’s no need to do a new build, since split screen serves as a proxy for a normal online game.

There were another two Unity tools that we found really useful, although we didn’t use them until right at the end of the project. Unity 6 includes the new Multiplayer Play Mode tools, which enable us to test without a separate player build. It takes over an hour to do a clean player build because there’s so much art and other data, so testing code with a remote player means waiting at least that long – not particularly good for iterating. Multiplayer Play Mode, though, enables you to effectively spin up another window, like another virtual version of the Editor, and connect that way.

Netcode for Entities also has Play Mode tools to simulate bad network connections. You can specify and simulate a specific level of ping – say, a 300-millisecond ping, a really horrible round trip that simulates what it would be like to play with a friend who tethered their phone to their laptop in an airport and connected to the game that way. Then you can test that in the Editor to find out how laggy or unstable it is. Sometimes the problem isn’t ping but a connection that’s losing data and dropping packets, and we could simulate that easily, too.

This testing happened all the time. For a while, we had a rule that nobody was allowed to play in the Editor with the simulator turned off – everyone had to play with some kind of simulated lag, since none of our players were going to play on a perfect connection.
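The kind of network-condition simulation described above can be modeled in a few lines. This is a minimal Python sketch (not the actual Netcode for Entities simulator; names are illustrative) of one direction of a bad link, with a fixed one-way delay plus random packet drops:

```python
import random
from collections import deque

class LossyLink:
    """One direction of a simulated bad connection: a fixed one-way
    delay plus a probability of dropping each packet outright."""

    def __init__(self, delay_ms, drop_rate, seed=None):
        self.delay_ms = delay_ms      # e.g., 150 ms each way ~= a 300 ms round trip
        self.drop_rate = drop_rate    # 0.0 = perfect link, 1.0 = drop everything
        self.rng = random.Random(seed)
        self.in_flight = deque()      # (deliver_at_ms, packet), in send order

    def send(self, now_ms, packet):
        if self.rng.random() < self.drop_rate:
            return  # packet lost in transit
        self.in_flight.append((now_ms + self.delay_ms, packet))

    def receive(self, now_ms):
        # Deliver everything whose delay has elapsed by now_ms.
        delivered = []
        while self.in_flight and self.in_flight[0][0] <= now_ms:
            delivered.append(self.in_flight.popleft()[1])
        return delivered
```

Routing every outgoing message through a pair of these (one per direction) approximates the 300-millisecond-ping, packet-dropping conditions the team tested against.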
That way, we could never fool ourselves into believing that super high-speed office broadband was representative.

In the end, all of this testing paid off – we were able to deliver a smooth, performant game at 60 fps across very different networks and multiplayer setups. Since the game’s release a few weeks ago, we’ve seen players continuing to engage online through Lobby and Relay, hopefully enjoying a seamless and robust gaming experience regardless of their home network conditions.

Check out the other instalments of our blog series deep dive into Survival Kids production:
- "Graphics and rendering tips from Survival Kids"
- "Level layout and terrain workflows in Survival Kids"
- "Inside the Survival Kids multiplayer network infrastructure"

To learn more about projects made with Unity, visit the Resources page.

>access_file_
440|blog.unity.com

Inside the Survival Kids multiplayer network infrastructure

This summer, Survival Kids launched as a day-one release for Nintendo Switch™ 2. The game was built entirely on Unity 6, marking Unity’s first-ever end-to-end development project, working closely with publisher partner KONAMI.

Developing for a new platform on day one is a huge challenge, but the small internal team that built this project included seasoned Unity developers, many of whom have been working in Unity and on games for decades. This blog is part of an ongoing series diving into how the game was made and how this work fueled Unity’s commitment to production verification, plus lessons other Unity gamedevs can take and apply to their own projects. This is the first instalment of an ongoing behind-the-scenes series digging into team lessons from working on Survival Kids.

Nintendo Switch is a trademark of Nintendo.

Survival Kids was built by a very small team within Unity. The core group was about 10 developers of various disciplines (artists, engineers, and designers). At our peak, we were around 20 as people from other Unity teams came onboard. For example, Steven, our rendering engineer, worked with us a lot, but he wasn’t always on the project.

As a small team, we had some advantages, though. The engineers were vastly experienced – most of us have been writing games for 20-odd years, mostly in the AAA space, so we’ve learned a lot of lessons and made a lot of mistakes. And of course we’re really experienced in Unity, because most of us have been here for some time.

Some of us have also worked on customer projects as part of Unity support teams like Professional Services/Accelerate Solutions, now Unity Studio Productions. We advise customers on how to optimize their projects and even embed with project teams to work alongside them and help solve their hard technical problems, so we’re quite well-versed in the mistakes that studios often make and how to fix them.
Working on Survival Kids, we could architect the project and put it on the right path from the start because we knew where all the pitfalls would be, and that saved us a lot of time and resources.

Today, I want to dig into the game’s network architecture. We used Unity to drive multiplayer networking, and Survival Kids offers players a number of different ways to play the game, all from the same networking base. So let’s dive into how this came together – hopefully some of it can help you in your projects, too.

Survival Kids can be played a few ways: single player, local co-op, and online with friends. On the Nintendo Switch™ 2, players can also use GameShare to stream the video to another Nintendo Switch 2 or even an original Switch, then play multiplayer with someone on the TV or a device, which is really cool.

We wanted our setup to drive all of that and other combinations. For example, you could have two players playing split screen on one television that’s connected to another two players playing split screen on a different TV – four players using two devices. That flexibility was something we really wanted to design into the architecture, to enable play in lots of different ways.

To do this, we decided on Netcode for Entities. Once we’d pitched the concept for Survival Kids to KONAMI, we went straight into prototyping to find the fun for our multiplayer game. We used an existing project as a launch point, one that I’d written previously as a proof of concept for using Netcode for Entities as a backend network, with a GameObject layer written on top of it to take advantage of Prefabs and animations.
Not everyone on the team had experience working with Entities, so we decided to use GameObjects and MonoBehaviours together. We also wanted to keep the gameplay logic in GameObjects and MonoBehaviours because they make it really easy to prototype – this setup lets you throw things together, write scripts, download scripts off the internet, or use Asset Store packages for prototyping. We wanted that fast iteration and freedom, but we also liked that Netcode for Entities gave us a performant network layer. I’d already used it on a few customer projects and personal research projects, so I knew that its quality could drive the level of gameplay we wanted.

When we first started, about three years ago, Netcode for GameObjects existed, but it still lacked a few of the features we wanted, especially client-side prediction. With client-side prediction, if there’s ever a lag between the server and client, the client predicts what the server is going to do and does it instantly – so players’ controls feel responsive even when there’s lag. You don’t have to wait for the server to tell you that a player has moved or what have you – you’re already doing it. That’s something that Netcode for Entities had from the start.

For prototyping, we basically grabbed a project we already had and jumped in. We started with simple things – picking up objects, chopping down trees – and gradually fleshed out what some of the gameplay would be. We were still prototyping, so we didn’t worry about code quality too much. We were trying to find the fun and looking at our game pillars, including “survival for everyone.” We wanted a survival game, but we didn’t want it to be super hard or punishing – we were trying to distill what’s really fun and exciting about this genre. We asked ourselves: What do people love about crafting and resource gathering? What don’t they care about?
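The client-side prediction idea mentioned above can be sketched concisely. This is a minimal, Unity-agnostic Python illustration (all names are made up) of a client that applies its own input immediately and reconciles when an authoritative server update arrives:

```python
class PredictingClient:
    """Applies input locally the instant it happens, then reconciles
    against the server's authoritative state when it arrives."""

    def __init__(self):
        self.position = 0.0
        self.pending = []  # (sequence, dx) inputs not yet confirmed by the server

    def apply_input(self, sequence, dx):
        self.position += dx              # predict: move now, don't wait for the server
        self.pending.append((sequence, dx))
        return self.position

    def on_server_state(self, server_position, last_processed):
        # Discard inputs the server has already applied, rewind to its
        # authoritative position, then replay everything it hasn't seen yet.
        self.pending = [(s, dx) for s, dx in self.pending if s > last_processed]
        self.position = server_position
        for _, dx in self.pending:
            self.position += dx
        return self.position
```

When the server agrees with the prediction, the replayed position matches what the player already saw, so there is no visible correction; when it disagrees, the client snaps to the corrected state and replays its unconfirmed inputs on top.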
Answering those questions helped us define how players get resources, how they move them from one place to another, and how they craft. We figured all that out by prototyping and iterating quite quickly with GameObjects and MonoBehaviours.

Because we started from that little proof-of-concept demo, we could connect by internet address right from the word go. It was possible to connect using a computer’s IP, but we also used Unity’s Relay service, which lets you host a game on a Relay server in the cloud. With Relay, anyone can join that game using a join code, and people can connect from home or the office without a VPN or a known IP. That meant we could get into a rhythm of weekly playtests – and we were doing them at work and on our home networks, which let us stress-test our network architecture alongside the gameplay with all kinds of different connection speeds. In the end, we kept Relay in production.

We tried to stay as close to the publicly released packages as possible. If we found a bug in one of the packages, we’d identify it, bring the package locally, and try to fix it. Sometimes we’d then go to Slack and message Unity’s Netcode team to explain the problem and our fix so they could take that and do the PRs – and sometimes get it into the final version. We weren’t necessarily involved in the fix, but by working in a production environment we found some issues that they hadn’t yet (although sometimes they already had a better fix than whatever we’d come up with, or they’d tell us we were using it wrong).

Because we developed this way, remotely through Relay, we didn’t add an offline mode until later, close to release. The offline mode doesn’t open any network sockets; it uses something called an in-process driver. It effectively behaves like a network, with a server and a client, but they execute in the same process and communicate with one another. Instead of sending data through the network, the server sends it directly to the client.
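An in-process driver of this kind can be sketched as a loopback transport. The following is a hypothetical Python illustration (not Unity Transport or Netcode code; names are invented) of a server and client exchanging messages through in-memory queues instead of sockets, while the per-frame flow stays the same as online play:

```python
from collections import deque

class InProcessDriver:
    """Loopback 'network': server and client live in the same process
    and exchange messages through queues, with no sockets opened."""

    def __init__(self):
        self.to_server = deque()
        self.to_client = deque()

    def client_send(self, msg):
        self.to_server.append(msg)   # delivered directly, no bytes on the wire

    def server_send(self, msg):
        self.to_client.append(msg)

    def server_poll(self):
        return self.to_server.popleft() if self.to_server else None

    def client_poll(self):
        return self.to_client.popleft() if self.to_client else None

def tick(driver, server_state):
    """One frame of the same server/client flow an online game would run."""
    msg = driver.server_poll()
    if msg is not None:
        server_state["last_input"] = msg    # server simulates using client input
    driver.server_send(dict(server_state))  # server broadcasts a state snapshot
    return driver.client_poll()             # client consumes the snapshot
```

Because the message flow is unchanged, single-player mode exercises exactly the same gameplay path as an online session, at the cost of simulating both sides each frame.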
It’s called an in-process connection, and it’s very fast because you don’t have to wait for actual bytes to travel across the network, but it goes through all of the same flow as our gameplay does. Working this way, we didn’t need to code a different version – this is both our single-player mode and our multiplayer mode. Single player and offline are still a network game; we just don’t use the network – it all happens internally.

This basically meant we had one code architecture that we could use everywhere. The cost, though, is that when you’re hosting or playing single player, you’re simulating both the server and the client, which creates the performance challenge of running both at the same time. With dedicated servers, the server can go off and live in a server farm somewhere, so all you need locally is the client, which makes everything look nice and responds to whatever the server communicates. But in single player, since we’re simulating, the game has to do both – the server can’t just sit off somewhere on its own hardware.

That ended up being one of our biggest performance challenges: optimizing so that the server and client could sit in the same game, in the same frame, and still hit our 60 frames per second target at a good resolution. That target was really important to us.

Check out the other instalments of our blog series deep dive into Survival Kids production:
- "Graphics and rendering tips from Survival Kids"
- "Level layout and terrain workflows in Survival Kids"
- "Inside the Survival Kids multiplayer network infrastructure"

To learn more about projects made with Unity, visit the Resources page.

>access_file_