Digital Sibling for Virtual Worlds: How autonomous is reality?

“Digital Sibling” is a handy term, because it makes things easier: Intentionally or not, it leaves room both for identical replicas and for merely similar recreations – primarily of real existing products, vehicles, or places such as streets and cities – in a three-dimensional environment, these days often built with Unreal Engine 5 or Unity (for background, we recommend our interview with Stefan Wenz from Epic Games).

However, before 3D artists have digitized and optimized test tracks – for example, for machine learning with the sensor technology of modern, partly autonomous cars – a long and costly road lies ahead, and the result is often hard to adapt individually, let alone quickly. In this article about AVES Reality, a start-up from Garmisch-Partenkirchen founded by Florian Albert, we explain how its business model tackles the question of speed and scalability, the approach behind it, and, of course, what the automotive industry stands to gain.



Digital Sibling vs. Digital Twin – Alike, but Not the Same?

The development of autonomous vehicles in particular is an enormous challenge that goes far beyond the purely technical; countless aspects pile onto what the vehicle itself has to do: sensors must record, camera images must be converted, data must be communicated – and decisions must be made. Whom does the car brake for in case of doubt? How do you rule out misclassifications like those that repeatedly make sensational headlines with Tesla vehicles? So how do you manage the challenges of creating virtual test worlds – and what exactly do these challenges look like?

Florian Albert: “The challenges of creating virtual worlds for testing autonomous vehicles are manifold. On the one hand, data from a wide variety of sources must be merged and taken into account. For example, information on the course of roads must be combined with topographic elevation models, building footprints, and vegetation zones. On the other hand, a large part of the work steps for 3D modeling is still done manually today using different graphics software and design tools. This obviously doesn’t allow for the creation of scalable and infinitely large 3D worlds that would be necessary for comprehensive, virtual testing of autonomous vehicles.”

Florian Albert, founder and owner of AVES Reality, not only develops Digital Siblings of real-world environments to create virtual test environments – he also found time to answer questions from the Mobility Rockstars editorial team

So we now also know that creating a Digital Sibling requires extensive simulation of countless test scenarios – and simply simulating city models is not enough. The sensors have to be integrated, and to train them correctly, the virtual environment has to match real physical properties: digitally designed autonomous test tracks must be based on real conditions, the visual stimuli must be right, and countless test scenarios, including rare edge cases, need to be considered. And what if circumstances change? The same test track, but with rain? Or potholes? With houses along the street whose paint reflects more or less strongly in sunlight and could confuse the camera? Until now, this has often required laborious manual rework by a 3D artist, which drives up costs and can delay projects.

A “digital twin” – a 1:1 replica of a real street in a busy city that is to serve as a test track – therefore cannot always meet every requirement of every test scenario. Orwell already knew that equal is not always equal. AVES Reality takes a different approach: test tracks are not rebuilt completely from scratch in Unreal (or any other graphics engine); instead, artificial intelligence generates 3D environments from satellite images. Because these virtual images deliberately deviate from an identical 1:1 copy, AVES Reality prefers to call its 3D worlds “Digital Siblings” rather than “Digital Twins”.

Taken from the air – AVES Reality’s Digital Sibling for test tracks

To implement a Digital Sibling, AVES Reality starts from a satellite image of the area the customer wants digitized as a test track. The area captured in these aerial photographs is then processed via AI analysis: Where are plants, bushes, and trees? Which data sources can be used to create a realistic image of the desired test area?

Once this data is available and evaluated, an algorithm-based 3D reconstruction of the images is performed, and at the end the customer receives the virtual lookalike – the Digital Sibling of the area – which can then be “populated” as desired with, for example, road users, the digital twin of the test vehicle, road markings, traffic signs, or weather effects.
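The three stages described above – aerial capture, AI analysis, and populated 3D reconstruction – can be sketched in miniature. The following Python snippet is purely illustrative: all names, thresholds, and data structures are our own assumptions, not AVES Reality’s actual pipeline.

```python
# Toy sketch of the pipeline: satellite pixels -> land-cover
# classification -> placeholder 3D scene objects -> population.
# Everything here is an illustrative assumption, not AVES code.

from dataclasses import dataclass

# Toy "satellite tile": each cell holds a greenness value in [0, 1].
SATELLITE_TILE = [
    [0.9, 0.8, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.1, 0.0],
]

@dataclass
class SceneObject:
    kind: str        # e.g. "tree" or "road"
    position: tuple  # (row, col) in tile coordinates

def classify(greenness: float) -> str:
    """Stage 1 stand-in for the AI analysis: label each pixel."""
    return "vegetation" if greenness > 0.5 else "road"

def reconstruct(tile) -> list[SceneObject]:
    """Stage 2: turn labelled pixels into placeholder scene objects."""
    scene = []
    for r, row in enumerate(tile):
        for c, value in enumerate(row):
            kind = "tree" if classify(value) == "vegetation" else "road"
            scene.append(SceneObject(kind, (r, c)))
    return scene

def populate(scene, extras):
    """Stage 3: add road users, signs, weather effects, etc. on request."""
    return scene + [SceneObject(kind, (0, 0)) for kind in extras]

scene = populate(reconstruct(SATELLITE_TILE), ["test_vehicle", "traffic_sign"])
print(sum(obj.kind == "tree" for obj in scene))  # prints 3: one tree per green cell
```

In a real system, the classification step would of course be a trained model over multispectral imagery rather than a threshold, but the data flow is the same.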

Creating digital renderings from satellite images is nothing new: Google takes a slightly different route with Street View 3D, but follows a similar approach and is likewise aimed at testing autonomous vehicles (Image source: Google/The Decoder)

The advantages of this approach are obvious: Unlike in the past, test routes no longer have to be laboriously modeled by hand. Large areas can be captured quickly and easily via satellite image, digitized – that is, virtualized – via AI, and later edited at will: The house on the corner should be yellow instead? Not a residential building at all, but a mirrored-glass bank? Or gone altogether?

Such wishes could certainly be accommodated before, more or less easily – but the speed and scalability of the Digital Sibling that AVES Reality promises are new to the industry and a unique selling point of the still-young company.

Florian Albert: “With our technology, we offer an unprecedented combination of geospatial data analysis, mapping and 3D reconstruction, which is specifically tailored to the needs of the automotive industry. We incorporate road information from so-called HD maps, design buildings and landscapes based on customizable parameters, and thus offer our customers maximum flexibility in the variation of virtual worlds. Ultimately, we then provide the industry with a very performant and automatable interface to effortlessly export the 3D content we generate into game engines or automotive simulation tools.”

An implementation within a few hours – under favorable circumstances even within minutes – compared to waiting times for virtual test environments that previously sometimes stretched over months certainly sounds promising and could bring a previously unknown benefit to the industry. No more going out on the street to capture environments and laboriously virtualizing them afterwards, global applicability instead of local streetscapes, and rapid adaptation of otherwise rigid 3D models: It sounds almost too good to be true.

Hurray hurray, the digital world is here – but can you rely on it?

Even a first contact with the data science industry – which, by the way, is far more exciting than its opening sentences sometimes suggest – teaches newcomers that any model is only as good as its underlying training data. So how much can you rely on the data generated in the virtual 3D world of the Digital Sibling, which in turn is used to train a virtual autonomous vehicle?

Florian Albert: “Ground truth data is of particular importance in the development of self-driving cars. This data is to be considered true and accordingly serves as benchmarking for the results predicted by an AI system. In order for this data (e.g., images from a vehicle’s front camera) to be machine-readable, it must be annotated or labeled very precisely. This process of assigning a class or meaning to each object, or even to each image pixel, is very laborious, usually manually driven, and time-consuming for real-world collected test data. One of the biggest advantages of virtual worlds is the automatically included ground truth information for each object, since all objects of a virtual scene are known and based on software code. If these virtual scenes are now procedurally generated based on parameters, as in AVES Reality, then very large amounts of data and very many variants of this valuable data can be generated in an instant, which is essential for training a robust AI system.”
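The point about “automatically included ground truth” can be illustrated with a toy example: because every object in a virtual scene is known to the software, a pixel-perfect label mask falls out of the scene description for free, with no manual annotation. The scene layout and labels below are invented for illustration.

```python
# Minimal illustration (our own assumptions, not AVES code) of "free"
# ground truth in virtual worlds: every object in the scene graph is
# known, so a label mask can be emitted alongside each rendered frame.

# A toy scene: axis-aligned boxes, each with a class label.
SCENE = [
    {"label": "car",  "x0": 1, "x1": 3, "y0": 1, "y1": 2},
    {"label": "tree", "x0": 4, "x1": 5, "y0": 0, "y1": 3},
]

def render_ground_truth(width: int, height: int):
    """Rasterize the scene into a per-pixel label mask."""
    mask = [["background"] * width for _ in range(height)]
    for obj in SCENE:
        for y in range(obj["y0"], obj["y1"] + 1):
            for x in range(obj["x0"], obj["x1"] + 1):
                mask[y][x] = obj["label"]
    return mask

mask = render_ground_truth(6, 4)
print(mask[1][2])  # prints "car" - the label comes straight from the scene description
```

With real-world camera footage, producing this same mask would mean a human (or a human-verified model) labeling every pixel by hand – exactly the laborious step the quote describes.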

That this saves the automotive industry effort and time in handling its gigantic volumes of data would be an undeniable advantage over conventional methods of providing environment imagery in game engines for testing new vehicle developments.

Whoever says A must of course also say B – translated into a somewhat more concrete statement for test engineers: whoever says “testing” must also say “automation”. Gone are the days of 200 test vehicles with professional drivers covering millions of miles, documenting trigger events partly automatically and partly manually, and sending petabytes of data through complex labeling pipelines. Ideally, each test scenario runs automatically, independently designing and executing variations of itself – allowing millions of test scenarios to be covered in the shortest possible time and generating training data of outstanding volume and, as already noted, quality.
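The “testing means automation” idea can be made concrete with a tiny sketch: scenario variants are enumerated from environment parameters rather than driven on real roads. The parameter names and values here are illustrative assumptions; a production system would draw them from requirements catalogs and edge-case libraries.

```python
# Hedged sketch of automated scenario variation: every combination of
# environment parameters becomes a test scenario. Parameter names and
# values are illustrative assumptions, not a real tool's vocabulary.

from itertools import product

WEATHER = ["clear", "rain", "fog"]
TIME_OF_DAY = ["day", "dusk", "night"]
ROAD_SURFACE = ["dry", "wet", "potholes"]

def generate_scenarios():
    """Enumerate every combination of the environment parameters."""
    for weather, tod, surface in product(WEATHER, TIME_OF_DAY, ROAD_SURFACE):
        yield {"weather": weather, "time_of_day": tod, "surface": surface}

scenarios = list(generate_scenarios())
print(len(scenarios))  # prints 27: three 3-value parameters already yield 27 variants
```

Three parameters with three values each already yield 27 scenarios; add a few dozen parameters (traffic density, sun angle, sensor noise, …) and the combinatorics reach the “millions of test scenarios” mentioned above almost immediately.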

Looking at AVES Reality’s concept, one of the hardest questions seems to be why nobody came up with the idea earlier – in terms of scalability, speed of implementation, and ground-truth quality, the concept of the Digital Sibling of real test environments definitely looks forward-looking. Speaking of the future:

All just a gimmick? Where are AVES Reality and the Digital Sibling headed in the coming years?

Florian Albert: “At the end of the day, we don’t want to limit our technology to the automotive industry. Creating large 3D virtual worlds based on real-world data can be exciting for many fields. For example, in gaming, it could be extremely cool in our imagination for many users to play car racing games in their own hometown instead of on pre-made standard tracks that are already there in the game.”

Extremely realistic depictions of real-life situations such as city traffic have long been familiar from the gaming industry. This image from the upcoming “Cities: Skylines 2” vividly shows the considerable groundwork laid by the games industry (Image source: Paradox Interactive)

Expertise for this can certainly be built up within the current concept. Organic and realistic (because real) 3D images of the world, in which large areas are built up procedurally (thanks to considerable computing power), available to the user with practically one click? The concept is not entirely unknown and has parallels to Epic’s Quixel Megascans – high-resolution assets that can be dragged and dropped into virtual environments and, in combination with MetaHuman and various scripting kits, also pave the way into game development for non-programmers. The scalability of the Digital Sibling is what should make AVES Reality interesting for one or the other board level – after all, processing time and personnel expenditure can be saved here, which can mean leaner processes and thus cheaper production.

In addition, some of the processes are – as far as can be judged from a distance – already close to game development; after all, the large, procedurally generated areas, even once populated with traffic of every kind and fitted with the sensors under test, still need to run smoothly at 60 FPS, better yet 120 FPS. It is therefore little surprise, as Florian revealed to us, that AVES Reality also employs people from the gaming sector.

Florian Albert: “The increasing use of 3D engines, virtual reality and other applications that we are actually familiar with from the gaming or entertainment industry requires a lot of new know-how for the automotive industry. This is where classic vehicle technicians, mechanical engineers and even computer scientists sometimes reach their limits. To master this variety of challenges in combining gaming and automotive, our team consists of a number of experts with a wide range of backgrounds, from game development to AI development to geoinformatics and systems engineering.”

There is little to add to this. If you are interested in AVES Reality’s Digital Sibling concept, you will find the whitepaper attached as a PDF below, which gives you an overview of how the Digital Sibling works and what lies behind it. We also link to our testing section on the blog, where more exciting articles on automotive testing await you – for example, and this cross-connection is extremely fitting, on why there would be no autonomous driving without the gaming industry. If you have further questions, feel free to contact us at any time via the contact form or reach out to Florian Albert directly; you will find the details in the whitepaper.

Whitepaper: AVES Reality – Virtual World Creation
So how does it actually work, creating virtual test environments as Digital Siblings of real landscapes? This whitepaper, provided directly by AVES Reality, explains the company’s vision of a maximally scalable process for creating virtual worlds.