"What is an A.I. anyway?"
Technically, we know the answer: an LLM, a language model, artificially created pseudo-intelligence, enriched with training data. We talk about machine learning, prompt engineering, algorithms and clouds. But what is an A.I. really? Is it not rather an "extended life version", a whole new species? After all, in the series Michael Knight didn't get into a Pontiac Trans Am with an onboard A.I. - he got into KITT; KITT was his buddy. Tony Stark doesn't talk to assistance software for trajectories and status calculations - he talks to JARVIS ("Just A Rather Very Intelligent System") or FRIDAY and downright flirts with the sophisticated language model. And away from science fiction on the big screen, this continues: we don't activate A.I. on the iPhone, we activate Siri; we ask Alexa to order something; and new smart cockpits can simply be given a name by the user. "Hello Helmut" is not a vision but an everyday Helmut that supports us in road and data traffic - and we are not afraid of nameless computers, but of "Skynet". This list could be continued, these days almost arbitrarily.
"VERA" (not the official name) joins this line seamlessly: a virtual companion aboard autonomous vehicles, in public transportation, and perhaps eventually in the end user's personal vehicle. So is the name the program, and the programming the name? Will A.I.s become personal and ubiquitous everyday helpers that handle everything from predictive maintenance (detect road temperature -> predict tire change -> proactively book an appointment at the repair shop in sync with the shared calendar?) to the finer points of interpersonal interaction, such as remembering a wedding anniversary? Do they recognize when we are unwell, tired, or ought to get moving? Does the A.I. give dating tips while driving?
What is certain is that the "Very Enhanced Road Assistant" takes a new step in the unprecedented development of generative A.I. in vehicles and beyond. This article describes what the Very Enhanced Road Assistant is and what it can do, and offers interviews and technical background: Enjoy!
Marc
Marketing Professional
13.10.23
Approx. 16 min
Very Enhanced Road Assistant: Really picking up passengers
There is general agreement that the introduction of autonomous vehicles to the global automotive market is an enormous step in the development of tomorrow’s mobility. It also means fundamentally more than mere technological progress; it ushers in a new chapter in public transport. The rollout promises a scenario in which autonomous vehicles are not a mere novelty, but form the backbone of urban transportation. For such a scenario to become realistic, seamless and inclusive transportation is essential. But it must also literally and proverbially “pick people up”: physically, but also in the sense of building bridges to people who may have reservations about high-tech innovations in everyday life. The ubiquitous rise of generative A.I. in almost every field is an example of this – but it also represents part of the solution.
In this context, Cognizant Mobility is presenting – for the first time at IAA Mobility 2023 – its latest A.I. system, the “Very Enhanced Road Assistant”. This system shows what the future of ride assistance could look like in public transportation, especially in autonomous vehicles. The A.I. system can interact personally and individually with passengers, provide driving information, and is even expected to recognize medical emergencies in the future.
Here, too, a justified question can be asked: “But what is really new? Generative A.I.s are more present than ever on the net!” The answer is almost surprisingly simple: the multimodality of the Very Enhanced Road Assistant is as much a novelty as the Assistant itself. The multimodal approach brings together what until now has not necessarily belonged together: the Road Assistant is not just a chat window with a camera – the system can see passengers, recognize facial expressions and react to them. It can perceive speech and interpret commands and questions, can scan tickets, and is capable of learning to interact optimally and ever more personally with passengers. This combination of individual services into an overall experience for passengers is almost as much of a novelty as the finished Very Enhanced Road Assistant itself – we go into more detail about this in the following paragraphs.
Very Enhanced Road Assistant: Sustainability and Smart Cities
The overall evolution of virtual assistants based on generative A.I. takes the user experience to a new level that is inclusive and motivating. After all, it is above all a pleasant user experience that encourages the use of public transport – and this brings far-reaching benefits once it reduces individual traffic: parking spaces in inner-city areas can be used more sustainably and socially sensibly, road space can be reduced and infrastructures can be streamlined. Admittedly, this is not the achievement of A.I.-based ride assistance alone, but it is an important building block on the way to modern and sustainable mobility.
The perspective of Dr. Daniel Isemann (Head of Data Science and AI) underlines that virtual passenger assistance also offers many advantages from an industry point of view: A.I. will not only be put to sensible use in public transport, but will also be made available as a smart driving assistant in vehicles of all kinds.
Head of Business Development Michael Pollner outlines a potential that may still sound like science fiction today, but could become reality in the historic “tomorrow”: In the near future, A.I.-based assistance systems could become the first and most important interface to people and users. The A.I. and its avatar become the vehicle itself – think of “Knight Rider” and its K.I.T.T. (“Knight Industries Two Thousand” as the model designation), popular with young and old alike. The A.I. in the vehicle becomes more and more attuned to the personal needs of each individual user, literally reading their wishes from their lips, booking parking spaces and movie tickets, and reminding them of their wedding anniversary, including gift suggestions selected according to their individual tastes. It also takes care of the reservation for the winter tire change, since this was done the same way the year before and the process can thus be automated in a user-friendly way.
“We will see, in a sense, personal relationships between people and their A.I. systems, as they will use the entire lifetime context for ever deeper bonding and adaptation,” Pollner adds. The fact that data is collected with the consent of the user in order to offer a customer-friendly overall experience will not only raise the experience in individual traffic to a new level. A system like the Very Enhanced Road Assistant offers the obvious opportunity to act as a virtual life assistant, not only optimizing the vehicle, but becoming a personal concierge for all life situations. Coupled with the IoT, Industry 4.0, smart cities and autonomous vehicles, the possibilities are – if you’ll forgive the brief flight of fancy – breathtakingly diverse.
Of course, the many potential use cases from sales to after-sales are of fundamental importance for the industry. After all, as previously indicated, information is exchanged under previously agreed conditions, which can benefit the customer and improve customer-oriented services. Today in autonomous public transport, tomorrow in an individual vehicle, and the day after tomorrow on the smartphone, always and everywhere? An opportunity to no longer sell customers products, but experiences. You don’t have to be a marketing expert or read studies to realize that a positive and lasting experience is a stronger driver than any sales promise.
Very Enhanced Road Assistant: Win-Win for the Passenger
The Large Language Model (LLM) underlying the Very Enhanced Road Assistant is a technological highlight that gives an A.I. the ability to interpret and produce high-quality human language. Cognizant Mobility has created technologies for this purpose that connect speech-to-text, 3D avatars, language models/LLMs, vector converters for preparing web-based information, and specialized databases for rapid information retrieval. This is a challenge in the development process that should not be underestimated: what outsiders often fail to perceive at first glance is the fact that there were no plug & play solutions here. All solutions had to be adapted, interfaces had to be developed partly from scratch, and language models had to be tested, selected and trained (we also recommend our series of articles on Machine Learning). Which backend database is suitable for existing and upcoming demands, and how vector converters could be integrated to enable the processing of information from the web, also had to be explored and implemented in order to create a resilient generative A.I. that could serve – and convince – as a driving companion.
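To make the interplay of these modules a little more tangible, here is a minimal sketch of how such an orchestration could look in principle. All class and method names are hypothetical and chosen purely for illustration; they do not reflect Cognizant Mobility’s actual interfaces.

```python
from dataclasses import dataclass
from typing import List, Protocol


# Hypothetical module interfaces, one per building block named in the text.
class SpeechToText(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class VectorStore(Protocol):
    def search(self, query: str, top_k: int) -> List[str]: ...

class LanguageModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class TextToSpeech(Protocol):
    def synthesize(self, text: str) -> bytes: ...


@dataclass
class RideAssistantPipeline:
    """Illustrative orchestration of one voice turn: audio in, audio out."""
    stt: SpeechToText
    store: VectorStore
    llm: LanguageModel
    tts: TextToSpeech

    def handle_turn(self, audio: bytes) -> bytes:
        # 1. Speech-to-text: turn the passenger's utterance into text.
        question = self.stt.transcribe(audio)
        # 2. Retrieve curated facts (routes, schedules, ticket rules, ...)
        #    from a maintained database instead of relying on the LLM alone.
        facts = self.store.search(question, top_k=3)
        # 3. Build a grounded prompt and ask the language model.
        prompt = (
            "Answer using only the facts below.\n"
            + "\n".join(f"- {f}" for f in facts)
            + f"\n\nPassenger: {question}\nAssistant:"
        )
        answer = self.llm.complete(prompt)
        # 4. Text-to-speech: voice the answer back (and drive the 3D avatar).
        return self.tts.synthesize(answer)
```

The point of the sketch is the wiring, not the individual components: each block can be swapped or specialized per provider, which is exactly why the interfaces had to be developed rather than bought off the shelf.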
Above all, the keyword “information” is one of the specific points to be considered in the development of the A.I., since the responses of the driving companion must be reliable and must in no case lead to the “hallucinations” often observed in generative A.I. – that is, to information that the A.I. considers correct based on probability, but which is ultimately incorrect. The Very Enhanced Road Assistant uses a method specially developed by Cognizant Mobility in which the answer is not based on the probability of the next word being correct, but on facts. The underlying databases are optimized and maintained by Cognizant Mobility specifically for the desired use case, which greatly increases the likelihood of receiving a correct, fact-based answer. This way, users can be sure to receive correct and targeted answers. This process of enriching the language model with appropriate data is a USP of the Road Assistant and Cognizant Mobility.
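A toy example can illustrate the principle of grounding answers in maintained facts rather than in next-word probability. The fact entries, the matching heuristic and the refusal message below are invented for illustration; a production system would use embeddings and a vector database rather than simple word overlap.

```python
from typing import List, Optional

# Invented stand-in for a curated, use-case-specific fact base.
FACTS: List[str] = [
    "Line 42 runs every 10 minutes between Central Station and Airport.",
    "Tickets can be scanned directly at the on-board camera.",
    "The vehicle is fully electric and charges at the depot overnight.",
]

def retrieve_fact(question: str, facts: List[str]) -> Optional[str]:
    """Return the best-matching curated fact, or None if nothing matches well."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(f.lower().split())), f) for f in facts]
    score, best = max(scored)
    return best if score >= 2 else None

def answer(question: str) -> str:
    fact = retrieve_fact(question, FACTS)
    if fact is None:
        # Refusing is preferable to hallucinating a plausible-sounding answer.
        return "I don't have verified information on that yet."
    return fact

print(answer("How often does line 42 run to the airport?"))   # grounded answer
print(answer("Which restaurants are open near the harbour?"))  # honest refusal
```

The essential idea carries over to the real system: the answer is selected from maintained, verified content, and when no supporting fact exists, the assistant says so instead of guessing.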
Generative A.I. in the vehicle: What is true must remain true
It is precisely this experience in dealing with language models and training data – backed by a high level of expertise in machine learning, data science and connectivity – that is of particular importance for the further development of virtual driving assistance. Providers who want to use the Road Assistant in their own vehicles may have different wishes for the A.I.: it may, for example, need to make recommendations for city explorations with maximum reliability, and no mistakes should be made when booking tickets, flights or accommodation either. Depending on the use case, the system must be specialized and trained with appropriate data to provide fact-based answers in any desired area without machine hallucination – thus ensuring user acceptance in the first place. Even the most modern development is wasted effort if users are put off by fuzzy or even wrong answers from an A.I. Modifying the underlying language model accordingly, and thus guaranteeing a versatile and flexible scope of application, is one of the special features for which the Very Enhanced Road Assistant was designed from the very beginning. To increase the reliability of A.I. responses for in-vehicle use, the Very Enhanced Road Assistant uses what is called “prompt engineering”. Together with a maintained vector database, this prompt engineering – i.e., the construction of the input and the analysis of the data obtained – ensures that the A.I. language model takes the user context into account and largely avoids false statements.
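As an illustration of what such prompt engineering can mean in practice, the following sketch assembles a prompt from user context, retrieved facts and explicit rules. The structure and wording are assumptions made for this example; the actual prompts used in the Very Enhanced Road Assistant are not public.

```python
from typing import Dict, List

def build_prompt(user_context: Dict[str, str], facts: List[str], question: str) -> str:
    """Assemble a grounded prompt for an in-vehicle language model (illustrative)."""
    context_lines = "\n".join(f"- {k}: {v}" for k, v in user_context.items())
    fact_lines = "\n".join(f"- {f}" for f in facts)
    return (
        "You are a ride assistant in an autonomous shuttle.\n"
        "Rules:\n"
        "- Answer only from the facts listed below.\n"
        "- If the facts do not cover the question, say so instead of guessing.\n"
        "- Keep answers short enough to be spoken aloud.\n\n"
        f"Passenger context:\n{context_lines}\n\n"
        f"Verified facts:\n{fact_lines}\n\n"
        f"Passenger question: {question}\n"
        "Assistant answer:"
    )

print(build_prompt(
    user_context={"destination": "Airport", "language": "en"},
    facts=["The next stop is Central Station, arrival in 4 minutes."],
    question="When do we arrive at the next stop?",
))
```

The combination shown here – explicit rules, injected user context and facts pulled from a maintained vector database – is what keeps the model anchored to verifiable information rather than free-running text generation.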
Beyond voice, the Very Enhanced Road Assistant can communicate with travelers through other channels: thanks to multiple cameras, the system can actively engage passengers and scan relevant information to better reflect the context of a trip in conversations. This advanced capability has been strengthened by the integration of audio, video and scanning techniques, which in the future will also enable the A.I. to adapt its responses to users’ nonverbal communication. Once again, the connectivity of the individual modules (cameras, sensors, text-to-speech, etc.), developed in-house from the ground up, played an important, in some cases even elementary, role here.
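How such nonverbal signals might feed back into the spoken reply can be sketched roughly as follows. The signal names, labels and adaptation rules are invented for illustration and stand in for whatever the in-house perception modules actually report.

```python
from dataclasses import dataclass

@dataclass
class PassengerObservation:
    """Hypothetical nonverbal signals as a perception module might report them."""
    expression: str      # e.g. "neutral", "confused", "distressed"
    ticket_scanned: bool

def adapt_reply(base_answer: str, obs: PassengerObservation) -> str:
    """Adjust the spoken reply using nonverbal context (illustrative rules only)."""
    if obs.expression == "distressed":
        # A future version could trigger a medical-emergency escalation here.
        return "You don't seem to be feeling well. Should I contact support? " + base_answer
    if obs.expression == "confused":
        return base_answer + " Would you like me to repeat or explain that?"
    if not obs.ticket_scanned:
        return base_answer + " Please remember to scan your ticket."
    return base_answer

print(adapt_reply(
    "We arrive at Central Station in 4 minutes.",
    PassengerObservation(expression="confused", ticket_scanned=True),
))
```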
So can all these innovations ensure that passengers overcome their initial reservations about the autonomous driving world? The last-mile solution within the last-mile solution, an innovation matryoshka? Can trust be digitized, and can ride assistance help reduce the need for individual vehicle control in favor of greater user comfort?
What seems certain is that the evolution of transportation – toward a truly new mobility that goes far beyond mere changes in drive technology or an individual transport that is difficult to replace – is made somewhat more tangible with the “Very Enhanced Road Assistant”. After all, “transportation” is ultimately a term for the purely technical overcoming of the distance between start and destination. When we talk about people, we always talk about an experience, something tangible, memories – in short: a journey. And it has only just begun.