As automotive technology rapidly advances, consumers are looking for vehicles that deliver AI-enhanced experiences through conversational voice assistants and sophisticated user interfaces.
By Sunvir Gujral, director of product management for automotive at Qualcomm
Automotive products, technologies, software and architecture must prioritise safety, security and the highest standards of quality and reliability. These solutions use data from sensors and driver assistance systems, as well as all available context about the driver and the surrounding environment, in real time.
Processing data locally with large vision models (LVMs), large language models (LLMs) and small language models (SLMs) for different modalities can enhance the experience by keeping data private and ensuring efficient and secure access.
Equipping automakers with the latest technology to design these experiences is essential.
AI is playing a pivotal role in the operation and environmental interaction of vehicles as well as the way drivers and passengers engage with the vehicle and its surroundings.
Delivering pervasively intelligent experiences at the edge, along with premium audio and visual capabilities across multiple displays, requires a streamlined architecture, scalable silicon and a collaborative technology ecosystem focused on the future.
One of the most innovative emerging aspects of modern automotive technology is the integration of AI-driven infotainment systems that continuously update in real time using the context of the surrounding environment and perception built at the edge. This integration goes beyond adding features: it enables richer interaction between the driver, the vehicle and its surroundings.
Automotive innovation has transformed the driving experience with advanced displays and AI that fosters an engaging and user-friendly environment. These high-tech dashboards dynamically respond to user preferences, optimising settings for climate, audio, navigation and more. By incorporating safety alerts into infotainment systems, AI can help minimise distractions and enhance road safety.
For example, the same cameras and sensors used for safety functions, such as lane keeping and adaptive cruise control, can provide context for AI to personalise the infotainment offering, such as adjusting audio levels or climate settings based on the number of occupants and their preferences.
This integration allows for a unified response system where safety alerts are intuitively communicated through the infotainment system, ensuring timely warnings and reducing driver distraction.
This convergence not only improves vehicle system efficiency but also elevates the overall driving experience, enabling smoother transitions between entertainment and essential driving functions so that technology better serves both comfort and safety needs.
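As a concrete illustration of this kind of occupancy-driven personalisation, context derived from the vehicle's existing safety sensors could feed simple cabin rules. The class, field names and thresholds below are purely hypothetical, a minimal sketch rather than any production Snapdragon API:

```python
from dataclasses import dataclass

@dataclass
class CabinContext:
    """Hypothetical context derived from the vehicle's existing safety sensors."""
    occupant_count: int
    rear_seats_occupied: bool
    ambient_noise_db: float

def personalise_cabin(ctx: CabinContext) -> dict:
    """Map sensor-derived context to infotainment/climate settings (illustrative rules only)."""
    settings = {"climate_zones": 1, "audio_focus": "driver", "volume": 8}
    if ctx.occupant_count > 1:
        settings["audio_focus"] = "all_seats"   # widen the soundstage for passengers
    if ctx.rear_seats_occupied:
        settings["climate_zones"] = 2           # enable a second climate zone
    if ctx.ambient_noise_db > 65:
        settings["volume"] = min(settings["volume"] + 2, 10)  # compensate for cabin noise, capped
    return settings

print(personalise_cabin(CabinContext(occupant_count=3, rear_seats_occupied=True, ambient_noise_db=70)))
```

In a real system these rules would be learned per-occupant profile rather than hard-coded, but the shape of the mapping from safety-sensor context to comfort settings is the same.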
Maintaining this context for real-time operations requires edge processing, not only for performance and cost reasons, but also to comply with increasingly stringent privacy requirements. Snapdragon Digital Chassis solutions have the NPU and heterogeneous compute capabilities needed to build such solutions today.
With the recent launch of Snapdragon Cockpit Elite and Snapdragon Ride Elite, Snapdragon Digital Chassis scales to accommodate even higher tiers of AI with increased CPU, NPU and GPU performance. And we showcased several of these advanced AI experiences this year at CES 2025.
Here’s what’s in store:
AI is the new UI
- AI and orchestration demonstrate powerful infotainment use cases utilising multiple in-car and out-of-car services. HVAC, navigation and music control are all examples of applications that can be orchestrated using the in-car Qualcomm Edge Orchestrator.
- Qualcomm Edge Orchestrator provides the framework and SDK used to extend the vehicle’s AI-driven capabilities through plug-ins or tools developed by Qualcomm Technologies, OEMs and third-party developers.
- A reference voice assistant application is available for OEMs and Automotive Tier 1 companies to support the development of unique AI-driven in-car experiences.
- Innovative features can be added using image understanding with AI models at the edge, eliminating per-inference cloud costs. For example, cameras on the vehicle can capture images and analyse them with on-device models like LLaVA in near real time to create a tour guide application.
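The plug-in pattern such an orchestrator implies can be sketched as a small intent-to-tool registry: applications like HVAC, navigation and music register handlers, and the orchestrator dispatches parsed intents to them. All names here are illustrative assumptions; the real Qualcomm Edge Orchestrator SDK will differ:

```python
from typing import Callable, Dict

class Orchestrator:
    """Minimal sketch of an intent-to-tool plug-in registry (hypothetical API)."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, intent: str, tool: Callable[..., str]) -> None:
        """Register a handler for an intent; OEMs or third parties would supply these."""
        self._tools[intent] = tool

    def dispatch(self, intent: str, **kwargs) -> str:
        """Route a parsed intent (e.g. from a voice assistant) to its tool."""
        if intent not in self._tools:
            return f"no tool registered for '{intent}'"
        return self._tools[intent](**kwargs)

orch = Orchestrator()
orch.register("hvac.set_temp", lambda temp_c: f"cabin set to {temp_c}C")
orch.register("nav.route_to", lambda dest: f"routing to {dest}")

print(orch.dispatch("hvac.set_temp", temp_c=21))  # cabin set to 21C
```

The point of the pattern is that the voice assistant only needs to emit intents; new capabilities arrive as registered tools without changing the assistant itself.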
Multimodal AI contextual awareness
- Context-aware intelligent cockpit systems combine various sensors (such as cameras, biometric and environmental sensing) that capture information about the driver, vehicle operation and the environment. The data collected by these sensors is processed using neural networks and heuristics to generate scores, such as a driver distraction score and a driver drowsiness score. Based on these scores, the system performs actions or generates recommendations.
- Intelligent interfaces recognise various driving scenarios and personal preferences, adjusting the display and controls accordingly. For instance, in high-traffic situations, the cockpit can prioritise essential driving information, while during leisurely drives, it might highlight entertainment options or scenic routes.
- Sensors on the vehicle capture landmarks, while edge AI can interpret the scene with on-device models like LLaVA. Further inference can offer additional details on the landmark, such as when and why it was built, or whether and when it can be toured.
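The score-then-act pattern described above can be sketched in a few lines: fuse sensor-derived signals into a score, then map score bands to actions. The weights, signal names and thresholds below are entirely illustrative, not a real driver-monitoring model:

```python
def drowsiness_score(perclos: float, blink_rate_hz: float) -> float:
    """Heuristic fusion of eye-closure ratio (PERCLOS, 0..1) and blink rate into a 0..1 score.
    Weights are illustrative assumptions; a production system would use a trained model."""
    score = 0.7 * perclos + 0.3 * min(blink_rate_hz / 0.5, 1.0)
    return min(score, 1.0)

def recommend(score: float) -> str:
    """Map score bands to actions, as the article's 'actions or recommendations' step."""
    if score >= 0.8:
        return "audible alert + suggest rest stop"
    if score >= 0.5:
        return "visual warning on cluster display"
    return "no action"

print(recommend(drowsiness_score(perclos=0.6, blink_rate_hz=0.4)))  # visual warning on cluster display
```

A distraction score would follow the same shape with different inputs (gaze direction, head pose), which is why the cockpit can treat both through one scoring-and-recommendation pipeline.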
Snapdragon Digital Chassis with an AI playground deployed in the cloud
- A virtual workbench and partnerships with a broad ecosystem enable automakers to build and validate new technologies in a protected, virtual environment prior to real-world testing.
- An apps/services AI playground enables inference on real, cloud-based hardware to extend the on-device AI processing capabilities. Large models running on Qualcomm AI Cloud 100 devices in the AI Playground enable users to request illustrated short stories on demand, merging the power of AI with creativity. This offering not only highlights the seamless use of hybrid AI but also underscores its potential to deliver personalised entertainment for passengers of all ages.
- Qualcomm AI Cloud 100 can be used for offloading complex or long context inference tasks that might not meet latency requirements when executed on heavily loaded edge devices. It can also be used by developers to test their prompts and AI agents without needing access to development boards.
- Data Simulation Factory (DSF) processes data collected by vehicles in the cloud, providing insights used to develop new features, functionality and services.
- Snapdragon Car-to-Cloud Services help engineers build connected, intelligent vehicles that enhance safety and are customisable, immersive and upgradable.
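The hybrid edge/cloud routing decision described above can be sketched as a simple policy: offload when the prompt exceeds the on-device context window or when the edge device is too heavily loaded to meet latency targets. The limits and threshold below are assumptions for illustration, not Snapdragon or Qualcomm AI Cloud 100 specifications:

```python
def choose_target(prompt_tokens: int, edge_load: float,
                  edge_ctx_limit: int = 4096, load_threshold: float = 0.8) -> str:
    """Decide whether an inference request runs on-device or is offloaded to the cloud.
    edge_load is a 0..1 utilisation estimate; limits are illustrative assumptions."""
    if prompt_tokens > edge_ctx_limit:
        return "cloud"   # long-context tasks exceed the edge model's window
    if edge_load > load_threshold:
        return "cloud"   # a heavily loaded edge device would miss latency requirements
    return "edge"        # private, low-latency on-device inference

print(choose_target(prompt_tokens=500, edge_load=0.3))  # edge
```

In practice the policy would also weigh connectivity and privacy constraints, but the two conditions shown are exactly the offload cases the article names: long-context inference and a heavily loaded edge device.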
Shaping the future of driving with scalable solutions
The integration of centralised architectures, AI-driven infotainment systems and the surrounding real-time environment is revolutionising the automotive industry. By leveraging Qualcomm Technologies’ advanced solutions, such as the Snapdragon Digital Chassis, comprehensive AI toolchain and Qualcomm Device Cloud, automakers can design vehicles to prioritise safety, enhance efficiency, and provide a highly personalised and immersive driving experience.
From deploying Unreal Engine 5 on Snapdragon Cockpit Platforms to innovative AI-based driver state monitoring, Qualcomm Technologies is at the forefront of providing scalable solutions. The capability to process data both at the edge and in the cloud allows for the development and validation of new features in a secure environment before being rolled out to consumers.
As we move forward, the convergence of AI, multimodal contextual awareness and cloud-based services will continue to unlock new possibilities in vehicle design and functionality. This holistic approach enables technology to enhance both comfort and safety, paving the way for a smarter and more connected future of driving.