From SDV to AI‑Defined: Trends from the 3rd Automotive Computing Conference (ACC)
- Andrew Wilczynski

Event: March 24–25, Dearborn, MI
Who was there: 108 registered participants from OEMs, Tier 1s, and technology vendors; 26 sessions; 8 exhibitors (Micron, Sonatus, Cellebrite, Dana, Intrepid Control Systems, TASKING, Elektrobit, Synopsys).
Why it matters
Automotive computing is consolidating and accelerating. Over two days in Dearborn, the conversation moved from software‑defined vehicles (SDVs) to AI‑defined vehicles (AIDVs), with speakers aligning on three aspects: consolidating compute, shifting intelligence to the edge, and virtualizing development. The net effect: faster iteration, richer personalization, and a route to safety‑critical AI at scale.
On day one, Alex Oyler, SBD Automotive's Consulting Director for the Americas, gave a talk titled:
“The SDV Space Race”
Software-defined vehicles (SDVs) represent an automotive “space race,” requiring OEMs to invent missing technologies while changing how vehicles are developed, updated, and monetized. Just as the U.S. entered the 1960s space race with minimal experience, traditional automakers entered the 2020s as hardware integrators with electronics far behind consumer technology. The new goals are forcing OEMs to build new E/E architectures, software platforms, and OTA capabilities from scratch. Measured against SBD Automotive’s EEA maturity model, most OEMs are still transitioning from connected or updateable architectures (EEA 2.0–3.0) to fully software-defined systems (EEA 4.0).
However, the first “test flight” of SDVs exposed major economic and organizational problems. High costs and scalability challenges are compounded by a volatile EV market. At the same time, suppliers face declining profit margins, layoffs, and restructuring. These pressures are pushing OEMs toward greater software insourcing. Falling behind in the SDV race brings competitive losses, while success requires reducing EEA costs, enabling cross-domain OTA updates, rethinking OEM–supplier relationships, and treating software as a core product rather than a feature.

Key trends we noted
1) AI‑Defined Vehicle Momentum
We observed many speakers framing AI not only as a feature but as the organizing principle of next-generation platforms, influencing perception, personalization, and even workflow automation across engineering and validation.
BMW Neue Klasse was highlighted for its high-performance central computing using an Ethernet backbone, enabling AI-driven features such as highly automated driving, pothole detection, and smart lighting.
Stellantis outlined a transition away from fleets of distributed ECUs toward centralized compute with zonal aggregation, reducing ECU counts and wiring while unlocking more robust OTA.
We heard that moving AI inference to the edge improves privacy, lowers latency, adds resilience to cloud outages, and reduces data transmission costs. Architectures favored hybrid patterns: onboard inference with selective uplink of summaries, anomalies, or training signals.
Why it matters: While AI‑defined vehicles were a dominant theme, most OEMs are still working through foundational SDV challenges, including architecture, OTA reliability, and cross‑domain integration. As a result, centralized compute with “logical” zonal aggregation is emerging as a more practical near‑term solution than fully distributed zonal controllers. The key technical challenge will be managing latency as safety‑critical edge ECUs increasingly rely on centralized compute via TSN‑enabled Ethernet backbones.
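The hybrid pattern speakers described, in which raw data stays onboard and only summaries or anomalies are uplinked, can be sketched in a few lines. This is a minimal illustration, not any OEM's actual pipeline: the `Frame` schema, the z‑score check, and the threshold are all hypothetical stand‑ins for a learned onboard model.

```python
import statistics
from dataclasses import dataclass


@dataclass
class Frame:
    """One window of onboard sensor readings (hypothetical schema)."""
    vehicle_id: str
    readings: list


def summarize(frame: Frame) -> dict:
    """Compress a frame to a few statistics instead of raw samples."""
    return {
        "vehicle": frame.vehicle_id,
        "mean": statistics.fmean(frame.readings),
        "stdev": statistics.pstdev(frame.readings),
        "n": len(frame.readings),
    }


def select_uplink(frame: Frame, anomaly_threshold: float = 3.0):
    """Onboard inference decides what (if anything) goes to the cloud.

    Raw samples never leave the vehicle; only a compact summary is sent,
    and only when the frame looks anomalous. A real system would run a
    learned model here instead of this simple z-score check.
    """
    s = summarize(frame)
    if s["stdev"] == 0:
        return None  # flat signal: nothing worth uploading
    peak_z = max(abs(x - s["mean"]) / s["stdev"] for x in frame.readings)
    if peak_z >= anomaly_threshold:
        s["peak_z"] = round(peak_z, 2)
        return s  # uplink summary plus anomaly score, not raw samples
    return None  # normal frame: data stays at the edge
```

The payoff is exactly what the sessions emphasized: a normal window produces zero uplink traffic, while an anomalous one sends a handful of numbers rather than the full sample stream.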

2) Cybersecurity: VSOC Grows Up
The Vehicle Security Operations Center (VSOC) model is advancing beyond simple telemetry uploads. Onboard AI analyzes local signals to identify likely attack paths and sends only relevant information back to the VSOC. The result is reduced bandwidth use, quicker triage, and fewer false positives.
Why it matters: As centralized computing increases vulnerability, intelligent telemetry becomes essential to keep detection accurate and affordable.
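The onboard triage step described above amounts to ranking local security signals and forwarding only the most attack‑relevant ones. A rough sketch, with entirely hypothetical event kinds and severity weights standing in for real detection logic:

```python
from dataclasses import dataclass


@dataclass
class Event:
    """A local security signal (hypothetical fields)."""
    source: str   # e.g. "can0", "telematics", "ota"
    kind: str     # e.g. "malformed_frame", "auth_failure"
    count: int    # occurrences in the current window


# Illustrative weights: which signal kinds suggest a real attack path.
SEVERITY = {
    "auth_failure": 3,
    "diag_session_unexpected": 3,
    "malformed_frame": 2,
    "dropped_frame": 1,
}


def triage(events, budget: int = 3):
    """Onboard filter: rank local signals and forward at most `budget`
    of the most attack-relevant ones to the VSOC, instead of streaming
    raw telemetry."""
    scored = sorted(
        events,
        key=lambda e: SEVERITY.get(e.kind, 0) * e.count,
        reverse=True,
    )
    # Keep only events with a nonzero score, up to the uplink budget.
    return [e for e in scored[:budget] if SEVERITY.get(e.kind, 0) * e.count > 0]
```

Scoring onboard is what yields the benefits cited in the session: bandwidth drops because low-value events never leave the vehicle, and the VSOC sees pre-ranked candidates rather than raw noise.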

Photo courtesy of ACC Website
3) Thermal Is the New Bottleneck
With autonomy and AI workloads rising, traditional air‑cooling and cast‑aluminum liquid cooling are approaching their usable limits.
Dana argued for brazed heat exchangers to deliver higher conductivity, thinner walls with less weight, and double‑sided cooling. They concluded that many current solutions won’t work on the next‑gen automotive SoCs.
Why it matters: Thermal constraints directly cap AI performance per vehicle. Cooling choices made now will define the physical limits of onboard intelligence.

Photo courtesy of ACC Website
4) Chiplets, Digital Twins & Time‑to‑Silicon
IMEC mapped a path toward an open, multi‑vendor chiplet ecosystem for automotive that focuses on reliability, supply‑chain resilience, and integration optionality.
Sessions on the system‑to‑silicon pipeline emphasized digital twins, multi‑die modeling, emulation‑based validation, and a target of 13 months from architectural design to first silicon.
Why it matters: Open-chiplet strategies, along with virtual validation, could reduce silicon risk while keeping schedules competitive.
5) Virtual ECUs Go Mainstream
Multiple speakers argued that virtual ECUs (vECUs), virtual platforms, and digital twins are now essential for validating modern stacks, long before physical prototypes exist.
Quantified outcomes shared: multi‑million‑dollar savings per ECU program, roughly 10 months of schedule reduction, and ~40% fewer defects through earlier discovery.
Rivian showcased an “own‑the‑stack” approach with vECUs as the primary development surface, and AI agents embedded in human‑supervised engineering workflows for debugging and automated test generation.
Why it matters: Virtualization, combined with AI‑supported development workflows, can enhance software quality. This is essential as vehicles become ongoing digital platforms. Rivian’s presentation highlighted a broader shift underway: software stack ownership is increasingly becoming the primary driver of E/E architecture. OEMs that develop and validate software almost entirely in virtual environments are defining interfaces, update strategies, and hardware abstraction layers long before engaging suppliers.

Photo courtesy of Alex Oyler
What this means for OEMs
- Software stack ownership is becoming an increasingly important differentiator; OEMs should assume greater control over software platforms, toolchains, and validation environments
- Supply chain resilience is now influencing silicon strategy, driving interest in custom SoCs, chiplets, and multi-vendor ecosystems
- Central compute architectures can offer scalability, but must be paired with latency management and safety-critical separation
What this means for Tier 1s
- Software value will shift from proprietary ECU applications to enabling OEM-owned platforms
- Competitive advantage will come from capabilities such as virtual development environments and open-source collaboration using safety-certified programming languages
- New entrants supporting in-house silicon design and integration can reduce the supply-chain risk of relying on a single-SKU SoC
“The Automotive Computing Conference sessions make it clear that cars are rapidly becoming AI‑heavy and software‑defined, built on centralized compute and zonal architectures. Edge AI is moving directly into the vehicle for safety, latency, and privacy, forcing a rethink of hardware from memory and cooling to chiplets. To survive that complexity, the industry is standardizing on open, safety‑oriented stacks and aggressive virtualization to ‘shift left’ development and testing.”
– Andrew Wilczynski, Research Analyst, SBD Automotive
