
How Much AI is Too Much?

Considering a scorecard approach to the responsible and sustainable deployment of AI.


By:

Riley Keehn – Sr. Consultant, Regulatory and Government Affairs

Andy Qiu – Sr. Manager and Subject Matter Expert, AI/ML, SBD Automotive


Section 0: Is AI a holy grail or a cautionary tale?

Let’s begin by acknowledging one simple, hard truth: a company’s needs as a business rarely mirror its consumers’ needs. For a while now, the automotive industry has been trying to reconcile this truth with an increasingly complex web of rapid technological development, waffling regulatory priorities, and compounding trade costs.

The industry is now facing pressure like never before:

  • OEM and supplier margins are being squeezed by a tight global economy and evolving trade policy that has led to a tense period of supply chain review and reform.

  • Simultaneously, investors are demanding more brand differentiation and steering automakers toward the leagues of Silicon Valley’s tech giants as cars become more ‘computer’ than ever.

  • Then, there is pressure from governments, who are positioning automakers as a pillar of regional politics and domestic protectionism, with expectations to contribute to a secure labor market; to be a bastion of cybersecurity against an ever-evolving, invisible global threat; and to design, manufacture, and sell more and better than ‘the other guy’ – all with varying degrees of regional support for these endeavors.


Somewhere at the intersection of all this sits the ‘silver bullet’ of artificial intelligence (AI).

The above points, among other factors, are pushing OEMs towards software-defined vehicles (SDV) and digital revenue models, so at first glance it is easy to feel like AI fits right in as a solution to all our problems.


It could streamline the software development supply chain; it could enable new connected features and SaaS models; it could reduce rising labour costs; it could manage and analyze our deepening data lakes with record efficacy; it could transform smart manufacturing; it could provide days of research at the click of a button; simply: it could give companies the competitive edge they have been desperately clawing for.


At face value it seems too good to be true, and unfortunately, these things often are. While whole sectors, companies, and individuals are clamoring to capitalize on the promised benefits of AI, some key concerns (and potential consequences) are being hastily overlooked.



For all those who want and regularly use AI, there is an equally fast-growing sentiment against it, driven by grievances ranging from the basic – feeling inundated with redundant AI features and the MSRP impact of ‘technology bloat’ – to more complex themes like trust, unclear liability frameworks, and the land, energy, and water resource costs of AI data centers. And where AI is not polarizing, for some brands and specific use cases it may simply not make sense.


Some days it feels like the whole world is ready to make these sacrifices in the name of AI innovation - but should we? Do we have to?


To tackle that question, we must explore a more measured approach to AI - one that acknowledges the unique positions of the decision-makers and the end-users.


This article challenges the automotive industry to reexamine its AI implementation through a framework that addresses user impact, brand fit, and longevity of demand, balanced against resource costs, risk, and governance, to ensure responsible deployment and sustainable revenue.


Section 1:  Lead with purpose, not technology

Prior to developing a metric, OEMs should first return to their “why” statement.


With the news cycle flooded with cutting-edge product announcements and concepts, the pressure to keep pace is undeniable. No company wants to fall behind, and so companies are quick to ask themselves:

“How can we implement this? Should we insource or outsource?” “Where does this fit into our operations?” “When can we deliver a similar product?” “What model(s) should we use?”

Before throwing AI at the proverbial wall and seeing what sticks, press pause and ask yourself again: why? Why for this sector? For this brand? For this team? For this product? For personal use?

There is a growing schism of thought, particularly here in the U.S., that is symptomatic of a lack of purpose-driven AI development and a forgoing of the “why.”


On one hand, a new Pew Research survey of ~5,000 U.S. citizens found that half (50%) are concerned about the increased use of AI in daily life, while 57% view AI as a high risk to society.



On the other hand, active AI use is growing rapidly. With a comparable sample size (~5,000 U.S. citizens), a Menlo Ventures study found that 61% of adults actively used AI in the past six months, and about one-in-five (19%) use it daily, across all generations. For the 39% who do not use AI, reasons varied: most prefer human interaction (80%), while others were concerned about data privacy and IPR (71%), accuracy and reliability (58%), or accountability and liability (53%), and some do not see a use case that makes sense for them (63%) – aligning generally with the sentiments raised in Pew Research’s review.


This divide – where a majority of U.S. adults distrust AI, view it as ‘high risk,’ or do not find value in it, yet are increasingly engaging with it – suggests it is not necessarily the technology itself that raises concern, but where, how, and how thoughtfully it is applied.



For OEMs, this can mean money lost or left on the table. When Menlo Ventures scaled their study globally, they found ~2 billion individuals are using AI, yet consumer spending is only around $12.1 billion. This market represents a low uptake rate, where just 3% of consumers are formally buying into AI services. Of that $12.1 billion, general AI assistants are pulling in the significant majority (81%) of revenue, with OpenAI’s ChatGPT alone accounting for 70% of that segment.


That leaves ~$2.9 billion in consumer spending (at the current uptake rate) for automakers to capture in the general AI assistant segment, where they are already competing with established offerings like Microsoft Copilot, Google Gemini, Meta AI, Amazon Alexa, Apple’s Siri, Samsung’s Bixby, Claude, and more, and where a number of such offerings are already brought into the vehicle through smartphone duplication (e.g., AAOS, GAS, and CarPlay) or development partnerships. Ultimately, this leaves a thin slice of pie for OEMs (co-)developing a native virtual personal assistant (VPA), and as such the industry has yet to realize a successful business model: most in-vehicle virtual assistants are offered for free by the OEM or bundled within an existing connected service subscription.


Looking beyond AI assistants, that also leaves only $2.42 billion in spending - globally - across all other consumer-facing AI product segments, which OEMs must jockey for against the same players from the VPA space, additional entrants, and each other. It is a comparably slim slice of pie, and SBD Automotive research has already seen parallels in AI navigation, as another example.


Now, note that Menlo Ventures’ study did not include enterprise AI applications, which grew significantly in 2024 but have seen adoption and revenue generation slow in 2025. Consumer-facing products – and therein consumer sentiment, utility, and demand – are just one component of a much broader scorecard approach to purpose-built AI.


Section 2:  Using a balanced-scorecard approach

OEMs can adapt a multi-factor evaluation model to ensure their AI development is fit for purpose. While this scorecard example is geared towards consumer-facing AI, the framework can be adjusted up the supply chain to similarly evaluate AI in enterprise, engineering support services, smart manufacturing, and more.


Any such scorecard should at least consider the following weighted dimensions:

  1. Customer Demand & Utility - ensuring AI responds to real needs

 

Section 1 touched on a few key examples of consumer demand and utility in the context of general perceptions and uptake of AI and the potential for revenue generation from consumer-facing products and applications. However, there are multiple factors influencing demand and utility that should also inform this evaluation.

 

A persistent challenge in evaluating in-vehicle AI is separating genuine customer demand from industry-driven momentum. Despite rapid advancements, many consumers already rely heavily on AI ecosystems outside the vehicle - such as Copilot, Gemini, and ChatGPT - raising questions about whether car-embedded AI meaningfully adds to their daily routines. Early evidence suggests that repeated functionality across devices tends to suppress perceived value, especially in the constrained and distraction-sensitive environment of a vehicle cabin.

 

Industry observations also indicate that users do not share a uniform expectation of what “intelligence” in the car should look like. While some welcome a highly conversational, human-like assistant, others prefer a more restrained, tool-like experience that performs tasks efficiently without adopting the behaviors of a virtual persona. Yet, most current automotive AI systems treat intelligence as a fixed setting, offering no way for users to calibrate how proactive, contextual, or “personal” the system should be.



This becomes more complex when considering privacy. Higher levels of intelligence generally require deeper access to personal context - habits, preferences, locations, and communication patterns. However, context is also the source of many users’ hesitation. Without an established sense of trust, some drivers are reluctant to expose this degree of information to an in-vehicle system, regardless of its capabilities.

 

As a result, a widening mismatch is emerging between the diversity of user expectations and the uniformity of AI experiences being deployed. Some drivers value simplicity and minimal intrusion, while others expect richer assistance. Whether a feature sustains long-term engagement increasingly depends on how well it aligns with these differing preferences rather than how advanced the underlying model is.

 

Regional differences also play a significant role in understanding the audience for AI. This is not too different from the regional splits for other new vehicle technologies, like in electrification and other digital lifestyle features. The level of functionality and convenience that consumers in China expect from products, like their cars, skews much higher than in Europe, which itself tends to skew higher than in the U.S.


A September 2025 article by McKinsey & Company describes this, saying “smart cockpit features and voice assistants” are “key priorities” in China, while they “remain nice to have[s]” in the U.S. and E.U.

 

With that expectation, or lack thereof, also comes a relative understanding and adoption rate of these new technologies per region. This suggests that AI development is not necessarily a one-size-fits-all global strategy, or at least the introduction of that AI should be more regionalized.


Drivers in the U.S., for example, are much less familiar with – or more wary of – advanced technologies, and may be more likely to adopt AI long-term if they receive more education and guidance from an OEM’s website or at the dealership, or with more gradual introductions to simple applications before being presented with a fully AI-driven in-cabin experience. It’s comparable to, say, a driver getting comfortable using L2 automatic emergency braking and lane keep assist before trusting a ride in an L4 autonomous shuttle.

 

  2. Affordability & Lifecycle Cost – the cost per vehicle vs. perceived value, maintenance, and data

 

The economics of automotive AI remains one of the least resolved questions in the industry. Although VPAs are now widely showcased, a viable business model for in-vehicle AI services has yet to materialize. Many offerings today are bundled into connectivity plans or absorbed directly by the OEM, rather than generating standalone revenue. This has created growing uncertainty about whether the long-term operational costs of AI can be justified by the value it returns.

 

These concerns are underscored by conversations across the industry, where questions frequently arise about the ongoing fees associated with large AI model providers. For many automakers, the challenge is not only the immediate cost of integration but the difficulty of constructing a defensible business case when user uptake and monetization mechanisms remain unclear. The shift from a one-time software deployment to a recurring services model - driven by cloud inference, hosting, and API consumption - further complicates forecasting and budget planning.

 

Adding to this tension is the technical reality that the newest generation of virtual assistants still relies heavily on cloud-based models. Their size and computational demands exceed what can be executed locally in the vehicle, making the system dependent on external infrastructure. This, in turn, exposes automakers to the same cascading pressures affecting the broader AI ecosystem today: high energy consumption, water usage, data privacy requirements, and regional compliance expectations. As these upstream costs continue to rise, the long-tail financial burden is increasingly difficult to overlook.

 

Taken together, these factors point to a structural mismatch: the sophistication of next-generation AI is growing faster than the industry’s ability to support it economically. Until the operational cost curve and the value curve move closer together, affordability and lifecycle sustainability will remain central friction points in the adoption of in-vehicle AI.

 

This is why ensuring AI is fit for purpose for the brand and the consumer is essential: to justify the short-, mid-, and long-term costs, and, from the other direction, to ensure the costs are sustainable to minimize service and quality disruptions that could damage user confidence over time.

 

  3. Environmental Impact – AI rollout is intrinsically tied to carbon and resource budgets

A national study by Cornell University published November 10, 2025 finds that AI data centers in the U.S. alone would emit 24 million to 44 million metric tons of CO2 annually by 2030 – the equivalent of adding 5 million to 10 million more vehicles to U.S. roads. For automakers, this is a direct comparison, and one hard to reconcile against decarbonization mandates, pledges, and progress to date.
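The study’s vehicle-equivalence figure can be roughly sanity-checked, assuming the EPA’s widely cited estimate of ~4.6 metric tons of CO2 per typical passenger vehicle per year (an assumption here; the Cornell authors may use a slightly different factor):

```python
# Rough sanity check of the Cornell study's vehicle-equivalence claim.
# ASSUMPTION: ~4.6 metric tons of CO2 per typical passenger vehicle per year,
# the EPA's commonly cited figure; the study's exact factor may differ.
CO2_PER_VEHICLE_T = 4.6

# Projected annual U.S. data-center emissions range by 2030 (metric tons CO2).
low_t, high_t = 24_000_000, 44_000_000

veh_low = low_t / CO2_PER_VEHICLE_T
veh_high = high_t / CO2_PER_VEHICLE_T

# 24-44 Mt CO2/yr works out to roughly 5.2M-9.6M vehicle-equivalents,
# consistent with the study's "5 million to 10 million" range.
print(f"{veh_low / 1e6:.1f}M to {veh_high / 1e6:.1f}M vehicle-equivalents")
```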



The U.S. Environmental Protection Agency reports tailpipe emission improvements between 2014 and 2024 of 64 g CO2/mi, or the rough equivalent of taking 2.78 million vehicles off the road over that same timeframe, as estimated by SBD Automotive’s sustainability expert Robert Fisher.

Given this, and while not a direct cause and effect, engaging with AI upstream in development and the supply chain, and deploying and maintaining AI in a final product, could be seen as effectively negating a decade or more of nationwide average tailpipe emission improvements.

While transportation remains one of the highest-polluting sectors globally and tailpipe emissions bounce between being a political target and a regulatory priority, AI opens a new pathway for scrutiny of OEMs by environmental activists and regulators alike. This could pose a particular conflict for brands built on a platform of environmentally conscious messaging, such as dedicated electric vehicle makers.


The same study estimates water usage of 731 million to 1,125 million cubic meters per year – of equal importance to air quality, as major OEMs’ ESG reports focus on resource conservation efforts around major manufacturing centers.



And then there’s energy. Not just energy consumption by data centers, which Pew Research Center projects will reach 426 terawatt-hours in the U.S. alone by 2030, but energy demand by the vehicle architecture.


In a series of expert interviews conducted by McKinsey & Company in 2024 with OEMs, suppliers, and semiconductor manufacturers, 35% of those interviewed raised concerns that “computationally intensive AI compute loads can lead to high energy consumption” – a notable concern for electrified platforms, as it could undermine recent battery efficiency and range improvements.


  4. Risk & Governance – safety, privacy, and the nebulous area of liability

Liability and accountability of AI systems is still not clearly defined in most cases, with policies to address AI governance, data privacy, and legal liability at varying stages of development and with varying levels of stringency between states/provinces and countries. The risk of not knowing where liability may fall, combined with hyper-accelerated development and deployment timelines, is a compounding issue.


The AI Incident Database demonstrates this, in part, with records documenting $2.9 billion in settlement fees and other associated fines since the index’s inception in 2020. Automakers are not shielded from this: Tesla’s well-known settlement in the Autopilot-related wrongful death case of Walter Huang is represented in the Database.


As of July 2025, only 25% of organizations report having a fully implemented AI governance program, according to an AuditBoard report which surveyed in the U.S., Canada, Germany, and the U.K. This is despite a fairly generous window for preparedness, with AI in a recognizable form having grown since the early 2010s, and the intense surge of AI we are in today beginning roughly five years ago, around 2020.


Further still, only two-thirds of organizations report having a framework for – and completing – AI-specific risk assessments of third-party language models and suppliers, leaving roughly one-third of companies overconfidently relying on external AI systems without risk management planning, and with major vulnerabilities.


Remember that governance is not solely a function of safety, security, and legal requirement, but one of business ethics. Unique to AI applications is the risk of algorithmic bias, which is dependent largely on a model’s training data and whether that data is prejudiced or an appropriate representation of the end userbase. A governance committee that represents diverse identities, roles, and responsibilities within the organization, and a consistent data review mechanism, can help to identify and eliminate perceived biases and disconnects within these products.

By approaching AI slowly, and ensuring robust liability, risk, and governance policies are a foundational element of any AI development framework, OEMs have an opportunity to position themselves as trustworthy stewards of the technology.


  5. Brand Differentiation – how AI will define the OEM

All the preceding scorecard factors ultimately tie into this question of brand differentiation:

  • AI features should align with consumers’ real needs and expectations for the product and brand.

  • The lifecycle costs of AI development and maintenance should be justified by consumer demand and brand fit.

  • AI resource costs (water, energy, land) and associated emissions should be examined against existing environmental commitments and sustainability-first brand messaging.

  • Comprehensive risk, governance, and liability frameworks will protect OEMs’ brand image with respect to safety, security, and accountability.



The unofficial homogenization of the industry – seen in vehicle exterior design, or in infotainment experiences defaulting to AAOS and Apple CarPlay – has been noticed by many consumers and subsequently felt by shareholders, as markets’ responses to new vehicle models remain muted.

AI, and the way it is implemented, has the potential to make or break an OEM’s brand identity at a pivotal time in the industry, when brand differentiation is reemerging as a top priority.

 

Section 3: Continuous Review and Monitoring

The weighting baked into the scorecard and the ultimate business impacts of AI will continuously shift as global regulation, data availability, technological capabilities, energy efficiencies, and public sentiment evolve in tandem.

This can create a certain level of uncertainty, but the risks associated with this uncertainty can be mitigated with a proactive approach to continuous review and monitoring. OEMs should review each of their AI use cases and the metrics included in their scorecard regularly, framing them against considerations such as:

  • Does the product and its performance still align with customer expectations?

  • Is it still delivering measurable brand differentiation?

  • Are the upstream costs (financial, resource) still viable?

  • Is the risk and liability framework effective in practice (is data secure, have there been any vulnerabilities, does the driving task remain safe)?





It is extremely important within this review process to ensure there is a pathway for a human-centered feedback loop, and to not rely solely on AI or other digital systems to evaluate AI. As OpenAI itself recommends: business leadership should take an active, direct role in “learn[ing] how to create contextual eval[uations] specific to their organization’s needs and operating environment.” A culture of AI governance should be informed from the top down and practiced from the bottom up.

 

Section 4:  Responsible AI Adoption Score (RAAS)
A quantitative metric for decision-making

To move from qualitative philosophy to quantitative decision-making, we propose the Responsible AI Adoption Score (RAAS). This formula consolidates the dimensions discussed - specifically addressing the technical reality that modern AI models are energy-intensive and cloud-dependent, and the social reality that there is no single, uniform AI solution for all consumers.

The RAAS formula is defined as follows:


RAAS = (User Impact x Brand Fit x Longevity x ESG Contribution) / (Energy Cost x Governance Risk x Redundancy)

 

Key variables & constraints

User Impact (Regionally Weighted): This score must be adjusted by region. A conversational avatar might score highly in China but poorly in the U.S. due to differences in culture, regulatory environment, and consumer perception.

ESG Contribution: Does the feature actively reduce the vehicle's footprint (e.g., AI-optimized route planning), or does it conflict with sustainability-minded brand identities?

Energy Cost (The Cloud Multiplier): This is the critical inhibitor. Since current GenAI relies on cloud hosting, this variable must account for the upstream water and power usage of data centers, not just the vehicle’s battery drain. A high energy cost significantly lowers the RAAS.

Redundancy (The “Smartphone” Factor): If a user has a ‘robust AI agent’ on their smartphone, the Redundancy score increases, driving the total RAAS down. An OEM should not pay to build what the user already owns.

Governance Risk: This is a complex metric as it attempts to account for the multiple contributing factors to risk and liability. Ultimately, this should assess the presence, comprehensiveness, and follow-through of an OEM’s internal governance procedures, relative to national and local AI liability and reporting laws, to determine a relative rating of the OEM’s preparedness, agility, and potential liability in the event of an AI-related incident. For example, an organization that has already paid settlement fees related to AI misuse may have a high risk score.

Decision Gate

At the end of this evaluation an OEM will be left with two outcomes: a high or low RAAS.

A high RAAS suggests a fair balance between input and output, and that the OEM can reasonably proceed with development. A high-RAAS product example: an AI-powered battery management system (BMS).

A low RAAS suggests the OEM should halt or defer development until one of the contributing factors is addressed. This could mean waiting on a more efficient AI chip, new renewable energy sources serving data centers, air quality regulation, or an upswing in regional consumer sentiment. A low-RAAS product example: a generic ChatGPT wrapper for a budget vehicle.
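As an illustration only, the decision gate can be sketched in a few lines of Python. The 0-1 benefit scales, positive cost factors, example inputs, and the 1.0 pass threshold are all hypothetical assumptions for demonstration; the article does not prescribe numeric ranges:

```python
# Hypothetical sketch of the Responsible AI Adoption Score (RAAS).
# ASSUMPTIONS: benefit factors scored 0-1, cost factors positive (higher = worse),
# and a 1.0 decision threshold; none of these scales come from the article.
from dataclasses import dataclass

@dataclass
class RaasInputs:
    user_impact: float       # regionally weighted, 0-1
    brand_fit: float         # 0-1
    longevity: float         # 0-1
    esg_contribution: float  # 0-1
    energy_cost: float       # includes the cloud multiplier, > 0
    governance_risk: float   # > 0, higher = riskier
    redundancy: float        # > 0, higher = more duplicated on other devices

def raas(i: RaasInputs) -> float:
    """Benefits multiplied together, divided by the multiplied inhibitors."""
    numerator = i.user_impact * i.brand_fit * i.longevity * i.esg_contribution
    denominator = i.energy_cost * i.governance_risk * i.redundancy
    return numerator / denominator

# Example inputs (invented for illustration):
# an AI-powered BMS - high utility, low redundancy, mostly on-board compute
bms = RaasInputs(0.8, 0.9, 0.9, 0.7, 0.4, 0.5, 0.2)
# a generic chatbot wrapper in a budget vehicle - redundant and cloud-heavy
wrapper = RaasInputs(0.3, 0.2, 0.3, 0.2, 0.9, 0.8, 0.9)

THRESHOLD = 1.0  # hypothetical decision gate
for name, inputs in [("BMS", bms), ("wrapper", wrapper)]:
    score = raas(inputs)
    verdict = "proceed" if score >= THRESHOLD else "halt/defer"
    print(f"{name}: RAAS={score:.2f} -> {verdict}")
```

Under these invented inputs the BMS clears the gate comfortably while the chatbot wrapper falls well below it, mirroring the high- and low-RAAS examples above.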


Andy Qiu – Sr. Manager & Subject Matter Expert, AI/ML, SBD Automotive

"The automotive industry is racing to integrate AI, but the gap between corporate ambition and consumer reality is widening. From 'technology bloat' to the often-overlooked environmental costs of data centers, we must ask: are we solving user problems or just adding complexity? This article introduces the Responsible AI Adoption Score (RAAS)—a strategic framework designed to help OEMs move beyond the hype." 



Section 5: Concluding thoughts - RAAS separates value from hype

With the balanced scorecard and RAAS model in place, we can see that not all AI is the ‘silver bullet’ OEMs are hoping for. Stripping away the marketing excitement and looking at the actual business value, a clearer path emerges that allows OEMs to prioritize their AI investments based on their end goal(s).

Of utmost importance is developing a secure and stable foundation within the organization to build upon – and the most sustainable AI is generally not the one talking to the driver: it is the ‘hidden’ AI working in the background.

Before trying to impress consumers with complex and often redundant avatars at high compute costs, OEMs should look upstream - using AI to fix supply chain complexities or speed up software development & engineering - or rethink AI use altogether. Upstream applications tend to produce fewer "hallucinations," do not directly frustrate or confuse customers, and provide the budget stability needed to experiment later.

AI is exciting, no doubt, and the market pressure around it can often feel suffocating. If the automotive industry can steel its nerves and approach AI with a level head, it will have the opportunity to achieve efficiencies and develop genre-defining services that transform the industry forever. It is up to us all to seize this moment - and steward it responsibly - to ensure we are investing in more than just a buzzword, but in a real strategic benefit to ourselves and our consumers.



Riley Keehn – Senior Consultant, Regulatory and Government Affairs, SBD Automotive

"AI is everywhere, and while decision-makers are full steam ahead, not everyone else is. Consumers are often acutely aware of the environmental, social, and financial costs associated with AI being baked into seemingly any and all devices and apps. In order to win consumers over and sustain our trajectory, we need to approach AI more responsibly. I hope this article can be a launchpad for that conversation."





AI for Automotive Guide – Report Sample

SBD Automotive’s AI for Automotive Guide offers insight into generative AI use cases across the entire vehicle lifecycle and projects how they will shape the industry over the next five years. It examines upstream applications and functional verticals, delivers real-world case studies, and maps short-term, mid-term, and long-term opportunities. The Guide also lists potential partners and outlines the varied approaches leading players are taking today to help you stay ahead in a rapidly changing market.

For custom projects, SBD Automotive provides expert Automotive AI Consulting Services to help ideate, plan, deploy and measure the effectiveness of artificial intelligence in vehicles and across your enterprise.

To learn more about SBD Automotive’s AI research reports, or to speak with an expert on the SBD AI team, contact us today!



