The Scope of the Problem
Cloud and AI projects fail at rates significantly higher than traditional IT projects. RAND Corporation research confirms that over 80% of AI projects fail, twice the failure rate of non-AI technology projects. More alarmingly, organizations scrap an average of 46% of AI proofs-of-concept before they reach production, and only 48% of AI projects make it into production at all, taking an average of eight months from prototype to deployment.
While higher failure rates might not surprise those familiar with emerging technology adoption, a deeper question emerges: why are cloud computing and AI—technologies explicitly designed to accelerate business operations—instead experiencing such substantial implementation challenges?
| Project Type | Failure Rate | Outcome |
|---|---|---|
| Traditional IT Projects | ~40% | Varies by project type |
| AI Projects (2025) | 80%+ | 48% reach production |
| GenAI Pilot Programs | 95% | 5% achieve rapid revenue growth |
Enterprise surveys have identified three primary causes for these delays, along with practical strategies for improvement.
#3: The Skills Shortage Surprise: Not Just Any Cloud/AI Skills Will Do
Over half (56%) of enterprises acknowledge a lack of essential skills, and acquiring these skills takes longer than anticipated. However, the real issue runs deeper than simple headcount.
Informatica’s CDO Insights 2025 survey identifies the top obstacles to AI success as data quality and readiness (43%), lack of technical maturity (43%), and shortage of skills (35%). The data reveals that organizations aren’t just short on technical skills—they lack the hybrid expertise needed to bridge business requirements and technical implementation.
Organizations that succeed invest 50-70% of their timeline and budget in data readiness (extraction, normalization, governance metadata, quality dashboards, and retention controls) rather than rushing to model deployment.
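As a minimal illustration of what "data readiness" means in code, the sketch below profiles a batch of records for completeness, duplicate rate, and staleness before any model work begins. The field names (`id`, `updated_at`) and the 30-day freshness window are assumptions for the example, not a prescribed schema:

```python
from datetime import datetime, timedelta, timezone

def profile_readiness(records, required_fields, max_age_days=30):
    """Profile a batch of records for basic AI-readiness signals:
    field completeness, duplicate rate, and staleness."""
    total = len(records)
    missing = {f: 0 for f in required_fields}
    seen_ids, duplicates, stale = set(), 0, 0
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)

    for rec in records:
        for f in required_fields:
            if rec.get(f) in (None, ""):
                missing[f] += 1
        if rec.get("id") in seen_ids:
            duplicates += 1
        seen_ids.add(rec.get("id"))
        if rec.get("updated_at") and rec["updated_at"] < cutoff:
            stale += 1

    return {
        "completeness": {f: 1 - missing[f] / total for f in required_fields},
        "duplicate_rate": duplicates / total,
        "stale_rate": stale / total,
    }

# Gate modeling work on agreed thresholds rather than starting it on faith.
batch = [
    {"id": 1, "email": "a@example.com", "updated_at": datetime.now(timezone.utc)},
    {"id": 1, "email": "", "updated_at": datetime.now(timezone.utc) - timedelta(days=90)},
]
report = profile_readiness(batch, required_fields=["email"])
assert report["duplicate_rate"] <= 0.5, "resolve duplicates before modeling"
print(report)
```

The specific metrics matter less than the gate itself: model deployment waits until the readiness report clears thresholds the team agreed on up front.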
#2: Unrealistic Expectations: When Dreams Collide with Project Requirements
Nearly 70% of enterprises report that early planning stages reveal unmet requirements, most stemming from unrealistic expectations about what cloud and AI can deliver.
Common Expectation Mismatches
- Cost Underestimation: Senior management assumes cloud will be dramatically cheaper than it proves to be in practice
- AI Capability Confusion: Line departments base expectations on consumer AI tools like ChatGPT, leading to governance violations or misunderstanding of enterprise AI limitations
- Implementation Gaps: Business goals are clearly defined (“improve sales by 15%”) but the specific technical steps to achieve them remain vague
- ROI Timeline Misconceptions: Expectations for immediate returns clash with the 8-month average timeline from prototype to production
Many enterprises admit to limited understanding of AI’s actual potential versus its marketed capabilities. This knowledge gap makes it difficult to frame realistic AI projects. Some CIOs describe project proposals as vague “invitations to AI fishing trips”—broadly defined business goals without clear implementation paths.
Unlike previous technology waves, line organizations can now experiment with AI tools independently, drawing conclusions about benefits without IT involvement. This creates situations where business units form expectations based on consumer-grade AI experiences, then demand enterprise implementations without understanding the fundamental differences in data governance, security, and scalability requirements.
#1: Mid-Execution Second-Guessing: The Perils of Shifting Perspectives
The most prevalent issue, cited by 74% of enterprises, is questioning the project’s approach mid-execution as stakeholders begin doubting the strategy based on real-world experience. This stems from fundamental differences in how IT and line organizations perceive new technologies, particularly AI.
The Perspective Divide
Enterprise IT Perspective: Views technologies within the context of existing infrastructure. IT professionals think about how systems integrate with servers, networks, and applications to facilitate business processes. For IT teams, AI is another application component that “does stuff.”
Line Organization Perspective: Perceives AI as a tool for answering questions and augmenting worker capability. Business units view AI’s value in the answers and insights themselves, not the underlying technical processes. They see AI as something that “makes workers work better.”
This perspective difference might seem subtle, but it becomes critically important when projects produce tangible results. Terms like “generative,” “agent,” and “autonomy” carry entirely different meanings depending on perspective, and these conflicts typically surface only when concrete demonstrations force alignment discussions.
The impact on projects is substantial. Many workflow-coupled AI projects end up incorporating interactive components, often serving as management oversight mechanisms, because line managers need to see and understand what AI agents are doing, even though IT teams initially designed fully automated systems.
For cloud projects, the perspective gap is less pronounced but still problematic. The primary challenge becomes inadequate evaluation of claimed benefits, with failures to validate assumptions until actual testing exposes the reality. Organizations rush to migrate workloads based on cost projections that don’t account for data transfer fees, compliance requirements, or the operational overhead of multi-cloud management.
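Making those cost assumptions explicit can be as simple as a model that prices in the commonly omitted line items. The sketch below is illustrative only: the unit prices and the 15% operational overhead are placeholder assumptions, not any provider's actual rates.

```python
def monthly_cloud_cost(compute_usd, storage_gb, egress_gb,
                       storage_usd_per_gb=0.023, egress_usd_per_gb=0.09,
                       ops_overhead_pct=0.15):
    """Monthly cost estimate that includes the items migration business
    cases often omit: data egress fees and operational overhead.
    All unit prices are placeholder assumptions."""
    base = compute_usd + storage_gb * storage_usd_per_gb
    egress = egress_gb * egress_usd_per_gb
    return (base + egress) * (1 + ops_overhead_pct)

# A workload that looks cheap on compute alone shifts materially once
# data transfer and management overhead are priced in.
naive = monthly_cloud_cost(10_000, 50_000, egress_gb=0, ops_overhead_pct=0.0)
real = monthly_cloud_cost(10_000, 50_000, egress_gb=20_000)
print(f"naive: ${naive:,.0f}/mo vs realistic: ${real:,.0f}/mo")  # ~$11,150 vs ~$14,892
```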
The Data Quality Crisis Beneath Every Failure
While skills gaps and expectation mismatches cause visible project delays, the fundamental challenge that dooms most AI initiatives remains largely invisible until implementation: data quality and readiness.
Gartner predicts that at least 30% of generative AI projects will be abandoned after proof-of-concept by the end of 2025, primarily due to poor data quality, inadequate risk controls, escalating costs, or unclear business value. Organizations focus intensely on algorithm selection and model performance while overlooking that their data infrastructure fundamentally cannot support AI at scale.
“Modern generative AI hasn’t eliminated the old maxim that 80% of machine learning work is data preparation. If anything, the stakes are higher. Bad training data produces inaccurate batch reports that analysts have to debug. Bad retrieval-augmented generation (RAG) systems hallucinate in real-time customer conversations.”
— WorkOS AI Implementation Analysis, 2025
The problem extends beyond simple data quality. Many organizations invested heavily in traditional data management architectures and practices, only to discover that AI-ready data management represents a fundamentally distinct discipline with different requirements for governance, access patterns, and freshness guarantees.
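As a rough sketch of one such requirement, the gate below enforces a freshness window and a governance classification before a document enters a retrieval index, a control a batch-era warehouse pipeline rarely needed. The metadata fields and the seven-day window are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

ALLOWED_CLASSIFICATIONS = {"public", "internal"}  # assumed labels, not a standard

def admit_to_index(doc, max_age=timedelta(days=7)):
    """Gate a document before it enters a RAG index: enforce a freshness
    guarantee and a governance classification check."""
    meta = doc["meta"]
    fresh = datetime.now(timezone.utc) - meta["last_verified"] <= max_age
    governed = meta["classification"] in ALLOWED_CLASSIFICATIONS
    return fresh and governed

doc = {
    "text": "Q3 pricing sheet ...",
    "meta": {
        "last_verified": datetime.now(timezone.utc) - timedelta(days=2),
        "classification": "internal",
    },
}
print(admit_to_index(doc))  # True: recent enough and cleared for retrieval
```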
What Actually Works: Building Dedicated Teams
To address these challenges, enterprises with successful cloud projects establish dedicated “cloud teams,” while those succeeding with AI build specialized “AI teams.” These aren’t traditional IT teams with new names—they represent a fundamentally different organizational structure.
Successful Team Composition
- Hybrid Membership: Include both line management and IT experts, not IT-only teams
- Trained in Application Patterns: Team members understand various ways AI can be applied to business problems, not just how to build models
- Business Case Authority: Empowered to assess and validate business cases before technical work begins
- Collaborative Decision Rights: Drive adoption decisions jointly rather than IT dictating to business or vice versa
- Continuous Learning Culture: Maintain ongoing understanding of both evolving business goals and emerging technology capabilities
This comprehensive understanding enables teams to select appropriate approaches, accurately assess business cases, and drive adoption collaboratively. Most importantly, these teams speak a common language that bridges business requirements and technical constraints—addressing the fundamental communication gap that derails most projects.
Start with Pain, Not Capability
The most reliable predictor of AI project success is starting with documented business pain rather than technical capability. Organizations that succeed follow a different pattern than those that fail.
| Failing Approach | Successful Approach |
|---|---|
| Ask “Which AI models should we deploy?” | Identify process bottlenecks that cost measurable money |
| Start with technology capabilities | Quantify business pain before considering solutions |
| Focus on model selection and tuning | Invest primarily in data quality and workflow integration |
| Build internally to maintain control | Partner with specialized vendors (67% success rate vs 33% for internal builds) |
Lumen Technologies exemplifies this approach. Its sales teams spent four hours researching customer backgrounds before each outreach call. The company treated that lost time as a $50 million annual opportunity, not primarily a machine learning challenge. Only after quantifying that specific pain did it design solutions, ultimately achieving $50 million in projected annual savings through tools that compress research time to 15 minutes.
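The underlying arithmetic is deliberately unsophisticated. The inputs below are invented assumptions, not Lumen's internal figures; the point is that pain quantification is a spreadsheet exercise that precedes any modeling decision:

```python
# Back-of-envelope pain quantification with invented inputs.
reps = 1_500              # sales reps affected (assumption)
calls_per_week = 3        # researched outreach calls per rep (assumption)
hours_before = 4.0        # research time per call today
hours_after = 0.25        # research time with tooling (15 minutes)
loaded_hourly_cost = 60   # fully loaded cost per rep-hour, USD (assumption)
weeks_per_year = 48

hours_saved = reps * calls_per_week * (hours_before - hours_after) * weeks_per_year
annual_value = hours_saved * loaded_hourly_cost
print(f"{hours_saved:,.0f} hours/year, roughly ${annual_value:,.0f}/year")
```

If the resulting number is not material, the project should not proceed, whatever the technology promises.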
McKinsey’s 2025 AI survey confirms this pattern: organizations reporting “significant” financial returns are twice as likely to have redesigned end-to-end workflows before selecting modeling techniques.
Learning a New Language
AI is forcing organizations to learn a new language—and it’s crucial that this becomes a single, common language understood by both business and technology stakeholders, not separate dialects that breed misunderstanding.
Terms like “agent,” “autonomous,” “learning,” and “hallucination” require precise, shared definitions. When IT describes an AI system as “autonomous,” they typically mean it operates without manual intervention within predefined parameters. When business leaders hear “autonomous,” they often envision systems that can make independent strategic decisions—a fundamental misalignment that leads to mid-project conflicts.
Organizations succeeding with AI invest heavily in shared vocabulary development, creating glossaries that define terms from both technical and business perspectives, then training all stakeholders in these definitions before projects begin.
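A glossary entry can be as simple as a two-column contrast; the rows below are illustrative, extending the "autonomous" example above:

| Term | What IT means | What the business should expect |
|---|---|---|
| Autonomous | Operates without manual intervention within predefined parameters | Not a system that makes independent strategic decisions |
| Hallucination | Fluent model output unsupported by source data | Confident-sounding answers that still require verification |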
Key Recommendations
Based on enterprise experiences and current failure rate data, organizations should:
- Establish hybrid teams before starting projects — Don’t wait until problems emerge. Build cross-functional cloud/AI teams with decision authority from day one.
- Invest in business-technology translators — The critical shortage isn’t developers, it’s architects who can bridge business goals and technical reality. Prioritize hiring or developing this capability.
- Validate assumptions continuously — Don’t wait for mid-project crises. Establish checkpoints where business expectations and technical capabilities are explicitly reconciled.
- Quantify pain before proposing solutions — No AI project should begin without documented, measurable business pain and clear success metrics tied to real financial impact.
- Allocate 50-70% of resources to data readiness — Stop treating data preparation as preliminary work. It’s the foundation that determines whether projects succeed or fail.
- Partner strategically for specialized expertise — Internal builds succeed about half as often as vendor partnerships for AI projects (33% vs 67%). Recognize when external expertise accelerates success.
- Create shared vocabulary before technical work begins — Invest in defining key terms from both business and technical perspectives, then train all stakeholders in these definitions.
- Design for transparency, not just automation — Business stakeholders need to understand what AI systems are doing. Build oversight and explainability into initial designs, not as afterthoughts (a minimal sketch follows this list).
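As a minimal sketch of oversight built in from the start, the decorator below records every agent action in an audit log that line managers can review; the structure, field names, and the `draft_customer_reply` action are assumptions for illustration:

```python
import functools
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice, a durable store reviewers can query

def audited(action_name):
    """Record every invocation of an agent action so reviewers can see
    what the system did, with what inputs, and what came back."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "action": action_name,
                "at": datetime.now(timezone.utc).isoformat(),
                "inputs": {"args": args, "kwargs": kwargs},
                "result": result,
            })
            return result
        return wrapper
    return decorator

@audited("draft_customer_reply")
def draft_customer_reply(ticket_id, tone="neutral"):
    # Stand-in for a real agent action; hypothetical function.
    return f"Drafted reply for ticket {ticket_id} ({tone})"

draft_customer_reply("T-1042", tone="apologetic")
print(json.dumps(AUDIT_LOG, indent=2, default=str))
```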
Cloud and AI will continue transforming business operations, but success requires more than technical expertise and adequate budgets. It demands a new approach to how organizations conceptualize, plan, and execute technology projects—one that prioritizes shared understanding over technical sophistication and business outcomes over technological impressiveness.
Organizations that master this translation—building teams that genuinely bridge business and technology, starting with documented pain rather than available capabilities, and investing as heavily in data and workflow integration as in models—will find themselves in the successful minority. Those that don’t will continue contributing to failure statistics that should alarm every technology and business leader.
The technology works. The question is whether organizations can create the conditions for it to succeed.