Recently, a cooperation partner casually approached me with an AI use case at their organization. They wanted to make their onboarding process for new staff more efficient by using AI to answer the repetitive questions of newcomers. I suggested a practical chat approach that would integrate their internal documentation, and off they went with an air of confidence, planning to “talk to their IT team” to move forward.
From experience, I knew that this kind of optimism was brittle. The average IT team isn’t equipped to implement a full end-to-end AI application on their own. And so it was: months later, they were stuck. Their system was frustratingly slow, and it also became clear they had misread the users’ actual needs during development. New employees were asking different questions than those the system had been tuned for. Most users bounced after a couple of attempts and never came back. Fixing these issues would require rethinking their entire architecture and data strategy, but the damage was already done. Employees were frustrated, leadership had taken notice, and the initial excitement around AI had faded into skepticism. Arguing for another extensive development phase would be difficult, so the case was quietly shelved.
This story is far from unique. Great marketing by AI companies creates an illusion of accessibility around AI, and companies jump into initiatives without fully grasping the challenges ahead. In reality, specialized expertise is needed to create a solid AI strategy and implement any non-trivial custom use case in your company. If this expertise is not available internally, you need to get it from external partners or providers.
That doesn’t mean that you need to buy everything — that would be like having $100 and spending it at a restaurant instead of at the supermarket. The first option will address your hunger on the spot, but the second one will ensure you have something to eat for a week.
So, how can you get started, and who should implement your first AI projects? Here is my take: Forget build-or-buy and focus on partnering and learning instead. I deeply believe that most companies should build AI expertise internally — this will provide them with more bandwidth in their AI strategy and activities in the future. At the same time, AI is a complex craft that takes time to master, and failure is omnipresent (according to this report by RAND Corporation, more than 80% of AI initiatives fail). Learning from failure is nice in theory, but in reality, it leads to wasted time, resources, and credibility. In order to achieve AI maturity efficiently, companies should consider cooperating with trusted partners who are ready to share their expertise. A realistic and careful setup will not only ensure a smoother technical implementation but also address the people- and business-related aspects of your AI strategy.
In the following, I will first outline the rough basics (inputs, outputs, and trade-offs) of build-or-buy decisions in AI. Then, you will learn about a more differentiated partnering approach. It combines building and buying while reinforcing your internal learning curve. Finally, I will close with some practical observations and advice on partnering in AI.
Note: If you are interested in more actionable AI insights, please check out my newsletter AI for Business!
The basics of build-or-buy decisions in AI
To start, let’s break down a classical build-or-buy decision into two parts: the inputs — what you should assess upfront — and the outputs — what each choice will mean for your business down the line.
Inputs
To prepare the decision, you need to evaluate your internal capabilities and the requirements of the use case. These factors will shape how realistic, risky, or rewarding each option might be:
- AI maturity of your organization: Consider your internal technical capabilities, such as skilled AI talent, existing reusable AI assets (e.g. datasets, pre-built models, knowledge graphs), and adjacent technical skills that can be transferred into the AI space (e.g. data engineering, analytics). Also factor in how proficient users are at interacting with AI and dealing with its uncertainties. Invest in upskilling and dare to build more as your AI maturity grows.
- Domain expertise needs: How deeply must the solution reflect your industry-specific knowledge? In use cases requiring expert human intuition or regulatory familiarity, your internal domain experts will play a crucial role. They should be part of the development process, whether through building internally or partnering closely with an external provider.
- Technical complexity of the use case: Not all AI is created equal. A project that relies on existing APIs or foundation models is vastly simpler than one that demands training a custom model architecture from scratch. High complexity increases the risk, resource requirements, and potential delays of a build-first approach.
- Value and strategic differentiation: Is the use case core to your strategic advantage or more of a support function? If it’s unique to your industry (or even company) and will increase competitive differentiation, building or co-developing may offer more value. By contrast, for a standard use case (e.g. document classification, forecasting), buying will likely deliver faster, more cost-effective results.
Consequences of build-or-buy decisions
Once you’ve assessed your inputs, you’ll want to map out the downstream impact of your build-or-buy choice and evaluate the trade-offs. Here are seven dimensions that will influence your timelines, costs, risks, and outcomes:
- Customization: The degree to which the AI solution can be tailored to the organization’s specific workflows, goals, and domain needs. Customization often determines how well the solution fits unique business requirements.
- Ownership: Intellectual property (IP) rights and control over the underlying AI models, code, and strategic direction. Building internally offers full ownership, while buying typically involves licensing another party’s technology.
- Data security: Covers how data is handled, where it resides, and who has access. In regulated or sensitive environments, data privacy and compliance are central concerns, particularly when data may be shared with or processed by external vendors.
- Cost: Encompasses both the initial investment and ongoing operational expenses. Building involves R&D, talent, infrastructure, and long-term maintenance, whereas buying may require licensing, subscriptions, or cloud usage fees.
- Time-to-market: Measures how quickly the solution can be deployed and start delivering value. Fast deployment is often critical in competitive or dynamic markets; delays can lead to lost opportunities.
- Support & maintenance: Involves who is responsible for updates, scaling, bug fixes, and ongoing model performance. Internal builds require dedicated resources for upkeep, while external solutions often include support services.
- AI learning curve: Reflects the complexity of acquiring AI expertise and operationalizing it within the organization. Building in-house often comes with lots of trial-and-error and brittle outcomes because the team doesn’t possess foundational AI knowledge. On the other hand, buying or partnering can accelerate learning via guided expertise and mature tooling and create a solid basis for future AI activities.
Now, in practice, binary build-or-buy thinking often leads to unresolvable trade-offs. Take the onboarding use case mentioned earlier. One reason the team leaned toward building was a need to keep their company data confidential. At the same time, they didn’t have the internal AI expertise to develop a production-ready chat system. They would likely have been more successful by outsourcing the chat architecture and ongoing support while building their database internally. Thus, you shouldn’t decide to build or buy at the level of the entire AI system. Instead, break it down into components and evaluate each one based on your capabilities, constraints, and strategic priorities.
Towards a handshake between domain and AI expertise
At the component level, I encourage you to differentiate build-or-buy decisions through the lens of expertise requirements. Most B2B AI systems combine two kinds of expertise: domain expertise, which lives within your company, and technical AI expertise, which can be brought in through an external partner if you don’t (yet) have specialized AI skills. In the following, I will examine the expertise needs for each of the core components of an AI system (cf. this article for an explanation of the components).

Business opportunity: Framing the right AI problems
Did you know that the #1 reason for AI project failure is not technical — it’s choosing the wrong problem to solve (cf. The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed)? You might be surprised — after all, your expert teams understand their problems deeply. The point is, they don’t have the means to connect the dots between their pain points and AI technology. Here are some of the most common failure patterns:
- Vague or unsuitable problem framing: Is this a task that AI is actually good at?
- Missing effort/ROI estimation: Is the outcome worth the time and resources for AI development and deployment?
- Unrealistic expectations: What does “good enough” mean for an imperfect AI?
On the other hand, there are many organizations that use AI for its own sake and create solutions in search of a problem. This burns resources and erodes confidence internally.
A good AI partner helps assess which business processes are ripe for AI intervention, estimates potential impact, and models how AI might deliver value. Both parties can shape a focused, high-impact use case through joint discovery workshops, design sprints, and exploratory prototyping.
Data: The fuel of your AI system
Clean, well-structured domain data is a core asset. It encodes your process knowledge, customer behavior, system performance, and more. But raw data alone isn’t enough — it needs to be transformed into meaningful learning signals. That’s where AI expertise comes in to design pipelines, choose the right data representations, and align everything with AI’s learning goals.
Often, this includes data labeling — annotating examples with the signals a model needs to learn from. It might seem tedious, but resist the urge to outsource it. Labeling is one of the most context-sensitive parts of the pipeline, and it requires domain expertise to be done right. In fact, many fine-tuning tasks today perform best on small but high-quality datasets — so work closely with your AI partner to keep the effort focused and manageable.
Data cleaning and preprocessing is another area where experience makes all the difference. You’ve probably heard the saying: “Most of a data scientist’s time is spent cleaning data.” That doesn’t mean it should be slow. With engineers who are experienced in your data modality (text, numbers, images…), this process can be dramatically accelerated. They’ll instinctively know which preprocessing techniques to apply and when, turning weeks of trial and error into hours of productive setup.
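To make this concrete, here is a minimal sketch of what such a preprocessing step might look like for text data — for instance, cleaning and deduplicating the question logs that would feed an onboarding chat system. The function names and the sample records are purely illustrative, not taken from any specific pipeline.

```python
import re
import unicodedata

def clean_text(raw: str) -> str:
    """Normalize a raw text record before it enters an AI pipeline."""
    text = unicodedata.normalize("NFKC", raw)   # unify unicode variants
    text = re.sub(r"\s+", " ", text).strip()    # collapse runs of whitespace
    return text

def deduplicate(records: list[str]) -> list[str]:
    """Drop near-identical records (same text up to case/whitespace), keeping order."""
    seen, unique = set(), []
    for record in records:
        cleaned = clean_text(record)
        key = cleaned.lower()
        if key not in seen:
            seen.add(key)
            unique.append(cleaned)
    return unique

# Hypothetical question log with noise and duplicates
raw = ["How do I reset my  password?", "how do i reset my password?", "Where is the canteen?"]
print(deduplicate(raw))  # → ['How do I reset my password?', 'Where is the canteen?']
```

An experienced engineer would know, for example, that case-insensitive deduplication like this is safe for FAQ-style text but would destroy signal in code or identifiers — exactly the kind of modality-specific judgment the paragraph above describes.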
Intelligence: AI models and architectures
This is where most people think AI projects begin — but it’s only the middle of the story. Deep AI expertise is needed to select or fine-tune models, evaluate performance, and design system architectures. For example, should your use case use a pre-trained model? Do you need a multi-model setup? What evaluation metrics make sense? In more complex systems, different AI components such as models and knowledge bases can be combined into a multi-step workflow.
Domain expertise comes in during system validation and evaluation. Experts and future users need to check if AI outputs make sense and align with their real-world expectations. A model might be statistically strong, but operationally useless if its outputs don’t map to business logic. When designing compound systems, domain experts also need to make sure that the system setup mirrors their real-world processes and needs.
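The question “what evaluation metrics make sense?” is often where the two kinds of expertise first meet in practice. As a hedged illustration, the sketch below compares two hypothetical candidate models on a held-out set using hand-rolled precision, recall, and F1 — the labels and predictions are invented for the example:

```python
def precision_recall_f1(y_true: list[int], y_pred: list[int]) -> tuple[float, float, float]:
    """Compute binary precision, recall, and F1 from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical predictions from two candidate models on a held-out set
y_true  = [1, 0, 1, 1, 0, 1, 0, 0]
model_a = [1, 0, 1, 0, 0, 1, 1, 0]
model_b = [1, 1, 1, 1, 0, 1, 1, 0]

for name, preds in [("model_a", model_a), ("model_b", model_b)]:
    p, r, f = precision_recall_f1(y_true, preds)
    print(f"{name}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

Which metric to optimize is itself a domain decision: in a claims-risk setting, for instance, missing a risky claim (low recall) may cost far more than flagging a safe one (low precision) — a trade-off the model alone cannot resolve.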
Tailoring AI models and building a custom AI architecture is your “co-pilot” phase: AI teams architect and optimize, while domain teams steer and refine based on business goals. Over time, the goal is to build shared ownership of system behavior.
Case study: Building with AI expertise support in insurance
At a leading insurance provider, the data science team was tasked with building a claims risk prediction system — a project they wanted to keep in-house to retain full ownership and align closely with proprietary data and workflows. However, early prototypes ran into performance and scalability issues. That’s where my company Anacode came in as an architectural and strategic partner. We helped the internal team evaluate model candidates, design a modular architecture, and set up reproducible ML pipelines. Just as importantly, we ran targeted upskilling sessions focused on model evaluation, MLOps, and responsible AI practices. Over time, the internal team gained confidence, reworked earlier prototypes into a robust solution, and fully took over operations. The result was a system they owned completely, while the expert guidance we provided during the project had also elevated their internal AI capabilities.
User experience: Delivering AI value through the user interface
This one is tricky. With a few exceptions, neither domain experts nor deep AI engineers are likely to design an experience that is truly intuitive, efficient, and enjoyable for real users. Ideally, you can bring in specialized UX designers. If these are not available, look for people from adjacent disciplines who have a natural feel for user experience. Today, a lot of AI tools are available to support UX design and prototyping, so taste matters more than technical craft. Once you have the right people, you need to feed them with inputs from both sides:
- Backend: AI experts provide insight into how the system works internally — its strengths, limitations, levels of certainty — and support the design of elements like explanations, uncertainty indicators, and confidence scores (cf. this article on building trust in AI through UX).
- Frontend: Domain experts understand the users, their workflows, and their pain points. They help validate user flows, highlight friction, and propose refinements based on how people actually interact with the system.
Focus on fast iteration and be prepared for some trial and error. AI UX is an emerging field, and there’s no settled formula for what “great” looks like. The best experiences arise from tight, iterative feedback loops, where design, testing, and refinement happen continuously, absorbing inputs from both domain experts and AI specialists.
Support and maintenance: Keeping AI alive
Once deployed, AI systems require close monitoring and continuous improvement. Real-world user behavior often diverges from test environments and changes over time. This inherent uncertainty means your system needs to be actively watched, so that issues can be identified and addressed early.
The technical infrastructure for monitoring — including performance tracking, drift detection, automated retraining, and MLOps pipelines — is typically set up by your AI partner. Once in place, many day-to-day monitoring tasks don’t require deep technical skills. What they do require is domain expertise: understanding whether model outputs still make sense, noticing subtle shifts in usage patterns, and knowing when something “feels off.”
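As a toy illustration of what “noticing subtle shifts in usage patterns” can look like once tooling is in place, here is a minimal drift check: it flags an alert when a recent window of some monitoring signal drifts too far from a reference window. The signal (daily average question length) and all values are hypothetical, and real drift detection in production systems is considerably more involved.

```python
from statistics import mean, stdev

def drift_alert(reference: list[float], recent: list[float], z_threshold: float = 3.0) -> bool:
    """Flag drift when the recent mean deviates from the reference mean
    by more than z_threshold reference standard deviations."""
    mu, sigma = mean(reference), stdev(reference)
    if sigma == 0:
        return mean(recent) != mu
    z = abs(mean(recent) - mu) / sigma
    return z > z_threshold

# Hypothetical monitoring signal: daily average question length in a chat system
reference = [42.0, 40.5, 43.2, 41.1, 39.8, 42.7]
stable    = [41.0, 42.5, 40.9]
shifted   = [78.3, 81.0, 76.5]   # users suddenly ask much longer questions

print(drift_alert(reference, stable))   # → False
print(drift_alert(reference, shifted))  # → True
```

The interesting part is not the statistics but who reacts to the alert: deciding whether longer questions signal a new user population or a UI problem is exactly the domain judgment described above.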
A well-designed support phase is more than just operational — it can be a critical learning phase for your internal teams. It creates space for gradual skill-building, deeper system understanding, and ultimately, a smoother path toward taking greater ownership of the AI system over time.
Thus, rather than framing AI implementation as a binary build-or-buy decision, you should view it as a mosaic of activities. Some of these are deeply technical, while others are closely tied to your business context. By mapping responsibilities across the AI lifecycle, you can:
- Clarify which roles and skills are essential to success
- Identify capabilities you already have in-house
- Spot gaps where external expertise is most valuable
- Plan for knowledge transfer and long-term ownership
If you want to dive deeper into the integration of domain expertise, check out my article Injecting domain expertise into your AI systems. Importantly, the line between “domain” and “AI” expertise is not fixed. You might already have team members experimenting with machine learning, or others eager to grow into more technical roles. With the right partnership model and upskilling strategy, you can evolve towards AI autonomy, gradually taking on more responsibility and control as your internal maturity grows.
In partnering, start early and focus on communication
By now, you know that build-or-buy decisions should be made at the level of individual components of your AI system. But if you don’t yet have AI expertise on your team, how can you envision what your system and its components will eventually look like? The answer: start partnering early. As you begin shaping your AI strategy and design, bring in a trusted partner to guide the process. Choose someone you can communicate with easily and openly. With the right collaboration from the start, you’ll increase your chances of navigating AI challenges smoothly and successfully.
Choose an AI partner with foundational AI expertise
Your AI partner should not just deliver code and technical assets, but help your organization learn and grow during your cooperation. Here are a few common types of external partnerships, and what to expect from each:
- Outsourcing: This model abstracts away the complexity — you get results quickly, like a dose of fast carbs. While it’s efficient, it rarely delivers long-term strategic value. You end up with a tool, not with stronger capabilities.
- Academic partnerships: Great for cutting-edge innovation and long-term research, but often less suited for an AI system’s real-world deployment and adoption.
- Advisory partnerships: In my view, the most promising path, especially for companies that already have a tech team and want to develop their AI acumen. A good advisor empowers your engineers, helps them avoid costly missteps, and brings practical, experience-driven insight to questions like: What’s the right tech stack for our use case? How do we curate our data to boost quality and kick off a powerful data flywheel? How do we scale without compromising trust and governance?
A detailed partner selection framework is beyond the scope of this article, but here’s one piece of hard-earned advice: Be wary of IT outsourcers and consultancies that suddenly added “AI” to their offering after the GenAI boom in 2022. They might charm you with fancy buzzwords, but if AI isn’t in their DNA, you may end up paying for their learning curve rather than benefiting from complementary expertise. Choose a partner who’s done the hard work already and is ready to transfer that expertise to you.
Double down on communication and alignment
Effective communication and stakeholder alignment are critical in partnering models. Here are some important communication roles to get right in your company:
- Leadership and domain experts must identify and clearly communicate the business problems worth solving (more on best practices for AI ideation here).
- End users need to share their needs early, give feedback during usage, and ideally become co-creators in shaping the AI experience.
- IT and governance teams must ensure compliance, security, and safety while enabling, not blocking, AI innovation. Keep in mind: these capabilities don’t appear fully formed — they mature alongside your AI practice.
In AI projects, the risk of misalignment and unproductive silos is high. AI is still a relatively new field, and the terminology alone can create confusion. If you’ve ever found yourself in a debate about the difference between “AI” and “machine learning,” you know what I mean. And if you haven’t, I encourage you to try at your next get-together with your colleagues. It can be just as slippery as that conversation with your significant other that starts with “we need to talk.”
Aim for a rapprochement from both sides to iron out ambiguities and disconnects. Your internal teams should invest in upskilling and build a basic understanding of AI concepts. On the other hand, your AI partners must meet you halfway. They should skip the jargon and use clear, business-oriented language that your team can actually work with. Effective collaboration starts with shared understanding.
Conclusion
The real question isn’t “Should we build or buy AI?” — it’s “How do we grow our AI capability in a way that balances speed, control, and long-term value?” The answer lies in recognizing AI as a blend of technology and expertise, where success depends on matching the right resources to the right tasks.
For most organizations, the smartest path forward is partnering — combining your domain strengths with external AI expertise to build faster, learn faster, and eventually own more of your AI journey.
What you can do next:
- Map your AI use case against your internal capabilities. Be honest about the gaps.
- Choose partners who transfer knowledge, not just deliverables.
- Identify which components to build, buy, or co-create. You don’t need to make a binary choice.
- Upskill your team as you go. Every project should make you more capable and autonomous, not more dependent on your partner’s assets and skills.
- Start with focused pilots that create value and momentum for internal learning.
By taking a strategic, capability-building approach today, you lay the groundwork for becoming an AI-capable — and eventually AI-driven — organization in the long term.
Further readings
- Singla, A., Sukharevsky, A., Ellencweig, B., Krzyzaniak, M., & Song, J. (2024, May 22). Strategic alliances for Gen AI: How to build them and make them work. McKinsey & Company.
- Liebl, A., Hartmann, P., & Schamberger, M. (2023, November 23). Enterprise guide for make-or-buy decisions [White paper]. appliedAI Initiative.
- Gartner. (n.d.). Deploying AI: Should your organization build, buy or blend? Gartner.