December 3, 2025

5 AI Adoption Patterns That Actually Work For Finance Operations

Most AI projects do not fail because of the tech. They fail because of how leaders approach adoption.

Scott O'Leary

Co-founder

We recently met with a Fortune 500 enterprise that has spent the last 18 months (and millions on consultants) building a gleaming "AI factory" that has yet to launch a single use case to production. In stark contrast, a manufacturing conglomerate skipped the platform build and dove right in, shipping their first agentic workflow (streamlining order entry) after a two-week proof of concept and a six-week sprint to production. This tale of two approaches defines the current state of AI adoption for enterprises.

Enterprise finance and IT leaders are used to implementation timelines that stretch into years, with months of planning, requirements gathering, and system overhauls that touch everything. When that mental model is applied to AI projects, big strategies stall, experiments drift without impact, and “perfect” pilots never scale.

The teams that win choose proven patterns, start small with fixable tasks, and build confidence step by step.

Adopting AI is not about finding a silver bullet. It’s about choosing the right path for your culture, starting with the smallest changes that prove value, and knowing when to scale.

Let’s begin with the most common and reliable approach: Crawl, Walk, Run.

Pattern 1 - Crawl, Walk, Run: Build trust in AI step by step

You know what it’s like when a new initiative kicks off and leaders want results yesterday. With AI, that pressure usually leads to trying to automate too much, too soon.

The crawl, walk, run approach flips that script. It starts with small, low-risk wins that prove value and build the confidence you need to expand safely.

How it works

Think of it as three clear phases:

  • Crawl (Weeks 1-4): Simple, low-risk automation with human review of every output.
  • Walk (Months 2-3): Add business rules and exception handling, with selective human review.
  • Run (Months 4-6): Full automation, with humans stepping in only for true exceptions.

Why it works

Your teams already know which parts of their day are full of repetitive, error-prone work. By starting there, you take something that feels messy and frustrating and show quick, measurable improvements.

Those early wins build credibility and reduce resistance when you scale into more complex territory.

Implementation steps

  1. Pick one repeatable task, like invoice processing or expense categorization.
  2. Start in read-only mode: AI suggests, humans decide.
  3. Add automation gradually, one decision type at a time.
  4. Expand only after you’ve proven accuracy and built confidence.

Customer example

The aforementioned manufacturing conglomerate is following a phased approach to their order entry automation project, enabling them to secure wins and compound learnings quickly.

Here’s their phased approach:

  1. The agent extracts order details from POs and emails, validates the extracted data against master data, and creates a draft order in the ERP for human review.
  2. The agent communicates directly with customers (with a rep on cc) when the order is incomplete (e.g., a missing ship-to address) or incorrect (e.g., wrong prices) and works with them to resolve the issue.
  3. The agent reasons over current machine capacity and delivery timetable data to assess fulfillment feasibility, informing the customer if the “deliver by” date can’t be met.
  4. Once the system clears a 99% human approval rate (on created drafts), they’ll allow the agent to submit orders directly, automating the process end-to-end for “good” orders.

The phased approach immediately increases customer service productivity by eliminating data entry while maintaining the existing controls (human review/submission) and gives them a fast path to full automation.
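
To make that last gate concrete, here is a minimal sketch in Python of how a promotion rule like this might be expressed. The 99% threshold comes from the example above; the field names, sample-size floor, and helper logic are assumptions for illustration, not the customer's actual implementation.

```python
# Hypothetical sketch of the phase gate described above: the agent only submits
# orders directly once its human-approval rate on created drafts clears 99%.

APPROVAL_THRESHOLD = 0.99   # gate from the example above
MIN_SAMPLE_SIZE = 200       # assumed: don't trust a rate computed on a handful of drafts

def approval_rate(draft_outcomes: list[bool]) -> float:
    """Share of agent-created drafts that humans approved without changes."""
    if not draft_outcomes:
        return 0.0
    return sum(draft_outcomes) / len(draft_outcomes)

def next_action(order: dict, recent_outcomes: list[bool]) -> str:
    """Decide whether this order goes straight to the ERP or to human review."""
    rate = approval_rate(recent_outcomes)
    gate_cleared = len(recent_outcomes) >= MIN_SAMPLE_SIZE and rate >= APPROVAL_THRESHOLD
    if gate_cleared and order.get("is_clean", False):   # "good" orders only
        return "submit_to_erp"
    return "create_draft_for_review"

# Example: 1,000 recent drafts, 985 approved as-is, so the gate is not yet cleared
outcomes = [True] * 985 + [False] * 15
print(next_action({"is_clean": True}, outcomes))  # -> create_draft_for_review
```

The sample-size floor is there so a rate computed over a handful of drafts can't flip the gate on its own.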

Metrics that matter

Each phase of crawl, walk, run has its own definition of success. Early on, the goal is to prove accuracy and build trust.

As you progress, the focus shifts to speed and efficiency. By the time you reach full automation, cost savings and scale become the clearest markers of value.

  • Crawl: Accuracy above 95 percent, user satisfaction above 80 percent

    Why: In the first weeks, leaders and teams want reassurance that the AI can get the basics right. High accuracy and positive user feedback prove it is safe to move forward.

  • Walk: Processing time reduced by more than 40 percent, error rate below 2 percent

    Why: Once accuracy is proven, efficiency becomes the next priority. Cutting cycle times while keeping errors low shows the system can handle more responsibility without risk.

  • Run: More than 70 percent of cases fully automated, cost savings above 30 percent

    Why: At maturity, the measure of success is scale. A majority of cases should run without intervention, and the savings should be visible on the bottom line.

These benchmarks give you tangible milestones to track and clear proof points to share with leadership.

Common challenges

  • Impatience: Pressure to automate everything immediately
  • Perfectionism: Waiting for 100 percent accuracy before moving forward
  • Scope creep: Tackling edge cases before the core workflow is stable

Pro tip: Set explicit thresholds for each phase, and do not move on until those targets are met.

Crawl, walk, run is the safest way to start. But sometimes the fastest path to trust is to let people work side by side with AI before handing over control. That is where the copilot approach comes in.

Pattern 2 - Copilot first: Build trust through AI assistance

Your teams may not be ready to hand decisions over to AI on day one. And that is okay.

A copilot-first approach gives them time to see how the system works, test its reasoning, and build confidence before automation takes over. Instead of forcing trust, you earn it through collaboration.

How it works

This approach moves in three stages:

  • AI suggests, human decides: AI provides analysis and recommendations, but humans make the final call.
  • AI decides, human reviews: AI makes the decision, and a human double-checks the outcome.
  • AI acts independently: AI handles routine cases on its own, escalating only when exceptions appear.

Why it works

By working side by side with AI, teams learn where it excels and where human judgment is still needed. That transparency reduces resistance, clarifies automation boundaries, and improves long-term adoption.

Implementation approaches

Invoice processing

  • Phase 1: AI extracts data and flags potential issues for review.
  • Phase 2: AI matches invoices to POs and recommends approval or rejection.
  • Phase 3: AI processes standard invoices automatically and escalates exceptions.

Expense management

  • Phase 1: AI categorizes expenses and checks policy compliance.
  • Phase 2: AI approves expenses under threshold amounts if they meet policy.
  • Phase 3: AI handles routine exceptions and escalates nuanced cases that require human judgment (see the sketch below).
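
As a rough illustration of the phase-two rule above, the routing logic can be a simple function: auto-approve only when the amount is under a threshold and no policy checks fail, otherwise hand off to a human. The threshold, names, and return values below are placeholders, not a recommended policy.

```python
# Hypothetical sketch of the phase-2 expense rule: the AI approves small,
# policy-compliant expenses and routes everything else to a human reviewer.

AUTO_APPROVE_LIMIT = 500.00  # assumed threshold; set per your expense policy

def route_expense(amount: float, policy_violations: list[str]) -> str:
    """Return who decides this expense: auto-approval or a human reviewer."""
    if policy_violations:
        return "human_review"       # phase 1 behavior: flag compliance issues
    if amount <= AUTO_APPROVE_LIMIT:
        return "auto_approve"       # phase 2: within threshold and policy-clean
    return "human_review"           # larger spend stays with a human

print(route_expense(240.00, []))                        # -> auto_approve
print(route_expense(1800.00, []))                       # -> human_review
print(route_expense(120.00, ["missing receipt"]))       # -> human_review
```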

Customer example

A large financial firm came to us with an interesting issue: they were regularly late on vendor payments because invoices weren’t getting approved in time. Because invoice volume wasn’t massive, the AP team was lean, and they didn’t have the bandwidth to constantly chase executive budget owners for approvals. Allowing lower-level managers to approve up to a higher amount helped a bit, but they still couldn’t get the executives to approve consistently on time.

When we dug in, two primary reasons emerged:

  1. The executives hated having to navigate the legacy AP platform to find the invoices waiting on their approval.
  2. When they did find them, they often lacked context on the purchase. There was no formal PO process in place, so approvals typically happened live in meetings or via long email chains.

The solution? An email agent that automatically responded to approval requests the AP platform sent out. The agent pulled information from the vendor management system, relevant email threads, and the AP system to provide context along with an approval recommendation.

And to solve the platform issue, the execs simply reply “approve” to the email and the agent takes care of the rest.
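
For illustration only, here is a minimal sketch of the shape of that agent's workflow: gather context, attach a recommendation, and let the executive reply "approve." Every helper below is a stub standing in for a real integration (vendor management system, email search, AP platform); none of it reflects the firm's actual systems.

```python
# Hypothetical sketch of the approval-email agent described above. The stub
# functions stand in for the real integrations the agent would call.

def lookup_vendor(invoice_id: str) -> dict:
    return {"name": "Acme Supplies"}                       # stub: vendor management system

def find_related_threads(invoice_id: str) -> list[str]:
    return ["Q3 renewal discussed with Acme on 10/14"]     # stub: email-thread search

def get_invoice(invoice_id: str) -> dict:
    return {"amount": 4200.00, "matches_contract": True, "flags": []}  # stub: AP platform

def build_approval_email(invoice_id: str) -> str:
    """Assemble context plus a recommendation so the exec can just reply 'approve'."""
    vendor = lookup_vendor(invoice_id)
    threads = find_related_threads(invoice_id)
    invoice = get_invoice(invoice_id)
    recommendation = "approve" if invoice["matches_contract"] and not invoice["flags"] else "review"
    return (
        f"Invoice {invoice_id} from {vendor['name']} for ${invoice['amount']:,.2f}\n"
        f"Context: {'; '.join(threads)}\n"
        f"Recommendation: {recommendation}\n"
        "Reply 'approve' and the agent will record the approval in the AP platform."
    )

print(build_approval_email("INV-1042"))
```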

Metrics that matter

The progression is less about speed and more about trust. You are measuring how people respond as much as how the AI performs.

Early phase: High override rates are expected. The measure of success is transparency - people understand why the AI made a recommendation.

Middle phase: Overrides decrease as trust grows. Teams start requesting automation of repetitive decisions, a clear sign of confidence.

Mature phase: Routine cases run automatically, and human attention shifts to true exceptions. The proof point is improved accuracy paired with higher adoption.

These indicators show that your teams are not just tolerating AI but actively asking it to take on more work.

Common challenges

  • Rushed automation: Skipping the assistance phase undermines trust.
  • Poor explanation: AI recommendations without clear reasoning leave people skeptical.
  • Weak feedback loops: If you do not capture when humans disagree with AI, the system cannot improve.

Starting with AI as a copilot helps your people learn to trust it. But trust also depends on focus.

Success is easier to prove when you pick one team and one process, instead of spreading pilots too thin.

Pattern 3 - One team, one use case: Focus beats breadth

You have probably seen AI pilots spread across multiple teams at once. Everyone wants to try something, but nothing goes deep enough to show real value.

A more effective strategy is to concentrate on one team and one process, prove success, and then expand from there.

How it works

The focus framework is simple:

  • One team: A single department with clear ownership and accountability.
  • One process: An end-to-end workflow that creates measurable business value.
  • One success metric: A clear outcome that matters to leadership and can be proven quickly.

Why it works

Concentration creates mastery.

When one team develops deep expertise with AI, the learnings compound. Success turns them into credible advocates for adoption in other parts of the business, and in our experience, the team that proves value first is excited to teach and advise the rest of the organization.

Instead of scattered pilots with shallow impact, you get a focused win that becomes a replicable success story.

Choosing your starting point

When you are deciding where to begin, look for processes that are:

  • High volume and repetitive
  • Governed by clear business rules and decision criteria
  • Capable of producing measurable time or cost savings
  • Supported by leaders who are willing to champion adoption

Good first use cases

  • Accounts Payable: Invoice processing and exception handling
  • Accounts Receivable: Collections outreach prioritization and automation
  • Order Entry: PO data extraction, validation, and integration
  • Customer Helpdesk: Order status inquiry and change request automation

Your 6-month expansion strategy

Once one team has mastered a process, the next step is to scale the success. Treat it as a structured sequence rather than a jump into the deep end:

  1. Master the core process (3–6 months): Stay focused until the workflow is stable and results are consistent.

  2. Document learnings and create templates (1 month): Capture what worked so the playbook can be reused.

  3. Identify adjacent opportunities within the same team (1–2 months): Build on familiar ground where adoption is easiest.

  4. Scale to related processes before jumping to new teams (2–3 months): Expand gradually to avoid losing momentum.

Handled this way, expansion feels natural and sustainable, rather than another big-bang rollout.

Metrics that matter

The goal of this pattern is depth of adoption within a team. The numbers you track prove whether the approach is working:

  • Process mastery: More than 80 percent straight-through processing shows the workflow is reliable.
  • Team adoption: User satisfaction above 90 percent indicates the team trusts the system.
  • Business impact: Measurable ROI within six months demonstrates value to leadership.

These metrics prove that focus pays off and create the credibility you need to expand into other areas.

Common challenges

  • Spreading too thin: Trying to pilot in too many places at once weakens results.
  • Leadership pressure: Executives may push for quick wins in multiple departments, which dilutes focus.
  • Neglecting documentation: Failing to capture lessons learned makes expansion harder later.

Once you have proven value with one team and one use case, scaling becomes much easier.

The next step is to make sure the work you have done does not have to be reinvented from scratch. That is where building a library comes in.

Pattern 4 - Build a library: Reuse what works

You know how every new project feels slower when you have to start from scratch? AI adoption is no different.

The library approach solves this by capturing what works: prompts, rules, templates, and testing methods. That way each project builds on the last. What once took months can later be deployed in weeks.

Why it works

Every successful implementation leaves behind assets your team can use again. Over time, those assets compound into faster deployments, more consistent results, and a growing confidence that you are not reinventing the wheel.

What to include in your AI library

Think of your library as a toolkit that gets richer with each project:

  • Prompt templates: For invoice data extraction, policy compliance checks, exception handling, and approval routing.
  • Business rules libraries: Validation criteria, escalation thresholds, ERP integration patterns, and audit requirements.
  • Testing frameworks: Accuracy measurement methods, user acceptance criteria, benchmarking tools, and error analysis procedures.
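
As a concrete example of the first item, a prompt template can be as simple as a versioned, parameterized string that every new project fills in rather than rewrites. The field list and wording below are illustrative, not a recommended schema.

```python
# Hypothetical sketch of a reusable prompt template for invoice data extraction.
# The template is maintained in one place and reused across projects.

INVOICE_EXTRACTION_PROMPT = """\
You are extracting structured data from a vendor invoice for the AP team.
Return a JSON object with exactly these fields:
  vendor_name, invoice_number, invoice_date, due_date, currency,
  line_items (description, quantity, unit_price), total_amount.
If a field is missing or illegible, return null for it - do not guess.

Invoice text:
{invoice_text}
"""

def build_extraction_prompt(invoice_text: str) -> str:
    """Fill the shared template with one invoice's raw text."""
    return INVOICE_EXTRACTION_PROMPT.format(invoice_text=invoice_text)

print(build_extraction_prompt("ACME SUPPLIES  Invoice #1042  Total: $4,200.00 ..."))
```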

Building your library

The key is to start small and expand as you go:

  1. Document what works: Capture the details from your first implementation.
  2. Abstract patterns: Separate reusable components from process-specific elements.
  3. Create templates: Turn proven approaches into repeatable frameworks.
  4. Share knowledge: Make the library accessible to anyone working on new use cases.

Library maturity progression

A library matures in stages. Each milestone speeds up the next project:

  • Months 1-3: Basic prompt templates and success criteria
  • Months 4-6: Reusable business rules and integration patterns
  • Months 7-12: Comprehensive testing frameworks and deployment procedures
  • Year 2+: A cross-functional library supporting multiple departments

Metrics that matter

The value of a library shows up in speed and reuse:

  • Time to deploy: Are new projects taking weeks instead of months?
  • Reuse rate: How much of each new build comes from existing components?
  • Consistency: Are error rates and user satisfaction stable across projects?

These metrics demonstrate that the organization is not just adopting AI but becoming more efficient with every project.

Common challenges

  • Skipping documentation: Failing to record what worked means every project starts over.
  • Over-customization: Building one-off solutions that cannot be reused.
  • Poor access: Keeping the library siloed so other teams cannot benefit.

A strong library makes every new AI project faster, cheaper, and more reliable. Once you have the ability to reuse what works, the next question is how you decide what to build in the first place.

That is where experimentation comes in.

Pattern 5 - Run safe experiments that build momentum

Long planning cycles often kill AI projects before they even start.

You know the pattern: months spent designing a roadmap, only to watch priorities shift and the plan gather dust. A safer path is to experiment in short cycles.

Small, timeboxed tests let you learn quickly, scale what works, and cut what doesn’t.

How it works

Think of this as a 90-day loop you repeat until adoption feels natural:

  • Weeks 1–4: Identify one opportunity and run a small test.
  • Weeks 5–8: Scale what works, drop what fails.
  • Weeks 9–12: Document what you learned and plan the next cycle.

Why it works

Short cycles keep projects grounded in reality. Instead of betting on a perfect 18-month plan, you adapt as the technology evolves and your team’s needs change.

Every experiment produces insight, even the ones that don’t scale, so nothing is wasted.

Toolkit for experimentation

To keep experiments safe and productive, use these guardrails:

  1. Hypothesis: What do you expect AI to improve?
  2. Success criteria: How will you measure impact?
  3. Learning goals: What do you want to understand better?
  4. Resource limits: How much time and budget will you spend?

This structure gives you enough discipline to avoid chaos, while still leaving space for creativity.
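
One lightweight way to apply those guardrails is to write each experiment down as a small structured record before any work starts. Here is a sketch with made-up example values.

```python
# Hypothetical sketch of an experiment "charter" capturing the four guardrails
# above. The values are invented; the discipline is writing them down up front.

from dataclasses import dataclass

@dataclass
class ExperimentCharter:
    hypothesis: str             # what you expect AI to improve
    success_criteria: str       # how you will measure impact
    learning_goals: list[str]   # what you want to understand better
    time_budget_weeks: int      # resource limits: time
    spend_budget_usd: int       # resource limits: budget

ar_collections_test = ExperimentCharter(
    hypothesis="AI-drafted dunning emails cut days-sales-outstanding on small accounts",
    success_criteria="DSO for the pilot segment drops by 5+ days within the 90-day cycle",
    learning_goals=[
        "Which account segments respond to automated outreach",
        "How much human editing the drafts need",
    ],
    time_budget_weeks=4,
    spend_budget_usd=10_000,
)
print(ar_collections_test.hypothesis)
```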

Metrics that matter

The key to measuring experimentation is learning velocity. Think about how quickly you can test, adapt, and improve.

  • Cycle completion: Are teams finishing 90-day cycles without stalling?
    Why: Momentum matters more than perfection at this stage.

  • Experiment success rate: What percentage of tests scale into production?
    Why: Not every test will succeed, but a steady conversion shows you’re targeting the right problems.

  • Reusable learnings: How many components or insights carry over into other projects?
    Why: Even “failed” experiments should add value by enriching your library.

These metrics prove experimentation is not wasted motion but an engine for sustainable progress.

Common challenges

  • Over-planning: Designing the perfect experiment defeats the purpose.
  • Skipping documentation: Without a record of what worked, you cannot improve the next cycle.
  • Chasing novelty: Testing flashy ideas instead of tackling real pain points wastes credibility.

Running safe experiments gives you a way to move fast without losing control. And when paired with the other adoption patterns, it helps you turn early wins into long-term momentum.

Measuring success across patterns

Success in AI adoption is not just about whether the system runs. It is about whether people trust it, whether it improves the business, and whether it lays the foundation for bigger wins. The right metrics keep leaders from guessing and give them a clear story to share with stakeholders.

Universal success indicators

No matter which adoption pattern you choose, these signals show you are moving in the right direction:

  • User adoption: Teams are actively using AI tools in their daily work, not avoiding them or falling back on manual processes.

  • Process improvement: You can point to measurable gains in speed, accuracy, or cost reduction, not just anecdotes.

  • Organizational learning: Each project leaves your team more fluent and confident with AI, making the next one easier.

  • Business impact: ROI is visible enough to justify further investment, whether through savings, reduced risk, or freed capacity.

Think of these as your executive dashboard. If these indicators are not moving, it does not matter how sophisticated the technology looks on paper.

Pattern-specific metrics at a glance

Crawl, Walk, Run

Look for steady progression from human-reviewed outputs to true straight-through processing. If the percentages stall, you may be advancing phases too quickly.

Copilot first

Watch override rates. High overrides early on are normal, but they should decrease over time. Equally important is whether users start requesting more automation, a sign they trust the system.

One team, one use case

The depth of mastery matters more than breadth. An 80 percent straight-through rate or 90 percent user satisfaction within a single process shows the pilot has credibility to scale.

Build a library

Measure how much faster new projects start because of reusable assets. A rising reuse rate signals that your investment is compounding.

Run safe experiments

Track the completion of 90-day cycles and the percentage of experiments that scale into production. The goal is not a perfect hit rate but a rhythm of consistent learning and reuse.

Pattern | What to track | Why it matters
Crawl, Walk, Run | Progression from review → automation | Shows whether phases are advancing at the right pace
Copilot first | Override rates, automation requests | Indicates growing trust and readiness for full automation
One team, one use case | Straight-through rate, user satisfaction | Proves depth of mastery and credibility to scale
Build a library | Reuse rate, time to deploy | Demonstrates compounding value across projects
Run safe experiments | Cycle completion, conversion to production | Measures learning velocity and sustainable momentum

The specifics differ by pattern, but the theme is the same: metrics are not just numbers. They are milestones that show your organization is learning, adapting, and gaining confidence.

How to choose the right pattern for your team in 3 steps

No two finance teams start from the same place. The right adoption pattern depends on your culture, resources, and risk tolerance. Use this quick checklist to match your situation with the right starting point.

Step 1: Ask yourself these questions

  • Is my team comfortable with new technology?
  • Do we have dedicated resources, or will this run within existing capacity?
  • Are we more planning-oriented or experimentation-friendly?
  • How much timeline pressure is coming from leadership?

Step 2: Match your answers to a pattern

  • Risk-averse team? → Start with Copilot first or Crawl, Walk, Run to build trust gradually.

  • Limited resources? → Choose One team, one use case and build a library as you go.

  • Innovation-driven culture? → Run safe experiments and scale what works, supported by Crawl, Walk, Run for structure.

  • Heavy compliance requirements? → Stick with Crawl, Walk, Run and document every stage for audit readiness.

Step 3: Decide and commit

Pick one pattern that fits your reality today. Resist the temptation to try them all at once. Depth beats breadth when it comes to early adoption.

Choosing the right adoption pattern is less about theory and more about fit. When the pattern matches your culture and constraints, you get early wins that build momentum. And once you have those wins, you can expand with confidence.

The final step is knowing how to turn those first successes into a repeatable playbook for your whole organization.

Final thoughts: Don’t reinvent, reuse what works

AI adoption in finance is not about writing the perfect roadmap. It is about proving value early, earning trust, and building momentum that compounds over time. The good news is you do not need to start from scratch.

These adoption patterns give you a playbook that already works. Your job is to apply it to your context.

Here is how to move forward:

  1. Pick your starting pattern: Choose the approach that best fits your culture, resources, and risk tolerance.

  2. Prove value fast: Apply it to one process, track clear metrics, and communicate the early wins.

  3. Scale with confidence: Reuse what worked, expand to adjacent processes, and keep momentum going.

Ready to start your AI journey? Download our Implementation Checklist and book a strategy session to figure out where to start and how to scale.