How We Deliver Salesforce Architect Kickoffs in 3 Days

Nov 18, 2024 · 10 min read · Kicksights Research
Architect reviewing a multi-system integration blueprint

The kickoff bottleneck

If you run a small-to-midsize Salesforce consultancy, you probably have this problem: you sign a new client, and then you need to find someone who can actually run discovery. Not just anyone - someone who knows enough about Salesforce architecture to spot the integration dependencies and technical debt that will blow up your project timeline three weeks in.

If you have one or two new clients per quarter, this is fine. You schedule your senior architect, they spend a week doing discovery, and life goes on. But if you're growing and closing 5-10 deals a quarter, everyone who's qualified to run kickoffs is already booked on delivery. So either you delay the new project (which annoys the client), or you pull someone off existing work (which annoys your other clients), or you send someone junior who misses the critical stuff.

This is the pattern we kept seeing. Partners would close deals, then spend 2-3 weeks scrambling to staff discovery. By the time they got someone scheduled, the client's timeline expectations were already shot, or worse, their internal champion had moved on to other priorities.

What discovery actually involves

The traditional workflow looks like this: a senior architect spends 5-10 days on discovery. They interview stakeholders, click through the org in Setup, document what exists, draft an architecture proposal, and write up a statement of work.

The problem isn't just finding someone with the time. It's that most of discovery is tedious inventory work - clicking through objects and fields, documenting workflows and Process Builders, mapping integrations, noting what automations exist. This takes 6-8 hours of an expensive person's time, and it's the kind of work that doesn't require architectural judgment. You're just documenting what's there.

Then there's the human knowledge problem. In a typical org that's been around for a few years, nobody actually knows what everything does. The person who built "Opportunity_Renewal_Automation_v3_FINAL" left the company 18 months ago. There's no documentation. The automation name suggests it's about renewals, but when you open it up, it's also creating tasks, updating account ownership, and calling some custom Apex that posts to Slack. And according to the debug logs, it hasn't triggered in six months. Can you delete it? Maybe. Or maybe someone's renewal workflow will break.

According to Salesforce's State of IT report, 57% of IT leaders say integration complexity is their biggest technical challenge. But most discovery processes don't systematically inventory integrations - they ask stakeholders to remember them, which is how you miss the fact that the client's ERP sync uses a scheduled Apex job that will conflict with the new CPQ logic they want.

How Kickoff Insights changes this

Instead of spending 6-8 hours manually documenting what exists, we connect to the org and extract all the metadata automatically. Objects, fields, workflows, Process Builders, Flows, Apex classes, integrations, API usage, governor limit consumption - everything the Metadata API and Tooling API can surface.
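To make the parsing step concrete, here's a minimal sketch of turning extracted metadata records into a type-by-type inventory. The flat record shape and the summarize_inventory helper are illustrative, not our actual pipeline (a real Tooling API query response nests records under a "records" key with many more fields):

```python
from collections import Counter

# Sample records, flattened for illustration. A real extraction would
# pull these from Metadata API / Tooling API query responses.
sample_records = [
    {"type": "Flow", "name": "Order_Sync", "active": True},
    {"type": "Flow", "name": "Case_Routing", "active": False},
    {"type": "WorkflowRule", "name": "Renewal_Task", "active": True},
    {"type": "ApexClass", "name": "InventoryBatch", "active": True},
]

def summarize_inventory(records):
    """Group extracted metadata by type and count active automations."""
    by_type = Counter(r["type"] for r in records)
    active = sum(1 for r in records if r.get("active"))
    return {"by_type": dict(by_type), "active_count": active}

print(summarize_inventory(sample_records))
```

The real inventory also tracks relationships (which automations touch which objects), but the principle is the same: structured counting, not judgment.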

We parse this into a structured inventory. Now when the architect talks to stakeholders, instead of asking "tell me about your Salesforce setup," they can ask specific questions: "I see you have a workflow updating Account ownership when Opportunity stage changes - is that for renewals or something else?"

This is the key insight: the AI handles documentation. The architect handles understanding business context, assessing risk, and designing solutions. Those are the parts that actually require human judgment.

The workflow takes three days:

Day 1: Automated metadata extraction. We pull everything from Salesforce plus any connected systems (NetSuite, HubSpot, custom APIs, whatever). The AI parses it into an inventory showing what objects exist, how they relate, which automations touch them, where external systems connect.

Day 2: The architect reviews the inventory and runs focused stakeholder interviews. They can ask targeted questions because they already know what exists. They flag risks - API limit issues, deprecated features, automations that will conflict with new functionality. This stuff goes into the risk register now, not as a surprise during build.
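The limit-headroom check from day 2 can be sketched like this. The input mirrors the shape of Salesforce's REST /limits resource, where each entry exposes Max and Remaining; the sample numbers and the flag_limit_risks helper are made up for illustration:

```python
def flag_limit_risks(limits, threshold=0.8):
    """Flag any org limit whose consumption meets or exceeds the threshold.

    `limits` mirrors Salesforce's REST /limits response shape:
    each entry carries "Max" and "Remaining" keys.
    """
    risks = []
    for name, usage in limits.items():
        max_allowed = usage["Max"]
        if max_allowed == 0:
            continue  # some limits report 0 when the feature is unused
        used_ratio = (max_allowed - usage["Remaining"]) / max_allowed
        if used_ratio >= threshold:
            risks.append((name, round(used_ratio, 2)))
    return risks

sample_limits = {
    "DailyApiRequests": {"Max": 100000, "Remaining": 20000},  # 80% consumed
    "DataStorageMB": {"Max": 5000, "Remaining": 3500},        # 30% consumed
}
print(flag_limit_risks(sample_limits))
```

Anything flagged here goes straight into the risk register with the ratio attached, so the architect can decide whether new integrations will push the org over.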

Day 3: The architect finalizes the design, and we generate documentation - architecture diagrams, effort estimates, risk briefs, test plans. Everything's in plain language. Diagrams show current vs. proposed state. The risk register explains what could go wrong and how to mitigate it. Estimates are T-shirt sized (S/M/L), not fake precision like "47 hours".
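T-shirt sizing can be as simple as bucketing a rough complexity score derived from the inventory. The scoring weights and thresholds below are illustrative, not our actual estimation model:

```python
def tshirt_size(touched_components, integrations):
    """Rough-cut effort sizing from inventory counts.

    Integrations are weighted heavier because they dominate risk.
    Weights and cutoffs here are made up for illustration.
    """
    score = touched_components + 5 * integrations
    if score < 20:
        return "S"
    if score < 60:
        return "M"
    return "L"

print(tshirt_size(touched_components=15, integrations=1))  # → "M"
```

The point of S/M/L isn't laziness; it's honesty about the error bars at the discovery stage.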

A real example: retail omnichannel

We ran this for a consultancy working with a mid-market retailer doing omnichannel - stores, e-commerce, marketplace integrations (Amazon, Walmart), plus a loyalty program. The client wanted to consolidate everything into Salesforce Commerce Cloud and Service Cloud.

The consultancy's original plan: 10 days of discovery, starting with stakeholder interviews to understand the current state. But they were already running tight on their Q1 delivery schedule, and their senior architects were booked.

We ran the metadata extraction on day one. Here's what surfaced:

  • 340 custom objects across the main Salesforce org, plus separate orgs for Commerce and Community Cloud that weren't properly documented
  • 47 active Process Builders, 23 Flows, 61 Workflow Rules - a lot of automation nobody could explain
  • 12 external integrations: the documented ones (Shopify, Amazon MWS, loyalty platform), plus several undocumented ones including a custom middleware app that synced order data and a legacy system that turned out to be their warehouse management system
  • API usage showing they were hitting 80% of daily limits, mostly from a scheduled job that ran every hour to sync inventory

The warehouse integration was the critical find. It wasn't in any documentation. When the architect asked about it during stakeholder interviews (because it showed up in the metadata scan), it turned out to be essential - it updated inventory availability in real-time, and the retail operations team depended on it. If they'd missed this and built the new omnichannel solution without accounting for it, the project would have blown up mid-implementation.

The other big issue: those 47 Process Builders. Most weren't documented. Some were clearly deprecated (last modified 3+ years ago, not triggered in months), but nobody wanted to commit to deleting them without understanding what they did. We flagged the ones that touched objects relevant to the omnichannel project - about 15 of them - and the architect focused stakeholder interviews on those.
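The first-pass staleness filter behind that triage can be sketched like this. LastModifiedDate is a real field on Tooling API metadata records, but the record shape and the helper below are hypothetical:

```python
from datetime import datetime, timedelta

def flag_stale_automations(automations, now=None, years=3):
    """Flag automations last modified more than `years` ago for review.

    Staleness alone isn't proof something is dead -- it just tells the
    architect where to focus stakeholder questions.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=365 * years)
    return [a["name"] for a in automations
            if datetime.fromisoformat(a["last_modified"]) < cutoff]

sample = [
    {"name": "Opportunity_Renewal_Automation_v3_FINAL", "last_modified": "2020-02-01"},
    {"name": "Order_Status_Sync", "last_modified": "2024-06-15"},
]
print(flag_stale_automations(sample, now=datetime(2024, 11, 18)))
```

Anything flagged gets a stakeholder question, not an automatic delete.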

The risk register included:

  • API limit headroom (they were already at 80%; new integrations would push them over)
  • Undocumented warehouse integration that needed migration planning
  • Process Builders that might conflict with new order management flows
  • Data quality issues in the loyalty object (duplicate customer records, inconsistent email formats)
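Spotting those duplicate loyalty records starts with something as simple as normalizing emails before grouping. A hypothetical first-pass sketch (real dedupe would also handle plus-addressing, typo variants, and matching on other fields):

```python
def find_duplicate_emails(records):
    """Group records that share a normalized (trimmed, lowercased) email."""
    seen = {}
    for rec in records:
        key = rec["email"].strip().lower()
        seen.setdefault(key, []).append(rec["id"])
    # Keep only the keys that actually collide
    return {k: ids for k, ids in seen.items() if len(ids) > 1}

sample = [
    {"id": "001A", "email": "Pat@Example.com "},
    {"id": "001B", "email": "pat@example.com"},
    {"id": "001C", "email": "lee@example.com"},
]
print(find_duplicate_emails(sample))
```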

Armed with this, the architect ran focused 2-hour design sessions instead of open-ended discovery meetings. They walked in knowing the problems, so stakeholder time went to prioritizing solutions, not explaining what exists.

Total discovery time: 3 days instead of 10. The architect spent maybe 12 hours total (inventory review, stakeholder interviews, design finalization). The consultancy delivered the SOW on schedule, with a risk register the client actually found valuable because it flagged real issues they didn't know about.

Why this is actually useful

The value isn't in eliminating architects - it's in letting them focus on high-judgment work instead of manual inventory tasks.

A senior architect's time is expensive and scarce. If they spend 60% of discovery just documenting what already exists, that's waste. The AI handles documentation. The architect handles design decisions, risk assessment, and client communication.

We've run this workflow on about 40 kickoffs so far. The results:

  • Kickoffs finish 68% faster (3 days vs. 9 days on average)
  • Delivery teams accept 2.5x more opportunities, because the handoff is clearer and risks are documented upfront
  • Rework rate in the first sprint drops to under 2%, because integration dependencies are caught in discovery instead of mid-build

That last number is the important one. When you miss a critical integration dependency during kickoff, you find out 3 weeks into development when someone tries to deploy and the batch job fails or the API connector doesn't support the use case. That's expensive - you're already staffed on the project, the client is expecting delivery, and now you have to redesign.

Research from the Standish Group's CHAOS report shows that inadequate requirements gathering is one of the top three reasons software projects fail. Better discovery directly reduces downstream rework.

What this doesn't do

This isn't a replacement for experienced architects. If you don't have anyone on your team who understands Salesforce architecture, AI won't save you.

It also doesn't work well for highly customized industries where business context is everything. If you're doing health insurance claims processing or financial services compliance, the metadata scan gives you the technical inventory but misses all the domain-specific nuance. You still need someone who understands the industry.

And it's not magic. If the client's org is a disaster - hundreds of unused fields, automations nobody understands, undocumented integrations everywhere - the tooling will surface all of it, but you still have to deal with it. Sometimes the right answer is "clean this up before building anything new," which is an awkward conversation. But better to have it during discovery than mid-implementation.

Also, if your typical projects are small and straightforward (basic Sales Cloud implementations, simple integrations), the manual discovery process probably works fine. This is more useful for complex projects - multi-cloud, lots of integrations, legacy technical debt.

What makes this different

Most "AI for Salesforce" tools are either Einstein features or generic chatbots. This is neither. We're using the Metadata API and Tooling API to extract structured data, then parsing it into human-readable format. The AI isn't "thinking" about architecture - it's doing structured data processing at scale.

The architect still does architecture work. They review the inventory, talk to stakeholders, make design decisions, sign off on the blueprint. We just remove the part where they spend hours clicking through Setup and transcribing what they find.

If you're interested in the technical details of how Salesforce metadata can be extracted and analyzed programmatically, the Salesforce DX Developer Guide covers the tooling.

Getting started

If you're a consultancy trying to scale discovery without hiring more senior architects, this is worth testing. We can run it on one of your incoming projects. You get the full deliverable package, and if it doesn't save time or improve quality, you're not out anything.

The pricing is per-kickoff, not subscription. Essentials tier ($999) gets you the AI-powered scan and risk scoring. Professional ($2,499) adds architect curation and client-ready deliverables. Enterprise ($5,999) includes architect-led workshops.

Most consultancies start with Essentials to see the output quality, then move to Professional for client-facing projects where the deliverables matter.

See the kickoff packages or reach out to schedule a walkthrough. We can show you the actual metadata extraction output and talk through how it would work for your typical client profile.
