Operations

How we replaced our SOC 2 compliance platform with AI and Google Drive

Compliance platforms charge thousands annually for what is essentially organisation software. We moved to a Git repository, Google Drive for auditor access, and AI for the tedious work. The only cheque we write now goes to our CPA firm for the actual audit.

Quick answers

Why does this matter? Compliance platforms are organisation software - they track controls, store evidence, and send reminders. That is what you are paying thousands for annually.

Do you still need an auditor? Yes. No platform performs the actual audit or provides the attestation - a licensed CPA firm does that regardless of how you organise your evidence.

What about the audit trail? Git provides one for free - version control tracks every change with who, when, and why. Better than any platform activity log.

What does AI actually do? The tedious compliance work - evidence analysis, policy reviews, security scanning, and report generation that used to justify platform fees.

The invoice arrived. Again. Thousands of dollars for another year of compliance software that, when I actually sat down and thought about it, was doing roughly the same thing as a well-organised folder.

That was the moment I stopped to ask what we were actually getting for this money at Tallyfy. Not what the sales deck promised. What we actually used.

A spreadsheet tracking our controls. A place to upload screenshots. Reminders about evidence due dates. Integrations that promised automatic evidence collection but still needed manual screenshots about half the time. That’s it. That’s the product. And compliance platform companies have built billion-dollar businesses on exactly this.

What took me embarrassingly long to grasp: the platform doesn’t do the audit. You still need a licensed CPA firm to review your evidence, test your controls, and put their attestation on the SOC 2 report. That’s the only part with legal weight. The platform is just where you park things before the auditors arrive.

So we stopped paying for the parking lot and built something better. Our SOC 2 Type 2 is current. The only cheque we write now goes to our CPA firm. Everything else runs on tools we already had.

What compliance platforms actually sell you

Let me be specific. The compliance automation market has real players - Vanta, Drata, Secureframe, Sprinto - all competing for the same pitch: they make SOC 2 manageable. And the framework keeps shifting under their feet. AI governance controls are now central to SOC 2 audits, with the AICPA driving new requirements around algorithmic bias, data poisoning, and AI-driven decision-making explainability.

What does “manageable” actually mean?

Control tracking. A database of your SOC 2 controls, typically 60 to 100 items depending on your trust service criteria. Each control has a status, an owner, evidence requirements, and due dates. This is a structured spreadsheet with conditional formatting. A YAML file with a script to generate status reports does the same thing.

Evidence storage. A place to upload screenshots, exports, and documents proving you did what your policies say. Screenshots of AWS IAM configurations. Exports of user access lists. Policy acknowledgement documents. This is a folder. Google Drive does this. Any shared storage does this.

Reminders and dashboards. Notifications when evidence goes stale. Visualisations showing compliance status. A cron job that checks due dates and sends alerts handles this. So does a calendar.

Integrations. Connections to AWS, GitHub, Okta, Google Workspace that can automatically pull certain evidence. Sounds good until you look closely. User feedback on G2 consistently mentions that integrations work for some items but not others, producing a patchwork of automated and manual evidence gathering. The integration pulls a user list from your identity provider. Fine. But it can’t take a screenshot of your password policy configuration page. It can’t capture specific firewall rule settings. It can’t document manual review processes.

Most evidence still requires someone to take a screenshot, name it sensibly, upload it, and mark it collected. Manually.

Policy templates. Pre-written policy documents covering information security, acceptable use, incident response, business continuity. These save time during initial setup. But they need customisation to reflect actual practices. And after the first year, you already have policies. The templates provide diminishing value.

Readiness assessments. Questionnaires evaluating your current state against SOC 2 requirements. Useful for first-time efforts. Less useful once you understand what the framework requires.

Meanwhile, the industry is moving toward continuous compliance, replacing the traditional six-to-twelve-month Type 2 audit cycle with ongoing monitoring. The platforms are scrambling to keep up. You’re paying for a dashboard that might already be outdated by the time auditors look at it.

The real value proposition isn’t technology. It’s that these platforms make compliance seem manageable by breaking it into steps. They reduce the intimidation factor. They provide structure that feels official and complete.

You can get this same structure from a well-organised folder system, clear documentation of what evidence you need, and someone who understands what SOC 2 actually requires.

The system that replaced it

We moved everything off our compliance platform. Exported our data, converted it to portable formats, and built a system using three components we already had.

A Git repository. All compliance data lives here. Version controlled. Auditable. Portable.

Controls are tracked in YAML files. Each control has an ID, description, owner, status, frequency, and mappings to Trust Service Criteria. The YAML format is readable by both humans and machines. When someone asks about a specific control, we find it in seconds. When we need reports, scripts parse the files.

Evidence items are tracked similarly. Each item has a description, the control it supports, collection frequency - 90 days, 180 days, 365 days depending on how quickly the evidence goes stale - and the date it was last collected. A dashboard script reads these files and shows what’s current, what’s coming due, what’s overdue.
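
A minimal sketch of what the control and evidence entries might look like. The field names here are illustrative of the general shape, not the exact schema we use:

```yaml
# controls.yaml - one entry per control (illustrative field names)
- id: CTRL-014
  description: Quarterly review of user access to production systems
  owner: security-lead
  status: operating
  frequency_days: 90
  criteria: [CC6.1, CC6.2]

# evidence.yaml - one entry per evidence item
- id: EV-031
  description: AWS IAM user list export with MFA status
  control: CTRL-014
  frequency_days: 90
  last_collected: 2025-07-01
  next_due: 2025-09-29
```

Because this is plain YAML, the same files feed the dashboard script, the sync script, and any ad-hoc query someone wants to run.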

The frequency tiers exist for practical reasons, not arbitrary schedules. Access reviews need refreshing every 90 days because employee roles change, people leave, and permissions accumulate. Three months is roughly the window before an access list becomes unreliable. Vendor compliance reports typically update annually because SOC 2 reports themselves cover a twelve-month observation period. User access population exports work on roughly 300-day cycles because auditors want evidence that is recent but does not necessarily need to align with calendar quarters. Getting these cadences wrong means either wasting time on unnecessary re-collection or discovering during the audit that your evidence is stale.

Not all evidence works the same way either. A practical taxonomy: samples are single instances of a control operating, like a signed NDA or a completed access review with sign-off. Populations are system-generated lists like user exports or customer inventories. Settings evidence means configuration screenshots showing how a system is actually configured right now. Policy evidence is the reviewed document itself. These distinctions matter because each type has different collection mechanics and different ways of going stale. Most compliance guides treat evidence as one undifferentiated category. It is not.

One workflow gap catches most companies off guard: not every evidence item applies to every organisation. When something genuinely does not apply, auditors want a formal attestation letter explaining why. Not an email. Not a comment in a spreadsheet. A signed letter with the specific item, the specific reasoning, and a date. This sounds minor until you realise that a dozen items might be legitimately not applicable and each one needs this formal documentation to close out cleanly.

Risks are documented the same way. Risk ID, description, treatment approach, current status, mitigating controls. All in YAML. All version controlled.

Our policies live in the repository too. Three formats for each policy: editable source files that humans write and update, markdown versions that AI can read and help review, and PDF exports that auditors receive. The markdown versions include YAML frontmatter with metadata: version number, last review date, next review due date, owner, and mappings to SOC 2 criteria.

Every change is tracked through git commits. Who made the change, when, what changed, why. Git blame shows the exact commit for any line in any policy. Git log shows the complete history. Git diff shows exactly what changed between versions, character by character.

This is your audit trail, built into version control for free. Better than any platform activity log I’ve seen. Platforms show you that someone uploaded a file. Git shows you exactly what changed in that file, with the commit message explaining why.

Google Drive. This is where auditors look. We created a shared folder structure mirroring the repository. Policies organised by category. Evidence organised by quarter. Third-party SOC 2 reports from our vendors. Audit packages with official reports from previous periods.

A Python script syncs files from the repo to Drive using the Google Drive API with a service account. Programmatic access, no manual uploads, no human error. Run the sync after evidence collection, after policy reviews, after any updates. Auditors get read-only access to the Drive folder. They can browse, download, review. They can’t modify anything.
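
A hypothetical sketch of the sync logic. The actual Drive upload would use google-api-python-client with a service account, which is stubbed out here; the part worth showing is deciding which files changed since the last sync, so unchanged evidence is never re-uploaded:

```python
import hashlib
from pathlib import Path

# Illustrative repo -> Drive sync (names and layout are assumptions, not
# our exact script). The upload callable stands in for the real Drive
# files().create()/update() calls made with a service account.

def scan(root: Path) -> dict:
    """Map each file's path (relative to root) to a content hash."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def diff_manifest(current: dict, manifest: dict) -> list:
    """Relative paths whose content is new or changed since the last sync."""
    return sorted(rel for rel, h in current.items() if manifest.get(rel) != h)

def sync(root: Path, manifest: dict, upload) -> dict:
    """Upload only changed files, then return the updated manifest."""
    current = scan(root)
    for rel in diff_manifest(current, manifest):
        upload(root / rel)  # real version: Drive API upload, read-only share
    return current
```

Keeping a manifest of content hashes in the repo means the sync is idempotent: run it as often as you like and only genuine changes move.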

The repository is the source of truth. Drive is the read-only mirror for auditor access. Changes happen in the repo, then sync outward. Never the other way around.

This separation matters. The source of truth is version controlled, portable, owned entirely by us. The auditor view is a snapshot we choose to share. We control what gets synced and when.

AI assistance. This is where the real shift happened. AI handles the tedious work that used to justify platform subscriptions.

Evidence collection follows a quarterly cycle. Check what’s due in the evidence YAML file, filter by next_due date. Go to AWS or GitHub or whatever system holds that evidence. Take a screenshot showing current state and date. Name it with a consistent convention: date prefix, evidence ID, source system. Files sort chronologically by default.

Update the YAML with the new collection date and next due date. Sync to Drive. Mark as done. Move to the next item.
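
The naming convention and the due-date bookkeeping are simple enough to express as two small helpers. This is a sketch with assumed field names, not the exact schema:

```python
from datetime import date, timedelta

# Illustrative helpers for the quarterly collection cycle. The filename
# convention (date prefix, evidence ID, source system) makes files sort
# chronologically by default; mark_collected rolls the due date forward.

def evidence_filename(collected: date, evidence_id: str,
                      source: str, ext: str = "png") -> str:
    """Date-first name: 2025-07-01_EV-031_aws-iam.png sorts by date."""
    return f"{collected.isoformat()}_{evidence_id}_{source}.{ext}"

def mark_collected(item: dict, collected: date) -> dict:
    """Stamp the collection date and push next_due out by the item's frequency."""
    item = dict(item)  # avoid mutating the caller's record
    item["last_collected"] = collected.isoformat()
    item["next_due"] = (collected + timedelta(days=item["frequency_days"])).isoformat()
    return item
```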

Annual policy reviews work similarly. A script bumps version numbers and review dates across all policies. AI reads each policy and identifies sections that might need updates: references to specific technologies that have changed, procedures that no longer match actual practice, compliance requirements that have evolved. Human reviews the suggestions, makes actual changes, approves the updates. Generate fresh PDFs. Sync to Drive. Done.
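
The version-and-date bump can be sketched as a small function over a policy file's frontmatter. This sketch assumes flat `key: value` frontmatter and parses it with plain string handling to stay dependency-free; a real script would likely use a YAML library:

```python
from datetime import date, timedelta

# Illustrative annual-review bump for a markdown policy with YAML
# frontmatter (version, last_review, next_review_due, owner, ...).

def bump_policy(text: str, today: date) -> str:
    """Bump the minor version and roll the review dates forward a year."""
    head, _, body = text.partition("---\n")
    front, _, rest = body.partition("---\n")
    lines = []
    for line in front.splitlines():
        key, _, value = line.partition(": ")
        if key == "version":
            major, minor = value.split(".")
            value = f"{major}.{int(minor) + 1}"
        elif key == "last_review":
            value = today.isoformat()
        elif key == "next_review_due":
            value = (today + timedelta(days=365)).isoformat()
        lines.append(f"{key}: {value}")
    return head + "---\n" + "\n".join(lines) + "\n---\n" + rest
```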

The whole system is portable. Clone the repository, you have everything. Export the Drive folder, you have all evidence. No vendor lock-in. No proprietary formats. No worrying about what happens if your compliance platform gets acquired, changes pricing, or goes out of business.

How AI changes the actual workload

Compliance work isn’t intellectually difficult. It’s tedious. Evidence collection is tedious. Policy reviews are tedious. Security scanning is tedious. Status reporting is tedious.

AI handles tedious. That’s not a limitation - it’s exactly what makes this approach work.

Evidence analysis. When you collect evidence - screenshots of AWS IAM settings, exports of user access lists, configuration pages from various systems - someone needs to verify that the screenshot actually shows what it claims to show. Does the file you uploaded actually demonstrate access controls, or did someone upload the wrong thing?

AI can visually inspect images and describe what they contain. We ran visual analysis on our entire evidence library. Every screenshot now has a machine-generated description of what it shows, mapped to the evidence ID it supports. When an auditor asks about a specific evidence item, you can immediately confirm what the screenshot demonstrates without hunting through folders. The description tells you: this screenshot shows AWS IAM user list with 12 users, MFA status column visible, last login dates shown.

AI also catches problems. Screenshot shows wrong time period. Screenshot shows staging environment instead of production. Screenshot was taken before a policy change, not after. These errors get caught during analysis rather than during the audit.

Policy reviews. Annual policy reviews traditionally meant someone reading through 30-plus policy documents, checking if anything needed updates, making changes, tracking versions. This takes days when done properly. Most companies either rush through it or skip meaningful review entirely.

AI reads your policies and identifies sections that reference specific technologies, vendors, or practices that might have changed. It flags inconsistencies between related policies. It suggests updates based on changes in your actual practices documented elsewhere: commit histories showing new tools adopted, configuration changes showing new security measures implemented, incident logs showing response procedures that evolved.

The human still decides what to change. But the tedious reading and cross-referencing that used to take days now takes hours. The AI surfaces what needs attention. The human applies judgment about what to actually update.

Security scanning. SOC 2 requires evidence that you test your security posture regularly. Zero trust architecture is now a SOC 2 requirement, with auditors scrutinising access restrictions, network segmentation, and least-privilege enforcement. Penetration testing and vulnerability assessments traditionally require external consultants charging significant fees per engagement.

We run automated penetration tests monthly using open-source tools. Nuclei for vulnerability scanning against thousands of known vulnerability templates. Testssl.sh for certificate analysis and TLS configuration review. Security header checks for HSTS, CSP, and other browser security policies. Port reconnaissance to verify only expected services are exposed.
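
The security-header portion of that scan is simple enough to sketch. The required headers below follow common browser security guidance (this is an illustrative subset, not our exact checklist); a real run would fetch headers from the live site, while the check itself is a pure function over a response-header dict:

```python
# Illustrative security-header check from the monthly scan.
REQUIRED_HEADERS = {
    "strict-transport-security": "HSTS: force HTTPS on returning visits",
    "content-security-policy": "CSP: restrict where scripts may load from",
    "x-content-type-options": "stop MIME-type sniffing",
    "x-frame-options": "block clickjacking via framing",
}

def missing_security_headers(headers: dict) -> list:
    """Return required headers absent from a response, case-insensitively."""
    present = {k.lower() for k in headers}
    return sorted(h for h in REQUIRED_HEADERS if h not in present)
```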

The scans run automatically on a schedule. Raw results go into the repository. Then AI generates the reports.

AI takes raw scan output - technical, verbose, sometimes thousands of lines - and produces professional PDF reports. Executive summary with security posture score. OWASP Top 10 coverage showing which categories were assessed. Severity breakdown showing critical, high, medium, low findings. Individual findings with CWE classifications, remediation guidance, and references. Most importantly: mappings to SOC 2 trust service criteria. This finding relates to CC6.1. This finding relates to CC7.2.
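
The criteria mapping in those reports boils down to a lookup table applied during post-processing. The category-to-criteria mapping below is a hypothetical excerpt; a real one would be maintained alongside the control set:

```python
from collections import Counter

# Illustrative post-processing of raw scan findings into the report
# structure described above: severity breakdown plus SOC 2 mappings.
CATEGORY_TO_TSC = {
    "access-control": ["CC6.1", "CC6.3"],
    "tls-configuration": ["CC6.1", "CC6.7"],
    "monitoring": ["CC7.1", "CC7.2"],
}

def summarise(findings: list) -> dict:
    """Severity counts plus each finding tagged with its SOC 2 criteria."""
    return {
        "severity": dict(Counter(f["severity"] for f in findings)),
        "findings": [
            {**f, "criteria": CATEGORY_TO_TSC.get(f["category"], [])}
            for f in findings
        ],
    }
```

An empty `criteria` list is itself useful: it flags findings that need a human to decide where they belong.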

What used to require a security consultant writing up findings now happens automatically. The scans cost nothing to run. The AI report generation costs fractions of what consultant time costs.

Dashboard generation. Parse the YAML config files, count what’s current versus overdue, calculate compliance percentages, generate a status report showing overall health and items needing attention. No platform subscription required. No monthly fee for a dashboard showing information derived from your own data.
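
The dashboard logic is genuinely this small. A sketch, assuming the evidence entries have already been loaded from YAML and carry an ISO `next_due` date (field names are assumptions):

```python
from datetime import date

# Illustrative status roll-up: bucket evidence by due status and compute
# a compliance percentage (share of items not yet overdue).

def dashboard(items: list, today: date, warn_days: int = 14) -> dict:
    buckets = {"current": [], "due_soon": [], "overdue": []}
    for item in items:
        due = date.fromisoformat(item["next_due"])
        if due < today:
            buckets["overdue"].append(item["id"])
        elif (due - today).days <= warn_days:
            buckets["due_soon"].append(item["id"])
        else:
            buckets["current"].append(item["id"])
    ok = len(buckets["current"]) + len(buckets["due_soon"])
    buckets["compliance_pct"] = round(100 * ok / len(items), 1) if items else 100.0
    return buckets
```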

Attestation letters. Some evidence items cannot be captured with a screenshot. “Confirm there were zero security incidents this quarter” is a true statement, but there is nothing to screenshot. The practical approach: generate formal attestation letters. A markdown template with structured fields gets rendered to HTML, then to PDF with an embedded signature image. AI writes the content based on what the tracking data shows. A human reviews the letter, confirms accuracy, and the signed PDF becomes the evidence. This replaces hours of manual Word document formatting for what amounts to structured fill-in-the-blank work.
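
The template step might look like the sketch below. The letter text and field names are illustrative; the real pipeline continues from the filled-in markdown to HTML and then PDF with an embedded signature image, which is omitted here:

```python
from datetime import date
from string import Template

# Illustrative attestation-letter template. AI drafts the $statement
# from the tracking data; a human reviews before signing.
LETTER = Template("""\
# Attestation of $subject

Date: $date

I, $signer, attest that for the period $period, $statement.

Signed,
$signer
""")

def attestation(subject, signer, period, statement, on=None):
    """Fill the letter template; `on` defaults to today's date."""
    on = on or date.today()
    return LETTER.substitute(subject=subject, signer=signer, period=period,
                             statement=statement, date=on.isoformat())
```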

Vendor compliance review without enterprise access. Many vendors gate their SOC 2 reports behind enterprise pricing tiers. If you are a 15-person company, you probably do not have an enterprise contract with every SaaS tool you use. The workaround: review the vendor’s publicly available compliance documentation, trust centres, published certifications, and security pages. Then generate a formal review attestation documenting what was reviewed and confirmed. Auditors accept this when the actual report is not obtainable at your pricing tier. It shows you did the diligence with the access you actually had.

Batch evidence collection. AI can systematically process dozens of overdue evidence items in sequence. Read what is due from the tracking data. Go to each source system. Collect evidence through screenshots, exports, or attestation generation. Name each file with a date-first convention so everything sorts chronologically by default. Update the tracking data with new due dates. Move to the next item. What takes a human several days of context-switching across different systems takes an AI session a few hours. The repetitive nature of evidence collection is precisely what makes it suitable for AI assistance.
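
The batch pass can be sketched as a dispatch loop over everything due, routed by evidence type (the sample / population / settings / policy taxonomy above). The `kind` field and handler names are illustrative:

```python
# Illustrative batch pass: process every item due on or before today,
# routing each to a handler for its evidence type. Items with no
# automated path are reported for a human to pick up.

def batch_collect(items, today, handlers):
    """Return ([ids collected], [ids needing a human]) for items due by today."""
    done, skipped = [], []
    for item in sorted(items, key=lambda i: i["next_due"]):  # ISO dates sort correctly
        if item["next_due"] > today:
            continue  # not due yet
        handler = handlers.get(item["kind"])
        if handler is None:
            skipped.append(item["id"])  # e.g. a screenshot only a human can take
            continue
        handler(item)
        done.append(item["id"])
    return done, skipped
```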

The pattern is consistent. Compliance work involves lots of reading, lots of cross-referencing, lots of documentation. AI is excellent at exactly this. The platforms charged thousands annually for organising this work. AI actually does this work, faster.

What you still need

Let me be clear about what this approach doesn’t replace.

A licensed CPA firm for the audit. Non-negotiable. No platform, no AI, no clever folder structure substitutes for the actual audit. A licensed CPA firm needs to review your evidence, test your controls through inquiry and observation, and provide the attestation that your customers and their security teams actually care about.

The platform vendors sometimes obscure this. They talk about compliance automation like the platform does the compliance. It doesn’t. Your CPA firm does the compliance assessment. The platform - or in our case, the repository and AI - just organises the evidence they review.

When a customer asks for your SOC 2 report, they want the attestation letter signed by a licensed CPA. That letter is what carries legal weight. That letter is what their security team reviews.

Budget accordingly. The audit cost stays roughly the same regardless of how you organise your evidence. The CPA firm charges for their time reviewing, testing, and writing. What changes is the platform subscription you no longer pay.

Beyond storing files, the shared Drive folder becomes an interaction layer with your audit firm. Auditors can comment directly on evidence files, ask clarifying questions, or flag items that need re-collection. Detecting and responding to these comments through the Drive API replaces the back-and-forth email chains that compliance platforms handle with their own messaging systems. The audit firm gets a familiar interface. You keep everything centralised rather than scattered across email threads.

Someone responsible for compliance. A human needs to own evidence collection, policy maintenance, and audit coordination. AI assists but doesn’t replace judgment calls about what evidence to collect, how to respond to auditor requests, or when policies need substantive updates.

At Tallyfy, this isn’t a full-time role. Quarterly evidence collection takes a day or two. Annual policy reviews take a week. Audit coordination during the observation period takes more time but happens once a year.

Understanding of what SOC 2 requires. This approach works because we already understood SOC 2 from years of working with compliance platforms and auditors. The requirements keep evolving too. SOC 2 auditors now require evidence that AI models are explainable and decision-making processes are transparent. Processing integrity criteria require companies to demonstrate their AI systems regularly generate complete, valid, accurate outputs. AWS released a new SOC 2 compliance guide in July 2025, setting clearer expectations for how Trust Services Criteria should be evidenced in cloud environments.

If you’re starting from zero, the platforms do provide educational value. They break down requirements and guide you through initial setup. You can get this same education from your CPA firm, from the AICPA guidance, from compliance consultants who charge for initial setup rather than ongoing subscriptions. But you need it from somewhere.

SOC 2 Type 2 benefits more from this automation than Type 1. When you need quarterly evidence refresh across dozens of controls, having AI-assisted workflows matters more than when you’re proving a single point in time. Worth noting: SOC 2 principles align with data protection laws like GDPR, CCPA, and HIPAA, so the evidence you collect often does double duty across multiple compliance regimes.

When this makes sense and when it doesn’t

This isn’t for everyone. I’d rather be honest about the fit than oversell it.

Good fit: technical teams comfortable with Git and YAML. If your engineering team already uses version control, this approach feels natural. YAML config files, markdown policies, Python sync scripts - this is infrastructure developers already understand.

Good fit: startups trying to get SOC 2 without burning runway. Platform subscriptions represent significant annual cost, often equivalent to a meaningful percentage of monthly burn for early-stage companies. The approach here requires upfront setup work but eliminates ongoing subscription fees.

Good fit: companies wanting full control over their compliance data. Everything lives in your repository. You can audit your own audit trail. You’re not dependent on a vendor continuing to exist, maintaining specific features, or keeping pricing stable. This matters more than it used to - IBM’s 2025 report found that one in five organisations reported a breach due to shadow AI, and 63% of breached organisations either lack AI governance policies or are still developing one. Owning your compliance data means you know exactly what tools touch it.

Less good fit: large enterprises with complex multi-team compliance needs. If you have separate teams responsible for different parts of SOC 2, if you need role-based access controls on who can see what evidence, if you have regulatory requirements about where compliance data lives - the platforms handle this complexity better than a repository does. Only 35% of organisations have an established AI governance framework as it stands, and large enterprises juggling multiple compliance regimes probably need all the structure they can get.

Less good fit: teams that genuinely benefit from platform integrations. If your stack happens to match what the platforms integrate with, and those integrations actually work for your evidence needs, the automation might justify the cost. Check carefully though. Many teams find the integrations handle maybe 30 percent of evidence requirements, with everything else still manual.

Less good fit: non-technical teams who need hand-holding. The platforms provide structure, guidance, and support. They make compliance feel achievable for teams without deep technical backgrounds. If you need that scaffolding, pay for it.

The honest assessment: you trade platform fees for your own time. Someone needs to set this up initially. Someone needs to maintain it. Someone needs to understand how the pieces fit together. But you own everything. Nothing is locked in vendor formats. Your compliance data is portable.


SOC 2 is documentation. You’re proving you do what your policies say you do. Policies, controls, evidence - organised, accessible, version controlled.

The compliance platform vendors built businesses on making this seem complicated. It’s not complicated. It’s tedious. There’s a difference.

Complicated means intellectually difficult, requiring specialised knowledge to work through. Tedious means time-consuming and repetitive, requiring attention but not genius. Tedious work follows a specific pattern: read something, check something, document something, repeat.

AI handles tedious. That’s what large language models do well. Read documents, cross-reference information, generate reports, check consistency. The same capabilities that power AI writing assistants work perfectly for compliance busywork.

What you’re really buying with platform subscriptions is the comfort of not having to figure this out yourself. The organisation, the reminders, the dashboards, the sense that someone else has thought through how compliance should work.

Now that AI can help figure it out - can read your policies, analyse your evidence, generate your reports, assist with the actual compliance work - that comfort is worth less than it used to be. The biggest AI failures of 2025 were organisational, not technical - weak controls, unclear ownership, and misplaced trust. The platforms did not prevent that. Understanding your own systems prevents that.

Our SOC 2 Type 2 is current. Our auditor is happy with the evidence organisation. Our CPA firm does the attestation that actually matters legally. Everything else runs on Git, Google Drive, and AI.

The rest is just folders and files.

About the Author

Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.