AI Governance & Compliance

California · Ontario · Quebec · Updated 2026-05-02

Do You Actually Need This?

AI moves faster than your policies. The regulators are catching up. Engage when any of these applies.

  • SHIPPING AN AI PRODUCT

    • Your AI features change your data flows.
    • They shift legal risk to your stack.
    • Old policies miss the new exposure.
    • The gap widens with every release.
  • EMPLOYEES USING AI TOOLS

    • Your team pastes data into ChatGPT daily.
    • Confidential information leaks through prompts.
    • No policy means no defense later.
    • Vendor agreements quietly claim rights to your inputs.
  • INVESTOR OR ENTERPRISE DILIGENCE

    • Diligence asks for AI governance documentation.
    • Without it, the deal slows or dies.
    • Enterprise procurement gates on the same docs.
    • Clean files move money faster.
  • EU OR CROSS-BORDER AI EXPOSURE

    • Your AI reaches users in the EU. EU rules apply regardless of your headquarters.
    • U.S. state laws now stack on top.
    • Risk classification is no longer optional.

A regulatory fine is not the worst outcome. The worst outcome is the AI decision that goes wrong while your policy was still a draft.

What You Get

  • Internal AI rules your team can actually follow.

    An AI Acceptable Use Policy for your employees and contractors. Defines which tools are approved, what data may go into them, who owns the output, and the consequences for breach. Built for how your team actually uses AI day to day, not for a generic compliance binder.

  • The decision-making layer regulators expect.

    An AI Governance Charter that names who decides what about your AI systems. Roles, oversight, risk-tier escalation, training data sourcing, and output ownership documented in one charter. The artifact enterprise procurement and investor diligence ask for, drafted to satisfy U.S. and EU expectations in a single document.

  • Stop signing AI vendor terms blind.

    A playbook for reviewing and negotiating the AI vendor contracts your team signs every quarter. Red-flag clauses on training, output ownership, indemnification, and data retention, with a negotiation cheat-sheet your procurement team can use without calling a lawyer for every renewal. Built for the AI stack your platform actually runs on.

  • Multi-jurisdiction AI law without separate engagements.

    A jurisdiction-by-jurisdiction map of the AI laws your company operates under. EU readiness with risk classification, U.S. state-by-state compliance posture, and the Canadian three-jurisdiction overlay where it applies. The complete cross-border picture in one engagement, drafted by a firm licensed in all three jurisdictions.

Flat Fee. No Surprises.

  • AI Policy Review

    $1,495 flat fee. One inbound policy.
    • Audit of one inbound AI policy, governance doc, or AI clause
    • Redline against the current U.S. and EU AI regulatory landscape
    • One-page risk summary memo
    • 30-minute walkthrough call
    Book a Strategy Call
  • AI Governance Stack

    Recommended
    $3,495 flat fee. Most common engagement.
    • SGL-drafted Internal AI Acceptable Use Policy
    • SGL-drafted AI Governance Charter
    • AI Vendor Contract Playbook
    • AI training data and output ownership memo
    • 60-minute strategy call
    Book a Strategy Call
  • Full AI Compliance Program

    $7,495+ flat fee. Multi-jurisdiction scope.
    • Everything in the AI Governance Stack
    • Cross-border AI risk classification and readiness memo
    • U.S. multi-state AI law compliance map
    • Investor and enterprise-diligence documentation pack
    • California, Ontario, and Quebec cross-border module when applicable
    Book a Strategy Call

Common Questions

Do I actually need an AI governance policy?

Yes, in two situations. First, if your team uses AI tools (ChatGPT, Claude, Cursor, Copilot, Gemini) day to day, you need an Acceptable Use Policy that names which tools are approved and what data can be entered. Second, if your product uses AI to make decisions affecting users, you need a governance charter that documents oversight, risk classification, and escalation. Both are gates for enterprise procurement and investor diligence.

Book a free discovery call
Does the EU AI Act apply to my U.S. company?

Yes, if your AI system is used by, or affects, people in the European Union. The EU AI Act has extraterritorial reach similar to GDPR and applies regardless of where your company is incorporated. U.S. SaaS and AI startups with even modest EU usage often fall within scope, especially when their tools generate output used inside the EU. Risk classification is the first step; obligations cascade from the risk tier.

Book a free discovery call
What is a 'high-risk' AI system under the EU AI Act?

An AI system that poses significant potential harm to health, safety, or fundamental rights. Examples named in the EU AI Act (text on EUR-Lex) include AI used in critical infrastructure, employment screening, education, law enforcement, healthcare devices, and creditworthiness scoring. High-risk systems carry the heaviest compliance load: risk management, data governance, technical documentation, human oversight, and conformity assessment before going to market. Most consumer SaaS does not hit 'high-risk,' but that remains an assumption until the system is actually classified.

Book a free discovery call
How is AI Governance different from a Privacy Policy or Data Processing Agreement?

They serve three different audiences and three different surfaces. The Privacy Policy is your public-facing notice to users about data practices; that lives on our Terms of Service & Privacy Policy page. The DPA is the contract between you and a vendor or customer governing how personal data is processed; that lives on our Data Processing Agreements page. AI Governance covers the operational layer above both: how your company decides what AI to build, deploy, and let employees use, and the documentation that shows it.

Book a free discovery call
What does an AI Acceptable Use Policy actually cover?

Which AI tools your employees and contractors may use, what data they may enter, who owns the resulting output, and the consequences for breach. The policy also addresses confidentiality (do not paste client data into a public chatbot), security (only approved enterprise tiers), and disclosure (when AI use must be flagged in deliverables). The U.S. National Institute of Standards and Technology publishes the AI Risk Management Framework, which informs current best practice on internal AI policy design.

Book a free discovery call
Who owns the output my company generates with an AI tool?

It depends on the AI vendor's terms and current copyright law. The U.S. Copyright Office does not register works that lack meaningful human authorship, which means purely AI-generated output is typically not copyrightable. Many AI vendor agreements assign output ownership to the customer but reserve rights to use inputs for training, model improvement, or quality review. The AI Vendor Contract Playbook in the AI Governance Stack tier identifies the clauses that change which side actually keeps the value.

Book a free discovery call
What about Colorado, Texas, California, and other state AI laws?

U.S. state AI laws now stack on top of federal and EU obligations, and each state imposes its own duties. The Colorado AI Act (SB 24-205) targets AI used in consequential decisions; the Texas Responsible AI Governance Act covers state-agency AI deployment with private-sector ripple effects; California has layered transparency duties on synthetic-media generators and political deepfakes through bills like SB 942 and AB 2655. The Full AI Compliance Program tier produces a multi-state map showing which laws apply to your company's specific footprint.

Book a free discovery call
My team uses ChatGPT and Claude every day. What's the actual exposure?

Three risks compound quickly without a written policy. First, employees paste confidential client data, source code, or trade secrets into consumer-tier AI tools whose terms allow training on inputs. Second, AI-generated work product carries hallucination and infringement risk that flows to whoever signs the deliverable. Third, when a customer or regulator asks how your company controls AI use and you have nothing to show, the absence is the finding. The Federal Trade Commission has been clear that 'we govern AI' claims without supporting policy are themselves actionable misrepresentations.

Book a free discovery call
What about synthetic media and deepfakes?

If your AI generates faces, voices, or likenesses, multiple regimes apply at once. California has layered disclosure and election-deepfake duties through bills like SB 942 and AB 2655 (texts on the state's Legislative Information site). Federal scrutiny of non-consensual deepfakes has increased through the FTC and DOJ. The EU AI Act adds transparency obligations on generators of synthetic content. Brand-side defense (someone deepfaked your client) lives on our AI Brand Infringement & Deepfake Defense page; this page covers the build side: the policies your platform needs before it ships.

Book a free discovery call
How is this billed and when does the work start?

Each tier is a flat fee, billed at engagement. After the strategy call we send the engagement letter, you pay upfront, and the work begins. Delivery timing is set on the strategy call based on scope, urgency, and current capacity. Rush handling is available; we will tell you the cost before adding it.

Book a free discovery call

Shipping AI without a policy? Let's lock yours down.

Book a Strategy Call