Amazon, the e-commerce behemoth, has admitted that extensive use of AI coding tools is causing significant outages across its core retail and AWS businesses.

The Story
Amazon has linked these incidents to "Gen-AI assisted changes" and a lack of established best practices, and now requires senior engineers to sign off on AI-generated code before it ships.
Why It Matters
For your 10-50 person law firm, accountancy, or consultancy, this isn't just a "big tech" problem. If a company with Amazon's engineering resources can be caught out, so can you: pushing AI adoption without clear governance, human oversight, and established best practices—even for seemingly minor tasks—can lead to costly errors and reputational damage. Speed without control is a dangerous gamble, whatever your company's size.
What To Do About It
My advice is simple: before you even think about scaling AI tools, focus on robust AI governance. Implement a "human-in-the-loop" policy for all AI-generated output, especially for client-facing work and critical internal processes. I recommend starting with a pilot project in a low-risk area, defining clear review protocols, and training your team on *when* to trust AI, not just *how* to use it.
Related Signals

European Parliament voted 569-45 to postpone high-risk AI Act obligations to December 2027 and to ban AI nudifier apps.
10 Apr 2026
A Vision Compliance report published April 1, 2026 found 78% of European enterprises have taken no meaningful steps toward EU AI Act compliance, with 83% lacking any formal inventory of AI systems currently in use.
4 Apr 2026
Only 8 of 27 EU member states are on track for the August 2 AI Act deadline, when full enforcement of high-risk AI rules and mandatory transparency obligations begins across the EU.
27 Mar 2026