1 May 2026
Agentic AI is coming to your agency. Here's what the ACSC report actually means for you.
Five international cyber security agencies just released guidance on agentic AI, and it has very specific implications for public sector leaders who are being asked to approve AI deployments right now.

Last week, the Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC), alongside the US NSA, CISA, and equivalent agencies from the UK, Canada, and New Zealand, released joint guidance on what they're calling "careful adoption of agentic AI services."
This report isn't about hackers exploiting chatbots. It's about a category of AI technology that is already being sold to your organisation, or will be shortly, and that carries a genuinely different risk profile from anything you've procured before.
First: what is agentic AI, really?
You've probably used, or approved the use of, generative AI tools that answer questions, draft documents or summarise information. That's one thing. Agentic AI is different in a critical way: it acts.
An agentic AI system doesn't just respond to the prompts you type into Copilot or Claude. It connects to your systems, takes actions on your behalf and does so continuously, without a human reviewing each and every step. That is why it's so valuable when it's set up right.
A practical example. At Sumday, we use software called Linear to manage the features being built and bugs being fixed across our products. When someone posts in Slack (your Teams) that a report isn't working, for example, we don't wait for a human to get to it anymore. An AI agent investigates the code to understand why, replies with a diagnosis and, depending on how it's been configured, goes and fixes it. A human then reviews the proposed fix and either approves it or sends the agent back with corrections. We control how far it goes, and in what circumstances.
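To make that concrete, here's a minimal sketch of the review gate in that workflow. The names and record shapes are hypothetical, not Linear's or Slack's actual APIs; the point is that the agent's proposal and the human's decision are both recorded, side by side.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the review gate described above: the agent may
# propose a fix, but nothing ships until a named human approves it.

@dataclass
class ProposedFix:
    issue_id: str
    diagnosis: str
    patch: str
    proposed_by: str = "triage-agent"
    proposed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def review(fix: ProposedFix, reviewer: str, approved: bool, notes: str = "") -> dict:
    """Record the human decision alongside the agent's action,
    so the audit trail always shows who decided what."""
    return {
        "issue": fix.issue_id,
        "agent_action": "proposed_fix",
        "diagnosis": fix.diagnosis,
        "human": reviewer,
        "decision": "approved" if approved else "sent_back",
        "notes": notes,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
```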

Last week, one of our engineers' agents went a little off piste. Even then, there was never any confusion about who took what action, what the agent did and what the human decided. That's what well-implemented agentic AI looks like, and well-built agent tooling generally works the same way. The audit trail is there. The human decisions are visible. The agent operates within boundaries that a person set, and a person can always override it and see what it has done.
So yes, it is a new risk and a new way of working, but you shouldn't adopt tools that don't give you that visibility.
It's very important that those agents stay on a tight leash, so to speak, as one of our engineers put it.
The risks the report identifies
The ACSC report identifies five categories of risk. The sections below translate them into what each one means for you in practice.
What you practically need to know
On access: only give agents what they need
The fix for privilege risk is simple. Only grant agents access to what they actually need for the specific task, and review that access regularly. A procurement agent doesn't need access to HR records. A reporting agent doesn't need write permissions. Scope it tightly from the start and expand only when there's a clear reason to.
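As a sketch, least privilege for agents can be as simple as a deny-by-default allowlist. The agent names and resources below are hypothetical; what matters is that every permission is an explicit grant you can list, review and revoke.

```python
# Deny-by-default scoping: anything not explicitly granted is refused.
# Agent and resource names are illustrative only.

AGENT_SCOPES = {
    "procurement-agent": {"suppliers": "read", "purchase_orders": "read"},
    "reporting-agent": {"finance_reports": "read"},  # no write access anywhere
}

def is_allowed(agent: str, resource: str, action: str) -> bool:
    """Deny anything that isn't an explicit grant; write implies read."""
    grant = AGENT_SCOPES.get(agent, {}).get(resource)
    return grant == action or (grant == "write" and action == "read")
```

Because the whole policy is one reviewable structure, the regular access review is just reading it line by line and deleting anything the agent no longer needs.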
On accountability: people are always responsible
There is a version of this conversation that ties organisations up for months in abstract questions about AI and accountability. Don't let that happen.
Agents act on behalf of people, within parameters people set, toward outcomes people are responsible for.
If your organisation has an agent approving spend under $8,000 that meets certain criteria and something goes wrong, you're not having a performance review with the agent. But the manager who would otherwise have been responsible for that decision still is. You review why the agent was configured to make that call, update its guidance so it doesn't happen again, and the responsible officer wears the outcome.
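A minimal sketch of that guardrail, assuming a single named officer and a hypothetical criteria check: the agent decides only inside the parameters, everything else escalates, and the accountable officer is stamped on every outcome.

```python
# Hypothetical spend guardrail for the scenario above. The threshold and
# officer are examples; the pattern is that accountability never shifts
# to the agent, whichever way the decision goes.

SPEND_LIMIT = 8_000
RESPONSIBLE_OFFICER = "finance.manager@example.gov.au"  # assumed single owner

def decide_spend(amount: float, meets_criteria: bool) -> dict:
    if amount < SPEND_LIMIT and meets_criteria:
        decision = "approved_by_agent"
    else:
        decision = "escalated_to_human"
    return {
        "amount": amount,
        "decision": decision,
        "accountable_officer": RESPONSIBLE_OFFICER,  # always a named person
    }
```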
Think of it like onboarding a new employee. You wouldn't hand a graduate a delegation of authority and walk away. You'd make sure they understood the rules, check their work early, and build trust incrementally before expanding their scope. Agents are no different. The capability to review, question, and update an agent's guidance isn't an IT function. It most likely belongs with the manager whose name is on the outcomes.
You can use AI to document who owns each agent, what it's authorised to do and when a human must be in the loop. That doesn't need to be elaborate, and no, it shouldn't be another Excel-based DOA that never gets updated.
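Here's one hypothetical shape such a register could take. The field names are illustrative, not a standard; the test is whether anyone can read a record and know who owns the agent and where its authority stops.

```python
from dataclasses import dataclass

# A sketch of the register described above: one plain record per agent,
# stating its owner, what it may do, and when a human must be in the loop.

@dataclass
class AgentRecord:
    name: str
    owner: str                   # the manager whose name is on the outcomes
    authorised_actions: list[str]
    human_in_loop_when: str      # plain-language trigger for mandatory review
    last_reviewed: str

REGISTER = [
    AgentRecord(
        name="spend-approval-agent",
        owner="finance.manager@example.gov.au",
        authorised_actions=["approve spend under $8,000 that meets criteria"],
        human_in_loop_when="amount at or above $8,000, or criteria not met",
        last_reviewed="2026-04-30",
    ),
]
```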
On behaviour: start narrow and expand deliberately
A good first deployment is an agent that monitors something and flags it for a human, without taking any action itself. From there, you expand in stages.
First the agent flags. Then it drafts a recommended response. Then it sends that response after a human approves it. Then, once you've built genuine confidence in how it behaves, it acts within defined parameters on its own. Each stage is a documented decision about what the agent is authorised to do, who is responsible, and how you'll know if something goes wrong. That process is what makes expansion safe and keeps behaviour risks manageable as you mature.
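One way to make those stages explicit is to encode them as named autonomy levels, where every promotion is a recorded decision. This is a sketch of the pattern, not anything prescribed by the ACSC guidance.

```python
from enum import IntEnum

# The level names mirror the stages described above.

class AutonomyLevel(IntEnum):
    FLAG_ONLY = 1          # agent monitors and flags; takes no action
    DRAFT = 2              # agent drafts a recommended response
    ACT_WITH_APPROVAL = 3  # agent acts only after a human approves
    ACT_IN_BOUNDS = 4      # agent acts alone within defined parameters

def promote(current: AutonomyLevel, approved_by: str, rationale: str) -> dict:
    """Each promotion is a documented decision, one level at a time."""
    new_level = AutonomyLevel(min(current + 1, AutonomyLevel.ACT_IN_BOUNDS))
    return {
        "from": current.name,
        "to": new_level.name,
        "approved_by": approved_by,
        "rationale": rationale,
    }
```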
On buying: ask vendors the right question
Ask every vendor the same thing: can you show me exactly what the agent did, why, and what a human approved? Good vendors will walk you through the audit trail without hesitation. If the answer is vague, or the system gives you no visibility into what the agent actually did, walk away. A black box is not a risk you can manage, and it's not a vendor you want to be standing next to when something goes wrong.
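As an illustration of the visibility to demand, here's a hypothetical log shape that can answer that question for any single action. Real products will structure this differently; what matters is that the answer is reconstructable at all.

```python
# Illustrative audit log: for any action, you can reconstruct what the
# agent did, why, and which human approved it.

AUDIT_LOG = [
    {
        "agent": "reporting-agent",
        "action": "sent_monthly_report",
        "reason": "scheduled task within authorised scope",
        "approved_by": "ops.lead@example.gov.au",
        "timestamp": "2026-04-28T09:14:00+10:00",
    },
]

def explain(action_index: int) -> str:
    """The vendor question, answered from the log."""
    e = AUDIT_LOG[action_index]
    return (f"{e['agent']} did '{e['action']}' because {e['reason']}; "
            f"approved by {e['approved_by']} at {e['timestamp']}.")
```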
On your governance frameworks: update them now, not after
The report notes that "governance mechanisms designed for human actors do not always translate effectively to autonomous AI agents."
Your delegations of authority, your approval workflows and your sign-off processes were written assuming a human was making every call. Some of that language needs updating to account for agents acting within delegated parameters on behalf of human officers.
This isn't a reason to pause AI adoption. It's a reason to modernise your frameworks alongside the technology.
Where Sumday comes in
Sumday works with organisations on exactly this: helping you scope what AI and agents should and shouldn't do in your specific context, updating governance frameworks so accountability is clear before deployment, and building implementation approaches that expand carefully and keep your leaders genuinely in control.
The ACSC report asks the right questions. The organisations that adopt agentic AI well are the ones that answer them early, document them clearly and grow from there.