
A BEGINNER’S GUIDE TO AGENTIC AI
Sam de Silva is an independent AI Strategy Advisor and Consultant who helps organisations use AI responsibly and build AI literacy across their teams.
Right now, every company seems to be asking the same question: what comes after Generative AI?
Over the past couple of years, GenAI has changed how we write, analyse data, and even code. But a new wave is emerging: AI Agents, also known as Agentic AI, which are set to change how work itself gets done.
For people learning tech skills, this opens up a big opportunity. AI Agents sit right at the crossroads of coding, data, and business. Thanks to the rise of low-code and no-code tools, even non-coders can now experiment with them.
For organisations, they offer a way to create workplaces that are more connected, automated, and intelligent.
So, let’s break it down.
What is an AI Agent?
The easiest way to picture an AI Agent is as a digital teammate that can plan and act on its own.
Instead of just responding to a question like a chatbot, an AI Agent can understand a goal, figure out the steps needed to reach it, and carry out those steps without you having to do the heavy lifting.
It doesn’t just tell you what to do. It can actually do it.
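To make that concrete, here is a toy sketch in Python of the "understand a goal, plan the steps, carry them out" loop. It is purely illustrative: the plan is hard-coded and the tools just print messages, whereas a real agent would use a large language model to plan and call real systems to act.

```python
# A toy "plan and act" loop. Everything here is a placeholder: in a real
# agent, a large language model would write the plan, and the tools would
# call real systems (email, databases, CRMs) instead of printing.

def plan(goal: str) -> list[str]:
    # Hypothetical planner: a real agent would generate these steps itself.
    return ["look up this week's sales data", "summarise it", "email the summary"]

TOOLS = {
    "look up this week's sales data": lambda: "raw sales figures",
    "summarise it": lambda: "a short summary",
    "email the summary": lambda: "email sent",
}

def run_agent(goal: str) -> None:
    print(f"Goal: {goal}")
    for step in plan(goal):
        result = TOOLS[step]()  # the agent acts on each step, not just advises
        print(f"  {step} -> {result}")

run_agent("Send me a weekly sales summary")
```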
How are businesses using AI Agents?
Across industries, companies are starting to move beyond simple GenAI tools and experiment with AI Agents.
A straightforward example is a customer service agent that can interpret a natural-language request via a chatbot, such as a customer asking to change their address. The agent can then update the address in a CRM tool, send a confirmation email, offer the customer a discount on their next home delivery purchase, and apply that discount to the customer's account.
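As a hedged sketch of what that might look like under the hood, the code below strings those steps together. The function names (update_crm_address, send_confirmation_email, apply_discount) are hypothetical stand-ins for real CRM, email, and billing integrations, not any particular product's API.

```python
# Illustrative only: each function is a stand-in for a real integration
# (CRM, email, billing) that the agent would call as a tool.

def update_crm_address(customer_id: str, new_address: str) -> None:
    print(f"CRM: {customer_id} address changed to {new_address}")

def send_confirmation_email(customer_id: str) -> None:
    print(f"Email: confirmation sent to {customer_id}")

def apply_discount(customer_id: str, percent: int) -> None:
    print(f"Billing: {percent}% discount applied to {customer_id}'s account")

def handle_address_change(customer_id: str, new_address: str) -> None:
    """The end-to-end task the agent completes without human hand-offs."""
    update_crm_address(customer_id, new_address)
    send_confirmation_email(customer_id)
    apply_discount(customer_id, percent=10)

handle_address_change("CUST-0042", "1 New Street, London")
```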
In more regulated sectors such as finance, things get trickier. An AI Agent analysing trade data and correcting errors must meet much higher standards of accuracy and compliance. The more autonomy an agent has, the more important it becomes to think about Responsible AI practices.
In these cases, organisations will need to ensure adequate human oversight, build traceability and produce documentation from day one to show regulators how compliance is being maintained.
Over time, we will start to see agents that can manage other agents, which makes governance and transparency even more critical. Businesses will need to implement clear frameworks to manage autonomous agents, with clear accountability for their oversight.
What do you need to learn to create an AI Agent?
As mentioned earlier, AI Agents sit at the crossroads of coding, data, and business, so a grounding in all three will serve you well. Thanks to low-code and no-code tools, you do not need to be a professional developer to start experimenting.
Just as important is understanding how agents behave in the real world: the ethics, governance, and oversight questions covered in the rest of this piece.
What are the risks of AI Agents taking over human tasks?
The biggest risk is not that AI Agents take over, but that they get things wrong in ways that matter.
If an agent makes a mistake that leads to a compliance issue or a costly error, the impact can be serious.
That’s why it’s critical to run a full risk assessment before deploying any agent, ideally overseen by a governance or ethics committee.
When you decide a task is suitable for automation, make sure there is a clear level of human oversight built in. Humans and agents should work together, not replace one another.
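One practical way to build that oversight in is an approval gate: the agent proposes an action, but a person confirms it before anything consequential happens. The sketch below is illustrative, using a console prompt to stand in for a proper review workflow.

```python
# Illustrative human-in-the-loop gate: the agent proposes, a person approves.

def propose_action() -> dict:
    # In practice the agent would generate this proposal from its plan.
    return {"action": "issue refund", "customer": "CUST-0042", "amount_gbp": 250}

def execute(action: dict) -> None:
    print(f"Executed: {action}")

proposal = propose_action()
print(f"Agent proposes: {proposal}")
if input("Approve this action? (y/n) ").strip().lower() == "y":
    execute(proposal)
else:
    print("Rejected: escalating to a human colleague instead")
```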
What advice would you give to someone entering tech and learning about AI Agents?
If you are just starting out, my biggest piece of advice is to look beyond the technology itself.
Learn about the ethics, governance and implications of agents and autonomous systems in real-world environments, their impact on society, and how humans and agents interact.
Don't limit your AI Agent work to efficiency or revenue growth alone; consider the ethical implications and whether the agent is being used responsibly.
The best AI professionals will be those who can balance technical skills with real-world understanding, good judgment, and a sense of responsibility.
What advice would you give to businesses deploying AI Agents?
From working with both large, highly regulated companies and small, fast-moving ones, I have learned that an AI strategy must align with your key strategic business goals, organisational values, risk appetite, size, and compliance requirements.
Start with a few simple use cases, decide whether you will build or buy your AI solutions, and vet any third-party vendors carefully.
Implement the use cases responsibly and ensure all key functions in the organisation are involved – think about resilience, cyber security, and data privacy, as they are all critical for maintaining trust. Build your governance framework – decide who determines when to use an agent and how its performance is monitored.
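Monitoring can start very simply, for example by logging every action an agent takes so its behaviour can be reviewed later. The sketch below is illustrative; a real deployment would write to durable, access-controlled storage rather than a local file.

```python
# Illustrative audit trail: append one JSON record per agent action so that
# people can review what the agent did and when.

import json
from datetime import datetime, timezone

def log_agent_action(agent_id: str, action: str, outcome: str,
                     path: str = "agent_audit_log.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "outcome": outcome,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_agent_action("address-change-agent", "update_crm_address", "success")
```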
And do not forget to support your employees. Training is essential, not just for tech teams but for everyone who will interact with AI systems. When employees understand how the technology works, they are far more confident using it. Encourage everyone to learn about AI – that could be reading an internal policy or going on a short course – Code First Girls offer AI courses!
Finally, build an AI-positive culture. The pace of change has been incredibly fast, and for many employees, AI is still unfamiliar. Communicate the process, the struggles, and the successes – take the organisation on an AI journey, evolve the culture, and make it a positive experience.
When teams feel confident and supported, AI stops being something to fear and becomes something to innovate with.
Final Thoughts
AI Agents mark an exciting shift in how we work.
In my view, agentic AI offers real hope to inspire and support all parts of our society, helping people to do things more easily and quickly. The ability for non-technical people to interact directly with AI through natural language will democratise AI in the same way the Internet brought search to everyone. How AI deployment aligns with society as a whole is an urgent question, as is addressing the ever-widening skills gap. I recently founded Strand Logic, a company focused on supporting organisations with Responsible AI adoption and democratising AI using low-code/no-code solutions.
Whether you are learning about AI for the first time or building an enterprise strategy, my advice is the same: start simple, stay responsible, and always build with purpose.