Webinar Recap: Building Cyber Resilience in the Age of AI

By Connor Duthie

Artificial Intelligence is reshaping every aspect of cybersecurity, from how attackers craft their tactics to how defenders anticipate and respond.

That was the focus of our latest webinar, Building Cyber Resilience in the Age of AI, hosted by TechForce Cyber’s Founder & CEO Jai Aenugu alongside Sam Peters, Chief Product Officer at ISMS.online.

Together, they explored how AI is driving both innovation and risk, how organisations can align with ISO 42001 and the EU AI Act, and what practical steps businesses can take now to strengthen resilience in an increasingly AI-powered world.


AI: The Double-Edged Sword

Jai opened the session with a story that perfectly captured the threat evolution we’re witnessing.

A UK-based engineering firm recently lost over $25 million after a finance employee, based in their Hong Kong office, authorised a transaction during what appeared to be a legitimate video call. The catch? The entire call was a deepfake: AI-generated versions of their CFO and colleagues, all of whom appeared to approve the transfer.

Jai highlighted how quickly attacker capabilities have accelerated with AI. He explained that tasks which once took weeks now take minutes, noting: “In the olden days, I would have to download multiple pieces of software and work through a number of days and weeks to have a decent enough clone of myself.” He continued: “But with AI, what I figured is I could clone myself with less than 30 minutes.” He added that AI has removed many traditional warning signs: “With AI, these spelling mistakes and grammar mistakes, they don’t exist.”

He warned that reactive response is no longer enough. “The reactive response is out of the door. If you’re still a company that is reacting, waiting for an incident to happen and then taking measures, then God will help you.” He continued: “It doesn’t save anybody these days. Proactive is not enough either. We used to say be proactive in your cybersecurity, but now it’s not enough, now it’s all pre-emptive.”

Jai urged organisations to:

  • Continuously assess where AI-generated threats could exploit vulnerabilities.
  • Strengthen incident detection and response time, and shift toward a fully pre-emptive approach.
  • Extend security awareness beyond email, into video, voice, and deepfake recognition.
  • Audit what data about the business is already public and could be used maliciously.


“AI needs information of you to use it against you,” he added. “Whether it is your users’ data, credentials, or any other information about your business, what’s out there already that can be used against you?”


Governance Meets Regulation

Picking up from the risk landscape, Sam Peters shifted focus to AI governance and compliance, a challenge now at the top of the agenda for many UK businesses.

According to ISMS.online’s State of Information Security Report 2025, 37% of organisations identified shadow AI, where employees use AI tools without oversight or approval, as their top emerging concern.

As Sam explained, the EU AI Act introduces a new level of accountability and transparency in how organisations develop, deploy and monitor AI systems. He noted that “much like we saw with privacy and GDPR, we're now seeing legislation being introduced to govern how organisations are using and managing AI as they work.”

He outlined the Act’s four levels of AI risk:

  • Unacceptable risk – banned uses such as manipulation or social scoring.
  • High risk – areas like recruitment, finance, healthcare, or critical infrastructure.
  • Limited risk – use cases including chatbots, AI-generated content, or deepfakes (which legally must be declared).
  • Minimal risk – tools with little to no potential for harm.


Sam emphasised that any organisation operating in or impacting the EU market falls under its scope, not just large tech companies.

“It applies to anyone who develops, deploys or uses in a professional capacity AI systems within the EU or whose systems impact EU citizens,” he noted. “So even if you’re based in the UK, if the product or service that you’re offering touches the EU market, then the Act is relevant to you.”


ISO 42001: A Framework for Responsible AI

Sam also introduced ISO 42001, the first international standard for responsible AI management.

Much like ISO 27001 for information security, ISO 42001 provides a framework for:

  • Assessing AI-related risks
  • Defining clear roles, responsibilities, accountability and oversight within your organisation
  • Managing data quality and bias
  • Ensuring transparency and continuous improvement


Adopting the standard, he explained, is about building trust and confidence, not just with regulators, customers and investors, but also within your own organisation.

Sam also highlighted that AI governance has implications beyond regulation.
As he explained: “If any of you are thinking about trying to get funding or looking to sell a business in the coming years, an AI strategy will likely be a key part of what your investors are looking for.”


The Resilience Loop

Both speakers underscored the importance of integrating AI governance with existing frameworks such as ISO 27001 (Information Security) and ISO 27701 (Privacy).
By aligning these standards, organisations can streamline audits, reduce costs, and create a unified view of risk, covering cybersecurity, data privacy, and AI governance together.

Sam emphasised that AI governance isn’t just a compliance exercise; it’s a practical way for organisations to build real-world resilience as technology and regulation continue to evolve. As he put it: “Your team can spend less time preparing for audits and more time on real-world resilience.”


Key Takeaways

  • AI is already part of your organisation, whether you know it or not.
  • Reactive cybersecurity is obsolete; pre-emptive resilience is essential.
  • Shadow AI poses serious risk without clear governance.
  • The EU AI Act introduces accountability, transparency, and potential fines of up to €15 million.
  • ISO 42001 offers a practical framework to govern AI use responsibly.
  • Integrating cybersecurity, privacy, and AI governance builds long-term trust and efficiency.


Final Thoughts

“Your company is already using it in some form or shape, your staff are already using it,” Jai concluded. “They’re probably putting every single email you’re receiving into AI before you even get it. So let’s accept the truth, and let’s look into the controls so we can protect the data.”

For a full look at the discussion, you can watch the on-demand recording of Building Cyber Resilience in the Age of AI here:

Watch the webinar
