
The following is an excerpt from an article published in the enterprise business publication NewTab, featuring insights from our Global Head of Cybersecurity, Aaron Mathews. 

Even though AI is already a core part of business operations, the frameworks to secure it lag dangerously behind. Despite lacking the necessary governance and security infrastructure to manage AI effectively, leaders continue to integrate it into critical workflows. Now, some experts say the gap could introduce significant risks. 

For an insider’s take, we spoke to Aaron Mathews, Global Head of Cybersecurity at digital transformation company Orion Innovation. A cybersecurity executive with over 20 years of experience building enterprise security programs, Mathews has spent his entire career navigating complex environments. From leading global cyber audit teams at Scotiabank and co-founding the NFT marketplace Token Bazaar to managing security for essential government infrastructure like Canada’s largest airport (GTAA) and Ontario’s largest power producer, Mathews has seen firsthand what it takes to get security right.


From his perspective, the AI gap stems from a fundamental misunderstanding about its role in the enterprise. Technical controls alone are not enough, he says. Instead, AI security relies equally on a formal governance model established from day one. 

  • First things first: A risk assessment is a formal process that forces the business to manage AI with proper rigor, Mathews explains. “From a governance standpoint, there is a clear first step: conduct an AI risk assessment before any program is deployed. A step like this is not optional. The process is what forces the organization to establish the right governance mental model from day one, before AI becomes deeply embedded in the infrastructure.” 
  • Money talks: For an assessment to be meaningful, however, its findings must be translated into the language of the boardroom, Mathews continues. “The language of AI security is often too technical to resonate in the boardroom, a huge barrier to getting buy-in. We need to stop discussing abstract threats and start framing risk in terms of concrete business impacts. When you can explain that a vulnerability could lead to significant financial loss or major regulatory fines, that is when executives will start to listen and assign the resources needed.” 

Read the full article at thenewtab.com.  

Learn more about our Cybersecurity offerings.
