Navigating the Australian Voluntary AI Safety Standard: A Practical Approach

· 3 min read
Ben Johns
Founder of complyleft
VAISS Gap Assessment

🚀 Navigating the Australian Voluntary AI Safety Standard: A Practical Approach

As artificial intelligence becomes increasingly embedded in Australian business operations, understanding how to implement safe and responsible AI practices is no longer optional—it's essential. The recent release of the Australian Voluntary AI Safety Standard marks a significant step toward establishing consistent practices for organisations adopting AI.

Introducing our free Gap Assessment template

Explore our Australian Voluntary AI Safety Standard Gap Assessment Template →

ISO/IEC 42001 Is Here: Are You Ready for AI Governance? What Are Your Gaps?

· 3 min read
Ben Johns
Founder of complyleft
42001 Gap Assessment

🚀 ISO/IEC 42001 is Here—Are You Ready for AI Governance?

Artificial Intelligence is transforming how organizations operate—but with innovation comes responsibility.

In response to growing global concerns around bias, transparency, ethics, and security in AI, ISO/IEC 42001:2023 has arrived as the first international standard for AI Management Systems (AIMS).

For many organizations, this is a major milestone—and a necessary framework to ensure trustworthy, safe, and compliant AI systems. But let’s be honest: it’s also complex. Figuring out where to start with ISO 42001 can feel like a daunting task.

That’s why I’ve created a free ISO 42001 Gap Analysis Tool: a practical Google Sheets template that helps teams quickly assess their current level of readiness against the standard’s core requirements.

Google Sheet: Explore the ISO/IEC 42001:2023 AIMS Gap Assessment Tool →

Excel version is coming soon!

Exploring ISO/IEC 42001: An Interactive Guide to AI Management Systems

· 3 min read
Ben Johns
Founder of complyleft
AIMS Control Matrix

Exploring ISO/IEC 42001: An Interactive Guide to AI Management Systems

In today's rapidly evolving AI landscape, organizations face increasing pressure to implement responsible AI governance frameworks. The new ISO/IEC 42001 standard—the first international standard for AI Management Systems (AIMS)—provides a comprehensive structure for organizations to develop, implement, and continuously improve their approach to AI. To help visualize this complex standard, we've created an interactive tree diagram that maps out the entire ISO 42001 framework and its controls.

Explore the ISO 42001 AIMS interactive visual guide →

Introducing Our Free AI Policy Document Generator: Simplify Your AI Governance Journey

· 3 min read
Ben Johns
Founder of complyleft
AI Policy Document Generator

In today's rapidly evolving AI landscape, organizations of all sizes face the challenge of establishing proper governance for their artificial intelligence systems. With regulations like the EU AI Act, US Executive Order 14110, and standards such as ISO/IEC 42001 taking shape, having a comprehensive AI policy is no longer optional—it's essential.

To help organizations get started, we're excited to launch our free AI Policy Document Generator. This tool aims to simplify the complex process of creating an organizational AI policy that aligns with leading standards and regulatory requirements.

Start Creating Your AI Policy Document →

Hacking the Mind of AI: Adversarial Machine Learning and Social Engineering for Large Language Models

· 5 min read
Ben Johns
Founder of complyleft
LLM Trustworthiness

We stand at a fascinating crossroads in technological evolution. Large Language Models (LLMs) have burst onto the scene, transforming everything from how we write code to how we interact with customer service. But with this rapid advancement comes a shadow: new vulnerabilities that could be exploited by those with malicious intent. Welcome to the world of "hacking the mind of AI" – where traditional cybersecurity meets cognitive manipulation.

Defining, Measuring, and Establishing LLM Trustworthiness

· 5 min read
Ben Johns
Founder of complyleft
LLM Trustworthiness

The primary use case for LLMs is generative AI: a user provides an input, a “prompt,” which can be a text string or an image, encoded into a series of tokens. The LLM takes those tokens and predicts the tokens most likely to follow; that prediction, the generated text, becomes the model’s output. What gets generated depends on the data the LLM was pre-trained and fine-tuned on, as well as the data used for reinforcement learning.
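The prediction loop described above can be sketched in a few lines. This is a toy illustration only: the hand-written bigram table and greedy decoding below stand in for a real model's learned weights and sampling strategy, and none of the names come from an actual LLM library.

```python
import math

# Hypothetical "model weights": raw scores for which token follows which.
# A real LLM learns scores like these over a vocabulary of tens of
# thousands of tokens; here they are hand-written for illustration.
BIGRAM_LOGITS = {
    "the": {"cat": 2.0, "dog": 1.5, "<end>": 0.1},
    "cat": {"sat": 2.5, "ran": 1.0, "<end>": 0.2},
    "sat": {"<end>": 3.0},
    "dog": {"ran": 2.0, "<end>": 0.5},
    "ran": {"<end>": 3.0},
}

def softmax(logits):
    """Turn raw scores into a probability distribution over next tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def generate(prompt_token, max_tokens=5):
    """Greedy decoding: repeatedly append the most probable next token."""
    out = [prompt_token]
    for _ in range(max_tokens):
        probs = softmax(BIGRAM_LOGITS[out[-1]])
        next_tok = max(probs, key=probs.get)
        if next_tok == "<end>":
            break
        out.append(next_tok)
    return out

print(generate("the"))  # ['the', 'cat', 'sat']
```

Each step conditions only on context the model has seen so far and picks a likely continuation, which is why the output is a plausible prediction rather than a retrieved fact.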

Question: Do we trust people to provide correct, accurate, and trustworthy information?