The Ethical AI Reckoning: What CEOs & Boards in Aotearoa Must Grapple With Today
How do we, as leaders, confidently harness the power of AI—driving innovation, competitive advantage, and profitability—without inadvertently causing harm or loss, compromising trust, violating ethical or regulatory standards, or perpetuating bias and exclusion?
"Industries exposed to AI saw productivity nearly quadruple since 2022." - PwC Global AI Jobs Barometer (2025)
This is the ethical AI reckoning—and a commercial one.
As AI adoption accelerates, New Zealand’s leaders face a defining choice: whether to lead with foresight and trust, or risk falling behind in a world that’s watching closely. AI is no longer just a tool for innovation—it’s a force with deep implications for fairness, accountability, and social licence.
This isn’t just a technology decision. It’s a leadership and governance decision.
Ethical AI is good for business because trust, performance, and long-term value now depend on it.
Without ethical guardrails, AI systems can—and already do—generate reputational fallout, regulatory breaches, biased outcomes, and public backlash.
“Ethical AI isn’t a cost — it’s a competitive advantage.” - Kerry Topp (2025)
Because when ethics are embedded early, AI becomes more accurate, resilient, and inclusive. It leads to better decisions, stronger risk mitigation, faster adoption, and deeper relationships with customers, regulators, and communities.
This seven-part series shows CEOs, boards, and executive teams how to lead AI ethically—ensuring it is:
Aligned with your strategy
Grounded in clear ethics
Governed responsibly
The series will pose questions that help ensure your AI is governed by applying three essential lenses:
Bias – Are we identifying and reducing AI bias?
Explainability – Can we clearly explain AI decisions?
Privacy & Data – Are we protecting data beyond compliance?
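To make the first lens concrete: one simple, widely used starting point for "identifying AI bias" is to compare outcome rates across groups. The sketch below is purely illustrative—the data, group labels, and threshold are hypothetical, and this is not a pAISley Ethics method—but it shows the kind of measurable question a board can ask its teams to report on.

```python
# Illustrative bias check: compare approval rates across groups and flag
# the gap for human review. All data and names here are made up.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparity(rates):
    """Gap between the highest and lowest group approval rates."""
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions tagged by demographic group.
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(sample)
print(rates)              # {'A': 0.75, 'B': 0.25}
print(disparity(rates))   # 0.5 -> a prompt for human review, not a verdict
```

A number like this is a conversation starter, not a conclusion: a large gap may reflect bias in the data or the model, or legitimate differences—the point is that leadership sees it and asks why.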
Kerry Topp and Matthew Cheetham are the founders of pAISley Ethics—a leadership-focused AI ethics consultancy based in Aotearoa New Zealand.
pAISley Ethics' work is guided by a simple but powerful belief:
Ethical AI isn’t a tech challenge — it’s a leadership responsibility.
Their kaupapa is clear:
To embed ethical governance that protects people, profit, and reputation — by grounding AI in local values, lived experiences, and enduring principles.
Their vision?
To be Good Ancestors in the age of intelligent technology — because in a world shaped by AI, trust will be the most valuable corporate asset of the next decade. Ethical AI governance isn’t about looking good; it’s about leading wisely, and being remembered well.
Collectively, and alongside advisors such as Kim Gordon, Mike Hendry, and Richard McLeod, they bring decades of cross-sector expertise in:
Innovation leadership
Tech and AI governance
Public policy and law
Indigenous data sovereignty
Human-centred design
Systems thinking
With experience across public, private, iwi-led, local and global contexts, they’ve contributed to leadership conversations where strategy, governance, and innovation meet real-world complexity.
They bring a grounded understanding of change, cross-sector systems, and the trust-building needed to lead across diverse communities.
Five Critical Questions For CEOs and Boards To Ponder
In our experience, there are five questions every board and CEO should be asking before, during, and after implementing AI within their business. These are:
What ethical risks already exist in your business offerings, service experiences, and processes?
How might the creation process itself introduce ethical risks?
How might end-users use your product in ethically risky ways?
What ethical impact will your deployed product have—and who will be affected?
How will you address post-deployment ethical issues as your AI evolves?
Why This Isn’t Just a Technical Problem—It’s a Leadership Challenge
“Asking AI engineers to design an ethical AI program is like asking architects to write building safety codes.” - Kerry Topp (2025)
Architects design impressive buildings. But safety codes require expertise in law, ethics, and human risk.
Ethical AI is no different.
It’s not a task for just engineers or data scientists.
It’s not solved with models, frameworks and principles alone.
It’s not about compliance checklists.
It’s about:
Senior leadership ownership
Equity
Risk integration and cultural competency
Governance systems that evolve over time
Technical brilliance builds the system.
Ethical leadership shapes it.
And to lead AI responsibly?
You need more than one lens: Legal. Ethical. Cultural. Technical.
Not in silos. In conversation.
“Ethical AI is not a lab problem. It’s a boardroom priority.” - Kerry Topp (2025)
Why In-House Isn’t Always Enough
Building ethical AI capability in-house is important—but rarely sufficient on its own. While some organisations try to go it alone, here are the risks:
Slow capability growth
No external credibility
Missed regulatory expectations
Opportunity cost from redeployed teams
Limited independence for board or public trust
Even with strong internal talent, ethical AI demands more than technical or governance fluency—it requires cultural, legal, and community insight.
If you don’t have this in-house, you’ll need a trusted partner. And if you do, you will still benefit from working alongside specialists who bring the capacity, independence, and cross-sector expertise needed to strengthen and scale your efforts.
Ethical AI isn’t binary—it’s collaborative.
“Building ethical AI capability in-house is important — but rarely sufficient on its own.” - Kim Gordon (2025)
That’s Why pAISley Ethics Exists
pAISley Ethics helps organisations embed ethical AI by co-designing systems that align with their values, strengthen their governance, and build trust at pace.
With pAISley Ethics, you’ll go beyond policy PDFs. You’ll build a live, credible, cross-disciplinary AI governance capability.
Coming Up in This Series
With the rise of agentic AI—where systems make decisions, take actions, and even build new agents—this isn’t theoretical anymore.
Your AI isn’t just processing data. It’s shaping outcomes. It’s influencing people. It’s acting—sometimes without you.
Which is why your AI governance needs to be just as active.
In this series, we explore seven key questions every CEO, board, and executive should be asking.
What ethical risks already exist in your business today? In this article, we reveal how AI may already be influencing your operations—often invisibly—and outline how to detect and assess risks you didn’t know you had.
Could you be designing risk into the system from day one? Next, we explore how flawed datasets, exclusionary design decisions, and lack of diversity of input can embed bias and risk before your AI even launches.
Are you prepared for how your AI might be misused? We also look at third-party risk, unintended use cases, and the accountability gap—highlighting the importance of planning for impact, not just intent.
What long-term impact will your AI have—and who’s watching? We then focus on AI drift, ongoing bias, and reputational exposure—making the case for continuous ethical oversight, not one-time reviews, annual audits, or health checks.
Do you have a living, breathing AI governance model? Next, we distinguish between static policy documents and real-time governance systems that can evolve with your business and the AI itself.
How should ethics flow through every phase of innovation? In our penultimate article, we break down the full AI lifecycle—from ideation to post-launch—and show how to embed ethics, inclusion, and accountability into every step.
Executive briefing: Do you have a roadmap to ethical AI? And finally, we summarise the key takeaways across the series, offering a clear checklist and maturity roadmap for boards and leaders to act on now.
Final Thought
AI is not neutral. It reflects your leadership.
This is not just about tech. It’s about values, strategy, and legacy.
Getting ahead of AI risks isn’t just a governance safeguard—it’s a strategic business move that protects value, builds trust, and ensures your innovation scales sustainably.
"AI success starts with governance." - Kim Gordon (2025)
Next in the Series:
➡️ Article 2: What Ethical Risks Already Exist in Your Business Today? Next, we’ll explore how AI may already be operating inside your organisation—often invisibly—and what hidden risks you might already be exposed to. Because before you can lead ethical AI, you need to understand what’s already live, and already at stake.
Next Actions:
♻️ Repost if you believe AI is a leadership test and that AI ethics starts at the top.
👥 Tag a CEO or board chair who needs to see this article.
📩 DM us for:
An Executive Briefing Whitepaper
A 1:1 session with our ethics advisors
Let’s make Aotearoa New Zealand a global model for inclusive, intelligent and ethical AI.
About pAISley Ethics
pAISley Ethics helps organisations build governance in support of strategic, ethical AI.
We offer:
AI Ethics Health Checks
AI Governance Framework Design
Board Briefings & Risk Scenarios for the development and implementation of AI
Independent AI Ethical Risk Reviews
Executive Training in AI Ethics
Together, let’s work towards a just, inclusive, and accountable future.
Hōake tātou | Let's go!