The Ethical AI Reckoning: What Ethical Risks Already Exist in Your Business Today?

How do we as leaders confidently harness the power of AI—driving innovation, competitive advantage, and profitability—without inadvertently causing harm, loss, compromising trust, violating ethical or regulatory standards, or perpetuating bias and exclusion?

Ethical risk in AI isn’t waiting for your next AI initiative.

It’s already embedded—in the AI platforms, processes, and tools your organisation uses every day. Think hiring, screening, advertising, analysing, and automating.

From hiring systems to automated decision-making, AI may already be shaping outcomes, and dropping hidden depth charges, inside your organisation—with little oversight and, for now, invisible impact.

In Article 1 of our seven-article series, we made the case that AI is no longer just a technology issue—it’s a leadership one. We introduced five critical questions every CEO and board must ask.

Now in Article 2, we go deeper into the first of those five questions: What ethical risks already exist in your business today? Because you can’t govern what you haven’t first uncovered.

AI Is Already Inside Your Business—Are You in Control?

Many leaders are unknowingly exposed. As AI tools quietly integrate into hiring, advertising, customer service, and more, few have paused to ask: Are we using AI ethically and responsibly?

"Companies most able to use AI are seeing 3× higher revenue growth per employee." - PwC Global AI Jobs Barometer (2025)

In the rush to adopt and chase benefits like those above, many organisations lack clear guardrails. AI may already be making decisions that impact people and outcomes—without your oversight or awareness.

Examples of ungoverned exposure include:

  • Automated hiring filters excluding qualified candidates

  • Chatbots delivering biased or inappropriate responses

  • AI-generated ads targeting vulnerable groups

Do you have the right guardrails in place?

These aren't hypothetical risks. They're live examples we've seen in the field. Now is the time to assess your exposure—and install the governance and guardrails you need.

“Getting ahead of AI risks isn’t just a governance safeguard—it’s a strategic business move that protects value, builds trust, and ensures your innovation scales sustainably.” - Kerry Topp (2025)

The AI Guardrail Gap: Moving Fast Without Rules

AI is moving faster than governance — and that’s a real risk. Many systems are being developed without asking what could go wrong.

Regulation will always lag behind innovation. In Aotearoa, where rules are still evolving and likely to stay light-touch, the responsibility falls to organisations and their leaders.

We can’t assume vendor platforms are safe just because they’re familiar — or wait for regulation to catch up. Ethical guardrails need to be set early, not retrofitted after the fact.

Because when AI goes wrong, the consequences — legal, reputational, and human — can be severe.

"Amidst the dynamic integration of Artificial Intelligence (AI) across diverse sectors, instances of AI initiatives veering off course serve as poignant reminders (and cautionary tales) of the practical perils of AI development and deployment." - Harvard Centre for Ethics (2023)

Now is the time to ask: Were the right checks in place when we designed or adopted this AI?

If not, it’s time to review. Key areas to assess include:

  • Bias and fairness in decision-making

  • Transparency and explainability

  • Data sourcing and consent

  • Accountability for outcomes

Getting ahead of this isn’t just good governance—it’s good business.


“Trust will be the defining asset of the next decade, and governance will decide who earns it.” - Kerry Topp (2025)


❗The Oversight Gap #1: Good Engineering ≠ Good Ethics

AI is still too often seen as a “tech problem.” Responsibility is delegated to engineering teams who, while highly capable at building and integrating AI into applications, are not trained or tasked to assess ethical, social, or cultural impacts.


❗The Oversight Gap #2: Buying AI ≠ Outsourcing Responsibility

Most organisations are rightly prioritising off-the-shelf AI solutions over building their own—but that creates a critical blind spot. Under a subscription model, the AI is owned and controlled by the vendor, limiting your transparency, governance, and ability to influence how the system behaves. Ethical risks—from bias to a lack of explainability—are still your reputational and legal risks, even if the model isn’t yours.

Yet many leaders assume that because they didn’t build the AI, they don’t need to govern it.

In reality, boards, CEOs, and executives must demand clear assurances from vendors: that ethical design principles are embedded, that risk and bias reviews are ongoing, and that independent oversight is in place. Continued transparency and oversight—either by your own people or by an independent, trusted third party—is also critical.

Only by building ethical AI requirements into your procurement and supplier relationship management processes will you be positioned to mitigate and manage this risk effectively. Without this, your organisation may be operationalising risks you can’t see and can’t fix.

“You may not own the AI—but you will own the consequences.” – Kerry Topp (2025)

The Eight Critical Risk Areas Where Harm Can Occur

Here are eight critical risk areas where AI can cause harm if left ungoverned. Every CEO and board should be across them, and leadership, not just tech teams, must address them now.


1. Bias and Discrimination: AI learns from historical data. If that data reflects bias, so will the system.

Risk: Recruitment algorithms prioritising male over female candidates; credit scoring systems disadvantaging certain ethnic groups.

Impact: Reinforces inequality, exposes you to legal action, limits your access to talent, and damages public trust.

Additional Comment: Under Agentic AI—where systems can set goals, make decisions, take actions, and even create sub-agents without direct human prompting—bias doesn’t just reflect the past. It can evolve, amplify, and entrench itself through autonomous decision-making and real-time feedback loops, making ethical oversight not just important, but mission-critical.

Our prompt: Where are AI systems making or influencing people-related decisions—and how are we ensuring they don’t evolve biased behaviours as they adapt and act over time?
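For leaders who want a concrete sense of what “checking for bias” can look like in practice, here is a minimal, purely illustrative sketch in Python. It applies the “four-fifths rule”, a common first-pass screen that compares selection rates across groups; the group names and outcomes below are hypothetical, and a real audit would go much further.

```python
# Minimal illustrative sketch (not a production audit): screening hiring
# outcomes with the "four-fifths rule" as a first-pass fairness check.
# All group names and outcomes here are hypothetical.

from collections import Counter

# Hypothetical hiring outcomes: (group, was_shortlisted)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
selected = Counter(group for group, shortlisted in outcomes if shortlisted)

# Selection rate per group, compared against the highest-rate group
rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule threshold
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

The point is not the code itself. It is that checks like this are simple enough that leaders can, and should, ask for them—whether the system was built in-house or bought from a vendor.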

2. Lack of Explainability: Many AI systems—especially deep learning models—operate as “black boxes.”

Risk: Customers or employees affected by decisions (e.g., loan denials, pricing) with no ability to understand why.

Impact: Regulatory breaches (e.g., EU’s GDPR and AI Act), customer dissatisfaction, declining profitability, and erosion of trust.

Our prompt: Can we explain our AI-driven decisions to a regulator, a journalist—or a concerned customer?

3. Privacy Violations: AI depends on large-scale data—but often uses it in ways people didn’t agree to.

Risk: Behavioural tracking, emotion analysis, profiling—all without transparent consent.

Impact: Reputational damage, regulatory fines, erosion of market share, and loss of customer confidence.

Our prompt: Have we reviewed how all our AI systems collect and use personal data—and where consent may be unclear?

4. Skill Displacement and Dehumanisation: AI changes the nature of work—and often does so without a people or skills strategy.

Risk: Automating roles (e.g., customer service) and decision-making (e.g., claims approval) with no skills transition plan or human backup.

Impact: Workforce demoralisation, declining productivity, increased staff turnover, and brand risk.

Our prompt: As we deploy AI, are we actively managing the people and skills impact?

5. Manipulation and Exploitation: When AI optimises purely for engagement or revenue, it can exploit human psychology.

Risk: AI-powered marketing targeting vulnerable customers, maximising screen time, or promoting harmful content.

Impact: Ethical overreach, consumer backlash, and long-term brand damage.

Our prompt: Are we balancing commercial goals with our ethical responsibility to customers and communities?

6. Accountability Gaps: When something goes wrong, who owns the outcome? Often, no one.

Risk: An AI system misfires: the product lead blames the engineer, the engineer blames the data scientist, the data scientist blames the vendor—no clear line of responsibility.

Impact: Regulatory investigation, legal ambiguity, reputational harm.

Our prompt: Do we have clear lines of accountability for every AI system in operation? If not—why not?

7. Unintended Consequences: AI systems optimise what they’re told—not what you value.

Risk: An algorithm designed to increase revenue ends up encouraging misleading upselling or customer churn traps.

Impact: Erosion of ethics from within, and behaviour that undermines purpose, culture, or trust.

Our prompt: Are we reviewing how AI outcomes align with our stated values—not just KPIs?

8. Informed Consent and Awareness: Users often don’t know they’re interacting with AI—or being profiled by it.

Risk: Voice assistants capturing emotion; AI-generated emails or responses passed off as human.

Impact: Erosion of user trust, compliance issues, and customer disengagement.

Our prompt: Would our customers or employees be surprised to learn how our AI systems work? If yes—we have a gap.

As CEO, Ask Yourself (and Your Executive Leadership Team)...

For many organisations, the biggest risks aren’t in some future AI deployment—they’re in the tools already running today. The key question to ask is: Is our current use of AI designed and built with ethical guardrails?

To identify potential problems, here are the questions every CEO and Executive Leadership Team should be discussing:

  1. Where does AI already operate in our business?

  2. Do we know whether the data behind the AI we use has been, and continues to be, checked for bias?

  3. Can we explain how its decisions are made?

  4. Who might be harmed by the use of our AI—and would we even know?

  5. What oversight do we have of the AI we are developing and using, or are we leaving it to the engineers?

  6. How do we know our human oversight of AI is effective in achieving the outcomes it was designed for?

  7. Have we reviewed these systems through the lens of Te Tiriti o Waitangi or Indigenous data sovereignty?

Because here’s the reality:

“If you can't map the outcome, you can't govern the harm.” - Kim Gordon (2025)

Without explainability in your information and data flows, logic paths, oversight dynamics, and AI outcomes, there is no real oversight—only assumptions. And assumptions, at scale, heighten your risk.

CEO Takeaway

If you are using AI without any thought given to the required ethical guardrails, you may already be unintentionally discriminating—and that brings legal, cultural, and reputational risk.

Audit your existing AI systems before investing or scaling. What’s already live is the most urgent governance priority. And remember, you may not own the AI—but you will own the consequences.

Board Takeaway

This is not about future risks. It’s about what’s happening now.

Ethical oversight must start with your current AI footprint. Today!

Final Thought

AI is not neutral. It reflects your leadership.

Unchecked systems don’t just create risk—they erode trust.

And in a landscape of increasing scrutiny, governance credibility is non-negotiable.

Getting ahead of AI risks isn’t just a governance safeguard—it’s a strategic business move that protects value, builds trust, and ensures your innovation scales sustainably.

Next in the Series

➡️ Article 3: Could You Be Designing Risk Into the System from Day One? Flawed data. Exclusionary logic. No Indigenous consultation. Next, we will explore why the earliest design choices are your greatest ethical risk.

Next Actions?

♻️ Repost if you believe AI is a leadership test.

👥 Tag a CEO or board member who needs to read this.

📩 DM us for:

→ Executive Briefing Whitepaper

→ 1:1 session with our ethics advisors

Let’s make Aotearoa New Zealand a global model for inclusive, intelligent, and ethical AI.


Kia kaha, kia māia, kia manawanui


#EthicalAI | #AIGovernance | #AotearoaAI | #EthicalBusiness

Kerry Topp

Kerry brings deep cross-sector leadership experience at the intersection of technology, transformation, and culture. He helps organisations move from intent to action — building values-led strategies, trusted leadership teams, and governance systems that scale with integrity.

Kerry embeds relational leadership, Treaty partnership, and ethical foresight into the systems that protect trust, drive innovation, and sustain long-term success.

https://www.paisleyethics.com