The Ethical AI Reckoning: Are You Designing Risk Into Your Systems from Day One?

How do we as leaders confidently harness the power of AI—driving innovation, competitive advantage, and profitability—without inadvertently causing harm or loss, compromising trust, violating ethical or regulatory standards, or perpetuating bias and exclusion?

Ethical AI failures don’t begin at deployment. They begin far earlier—at inception.

Before a line of code is written, teams make decisions about purpose, data, and logic. And when these choices are made without foresight, inclusion, ethical considerations, or Treaty-aligned perspectives, risk gets baked in.

In Article 2, we uncovered how AI may already be operating within your organisation—and the hidden risks that follow when ethical considerations were absent from its design.

Now in Article 3, we move deeper into that issue. Could your design decisions be embedding harm from the very beginning? Because the moment you define the problem, you’re shaping the outcome.

What This Risk Looks Like in Practice

AI is not inherently unethical—but it inevitably reflects the values, assumptions, and blind spots of those who create and deploy it. Today, ethical risks aren’t abstract—they are active, visible, and reputationally consequential.

“By the time harm appears, it’s not a coding problem—it’s a leadership one.” - Kerry Topp (2025)

For CEOs and boards, this means understanding that flawed design choices made early in the AI lifecycle—often invisibly—can embed risk that surfaces only once systems are scaled or challenged.

Too often:

  • Information and data are sourced offshore and lack local context

  • Māori and Pasifika perspectives are absent from design decisions

  • AI is trained on outdated or biased data sets

  • Tools are deployed without scrutiny of social impact or fairness

These are not technical glitches—they are signals of poor governance, missing foresight, and ethical gaps at inception.


❗The Oversight Gap #3: Thinking of Today, Not Tomorrow

In Aotearoa, the future market is already here—and it’s Māori, Pasifika, and migrant. These communities are not only growing rapidly—they are reshaping the demographic, cultural, and economic fabric of our country.

  • Māori make up approximately 17.8% of the population—about 887,500 in 2023—and have grown 14.4% since 2018 (Stats NZ, 2024).

  • Pasifika now represent 8.9% of the population, with a 16% increase since 2018 (Stats NZ, 2024).

  • Migration accounts for two‑thirds of projected population growth toward 6 million by 2040 (Stats NZ, 2025).

If your organisation’s AI systems, services, and decision frameworks aren’t designed with these communities in mind from the outset, you risk losing trust, relevance, and market share. Inclusive design isn’t just a social good—it’s a business imperative.

This foresight ties directly to the second question we ask leaders: How might the creation process itself introduce ethical risks?


How Is Your Creation Process Introducing Ethical Risks?

Design-stage decisions are never neutral—they shape risk from the outset. Flawed logic, poor data quality, homogeneous teams, and the absence of Treaty-aligned or culturally grounded input can quietly embed harm into your AI systems from day one.

When governance is missing early, risk isn’t avoided—it’s simply delayed. And by the time harm surfaces, it’s not a coding issue. It’s a leadership failure.

That’s why AI designers can’t—and shouldn’t—set the guardrails alone. Just as engineers don’t set building codes on their own, AI needs oversight shaped by legal, ethical, cultural, and community voices from the start.

If your organisation doesn’t yet have formal guidance on how AI is designed, tested, and deployed, now is the time to act. Because what you design today doesn’t stay small—it scales. And without the right guardrails in place, so does the risk.

So what can go wrong when ethical principles are left out of AI design? And why is integrating them from the outset critical to building systems that are both effective and trustworthy?


Let’s take a look:


1. Bias and Discrimination

AI systems learn from data—but if that data reflects societal or historical inequities, the system will replicate them at scale. In areas like hiring, lending, and healthcare, this can result in unlawful or unethical discrimination. For products or services, it can result in failed launches or stalled innovation.

Implications: Legal liability, regulatory breaches, reputational damage, declining financial performance, and loss of public trust.

Example: A hiring algorithm trained on a male-dominated dataset may systematically downrank female candidates—entrenching inequality under the guise of “objectivity.”

Our CEO prompt: Are we auditing our data and outcomes for fairness—or blindly trusting the algorithm?
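
For teams acting on that prompt, here is a minimal sketch of what an outcome audit can look like in practice, using the informal "four-fifths rule" as a rough screen. The column names, data, and threshold are illustrative assumptions, not a compliance standard.

```python
# Minimal fairness-audit sketch (illustrative assumptions throughout).
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of applicants in each group who received a positive outcome."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest selection rate divided by the highest; values well below ~0.8
    (the informal four-fifths rule) warrant closer human review."""
    return rates.min() / rates.max()

# Illustrative data only; in practice, use real shortlisting outcomes.
applicants = pd.DataFrame({
    "gender": ["female", "female", "female", "male", "male", "male"],
    "shortlisted": [0, 0, 1, 1, 1, 0],
})

rates = selection_rates(applicants, "gender", "shortlisted")
print(rates)                          # selection rate per group
print(disparate_impact_ratio(rates))  # review if well below 0.8
```

A single ratio cannot establish fairness on its own; a screen like this belongs alongside qualitative review and diverse human judgement.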

2. Opaque Decision-Making

Many advanced AI models (especially deep learning systems) operate as “black boxes”—their outputs are not explainable, even to their creators.

Implications: Loss of stakeholder trust, inability to meet regulatory requirements (e.g. GDPR), and exposure in legal or public forums.

Example: A customer denied a loan or a job can't get a clear answer as to why—and the organisation can't explain it either.

Our Board prompt: Can our organisation clearly justify every high-stakes AI decision we make?
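
One practical response to that prompt is to prefer models whose individual decisions can be traced back to their inputs. The sketch below assumes a hypothetical lending model built with scikit-learn’s logistic regression; the feature names, data, and "contribution" reading are illustrative only, not a regulatory-grade explanation.

```python
# Illustrative sketch: per-decision feature contributions for a simple,
# interpretable model. Data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "years_at_job", "existing_debt"]
X = np.array([
    [55.0, 4.0, 10.0],
    [30.0, 1.0, 25.0],
    [80.0, 9.0,  5.0],
    [25.0, 0.5, 30.0],
])
y = np.array([1, 0, 1, 0])  # 1 = loan approved in historical data

model = LogisticRegression().fit(X, y)

applicant = np.array([[28.0, 1.5, 22.0]])
# For a linear model, coefficient * feature value shows how each input
# pushes the decision (on the log-odds scale, intercept aside).
contributions = model.coef_[0] * applicant[0]

for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
print("decision:", "approve" if model.predict(applicant)[0] == 1 else "decline")
```

For complex models, post-hoc explanation tools exist, but the governance question stands either way: if no one can narrate the decision, should it be automated?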

3. Privacy Violations

AI thrives on data—but without robust consent frameworks and boundaries, it can become a tool for surveillance or misuse.

Implications: Breaches of privacy law, fines, brand damage, and disproportionate harm to vulnerable or marginalised communities.

Example: Facial recognition used in public spaces without consent, or emotion analysis software deployed without user awareness.

Our CPO & CRO prompt: Are our AI systems collecting data in ways we can legally and ethically defend?

4. Physical and Emotional Harm

AI decisions in autonomous systems—from vehicles to health diagnostics—can result in real-world harm if not properly tested, governed, or understood.

Implications: Safety failures, insurance liability, public backlash, and avoidable injury or death.

Example: An autonomous vehicle fails to detect a child. A healthcare AI misdiagnoses due to non-representative training data.

Our CTO prompt: Have we thoroughly stress-tested our AI models for rare but high-impact edge cases?
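
As a rough illustration of that prompt, the sketch below checks a perception function against a short list of rare but safety-critical scenarios before release. The detect_pedestrian placeholder, scenarios, and threshold are hypothetical; a real harness would use recorded or simulated sensor data and a documented safety standard.

```python
# Hypothetical edge-case harness (illustrative only).
def detect_pedestrian(frame: dict) -> bool:
    # Placeholder for the real perception model.
    return frame.get("contrast", 1.0) > 0.1

EDGE_CASES = [
    {"description": "child partially occluded at dusk", "contrast": 0.15},
    {"description": "pedestrian in heavy rain", "contrast": 0.25},
    {"description": "adult in high-visibility clothing", "contrast": 0.9},
]

def test_rare_high_impact_cases() -> None:
    failures = [case["description"] for case in EDGE_CASES if not detect_pedestrian(case)]
    # Any miss on a safety-critical scenario should block release.
    assert not failures, f"Model missed safety-critical cases: {failures}"

if __name__ == "__main__":
    test_rare_high_impact_cases()
    print("All listed edge cases handled")
```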

5. Manipulation and Misinformation

AI can be used not only to inform—but to manipulate. Systems that optimise purely for engagement can reinforce addiction, spread disinformation, or prey on vulnerable groups.

Implications: Loss of ethical license to operate, regulatory intervention, and long-term brand erosion.

Example: Generative AI is used to create deepfakes or to feed addictive content that exploits behavioural vulnerabilities.

Our CEO prompt: Are we optimising for short-term growth at the expense of values, truth, or civic trust? Are we adversely impacting sustainable growth?

6. Lack of Accountability

Without clear roles and escalation pathways, AI failures often fall into grey zones—where no one is responsible, and no lessons are learned.

Implications: Reputational collapse, legal exposure, and erosion of organisational learning and governance.

Example: An AI failure causes harm, but the developer blames the data, the vendor blames the user, and no one is accountable.

Our Board prompt: Do we know who owns the risk—and the response—when AI goes wrong?
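
One lightweight way to make that ownership explicit is an AI system register kept alongside the organisation’s risk register. The sketch below is illustrative only; the field names are assumptions, not a standard schema.

```python
# Illustrative AI system register entry; field names are assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    business_owner: str           # accountable executive, not the vendor
    technical_owner: str
    escalation_contact: str       # who responds when the system causes harm
    last_ethics_review: date
    known_limitations: list[str] = field(default_factory=list)

register = [
    AISystemRecord(
        name="credit-decisioning-v2",
        business_owner="Chief Risk Officer",
        technical_owner="Head of Data Science",
        escalation_contact="AI Governance Committee",
        last_ethics_review=date(2025, 3, 1),
        known_limitations=["trained on pre-2023 lending data"],
    ),
]

print(register[0].business_owner)  # the named owner when things go wrong
```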


Why Ethical Design Can’t Be Optional

The consequences of ignoring ethics in AI design are no longer speculative—they’re unfolding in real time, across headlines, courtrooms, and communities. This isn’t just a tech flaw; it’s a leadership failure.

For CEOs and boards operating within a regulatory environment that remains light-touch, the message is clear:

Ethical design is not a drag on innovation—it’s a prerequisite for trust, scalability, and long-term relevance.

From initial data sourcing to deployment, every stage of the AI lifecycle carries the potential to either reinforce values—or quietly erode them.

“AI systems don’t just automate your values—they amplify them. Governance isn’t optional. It’s strategy.” — Harvard Law School Forum on Corporate Governance (2025).

The good news? You don’t have to start from scratch.


Frameworks for Ethical and Accountable AI Design

Here are some trusted frameworks already shaping best practice in Aotearoa—tools you can use to embed ethical, accountable AI design into your organisation from day one.

  • ISO/IEC 42001: The first international standard for AI management systems—outlining how to embed governance, risk controls, and continuous improvement from day one.

  • NIST AI Risk Management Framework (2023): A practical, globally respected guide for identifying, measuring, and mitigating AI risks across the full lifecycle—already gaining traction with local advisors.

  • UNESCO AI Ethics Recommendations (2021): An internationally recognised ethical framework promoting transparency, accountability, human rights, and inclusive innovation—endorsed by Aotearoa New Zealand in global policy settings.

  • Tikanga-based Governance Frameworks and Models: Māori-led approaches to AI and data governance—grounded in mana, kaitiakitanga, and whakapapa—bring cultural intelligence and relational accountability to the fore. Frameworks such as Te Mana Raraunga’s data sovereignty principles, Ngā Tikanga Paihere, Te Kāhui Raraunga’s leadership model, and the legacy of WAI 262 provide a values-based foundation for ethical and Treaty-aligned technology design in Aotearoa.

  • Indigenous Governance Frameworks: The CARE Principles (Collective Benefit, Authority to Control, Responsibility, Ethics) and OCAP® (Ownership, Control, Access, Possession) provide internationally recognised models for Indigenous data governance. While CARE is increasingly embedded in Aotearoa through Māori data sovereignty efforts, OCAP—originating in Canada—is referenced as a comparative standard for ensuring Indigenous rights, control, and stewardship in data and AI use.

These frameworks exist because AI design decisions don’t just shape technical outcomes—they shape social, cultural, and economic realities. For leaders, ensuring ethical design isn’t optional—it’s a fiduciary duty.


As CEO, Ask Yourself (and Your Design Teams):

Ethical risk starts long before deployment—at the moment you define the opportunity for AI, commit to ethical AI, and build your team.

So ask:

  • Who’s in the design room—and who’s not?

  • Are we relying on our design team alone to ensure guardrails are in place?

  • Whose values are shaping the logic we’re building?

  • Are we embedding Indigenous consultation from the start—or after the fact?

  • Is this system equitable by design, or just by default?

“Design is where governance starts.” - Kerry Topp (2025)

Early assumptions—about users, data, and fairness—shape every outcome that follows. Get them wrong, and you embed risk from day one.


CEO Takeaway

Ethics is not a patch. It’s a core design requirement.

You must lead inclusion, foresight, and ethical considerations upstream—not after harm has occurred.


Board Takeaway

Every early-stage decision—information or data source, user group, feature set—is a governance decision.

If it shapes outcomes, it deserves oversight.

Inclusive design is your fiduciary responsibility.


Final Thought

We have three key messages:

  1. AI is not neutral—it reflects your leadership.

  2. You cannot afford to wait for regulation—you must lead, now.

  3. The tools are here. They’re relevant. And they work.

Because in today’s environment:

AI doesn’t just automate your values—it amplifies them.

And governance isn’t optional—it’s strategy.

Waiting until deployment to “fix” ethics is like launching a product with no quality control. It’s costly, reactive, and reputationally risky.

Getting ahead of AI risks isn’t just a governance safeguard—it’s a strategic business move that protects value, builds trust, and ensures your innovation scales sustainably.

To lead AI responsibly, you need:

  • Design-stage values alignment

  • Multidisciplinary collaboration

  • Risk management and ethical frameworks that recognise bias and cultural dimensions


Next in the Series

➡️ Article 4: Are You Prepared for How Your AI Might Be Misused?

Next, we’ll explore how AI systems are repurposed, redirected—or misused—and why your governance must account for more than just your intent.


Next Actions?

♻️ Repost if you believe AI is a leadership test.

👥 Tag a CEO or board member who needs to read this.

📩 DM us for:

→ Executive Briefing Whitepaper

→ 1:1 session with our ethics advisors

Let’s make Aotearoa New Zealand a global model for inclusive, intelligent, and ethical AI.


Kia kaha, kia māia, kia manawanui


#EthicalAI | #AIGovernance | #AotearoaAI | #EthicalBusiness

Kerry Topp

Kerry brings deep cross-sector leadership experience at the intersection of technology, transformation, and culture. He helps organisations move from intent to action — building values-led strategies, trusted leadership teams, and governance systems that scale with integrity.

Kerry embeds relational leadership, Treaty partnership, and ethical foresight into the systems that protect trust, drive innovation, and sustain long-term success.

https://www.paisleyethics.com