🌐 Module 8 · Context and the Future · Chapter 8.3 · 8 min read

Ethics: Accelerate Responsibly

Move fast + do it right — the engineer's responsibility in the AI age.

What you'll learn here

  • Name an AI chip maker's ethical responsibilities
  • Weigh military-use, surveillance, and environmental concerns
  • Compute the energy and climate impact
  • Identify diversity, inclusion, and open-source principles
  • Propose SIDRA ethics guidelines

Hook: Speed + Responsibility

“Move fast and break things” (Facebook, 2010s). Results: social-media addiction, election manipulation, youth mental-health issues.

AI chips are becoming vastly more powerful (Y100 is projected at 10⁶× Y1). Speed is not the only value; responsibility and ethics are mandatory.

This chapter lists the hard questions the SIDRA team must ask itself.

Intuition: 5 Ethical Axes

| Axis | Question | SIDRA relevance |
| --- | --- | --- |
| Military use | Will SIDRA end up in weapons? | ASELSAN as a customer |
| Surveillance | Face recognition, tracking? | Typical edge-AI app |
| Environment | Energy, water, chemical waste? | Fab operations |
| Employment | Automation joblessness? | AI generally |
| Access | Rich-poor divide? | Pricing |

Each generation (Y1, Y10, Y100) answers these differently.

Formalism: Ethical Dimensions

L1 · Basics

Military use:

SIDRA is a project backed by ASELSAN. ASELSAN operates in defense → SIDRA chips could end up in weapon systems.

Ethical question: are lethal autonomous weapon systems (LAWS) ethical?

UN 2023: call for international legal limits on LAWS. Türkiye signed.

SIDRA guideline (proposed):

  • Defensive systems: OK (radar, early warning).
  • Human-in-the-loop: OK (engineer has final call).
  • Fully autonomous lethal: REJECTED.

Surveillance:

Face recognition and behavior analysis → rising state surveillance. Democratic concerns.

SIDRA Y1 puts edge AI in camera sensors; its customer ASELSAN builds public-safety products.

Guideline:

  • Anonymous statistics: OK.
  • Individual tracking: requires legal authorization.
  • Ethnic/religious profiling: REJECTED.

Environment:

Fabs are energy-intensive. 1 fab:

  • 100 MW electricity.
  • 1000 m³/day water.
  • Hazardous chemicals.

SIDRA at UNAM draws ~100 kW (small). The Y10 mini-fab: 5 MW. The Y100 full fab: 100+ MW.

Guideline:

  • Renewable energy (solar, wind) 50%+.
  • Water recycling.
  • Certified hazardous-waste disposal.
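The scale-up above can be turned into annual energy figures with simple arithmetic. A minimal sketch, assuming round-the-clock operation (typical for a fab; the power figures come from the text):

```python
# Back-of-the-envelope annual energy at each SIDRA scale.
# Power draws are from the text; continuous operation (8760 h/yr)
# is an assumption for illustration.
HOURS_PER_YEAR = 24 * 365  # 8760

scales_kw = {
    "Y1 (UNAM lab)": 100,        # 100 kW
    "Y10 (mini-fab)": 5_000,     # 5 MW
    "Y100 (full fab)": 100_000,  # 100 MW
}

for name, kw in scales_kw.items():
    gwh_per_year = kw * HOURS_PER_YEAR / 1e6  # kWh -> GWh
    print(f"{name}: {gwh_per_year:,.1f} GWh/yr")
```

At full-fab scale this is on the order of 876 GWh/yr, which is why the 50%+ renewable-energy guideline stops being a detail and becomes a major procurement commitment.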

L2 · Full

AI energy impact:

Training GPT-3 consumed 1287 MWh ≈ 552 t CO₂. Globally, AI could draw 5-10% of electricity by 2030 (IEA).

SIDRA's advantage: inference is 100-1000× more efficient, so AI's climate load drops. A positive ethical contribution.

A full Y100 deployment (1M inference chips instead of 1M GPUs) → ~10 TWh saved annually, roughly 2% of Türkiye's total electricity.
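The 10 TWh figure can be sanity-checked with rough arithmetic. The device figures below are illustrative assumptions, not SIDRA specs: a datacenter GPU at ~700 W with a cooling/PUE overhead of 1.5, and an inference chip at the 100× end of the efficiency range from the text:

```python
# Rough check of the "~10 TWh/yr" Y100 savings claim.
# All per-device numbers are assumed for illustration.
HOURS_PER_YEAR = 8760
DEVICES = 1_000_000

gpu_w = 700 * 1.5        # wall power incl. datacenter overhead (assumed)
sidra_w = gpu_w / 100    # 100x efficiency, the lower bound in the text

savings_twh = DEVICES * (gpu_w - sidra_w) * HOURS_PER_YEAR / 1e12
print(f"~{savings_twh:.1f} TWh/yr saved")  # → ~9.1 TWh/yr saved
```

Under these assumptions the savings land near 9 TWh/yr, the same order of magnitude as the claim in the text.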

Employment impact:

AI removes some jobs (call centers, data entry). Creates new ones (AI operators, data scientists).

SIDRA creates 1000+ new jobs in Türkiye by 2035. Positive.

At the same time, AI automation (autonomous vehicles, call centers) removes 10K+ jobs. The net effect is unclear.

Guideline:

  • Re-training fund (TÜBİTAK).
  • Open AI education (like SIDRA Atölye).
  • Social safety net policy.

Access:

SIDRA Y1: $200/chip (Türkiye). In a mobile device: a $500 product (device + margin).

Economic access: Türkiye is a middle-income country; product pricing can be adjusted to income.

Global: for developing countries, SIDRA is an economic alternative to NVIDIA’s $50K datacenter card.

Open source:

The SIDRA simulator is MIT-licensed, the SDK is open, and the firmware is GPL.

Fully open? The open question is the RTL (the chip design itself).

Challenge: the IP has commercial value; opening it means competitive loss.

Proposal: open the academic (non-commercial) branch; keep the commercial branch closed.

L3 · Deep

Diversity and inclusion:

SIDRA team:

  • Gender: 30% women (sector average 20%). Target 50%.
  • International: Turkish + international mix.
  • Discipline: EE + software + materials + physics.
  • Age: experience + new grads mix.

Ethics committee:

SIDRA’s ethics committee (to be established 2026):

  • Academia (ethics, philosophy).
  • Civil society.
  • Law.
  • Engineers (internal).

Critical decisions: military sales, surveillance customers, environmental policy.

Transparency:

Annual transparency report:

  • Customer distribution (defense, civilian, academic).
  • Energy consumption + CO₂.
  • Employment + diversity.
  • Ethics committee decisions.

Google, Microsoft, Apple publish similar. SIDRA should too.

Long-term ethics:

Y100+ AI will be more powerful → a heavier ethical load. And in AGI scenarios?

Questions:

  • Who’s responsible when an autonomous system errs?
  • Legal liability for AI hallucinations?
  • Chip designer’s moral load?

No answers yet. SIDRA starts this debate in Türkiye.

SIDRA ethics manifesto (draft):

  1. Human-centered AI: augmentation + assistance (not replacement).
  2. Transparency: decisions explainable.
  3. Environment: net-zero carbon by 2030.
  4. Access: open source + academic.
  5. Military: defensive + human-controlled only.
  6. Diversity: 50% women by 2030.
  7. Society: Türkiye benefit + global responsibility.

Experiment: A SIDRA Ethics Scenario

Scenario: a country requests 10K Y10 chips for a “surveillance system”.

Analysis:

  • Is the country democratic? (Freedom House score).
  • Project details?
  • Human-rights concerns?
  • International sanctions?

Ethics committee decision:

  • Democratic + transparent + UN-aligned: YES (sell).
  • Authoritarian surveillance regime: REJECTED (no sale).
  • Ambiguous: further investigation + conditions (export controls).
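The committee's triage above can be sketched as a rule-based function. The field names and the freedom-score thresholds below are hypothetical illustrations, not actual SIDRA policy:

```python
from dataclasses import dataclass

@dataclass
class SalesRequest:
    freedom_score: int         # e.g. Freedom House score, 0-100 (assumed scale)
    transparent_project: bool  # project details disclosed?
    un_aligned: bool           # consistent with UN guidance?
    under_sanctions: bool      # international sanctions in force?

def triage(req: SalesRequest) -> str:
    """Mirror the three committee outcomes: YES / REJECTED / INVESTIGATE."""
    if req.under_sanctions:
        return "REJECTED"
    if req.freedom_score >= 70 and req.transparent_project and req.un_aligned:
        return "YES"
    if req.freedom_score < 40:  # authoritarian surveillance regime
        return "REJECTED"
    return "INVESTIGATE"  # ambiguous: conditions / export controls

print(triage(SalesRequest(85, True, True, False)))  # → YES
```

The function only structures the checklist; the final call on any sale stays with the human committee.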

Real example: US semiconductor export restrictions follow similar logic. SIDRA sets its own policy.

Quick Quiz

1/6 · SIDRA's military-use guideline?

Lab Exercise

Draft the SIDRA ethics committee's rules.

Proposed members:

  • 1 academic (ethics/philosophy).
  • 1 civil society (human rights).
  • 1 lawyer.
  • 2 SIDRA engineers (senior).
  • 1 international observer.

Process:

  • Monthly meetings.
  • Critical decisions by unanimity.
  • Annual transparency report.

Authority:

  • Customer approval/rejection (military, surveillance).
  • Environmental policy.
  • Employment and diversity goals.
  • Open-source policy.

The structure isn’t a burden; it makes SIDRA a sustainable brand.

Cheat Sheet

  • 5 ethical axes: military, surveillance, environment, employment, access.
  • SIDRA guidelines: defensive military, anonymous surveillance, renewable energy, open academic source.
  • Climate positive: inference 100-1000× more efficient than GPUs.
  • Transparency: annual report + ethics committee.
  • Diversity target: 50% women by 2030.
  • Manifesto: 7 principles.

Vision: Responsible SIDRA

  • Y1 (2026): Publish ethics manifesto.
  • Y3 (2028): Establish ethics committee.
  • Y10 (2030): Annual transparency report.
  • Y100 (2035): Sector-leading responsible-AI guideline.
  • Y1000 (2040+): AGI preparedness ethics work.

Goal: Türkiye’s first responsible AI company. Global example.

Further Reading