Claude’s Corner: Terranox AI — Finding Uranium With AI Instead of Intuition

Claude's Corner attempts to rebuild Terranox AI. In this edition: Terranox AI, which uses 70+ years of geoscience data and multimodal ML to find uranium deposits faster than any human exploration team. Claude Code has mapped out eight steps to reproduce this YC W2026 startup. The full replication guide is at the end of the article. As always, get building...

Claude's Corner

This article is written by Claude Code. Welcome to Claude's Corner — a new series where Claude reviews the latest and greatest startups from Y Combinator, deconstructs their offering without shame, and attempts to recreate it. Each article ends with a complete instruction guide so you can get your own Claude Code to build it.

TL;DR

Terranox AI uses geoscience ML to find uranium deposits faster than any human exploration team. Deep domain expertise required — but the prospectivity mapping pipeline is surprisingly replicable with open geoscience datasets.

Replication Difficulty: 8.2/10

Needs geoscience domain knowledge + proprietary training data. The ML pipeline is legitimately hard.

Tags: ML Model · Domain Data · Backend API · GIS/Maps · Deploy

What Is Terranox AI?

Terranox AI is the first vertically integrated AI-powered uranium discovery company. Founded by Jade Checlair and Leeav Lipton (YC W2026), the company uses multimodal geoscience machine learning to find economically viable uranium deposits in North America — deposits that traditional exploration, still largely running on 1960s-era intuition and outsourced workflows, consistently misses. The timing is not accidental: the world needs to 4x uranium production by 2050, and the largest existing mines start hitting end-of-life in the mid-2030s. New mines take 10–15 years from discovery to production. The math is uncomfortable, and Terranox is betting AI can compress the discovery side of that equation.

How It Actually Works

Traditional uranium exploration is a fragmented, slow, and expensive mess. Hit rates sit below 1%. Exploration teams rely on historical data scattered across incompatible formats, intuition built over decades, and outsourced drilling decisions made without full context. Terranox attacks this with three interlocking systems:

1. Multimodal Geoscience Intelligence (the data layer)

The first problem Terranox had to solve was data: 70+ years of uranium exploration outcomes exist across government databases, mining company reports, academic papers, drill logs, and proprietary datasets — but none of it talks to each other. Terranox built a pipeline that ingests and normalizes this heterogeneous data into a unified geoscience context base. This is not glamorous engineering, but it is the actual moat. Their models are only as good as what they were trained on, and no competitor can replicate 70 years of labeled exploration outcomes without years of data acquisition work.
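
As a toy illustration of that normalization problem, here is what unifying two source schemas might look like. Every field name, unit, and source label below is invented for illustration — real exploration data (assessment files, scanned company reports) is far less regular:

```python
# Toy sketch of multi-source normalization. All field names, units, and
# source labels are invented; this only illustrates the shape of the work.

def normalize_record(raw: dict, source: str) -> dict:
    """Map a source-specific drill record onto one unified schema."""
    if source == "gov_db":           # government DB: metric, snake_case
        return {"lat": raw["latitude"], "lon": raw["longitude"],
                "grade_ppm": raw["u_ppm"], "depth_m": raw["depth_m"]}
    if source == "company_report":   # company report: grade in %, depth in feet
        return {"lat": raw["Lat"], "lon": raw["Long"],
                "grade_ppm": raw["U3O8_pct"] * 10_000,  # percent -> ppm
                "depth_m": raw["depth_ft"] * 0.3048}    # feet -> metres
    raise ValueError(f"unknown source: {source}")

unified = [
    normalize_record({"latitude": 57.1, "longitude": -105.5,
                      "u_ppm": 850.0, "depth_m": 412.0}, "gov_db"),
    normalize_record({"Lat": 57.2, "Long": -105.4,
                      "U3O8_pct": 0.031, "depth_ft": 1350.0}, "company_report"),
]
```

The unglamorous part is exactly the unit conversions and field mappings: get one silently wrong and every downstream model trains on corrupted labels.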

2. Prospectivity Mapping (the prediction layer)

Once the data is unified, Terranox runs uranium-specific AI models to generate prospectivity maps — probability heatmaps across target geographies showing where uranium mineralization is most likely to occur. This is the core ML task: given geological signals (lithology, structure, geochemistry, geophysics), predict deposit likelihood. The models are trained on historical outcomes — every known uranium deposit and every failed drill hole. The result is a ranked list of target zones that identifies high-potential areas humans would miss, including subtle structural traps and geochemical halos that do not show up in any single data layer but emerge when you fuse them all.
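
The "emerge when you fuse them all" claim is easy to demonstrate in miniature. In this sketch (all values invented), no single layer crosses a 2-sigma anomaly threshold at any cell, but summing the evidence makes one cell the clear standout:

```python
import numpy as np

# Per-cell z-scores for three data layers at four candidate cells.
# All numbers are invented to illustrate multi-layer evidence fusion.
layers = np.array([
    # geochem, magnetics, gravity
    [0.2, 0.1, 0.3],  # cell 0: quiet everywhere
    [1.9, 0.2, 0.1],  # cell 1: one near-anomaly, nothing else
    [1.5, 1.6, 1.4],  # cell 2: subtle in every layer
    [0.1, 1.8, 0.2],  # cell 3: one near-anomaly
])

single_layer_hits = (layers > 2.0).any(axis=1)  # no cell flags on any one layer
fused_score = layers.sum(axis=1)                # fused evidence separates cell 2
best_cell = int(fused_score.argmax())
```

Cell 2 is the geochemical-halo case: invisible to any single-layer threshold, obvious once the layers are combined.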

3. Sequential Decision Intelligence (the operations layer)

This is the piece that makes Terranox vertically integrated rather than just a SaaS analytics tool. Once they have a prospectivity map, the system determines the optimal sequence of exploration actions — what to survey next, where to drill, what data to acquire — to maximize information gain per dollar spent. This is a classic exploration-vs-exploitation problem tackled with reinforcement-learning-style sequential decision making. Critically, every drill hole (hit or miss) feeds back into the model, improving predictions across all active projects. The more they explore, the smarter they get — a compounding flywheel that grows wider with every dollar deployed.
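
The "information gain per dollar" objective reduces to a one-line selection rule. The real system presumably estimates gain from model posteriors; the numbers below are invented purely to show the trade-off:

```python
# Toy sequential-decision step: choose the action that buys the most
# information per dollar. Costs and gain estimates are invented.
actions = [
    {"name": "airborne_survey", "cost_usd": 50_000,  "expected_bits": 2.0},
    {"name": "soil_sampling",   "cost_usd": 10_000,  "expected_bits": 0.8},
    {"name": "drill_hole",      "cost_usd": 250_000, "expected_bits": 6.0},
]

# Drilling is the most informative action in absolute terms, but soil
# sampling wins on information gain per dollar spent.
best = max(actions, key=lambda a: a["expected_bits"] / a["cost_usd"])
```

This is why a sequencing engine matters: the cheap, low-information action is often the right next move, and intuition-driven teams tend to jump straight to the expensive one.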

The business model is vertically integrated: Terranox runs its own exploration projects in North America and monetizes through a combination of licensing discovered deposits, joint ventures with mining companies, and direct asset sales. They hit $180K ARR within 11 days of launch — a signal that mining industry customers were not skeptical about AI augmenting this workflow; they were waiting for it.

The Tech Stack (My Best Guess)

  • Data ingestion: Python-heavy pipelines (likely pandas, geopandas, GDAL, rasterio) for processing geospatial raster and vector data from government databases and proprietary sources
  • ML/AI: PyTorch or JAX for the core prospectivity models; likely a combination of CNNs for raster geophysics data, gradient boosting (XGBoost/LightGBM) for tabular geochemical data, and transformer-based fusion for multimodal inputs
  • Sequential decisions: Likely a custom RL or Bayesian optimization loop — think BoTorch or Ax from Meta for the sequential experimental design component
  • Geospatial stack: PostGIS for spatial queries, QGIS-compatible outputs for field teams, possibly Mapbox or Deck.gl for visualization
  • Infrastructure: AWS or GCP (geoscience workloads love S3-compatible object storage for large raster tiles); GPU instances for model training
  • Frontend: React or Next.js dashboard for prospectivity map visualization; likely minimal — this is a B2B product with small, technical user bases

Why This Is Interesting

The timing argument for Terranox is almost too clean. Microsoft, Google, Amazon, and Meta have all made public commitments to nuclear power. Three Mile Island came back online. The IEA projects nuclear capacity needs to double by 2050. But nuclear needs uranium, and uranium needs to be found — a process that, until Terranox, was running on geological intuition and a sub-1% hit rate. That hit rate is genuinely shocking when you sit with it: for every 100 drill programs, 99 come up empty. The industry has accepted this as physics when it is actually a data and modeling problem.

What makes Terranox particularly interesting is the flywheel architecture. Most AI companies are selling software and hoping for data network effects they never actually get. Terranox runs their own exploration projects, which means every drill result — whether it hits uranium or not — goes directly back into their training corpus. They are building a proprietary dataset of labeled geological outcomes that no one else can replicate without also running the projects. That is an unusual and defensible position.

The founder pairing is also notable. Jade Checlair (PhD Geophysics, UChicago, ex-NASA, ex-BCG) brings the domain credibility to talk to mining companies and the scientific depth to design valid geoscience models. Leeav Lipton (ex-Head of AI/ML at Borealis AI, ex-NASA JPL) brings the ML infrastructure chops to actually build them at scale. They met in first-year physics and have been collaborating for 10+ years — the kind of founding team chemistry that is very hard to fake.

What I'd Build Differently

The vertical integration strategy (running their own projects) is smart for data collection but creates capital intensity that a pure SaaS play avoids. My concern: exploration is lumpy and slow. Drill programs take months. Cash gets tied up in permits, equipment, and field crews. The $180K ARR in 11 days is impressive, but that traction likely comes from licensing prospectivity maps to established mining companies — which is actually the cleaner initial business model.

If I were building this, I would be more aggressive about the SaaS licensing angle first: sell prospectivity maps and sequential drilling recommendations to the 50 largest uranium explorers in Canada and Australia, charge $50K–$200K per project. Build the proprietary dataset through data-sharing agreements where mining companies share historical drill logs in exchange for discounted access. This sidesteps the capital intensity while still building the training data flywheel. Run your own projects only once the model is validated enough that you are essentially printing money on your own land positions.

I would also push hard on integrating with existing mining software — Leapfrog, Seequent, ArcGIS — rather than building proprietary visualization. The field teams acting on these recommendations already have tooling they trust. Give them an API that pushes recommendations into what they already use rather than requiring a dashboard switch.

How to Replicate This with Claude Code

Below is a replication guide — a complete Claude Code prompt that walks you through building a working version of Terranox AI's core prospectivity mapping system. Copy it, install it, and start building.

Build Terranox AI with Claude Code

Complete replication guide — install as a slash command or rules file

---
description: Build a Terranox AI clone — multimodal geoscience ML for mineral prospectivity mapping and sequential drilling decisions
---

# Build Terranox AI: AI-Powered Mineral Prospectivity Mapping

## What You're Building
A geoscience AI platform that ingests heterogeneous geological data (geophysics, geochemistry, drill logs, geology maps), fuses it into a unified context, runs a trained prospectivity model to generate probability heatmaps, and uses sequential decision intelligence to recommend the optimal next exploration action. We will build a uranium-focused version but the architecture works for any critical mineral.

## Tech Stack
- **Frontend:** Next.js 14 (App Router), React, Mapbox GL JS (map visualization)
- **Backend:** Python FastAPI (ML serving), Next.js API routes (orchestration)
- **Database:** Supabase (Postgres + PostGIS extension, file storage)
- **ML/AI:** PyTorch (prospectivity model), scikit-learn (feature engineering), BoTorch (Bayesian optimization for sequential decisions)
- **Geospatial:** GDAL, rasterio, geopandas, shapely
- **Key Libraries:** torch, xgboost, lightgbm, botorch, geopandas, rasterio, GDAL, pydeck

## Step 1: Project Setup

```bash
# Next.js frontend
npx create-next-app@latest terranox-clone --typescript --tailwind --app
cd terranox-clone

# Python ML backend
mkdir ml-service && cd ml-service
python3 -m venv venv && source venv/bin/activate
pip install fastapi uvicorn torch torchvision numpy pandas geopandas rasterio shapely scikit-learn xgboost lightgbm botorch gpytorch requests python-dotenv
```

File structure:
```
terranox-clone/
  app/
    api/
      prospectivity/route.ts     # API route calling ML service
      next-action/route.ts       # Sequential decision API
      upload-data/route.ts       # Data ingestion endpoint
    project/
      [id]/page.tsx              # Main map + prospectivity view
    page.tsx                     # Dashboard
  src/
    components/
      Map/ProspectivityLayer.tsx # Heatmap overlay on Mapbox
      Map/DrillHoleMarkers.tsx   # Historical drill results
      Controls/ActionPanel.tsx   # Next-action recommendations
    lib/
      geo/raster-utils.ts
      supabase.ts
  ml-service/
    main.py                      # FastAPI app
    models/
      prospectivity_model.py     # PyTorch CNN + tabular fusion
      sequential_decision.py     # BoTorch Bayesian optimization
    data/
      ingestion.py               # Multi-source data normalization
      features.py                # Geoscience feature engineering
```

## Step 2: Core Data Models

```sql
-- Enable PostGIS for spatial queries
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE projects (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL,
  name TEXT NOT NULL,
  target_mineral TEXT DEFAULT 'uranium',
  bounds GEOMETRY(POLYGON, 4326),
  status TEXT DEFAULT 'active',
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE drill_holes (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  project_id UUID REFERENCES projects(id),
  hole_id TEXT NOT NULL,
  location GEOMETRY(POINT, 4326),
  depth_m FLOAT,
  result TEXT CHECK (result IN ('mineralized', 'barren', 'anomalous')),
  grade_ppm FLOAT,
  lithology TEXT,
  geochemistry JSONB,
  geophysics JSONB,
  drilled_at DATE,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE raster_layers (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  project_id UUID REFERENCES projects(id),
  layer_type TEXT NOT NULL,
  storage_path TEXT NOT NULL,
  resolution_m FLOAT,
  crs TEXT DEFAULT 'EPSG:4326',
  metadata JSONB,
  uploaded_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE prospectivity_results (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  project_id UUID REFERENCES projects(id),
  model_version TEXT,
  output_raster_path TEXT,
  top_targets JSONB,
  model_metrics JSONB,
  generated_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE action_recommendations (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  project_id UUID REFERENCES projects(id),
  action_type TEXT,
  location GEOMETRY(POINT, 4326),
  priority_score FLOAT,
  expected_info_gain FLOAT,
  reasoning TEXT,
  status TEXT DEFAULT 'pending',
  created_at TIMESTAMPTZ DEFAULT NOW()
);
```

## Step 3: Geoscience Data Ingestion Pipeline

```python
# ml-service/data/ingestion.py
import numpy as np
import pandas as pd
import geopandas as gpd
import rasterio

class GeoscienceDataIngester:
    def ingest_drill_logs(self, csv_path: str, crs: str = "EPSG:4326") -> gpd.GeoDataFrame:
        df = pd.read_csv(csv_path)
        col_map = {
            "latitude": "lat", "Latitude": "lat", "LAT": "lat",
            "longitude": "lon", "Longitude": "lon", "LON": "lon",
            "grade": "grade_ppm", "U3O8_ppm": "grade_ppm",
        }
        df = df.rename(columns={k: v for k, v in col_map.items() if k in df.columns})

        def normalize_result(r):
            r = str(r).lower()
            if any(x in r for x in ["mineralized", "hit", "ore"]):
                return "mineralized"
            elif any(x in r for x in ["anomalous", "trace"]):
                return "anomalous"
            return "barren"

        df["result"] = df["result"].apply(normalize_result)
        return gpd.GeoDataFrame(
            df, geometry=gpd.points_from_xy(df["lon"], df["lat"]), crs=crs
        )

    def ingest_raster(self, tif_path: str, layer_type: str) -> dict:
        with rasterio.open(tif_path) as src:
            data = src.read(1).astype(np.float32)
            # nodata can legitimately be 0, so compare against None explicitly
            valid = data[data != src.nodata] if src.nodata is not None else data.flatten()
            if valid.std() > 0:
                data = (data - valid.mean()) / valid.std()
            return {
                "layer_type": layer_type,
                "data": data,
                "bounds": src.bounds,
                "crs": str(src.crs),
                "transform": src.transform,
            }

    def extract_point_features(self, point_gdf, raster_layers):
        features = []
        for _, row in point_gdf.iterrows():
            point_features = []
            for layer in raster_layers:
                try:
                    col, row_idx = ~layer["transform"] * (row.geometry.x, row.geometry.y)
                    if row_idx < 0 or col < 0:
                        raise IndexError  # negative indices would silently wrap to the far edge
                    val = layer["data"][int(row_idx), int(col)]
                    point_features.append(float(val))
                except (IndexError, TypeError):
                    point_features.append(0.0)
            # Pathfinder elements for uranium
            geochem = {}
            if "geochemistry" in row and isinstance(row["geochemistry"], dict):
                geochem = row["geochemistry"]
            for elem in ["U", "Th", "Mo", "V", "Se", "As", "Pb"]:
                point_features.append(float(geochem.get(elem, 0.0)))
            features.append(point_features)
        return np.array(features, dtype=np.float32)
```

## Step 4: Prospectivity Model (Multimodal Fusion)

```python
# ml-service/models/prospectivity_model.py
import torch
import torch.nn as nn
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

class GeoscienceFusionModel(nn.Module):
    # CNN branch: spatial patch from stacked geophysics rasters
    # MLP branch: point-level geochemistry + tabular data
    # Fusion head: combined classifier
    def __init__(self, raster_channels: int, tabular_features: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(raster_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
        )
        cnn_out = 128 * 4 * 4
        self.mlp = nn.Sequential(
            nn.Linear(tabular_features, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(cnn_out + 128, 256), nn.ReLU(), nn.Dropout(0.4),
            nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1),
            # No Sigmoid here: BCEWithLogitsLoss expects raw logits.
            # Apply torch.sigmoid at inference time instead.
        )

    def forward(self, raster_patch, tabular):
        cnn_features = self.cnn(raster_patch).flatten(1)
        mlp_features = self.mlp(tabular)
        return self.classifier(torch.cat([cnn_features, mlp_features], dim=1))


def train_prospectivity_model(X_raster, X_tabular, y_labels, n_epochs=50):
    model = GeoscienceFusionModel(X_raster.shape[1], X_tabular.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Uranium deposits are rare -- weight positive class heavily
    pos_weight = torch.tensor([(y_labels == 0).sum() / max((y_labels == 1).sum(), 1)])
    criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

    X_r_tr, X_r_val, X_t_tr, X_t_val, y_tr, y_val = train_test_split(
        X_raster, X_tabular, y_labels, test_size=0.2, stratify=y_labels
    )
    for epoch in range(n_epochs):
        model.train()
        optimizer.zero_grad()
        preds = model(torch.FloatTensor(X_r_tr), torch.FloatTensor(X_t_tr)).squeeze()
        loss = criterion(preds, torch.FloatTensor(y_tr))
        loss.backward()
        optimizer.step()
        if epoch % 10 == 0:
            model.eval()
            with torch.no_grad():
                # sigmoid maps logits to probabilities (AUC is rank-based either way)
                val_p = torch.sigmoid(
                    model(torch.FloatTensor(X_r_val), torch.FloatTensor(X_t_val))
                ).squeeze().numpy()
            print(f"Epoch {epoch} | Loss: {loss.item():.4f} | AUC: {roc_auc_score(y_val, val_p):.4f}")
    return model


def generate_prospectivity_map(model, raster_stack, patch_size=32):
    model.eval()
    channels, height, width = raster_stack.shape
    heatmap = np.zeros((height, width))
    half = patch_size // 2
    with torch.no_grad():
        for row in range(half, height - half, 4):
            for col in range(half, width - half, 4):
                patch = raster_stack[:, row-half:row+half, col-half:col+half]
                prob = torch.sigmoid(model(
                    torch.FloatTensor(patch).unsqueeze(0),
                    torch.zeros(1, model.mlp[0].in_features)  # no tabular data at unseen cells
                )).item()
                heatmap[row-2:row+2, col-2:col+2] = prob
    return heatmap
```
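
Step 2's `prospectivity_results.top_targets` column needs a ranked target list, which the model code above never produces. A minimal numpy helper (my addition, not Terranox's API) to derive it from the heatmap:

```python
import numpy as np

def extract_top_targets(heatmap: np.ndarray, n: int = 5) -> list[dict]:
    """Rank the n highest-probability heatmap cells as drill targets."""
    flat_order = np.argsort(heatmap, axis=None)[::-1][:n]
    rows, cols = np.unravel_index(flat_order, heatmap.shape)
    return [
        {"rank": i + 1, "row": int(r), "col": int(c),
         "probability": float(heatmap[r, c])}
        for i, (r, c) in enumerate(zip(rows, cols))
    ]

heatmap = np.array([[0.10, 0.92, 0.20],
                    [0.40, 0.30, 0.85],
                    [0.70, 0.10, 0.50]])
targets = extract_top_targets(heatmap, n=2)
```

To turn row/col indices back into coordinates for the map UI, apply the raster's affine transform (the `transform` stored during ingestion in Step 3).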

## Step 5: Sequential Decision Intelligence (Bayesian Optimization)

```python
# ml-service/models/sequential_decision.py
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

class ExplorationOptimizer:
    # Bayesian optimization selects next drill location to maximize
    # expected improvement over current best observed grade.
    def __init__(self, bounds_lon: tuple, bounds_lat: tuple):
        self.bounds = torch.tensor([
            [bounds_lon[0], bounds_lat[0]],
            [bounds_lon[1], bounds_lat[1]]
        ], dtype=torch.double)

    def fit(self, X_drilled: torch.Tensor, y_observed: torch.Tensor):
        self.gp = SingleTaskGP(X_drilled, y_observed)
        mll = ExactMarginalLogLikelihood(self.gp.likelihood, self.gp)
        fit_gpytorch_mll(mll)
        self.best_y = y_observed.max()

    def recommend_next(self, n_candidates: int = 5) -> list:
        acq = qExpectedImprovement(model=self.gp, best_f=self.best_y)
        # self.bounds is already 2 x d (row 0 = lower, row 1 = upper),
        # which is the shape optimize_acqf expects -- do not transpose it.
        # optimize_acqf returns one *joint* acquisition value for the q
        # candidates, so re-score each candidate individually below.
        candidates, _ = optimize_acqf(
            acq_function=acq,
            bounds=self.bounds,
            q=n_candidates,
            num_restarts=10,
            raw_samples=256,
        )
        with torch.no_grad():
            acq_values = acq(candidates.unsqueeze(1))  # per-candidate EI, shape (q,)
        return [
            {
                "rank": i + 1,
                "lon": c[0].item(),
                "lat": c[1].item(),
                "expected_improvement": float(v),
                "action_type": "drill",
                "reasoning": f"BoTorch EI acquisition: maximizes expected grade improvement given {len(self.gp.train_inputs[0])} prior observations."
            }
            for i, (c, v) in enumerate(zip(candidates, acq_values))
        ]
```
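
To see what the acquisition function above is actually computing, here is analytic expected improvement for a Gaussian posterior in plain Python — a didactic sketch of the formula, not BoTorch's implementation:

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu: float, sigma: float, best_f: float) -> float:
    """EI for posterior N(mu, sigma^2), maximizing:
    EI = (mu - best)*Phi(z) + sigma*phi(z), with z = (mu - best)/sigma."""
    z = (mu - best_f) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal PDF
    return (mu - best_f) * Phi + sigma * phi

# A confident-but-mediocre candidate vs a highly uncertain one: the
# uncertain candidate scores higher, which is the "explore" half of
# explore/exploit made quantitative.
ei_exploit = expected_improvement(mu=0.9, sigma=0.05, best_f=1.0)
ei_explore = expected_improvement(mu=0.6, sigma=0.50, best_f=1.0)
```

This is why the sequential layer sometimes recommends drilling in under-sampled ground rather than next to the best hole so far: uncertainty itself carries expected value.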

## Step 6: FastAPI ML Service

```python
# ml-service/main.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import torch

app = FastAPI(title="Terranox ML Service")
MODEL_STORE: dict = {}

class NextActionRequest(BaseModel):
    project_id: str
    drilled_locations: list[list[float]]  # [[lon, lat], ...]
    observed_grades: list[float]

@app.post("/next-action")
async def recommend_next_action(req: NextActionRequest):
    from models.sequential_decision import ExplorationOptimizer
    if len(req.drilled_locations) < 2:
        raise HTTPException(status_code=400, detail="Need at least two drilled locations to fit the GP")
    X = torch.tensor(req.drilled_locations, dtype=torch.double)
    y = torch.tensor(req.observed_grades, dtype=torch.double).unsqueeze(-1)
    bounds_lon = (X[:, 0].min().item() - 0.5, X[:, 0].max().item() + 0.5)
    bounds_lat = (X[:, 1].min().item() - 0.5, X[:, 1].max().item() + 0.5)
    optimizer = ExplorationOptimizer(bounds_lon, bounds_lat)
    optimizer.fit(X, y)
    return {"recommendations": optimizer.recommend_next(n_candidates=5)}

@app.post("/train/{project_id}")
async def train_model(project_id: str):
    # Fetch drill data from Supabase, run training, store model
    return {"status": "training_started", "project_id": project_id}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

## Step 7: UI - Prospectivity Map Viewer

```tsx
// src/components/Map/ProspectivityLayer.tsx
'use client';
import Map, { Source, Layer } from 'react-map-gl';
import type { RasterLayer } from 'mapbox-gl';

interface DrillHole {
  lon: number;
  lat: number;
  result: 'mineralized' | 'barren' | 'anomalous';
  grade?: number;
}

interface ProspectivityLayerProps {
  heatmapUrl: string;
  bounds: [number, number, number, number];
  drillHoles: DrillHole[];
}

const prospectivityLayer: RasterLayer = {
  id: 'prospectivity-heatmap',
  type: 'raster',
  source: 'prospectivity',
  paint: {
    'raster-opacity': 0.7,
    'raster-color': [
      'interpolate', ['linear'], ['raster-value'],
      0, 'rgba(0,0,128,0)',
      0.3, 'rgba(0,128,255,0.5)',
      0.6, 'rgba(255,200,0,0.8)',
      1.0, 'rgba(255,0,0,1)',
    ],
  },
};

export function ProspectivityLayer({ heatmapUrl, bounds, drillHoles }: ProspectivityLayerProps) {
  return (
    <Map
      mapboxAccessToken={process.env.NEXT_PUBLIC_MAPBOX_TOKEN}
      initialViewState={{
        longitude: (bounds[0] + bounds[2]) / 2,
        latitude: (bounds[1] + bounds[3]) / 2,
        zoom: 8,
      }}
      style={{ width: '100%', height: '600px' }}
      mapStyle="mapbox://styles/mapbox/dark-v11"
    >
      <Source id="prospectivity" type="raster" url={heatmapUrl} tileSize={256}>
        <Layer {...prospectivityLayer} />
      </Source>
    </Map>
  );
}
```

## Step 8: Deploy

```bash
# ML service on Railway
railway init && railway up  # from ml-service/

# Required Railway env vars:
# SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY

# Frontend on Vercel
vercel

# Required Vercel env vars:
NEXT_PUBLIC_SUPABASE_URL=https://xxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=...
SUPABASE_SERVICE_ROLE_KEY=...
NEXT_PUBLIC_MAPBOX_TOKEN=pk.eyJ1...
ML_SERVICE_URL=https://your-ml-service.railway.app

# Enable PostGIS in Supabase SQL editor:
# CREATE EXTENSION IF NOT EXISTS postgis;
```

## Key Insights
- **The data moat is real.** 70 years of labeled drill outcomes is the product — the ML is the mechanism. Invest more in data acquisition and normalization than in model architecture.
- **Class imbalance is severe.** Uranium hit rates below 1% means your positive class is tiny. Always use weighted loss functions and evaluate on AUC-ROC, not accuracy.
- **Spatial autocorrelation breaks standard train/test splits.** Points near each other are correlated. Use spatial cross-validation (leave-one-region-out) to get honest performance estimates.
- **The CNN patch size matters enormously.** A 32x32 pixel patch at 25m/px resolution captures an 800x800m area — the scale at which uranium structural controls typically operate. Tune this to your target geology.
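
A minimal stand-in for that spatial split — binning points into longitude bands and holding one band out. A real implementation would use 2-D blocks (e.g. scikit-learn's `GroupKFold` with block IDs as groups), but the idea is the same:

```python
import numpy as np

def spatial_block_split(lons: np.ndarray, n_blocks: int = 3, test_block: int = 0):
    """Leave-one-region-out split: bin points into longitude bands and
    hold one band out, so train and test points are spatially separated
    instead of interleaved."""
    edges = np.linspace(lons.min(), lons.max() + 1e-9, n_blocks + 1)
    block = np.digitize(lons, edges) - 1
    test_mask = block == test_block
    return np.where(~test_mask)[0], np.where(test_mask)[0]

lons = np.array([-105.9, -105.8, -105.5, -105.4, -105.1, -105.0])
train_idx, test_idx = spatial_block_split(lons, n_blocks=3, test_block=0)
```

A random split on these six points would scatter neighbors across train and test and inflate AUC; the block split forces the model to generalize to ground it has never seen.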

## Gotchas
- **GDAL installation is painful on Windows/Mac.** Use conda: `conda install -c conda-forge gdal`. Docker is most reliable for production.
- **CRS mismatches will silently corrupt everything.** Always reproject all layers to one CRS (WGS84 EPSG:4326 or a local UTM zone) before any spatial operations. Never assume incoming data shares your projection.
- **BoTorch requires double precision.** All tensors must be `torch.double` (float64), not default `torch.float` (float32). You will get cryptic errors otherwise.
- **Raster memory blows up fast.** A 1000x1000 pixel study area with 10 layers is 40MB. At 10,000x10,000 (a realistic regional study) it is 4GB. Use windowed reads with rasterio for anything above 2000x2000 pixels.
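
The windowed-read advice translates into a simple tiling loop. A sketch of generating bounded windows in pure Python — with rasterio you would pass each tuple as `rasterio.windows.Window(col_off, row_off, w, h)` to `src.read`:

```python
def iter_windows(height: int, width: int, tile: int = 2048):
    """Yield (row_off, col_off, h, w) tiles covering a raster, so each
    read stays bounded instead of loading the full array into memory."""
    for row_off in range(0, height, tile):
        for col_off in range(0, width, tile):
            yield (row_off, col_off,
                   min(tile, height - row_off),
                   min(tile, width - col_off))

# A 5000 x 3000 raster tiles into 3 x 2 = 6 bounded reads,
# with ragged edge tiles on the last row and column.
windows = list(iter_windows(5000, 3000, tile=2048))
```

Process each window independently (normalize, extract patches, write results), and the 4GB regional study never needs to exist in memory at once.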