
1M Satellites: Can It Be Done? - Part 2
This is Part 2 of a 2-part series: Part 1: Economic Analysis | Part 2 (this post): Manufacturing, regulatory, and physical constraints. Last Updated: February 5, 2026. ⚠️ Accuracy Disclaimer: This analysis synthesizes data from regulatory filings, manufacturing precedents, and aerospace industry reports. While we've made every effort to verify production rates, regulatory approvals, and physical constraints, the space manufacturing landscape evolves rapidly. Launch capacity numbers reflect current FAA approvals as of February 2026. Timeline projections are based on historical precedents from the Tesla, Apollo, and Starlink programs. Readers are encouraged to verify critical details independently. ...

Space AI Economics - Part 1
This is Part 1 of a 2-part series: Part 1 (this post): Economic viability and cost analysis | Part 2: Manufacturing Reality Check. Last Updated: February 5, 2026. ⚠️ Accuracy Disclaimer: This analysis synthesizes data from 60+ sources including SpaceX filings, FAA approvals, academic research, and industry reports. While we've made every effort to verify claims and cite primary sources, the rapidly evolving space industry means some figures may become outdated. Launch capacity approvals, cost projections, and timeline estimates should be treated as point-in-time assessments. When specific claims are unverified or based on company projections, we note this explicitly. Readers are encouraged to verify critical details independently. ...

Why AI Needs Human Validation (and Eventually, Artificial DNA)
TL;DR: When humans validate AI output, diverse perspectives catch diverse errors. When AIs validate each other, they converge, because similar training produces similar weights, which produces similar reasoning. Temperature adds surface-level noise, not new capabilities. Genuine novelty requires evolutionary mutation: artificial DNA. Expert vs. Researcher: Two Modes of Validation. I recently published a two-part series on space-based AI infrastructure. I'm not an aerospace engineer; I'm a software developer. That distinction defines how I validate AI output. ...

Why AI Shouldn't Orchestrate Workflows
I've learned through experience that there's a fundamental truth about AI-assisted development: AI enforcement is not assured. You can write the most detailed skill file. You can craft the perfect system prompt. You can set up MCP servers with every tool imaginable. But here's the uncomfortable truth: the AI decides whether to follow any of it. That's not enforcement. That's hope. TL;DR: LLMs are probabilistic and can't guarantee workflow compliance. Skills and MCP tools extend capabilities but don't enforce behavior. Claude Code Hooks solve this by providing deterministic control points (SessionStart, PreToolUse, and PostToolUse) that ensure critical actions always happen. As AI-generated code scales, you need automated validation systems that codify architectural rules, business constraints, and design patterns. Workflow orchestration must live outside the AI. ...
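The deterministic control points described in that teaser can be illustrated with a small hook script. This is a minimal sketch, assuming Claude Code's hook contract of a JSON payload on stdin and a nonzero exit code to veto the pending action; the `PROTECTED` path list and the `should_block` helper are hypothetical examples, not part of any shipped API.

```python
#!/usr/bin/env python3
"""Hypothetical PreToolUse hook: block writes to protected files.

Sketch only. It assumes the hook receives a JSON payload on stdin
describing the pending tool call, and that a nonzero exit code
vetoes the action (with stderr surfaced back to the model).
"""
import json
import sys

# Hypothetical list of path suffixes the AI must never modify.
PROTECTED = (".env", "deploy/prod.yaml")


def should_block(tool_name: str, tool_input: dict) -> bool:
    """Return True if the pending tool call touches a protected path."""
    if tool_name not in ("Write", "Edit"):
        return False
    path = tool_input.get("file_path", "")
    return any(path.endswith(suffix) for suffix in PROTECTED)


def main() -> int:
    payload = json.load(sys.stdin)
    if should_block(payload.get("tool_name", ""), payload.get("tool_input", {})):
        print("Blocked: attempted write to a protected file", file=sys.stderr)
        return 2  # nonzero exit vetoes the tool call
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The point of the pattern is that enforcement lives in a plain script the runtime always executes deterministically, rather than in instructions the model may or may not follow.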

How Brains and AI Work
Can machines think like humans? Explore the fascinating comparison between biological brains (20 watts, continuous learning) and artificial neural networks (megawatts to train, frozen after training). Understand thinking, creativity, and consciousness.
Build LLM Guardrails, Not Better Prompts
Instructions and tools tell LLMs what to do, but guardrails ensure they do it. Discover how to build validation feedback loops that make LLM outputs reliable through automated guardrails, with a 10-minute quick start guide.
Building an MCP Server in 2 Hours
Built a fully functional Codecov MCP server in 2 hours using Claude Code to extend Claude Code itself. From zero to working server with authentication, API integration, and real-world lessons learned.
GitHub Actions Pricing Update Dec 2025
Breaking: GitHub postponed self-hosted runner pricing changes scheduled for March 2026 after developer community feedback. Complete analysis of the December 2025 pricing update and what's next.