
1M Satellites: Can It Be Done? - Part 2
📌 This is Part 2 of a 2-part series:
← Part 1: Economic Analysis
Part 2 (this post): Manufacturing, regulatory, and physical constraints
Last Updated: February 5, 2026
⚠️ Accuracy Disclaimer: This analysis synthesizes data from regulatory filings, manufacturing precedents, and aerospace industry reports. While we’ve made every effort to verify production rates, regulatory approvals, and physical constraints, the space manufacturing landscape evolves rapidly. Launch capacity numbers reflect current FAA approvals as of February 2026. Timeline projections are based on historical precedents from Tesla, Apollo, and Starlink programs. Readers are encouraged to verify critical details independently. ...

Space AI Economics - Part 1
📌 This is Part 1 of a 2-part series:
Part 1 (this post): Economic viability and cost analysis
Part 2: Manufacturing Reality Check →
Last Updated: February 5, 2026
⚠️ Accuracy Disclaimer: This analysis synthesizes data from 60+ sources including SpaceX filings, FAA approvals, academic research, and industry reports. While we’ve made every effort to verify claims and cite primary sources, the rapidly evolving space industry means some figures may become outdated. Launch capacity approvals, cost projections, and timeline estimates should be treated as point-in-time assessments. When specific claims are unverified or based on company projections, we note this explicitly. Readers are encouraged to verify critical details independently. ...

Why AI Shouldn't Orchestrate Workflows
I’ve learned through experience that there’s a fundamental truth about AI-assisted development: AI enforcement is not assured. You can write the most detailed skill file. You can craft the perfect system prompt. You can set up MCP servers with every tool imaginable. But here’s the uncomfortable truth: the AI decides whether to follow any of it. That’s not enforcement. That’s hope.

TL;DR: LLMs are probabilistic and can’t guarantee workflow compliance. Skills and MCP tools extend capabilities but don’t enforce behavior. Claude Code Hooks solve this by providing deterministic control points—SessionStart, PreToolUse, and PostToolUse—that ensure critical actions always happen. As AI-generated code scales, you need automated validation systems that codify architectural rules, business constraints, and design patterns. Workflow orchestration must live outside the AI. ...
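As a rough illustration of the deterministic control points described above, a Claude Code hook can be registered in `settings.json` so that a validation script runs before every shell command the AI attempts (the script path here is a hypothetical placeholder):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/validate-command.sh"
          }
        ]
      }
    ]
  }
}
```

Because the hook fires in the tooling layer rather than in the prompt, it runs every time regardless of what the model decides, which is the enforcement property the post argues prompts alone cannot provide.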

How Brains and AI Work
Can machines think like humans? Explore the fascinating comparison between biological brains (20 watts, continuous learning) and artificial neural networks (megawatts to train, frozen after training). Understand thinking, creativity, and consciousness.
Build LLM Guardrails, Not Better Prompts
Instructions and tools tell LLMs what to do, but guardrails ensure they do it. Discover how to build validation feedback loops that make LLM outputs reliable through automated guardrails—with a 10-minute quick start guide.
Building an MCP Server in 2 Hours
Built a fully functional Codecov MCP server in 2 hours using Claude Code to extend Claude Code itself. From zero to working server with authentication, API integration, and real-world lessons learned.
GitHub Actions Pricing Update Dec 2025
Breaking: GitHub postponed self-hosted runner pricing changes scheduled for March 2026 after developer community feedback. Complete analysis of the December 2025 pricing update and what’s next.

My Kubernetes setup
My CI/CD setup leverages GitHub workflows fed by secrets and variables from my Spring Cloud Config server. ...
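A minimal sketch of that pattern, assuming a hypothetical app name `myapp` and a `CONFIG_SERVER_URL` repository secret: Spring Cloud Config servers expose resolved properties over HTTP at `/{application}/{profile}`, which a workflow step can fetch before deploying.

```yaml
# Hypothetical workflow: pull deploy-time config from a Spring Cloud Config server
name: deploy
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Fetch resolved config
        env:
          CONFIG_SERVER_URL: ${{ secrets.CONFIG_SERVER_URL }}
        run: |
          # Spring Cloud Config serves properties at /{application}/{profile}
          curl -fsS "$CONFIG_SERVER_URL/myapp/prod" -o config.json
      - name: Deploy to Kubernetes
        run: kubectl apply -f k8s/
```

This keeps environment-specific values out of the repository: the workflow only holds the config server's address as a secret, and the server remains the single source of truth.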

Multi-Project Deployer: 100% LLM Code
Built a complete infrastructure deployment orchestrator in 3 days with 100% AI-generated code. Automates Pulumi deployments across multiple projects while handling complex dependencies.

Software Development with LLM
After multiple failed attempts, I finally cracked the code for using LLMs to generate complex, production-ready software. Learn how AI went from a frustrating tool to a revolutionary development partner.