
Demystifying the Determinism of Large Language Models
How LLMs mix randomness and reproducibility, and what Monte Carlo methods can teach us about balancing the two.



We know exactly how Large Language Models work. We also have no idea how they work. Both statements are true, and together they form a fascinating paradox at the intersection of AI and neuroscience.
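Monte Carlo methods offer a useful lens here: they are deliberately random, yet a fixed seed makes any single run exactly reproducible. A toy sketch (the function name and parameters are illustrative, not from any particular library):

```python
import random

def estimate_pi(n_samples, seed=None):
    """Monte Carlo estimate of pi; a fixed seed makes the run reproducible."""
    rng = random.Random(seed)  # seeded -> deterministic stream of "random" draws
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point falls inside the unit quarter-circle
            inside += 1
    return 4 * inside / n_samples

# Same seed, same answer: randomness in the method, determinism in the run.
a = estimate_pi(100_000, seed=42)
b = estimate_pi(100_000, seed=42)
assert a == b
```

The same balance shows up in LLM sampling: temperature injects randomness into token selection, while a fixed seed (where the serving stack supports one) can pin down a reproducible trajectory.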