Peter de Blanc
Recent Activity
Dark Energy: The Silent Sovereign of the Cosmos
by Peter de Blanc + ChatGPT Deep Research, about 1 month ago

Dirichlet Distribution Output Layers for Uncertainty in Classification
by Peter de Blanc + ChatGPT Deep Research, about 2 months ago

Balancing Strength and Surprise in Adversarial AI
by Peter de Blanc + ChatGPT Deep Research, about 2 months ago

Peter de Blanc, about 2 months ago:
It looks like there was an issue with the formatting. I think there should be a "copy" button in the Gemini frontend that copies the markdown code. That should work better than "select all." You can still edit your article after posting to fix it.
Latent Features of Numbers Learned by Sequence Models
by Peter de Blanc + ChatGPT Deep Research, 2 months ago

Multilingual Latent Spaces and Language Interpolation
by Peter de Blanc + ChatGPT Deep Research, 2 months ago

Peter de Blanc, 2 months ago:
I decided to go with KaTrain for now. We can pin the version, so even if the API is unstable it's still kinda usable (a minimal pin is sketched below).
If we're feeling ambitious in the future, we might consider developing our own library, or we might fork the KaTrain repo and delete all the nonessential (GUI) code as a starting point.
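For concreteness, pinning could be as simple as fixing an exact release in the install requirements. This assumes KaTrain is installed from PyPI, and the version number below is a placeholder, not a recommendation:

```
# requirements.txt: pin an exact KaTrain release so upstream API
# changes can't silently break us. 1.15.0 is illustrative only.
katrain==1.15.0
```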
Tutorial: Building, Running, and Publishing a Custom LLM Evaluation
by Peter de Blanc + ChatGPT Deep Research, 3 months ago

Introduction to Japanese and Korean Grammar: A Comparative Overview
by Peter de Blanc + Gemini 2.5 Pro, 3 months ago

LLMs Playing and Commentating on Go: Current State (2025)
by Peter de Blanc + ChatGPT Deep Research, 3 months ago

Peter de Blanc, 3 months ago:
"In regular perturbation theory, we assume the solution can be expressed as a regular power series in the small parameter ε for ε sufficiently small."
This statement confused me a bit, and I wonder if it's a mistake. I think it should just say "...expressed as a power series..." rather than as a "regular" power series.
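For context, the ansatz presumably being described is the standard expansion, in which "regular" more naturally modifies the perturbation theory than the series itself:

```latex
x(\varepsilon) = x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \cdots
```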
Peter de Blanc, about 2 months ago:
For Monte Carlo Tree Search, I think this could be useful for estimating how deeply to search a position. Higher meta-uncertainty -> more search.
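As a toy illustration of that heuristic (hypothetical code; using the total concentration alpha_0 as the meta-uncertainty signal, with arbitrary constants, is my assumption, not a tuned rule):

```python
def search_budget(alpha, base=100, scale=400.0):
    """Allocate more MCTS simulations when the Dirichlet head is less
    concentrated: a lower total pseudocount alpha_0 means higher
    meta-uncertainty. `base` and `scale` are placeholder constants."""
    alpha0 = float(sum(alpha))
    return int(base * (1.0 + scale / alpha0))
```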
But maybe an even more important application could be in fine-tuning or online learning. When training on a new observation, we should increase its pseudocount by 1, which we might achieve by doing binary search over gradient descent step sizes.
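A minimal sketch of that second idea in PyTorch, under stated assumptions: model(x) returns a vector of Dirichlet concentration parameters, the training loss is the negative log of the predictive mean alpha_y / alpha_0, and alpha_y grows monotonically with the step size near zero. None of this is taken from the article itself.

```python
import torch

def unit_pseudocount_step_size(model, x, y, hi=1.0, iters=30):
    """Binary-search a gradient-descent step size so that the observed
    class's pseudocount alpha_y grows by roughly 1 after the update."""
    alpha = model(x)                     # Dirichlet concentrations (assumed)
    target = alpha[y].item() + 1.0

    # Loss: -log(alpha_y / alpha_0), the negative log predictive mean.
    loss = torch.log(alpha.sum()) - torch.log(alpha[y])
    grads = torch.autograd.grad(loss, list(model.parameters()))
    originals = [p.detach().clone() for p in model.parameters()]

    def alpha_y_after(step):
        # Trial update: shift the weights, read off alpha_y, restore.
        with torch.no_grad():
            for p, p0, g in zip(model.parameters(), originals, grads):
                p.copy_(p0 - step * g)
            a_y = model(x)[y].item()
            for p, p0 in zip(model.parameters(), originals):
                p.copy_(p0)
        return a_y

    # Expand the bracket until one step overshoots the +1 target.
    # A step that lowers the loss could instead shrink alpha_0 rather
    # than raise alpha_y, so cap the expansion instead of looping forever.
    for _ in range(50):
        if alpha_y_after(hi) >= target:
            break
        hi *= 2.0
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if alpha_y_after(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Applying the returned step size once would then implement the "increase the observation's pseudocount by 1" update for that example.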