What are you working on? Any new ideas that you're thinking about?
ComputerPoker.ai is a website where users can play simulated poker tournaments against GTO Bots to learn GTO poker strategy in a fun and low-risk environment.
My motivation for creating ComputerPoker.ai was feeling a bit overwhelmed by some of the professional poker tools out there for learning GTO play. For some tools, simply learning how to operate the tool itself felt like a second job. With ComputerPoker.ai, players play against bots that simulate GTO play to learn what it "feels like" to play against GTO opponents, without having to turn any knobs or dials (feedback is real-time as you play).
The Beta tester code for HN users is: HackerNews2026. All feedback is welcome! Please send suggestions for improvement or bug reports to [email protected], or alternatively leave a comment below. I'll do my best to answer any questions.
As for the product offering, the website is designed to teach players optimal poker strategy (GTO) in simulated Texas Hold 'Em poker tournaments. Our value proposition is that if you can consistently beat the bots, you will fare well in live poker tournaments (adjusting for your opponents' play, of course).
In addition to GTO pre-flop quizzes and pre-flop charts, users have the ability to simulate poker tournaments from start-to-finish and get feedback on their decisions _in real-time_ in a fun and low-risk environment.
For those interested, the tech stack is Django deployed on AWS via Terraform and SaltStack, a Postgres RDS database backend, and an HTMX frontend with WebSockets via Django Channels and Redis (Nginx serving as reverse proxy, with Cloudflare DNS and SSL). During the project I used Claude Code to help with various boilerplate aspects of the codebase, including building out the Terraform and SaltStack repos, and of course speeding up Django development.
Users are graded pre-flop based on the covered pre-flop scenarios (two-way pots only for now). Post-flop, users are graded by a residual MLP PyTorch model. We have built an in-house solver in Rust using the discounted CFR++ algorithm. The PyTorch model approximates GTO play post-flop (again, two-way pots only currently) based on training data with raises, EV, and realistic ranges for the OOP and IP players. Because the post-flop decisions are based on a model that will always be a work in progress, I refer to these decisions as GTOA (or "GTO Approximate").
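For the curious, "residual MLP" here generally means an MLP with skip connections between blocks. Below is an illustrative PyTorch layout under assumed layer sizes and an assumed action head; it is a sketch of the architecture class, not the actual ComputerPoker.ai model:

```python
# Hypothetical residual MLP in PyTorch. Input features, widths, and the
# action head are assumptions for illustration, not the real model.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        # Skip connection: the block learns a correction on top of x
        return torch.relu(x + self.net(x))

class GTOApproxNet(nn.Module):
    def __init__(self, n_features=128, hidden=256, n_blocks=4, n_actions=4):
        super().__init__()
        self.stem = nn.Linear(n_features, hidden)
        self.blocks = nn.Sequential(*[ResidualBlock(hidden) for _ in range(n_blocks)])
        self.head = nn.Linear(hidden, n_actions)  # e.g. fold/check-call/raise buckets

    def forward(self, x):
        return self.head(self.blocks(torch.relu(self.stem(x))))
```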
Version 8 of the PyTorch model is the first one I'm happy with, and I actually find it quite difficult to play against. If you manage to beat the bots, please do let me know how many tries it took! For those curious, the PyTorch params for the most recent run are below (I trained on a gaming PC via Linux on WSL2 using an AMD GPU).
The website is live in Beta mode as I gather feedback on how things are structured and work out any bugs/kinks. If you have any suggestions for improvements I’d love to hear them. Subscriptions are live so if anyone wanted to test the Stripe payment processing flow I certainly wouldn’t mind! ;-)
p.s. This is a side gig for me. I am currently looking for full-time work, either fully remote or on-site in London, UK (the LLC that runs ComputerPoker.ai operates out of the USA, but I am based full-time in the UK and authorized to work in both the UK and USA). If you or someone you know is looking for an SRE with strong software engineering skills, please let me know!
I always see these threads and think I'm not working on anything, but I just realised that's a lie. I'm exploring a couple of things right now, both heavily AI-supported:
Simracing trainer.
I love simracing, I'm moderately competitive and want to improve, and I like to be efficient with my practice. Having access to (and using) a lot of telemetry, I noticed that the "turn a few laps, load telemetry, compare against the reference lap, try again" loop is not as efficient as it could be.
Also, a lot of my telemetry analysis is very rote and rules-based: look at the biggest laptime delta jump against the reference, then try to determine the cause among a few usual suspects.
So I have started experimenting with a system that reads the iRacing telemetry in real time, compares it live against the reference telemetry, finds the biggest delta jumps, and tries to find the root cause of the time loss using an increasingly sophisticated GOFAI rule-and-pattern-matching system. That report is then fed to a cheap LLM call to be condensed into clear advice, and the result goes to the free Microsoft TTS API. So I get instant feedback on where I'm slow, and maybe even why.
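For flavor, here is a minimal sketch of just the delta-jump step, assuming both laps have already been resampled onto a common lap-distance grid (the iRacing plumbing, the rule engine, and the LLM/TTS calls are all omitted; the names are made up):

```python
# Minimal sketch of "find the biggest delta jumps" against a reference lap.
# Assumes both laps are resampled onto a common, evenly spaced
# lap-distance grid; everything else in the pipeline is omitted.
import numpy as np

def biggest_delta_jumps(dist, t_current, t_reference, window=50.0, top_n=3):
    """Return (lap distance, seconds lost) for the worst track sections.

    dist:        lap-distance grid in meters (evenly spaced)
    t_current:   elapsed time of the current lap at each grid point
    t_reference: elapsed time of the reference lap at each grid point
    window:      track length (m) over which to measure the delta change
    """
    delta = t_current - t_reference           # positive = slower than reference
    step = max(1, int(window / (dist[1] - dist[0])))
    loss = delta[step:] - delta[:-step]       # time lost per window of track
    worst = np.argsort(loss)[-top_n:][::-1]   # largest losses first
    return [(float(dist[i]), float(loss[i])) for i in worst]
```

Each flagged section can then be run through the rule set (braking too early, late on throttle, and so on) before the summary is handed to the LLM.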
So far I fear it's making me faster mostly through all the test laps involved rather than through the advice itself, but when it clicks it does feel magical and really helps.
But sometimes I feel like I'm just speedrunning the collapse of 70s AI, as it feels a bit too brittle and situational.
I have also added additional tools for tracking improvement across sessions, finding statistically problematic corners (where am I plain bad? where am I inconsistent?), and even training my muscle memory by tracing fast drivers' brake traces with my pedal.
Yay compiler: The other ongoing thing is a clean-room reimplementation of Jon Blow's Jai. I've been curious about the language for years, but it's in a closed beta, and for some reason I've never felt like asking Jon to get into it. I'm not really a game dev, so I wouldn't even know what to put in the request.
So now I have 100k+ lines of Rust that can compile a very significant subset of the publicly available Jai source code. I just used various LLMs to condense the public information about the language, came up with a dev plan, and started chipping away at it. Once I had something in a kind-of-working state, I started on the big "Way to Jai" tutorial, making sure every example there compiles and works as intended, fixing errors or missing features one by one.
I mostly use Claude Code or Codex, but sometimes I have them guide me through a new feature and do the edits myself while they explain, so I get to know how things really work under the hood.
It's a silly, pointless project, but for some reason I find it very satisfying to watch it compile the examples.
nocodo: Sheets Driven Development
I think that in this era of coding agents, more people feel empowered to build their own workflow automation. But for the vast majority of non-technical folks, Claude Code or even Replit are not easy-to-use solutions. So I am taking inspiration from spreadsheets and using that as the primary UX for building a coding agent.
Solo project since 4+ years: https://kastanj.ch/en?mid=hn47741527
The goal is to make every recipe foolproof on the first try, similar to walking into a restaurant and just picking what you want to eat without thinking about the details. I want the same experience at home: just pick what you want to eat, with recipes that tell you exactly what to do and no magic involved.
Technically it is probably very different from other recipe apps. The database is a huge graph that captures the relations between ingredients and processes. Imagine 'raw potato'->'peeled potato'->'boiled potato'->'mashed potato'. It is all the same ingredient, just at different stages of processing. The edges between the nodes define the processes, and the nodes are physical things. Recipes are defined as subsets of the graph. The graph can also wrap around into itself, which is apparently needed to properly define some European dishes in this system. The graph also has multiple layers to capture different relationships that are not process-related.
Why was it designed this way? Because food and cooking are complex to define. This design is the only one I have found that can capture enough of these complex relationships for the computer to also 'understand' what is going on.
My favourite thing about this is that each recipe is strictly defined in the graph. If the recipe skips a step, or something is undefined, the computer knows that the recipe is incomplete. It won't ask you to do 10 things at the same time and then have something magically appear out of nowhere. It is like compile time checking but for recipes.
It also enables some other superpowers, for example:
• Exclude the meat part of the graph = vegetarian. The same trick works for allergies.
• Include the meat part of the graph = only show me recipes that contain meat.
• Recursive search: search for 'potato' and the computer will know that french fries are made from potato. It can therefore tell you that you could make the hamburger meal, but that you will need to complete the french fries recipe first, which should take 60 minutes.
• Adjustable recipe difficulty (experimental): it knows which steps can be done in parallel, and which can't, based on how the nodes connect. A beginner can get a slower-paced recipe with breathing room between steps, while someone more experienced can go at a faster pace and do more things in parallel.
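A toy sketch of the graph idea and the recursive search, with invented node names and a minimal edge set (not the real schema):

```python
# Illustrative sketch of the ingredient/process graph; node and edge
# names are invented for the example, not the app's real schema.
from collections import defaultdict

# Nodes are physical things; directed edges are processes.
edges = {
    ("raw potato", "peeled potato"): "peel",
    ("peeled potato", "boiled potato"): "boil",
    ("boiled potato", "mashed potato"): "mash",
    ("peeled potato", "french fries"): "fry",
}

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

def reachable_from(node):
    """Recursive search: everything that can be made starting from `node`."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# A search for 'potato' can discover that french fries are downstream:
print(reachable_from("raw potato"))
# e.g. {'peeled potato', 'boiled potato', 'mashed potato', 'french fries'}
```

A vegetarian filter falls out of the same structure: drop every node reachable from the meat subgraph and keep only recipes whose subsets survive.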
If I had known what it would take to build this, I would never have gotten started. I completely underestimated the complexity of the problem I was trying to solve. But here we are, and now it is basically done and working.
The website captures the key points from a non-technical point of view, and you can enter your email to get notified when it launches in your country.
minimal now pages via chat - https://minnow.social
the Indie Internet Index - https://iii.social
Codify — democratic digital public infrastructure that turns your problems into structured, executable programs.
The idea: describe any problem in plain language (voice or text), and AI codifies it into a structured program with the right people, steps, timeline, and agents to get it done. It's a 5-step wizard: Define Problem → Codify Solution → Setup Program → Execute Program → Verify Outcome.
It runs across 50+ domains — codify.healthcare (EMR backend), codify.education (LMS backend), codify.finance, codify.careers (HRM backend), codify.law, plus 13 city domains (codify.nyc, codify.miami, codify.london, codify.tokyo, etc.). Each domain tailors the AI assessment and program output to that sector.
The platform is Project20x — think of it as the infrastructure layer. If Codify is the verb ("codify your healthcare problem into a care program"), Project20x is the operating system that runs it all: multi-tenant governance, AI agent orchestration, and domain-specific sys-cores for healthcare, education, city services, etc.
Every US federal agency and state-level department has a subdomain — ed.usa.project20x.com (Dept of Education), doj.usa.project20x.com, hhs.usa.project20x.com, etc. — with AI agents representing each agency's mandate. Same structure at the state level.
The political side: Project20x hosts policy management for both parties — dnc.project20x.com and rnc.project20x.com — where legislative intent gets codified into executable governance through a 10-step policy lifecycle. Right now I'm building out the multi-agent environment so agency agents can negotiate with each other, make deals, and send policy proposals up to the HITL (human-in-the-loop) politician for approval. Each elected official has a profile (e.g. https://project20x.com/u/donald-trump) where constituents can engage and where policy proposals land for review.
The name is a nod to structured policy frameworks, but the goal is nonpartisan infrastructure: democratically governed essential services delivered as AI-native social programs.
Stack: Nuxt 2/Vue 2 frontend, Laravel 10 API, Python/LangGraph agent orchestration, Flutter mobile app. Currently live across all domains.
https://project20x.com | https://codify.healthcare | https://codify.education | https://dnc.project20x.com | https://rnc.project20x.com etc...
nothing
Yet another dual-panel file manager. FAR + vscode. https://dotdir.dev/
https://github.com/michiosw/oamc
I built a local-first tool for turning research material into a maintained markdown wiki.
The idea is simple: instead of repeatedly querying raw notes or documents, sources get ingested into a structured wiki with source pages, concept pages, entity pages, and synthesis pages. Then questions are asked against that wiki, and useful answers get written back as new pages.
Everything stays file-based and Obsidian-friendly. There’s also a local dashboard and a macOS menubar app so it can keep running in the background.
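As a rough illustration of the write-back step, a synthesis page might be emitted like the sketch below; the frontmatter fields and the function are assumptions for illustration, not oamc's actual schema:

```python
# Sketch of the write-back idea: an answered question becomes a new
# markdown page with Obsidian-style wiki-links. The frontmatter fields
# are illustrative assumptions, not oamc's actual schema.
from datetime import date
from pathlib import Path

def write_synthesis_page(vault: Path, title: str, body: str, sources: list[str]) -> Path:
    links = ", ".join(f"[[{s}]]" for s in sources)  # link back to source pages
    page = (
        "---\n"
        "type: synthesis\n"
        f"created: {date.today().isoformat()}\n"
        "---\n\n"
        f"# {title}\n\n{body}\n\nSources: {links}\n"
    )
    path = vault / f"{title}.md"
    path.write_text(page, encoding="utf-8")
    return path
```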
I was trying to build something that feels more cumulative than chat, but much lighter than setting up a full RAG stack.
The original inspiration was Andrej Karpathy’s “LLM Wiki” idea. I also took some UI/product inspiration from wiki-os.
Curious if other people here have found wiki-first or markdown-first workflows more useful than pure retrieval for personal research and project memory.
Following up on the comment I made last month: I'm a solo dev building a handful of apps across different niches.
- Plask ( https://plask.dev ) — Google Analytics (GA4) connected analytics dashboard for people who ship multiple products. I got tired of manually checking separate GA4 properties for all my apps and SaaS projects, and setting up individual MCP integrations for each felt like overkill when I just wanted a quick overview. So I built a single dashboard that connects all your GA4 properties, runs statistical anomaly detection, sends alerts when something breaks, and generates AI weekly digests. Free tier for 2 properties, Pro at $9/mo.
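(For a rough sense of what "statistical anomaly detection" can mean here, a robust z-score over a property's recent daily numbers is the classic check; this is a guess at the approach, not Plask's actual implementation:)

```python
# Guess at a robust daily-traffic anomaly check (modified z-score using
# the median absolute deviation); not Plask's actual implementation.
import statistics

def is_anomalous(history: list[float], today: float, threshold: float = 3.5) -> bool:
    median = statistics.median(history)
    mad = statistics.median([abs(x - median) for x in history]) or 1e-9
    modified_z = 0.6745 * (today - median) / mad  # 0.6745: consistency factor for normal data
    return abs(modified_z) > threshold
```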
- Kvile ( https://kvile.app ) — A lightweight desktop HTTP client built with Rust + Tauri. Native .http file support (JetBrains/VS Code/Kulala compatible), Monaco editor, JS pre/post scripts, SQLite-backed history. Sub-second startup. MIT licensed, no cloud, your requests stay on your machine. Think Postman without the bloat and login walls.
- APIDrift ( https://apidrift.dev ) — Monitors changelogs for APIs, SDKs, and libraries you depend on so you don't get blindsided by upstream breaking changes. Scrapes docs, diffs changes, classifies severity with AI, and sends digest emails. Track your dependencies, get alerted when something breaks. Free tier covers 3 sources with weekly digests. Built with Next.js, Supabase, and Gemini Flash.
- Mockingjay ( https://apps.apple.com/app/id6758616261 ) — iOS app that records video and streams AES-256-GCM encrypted chunks to your Google Drive in real-time. By the time someone takes your phone, the footage is already safe in the cloud. Built for journalists, activists, and anyone who needs tamper-proof evidence. Features a duress PIN that wipes local keys while preserving cloud backups, and a fake sleep mode that makes the phone look powered off during recording.
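(The app itself is presumably Swift/CryptoKit on iOS; the Python sketch below only illustrates the per-chunk AES-256-GCM scheme, and details such as binding the chunk index as associated data are my assumptions:)

```python
# Illustration of per-chunk AES-256-GCM streaming encryption, using the
# `cryptography` package. Details (nonce handling, associated data) are
# assumptions for the sketch, not the app's actual design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the local key a duress PIN would wipe
aead = AESGCM(key)

def encrypt_chunk(chunk: bytes, chunk_index: int) -> bytes:
    nonce = os.urandom(12)  # unique 96-bit nonce per chunk
    # Bind the chunk index as associated data so chunks can't be silently reordered
    ciphertext = aead.encrypt(nonce, chunk, str(chunk_index).encode())
    return nonce + ciphertext  # upload nonce || ciphertext
```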
- Stao ( https://stao.app ) — A simple sit/stand reminder for standing desk users. Runs in the system tray, tracks your streaks, zero setup. Available on macOS, Windows, Linux, iOS, and Android.
- MyVisualRoutine ( https://myvisualroutine.com ) — This one is personal. I have three kids, two with severe disabilities. Visual schedules (laminated cards, velcro boards) are a lifeline for non-verbal children, but they're a nightmare to manage and they don't leave the house. So I built an app that lets you create a full visual routine in about 20 seconds and take it anywhere. Choice boards, First/Then boards, day plans, 50+ preloaded activities, works fully offline. Free tier is genuinely usable. Available on iOS and Android.
- Linetris ( https://apps.apple.com/app/id6759858457 ), a daily puzzle game where you fill an 8x8 grid with Tetris-like pieces to clear lines. Think Wordle meets Tetris. Daily challenges, leaderboards, and competitive play against friends.
And much more: you can find the rest on my blog ( https://tskulbru.dev ). I'm even doing an agentic workflow course for those who haven't gotten started with that yet. Although I guess most people here have :)