What are you working on? Any new ideas that you're thinking about?
(1) I've somewhat stumbled into a persona as a Fox-photographer who strongly communicates that he is a public affordance, which (a) helps me get better photos of people, (b) gets me flagged down by people telling me about interesting things going on, like
https://mastodon.social/@UP8/116021033821248982
and (c) results in handing out several business cards a day
https://mastodon.social/@UP8/115901190470904729
and I'm within sight of having to reorder cards. I just finished a landing page for the cards (before, they pointed to one of my socials).
Since I have to reorder anyway, I'm planning a next-generation card with a unique chibi and a unique QR code that will let me personalize the landing page per card; in particular, I'll be able to share a photo with just the person who holds that card.
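The per-card piece can be as simple as a random URL-safe token baked into each QR code. A minimal sketch (the base URL and helper names are placeholders; the actual QR rendering would be a library layered on top of these URLs):

```python
import secrets

BASE = "https://example.com/card"  # placeholder for the real landing page

def new_card_token(nbytes: int = 8) -> str:
    # URL-safe random token; 8 bytes is plenty for a run of business cards
    return secrets.token_urlsafe(nbytes)

def card_url(token: str) -> str:
    # The landing page looks the token up and serves a per-card view,
    # e.g. a photo shared only with that card's holder
    return f"{BASE}/{token}"

tokens = [new_card_token() for _ in range(500)]
urls = [card_url(t) for t in tokens]
```

Each printed card then gets its own token, and the server side only needs a token-to-content lookup.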
============
(2) I've been doing heart rate variability biofeedback experiments and I have this demo
https://gen5.info/demo/biofeedback/
which is still not quite done but has source code at
https://github.com/paulhoule/VulpesVision
It works with most heart rate monitors that support the standard BTLE heart-rate API, not just the H10. I run it on the Windows desktop with Chrome and with Bluefy on iPad. Once it displays the instantaneous heart rate I can control
https://en.wikipedia.org/wiki/Mayer_waves
by following the slope of the instantaneous heart rate: breathing out when it is slowing down and breathing in when it is speeding up. This greatly intensifies the Mayer wave and increases the SD1 metric. I think this drops my blood pressure significantly while I'm doing it. It needs better instructions and some kind of auditory cue so I can entrain my breathing while looking at something else. Longer term I'm interested in incorporating some other biofeedback gadgets I have, like a respiration monitor (I've got an abdomen band and a radar that could probably even read HRV with the right software), a GSR sensor, an EMG sensor, etc.
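For reference, SD1 is the short-axis spread of the Poincaré plot and can be computed directly from successive RR-interval differences. A sketch using the standard definition (the RR values here are invented illustrative data):

```python
import math
from statistics import pstdev

def sd1(rr_ms: list[float]) -> float:
    """Poincaré SD1 (short-term HRV): the standard deviation of
    successive RR-interval differences divided by sqrt(2)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return pstdev(diffs) / math.sqrt(2)

# Invented RR intervals (ms) oscillating with paced breathing
rr = [850, 900, 960, 1000, 960, 900, 850, 820, 850, 900]
paced = sd1(rr)            # larger oscillation -> larger SD1
resting = sd1([850] * 10)  # no variability -> 0
```

The RR intervals come straight from the BTLE heart-rate characteristic, so this drops into the same data path the demo already reads.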
Just Godot things: https://github.com/invadingoctopus/comedot
Still no actual game of course :')
Web-based mapping and navigation application that basically does everything all the other products don't do. Raceline analysis, driving aids, Dakar rally, CAN bus, OBD, Nightrider for race cars? Or something. Passion project, whatever. Investors get lost.
Verimu - we're trying to help medium and small businesses across the EU get up to speed with the new Cyber Resilience Act requirements (which start in September!), trying to make it as frictionless as possible: drop in a GitHub Action and you are good to go. Web-based dashboard coming soon!
The State of Utopia[1] is currently fine-tuning an older 1 GB model called BitNet, so that we have something beginning to have the shape of a sovereign model that can run on the edge. We think model sovereignty is important for our citizens, and we are working on tools for them to easily fine-tune the model further straight from their browser. We are currently running a 30-hour training run on some simple hardware through WebGPU, so that no trust or installation is required.
We made it possible to run the model in WebGPU, and it is pretty fast even in that environment. You can see the porting process in my last few submissions, because we livestreamed Claude Code porting the base model from the original C++ and Python.
In a separate initiative, we produced a new hash function with AI. Although it is novel, it may not be novel enough for publication, so it's unclear whether we can publish it. It has several innovations compared to other hash functions.
We are running some other developments and experiments, but don't want to promise more than we can deliver in a working state, so for more information you just have to keep checking stateofutopia.com (or stofut.com for short).
Our biggest challenge at the moment is managing Claude's use of context and versions, while working on live production installs.
Everything takes time and attention and Claude Code is far from being fully autonomous building new productive services on a server - it's not even close to being able to do that autonomously. We feel that we have to be in the loop for everything.
[1] eventual goal: technocratic utopia, will be available at stateofutopia.com
Editor/IDE in Go. Mainly as a challenge to replace JetBrains.
https://github.com/thansen0/seabed-sim-chrono
I've been working on a deep-seabed simulation in Project Chrono, specifically to simulate polymetallic nodules for cobalt/nickel mining. Development has stalled while I scan my nodule samples to enter them into the simulation (half of my samples were stolen from my porch, which delayed things), although the sim works just fine. The idea is that you could take what I have now, load a vehicle in Project Chrono, and test deep-sea nodule mining using different designs.
It comes with a rigid-body simulation (fast but wholly inaccurate), as well as DEM (which will make you cry and want to build a new computer). Having lots of fast cache helps with the DEM sim.
Funding for https://infinite-food.com/ - seeking $100M - now finalizing four strong patents in the non-military drone space. Had a couple of false starts with time-wasting lawyers, but now it's home-run time. We seem to have a few simultaneous technical edges over the multibillion-dollar investments in civilian aerial delivery of food from the major early-stage players to date. Can't wait to close; itching to get to market and start generating some proper California lunch money.
Simultaneously, working on some technical demonstration materials, including novel fabrication and supply chain, plus some reduced BOM strategies for greater efficiency in mass manufacturing (once we get cash over the line). Bit of electronics in there, some mechanical. Keeps me interested so it's not 100% admin.
Also getting back into badminton: super fun, losing weight nicely, feeling better every week.
New ideas? AI government will have its day in our lifetime.
I'm a filmmaker. I'm working on a tool to make movies with AI models:
https://github.com/storytold/artcraft
It's not like ComfyUI: it focuses on frontier models, like Higgsfield or OpenArt do, and it is structurally oriented rather than node-graph based.
Here's what that looks like (skip to halfway down the article):
Improving the path planner for a 3D metal-printing slicer project to reduce localized internal stress.
Designing the closed-loop micro-positioning 4-axis stage driver section, v0.2.
Other stuff maybe three other people would care about =3
LLM thingz
https://codeinput.com - Tools for PR-Git workflows
Currently experimenting with semantic diffs for the merge conflicts editor: https://codeinput.com/products/merge-conflicts/demo
You can try it by installing the GitHub App, which will detect PRs that have a merge conflict and create a workspace for them.
Chess67 - Website for Chess coaches, club organizers, and tournament directors
Chess67 is a platform for chess coaches, clubs and tournament organizers to manage their operations in one place. It handles registrations, payments, scheduling, rosters, lessons, memberships, and tournament files (TRF/DBF) while cutting out the usual mix of spreadsheets and scattered tools. I’m focused on solving the practical workflow problems coaches deal with every day and making it easier for local chess communities to run events smoothly.
I'm currently unemployed and I started using Codex a couple of weeks ago, so I have lots of simultaneous projects, some stalled.
Pre-codex:
Local card game: there's a very specific card game played in my country, and there are online game rooms for it, but I want to build something at lichess.org or chess.com scale, oriented towards competitive play with Elo ratings (instead of social features). Ideally I'd get thousands of users and use it as a portfolio piece while keeping it open source.
cafetren.com.ar: Screen product for coffee shops near train stations with real time train data.
Post-codex:
SilverLetterai.com: Retook a project for an autonomous sales LLM assistant; I'm building a semi-fake store to showcase the product (I can fulfill orders by dropshipping if they come in), and I also have a friends-and-family order I should do after this. Two or three years late to the party, but there's probably a lot of work in this space for years to come.
Retook chess engine development and got unstuck by letting the agent do the boring busywork. I wish I had done it without, but I don't have the greatest work ethic; hopefully one day I will code it manually.
Finally, like everyone else, I'm not quite 100% content with the coding agents, so I'm trying to build my own. Yet another coding-agent thingy. But tbf this is more for myself than as a product; if it gets released, it's as-is, do what you want with it.
trying to get rid of microwave radio harassment for the past 2 years and counting
I'm learning about "AI programming" by working on some toy problems, like an automated subtitle translator tool that can take both the existing English subtitles and a centre-weighted mono audio extracted from the video file and feed it to an AI.
My big takeaway lesson from this is that the APIs are clumsy, the frameworks are very rough, and we're still very much in the territory of having to roll your own bespoke solutions for everything instead of the whole thing "just working". For example:
Large file uploads are very inconsistent between providers. You get fun issues like a completed file upload being unusable because there's an extra "processing" step that you have to poll-wait for. (Surprise!)
The vendors all expose a "list models" API, none of which return a consistent and useful list of metadata.
Automatic context caching isn't.
Multi-modal inputs are still very "early days". Models are terrible at mixed-language input, multiple speakers, and also get confused by background noises, music, and singing.
You can tell an AI to translate the subtitles to language 'X', and it will... most of the time. If you provide audio, it'll get confused and think it's being asked to transcribe it, and sometimes return new English subtitles instead.
With some providers, JSON schemas are a hint, not a constraint.
Some providers *cough*oogle*cough* don't support all JSON Schema constructs, so you can't safely use their API with arbitrary input types.
If you ask for a whole JSON document back, you'll get timeout errors.
If you stream your results, you have to handle reassembly and parsing yourself; the frameworks don't handle this scenario well yet.
You'd think a JSON Lines (JSONL) schema would be perfect for this scenario, but it's explicitly not supported by some providers!
Speaking of failures, you also get refusals and other undocumented errors you'll only discover in production. If you're maintaining a history or sliding window of context, you have to carefully maintain snapshots so you can roll back and retry. With most APIs you don't even know whether the error was a temporary or permanent condition, or whether your retry loop is eating into your budget.
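A snapshot-and-rollback wrapper can at least keep the history consistent across retries. A sketch, where `make_request` is a stand-in for whatever API call appends to the history, and treating every error as transient is itself an assumption the real APIs don't let you verify:

```python
import copy
import time

def call_with_rollback(history, make_request, max_retries=3, base_delay=1.0):
    """Snapshot the conversation before each attempt and restore it on
    failure, so a retry never sees a half-updated history."""
    for attempt in range(max_retries):
        snapshot = copy.deepcopy(history)
        try:
            reply = make_request(history)   # may mutate history before failing
            history.append(reply)
            return reply
        except Exception:
            history[:] = snapshot           # roll back in place
            time.sleep(base_delay * 2 ** attempt)  # hope it was transient
    raise RuntimeError("still failing after retries; treating as permanent")
```

The deep copy is the blunt part; for large histories you'd want to snapshot only what the call can mutate.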
Context-size management is extra fun now that none of the mainstream models provide their tokenizer for offline use. Sometimes the input will fit into the context, sometimes it won't. You have to back off and retry with heuristics that are problem-specific.
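Lacking the real tokenizer, the usual fallback is a characters-per-token estimate plus greedy batching, shrinking the budget whenever the server still rejects the request. A sketch using the common rough heuristic of ~4 characters per token for English text:

```python
def estimate_tokens(text: str) -> int:
    # Without the real tokenizer, ~4 characters per token is a common
    # rough estimate for English text
    return max(1, len(text) // 4)

def fit_to_context(lines: list[str], budget_tokens: int) -> list[list[str]]:
    """Greedily pack subtitle lines into batches that should fit the
    context budget; on a server-side rejection, lower the budget and rerun."""
    batches, current, used = [], [], 0
    for line in lines:
        cost = estimate_tokens(line)
        if current and used + cost > budget_tokens:
            batches.append(current)
            current, used = [], 0
        current.append(line)
        used += cost
    if current:
        batches.append(current)
    return batches
```

For subtitles this works tolerably because the lines are short and independent; for long-form text the batching boundaries matter much more.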
Ironically, the APIs are so new and undergoing so much churn that the AI models know nothing about them. And anyway, how could they? None of them are properly documented! Google just rewrote everything into the new "GenAI" SDK and OpenAI has a "Responses" API which is different from their "Chat" API... I don't know how. It just is.
Ending genocide omniversally now every timeline with every breath.
EGONETWEB, now recruiting.
Kill your ego so we can stop the killing.
Microplastics are bad. People are concerned that there are microplastics in your balls! And that this could epigenetically affect downstream generations. I want to test that theory with a real human, not an animal model.
My plan: collect my own sperm samples over time and do whole-DNA preps plus basic body metrics. Sperm regenerates approximately every 10 weeks, so I'm planning a time series at 10-week intervals. Next, inject myself with ~10x the average amount of microplastics, directly into the bloodstream. Continue with the sperm collection, DNA preps, and basic body metrics. Nanopore sequence, and see if there actually ARE any epigenetic changes. Eventually I'll go back down to baseline - are there any lasting changes?
Of course, this is an N=1 experiment, but rather than a metastudy I'm directly changing one variable, so I think it is valuable. We should have more people doing controlled experiments on themselves for the sake of all of society - and as a biologist, I actually have the capacity to design the experiments and scientifically interpret the results. In a way, it's part of civic duty :)
I am a DevOps engineer with a background in AI. I think OpenClaw is the best thing that's happened to us, giving some power back to the community from the well-funded AI companies. I think of it as a new kind of Linux, and it's exciting to witness its early days.
I'm starting cold weather veggies indoors for my spring garden and preparing the soil.
I've been playing with various mineral amendments for years and produce some extremely tasty produce I have yet to see matched in stores (even the organic section).