-
yes, the DALL-E (or w/e it is) update is very impressive
It's caused me to rethink AI. It's kind of not as fun when it's perfect; there are fewer surprises, just competency. We're certainly not at perfection yet, but this update definitely has me thinking about us getting there in my lifetime. From my view -- and I'm not here to say what you should believe, for sure -- I don't believe in souls, so to me it's more a matter of coherently describing your whole vision (which I think would come from your soul, if you should be so romantic) and having an output that matches. The human will still need to spend a long time thinking on the vision, and I often find the errors in code when I work with ChatGPT come not from the LLM, but instead from the incomplete or even incompatible instructions I gave it -- there is definitely still work just in telling AI what to do. Anyway, I decided to do just two test images with Kevorkian -- as I said, I am not finding this quite as fun; they came out really well (less so on the text test) with basically zero time and effort; no need to sort.
-
Quick grid webpage for mobile projectors
I picked up a tablet with an integrated projector a bit ago. Here is a simple webpage which merely displays a grid with an adjustable number of equidistant rows and columns. My intent is to use it to help me plant in the garden.
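In case anyone wants to replicate it, the whole thing boils down to a canvas redrawn with evenly spaced lines whenever the inputs change. A minimal sketch along those lines (element IDs, styling, and defaults here are made up, not the actual page's):
<body style="margin:0;background:#000">
<canvas id="grid"></canvas>
<p style="position:fixed;top:8px;left:8px;color:#fff"><input id="rows" type="number" value="8" min="1"> rows, <input id="cols" type="number" value="8" min="1"> columns</p>
<script>
// Redraw an equidistant grid across the full window whenever rows/cols change.
const canvas = document.getElementById('grid');
const ctx = canvas.getContext('2d');
function drawGrid() {
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;
  const rows = Number(document.getElementById('rows').value);
  const cols = Number(document.getElementById('cols').value);
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.strokeStyle = '#ffffff';   // white lines project well onto soil
  ctx.lineWidth = 2;
  for (let r = 1; r < rows; r++) {   // horizontal lines
    const y = canvas.height * r / rows;
    ctx.beginPath(); ctx.moveTo(0, y); ctx.lineTo(canvas.width, y); ctx.stroke();
  }
  for (let c = 1; c < cols; c++) {   // vertical lines
    const x = canvas.width * c / cols;
    ctx.beginPath(); ctx.moveTo(x, 0); ctx.lineTo(x, canvas.height); ctx.stroke();
  }
}
document.getElementById('rows').oninput = drawGrid;
document.getElementById('cols').oninput = drawGrid;
window.onresize = drawGrid;
drawGrid();
</script>
</body>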
-
Mid-year resolution: stop referring to me as "we"
I already try to do this, but I think I could do with strengthening the mental association so I fully stop. A while ago, I was listening to an older person opening gifts, and she was confused by a card someone'd sent in which the sender referred to themselves as "we" and was trying to talk to her as, like... some kind of self-declared hivemind monarch. She was confused by this; who is this person representing? They didn't say, but kept referring to their opinions as the opinions of some kind of group. -And it does strike me as entitled, arrogant, and cowardly to be using "we" this way; I only say it so harshly because I've done it! -And it's a pretty new phenomenon, I think; I definitely don't recall this happening before the Internet, and I think it started becoming common somewhere in the 2010s.
The idea behind this resolution is just that I represent myself and am inherently unable to represent others accurately. I am not part of some Internet hivemind, despite how ubiquitous a mindset I perceive this to have become as global connectivity increases, and while I (I just had to go back and erase "we" here) am part of a greater whole, I am a unique and unbound mechanism who can connect to others as I please, or even not at all, modifying my shape and speed to work with others as I choose. Furthermore, I should push back (not necessarily explicitly) against the idea that someone else represents me, especially when we haven't explicitly agreed to this. Finally, I am not a member of any monolithic group and cannot be pandered to as such; in fact, the very idea of a "Polish-American" is offensive to me, and I won't look kindly on any attempts to address this "we" I don't believe even exists. I do not appreciate someone "I see you; I hear you"-ing at me when they can't even recognize me as a person, instead opting to profile me as a bunch of micro-demographics. I will try harder to recognize every person as unique and not part of some undeclared mind-union: no autistic person is the same; no liberal is the same; no Hawaiian is the same; no bald person is the same. Even if I come to know an autistic, liberal, blind Hawaiian, they are not the same as other ALBHs and it would be dishonest to claim I can fully emulate the original ALBH I knew to project onto new ones I meet.
Edit: A post in March just to say I will likely be away from tech stuff for a while. I'll be doing farm stuff this year. Maybe I will take pictures, or maybe not. Also doing health stuff; stopped smoking, greatly cut back on caffeine, started exercising (a lot), taking blood pressure readings twice a day; the paternal side of my family doesn't typically live past 60 due to CVD; it'd be nice to get another exception on the list. I've also generally been disengaging with the US economy, taking money out of the bank, selling out of equities, and ending recurring donations & subscriptions; I was pretty upset with the Zelensky Oval Office meeting (I have the blood pressure readings to prove it!). All that said, I did make a credential-less file share server with unique IDs (based on part of a hash of the IP address), but it's not shareable code and I only made it for a small group of non-tech people; it would be unfortunate if the URL were leaked -- BUT, to do this, I had to open a tunnel from one server to the normal web server here, after giving up trying to figure out how to make the consumer router (on a different subnet) talk to the professional router in the way I wanted -- this was good experience I'll be able to put to use in the future.
Edit2: Sorry again about the US government (we just had the Twitter exchange between Musk, Rubio, and the Polish FM). I've made some decent progress tearing up the lawn, though. I made a little trench that goes to a bin I sunk down in the ground. Plan is to sink it more, put a lid on and cut some holes, then automate its "emergency discharge" if it fills, else automate its distribution to plants if the soil dries.
Edit3: I built up the trench with some clay (the soil turns completely to rich clay about a foot under the sod) to try minimizing erosion and ensuring most water gets to the bucket. We had our first big rain of the year -- there was way too much rain ... or my bin is way too small. I wound up needing to bring a big bilge tank out to it during a thunderstorm, which worked pretty well, I guess, but I'm not going to leave a mains-AC bilge pump outside in a bin of water all Summer, so I'll need to come up with something else. I made some "smart" DC switches using ESP32C6s last year, but I seem to have forgotten whatever quirks I had with them, because I can set their starting state on or off from the web interface, but once I switch them on, they never turn off again despite a 10k pulldown resistor directly between the MOSFET gate and the common ground. I have a bunch of pictures, but this page isn't really set up for it.
I've started in on the next phase of health stuff, primarily making all my food taste like plants and nothing (my favorite foods are salty eggs and cheese), and finally dealing with the oxymetazoline dependency I've been reluctantly feeding for a year (plan is to start spraying only one nostril with it, and put the other on mometasone furoate; after a week, I stop oxymetazoline fully and switch which nostril I'm putting mometasone in; after another week, I stop nasal sprays altogether -- ideally). That Sinex stuff -- stay away from it; can't believe it's legal to sell OTC. If blood pressure still isn't under control after all these changes and prior continuing changes, plan is to move on to dosing myself with pamabrom (with electrolytes/magnesium supplements for safety). -And I'll still refuse to go to the doctor if that doesn't work, so after that it's probably some combination of leeches and trying to genetically modify my flesh to be more pliant so the blood isn't fighting me so much. -
Fractal renderer, classics + two ChatGPT functions
I guess it was only a matter of time once I started down the programmatic drawing/animating rabbit hole (bunny hole? that doesn't sound right...). I tried doing a fractal explorer in WebGL and WebGPU. WebGL has restrictions on the kind of math it can do (no double-precision floats, for one), and the WebGPU implementations only succeeded in crashing my graphics card driver, so for now, it's just a multi-threaded CPU renderer in the web browser.
It features some classics (the original Mandelbrot; Julia; Burning Ship; Multibrot; Tricorn; Celtic Mandelbrot), and a couple o3 came up with (or copied and claimed as his own; hard to say).
Controls:
r = toggle zoomin/zoomout;
1,2,3,4 = set zoomin/zoomout speed;
p = toggle pause zooming/rendering;
spacebar = change which fractal pattern is being rendered.
You can drag the screen around to move where you're zooming in to or zooming out from, allowing full exploration. At some point, due to bad code and wanting to render at a reasonable-ish speed, you'll hit "bedrock" where everything becomes pixelated because that's as far as we bother calculating to. It definitely stirs some ideas for different avenues for programmatic drawing. NOTE THE BELOW WEBPAGE CONSUMES A LOT OF CPU RESOURCES! It's multi-threaded and shouldn't "freeze" your PC, but is definitely going to consume a lot of resources/electricity; use "p" to pause. It works reasonably well for me on an i7-12700K at 1080p canvas size, but that's as much of a reference as I can give, and it does max out at 100% CPU utilization depending on the pattern I'm looking at.
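For anyone curious what the CPU is actually grinding on, the per-pixel work is the classic escape-time loop; below is a rough single-threaded sketch of the idea (the real page farms rows out to Web Workers, and the maxIter cap is where the "bedrock" comes from):
// Escape-time iteration for the Mandelbrot set; maxIter is the "bedrock" cap.
function mandelbrotIter(cx, cy, maxIter) {
  let x = 0, y = 0, i = 0;
  while (x * x + y * y <= 4 && i < maxIter) {
    const xt = x * x - y * y + cx;
    y = 2 * x * y + cy;
    x = xt;
    i++;
  }
  return i;   // hitting maxIter is treated as "inside the set"
}

// Render one greyscale frame at a given center and zoom (scale = units per pixel).
function renderFrame(ctx, width, height, centerX, centerY, scale, maxIter) {
  const img = ctx.createImageData(width, height);
  for (let py = 0; py < height; py++) {
    for (let px = 0; px < width; px++) {
      const n = mandelbrotIter(centerX + (px - width / 2) * scale,
                               centerY + (py - height / 2) * scale, maxIter);
      const shade = n === maxIter ? 0 : Math.floor(255 * n / maxIter);
      const o = (py * width + px) * 4;
      img.data[o] = img.data[o + 1] = img.data[o + 2] = shade;
      img.data[o + 3] = 255;
    }
  }
  ctx.putImageData(img, 0, 0);
}
Zooming is just shrinking scale each frame; deeper zooms want a bigger maxIter, which is exactly the speed-versus-"bedrock" tradeoff mentioned above.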
Multi-Fractal Explorer -
Putting the o3-mini discoveries together into a "generative video"
Not sure how to describe it; it's a self-playing game? o3 made all the drawings and animations itself with simple guidance. I also had it write the main background music (the potential end tracks are done by Udio) using a modified ABC notation it wrote a script to export to MIDI. I take the MIDI into REAPER and do some quick hackery to instrument it. The basic idea is there are bunnies; bunnies breed, as they do (I did consider the idea of an unacceptably graphic modal popup animation, but resisted). Eventually, there will be too many bunnies, so we needed a solution for that. The end functions/animations have some bugs (and there's currently no mute button, and things can behave oddly in some browsers if it's not a focused/active tab), but the game ultimately plays out; you get the gist. I can finish it some day and put it in the games(?) page, but it's past my bedtime. Bunnies
I'm mid-update on this tonight (Feb 26th) and heading to bed. Instead of spawning the "Destroyer of Worlds bunny" (DowBunny) as we did before, based on the number of adult bunnies, and trying to progressively depopulate the bunnies as DowBunny eats them (and the sun), I thought it would be simpler to spawn a giant comet at a set date and animate it to consume the world. It sounded very simple in my head. I'll try getting to this again soon to fix it up, and I want to add a census modal-ish thing to it. -
o3-mini can draw in javascript canvas API (and music tokens!)
I drew some simple shapes in the tool and exported the code to the o3-mini model, telling it what the shapes were. I then asked it to draw the bunny's face (with ears), which it did. I drew a face on it to show it could use colors and change the pencil brush size, and told it to complete the bunny. Then I had it create the background layer and add some details. It added two flowers, a butterfly, and a sun. I'm very impressed. You can import it into the canvas tool I mentioned earlier in the changelog using the code here. The flowers are not supposed to be black like that; that's due to a bug in how I have fill implemented.
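To give a flavor of what those exported instructions look like, a very simple bunny face in the canvas API is just a handful of calls like the ones below (my own rough sketch for illustration, not o3-mini's actual output):
const ctx = document.querySelector('canvas').getContext('2d');
ctx.lineWidth = 3;
ctx.strokeStyle = '#555';
// head
ctx.beginPath(); ctx.arc(150, 170, 60, 0, Math.PI * 2); ctx.stroke();
// ears: two tall ellipses above the head
ctx.beginPath(); ctx.ellipse(120, 80, 18, 50, 0, 0, Math.PI * 2); ctx.stroke();
ctx.beginPath(); ctx.ellipse(180, 80, 18, 50, 0, 0, Math.PI * 2); ctx.stroke();
// eyes and nose
ctx.fillStyle = '#333';
ctx.beginPath(); ctx.arc(130, 160, 5, 0, Math.PI * 2); ctx.fill();
ctx.beginPath(); ctx.arc(170, 160, 5, 0, Math.PI * 2); ctx.fill();
ctx.fillStyle = '#d88';
ctx.beginPath(); ctx.arc(150, 185, 6, 0, Math.PI * 2); ctx.fill();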
Edit: I then asked o3-mini to transform just the bunny into a chibi style, the result of which is below:
o3-mini is likely capable of writing music for trackers, something I haven't had ChatGPT try in years (GPT-3 or -4; it was able to grasp how to place notes, but it was always quite random, not really music in the conventional sense). I haven't experimented with this yet, but I likely will.
Edit2: What an exciting day! I find o3-mini can also write music in ABC notation. You can play this using something like Michael Eskin's web ABC player:
X:1
T:Infernal Odyssey Expanded
C:ChatGPT
M:4/4
L:1/8
K:Dmin
"Dm"D2 F2 A2 d2 | "Dm"d2 c2 B2 A2 | "Gm"G2 A2 B2 d2 | "A"A2 ^C2 E2 F2 |
"Dm"D2 F2 A2 d2 | "Gm"^c2 A2 f2 d2 | "A"A2 C2 E2 F2 | "Dm"D4 z4 |
"Bb"B2 d2 f2 a2 | "Gm"g2 f2 e2 d2 | "A"A2 C2 E2 F2 | "Dm"D2 F2 A2 d2 |
"Gm"g2 f2 e2 d2 | "A"A2 C2 E2 F2 | "Dm"d2 c2 B2 A2 | "Dm"D4 z4 |
"Dm"D2 F2 A2 d2 | "Dm"d2 c2 B2 A2 | "Gm"G2 A2 B2 d2 | "A"A2 ^C2 E2 F2 |
"Dm"D2 F2 A2 d2 | "Gm"^c2 A2 f2 d2 | "A"A2 C2 E2 F2 | "Dm"D4 z4 |
"Bb"B2 d2 f2 a2 | "F"A2 c2 f2 a2 | "Gm"g2 f2 e2 d2 | "A"A2 ^C2 E2 F2 |
"Bb"B2 d2 f2 a2 | "F"A2 c2 f2 a2 | "Gm"g2 f2 e2 d2 | "A"A2 ^C2 E2 F2 |
"Bb"B2 d2 f2 a2 | "F"A2 c2 f2 a2 | "Gm"g2 f2 e2 d2 | "A"A2 ^C2 E2 F2 |
"Dm"d2 f2 a2 d2 | "Gm"^c2 A2 f2 d2 | "A"A2 C2 E2 F2 | "Dm"D2 F2 A2 d2 |
"Bb"B2 d2 f2 a2 | "F"A2 c2 f2 a2 | "Gm"g2 f2 e2 d2 | "A"A2 ^C2 E2 F2 |
"Dm"d2 f2 a2 d2 | "Gm"^c2 A2 f2 d2 | "A"A2 C2 E2 F2 | "Dm"D4 z4 |
Edit3: Pushing this a bit further, here is some generative music code for TidalCycles (this is a big bother to install and learn the basics of; I probably wouldn't recommend it if you're just casually curious):
let themeP = n (scale "aeolian" "0 0 3 4 0 3 4 0"); bassTheme = n (scale "aeolian" "0 2 0 2 0 2 0 2"); labyrinthP = n (scale "aeolian" "5 4 3 2 5 4 3 2") in (d1 $ slow 4 $ cat [themeP, labyrinthP, themeP] # s "superpiano" # legato 1.2 # room 0.2 # size 1.0) >> (d2 $ slow 4 $ cat [bassTheme, silence, bassTheme] # s "bass" # legato 1.5 # cutoff 0.8) >> (d3 $ slow 4 $ cat [silence, labyrinthP, silence] # s "pluck" # legato 1.3 # room 0.4 # size 1.2 # cutoff 0.7)
I'm not super-big into generative music, so I'll keep exploring.
Edit4: -And finally before bed, a piece done in Ruby using Sonic Pi. This one's coherent; the mixing's a bit off, but that's fine. The code's long, though, so I'll link that here, and here is the mp3 rendering of it. Sonic Pi, fwiw, is much easier to install and use than TidalCycles. Not quite a replacement for the ancient-by-AI-speed MuseNet, but it's a nice little treat that the general models are picking these kinds of skills up incidentally. Though... I suspect o3 could do better if we set up a simple language for it to use, which a script could then convert to MIDI or similar; I don't think that would be terribly challenging. -
Backend changes
Got a USBC PCIe card for a Zimablade SBC; inserted it backwards (x1 card, x4 slot, more-or-less fits either way, but one way's very incorrect) and worried I'd fried the board; shorted the reset pins, now good to go. Added two 1TB USBC SSDs, put them in software RAID 0 to live on the edge, and moved the video and mp3 servers to them and off the web server. While in the web server, I cleaned up the apache2 config and reviewed security settings; changed some things which nobody should notice but should make the site marginally more secure.
-
Rocktoof SBCs
I picked up four Radxa Rock 2F (henceforth, Rocktoof) boards after a great video from ExplainingComputers. At around $15/board after shipping (depending on how many you buy at a time; the shipping fee is high but flat per order at all places I looked at), they feature an MMU w/ 1GB RAM (up to 4GB), USBC power in, TRRS port, full HDMI port, 40-pin GPIO, 2 USBA ports, an SD card slot (also eMMC soldered on if you pay extra), WiFi+BT, and a bunch else. A pretty incredible deal. I actually thought low-cost SBCs were dying out and I'd have to wait for Espressif and a board partner to put an MMU on a <$10 MCU board, which still feels like it's going to happen before 2030.
Despite a price tag of <$10 before shipping, I feel reasonably confident this could host this web server if I removed the video and music hosting test servers I added recently (the video scripts tend to take up a lot of RAM). Without those, and even on the relatively bloated Ubuntu, this web server (currently using a Raspberry Pi 4B) never uses more than 1GB RAM. However, the Rocktoof lacks an ethernet port, and it also has been lacking availability since the EC video; I was lucky to snag some. I view these essentially as MCUs with MMUs; MCUs I can run linux on, and with some extra physical connectivity options (I'm excited to look into abusing the TRRS connectors). Hopefully they do bigger runs of this; I think manufacturers were getting a bit confused why people weren't wanting $40-80 SBCs, and I think it's because MCUs were essentially eating their lunch; like yes, full MMU linux would be neat, but I'm not going to pay 8x-16x what I pay for a very competent ESP32 board to get it. 2x, though, and we have a deal, and a thank-you! You can find full specs here and the ExplainingComputers video here. I picked my boards up from Arace simply because they had stock; I have no affiliation with Arace nor any kind of sponsorship/whatever with Radxa. The board draws ~1W at idle with WiFi connected, ~1.5W under "normal" load (downloading and installing packages), ~2.25W at full CPU load, and ~2.7W with CPU, I/O, and RAM all being maximally stressed (note these are all "headless" tests!). I read 65C from thermal imaging after 15 minutes of CPU stress testing, and the board ran like this for around 10 hours without issue (it was doing a discharge test of a USB battery bank).
Edit: Headless setup with a Windows workstation was a bit of a bother. In case it helps, here's what I did:
1) Flash Radxa System (there's an img file on their website for the Rocktoof under Downloads) onto a microSD card using Balena Etcher.
2) (Windows) Assign a letter to the 15MB partition of the card using third-party Ext2FSD tool (or whatever you like which handles this)
3) Edit before.txt using the "Headless Mode" instructions on Radxa's "Radxa System" documentation to configure the SSID to connect to (I didn't see how to specify IP address, and gave up specifying this on the device; see later workaround)
4) (Windows) IMPORTANT: un-mount that partition in Windows before physically disconnecting the SD card from the workstation (not doing this will cause problems in Windows if you have more to set up)
5) Put the SD card in the Rocktoof and boot'er!
6) Use an IP scanner to figure out what IP address your router (or DHCP server otherwise) assigned your Rocktoof (default hostname is something like Rock-2)
7) SSH into it
8) (optional) Run whatever normal setup scripts you do for your Debian-based SBCs. I like to have consistent aliasing for things like python and add scripts with aliases to do things like easily make scripts a service.
9) IMPORTANT: Radxa does not assign unique MAC addresses to these boards, instead using the same dummy address for all. This caused me headaches later. Fix with below command, substituting your wireless network SSID and changing the MAC address to whatever you want:
sudo nmcli connection modify "YourSSID" 802-11-wireless.cloned-mac-address 00:11:22:33:44:55 || { echo "Error: Failed to modify the connection configuration"; exit 1; }
then
sudo nmcli connection down "YourSSID" && sudo nmcli connection up "YourSSID"
10) (this one depends on your router; I can't help much) In your wireless router settings, find where DHCP reservations are handled. Add your device (this might mean typing in the MAC address you specified, or selecting it from a list of attached devices) and set a memorable IP address. I had multiple to add, so I used a memorable spot and counted up from 0 (i.e. 192.168.1.110, 192.168.1.111, etc).
11) SSH back into your device (it will still be at the old address) and run the below command to have it re-fetch its DHCP assignment:
sudo find /var/lib/NetworkManager/ -type f -name "*.lease" -delete && sudo systemctl restart NetworkManager
12) You may wish to set some other things like a unique hostname, SFTP, and UFW, but you're basically done now. Congrats! -
The end of an era? USPS suspends delivery of Chinese parcels (edit: policy reversed)
Per CNN and other news sources, the USPS has suddenly halted acceptance of parcels from China. The USPS is frequently used for last-mile delivery of electronics and other small components. This includes all PCBs I order and often SBCs & MCUs. If the cheap options, where you pay $5 or so and wait 2-3 weeks for it to eventually show up, are removed, this will likely end my electronics hobby which relies on the economics of DIY making sense. This is so disruptive and caught so many off-guard, I think, that they will eventually reverse this, but I can't say that confidently. I should say that, so far, no company has yet reached out to me to cancel an outstanding order (though we're also just a few hours into this).
What typically would happen in these shipping arrangements is China and US private logistics companies get the product to a USPS logistics hub within a dozen or so miles of the delivery address, and USPS will drop it off from there with nearly no extra expenses since they drive the route every day anyway and the packages are small enough to fit in a standard mailbox. The policy change doesn't make sense for the USPS, it doesn't make sense for the American consumer nor the American maker; it's just bad policy. I will quit whining about this now. Edit: Bloomberg reports this policy has been reversed just hours after announcement.
In other news, I've been working on scripts to regularly pull in videos of all the channels I watch on Youtube locally, and the same with all the songs I like on Spotify, to route through Flask servers with frontends running locally, but I can't share any of that for a lot of reasons. I've been trying to do this in a friendly way, significantly delaying how often we check and pull. You can see in the server stats when it's running, and see as the disk slowly (or quickly, depending on your perspective) fills. I have another external SSD on-hand so I can get these processes off the webserver later, but I'm using it to help my mom with data backups. -
Canvas Paint semi-functional mockup
I want to do more with drawing and animation by using lower-level rendering instructions for browsers to interpret. As such, I spent a couple hours today with o3-mini-high making a graphical painting tool that runs in a browser. It handles layers (using z-index) and transparency. You draw what you please on it and it'll output the render instructions a browser's able to read. It also outputs the html needed to make it work standalone for testing, so if you just click export, copy the code, paste it into a text document, and save that as an .html file, you should see your image when you open it. You can import this back into the tool to refine it further.
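The layering part is nothing exotic: each layer is its own absolutely-positioned canvas stacked by z-index, and exporting is just replaying the recorded draw calls in a standalone page. A bare-bones sketch of that idea (function and variable names here are mine, not the tool's actual code):
// One absolutely-positioned <canvas> per layer; z-index controls stacking order.
const layers = [];
function addLayer(name, zIndex) {
  const c = document.createElement('canvas');
  c.width = 800; c.height = 600;
  Object.assign(c.style, { position: 'absolute', left: '0', top: '0', zIndex: String(zIndex) });
  document.getElementById('stage').appendChild(c);
  const layer = { name, zIndex, canvas: c, ops: [] };   // ops = recorded canvas method calls
  layers.push(layer);
  return layer;
}

// Drawing both paints the layer and records the call so it can be replayed later.
function recordedCall(layer, fn, ...args) {
  layer.ops.push({ fn, args });
  layer.canvas.getContext('2d')[fn](...args);
}

// Export a standalone page that rebuilds the stacked canvases and replays each layer.
function exportHtml() {
  const canvases = layers.map((l, i) =>
    '<canvas id="layer' + i + '" width="800" height="600" style="position:absolute;z-index:' + l.zIndex + '"></canvas>').join('');
  const replay = 'const ops=' + JSON.stringify(layers.map(l => l.ops)) + ';' +
    'ops.forEach((layerOps,i)=>{const c=document.getElementById("layer"+i).getContext("2d");' +
    'layerOps.forEach(op=>c[op.fn](...op.args));});';
  return '<!DOCTYPE html><body style="position:relative">' + canvases + '<scr' + 'ipt>' + replay + '</scr' + 'ipt></body>';
}
Usage would be something like recordedCall(addLayer('sketch', 1), 'fillRect', 10, 10, 50, 50); the caveat is that only context method calls get recorded this way, so property sets like fillStyle would need their own op type.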
This tool needs a lot of refinement. I eventually want to add settable animations and some basic coding language so it works sort of like the old Macromedia Flash editor where you could actually make a game in it (but outputting an HTML file which can be dropped anywhere; can be parsed by any modern browser without third-party libraries, so no need for proprietary cloud nonsense).
Undocumented/unlabeled stuff: The slider at the top changes alpha/transparency; this is buggy. The number textbox to the right of the pencil/eraser thickness number changes the color-match threshold for the fill tool, which I believe is non-functional. ctrl+z undoes; ctrl+y redoes. Tools have keyboard hotkeys assigned (p for pencil; f for fill; m for move; e for eraser). You can double-click a layer name to change its name. You can right-click a layer name to manually set its z-index. You can drag the layer names around to re-order them (which also automatically sets their z-index, with the caveat that this will reset anything you manually set).
Known bugs and caveats include needing to be on the bottom-most (by z-index) layer to interact with the textbox (without doing this, you can draw on the textbox, though this shouldn't get exported), ctrl+z and ctrl+y behaving unexpectedly sometimes (exporting and then importing your code will often fix this), the fill tool often behaving unexpectedly, the canvas size being non-specifiable (it's set automatically based on what the canvas is auto-sized to at the time you open the webpage), and things I've already forgotten. I'll get back to this someday, probably. Canvas Paint -
Hyper Blackjack functional mockup
The o3-mini model's released, and I've been putting it through its paces! Do you remember Pogo? I quite liked a few games from there (I played as a kid while at my grandparents' house) that I'll try replicating over time, with differing variations. I figured Turbo 21 would be a good place to start. The end-game spin modal wound up being, by far, the most challenging part of this, because I insisted o3 do it in a pretty ridiculous way. It's here if you'd like to test it. I'll build it up eventually, but I'm not sure if I want to get to that right away or work on a couple tools I want to make (HTML5 Paint specifically for getting javascript shape/geometry code, and a tool to automatically have the Wayback Machine take a snapshot of webpages you have cited in your papers at the exact time it'll insert into the citation). I've learned there's no point in me guessing what I'll want to do tomorrow, though; maybe I'll make Youtube but for ASCII in a kind of MIDI marker format, or maybe I'll make headway on cleaning and work stuff (I'm well behind). Oh, I should say the mockup expects you to use the keyboard (1,2,3,4), not the buttons at the bottom, which I've never even tested to see if they work.
-
Brainwiper update
Changed around the foods dropped and added vaguely-intuitive recipes; had Chad make a lot of icons, which I cropped rather poorly. Need to digest for a while before deciding how I want to expand/modify further, if I do. The inventory is quick to clutter. I found the code is able to handle drop rate weighting by simply duplicating entries in the file it pulls from, so I doubled the potato drop rate to keep the game from becoming too difficult. (two potatoes = 1 extra life; as a hint, the only final recipe now is spaghetti & meatballs [and the two potatoes; but that's handled specially]; salt is the only "food" used in two recipes and thus has double drop rate)
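That weighting trick really is just duplicate entries in the drop table; with a uniform random pick, a duplicated item has twice the odds. Something like this (item names other than potato and salt are made up for illustration):
// Drop table pulled from the data file; duplicates double an item's drop weight.
const dropTable = [
  'potato', 'potato',   // two potatoes = 1 extra life, so doubled
  'salt', 'salt',       // salt is used in two recipes, so doubled
  'noodles', 'tomato', 'meatball'
];
function rollDrop() {
  return dropTable[Math.floor(Math.random() * dropTable.length)];
}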
Edit: There is a critical issue. Dragging an ingredient to the cooking pot causes all Chrome instances to crash out once every 200-300 times that action's performed. I don't have enough information to understand why it happens; I was playing optimally, so my actions are very repetitive; I think the bug lies in Chrome or Nvidia drivers related to graphical rendering. I don't have enough environmental information to work around whatever the specific issue is, so I think my best bet would be to prevent that event entirely by changing how ingredients are added to the pot.
Edit2: Fixed by making it so items can no longer be dragged to the pot; they're just clicked now and instantly added to potContents. I don't think it feels as good, and it makes the game much less intuitive (I'll need to add text explaining why your ingredient disappeared when you clicked it later, and/or some animations), but I guess it's better than the browser crashing. Edit3: This will probably conclude my development burst on this particular game for now; leaving it like this will give me an easy W next time I touch it; good momentum. Edit4: I also bugged a couple of the achievement unlocks in the subsequent updates. I'll fix it someday. -
DEFEND!
We now spawn an AA turret when changelogButton is a missile so you can prevent it from exploding. Shooting it is quite difficult (use the mouse cursor and hold or tap left-click), especially if you don't expect it (or on phone/tablet, where you must tap repeatedly, in another new innovation expressing my disapproval of "client devices"), though I slowed the missile quite a bit to give you a chance. The missile is not always destroyed when hit and may require up to five hits before being destroyed. In exchange for giving you the means of preventing the missile from exploding, there are now consequences if the missile explodes. Edit: Further slowed missile acceleration and fixed an issue with the explosion animation not displaying. -
Minor Brainwiper update
Tiles are now colored based on the number of mines around them. Added a "final" recipe tier (this combine tier removes the ingredients and grants a large points bonus [and later a separate bonus], to help with the inventory clutter). Moved the cooking pot to be above the graphical inventory array to further reduce whitespace; though I use a 4k monitor with DPI scaling, so it's a bit of a guess to me whether or not I'm forcing use of a scrollbar for normal people at 1080p. I should probably check that some day (uh, probably for the whole site).... I think we're bug-free, though (I did also fix the issue with which item graphically goes into the pot; we track what you're dragging, now). I did a lot of testing trying to get a tile to spawn which was surrounded by 6, 7, or 8 mines before finally cheating so I could go to sleep. I forgot to add instructions/controls again....
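The tile coloring mentioned at the top of this entry is just a lookup keyed on the adjacent-mine count; something like the below (the palette values are my guess, not the game's actual ones):
// Minesweeper-style number colors, indexed by the count of adjacent mines (0-8).
const numberColors = ['#999', '#00c', '#090', '#c00', '#006', '#600', '#099', '#000', '#666'];
function styleRevealedTile(tileEl, adjacentMines) {
  tileEl.textContent = adjacentMines > 0 ? String(adjacentMines) : '';
  tileEl.style.color = numberColors[adjacentMines];
}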
-
Prep work for next Brainwiper expansion + fixes & cheevos
Added an achievement system to the game. Only three easy ones in atm. It will tie into a later system. Getting close to the saturation point for how much code either I or ChatGPT can work with, though. Plan is to do a content expansion and then one new feature expansion -- and then I'm probably done with it for the foreseeable future. Maybe I'll add sound effects and some graphical effects then or later, too; idk. I want to make sure I do the remaining work with o1 so it's a "pure" example to compare o3 against.
Also fixed an issue with inventory shuffling around when items are added to cooking pot. Reduced inventory image size slightly. Fixed the lingering bug related to needing to un-flag and re-flag mines in some circumstances when the mine was flagged by the extra life code.
As an aside, I did get DeepSeek-R1 running locally, but only the 14B distilled model (believe it or not, I don't have a cluster of ten H100s casually set up in my house). I was pretty unimpressed and felt like I had my time wasted, but I'll give it one more shot on MCU code some time in the future. I think I'm starting to appreciate more just how willing ChatGPT is to humor my crazier ideas -- I want to grow strong vines to hold ornamental shields? No problem for ChatGPT; it'll spit out seven species to consider with some essential stats and pros & cons of each, along with some suggestions on influencing the vines' growth. Other models seem to get stuck in the idea of "that's impractical; people don't do that; go get some hooks; be reasonable!" ChatGPT is great for discovery, exploration, and tangential learning; I'm not sure if they've specifically targeted that, but it's wound up being a defining feature of their family of models, I think. Edit2: I retested and it can't even stay in the same written communication language consistently, much less programming language. News media is generally being way too credulous about DeepSeek, I think.
Edit: a lingering graphical bug I won't fix immediately -- when you drag an item into the cooking pot, instead of knowing which specific item of a type you dragged, it will use the first (left to right, top to bottom) item in the image array of the same type, which can be a bit disorienting. -
Smart mains plug
I picked up a 4-pack of Sonoff S31 smart plugs off Amazon for $32 or so shipped after learning they have an ESP8266 inside. Each plug also has pads for easily soldering leads from a USB-TTL adapter. Disassembly, soldering, flashing, and config were all quick and straightforward.
I originally wanted to put MicroPython on it, but that didn't work out. I settled on something called Tasmota, which I find a bit awkward to use because my brain wants something more oriented around file systems/structure. I thought I could get Berry scripting on there, but I think I needed more heap for that. -But that's fine; it accepts REST commands, so all I really need to do is set a static IP address, and then I handle the logic on a central server. I'll be well-pleased whenever they start selling an ESP32 version. I'm told there's an HLW8012 chip on there for power monitoring, but without Berry scripting, I guess I'm happy enough to just be able to turn the plug off and on programmatically for things like humidifiers.
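For reference, stock Tasmota exposes its console commands over plain HTTP on the /cm endpoint, so the central-server logic can be a one-line fetch per plug. A minimal sketch (the IP is a placeholder, and if you've set a web admin password you'd also need to pass user/password parameters):
// Switch a Tasmota plug via its HTTP command endpoint (no MQTT broker needed).
async function setPlug(ip, on) {
  const cmnd = encodeURIComponent('Power ' + (on ? 'On' : 'Off'));
  const res = await fetch('http://' + ip + '/cm?cmnd=' + cmnd);
  return res.json();   // Tasmota answers with JSON, e.g. {"POWER":"ON"}
}

// Example: humidifier plug parked at a static/reserved IP.
setPlug('192.168.1.120', true).then(console.log).catch(console.error);
Run it from Node 18+ (or anywhere cross-origin fetches aren't an issue) on whatever box is playing central server.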
Edit: I wound up putting in an API for the thermostat that returns JSON of the most recent data recordings for the device name queried; it's used by yet another new service which handles switching the humidifier plugs (and future AC plugs) on/off. I'm quickly ramping up the number of services on the servers, and there are some other new ones I haven't posted about here, like a blind file drop; but with the network map, at least I have basic documentation on them, and I made a script I can invoke with a "makeservice" command, plus a "normalizer" script so that command and others are standardized across my machines; this speeds up testing and deployment. It'd be nice if, out of the box (that is, without pre-installing anything), there were some standard short line I could write to install a cross-platform (not limited to just linux distros) package manager, but it hasn't dawned on me yet how to do this; maybe I could create a "fake" wifi AP that runs arbitrary scripts when connected to, or something like that; or similar with bluetooth... seems like those "vulnerabilities" would get patched pretty quickly.
I think I will turn my attention back to Brainwiper in the near future; there are some issues that are bothering me. Idunno if I'm just having a week or so with extra "cyclical energy" without cause or if it's from starting to take lactase (I didn't realize I've been bigly lactose-intolerant for a decade until a couple weeks ago), but I've been unusually productive lately and also a bit extra "ADHD" (detrimental levels of task-juggling), and also very uncharacteristically interested in socializing. I found Shirley Curry ("Gaming Grandma") lives right next to me, which was pretty neat, and I've been reaching out more to people I'd lost contact with.
-
MicroPython remote environment / Device Wrangler
So far, I've tried FTP and Telnet to remotely control ESP32 devices. I don't like either of those, so I started a little bit of work on an API so we have a web frontend (websockets), an intermediary API, and then our micropython device runs a lightweight HTTP server to talk with the intermediary. So far, I've just replicated the FTP experience (+ remote reboot, but I had that solved in an old FTP solution, too, by having ESP32 check for a particular file I could drop in); you click a device, it lists its files; you can add, modify, or delete files, and get a little text box for modifications. These routes are defined on the micropython device, so it understands what to do locally when we push a delete event to it.
As far as I know, everything on it works now, but it's finicky, not very robust, and is extremely unsafe (there are virtually no safeguards to prevent someone from remotely deleting everything on your micropython device and then saving and executing whatever code they please). I will work on this more later -- add it to the pile of 50 other things.
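For the curious, the shape of the thing from the frontend's side is roughly the sketch below; the endpoint, message fields, and device names are all invented for illustration, since the real routes live on the intermediary and the MicroPython device:
// Browser frontend <-> intermediary over a websocket; the intermediary relays each
// action to the MicroPython device's lightweight HTTP routes (names below are made up).
const ws = new WebSocket('ws://intermediary.local:8080/wrangler');

function listFiles(deviceId) {
  ws.send(JSON.stringify({ action: 'list', device: deviceId }));
}
function saveFile(deviceId, path, contents) {
  ws.send(JSON.stringify({ action: 'save', device: deviceId, path, contents }));
}
function deleteFile(deviceId, path) {
  ws.send(JSON.stringify({ action: 'delete', device: deviceId, path }));
}

ws.onmessage = (ev) => {
  const msg = JSON.parse(ev.data);   // e.g. { device: 'esp32-garage', files: ['main.py', 'boot.py'] }
  console.log('from intermediary:', msg);
};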
Download it here. -
Metric UNIX Clock (Updated)
Soooooo, remember a couple years ago when I put up a "metric clock" as a joke about the EU? I kind of joked myself into taking it seriously. We should count kiloseconds in a day, not hours; "we're open from 30 to 60". I've made a mockup of an analog clock for it here.
The seconds hand makes 1000 stops around, counting toward the next kilosecond. The kiloseconds hand should match where your deprecated-style analog clock's hour hand would be if your clock had a full 24 hours instead of 12. In this system, we would currently be in the Third Era. The First Era would be what we today call BC/BCE, and the Second Era would be AD/CE up to 1970 when UNIX time was born, kicking off the Third Era we now live in. Each era after would be one terasecond, or approximately 31,710 years.
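The arithmetic behind the mockup is tiny; a sketch of it:
// Metric UNIX clock: kiloseconds since local midnight, plus the current Era.
function metricNow() {
  const now = new Date();
  const secsToday = now.getHours() * 3600 + now.getMinutes() * 60 + now.getSeconds();
  return {
    kiloseconds: Math.floor(secsToday / 1000),      // 0..86 over the day ("open from 30 to 60")
    secondsHand: secsToday % 1000,                  // the hand's 1000 stops per kilosecond
    era: 3 + Math.floor(Date.now() / 1000 / 1e12)   // Third Era started at the 1970 epoch
  };
}
console.log(metricNow());   // e.g. { kiloseconds: 54, secondsHand: 321, era: 3 }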
-
Brainwiper first expansion now live
In the Games tab. Added recipes (only three in-game atm), a persistent-until-loss score system, new graphics, an end-game screen with ranking (graphics currently missing), a life system, and maybe some things I already forgot. Next time I work on it, I'll move the ingredients and pot over to the left-hand side rather than the bottom, and add increasing difficulty. I think there's a bug somewhere in here related to bombs or flagging, but I've only triggered it a couple times and wasn't paying enough attention to understand what I did to trigger the issue. If on Android, holding a tile acts as a right-click to flag a mine. Oh.... I guess in the next expansion, too, I should add instructions. Oops!
I also made up a python-backend/web-frontend network map utility which lets me organize all my servers and services, but it contains sensitive information so I won't show it. I might release the code for it once I've added an SSH client into it, but it's quite messy, like most things I make.
Edit: I went ahead and shoved the Brainwiper gameboard to the left, then brought up the cooking pot and ingredients so we aren't wasting so much screen space. This looks okay-ish; def more functional for folks in landscape orientation. Added graphics for ranks. Found and fixed the bug I was referring to earlier; left-clicking an unrevealed treasure tile would not run the proper code to reveal the tile, disallowing access to that treasure. Edit2: there is a separate bug, though; when an extra life saves you from a mine, it seems you sometimes need to unflag and reflag it for the purposes of determining if you won the round or not -- I probably won't fix it for a bit. -
Added Dawn of Valor to Udio Normal mp3 playlist
Was supposed to be based around the idea of Starcraft Terran campaign music, but I wasn't able to craft a prompt for this, so I outsourced the promptcrafting to Chad, who also wasn't able to come close in style ... but it sounds fine anyway. -
The changelog button is sometimes a missile
Some minor graphical issues with it, but for a fun little gag (the majority of index.js is now missile code...), I'm pleased with it. -
Working on alternative to Android launch app for local services
We pre-load all websites in independent frames off-canvas, now, eliminating latency (particularly for the thermostat which is on a wireless connection); then we just visually manipulate which iframes are showing in the main content area when a button's pressed. Webpages continue live-updating via AJAX and WS in the background to ensure fresh data when we load in a different page, but they are very lightweight.
There's also a popin/popout drawer showing some other stats, served as shtml via SSI, which I'll build up more later. See the demo video below (it starts on EBO just to show it's a fresh page load). I have the garage thermometer right next to me to work on; I'm juggling a bit -- we also aren't actually getting a latency read from Workstation because it's timing out (it doesn't have a web server and we can't do ICMP ping; it's on the to-do list).
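The core of the launcher is just that every service gets loaded once into its own iframe parked off-canvas, and switching "apps" only moves frames on and off screen; a minimal sketch (service names and URLs are placeholders):
// Pre-load each local service in its own iframe, parked off-canvas so the pages
// keep live-updating via AJAX/websockets; switching just slides one on-screen.
const services = {
  ebo:        'http://192.168.1.50/',
  thermostat: 'http://192.168.1.51/'
};
const frames = {};
for (const [name, url] of Object.entries(services)) {
  const f = document.createElement('iframe');
  f.src = url;
  f.style.cssText = 'position:absolute;top:0;left:-200vw;width:100%;height:100%;border:0';
  document.getElementById('content').appendChild(f);
  frames[name] = f;
}
function show(name) {
  for (const [n, f] of Object.entries(frames)) f.style.left = (n === name ? '0' : '-200vw');
}
document.querySelectorAll('[data-service]').forEach(btn =>
  btn.onclick = () => show(btn.dataset.service));
show('ebo');   // the demo starts on EBO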
-
New year, new changelog.