Blog

Cycling

September 30th, 2024 (permalink)

If you've been reading this blog, you know that I have a Swytch e-bike kit on my bicycle. I've been using it for a few years, and it's still functional. I recently got the upgraded battery kit but haven't bothered to install it on my own bike. I did install the new kit on my wife's bike and she loves it, though the usability is slightly worse than with the old kit.

With the old kit, the rather heavy battery sits on the handlebars, which means the bike falls over very easily when parked, so I have to detach the battery when going to the store. I probably would anyway, but for short stops it's annoying. With the new kit the battery can be attached basically anywhere on the bike, but attaching it takes more time and effort. The old one was just "jam it in and go"; with the new one it's "connect three straps and a cable".

Anyway, the kit makes cycling more fun. I generally cycle on the lowest power setting, but if there's a crazy uphill or I'm getting exhausted, I can always turn the power up. When the speed goes over 25 km/h, the assist turns off, so it's all you. I try to keep my travelling speed above 25 km/h. I'd prefer it if the cutoff could be set lower, to 20 or even 15 km/h, as that would save battery, and you usually only really need the assist at those lower speeds (when cycling uphill or starting from a standstill).

Since it's more fun, I'm much more inclined to do more of it. Sure, it's not the same, exercise-wise, as "normal" cycling, at least up to the 25 km/h speed. Towing a trailer isn't the same either. Every route is also different, and the 28 km to our summer cottage is particularly hilly. So comparing one cyclist to the next just by cycling distance (and whether they used assist or not) is a bit unfair.

Anyway, there's this "playful competition" called kilometrikisa where you join a team and mark down how far you've cycled each day (and whether you used assist). Our company had a team, so I joined in. That gave me further motivation to do some cycling, and I found a 13 km route around my town that I rode on many days. Of the 153 possible days this summer I managed to cycle on 102; the rest I was sick, it was raining, or I was otherwise not around. I cycled over 1700 km, which happens to be a bit over 1000 miles, which is funny.

As of this writing I'm ranking third in our company's team (my "with assist" sticking out like a sore thumb), and our company team is ranking at... 1157. Which tells you how popular this "competition" is, as well as how much we suck =)

Now that the season is over, with storms moving in and the temperature hitting 0°C this morning, I'm probably not cycling much more this year. So I started using the rowing machine again. I should have used it over the summer as well, but I'd rather go cycling, you know?

My goal, assuming I'm not sick, is to do at least two 2 km sets a week. My times so far are 10:02 and 9:50, which are not great (professional rowers do sub-7-minute times regularly), but they left me completely spent. As in, "it's hard to breathe" spent. I'm hoping that doing more of it will get me in better shape, and who knows, maybe my times will improve too. It's quite efficient, time-wise, although it takes a while to recover.

Oh, and I also have a project to turn our exercise bike into a game controller. I already have the parts and have written a bit of microcontroller code, so next I need to do some physical design to get the servo attached to the bike (I'm seeing some 3D printing in my future), and then there's the whole PC side of things to do, too.

Cohost

September 22nd, 2024 (permalink)

So Cohost.org is going down by the end of the year. I mostly used it to host my Advent of Code visualization animations, which I've now moved to this site. It's still a bit of a work in progress, but probably easier to browse, too. What's missing are any comments I may have made about the animations, but a lot is gained as well. I'll see if there's anything worth saving in the comments. (edit: I added the comments back in, along with the puzzle names)

Apart from that there were a few random blog posts; I'll re-post the most relevant ones below:

Old Post From Cohost: Revamping my graphics programming tutorial

September 22nd, 2024 (permalink)

(post date: 2023-12-04)

A couple of decades back I wrote a graphics/game programming tutorial (https://solhsa.com/gp/index.html) based on SDL 1.2. It used to be a major driver of traffic to my site, so one or two of you may have come across it.

Time has passed, and thanks to the old SDL (and old compiler suites) the tutorial has become more or less obsolete.

I've started to revamp it - not completely rewrite it, as it still uses much of the old material (the basics don't really change). I'm basing it on SDL3, so it should be relevant for a while again. Well, as relevant as plotting pixels gets.

As of this writing the first 10 chapters are here: https://solhsa.com/gp2/index.html

If someone does drag themselves through the tutorial, feedback and inevitable bug reports are welcome.

Old Post From Cohost: ko-fi conclusions

September 22nd, 2024 (permalink)

(post date: 2023-10-03)

Like I mentioned in the last post (and in several other places), asking for donations is illegal in Finland. I did set up a ko-fi account in the end, set the donation minimum to 5 million EUR so nobody will accidentally use it (if you DO give me 5 mil, I'm pretty sure we can work something out), and created the following commissions:

    $10 - ask me anything through email
    $20 - I'll photoshop your image, badly
    $50 - ask me anything via video call
    $100 - I'll critique your business plan
    $250 - you tell me what open source project I should update next
    $1000 - I'll workshop your software architecture with you

I may add other stuff there eventually. Ideas for other tiers welcome, too.

Am I expecting customers? Not really, but it's there: https://ko-fi.com/sol_hsa/commissions

Old Post From Cohost: Pondering about ko-fi

September 22nd, 2024 (permalink)

(post date: 2023-10-02)

I don't have a Patreon, a ko-fi, or even a PayPal "gimme donations" link anywhere, primarily because it's legally complicated in Finland. Basically, asking for donations is illegal in Finland unless you have a permit, and pretty much the only people with permits are churches. Which means churches fight against freeing up donation drives. Fun, eh?

Anyway, that rules out a lot of the "creator economy" stuff like donations, kickstarters and so on. Giving donations is totally legal; it's just the Finnish scammers that the world needs to be protected from.

Selling stuff is fine. Asking for money without giving something back is illegal.

So I guess I could sell .jpgs that I send to people who want to give me money, or something. Or I could sell services like "answering your email" or "I'll have a 30 minute video call with you" for those who really want to splurge. Not that I expect a lot of people to do so in any case...

The thing is, I do see myself as a creator, but the stuff I do is totally random. I don't specialize in furry porn or cooking recipes. Someone who follows me because I do stuff for the ZX Spectrum Next might be annoyed if I get inspired to create printable 3D models for a while. Those who follow me because of SoLoud might be annoyed when I decide to make animations about every single Advent of Code puzzle ever.

I also don't hold myself to any kind of creation schedule, so you can't expect me to post a new video on YouTube every month.

The stuff I do also doesn't quite fit in a typical commission format. I mean, I could photoshop your image, badly (and at one point did a bunch of forum avatars, actually), but it's not something people generally pay others for.

So anyway, even if I did set up something, I wouldn't use Patreon. I've had a bad feeling about them for a long time, and things seem to be going downhill there. Venture capital is like rocket fuel; you tend to burn your fingers playing with it.

Ko-fi, at least based on all the information I can gather, seems healthier in that respect. They've kept themselves small and at least aren't publicly talking about taking millions in venture capital. So if I were to set up something, I'd probably use them.

I couldn't use them as a tip jar, though. I'd have to figure out some demonstrable way to give something back for the money. And after all that effort... I don't know if anyone would actually give me money as thanks for whatever they know me for.

Old Post From Cohost: I have no idea where my code is being used

September 22nd, 2024 (permalink)

(post date: 2023-07-03)

Answer to: "did you know I use soloud in luxe engine" -- ruby0x1

Nobody ever tells me anything. It took a few years from Nintendo shipping SoLoud in the NES and SNES "classic" mini consoles for me to learn about it. And the reason I found out was that someone uploaded Nintendo's open source dumps (zip files in an obscure directory on Nintendo's site) to GitHub, and I just happened to search GitHub for my name.

I've found out similar things about other pieces of software I've written in really roundabout ways. Someone posted modding tools for some game, and those happened to include an uncompressing tool (not by me) for a compressed file library / virtual filesystem I wrote ages ago. It popped up in some search or other; thus, the game was using my code. In that case I contacted the company that made said game and asked if they had any feedback; they never gave me any, but the game appeared in the mail...

Apart from all these open source projects, I've also written driver code for some Rather Large Corporations. We're at the point where, if you're living in a western country, it's more likely than not that you've used a device that runs code I've written.

But do I know where my code is used? Nope.

Old Post From Cohost: Art req

September 22nd, 2024 (permalink)

(post date: 2023-07-03)

Might as well post this here as well:

One of my various hobby projects is building an open source, free multiple-choice adventure / game dialogue engine called "dialogtree".

I'd like it to have a nice logo, and I'm envisioning a tree made of comic dialog bubbles, but I'm open to other ideas too. I could use AI, but I'd rather have a human touch.

The logo should be good for print but also for low res.

Willing to pay an insultingly low fee for it.

Old Post From Cohost: Sprite rotator

September 22nd, 2024 (permalink)

(post date: 2023-06-04)

I wrote an implementation of "three shears" sprite rotation, inspired by tomforsyth's post https://cohost.org/tomforsyth/post/891823-rotation-with-three. I'm pretty sure it can be useful for developers on retro platforms.

Source code for the tool can be found at: https://github.com/jarikomppa/spriterotator

The code is relatively portable C++, but I've only tried it on Windows.
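For flavor, here's a minimal Python sketch of the idea (my own code, not from the spriterotator repo; the axis and sign conventions may differ from the real tool). A rotation decomposes into an x-shear, a y-shear, and another x-shear, and each shear only shifts whole rows or columns, so every source pixel survives intact - which is exactly why the technique suits palettized retro sprites.

import numpy as np

def shear_rows(img, factor):
    # Shift each row horizontally by a whole-pixel amount; no resampling.
    # img is assumed to be a 2D array of palette indices, with enough
    # transparent padding that np.roll's wraparound doesn't matter.
    out = np.zeros_like(img)
    cy = (img.shape[0] - 1) / 2
    for y in range(img.shape[0]):
        out[y] = np.roll(img[y], round(factor * (y - cy)))
    return out

def shear_cols(img, factor):
    # A vertical shear is a horizontal shear of the transposed image.
    return shear_rows(img.T, factor).T

def rotate(img, angle):
    # Paeth's decomposition: shear-x, shear-y, shear-x.
    a, b = -np.tan(angle / 2), np.sin(angle)
    return shear_rows(shear_cols(shear_rows(img, a), b), a)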

Old Post From Cohost: Advent of Code gifs

September 22nd, 2024 (permalink)

(post date: 2023-05-19)

Starting next week I'll be posting gifs daily. I have quite a few generated, and if I posted them once weekly it would take YEARS.

The motivation for posting them once a week was to potentially get other people interested in doing old AoC puzzles at a once-a-week pace, but that didn't quite work out.

Since I recently finished 2019, meaning I should have gifs for basically every single puzzle now, I might as well start posting them more rapidly.

A lot of them are of "least possible effort" quality... and many puzzles don't give much to visualize to begin with. But there are a few that turned out pretty nice, in my opinion.

Old Post From Cohost: Old Advent of Code Runs

September 22nd, 2024 (permalink)

(post date: 2023-02-09)

I tried solving some old Advent of Code years at some point, and I get the itch to do so every year I take part in AoC, but it just doesn't motivate me when it's not... "live".

A couple of days ago, when I was discussing this on a Discord, someone said that one puzzle a week could work. Low-stress. And doing it in sync - so that everyone (who's participating) does one puzzle a week and we can discuss them - might keep the motivation up.

So I'm proposing that next Monday, the 13th of February 2023, it's the 1st of December 2015: https://adventofcode.com/2015/day/1

Who's with me? No time pressure, day won't change for a week.

Old Post From Cohost: Adventures in VQ

September 22nd, 2024 (permalink)

(post date: 2023-01-23)

[two graphs from the original post, showing the quantization centers converging over 284 iterations]

Since a bunch of my hobby projects have gravitated into the land of vector quantization (which I managed to rediscover by accident), I figured I'd write some (more) generic code for it, maybe solve some of the issues I've had with my earlier implementations, and maybe even learn some (more) modern C++ features while at it.

I'm sure there are tons of ready-built VQ libraries out there, but this is a learning journey, so I didn't even check.

I started off with a really simple use case: take the integers 0..1023 and ask for 16 groupings. You'd expect 16 groups of 64 values - or, to be more precise, the centers of those groups, so each center is 64 apart.

As a starting point - since from the algorithm's point of view it "shouldn't" matter what the initial selections are (typical instructions say "start from random numbers") - I just selected the values 0..15, which is pretty much the worst case. As seen in the image, it converges pretty nicely (yes, that's 284 iterations), but to my surprise it did not give the optimal result.
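For reference, here's a minimal 1-D sketch of the algorithm as I've described it (my own code, not the project's): assign each value to its nearest center, move each center to the mean of its group, repeat until nothing moves.

def vq_1d(data, centers):
    iterations = 0
    while True:
        # Assignment step: each value goes to its nearest center.
        groups = [[] for _ in centers]
        for v in data:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # Update step: each center moves to the mean of its group.
        new = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
        iterations += 1
        if new == centers:
            return centers, iterations
        centers = new

centers, n = vq_1d(range(1024), [float(i) for i in range(16)])
# Starting from 0..15, this settles into an uneven spacing - a local
# optimum - rather than the evenly spaced ideal.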

I figured it must be integer rounding, so I changed the data type to doubles, but got the exact same result:

expected  actual  delta
32        27.5    4.5
96        84      12
160       141.5   18.5
224       200     24
288       259.5   28.5
352       320     32
416       381.5   34.5
480       444.5   35.5
544       508.5   35.5
608       573.5   34.5
672       640     32
736       707.5   28.5
800       776     24
864       845.5   18.5
928       916     12
992       987.5   4.5

The steps range from 56.5 to 71.5. They do average to 64, though! Still, a +/-11% deviation from the optimum feels pretty high.

If I start from the optimal positions, the algorithm at least doesn't muck them up... but finding the optimal positions is kind of what the algorithm is meant to do in the first place.

Either I'm missing something, or this is just one of those things I have to live with...

Old Post From Cohost: Optimizing a Python Script

September 22nd, 2024 (permalink)

(post date: 2023-01-10)

I figured it was time to get back to some Spectrum Next programming, and since NextSync has been on my mind lately, I thought it would be fun to update it.

The last time I was working on something for the Next, it was a video player, but I hit some issues that required a new core and operating system version, neither of which was quite ready at the time.

The new versions have been out for a while now, and one thing that got broken by the update was NextSync. A kind person with the nick SevenFFF figured out what was wrong and made a fixed version, so getting back to development should be relatively painless. Developing without NextSync is rather awkward =)

The longer-term goal for NextSync is to convert it from C-with-some-assembly to pure assembly.

The first thing was to simplify. NextSync is too flashy for its own good, so I ditched the custom font, the custom text prints and the real-time progress view.

Then I started pondering an old idea of mine: compress on the server, decompress on the Next. The compression and decompression would have to be fast enough not to slow things down too much, or there would be no point. And in practice, we want to reduce the number of packets sent over the network, because the handshaking between packets is what takes most of the time.

I figured I'd implement a very simple RLE compression scheme - if things worked out, it would be relatively straightforward to add other compression schemes later. Here's where I fell into the Python optimization rabbit hole.

Compressing with RLE comes down to finding spans of 3 or more bytes with the same value, and splitting the data into "run" blocks (make N copies of X) and "skip" blocks (copy N bytes from the input). I took a page from the NextFli compressors, and the exact scheme is as follows:

    Skip: [len][bytes]
    Run: [len][byte]
    Repeat

The length is encoded as follows: if it is less than 0xff, encode it as one byte; if it is 0xff or more, encode it as 0xff followed by a 16-bit length value.
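Here's a minimal sketch of a compressor for that format (my own code and names, not the actual NextSync script; it assumes the decoder alternates skip and run blocks and stops when the input runs out):

def encode_len(n):
    # One byte if < 0xff, else 0xff followed by a 16-bit little-endian value.
    # (Lengths over 64k would need splitting; elided here.)
    return bytes([n]) if n < 0xff else bytes([0xff, n & 0xff, n >> 8])

def rle_compress(data):
    out = bytearray()
    pos = 0
    while pos < len(data):
        # Scan forward for the next run of 3+ identical bytes.
        scan, run = pos, 0
        while scan < len(data):
            run = 1
            while scan + run < len(data) and data[scan + run] == data[scan]:
                run += 1
            if run >= 3:
                break
            scan += run
        # Emit the literal prefix as a skip block (possibly zero-length)...
        out += encode_len(scan - pos) + data[pos:scan]
        if scan >= len(data):
            break
        # ...then the run block: the length and the repeated byte once.
        out += encode_len(run) + data[scan:scan + 1]
        pos = scan + run
    return bytes(out)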

After implementing the compressor and decompressor in Python, I ran into a problem: the compressor only managed about 0.5 megabytes per second, which, while fine for most Spectrum Next related files (they tend to be 100k or less), was still really annoying when compressing a 150-megabyte file took 250 seconds.

I wrote an external exe in C to do the compression and ran that exe from the Python script; compressing the same file took 0.78 seconds. That includes loading it into memory in the script, spawning the process, loading the file into memory in the exe, compressing it, writing the compressed data out, and loading the compressed data back into the script. Yay for disk caches!

While quite fast, an external executable isn't portable, and I know people run the NextSync server on Macs and Linux boxes, so I went back to looking at the script.

I found out Python has a built-in profiler, which let me spot some interesting things: calling len() on a list takes measurable time even if the list never changes (storing the value in a temporary variable is faster), and appending to a list was a clear bottleneck.
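For reference, the built-in profiler is cProfile, and using it is basically a one-liner (assuming your decompress function and test data are sitting in the module's globals):

import cProfile
# Run a statement under the profiler; sort the report by cumulative time.
cProfile.run('rle_decompress(data)', sort='cumulative')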

Since the decompressor was simpler and faster (taking about a minute to decompress the 150 meg file), I concentrated on optimizing that first. If I found something, I might be able to apply the same optimizations to the compressor.

Through googling I found some suggestions. Apparently appending to a list is slow because of memory allocations, so a common suggestion is to pre-allocate the list as a huge pile of "None"s.

This made the decompressor slower. Probably because the output handling was no longer a simple append, but needed an extra variable to track where the data goes.

Another suggestion was to use a bytearray instead of a list, since all my data were bytes and a bytearray's internal representation is a literal array of bytes.

This made the decompressor slower, too. I don't know why, but if I had to guess, lists are so central to Python that they've probably seen more optimization than bytearrays.

Yet another suggestion was to use memoryview to reduce copying.

This made the decompressor slower as well. I didn't even care to figure out why at that point.

In the end I ditched all of the above attempts and just made the decompressor more pythonic: instead of playing with single indexes, I used slices a lot. The decompression time went from 60 seconds to about 12.
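This is roughly what the slice-based version looks like (my own sketch, not the actual script; it reads the same block format as the compressor sketch above):

def read_len(data, pos):
    # One-byte length, or 0xff followed by a 16-bit little-endian length.
    if data[pos] != 0xff:
        return data[pos], pos + 1
    return data[pos + 1] | (data[pos + 2] << 8), pos + 3

def rle_decompress(data):
    out = bytearray()
    pos = 0
    while pos < len(data):
        # Skip block: append a whole slice in one go instead of per byte.
        n, pos = read_len(data, pos)
        out += data[pos:pos + n]
        pos += n
        if pos >= len(data):
            break
        # Run block: bytes multiplication repeats one byte n times.
        n, pos = read_len(data, pos)
        out += data[pos:pos + 1] * n
        pos += 1
    return bytes(out)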

If it worked for the decompressor, what could I do to make the compressor more pythonic? There's no ready-made function for finding spans of bytes. Except, maybe, regular expressions... The Python re module has a fun function called "split" which I could use to split the input into a list of chunks; each chunk would contain either run or skip data.

By joining that list of chunks back together, I should get the original data. Or I could encode each chunk into its compressed form.

Armed with pythex.org I got to work. I'd need a regular expression that both matches and captures the repeated sequences, or the split would not work. So (.)\1{2,}, for instance, matches fine but only captures the first byte. Eventually I figured the correct solution would be something like (?=.)(\1{2,}), but that doesn't work with Python, since you can't refer back to a non-capturing group (which makes them a bit useless). I ended up with ((.)\2{2,}), which does what I want, almost: it captures the whole sequence and then also captures its first byte. I added logic in Python to just skip the element after each run sequence. And, as they say, we were in business.
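To make the capture-group bookkeeping concrete, here's roughly how the split behaves on a small example (my example, not from the script):

import re

# Group 1 is the whole 3+ run, group 2 is its first byte; re.split returns
# every group after each literal chunk, so every third element is redundant.
pattern = re.compile(rb'((.)\2{2,})', re.DOTALL)
parts = pattern.split(b'ABBBBBCDDD')
# parts == [b'A', b'BBBBB', b'B', b'C', b'DDD', b'D', b'']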

Compressing the 150 meg file now took about 25 seconds. That's still far from C's 0.78, and if someone really wanted to, they could still use the C solution; but at 10 times faster than my original Python compressor, I figured it was a pretty good cross-platform solution.

I still haven't updated my Next to the new core or updated the operating system... I guess that's Next, then. =)

Old Post From Cohost: One way to look at it

September 22nd, 2024 (permalink)

(post date: 2022-12-28)

d(0)=character

d(1)=line of characters

d(2)=page of text

d(3)=book

d(4)=shelf of books

d(5)=bookcase

d(6)=row of bookcases

d(7)=room of bookcases

d(8)=library building

d(9)=street of libraries

d(10)=city block of libraries

d(11)=row of city blocks of libraries

d(12)=city of libraries

d(13)=continental coast of library cities

d(14)=library continent

d(15)=planet of libraries

d(16)=library star system

d(17)=library constellation

Old Post From Cohost: Clout

September 22nd, 2024 (permalink)

(post date: 2022-12-01)

I was recently moved to another project within The Company.

In the previous project I had to prove myself through a bunch of PowerPoint presentations, to show that I can, in fact, design complex systems and that what I designed was correct. Once it clicked, people reacted differently to me and to what I said. People believed what I said without me having to build presentations out of everything.

So: new project, back to square one. It feels like nobody just... believes me when I say something. It's not active snubbing, but it's as if my words don't carry much weight. I'm sure it'll pass, but it's really annoying.

And then I realized, this is how women in IT feel all the time.

Old Post From Cohost: Tiny Epic Dungeons

September 22nd, 2024 (permalink)

(post date: 2022-11-08)

Finally successfully finished a game of Tiny Epic Dungeons by Gamelyn Games.

Some thoughts.

  • The game may come in a tiny box, but it takes up a lot of desk space.
  • The setup is so complex that we've ended up using the digital companion every time.
  • The gameplay icons are really confusing. There's a reference in the manual, but it's not really useful. What we ended up doing is looking up every second card in the digital companion just to know what the card does.
  • The digital companion always assumes you're playing for the first time, with no way to jump to a general index. This gets pretty frustrating after a couple of plays.

I'm pretty sure we would never have figured out how the game works by going by the manual alone, without the digital companion.

But did we have fun? Yes. Definitely. Took us a couple of tries to even get to the end boss.

Old Post From Cohost: How to make an acceptable Tolkien tv show

September 22nd, 2024 (permalink)

(post date: 2022-11-08)

  • Hire a bunch of Tolkien Nerds (tm). They're pretty easy to find: just go to some LOTR forum and ask if the nazgul were vegetarian. Don't worry, you won't need to talk to them.
  • Hire a bunch of screenwriters, force them to read the Silmarillion and maybe a Lost Tales volume or two, then have each submit up to three episode scripts.
  • Ask the Tolkien Nerds (tm) to sort the scripts in the order they least dislike them.
  • The three best screenwriters are your writing team. Fire the rest. Keep the Nerds (tm) on as consultants for the writers.
  • Spend a year just honing the scripts. This is cheap; you only need to pay the writers and the nerds.
  • Now you can start worrying about actually making the show.

Chipas

July 27th, 2024 (permalink)

Several years ago, during a refugee crisis, a refugee center was placed in our town. One side effect was an influx of foreign foods: the refugees would come to the markets and sell food of their own making, from their various cultures. I liked trying out different things, and apparently so did a lot of other people. When the government closed the refugee center (with most of the refugees deported, as far as I understand), the locals protested.

Anyhoo, one of those foods was the chipa, a South American cheese bread. It has an amazing chewy mouthfeel, and the cheesiness makes it rather addictive. Not addictive like drugs; more like salted chips.

I missed it enough to start researching how to make my own. The original recipes call for South American cheeses, but those are a bit hard to come by. Luckily they can be substituted with hard Italian cheeses.

Another tricky bit was the flour: the recipe calls for tapioca flour, which the local stores don't carry, so I tried a few different substitutes, none of which worked - tapioca is essential to the recipe. I eventually just ordered tapioca starch online, and it's become easier to source since.

Amount            Ingredient      Note
800 g (two bags)  Tapioca starch  Or tapioca flour; haven't tried.
240 g             Butter, melted
3                 Eggs
16 g              Salt
416 g             Cheese          See below
3 dl              Milk            I used fat-free

The original recipe I modified called for 550 g of tapioca starch, but I scaled it up to full bags because starch is the most annoying ingredient to measure. As a side effect, this two-bag recipe would require 3.2 eggs, but what's an egg, exactly?

As far as I understand, if you manage to get your hands on actual tapioca flour instead of starch, you can skip the milk.

For the cheese, I've used a mix of mostly grana padano or parmesan plus pecorino romano, and a little bit (~30 g) of aged cheddar. I've seen completely wild variations of the recipe, some even using mozzarella, so your mileage may vary. Wildly.

Prep time is around an hour, and the batch makes a bunch of sets to freeze and prepare quickly later.

Melt the butter, then add the milk to the butter to cool it down - you don't want to cook the eggs at this point. Shred the cheese finely, using some kind of power tool. Or by hand, but given the massive amount, you probably don't want to do that.

Mix all the ingredients in a large bowl. Be careful with the starch; it tends to go everywhere. I've tried mixing the ingredients in different orders to minimize the mess, but it doesn't really matter. Just don't mix boiling butter with raw eggs.

Do the mixing by hand, folding over and over and over again until the mixture is uniform, and then some more. If the mixture doesn't turn into a Play-Doh-like mass and feels too dry, add a little bit of milk. Be careful, though, as too much will ruin it. Well, not ruin, exactly, but the resulting breads will come out flatter.

After you're utterly fed up with folding, start rolling little balls - I'd say meatball-sized, but since meatballs can be, like, anything, who knows how big your mental image of a meatball is. Something that's convenient to roll into a ball between your palms. Not too big, not too small.

Place around 13 of the balls on a tray, leaving them some space to flatten out, and bake in an oven at 175°C (or about 350 freedom units) for about 15 minutes. Once the breads start to show brown spots, they're done. Move them into a basket to cool down for a bit.

Now, that's just one tray - and there's enough dough for about five trayfuls. So while the first set is in the oven, roll more balls and freeze them in sets of around 13. I've found that amount fits nicely on a plate, but again, your plates are probably different from mine.

When freezing, try to avoid freezing them into one huge clump, as they are really annoying to separate. Not impossible, but annoying. I've tried a few different methods, like freezing them on a plate for a while before bagging them, or bagging them right away and laying the bags flat in the freezer. Your mileage may vary and all that.

I've found the two-bags-of-starch amount to be fine: there's enough in the freezer for later or for when you're having guests, and it's not too massive an amount to make at once (everything fits in one large bowl, etc). Maybe halve the recipe if you plan to cook everything at once, for a family gathering or something.

Like most things, chipas are best eaten fresh. Just let them cool down a bit so you don't burn yourself.

Anyhoo, enjoy.

SoLoud on SpecNext

July 21st, 2024 (permalink)

I pushed a new update to the SoLoud repository a couple of days ago, after years of absence. What I uploaded was a SoLoud console, with 8000+ lines of generated code. What this means is that if you have C++ code like..

SoLoud::Soloud soloud;
SoLoud::Wav sample;
soloud.init();
sample.load("pew_pew.wav");
soloud.play(sample);

..then, you can achieve the same through the console by:

SoLoud console 202002
Type "help" for help, "quit" to quit
s> s soloud create
s> w wav create
s> 0 soloud init s
s> 0 wav load w "pew_pew.wav"
s> 0 soloud play s w

The console commands are parsed and sent through the 'c' API. The commands follow the format: destination variable, object, function, and then zero to however many parameters are required.

Since it follows the 'c' API, the first parameter is typically a created object.
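As an illustration, the parsing boils down to something like this Python pseudocode (the real console is generated C++; the dispatch table and the discard-into-'0' behaviour are my reading of the examples above, not the actual implementation):

def handle_line(line, variables, dispatch):
    # "0 soloud play s w" -> destination, object, function, parameters.
    dest, obj, func, *params = line.split()
    # Parameters are earlier-created variables, or literals like filenames.
    args = [variables.get(p, p.strip('"')) for p in params]
    result = dispatch[(obj, func)](*args)  # e.g. ends up in Soloud_play(s, w)
    if dest != '0':  # '0' appears to mean "discard the return value"
        variables[dest] = result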

The console is super easy to crash - just use a wrong handle as the object, for example. Everything is cast through a void pointer.

So what's the point? Well... the Spectrum Next has a Raspberry Pi Zero on it, which is not used for much. Early on I had the idea of just dropping SoLoud in there, but it didn't feel right. I mean, when you have this retro-plus 8-bit machine, you don't expect it to play several mp3 streams through a reverb filter...

I've been asked to try it out regardless, so the first step was to ponder what kind of program it should be, and a console makes the most sense.

Next up was the hurdle of compiling it for the NextPi. As it happens, I went through a lot of that last year, so I have most of the pieces.

What I didn't write down was the compile command line(s). Since it's been a while, re-researching them would have been annoying.

I rebuilt the frankensteinian setup I used last time: a Raspberry Pi Zero connected to a USB hub with a USB ethernet adapter, which connects to the laptop; the raspi also takes its power from the laptop's USB. The raspi's mini-HDMI goes to a monitor, and a USB keyboard is connected to the hub as well, so I can log in to the NextPi and have it tell me the IP address to ssh into from the laptop. (After I know the IP, I can take out about half of this config.)

And looking at command history, I have this:

g++ nextgpio.cpp RtMidi.cpp nextpimidi.cpp -Os -lpthread -D__LINUX_ALSA__ -lasound -s

The console won't need GPIO. In fact, I want to leave the GPIO alone so that audio output works. I won't need RtMidi either, for that matter, but I will need ALSA. Uh, which means I'll also need to plug in some USB audio device so I can test whether this works at all without having to send the binary to the Next.

To prep, I gathered all the sources and headers I needed into a single directory (the ALSA sink, all audio sources, all headers, and the console sources). I scp'd these to the pi and compiled:

g++ *.cpp *.c -Os -lpthread -D__LINUX_ALSA__ -lasound -s -DWITH_ALSA

The __LINUX_ALSA__ define is probably just for RtMidi, but it doesn't hurt to have it there.

Executing this took a very long time. And failed, due to some wrong include paths, which is not surprising given that I had flattened everything into a single directory. I had to edit a bunch of files to remove relative paths, after which it... failed when linking. I stubbed out the openmpt calls (it's not like I was going to compile openmpt anyway), and to my surprise it built. Into a 607204-byte binary. Which is rather huge, but then, there's a lot of stuff in there.

It also ran, but since I hadn't bothered to plug in an audio device, I couldn't tell at the time whether it actually played anything.

I then scp'd the binary out of the raspi, copied it over to my other PC on a USB stick (they're on different networks... don't ask), nextsynced the binary to the Next, and used .pisend to copy the binary over to the pi.

Next, I hit .term and entered the following sequence:

SUP> chmod 777 nextpi-soloud
SUP> ./nextpi-soloud
Soloud console SOLOUD_VERSION
type "help" for help, "quit" to quit
> s soloud create
> 0 soloud init s
> p speech create
> p speech settext p "hello from spec next"
> 0 soloud play s p

[audio: soloud-nextpi.ogg - audio through NextPi]

Don't mind the noise floor... I probably have a ground loop or two. Anyway, I did try a couple of different audio sources, and everything seems to work fine.

The nextpi-soloud binary I used can be downloaded here. For more info about SoLoud, point your browser this way, with the sources to it here on github.

I haven't stress-tested SoLoud on the Zero, so I don't know how many mp3 files you can decode at the same time, but consider this: we're talking about decoding several mp3 files at the same time. On a retro platform. You now have great power, so handle it with great responsibility.

MMXXIV

July 21st, 2024 (permalink)

I think this must be a record for how late I've started blogging in a year.

Let's start with the new year demo, released almost 8 months ago already...

What can I say... the start of the year was pretty stressful, largely for reasons I'm not allowed to talk about. Since then, I've cycled around 1000 km and found that doing physical things helps with stress. It's just generally super boring. And you don't even lose weight; you just get hungrier. Cycling is not as boring, though, and I've learned a lot about my local surroundings as I've criss-crossed all sorts of paths.

One thing I apparently haven't mentioned in this blog (though you can see the links around): I set up a Ko-fi shop. As a Finn, it's illegal for me to ask for donations (which also rules out stuff like Kickstarter), but nothing stops me from selling services. At a quick glance, what I'm offering may seem like a joke, but they're all serious offers, and in the past I've had (mostly) happy customers for every single item I'm listing.

The prices I'm listing really low-ball the actual value. I'll adjust them if there's too much interest.

One update from a post from last year:

So I modded my 3D printer a bit. Only a bit. The Ikea Lack table enclosure is the obvious part, with LED strip lighting, a concrete slab to reduce vibrations, two filament dryer/feeders on top, and a Raspberry Pi running OctoPrint somewhere below, along with two buckets, one for PLA and the other for PETG waste. Less obvious are a few smaller prints that handle cable management and other tidying-up.

I ran the printer basically nonstop for a couple of months, but now I only print when I have a need for something. Most of the early prints were for playing with the printer and/or modding it. The enclosure clearly lifts the temperature over ambient, which may help with print stability, but in all honesty the primary reason for it is to look nice.

As for looking ahead... I have a few ideas about what I want to blog about (things that don't fit in a toot), and I have one rather experimental project for the ZX Spectrum Next cooking, so unless something happens, you should find a few more blog posts here this year.