I talk about my journey to a plot of land I purchased, and the bitter realization that met me shortly after I arrived.
I talk about the problems of automating the every-day world around us
I share some photos and talk about attending a mushroom festival
Xorg is an aging pile of bugs, leftovers from stripped-out features, and poor implementations. However, it's the only good display server we have.
It has been in maintenance mode for the majority of the time I have been using Linux. Despite the lack of new features, Linux adoption has exploded, and most people don't even have to think about what display server they are running. That is, until they are "upgraded" to Wayland by default and features that used to work suddenly stop working.
Now I'm not going to beat the dead horse about all the things that Wayland can't do. This has been trodden over so many times, and with ongoing development this post would have to be updated with strike-throughs and asterisks. Yet the main point still rings true: something shouldn't be touted as a replacement until it is actually capable of replacing the thing it's supposed to replace, and in the case of Wayland replacing Xorg, there are many workflows it simply cannot handle.
What I'm more interested in discussing is the fundamental flaws with Wayland's approach.
One of the key "issues" with Xorg is its security model, or lack there of. Any client can become a keylogger or be funnelling off a live stream of your desktop to a hostile actor, and there's no way for X to stop it. Is this really a problem though? How often do we hear about the keyloggers plaguing Xorg desktops? The software distribution model of most distributions makes most pathways for malware null and void, however that's not how Wayland took it.
Their solution to the (non-existent) problem is to isolate clients so that they are only able to affect themselves. They're only able to read key-presses when their window is focussed, and they can't capture the screen. Of course, global hotkeys and screenshot utilities must have completely slipped the minds of the Wayland developers when they were drafting that design decision.
Furthermore, this design principle just does not adapt well to a multi-window environment. In the single-window environment of mobile OSs, it makes sense; I don't want any old app to be poking around all the apps I forgot to close out of, and really they have no reason to. But with all major platforms having taken the Xorg attitude towards isolation (until recently), applications and the people who use them expect their windows to be able to interact with one another.
Where a display server should be working to enable applications to do whatever they need to do, Wayland goes out of its way to make sure they can't do anything outside a slim set of acceptable actions.
"But Jasco!!", you exclaim, "Wayland allows applications to interact with each other through the desktop portals and pipewire!"
Which brings us to the next point.
Since applications DO need to interact with one another, Wayland needed a way to allow it. Their solution was desktop portals. These allow temporary access to certain parts of the underlying system through a user-confirmation pop-up window. Really?
Confirmation pop-ups have been the butt of jokes for the past couple of decades for a reason: they suck. They slow you down, making you confirm each time you want to do something instead of just doing the thing you asked for. All this does is train users to automatically accept any of these pop-ups. Granted, they may have learned these habits from other platforms (looking at you, iOS), but it basically nullifies any gained security. If I were ever to be targeted on my work Mac, the bad actor would have no issues, because by the time my eyes have registered I'm looking at the Okta Verify prompt, my finger is already on the scanner to authorize it. It seems like this is being done a bit more intelligently with most portals, only asking on first use, but it is nevertheless not actual security, and thus further makes this a poor decision by Wayland.
So much of what makes the desktop actually usable is not part of the display server at all. Sure this makes it a bit easier to swap things out if it ever comes to that in the future, but how is this an improvement? Now the code can grow long in the tooth in a different repository than the rest of the display server?
"Wayland is a protocol" is a mantra (deflection) of Wayland zealots. It is true, Wayland is simply the protocols used to communicate with the compositor, which is the display server. However, by not having a unified /usr/bin/wayland, You require each project to implement the same protocols in their own ways. In a perfect world this wouldn't be an issue as all the Wayland projects would work together to find the best way to implement it, but we do not live in a perfect world. When Xorg implemented a new feature, every window manager and desktop environment didn't need to scramble to implement it, the rising tide simply raised all the ships.
Looking at the governance model, it seems pretty good on paper. Major projects get to agree on what protocols get added, and each project must implement the core protocols. Ignoring the fragmentation problem, so long as everyone is being a good steward of the project, relevant protocols will get added to the core as necessary.
The only problem is that Gnome does not act as a good steward; they only seek to serve their own interests. This has held back protocols from being added because Gnome had no use for them. It has left the core protocols anemic, and protocols that really should be part of core live out their lives in the unstable namespace.
This makes life particularly difficult for an application developer, as you don't know which extended protocols are going to be supported, and the core ones don't provide the functionality you need.
I understand that as a group of former Xorg developers, you wouldn't want to reimplement the same system that caused you so many issues and made you start a new project in the first place. However, after a decade I would start to question whether I had tossed the baby out with the bathwater if it had still not taken over as the successor. I would start to wonder if the dogmatic ideals I based my project on were in some way flawed. I'd look at the two-year-long debate over merging a coordinate system as something to be ashamed of.
Yes, things are getting better in terms of things like governance, and the overall user experience, but how did it take 12 years?
Sigh...
So, as I have made clear so far, I'm not a big fan of Wayland. Naturally, when I heard the current lead developer of Xorg had started a fork, XLibre, I was intrigued. Even if it means breaking some software, I think there are good bones and a strong history behind X, and I don't want it to be replaced by Wayland. However, I am disappointed by how the project has positioned itself:
"This is an independent project, not at all affiliated with BigTech or any of their subsidiaries or tax evasion tools, nor any political activists groups, state actors, etc. It's explicitly free of any "DEI" or similar discriminatory policies. Anybody who's treating others nicely is welcomed."
I don't really take issue with the substance of what he is saying. The recent (now waning) obsession over Codes of Conduct and the more illiberal elements of "wokeness" have not made things any better for the open source world.
But man, it's a display server, do you really need to champion it as the anti-DEI alternative? That's going to do little more than stop you from being included in distributions' repositories and alienate people. You could have cut out one sentence and not made this all about the culture war.
Still, progress is being made, and I hope for the best for the project. I'll probably end up running it sooner or later.
In reality, my software selection would only need minor tweaks to give me a comparable setup to what I have now; some things might even be improved. And yet I don't want to. Maybe it's in part an act of defiance against a project that wants a different future for the desktop than what I imagine. Maybe it's just that innate fear of change that we all have to some degree. I'm not sure which, but I do know one thing.
There, I said it. Get out your pitchforks or cartoonishly large bags of money and let me have it.
I was, and to a large extent still am, an AI-skeptic. However, it has proven to be very useful numerous times and has helped my learning. Instead of needing to go scouring through dense documentation or finding a GeeksForGeeks page for proper syntax, I can just ask chatGPT and it will spit out not only the information I was looking for, but also relevant follow ups for further down the line. It severely softens and speeds up the learning curve.
Now don't think I was trying to be sneaky and cheat, the professor actively encouraged us to take advantage of LLMs.
Our senior year requires a project to help a company or organization in our community by building a piece of software for them. Along with my 7 compatriots, I am writing a web app for a volunteer organization.
Our team leader made a good decision and chose to use a pre-packaged starter kit. I just wish I had been more involved at the start of the term and voiced my preferences, because the whole stack is TypeScript.
I am already not fond of JavaScript, but that's more to do with what it has done to the web. I don't have enough real experience with it or its statically typed counterpart to have a full-throated opinion on it. Nevertheless, working in this codebase feels completely alien; I don't know what I should or shouldn't touch, or really how any of it works.
What I should have done a month ago, when development started in earnest, was to sit down and spend 30 minutes familiarising myself with the language and the stack we are using. What I actually did was tell chatGPT what repo we were basing off of and ask it how to accomplish what I wanted. In my defence, this was during a study date and I didn't want to tell my girlfriend that I needed to put my earbuds in and ignore her for a while. Undermining my defence, I still haven't taken the time to learn it.
I've just been asking the bot a question, fully intending to write the code myself, only to find that it provided a solution in its response. Of course the code still has issues, mostly around where things are placed in the repo, but after enough prompting, I get code that is working and I get to move my tasks to completed on the Jira board. And so has the rest of my group.
That brings us to tonight.
In my off time at work for the past few days I've been cracking away at getting BetterAuth working. I had the back-end portion set up; now it was just time to tie it in with the existing front end. I pasted the bot's code into the files it told me to and tried to get it up and running, but the dev environment kept complaining about not being able to find a file. I went back and forth for half an hour asking it what I should do with the error message, and each time it suggested creating the file the interpreter couldn't find. I knew this was wrong; after all, this was one of the files I hadn't touched.
In my attempts to pinpoint the issue, I was peeking through the files and found all this existing code for connecting to the database. I looked at our repo history and my back-end partner hadn't touched it either.
So I went ahead and finally looked at the documentation.
I don't care to go into the specifics, but if I had continued doing what I was doing, we would all have been fighting against the scaffolding already set up for us by the starter kit.
I knew chatGPT has a tendency to "forget" specifics about what you are trying to accomplish once you're a few prompts deep, but with these instructions it's almost like it straight-up ignored what I had told it about the codebase.
I just feel like a fool for not listening to my own advice: fully understand what the code the bots generate is doing before ever using it.
Hell, all the programs I've written have been either web-GUIs or TUIs
Shameless plug to feedie if you haven't seen it already
No need to worry about theming, it's accessible over ssh, Vim-style controls are nearly ubiquitous, scaling is decent. What's there not to love?
The problem is that terminals were not intended to be GUIs.
You had these symbols:
Note that the first 32 of those (codes 0 through 31) are just control codes, including number 7, the terminal bell, with which I share a love-hate relationship.
It just typed text. Far easier than punch cards. Eventually they stopped using paper altogether and adopted screens. Besides the physical closing of the keyswitches, it was digital end-to-end.
Computers continued to evolve and proved to be more capable. Yet for a myriad of reasons, it was important to preserve that simple interface, and we still use it to this day. Sure, it picked up a few new sets of codes along the way, and quite a few more characters, but you still log in at a TTY.
The atomic unit of the terminal is the cell. For teletypes, each cell matched the dimensions of the typebar; for terminal emulators, each is a fixed number of pixels wide and tall, determined by your font. Herein lies the kernel of almost every problem.
My 1920x1200 pixel display, with a maximized terminal window, has 63 rows and 192 columns. That gives me 12,096 characters to work with, not bad. However, most of the time I don't run my computer with the terminal maximized; most of the time it is in a window with 30 rows and 75 columns, 2,250 characters, significantly fewer. Not to mention you probably want a border for your TUI app, so more like 28 rows and 73 columns, fewer still. And you can't fill every single cell; you need some whitespace between pieces of data in order for each to be recognizable as separate from the previous one. Overall, you have a lot less space to work with than in a true GUI, where you can use a far more granular atomic unit: the pixel.
For instance, let's say you have a TUI composed of two panes that each take up half the window. The window can be either an even or an odd number of units wide.
In the case of a GUI with an odd window width, you'll have (2k + 1) pixels to work with, with each pane being k pixels wide and pixel column k + 1 being the separator line. In the even-width scenario, it's not quite as simple, and it is handled differently between toolkits. You can either make one pane 1 pixel wider than the other, which is nearly imperceptible on our modern, pixel-dense displays, or you can do some anti-aliasing tricks to make the line still appear to be at the exact center.
It is handled similarly in a TUI. An odd cell width can be split exactly down the center, and an even one can be split into two equal-sized panes. The issue is that you can't consistently draw the same border for both scenarios like you more or less could with a GUI. In the odd scenario, you can have each pane share the center column as a border, using 1 cell. In the even scenario, you can't, so you either use some ASCII tricks to make two columns appear to be the same width as a single column, or you don't: you make one pane 1 cell wider than the other, or throw out that extra column. The ASCII-tricks solution is highly dependent on what border style you are using and the available symbols. Making one pane wider can make the TUI look asymmetrical, so most programs just throw out a column. This makes the programming really simple, since pane_width = window_width/2 will always yield the correct width whether the window width is odd or even, because the extra .5 is thrown out with integer division.
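To make that concrete, here's a toy sketch of the throw-out-a-column approach (illustrative only, not Feedie's actual layout code):

```python
def split_panes(window_width: int) -> tuple[int, int, int]:
    """Split a window into two side-by-side panes of equal cell width.

    Integer division discards the odd leftover column, so the same
    arithmetic works whether the window is an even or odd number of
    cells wide.
    """
    pane_width = window_width // 2            # the ".5" from an odd width is thrown out
    leftover = window_width - 2 * pane_width  # 0 or 1 unused columns
    return pane_width, pane_width, leftover

# A 75-column window gives two 37-cell panes and 1 unused column,
# an 80-column window gives two 40-cell panes and nothing left over.
print(split_panes(75))  # (37, 37, 1)
print(split_panes(80))  # (40, 40, 0)
```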
It seems like most TUIs opt not to share borders though (as have I) and go with each pane having separate borders:
So in order to be consistent between window sizes, the best option is to pare down an already limited working space.
A cell is approximately twice as tall as it is wide; in my case each cell is 19 pixels high by 10 pixels wide, though the standard was 16x8 for the longest time. This adds to the aforementioned centering problem the fact that an off-by-one on the height is far more noticeable than an off-by-one on the width. It also means that your x and y axes are at different scales.
Now why would that matter? After all, you're working with characters that take up a cell height and width no matter what.
It matters when you're using one of the new terminal image rendering programs.
Now, this is more of a niche issue; if you stick with text, things are a lot simpler. However, the adage that an image is worth a thousand words, although trite, rings true. When I was first dreaming up Feedie, the whole desire for a thumbnail came from not being able to tell what a video was about by the title alone, which is an exacerbated issue on Youtube, but that's a whole different subject.
It just proves to be useful from time to time, and there are programs to make it relatively simple.
However, like all things in the Linux space, there are a few options to work with that all work a little differently and have their upsides and downsides.
Kitty

I think kitty's image protocol is the best currently existing method. Of course, you have to use a terminal that supports it, but quite a few do. It's just one command to draw an image, and one command to clear an image. You have to make sure that the stdout of the thread that runs the command is the stdout of the window you are outputting to, which adds some frustrations, but for the most part it just works. However, it is not perfect, and it still glitches out from time to time.
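For a rough idea of what "one command to draw, one to clear" looks like in practice, here's a minimal sketch that shells out to the icat kitten (assuming its --place and --clear flags; real code would need error handling and has to respect the stdout caveat above):

```python
import subprocess

def draw_image(path: str, cols: int, rows: int, x: int, y: int) -> None:
    # Ask the icat kitten to place the image in a cols x rows cell box whose
    # top-left corner sits at cell column x, row y. The command inherits our
    # stdout, which must be the terminal window being drawn to.
    subprocess.run(
        ["kitty", "+kitten", "icat", "--place", f"{cols}x{rows}@{x}x{y}", path],
        check=True,
    )

def clear_images() -> None:
    # One command to wipe every image the kitten has drawn.
    subprocess.run(["kitty", "+kitten", "icat", "--clear"], check=True)
```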
Ueberzug

Ueberzug has its own quirks. Despite what the documentation might have you believe, images are not drawn centered within their canvas size. Due to the non-square cell issue, any offset to make images centered requires calculations using the image aspect ratio and the cell aspect ratio. It's a real pain in the ass and adds complexity, requiring the drawer to parse this information.
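To give a feel for the extra math, here's a rough sketch of the kind of offset calculation involved, using my 10x19 pixel cells as an example (an illustration of the idea, not Ueberzug's own code):

```python
def centering_offset(box_cols, box_rows, img_w, img_h, cell_w=10, cell_h=19):
    """Return (col_offset, row_offset) that centres an image inside a cell box.

    The box is box_cols x box_rows cells; each cell is cell_w x cell_h pixels
    (my terminal's values, yours will differ). The image keeps its aspect
    ratio, so it only fills the box along one axis and has to be nudged
    along the other.
    """
    box_w, box_h = box_cols * cell_w, box_rows * cell_h
    scale = min(box_w / img_w, box_h / img_h)         # fit inside the box
    drawn_w, drawn_h = img_w * scale, img_h * scale
    col_offset = int((box_w - drawn_w) / 2 / cell_w)  # leftover pixels -> cells
    row_offset = int((box_h - drawn_h) / 2 / cell_h)
    return col_offset, row_offset
```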
There's also the new(er) Ueberzugpp, a C++ reimplementation of Ueberzug started after the original developer ragequit. I have yet to try it, nor do I know if it has the same centering quirks.
Chafa

When it is not just calling the kitty protocol in the background, something it now supports, images look like this (80x80 cell resolution):
original:
So image quality is significantly degraded even when it takes up the full terminal window.
Others?

There are a handful of other options, but these are the major players for terminal image rendering today. W3m-img used to be on that list, but with how poorly it worked, it is no longer really in the running.
In a terminal's normal context, if one line is too long to be expressed on a single row, it is moved down to the next. In most cases this is fine and good. In the case of TUIs, this is the bane of our existence.
Go's bubbletea library handles this intelligently and does all wrapping handling in the background. NCurses does not. Now, it's not the end of the world to work around, but it is a real pain to do it in a way that doesn't cut off a word halfway through. Again, there are libraries for handling this too.
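In Python, for instance, the standard library's textwrap module will happily do word-aware wrapping for you:

```python
import textwrap

pane_width = 28  # interior width of the pane, in cells

line = "this entry title is far too long to fit on a single row of the pane"
for row in textwrap.wrap(line, width=pane_width):
    print(row)  # each row fits the pane and no word is cut in half
```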
This becomes more apparent when you remember that most TUIs only redraw characters that have changed, not a full redraw of the screen, so if there is one stray wrapping line, or unexpected new line it can do this:
This is an existing bug in Feedie, I'm not sure why it happens. It only happens when using the kitty image backend and so far I have not found any other images that cause this phantom newline to be printed. Nothing is printed to stdout or stderr either. The weirdest part is that it only happens on certain window resolutions.
TUIs are awesome and come with a great deal of advantages. They combine the simple interfaces of GUIs with the portability and simplicity of a terminal. They also inherit a lot of downsides stemming from their half-century-old history. You have to count on the developer to account for edge cases.
I'm sure I'd have my own bucket of complaints for GUIs if I was more well versed in writing true GUI apps. However, my experience with web programming has not given me the same frustrations.
At one point I even had it listed under my projects, but I took it down. I am kinda ashamed of it in its current state. When I first wrote it in 2021, I was quite proud. Now all I see are bugs and half-baked implementations.
I wanted Feedie to do 2 things:
It gets the job done. It provides a list of entries that can be opened, and provides a thumbnail, but that's it. It doesn't do those things particularly well: the thumbnail gets placed somewhere in the top right corner (sometimes covering text), the description is so horribly formatted that it is nearly incomprehensible, the highlighting on the selected item's background bugs out depending on which feed is being read, it doesn't always select the correct link if the entry has multiple, and worst of all, there's no way to refresh or fetch new entries after the program launches.
It wasn't always this bad though.
When I first switched to it as my rss feed reader it was in a better state. It cached previously downloaded feeds, making load times a lot quicker. This was done with a homegrown, worse version of a CSV file, using tab characters instead of any better delimiter. Being a gung-ho 19-year-old know-it-all, I thought my version would be so much simpler, being purpose-built for the application. It, of course, was not. It actually was quite problematic: each time the description was read from the cache, a leading "b'" was added. This was because instead of figuring out how to properly decode a byte array, I just cast it to a string, which gives you its debug representation. It was always one of those "I'll go back and fix it later" type issues.
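For anyone who hasn't hit that particular Python footgun, the difference looks like this (a reconstruction of the mistake, not the original code):

```python
description = b"A new video about mushrooms"   # the parser hands back bytes

bad = str(description)              # "b'A new video about mushrooms'"  <- the debug repr I cached
good = description.decode("utf-8")  # "A new video about mushrooms"     <- what I should have cached
```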
Inevitably some change in Python broke my horrible caching scheme.
I opened the dusty code files again and couldn't tell what the hell was going on. The amount of coupling made the modules I had separated the project into purely nominal. After an hour of trying to find where the issue even was, I decided to cut my losses and just remove the caching functionality altogether. This would buy me some time until the eventual rewrite.
The point at which I stopped the original development was not my intended final destination. I had wanted to give it some polish, make things a bit neater, and squash the bugs. However, once I finally got it to its minimal working state, I completely stopped developing it. The code was bad. It was the best I could do at the time, but it was bad. How was it bad?
Let me tell you:
If you have taken an introductory computer science class sometime in the past decade and a half (maybe before that too), the language they started you with was likely Java. Java is the epitome of Object Oriented Programming; for better or worse, everything is an object.
This means for a student programmer, every assignment starts with making a handful of classes.
So that's exactly what I did for Feedie. While not strictly necessary, because Python is not Java and everything need not be an object, it's also not a bad place to start. What I did wrong was trying to cram all the functionality into the few objects I had already made. This resulted in one object in particular having its thumb in far too many pies: Feed Folders. It's responsible for grouping feeds into categories for the GUI (think tags in every other rss feed reader), as well as constructing the feed and entry objects from cache, fetching new data during feed refreshes, and merging the two intelligently. It really should have been 2 separate objects: a "CacheManager" object of sorts, and a FeedFolder. Except I should have done what everybody else does and used tags, because with folders it is not possible to have a feed in 2 categories.
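In hindsight, the split would have looked something like this (hypothetical names and signatures, sketched after the fact rather than taken from the real code):

```python
class CacheManager:
    """Owns everything to do with the on-disk cache: loading previously
    fetched feeds, saving newly fetched ones, and merging the two."""

    def load(self, feed_url: str) -> list[dict]: ...
    def save(self, feed_url: str, entries: list[dict]) -> None: ...
    def merge(self, cached: list[dict], fresh: list[dict]) -> list[dict]: ...


class FeedFolder:
    """Only groups feeds for the GUI; never touches the network or the disk."""

    def __init__(self, name: str, feed_urls: list[str]):
        self.name = name
        self.feed_urls = feed_urls
```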
Whenever I got in the mood to work on the project, I wanted to implement a new feature. I'd tirelessly type away until I got something working, but then a problem would appear. Whether it was the aforementioned caching problem, the program being unusable while another program was running, or a myriad of other issues, I just kept saying "I'll fix it later" and that later date never came. It's fun to make something work; it's less fun to iron out the edge cases where it doesn't.
As I've continued as a programmer, I've learned that hacky fixes aren't the foundation of stable programs.
Hearing others speak about their early days of development, it seems like a common pattern. You're passionate about programming and you've done a handful of smaller projects, so why not just try to do everything yourself for the next one? You'll definitely make something better that way.
HA!!
Of course I relented fairly early on, but that attitude of thinking I knew better stuck with me the whole way through. I could've taken heed of the writing on the wall when things didn't work the way I expected, but I decided to trudge forth.
A couple years later I felt ready for the rewrite. I had a half dozen more programming classes and a new language under my belt. I was going to rewrite it in C++!!
Oh C++, what a language. I don't hate it, but it's so...
Old? No, C is older and it doesn't come with the same pangs.
Ruled by committee, that's it.
It's like 5 people had their own good ideas and crammed them into the same language, without much thought about how it would all work together.
Yet the challenge intrigued me, so I pushed forward. I worked for a week to get the rss feed data into an object I could work with. That hubris was still somewhat around, because I decided to write my own rss parser this time. If I remember correctly, the cursory run-throughs of each available one didn't sit right with me. With the help of TinyXML2, I was able to get basic elements out of Atom and rss feeds, like the author and links. However, this was a lot more taxing work, spending hours chasing segmentation faults and coming out of each session with so little to show for it. After that week I probably touched the project a half dozen more times before accepting it was dead and moving on to other things.
Last December I was bored and alone. I needed to put my energy into something and decided to revive the dead project. This time I was making progress, little by little. I had forgotten the faulty reasoning that led me to write my own parser, remembering it as being the only reasonable option. Still, I stuck with it and was able to parse the more complex elements, like enclosures and dates. I got into a rhythm of working on it little by little, but in the back of my mind I knew my heart wasn't in it. I knew that, given some time, I would be knocked out of that rhythm and the project would fall by the wayside again.
I started working on the Step, and all enthusiasm for the project died again.
Over the summer I wanted to learn a new language. I tried Rust and really didn't care for it. I liked Zig but felt wary of starting anything with it because of the lacking libraries. Then I remembered how much Roomie's dad bugged me about trying Go, so I did.
I FUCKING LOVE GO
With the exception of the syntax of enums, I struggle to find anything to complain about in the language. Great syntax, great standard library, great build system, everything is great.
A few weeks back I was staring at the nearly half-decade old rotting mess that was Feedie's first iteration and knew what I needed to do.
And so I did.
I remembered where I had slipped before, learned my lessons, and changed my approach. When something wasn't working I kept at it, though the challenges were far easier with an improved language. I broke it up into a server and a client, another thing I should have done from the beginning.
Like with all stickers that in some way or form convey an opinion, I hesitated to put it on my laptop. After today, any reluctance regarding its message has vanished.
In case you are not reading this on or about October 29th, 2025: there was a massive Microsoft outage today. This comes shortly after a similar outage with AWS on October 19th.
I'm taking a course on general-purpose GPU computing, which so far has been using NVidia's CUDA library. We submit the final .exe executable, which requires it to be compiled on Windows. I do not have a (modern enough) NVidia GPU for the version of CUDA we are to use, and I do not want to faff about with PCIe passthrough for a VM to take advantage of one even if I did. Fortunately the school provides remote access to the computer lab with Azure Virtual Desktop. While there is no native Linux client, the web interface works well enough, so I make do with it.
I've been very busy the past week and so did not start in earnest on the assignment until today, the day it was due. So I was quite alarmed when I sat down at my desk, preparing to hunker down for the next few hours and complete the assignment, and the webpage (windows.cloud.microsoft) wouldn't load. Surely it's just my NIC acting up again, right? Nope!!
Fortunately I had access to a Mac from work and was able to sign in using the "Windows App" client, but man, it was a pain using such a tiny screen and not being able to use my keyboard and mouse. Apple's move to only having type-c ports is one that makes my blood boil.
After a few hours I was able to log in on the webpage and suddenly started making far faster progress than before. The site was back up, and I presumed that would be the end of my troubles. Then I arrived at work.
I asked my coworker how the outage affected us; he didn't have much to say other than that most everything had been resolved by the time I arrived. He left, 30 minutes passed, and then the phone started to ring. A student said office.com was down so they couldn't log into Teams. I directed them to teams.microsoft.com, got them logged in, and sent them on their way. Then 2 minutes later it rings again, another student unable to log into Teams, only being given the option to sign out. I walk them through signing in, and as I'm sending them off I get another call, and then another, and another.
I guess one of the faculty decided the best way to proctor her students' exams was to have them join on their phones with their cameras facing them. Arguably a better idea than lockdown browser, but nevertheless problematic, especially today. Since Microsoft is not as big as AWS, word of the outage did not spread as far and as quickly, and this instructor was none the wiser of how miserable they were making my day.
I won't pretend that I am a local-compute-only purist; the site you're reading this on is hosted on a computer in some datacenter I have not and likely will never visit. However, I keep that in mind. I don't use the server for anything else besides this website. Every file on here is backed up on at least 3 devices. I am well aware that this is not my computer; if a rat cuts the wrong wire or a technician pees on the wrong hard drive, everything on it could be lost.
That's not how it's treated by the mainstream.
"The cloud" is treated as infrastructure, like bridges, power plants, and dams. Do failures occur? Yes, but normally with some degree of warning, and with passive safeguards in case it ever does. We have to be able to count on infrastructure being there and operational in order to complete much of anything. The past two weeks alone have shown that is not the case for the cloud. We can't treat the near endless well of computing capability to always be online, outages happen, and that's okay if it's not essential. However, more and more essential services purely exist in "The cloud".
Honestly, I can't really fault my school for not being able to access the computer, since it is at least tied to an actual physical machine on premises. If I really needed to I could have just driven there and logged in.
My work, on the other hand, deserves all the shame it can get. As the company pushes to be more and more of an online education program, it has been bitten so many times by outages and overages that prevent students from being able to participate. Each time, the solution is to throw money at the issue, buy another service that supposedly fixes it, or shrug and wait until it is resolved.
Maybe if, instead of doing that, they invested in some on-prem servers and paid a couple of guys to make sure they stay on, an AWS outage wouldn't mean a complete halt in providing service.
Beyond that, if each company had to invest time, planning, and real estate into online services instead of just swiping the company card, it's reasonable to believe they may actually care more about the end product. If they had to install a pallet of GPUs in order to have the new AI feature, they may think twice about the new "smart assistant". They may have stronger backup routes, ones that don't involve a web-portal sign-in page, ones that the non-tech-savvy could use. They may look to conquer their slice of the market instead of chasing after unlimited growth and the line moving up and to the right.
Whenever I'm working on my computer in front of my girlfriend, she always says something to the degree of "I don't get how you can get anything done with all of that complicated mess on your screen".
And honestly, I get it. To me it looks like a highly refined setup, each tweak made to maximise the ease of getting the task at hand completed. But to the layman, it looks like a nonsensical mess.
When I found this post (please read it, it is a quick one), I came into it skeptical, but the author does make a good point:
"If my Windows/Python/Notepad++ setup is more ubiquitous, understandable, intuitive and replicable than your obscure Arch/Hyprland build with its hundred painstakingly typed-out customizations for every single software in it, then my setup is better and more minimalist than yours."
I'd say the author puts way too much emphasis on how much cognitive load is actually required to write a configuration file, but I'd be lying if I said there haven't been times when I avoided "minimalist" projects in favour of ones with a graphical configuration and a mainstream audience, for the sole reason of needing something to "just work" in a time crunch.
However, in times when I am not in a crunch, when I have the time to fiddle around, I almost solely look at "minimalist" software: software that gives me the tools, an instruction manual, and nothing else. I have sunk hours over the years into my configuration of HerbstluftWM, to the point where I regularly start using my hotkeys on other systems, only to be utterly confused as to why nothing is happening, before recognising which computer I'm really on.
There are times where I get into a flow, each thing behaves exactly how I programmed it, and I'm able to be incredibly productive, but it is only that way because I have invested the time. Realistically, the productivity difference is probably negligible compared to being used to a standard desktop environment. However, there is a great joy and peace of mind that comes with knowing how each moving part works.
So is a hodgepodge of a dozen or more utilities really more "minimalist"? Maybe not. However, you can reasonably understand more completely how each one of those dozen utilities work on their own in comparison to the behemoth that is something like KDE or Gnome. Each piece in itself is minimalist in a sense.
Also, as for it being "self-imposed complexity and worthless dogma ... straight-up asinine" to dedicate time to customizations, I couldn't really disagree more. If you don't get joy out of your customizing, then yeah, just use one of the many "does what it says on the tin" options. As for it being worthless dogma, I understand where that's coming from. Starting off, I did switch to an independent WM and terminal apps because it was what all the cool people were doing, but as my career has progressed, the familiarity with the command line and understanding of how pieces are interconnected have proven to be invaluable. Being able to play around with code in programming languages I had never touched before provided that foot in the door. Making the 100th calculator app or millionth hello world in a language is boring. You know what isn't? Modifying the window manager and getting to enjoy the fruits of my labour.
I think that profound joy is shared by my fellow tinkerers, but it's hard to communicate effectively why it's fun. We come up with justifications for putting time into something that most people don't care about. Is it really more productive to use a tiler? Not really. Is it really more minimal than a modern DE? Maybe by a dozen MB of RAM, inconsequential. Is it a way for a person to learn valuable skills while building something that reflects what they value? Absolutely.
As I mentioned in a previous entry, I was fed up with my iPhone and needed something new. I was done with iOS and after hearing so much about GrapheneOS, I wanted to give it a try. Within minutes of unboxing my new phone, the device was wiped and had a shiny, new, private version of Android installed.
It was pretty much just how I remembered.
F-Droid is awesome!! A lot of the apps on it have been suffering from bit-rot for the greater part of the last decade, but I don't care. There are simple utilities I can install that don't have any logins or pop-up ads, things just work.
The Aurora store seems to run a lot smoother than I remembered, but I would hope it would after 8 or so years of improved CPU design. It served the purpose of installing the necessary, proprietary apps.
Having selective access to play services with a secondary account makes things nearly seamless. I need directions to somewhere that isn't in the OpenStreetMap database? Not a problem!! Just switch over to my google services enabled account and search it on Google Maps.
Android's new native theming is very aesthetically pleasing to me. I know flat design is somewhat polarising, but I'm a big fan and am happy with the way the UI takes on sensible blends of grays and greens. A huge step up over the days of using 3rd party theming apps that left me with unreadable menus.
One thing that I had completely forgotten about was Android 8's neutering of notifications. I had dealt with it before, but had forgotten my solutions. For the first day or so I didn't notice the lack of pings, but as I was still carrying my iPhone with me until the migration was complete, I noticed that it buzzed for every Telegram message. I was lucky if I got the same message within 10 minutes on the Pixel, even with the always-present background notification to keep the app running and all the battery optimization shut off. Then I found UnifiedPush and a Telegram client that supported it, and I was satisfied. The message banners don't show up most of the time, but I get a buzz, and that's good enough to get the job done.
OpenBubbles, the successor to BlueBubbles, was going to be my saving grace, or so I thought. I was able to sign in and send messages, but it didn't make the transition any easier. I set it up with my existing Apple ID thinking this would mean that all my old messages would automatically be rerouted. Nope!! I had to let my recently-abandoned friends inside the walled garden know that they would need to add my email to my contact in order for their messages to send. This appeared to work, but I found that I was actually missing messages from my father throughout the week. Fortunately nothing serious. It turns out I had to instruct them to make a new contact with only my email, because otherwise it defaulted to trying to send to the contact's phone number. This would have been fine if it didn't try to send it as an RCS message.
RCS has had a deeply flawed rollout. It took a good while before non-Google devices supported it, and today Samsung Messages is the only other large app that does. From my cursory research, it does not appear that this standard is as simple as SMS in terms of implementation; it requires some coordination with a central server and a GSMA license. I was looking forward to using it, so it was a bit of a letdown. Even more of one when it turned out to be the culprit behind those missing messages.
Having a separate user for Google services is great, until it isn't. Almost everything is under my Google-free profile, so when I have to swap to the Googled one, it's like I'm stranded. None of my logins or data can be accessed. This would be mostly fine if every app didn't send a 2FA code that can only be accessed from my main profile, while the app gets killed when swapping over, preventing login. I was able to fix this in settings, but it is still not ideal. As I delved more into the multi-user settings, I saw that I could enable SMS and phone calling within the profile as well as being able to see notifications from my main user. Great!! To my frustration, this was not entirely as described. I do get notifications, however it is merely a message that I have a notification in another profile, with no way to see what the message actually is. I do receive calls, but I do not receive SMS messages.
I want to say I'm happy with my switch, and for the most part I am, but I still get that feeling of wishing I didn't have to jump through so many hoops to get a working phone that doesn't ping back to a big tech giant every second. For the first time (possibly a first in history) I am wishing my phone would spam me with notifications rather than living in a semi-constant fear of missing one important one.
In general it has been a godsend. Never again do I have to care about formatting while writing documents!! And when I do care about formatting, there is a plethora of materials online.
However, there was one niche where LaTeX didn't quite fit my ideal workflow: note taking.
I generally write my notes in a series of bullet points
LaTeX's sections, subsections, and their deeper sub counterparts revolutionized my organization. For the first time my notes were actually comprehensible upon returning to them; not just scattered asides with varying whitespace between them, but properly grouped into their own sections. There was just one problem: once I went down a level of section depth, I couldn't go back up to a higher level.
Let's say I am in a lecture about human prehistory. We are going over the initial discoveries of ancient relics when we get to a series of slides about the Piltdown Man hoax. The next series of bullet points are all going to be related, so I start a new subsection. Then a few slides later we go back to the discussion of non-hoaxed relics. Do I make a new subsection called "relics continued"? No, then the prior notes and current notes are at differing levels. Do I place all my notes above the Piltdown Man section? This works better, but is non-chronological and becomes tricky if there is another subsection I need to add later. Do I refactor my notes to completely change the grouping while struggling to add new notes at the same time? (No answer warranted)
I wanted a way to go back up a level, and LaTeX's system didn't really allow it. There were also a few other things that bothered me. The indenting never really made sense in my notes pages, but I would never remember to set up no indentation in my template document. The whole concept of pages wasn't needed for something that would never be printed, and it would leave my headings floating at the end of the previous page. The syntax was consistent, but clunky, and not necessarily easy to remember while also trying to pay attention in class.
For a semester I tried switching to markdown, but I never loved the syntax, and the lack of sections made me fall into bad habits of disorganization. Something about the asterisk being the way to start a bullet point never quite felt natural, and needing to end each line that required a new line with two spaces was frustrating, not aided by not being able to see the spaces; at least with LaTeX I can see the \\ to signify the new line.
As I've been playing around more with HTML, I have realized how versatile it can be, even for non-web-related documents. The only problem is that writing HTML by hand is not exactly comfortable. Sure, it can be made easier with Vim shortcuts, but it somewhat takes me out of the flow of writing when I have to think about which tag this should be in and all the various attributes. This is one of the major upsides of markdown, after all: being able to write in more or less plain text but with the ability to easily convert to HTML.
A few weeks ago, around the start of the new semester, I decided to fix my problem. I started writing Notex (like "not LaTeX" and "notes", which is what it was originally designed for). I started by writing out a sample document, seeing what I liked in terms of syntax, thinking up new features as I was writing. When I landed on a nice balance of easy typability and parsability, I started writing the code. In a few days I had a mostly working version, just in time to try it out in class the next day.
I could not have made a better decision!! Immediately it became second nature for me, typing away my bullet points and making my new sections. I no longer had to think, which allowed me to stay focused and take better notes.
Any line that does not start with one of Notex's special characters will be put in an HTML p tag. If there is more than one newline between two lines of non-special-character-prefixed text, they will be separate p tags.
\ The same goes for lines that start with a \ character, except the line is placed verbatim. So if you want to write raw HTML for the end document, or want to start a line with a special character, it will be placed in the document as-is.
- hyphens are used for unordered lists
. and periods for ordered lists
{
Anything between the braces will be in a subsection
{
And they can be infinitely nested
}
}
My favorite feature I've implemented is plugins. There are 3 different plugin scopes: document, single-line, and in-line.
Plugins can be called with /@pluginName:arguments or just /@pluginName, or, for elements that affect the head section of the HTML document, !key=@pluginName.
For example, for the theming of the document I have a document-scope plugin named "dark", so to set the style I write !style=@dark. When it is called, it determines the deepest subsection level and creates the relevant CSS, placing it in the style section of the head.
For images I use the single-line level plugin, "img", which is called by /@img:/path/to/image,extra_args=foo
And for generating timestamps I have the in-line plugin /@now
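To make the syntax concrete, here's a toy line dispatcher in Python in the spirit of the rules above; it's purely illustrative and glosses over what the real parser has to do (wrapping list items in ul/ol tags, tracking nesting depth, merging paragraphs, expanding plugins):

```python
import html

def classify(line: str) -> str:
    """Translate a single Notex line into an HTML fragment (toy version)."""
    stripped = line.lstrip()
    if stripped.startswith("\\"):
        return stripped[1:]                              # verbatim passthrough
    if stripped.startswith("- "):
        return f"<li>{html.escape(stripped[2:])}</li>"   # unordered list item
    if stripped.startswith(". "):
        return f"<li>{html.escape(stripped[2:])}</li>"   # ordered list item
    if stripped == "{":
        return "<section>"                               # go one level deeper
    if stripped == "}":
        return "</section>"                              # come back up a level
    return f"<p>{html.escape(stripped)}</p>"             # plain paragraph
```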
At first this was only going to be for notes, but with how much I enjoyed writing in Notex, I remembered my dissatisfaction with the current tooling for posting here. When I was still getting the site ready for launch in the winter of 2023, I created a python script so I could automatically generate my RSS feed and home page. It served its purpose, but it was kinda janky: post descriptions could only be updated after posting by modifying the CSV file that served as my "database", there was no differentiation between drafts and posts, and you could very easily add duplicate entries by accident; just in general an unpolished turd.
I rewrote it to use a real database, cleaning up those duplication issues, and now I have a separate drafts folder. Each draft is started from a template Notex document. I don't have to write out each tag by hand any more, letting me focus on writing. Additionally, with life being busier now, I can stop and pick back up on drafts, taking my time or bouncing between a few without needing to publish.
My first iPhone was the 4s, the first iPhone to feature Siri and the last to have the 30-pin Apple charging connector. That massive connector ended up being its killer, as water inevitably made its ingress. After that came my 5c in lime green, which for all I know is still working fine, though I lost track of it during my last move. That was all my experience with iOS and its devices until 2022. I wouldn't describe it as a negative experience, I just lacked any sort of comparison. After building my first PC and truly getting into tech for the first time, Android's feature set seemed more appealing. I hadn't chosen Apple explicitly anyway; it was merely due to convenience, since my father was already in the ecosystem with his iMac and iPhone.
The interim period, the years 2016-2022, was marked by a collection of Android phones. The first and longest held was my Oneplus 3. I got it new and fell in love with it from the start. I remember that first day vividly, playing with every option in the developer settings, trying out all the various launchers, tweaking to my heart's content. After leaving the walled garden of iOS, Android was like a breath of fresh air. I had it for three years, playing with custom ROMs and even SailfishOS for some time.
Its departure was a slow one. After a spontaneous collision with a concrete step cracked the digitizer, I replaced the screen, damaging the power button ribbon cable in the process. For a few weeks I tried and failed again and again to get that ribbon cable properly installed, and inevitably admitted defeat. Instead of continuously wasting money on repairs, I opted to hunt for a replacement, settling on the HTC One M8, though it never quite worked right. Then came the Razer Phone 2. HA! Remember when Razer tried making phones? That was awful. It was so bad I ended up swapping between it and my half-functioning Oneplus 3 throughout my time using it. Then came a relatively short stint with a Nexus 6P, which was an amazing phone on release, but less of one 6 years later when I purchased it. Then came the second Oneplus 3; I still have my original, but something on it stopped working and I found one listed for $35. I was hoping it was going to be like that first time again when I powered it on, but it suffered the same fate as the Nexus 6P: great on release, but lackluster half a decade on.
During this time I was getting fed up with Android, or well, degoogled Android. During my teenage years, it didn't really matter if maps were working or if every app would launch, because my life existed within a 25 mile radius and I had all the time in the world to find work-arounds. Adult life required a working phone, one with play services. At times I made it work with 2 phones, a main degoogled phone with a SIM and a backup googled one, but that was inconvenient at the best of times.
Apple suddenly felt like a worthy compromise. Sure, they still collected data, but they weren't an advertising company and probably didn't collect as much, right? And I can probably still opt out of stuff, right? And yeah, I'll be limited to the App Store, but there are tons of options similar to Android's for apps, right?
Wrong! Whereas you can still use Android without a signed-in Google account, you have to be signed in for iOS. Whereas you have to explicitly set up certain "features" on Android, you're automatically signed up for them with iOS. I remember vividly, a few weeks after starting my job at the store, the widget screen (when you swipe right on the home screen) popped up a suggested location with "work 7 minutes away". Mind you, I never told it that that location was my work, nor did I seem to have the ability to turn off this smart location tracking. For the things you can turn off, it's not a simple toggle switch, but something you "haven't set up yet", which shows up as a pending notification from the settings app until it eventually realizes you're never going to set it up and just disappears. Switching everything away from using iCloud by default took a month to fully accomplish.
Then there's the apps. The apps which are so containerized that they can't see any worthwhile data from the device. After paying 6 bucks for "Möbius Sync", the only Syncthing client for iOS, I found it didn't even allow syncing photos, the only real reason I would want Syncthing on my phone, something that works absolutely fine on Android. Apps all but refuse to run in the background, making sshing into any of my computers an absolute pain. Quickly copy and paste a command from the browser? Nope! Have to log in again! Maybe it will work if I pay the $1.99 a month for the "pro" version. Every "free" app is kneecapped unless you fork over x dollars per month. Of course this exists on Android too, but at a drastically lower rate, especially on F-Droid. Of course, there's no equivalent to F-Droid on iOS, and if Apple's reaction after the EU tried to open up the platform indicates anything, there won't be one any time soon.
I tried my best to give it a fair shake. I knew there were gonna be growing pains, but after living in a (mostly) open ecosystem for so long, it feels suffocating when trying to accomplish the most basic of tasks, knowing that Tim Cook can lift a finger and the problem would cease to exist.
I've known for a while that the next phone I got would be a non-iOS device. I was hoping that Linux phones would take off, but it's clear we've hit some stagnation. The Step allows me to keep track of how things are progressing, and I've gone days where I've barely touched my iPhone, but for things like authentication and taking photos, it's just not there yet. Yet I need a new phone now. The charging port has been on its way out for over a year and will now only take a charge if the cable is forced in with a few newtons. The wireless charging still works fine, but it's bitten my ass on more than a few occasions when an accidental midnight bump off the charger leaves me with a day of charge anxiety.
With Google announcing that they are severely limiting sideloading on their "certified Android" devices, it makes me worry for the future of the ecosystem. I plan on running GrapheneOS, so hopefully that will be less of an issue, but with how Google has been curtailing any project that uses its code outside of Google's intended vision, I worry for its future too.
While I love a good television series or movie, there's still something magical about having centuries' worth of videos, submitted by ordinary people, just a few clicks away. I owe the wealth of my knowledge to the various netizens who have detailed their personal interests to the rest of the world through their uploads. While I will rail on and on about the ills of social media, I am wary of Youtube being lumped in with the likes of Facebook, Twitter, or Instagram, since it still provides the utility of hosting and sharing video, even with the social aspects stripped out. There's just one problem: it's owned by Google.
Of course, if it wasn't for one of the largest tech and advertising companies owning the site, Youtube would have gone under within a couple of years, but it's still something I wish weren't the case. Until the day that a decentralized video hosting site that rewards creators comes online and the masses flock to it, I will have to continue to fight with the beast that is Youtube data collection.
When I was younger, I used Youtube the way it was intended, signing in on the site or app, commenting, liking, and subscribing. As I grew older and became more conscious (paranoid) about data collection, I merely exported my subscriptions to an OPML feed (something that used to be a built-in feature on the site) and collected all the videos that came through in my rss feed reader, watching them in MPV through its youtube-dl plugin. This worked great for years, but as time went on, channels I followed stopped uploading, my own interests changed, and the list got whittled down to fewer and fewer feeds. I would open up my feed reader and realize that none of the new videos were anything I wanted to watch. Every once in a while I would navigate back to the site and check whether anything caught my eye as a potential new feed to add, but it turns out that when the algorithm has nothing to go off of, it will feed you the most generic and mind-numbing crap it has available for your recommendations, resulting in fruitless hunts.
Then I found Invidious. It was exactly what I wanted: a way to watch Youtube without all the tracking associated with it. Sure, it was a little rough around the edges in places, but it worked. For a while I used the yewtu.be instance, but after getting fed up with the occasional outages, I decided to host my own private one, which was spurred on by finding Tailscale and not having to deal with port forwarding. It was heaven, for a time.
Then Google started cracking down. First it started blocking IPs that it detected were Invidious instances, which was solved by rotating your IPv6 address and forcing a connection over IPv6. Then every few weeks Invidious would break and you'd need to wait a couple of days for an update to come through and fix it. Then there was a day when it just stopped. Every video you'd click on would give you the same message: "This helps protect our community. Please sign in to confirm you’re not a bot". It didn't matter which instance you were on. After a few weeks I knew what it meant: Invidious was as good as dead in its current iteration.
So I switched back to primarily using my feed reader, adding a few of the channels I had found, feeling defeated. As the weeks turned into months, any hope I still held for Invidious to return dwindled. At a certain point I accepted defeat and just started using the main site without signing in. Of course I knew this was a far cry from privacy, with all the fingerprinting I'm sure is going on behind the curtain, but they had won.
While using the site, it was somewhat puzzling trying to figure out just how much they knew about what I was watching. I rarely watched videos within the Youtube player, instead copying the link and watching it in MPV. Still, I became paranoid about every video I would watch, wondering just what little bits of data the Google overlords had gleaned from my activity on the site. I'd be cautious about every video my pointer hovered over.
I was tired of the paranoia eating away at me. I needed to do something. After some projects at work required me to script some browser automation, I learned about the Selenium project. If what did Invidious in was its use of the hidden API, maybe scraping the actual site as what looks like a regular user would be the Trojan horse that could get past Youtube's increasingly strong defences! So that's exactly what I did:
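I won't dump the whole thing here, but the core of it is nothing more than Selenium driving a real browser and pulling the video links out of the rendered page. A stripped-down sketch (not the actual Debloatube code, and the CSS selector is just an assumption about Youtube's current markup):

```python
# Minimal sketch: drive a real browser session with Selenium, load Youtube
# like a regular user would, and collect the raw video links from the page.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # a real, visible browser, not a headless one
try:
    driver.get("https://www.youtube.com/results?search_query=mushroom+foraging")
    # "a#video-title" is an assumption about Youtube's markup and will likely
    # need adjusting whenever they shuffle the page around.
    cards = driver.find_elements(By.CSS_SELECTOR, "a#video-title")
    for card in cards:
        href = card.get_attribute("href")
        if href:
            print(href)  # still carries the '&...' tracking tail at this point
finally:
    driver.quit()
```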
This still provides more information to Google than I would like, and is in a somewhat precarious position with Youtube's AI verification system potentially coming along, but it has eased some of my nerves. I discovered that almost every video link on Youtube comes with an extra part of the link (an '&' followed by some identifying string) which I presume is used for tracking watch history in a signed-out session. There's also a huge preamble of recent search queries that comes along with the page. Debloatube strips all the links of any of this tracking information, giving you just the raw video link. It also lets you leverage the work that has been put into the algorithm by pressing the "feed algorithm" button on any video. Behind the scenes, this opens the video in the background browser session so the algorithm sees that you clicked on it and hopefully starts putting similar videos in your feed. It also saves me the hassle of having to right-click and select "copy link", since clicking on the video card just copies the link to your clipboard. I have a keybinding for running "mpv $(xclip -o -selection clipboard)", a very useful one to have on the modern web.
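The link cleaning itself is trivial. A sketch of the idea (the function names are mine, not Debloatube's, and it assumes xclip is installed):

```python
# Keep only the video id, throw away the '&...' tracking tail, and drop the
# bare link onto the X clipboard so the mpv/xclip keybinding can grab it.
import subprocess
from urllib.parse import urlparse, parse_qs


def clean_watch_link(href: str) -> str | None:
    """Reduce a Youtube watch link to just https://www.youtube.com/watch?v=<id>."""
    video_id = parse_qs(urlparse(href).query).get("v", [None])[0]
    return f"https://www.youtube.com/watch?v={video_id}" if video_id else None


def copy_to_clipboard(text: str) -> None:
    """Pipe the cleaned link into the clipboard via xclip."""
    subprocess.run(["xclip", "-selection", "clipboard"],
                   input=text.encode(), check=True)


link = clean_watch_link("https://www.youtube.com/watch?v=dQw4w9WgXcQ&pp=someTrackingBlob")
if link:
    copy_to_clipboard(link)  # now "mpv $(xclip -o -selection clipboard)" just works
```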
This is still very much a work in progress, but it has been working great for what I need it to do and has given me a bit more peace of mind.
It's something I'm rather ashamed of: belief in sasquatch gets lumped in with flat earth, faked moon landings, and lizard-people-occupied governments by the average person. I find it laughable to put it on the same level as those obviously false theories, but I'm sure there are flat earthers who would say the same about me.
The fascination with the subject started at a young age. When I was four or five I had recurring nightmares about being chased down by a 50-foot-tall ape destroying everything in its path, eventually reaching me, at which point I would wake in a panic. Toward the end of elementary school I saw an ad for Animal Planet's Finding Bigfoot and almost immediately got hooked on the show, staying up late every Sunday night to watch the new episode as soon as it came out, this time meeting the subject with intrigue instead of the fear I'd felt in early childhood. Of course, the appetite of a fanatical prepubescent couldn't be satiated by a low-rate network TV show, and I explored anything I could find on the subject during the show's off season, checking out every book the library had about sasquatch and watching all the videos I could during the dawn of online video streaming. Years passed and I found other things to be interested in, slowly letting that passion dwindle. That was until a Bob Gymlan video popped up in my recommended feed while I was in high school. I hadn't thought much about the subject in the years between losing interest before I was 10 and finding that video in my mid teens, but after watching it the interest flared up again, albeit slightly less fanatically and with a bit more skepticism towards reports.
From everything I have ingested on the subject over the years, my thesis could be summed up as: there exists a species of large bipedal apes that inhabit forested regions of North America. They share many characteristics with known apes. Their footprints are similar to human prints, though larger, without a notable arch, and with what appears to be a flexible mid-foot. They walk with an interesting gait in which the leg rises to almost a 90-degree angle with each step, and they have arms that extend past their knees. While the creature grew in popularity within the zeitgeist after the 1967 Patterson-Gimlin film, there are descriptions of similar creatures from westerners since their arrival in North America, and before that from Native American cultures. Most of the details on foot structure come from the research of Jeff Meldrum of Idaho State University; the gait and arm details come from analysis of the PGF and other recorded sightings.
I could make this a whole collection of my favorite pieces of evidence, but that would be long, and I'd much rather answer the question most non-believers raise as their reason for the non-existence of Bigfoot: why don't we have any (good) evidence?
If we ignore the parenthetical, I would answer that we do. The previously mentioned Patterson-Gimlin Film (PGF) is probably our best existing video evidence, but other relevant videos would be The Brown Footage and The Freeman Footage. There are hundreds of plaster casts showing detailed characteristics that could not be replicated by stamping, like dermatoglyphics and evidence of injury, as seen in the cripplefoot footprints. There have been hair samples which appear very similar to human hair, except thicker and without any sign of ever being cut, and notably lacking a medulla, so no DNA sampling can be done on them. There are recordings of possible sasquatch vocalizations, The Sierra Sounds. All in all, there is a large amount of evidence, but most of it is not taken seriously by non-believers. Why that is, who really knows? I lean more towards the bandwagon effect than the more conspiratorial theories. Even many "believers" are mostly unaware of the evidence and are in it more for the eye-witness testimony or, worse, vibes.
Now let's not ignore the parenthetical. Every new piece of evidence is deemed by non-believers to be unsatisfactory in some manner, or if it is good evidence, the fact that it is good proves it is a hoax. A common issue raised is that with the explosion of widely available HD cameras, why don't we get better quality photos? I think the best explanation is that the creatures inhabit areas that make photography difficult, combined with a misunderstanding of the capabilities of smartphone cameras. They tend to remain in areas with dense tree cover and in shadows, which they can blend into with their dark fur (though lighter-colored fur has been reported). Secondly, the heavy post-processing from trained data models is why a relatively weak smartphone camera can produce stunning images in everyday scenarios. With my iPhone 14 Pro I have taken some of the worst photographs known to man by stepping outside the operating window of the camera and post-processor, taking photos on dimly lit nighttime walks. This is only compounded by rural poverty: in the isolated regions these animals frequent, the people who live there probably don't have the latest and greatest technology. As for hoaxes, they are plentiful and mostly laughable, yet they often get the most media coverage, such as the 2023 Colorado Bigfoot. The PGF has been "debunked" dozens of times, yet most debunkings amount to somebody claiming to be the guy in the monkey suit without any evidence that this is the case. Bob Heironimus is the only believable one, but parts of his narrative are self-discrediting, such as the suit supposedly being made of horse hide, something that would have made it weigh 300 pounds, and he offers no explanation for the elongated upper arms and no recreation of the gait shown in the video. Most of the rest rely purely on ad-hominem attacks on Roger Patterson. The psychology of the hoaxer escapes me: why put in so much effort when a sub-hundred-dollar monkey suit and a shaky cam can get you just as much attention?
I understand that there are definitely some holes in the existing theories of sasquatch existence, something that likely won't be resolved until someone manages to down one or a dead body washes up somewhere, but can we not pretend everything is just hogwash?