Fedora 37, openQA news, Mastodon and more
Hey, time for my now-apparently-annual blog post, I guess? First, a quick note: I joined the herd showing up on Mastodon, on the Fosstodon server, as @[email protected]. So, you know, follow me or whatever. I posted to Twitter even less often than I post here, but we'll see what happens!
The big news lately is of course that Fedora 37 is out. Pulling this release together was a bit more painful than has been the norm lately, and it does have at least one bug I'm sad we didn't sort out, but unless you have one of a very few motherboards from six years ago and want to do a reinstall, everything should be great!
Personally, I've been running Fedora Silverblue this cycle, as an experiment to see how it fares as a daily driver and a dogfooding base. Overall it's been working fine, though there are still some awkward corners if you are strict about avoiding RPM overlays. I'm definitely interested in Colin's big native container rework proposal, which would significantly change how the rpm-ostree-based systems work and make package layering a more 'accepted' thing to do. I also found that sourcing apps feels slightly odd - I'd kinda like to use Fedora Flatpaks for everything, from a dogfooding perspective, but not everything I use is available as one, so I wound up with a mix of things sourced from Flathub and from Fedora Flatpaks. I was also surprised that Fedora Flatpaks aren't generally updated terribly often, and don't seem to have 'development' branches - while Fedora 37 was in development, I couldn't get Flatpak builds of apps that matched the Fedora 37 RPM builds; I was stuck running Fedora 36-based Flatpaks, which actually impeded my ability to test the latest versions of everything. It'd be nice to see some improvement here going forward.
My biggest project this year has been working towards gating Rawhide critical path updates on the openQA tests, as we do for stable and Branched releases. This has been a deceptively large effort; making sure all the tests work OK on Rawhide was a relatively small job, but the experience of actually having the tests running has been interesting. There are, overall, a lot more updates for Rawhide than for any other release, and obviously, they tend to break things more often. First I turned the tests on for the staging instance, then after a few months trying to get on top of things there, turned them on for the production instance. I planned to run this way for a month or two to see if I could stay on top of keeping the tests running smoothly and passing when they should, and dealing with breakage. On the whole, it's been possible...but just barely. The increased workload means tests can take several hours to complete after an update is submitted, which isn't ideal. And because we don't have the gating turned on yet, when somebody does submit an update that breaks the tests, I have to make sure it gets fixed or untagged before the next Rawhide compose happens, or the tests will fail for every subsequent update too; that can be stressful. We've also had quite a lot of 'fun' with intermittent problems like systemd-oomd killing things it shouldn't. All of this can mean a lot of time spent manually restarting failed tests, coming up with awkward workarounds, and trying to debug the problems.
So, I kinda felt like things weren't quite solid enough yet to turn the gating on, and I wound up working down a path intended to help with the "too many jobs take too long" and "intermittent failures" angles. This actually started out when I added a proper critical path definition for KDE. That rather increased the openQA workload, as it added a bunch of packages to the critical path that weren't there before. There was an especially fun moment when a couple hundred KDE package updates got submitted separately as Rawhide updates, and openQA spent a day running 55 tests on all of them, including all the GNOME and Server tests.
As part of getting the KDE stuff added to the critical path, I wound up doing a big update to the script that actually generates the critical path definition, and working on that made me realize it wouldn't be difficult to track the critical path package set by group, not just as one big flat list. That, in turn, could allow us to only run "relevant" openQA tests for an update: if the update is only in the KDE critical path, we don't need to run the GNOME and Server tests on it, for instance. So for the last few weeks I've been working on what turned out to be quite a lot of pieces relevant to that.
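To make the idea concrete, here's a minimal sketch (in Python) of what group-aware test selection could look like. All the group names, package sets, and the test mapping here are made up for illustration - they're not the real critical path definition or the real openQA scheduler logic:

```python
# Made-up grouped critical path definition: group name -> packages.
CRITPATH_GROUPS = {
    "core": {"glibc", "systemd", "rpm"},
    "critical-path-gnome": {"gnome-shell", "mutter"},
    "critical-path-kde": {"plasma-workspace", "kwin"},
    "critical-path-server": {"freeipa-client", "postgresql"},
}

# Which openQA test suites matter for each group (also made up).
TESTS_BY_GROUP = {
    "core": {"base_services_start", "desktop_update_graphical"},
    "critical-path-gnome": {"desktop_login", "desktop_browser"},
    "critical-path-kde": {"desktop_login_kde"},
    "critical-path-server": {"server_freeipa", "server_database"},
}

def groups_for_update(update_packages):
    """Return the critical path groups an update's packages fall into."""
    return {
        group
        for group, members in CRITPATH_GROUPS.items()
        if update_packages & members
    }

def tests_for_update(update_packages):
    """Return only the openQA tests relevant to the update's groups."""
    tests = set()
    for group in groups_for_update(update_packages):
        tests |= TESTS_BY_GROUP[group]
    return tests

# A KDE-only update no longer triggers the GNOME and Server tests:
print(tests_for_update({"kwin"}))  # {'desktop_login_kde'}
```

The real moving parts are spread across the critpath script, Bodhi, and the openQA scheduler, but the core of the filtering idea is just set intersections like these.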
First, I added the fundamental support in the critical path generation script. Then I had to make Bodhi work with this. Bodhi decides whether an update is critical path or not, and openQA gets that information from Bodhi. Bodhi, as currently configured, actually gets this information from PDC, which seems to me an unnecessary layer of indirection, especially as we're hoping to retire PDC; Bodhi itself could just as easily be the 'source of truth' for the critical path. So I made Bodhi capable of reading critpath information directly from the files output by the script, then made it use the group information for Greenwave queries and show it in the web UI and API query results. That's all a hard requirement for running fewer tests on some updates, because without it, we would still always gate on all the openQA tests for every critical path update - so if we didn't run all the tests for some update, it would always fail gating. I also changed the Greenwave policies accordingly, so that once our production Bodhi is set up to use all this new stuff, only the appropriate set of tests has to pass for each critical path group; until then, the combined policy for the non-grouped decision contexts Bodhi still uses works out identical to what it was before.
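For the "read it directly from the script's output" part, here's roughly what that could look like. The file layout and JSON structure are assumptions made for the sketch, not the actual format the script emits or the code Bodhi ended up with:

```python
import json
from pathlib import Path

def load_critpath_groups(directory):
    """Load {group name: set of package names} from per-group JSON
    files written by the critpath script (layout assumed here)."""
    groups = {}
    for path in Path(directory).glob("*.json"):
        with path.open() as fh:
            groups[path.stem] = set(json.load(fh))
    return groups

def critpath_groups_for_update(update_packages, groups):
    """Return the sorted group names this update is in; an empty
    list means the update isn't critical path at all."""
    return sorted(
        group for group, packages in groups.items()
        if update_packages & packages
    )
```

The point is that Bodhi can answer "is this update critical path, and in which groups?" from those files alone, and pass the group names along in its Greenwave queries, instead of treating "critical path" as a single yes/no property fetched from PDC.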
Once a new Bodhi release is made and deployed to production, and we configure it to use the new grouped-critpath stuff instead of the flat definition from PDC, all the groundwork is in place for me to actually change the openQA scheduler to check which critical path group(s) an update is in, and only schedule the appropriate tests. But along the way, I noticed this change meant Bodhi was querying Greenwave for even more decision contexts for each update. Right now, for critical path updates, Bodhi usually sends two queries to Greenwave (if there are more than seven packages in the update, it sends 2*((number of packages in update+1)/8) queries). With these changes, if an update was in, say, three critical path groups, it would send 4 (or more) queries. This slows things down, and also produces rather awkward and hard-to-understand output in the web UI. So I decided to fix that too. I made it so the gating status displayed in the web UI is combined from however many queries Bodhi has to make, instead of the result of each query being displayed separately. Then I tweaked Greenwave to allow querying multiple decision contexts together, and had Bodhi make use of that. With those changes combined, Bodhi should only have to query Greenwave once for most updates, and for updates with more than seven packages, the displayed gating status won't be confusing any more!
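To give a feel for the arithmetic, here's a back-of-the-envelope sketch. It assumes one query per decision context per batch of up to eight packages, which is a simplification of the behaviour described above; the numbers are purely illustrative:

```python
import math

def greenwave_queries(num_packages, num_contexts):
    """Roughly how many Greenwave queries Bodhi makes, assuming one
    query per (decision context, batch of up to eight packages)."""
    batches = max(1, math.ceil(num_packages / 8))
    return num_contexts * batches

# Two decision contexts (the pre-grouping situation):
print(greenwave_queries(3, 2))   # 2 - a small update, two queries
print(greenwave_queries(20, 2))  # 6 - bigger updates multiply fast

# With grouped critpath, an update in three groups could mean four
# contexts, so the multiplication gets worse:
print(greenwave_queries(20, 4))  # 12

# With all contexts combined into a single query per batch:
print(greenwave_queries(20, 1))  # 3
```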
I'm hoping all those Bodhi changes can be deployed to stable soon, so I can move forward with the remaining work and ultimately find out how much of an improvement we get. I'm hoping we'll wind up running rather fewer tests overall, which should reduce the wait time for tests to complete and also mitigate the problem of intermittent failures a bit. If this works out well enough, we might be able to move ahead with actually turning on the gating for Rawhide updates, which I'm really looking forward to doing.