fdroid checkupdates running in GitLab CI

We have a new fdroid checkupdates job running in GitLab CI. This means the output and failures of this part of the F-Droid build cycle are now easily viewable for contributors.

The job currently runs four times daily and takes about 70-90 min to complete.
If this proves stable enough we can increase the frequency. For now this is already a solid improvement over the once every 2-3 days we had before.

The link to the last triggered pipeline can always be found here:

Here is what a completed run looks like:

The job is split into 10 batches of 350 apps, running in parallel. You’ll have to click around a bit to find the output for a specific appid; they go alphabetically from top to bottom.
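Since the batches run alphabetically from top to bottom, the split can be pictured as a contiguous slicing of the sorted appid list. This is only an illustrative sketch, not the actual job script; `make_batches` and the example appids are made up, while the batch count (10) and size (350) come from the post:

```python
def make_batches(appids, batch_count=10):
    """Split a sorted list of appids into batch_count contiguous slices."""
    appids = sorted(appids)
    size = -(-len(appids) // batch_count)  # ceiling division
    return [appids[i:i + size] for i in range(0, len(appids), size)]

# 3500 hypothetical appids -> 10 alphabetical batches of 350 each
batches = make_batches([f"org.example.app{n:04d}" for n in range(3500)])
```

So to find a given appid's output, you'd pick the batch whose alphabetical range covers it.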


That’s great, but won’t we trigger some rate limits if we hit, say, GitHub that often?


Haven’t seen anything like that so far. If you notice anything in the logs (the jobs won’t be marked as failed when a repo can’t be reached, so you need to actually view the job output), please tell me.


That’s why I’ve split up the checks in my repo: apps that haven’t been updated for more than a year are no longer checked daily. They go into a separate run scheduled weekly (or was it monthly, with apps not updated for 2 years? I need to look that up – once configured, one tends to forget if it simply works :rofl:)
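The split described above could be sketched as a simple partition by last-update age. This is a hypothetical illustration of the idea, not the actual repo config; the function name and example appids are made up, and the one-year cutoff is the one mentioned in the post:

```python
from datetime import datetime, timedelta

def split_by_staleness(apps, now, cutoff=timedelta(days=365)):
    """Return (daily, weekly) appid lists based on last update time."""
    daily, weekly = [], []
    for appid, last_updated in apps.items():
        (weekly if now - last_updated > cutoff else daily).append(appid)
    return daily, weekly

# Example: one recently updated app, one stale for over two years
apps = {
    "org.example.fresh": datetime(2023, 5, 1),
    "org.example.stale": datetime(2021, 1, 1),
}
daily, weekly = split_by_staleness(apps, now=datetime(2023, 6, 1))
```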

But yeah, good to know that at least checkupdates runs regularly now, and even with such a frequency! Kudos to all who worked on this! :bowing_man:

Can’t wait for the daily builds :smiley:


Thanks for doing this @bubu! It’s also a big step for the bus factor: another big piece is moved off of CiaranG’s infrastructure. As long as the checkupdates job runs only on our runners, we should be fine in terms of GitLab CI runner time. It is quite a bit of bandwidth, though. @bubu, have you checked with the GCC cfarm admins whether that amount is OK? If not, I think it would be good to send them an email. Those boxes are set up as a compile farm, so they might not be on a network with cheap bandwidth.


Hm, there was an outage of about 12 hours in running checkupdates today, as the VM host got restarted (or, judging from the filesystem check, rather got power-cycled). The runner VM should have come up again automatically but didn’t. I tested a manual restart of the host system and the guest autostart worked, so I have no idea what happened, nor can I rule out it happening again.

Also, GitLab helpfully just moved all jobs that can’t be picked up by a runner (because none is online with the specified tag) to canceled, without sending out an error email, so there’s no way to get notified there. I hope I can find some time to set up proper monitoring for this soon-ish.
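One minimal form such monitoring could take is an "overdue" check: alert if the last successful run is older than the scheduled interval plus some slack. This is only a sketch of the idea; in practice the timestamp would come from the GitLab API, and the function name and 26-hour threshold here are assumptions, not anything from the actual setup:

```python
from datetime import datetime, timezone

def pipeline_is_overdue(last_success, now, max_age_hours=26):
    """True if the last successful run is older than max_age_hours."""
    age = now - last_success
    return age.total_seconds() > max_age_hours * 3600

now = datetime(2022, 3, 2, 12, 0, tzinfo=timezone.utc)
ok = datetime(2022, 3, 2, 3, 0, tzinfo=timezone.utc)     # ran this morning
stale = datetime(2022, 3, 1, 3, 0, tzinfo=timezone.utc)  # missed a day
```

A small cron job running a check like this could send the error email that GitLab doesn’t.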

I think it would be worth removing the gitlab-ci/docker dependency from this before investing more time in it. I’d like to do the same with the website deployserver. Using gitlab-ci for those kinds of things ends up being more trouble than it’s worth. Even in CI it can be troublesome, but it’s easier to manage there.

This wasn’t related to gitlab-ci at all. It was about the VM this runs on not coming up correctly. We need monitoring for this regardless of how the jobs are run. Yes, it’s unfortunate that GitLab doesn’t do the monitoring for us here, but forgoing GitLab doesn’t magically give us VM uptime monitoring either.

I know, I just want to be sure that new work like monitoring isn’t tied
to gitlab-runner or gitlab-ci.

Checkupdates will for now run once a day at 0300 UTC as the rest of the pipeline currently has trouble catching up.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.