Two new, huge test servers donated to F-Droid by the GCC Compile Farm

We just got two huge servers devoted to F-Droid from the GCC Compile Farm. Each one has a 32-thread/16-core CPU, 144GB RAM, and 2.7TB of disk. These machines have zero redundancy and no public IPs, so they can’t be used for public-facing services; they are really only suitable for CI builds and test environments. It is exciting nonetheless. Our machines are gcc136 and gcc137.

For example, it means we can give root access to a complete buildserver instance to all fdroid @contributors, since we can easily have 5-10 complete buildserver host/guest setups running in parallel. @uniqx has done quite a bit of work to automate the deployment to the GCC boxes using Ansible, and I’ve done a little bit too. What form those setups actually take is up to all of us to decide. What seems clear to @uniqx and me is that those boxes should provide at least these services:

  • complete clone of the entire production buildserver/repo environment, with all existing APKs, website publishing, etc.
  • multiple, disposable buildserver environments that contributors can get full shell access to
  • full buildserver environment running builds for all fdroiddata merge requests and all RFP issues via gitlab-ci
  • a mirror of the full archive to test against
  • gitlab-ci “runners” which can host Android emulators in KVM mode, to run fast, reliable emulator tests
  • the possibility for getting a shell in any gitlab-ci job directly in the website
  • building the website, which currently requires ~24GB RAM and is massively parallel (one process per language)

What else do people want to run there? Keep in mind it needs to be automated and/or disposable. @uniqx and I have started with Ansible for automation, and @Bubu said he’s also familiar with it.


@hans I’m not sure if you meant that with this quote, but what’s the state of F-Droid’s verification server? Can we use those machines to try to reproduce the apps, and publish a website with the results on some other server?

The linked gcc136/gcc137 pages only show 9.8GB disks; with 2.7TB, don’t you dream of an AOSP Treble GSI, with all the F-Droid- and privacy-specific components known?

A VM was donated for the verification server; it’s mostly up and running, it just needs someone to finish it. That one has a public, static IP, for ports 80 and 443 only.

I’d like to suggest using them for seeding the repo and archive on IPFS, as this doesn’t require a public IP.

Or an official .onion address for the repos :wink:

I have no clue how .onion addresses and Tor stuff work, but I can’t imagine it taking up enough resources to exclude doing other things as well :smiley:

The hard part is that these machines are not set up to be redundant at all, so a disk failure means the machine will be down for a while and all the data will be gone. Not great for a network service. I was thinking of hosting the .onion elsewhere, on a more appropriate box that is already running.

Well, with IPFS that wouldn’t be a problem, as anyone requesting the hash for the repo or archive will simply load it from other peers who have pinned it. This is exactly why I think these servers are suitable for hosting IPFS content.
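For illustration, the pinning model works roughly like this; the CID below is a placeholder, not the repo’s actual content hash, and the sketch only prints the command rather than running it:

```shell
#!/bin/sh
# Sketch of the pinning model described above: any peer that pins the repo's
# CID keeps serving it even if one host disappears.
# REPO_CID is a placeholder, NOT the real hash of the F-Droid repo.
REPO_CID="QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"

# On a mirror you would run `ipfs pin add`, which fetches the content from
# any online peers and stores a local copy. This sketch only prints it:
echo "ipfs pin add /ipfs/$REPO_CID"
```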

(quoting from #fdroid-dev) Perhaps we could move an actual buildserver and the website deploy to the new cfarm servers. The hardware seems to be hosted by a large, respected university, so perhaps it is trustworthy enough. The hard part there is: who wants to be the sysadmin who takes over from @ciaran in ensuring the release builds are working? @Bubu has said he is willing. There should also be backup admin people.

Looking at the machine list, there are a couple of builders there: OpenWRT, Lineage, VLC. I wonder if they are the official builders? The current buildserver setup has the security advantage of no remote access, but it has serious bus-factor issues. If we really have a lot of reproducible builds, then this would be totally fine, especially since the verification server is on totally different infrastructure in a different country, a different continent even, run by different people.

Some of the official Debian release build machines are also hosted at OSUOSL: anything marked with buildd. I asked some Debian developers about this idea, and they seem to think it is reasonable (_hc is me):

_hc: so for fdroid, we’re considering hosting release builds on hardware at OSUOSL. I’m wondering where/how the Debian release builders are hosted and whether they have remote access, etc.
Mithrandir: we have remote access, yes. They’re hosted in various places.
Mithrandir: has an overview
Mithrandir: buildd = “build daemon”, which is likely what you want to look for on that list.
_hc: thanks. I am familiar with buildd and that list, I just never was sure whether those were the actual release builders. I see some buildd machines at OSUOSL
weasel: osl are good people
Mithrandir: indeed they are.
_hc: so it’s good to hear we’re not crazy for thinking of putting key build machines there, running Debian, of course
_hc: is there somewhere we can read up more on DSA’s experience regarding things like root access policy and how to handle legal requests? my searches haven’t turned up much
Mithrandir: not really. What are you wondering about?
_hc: for things like govt requests, it would be good to have something outlined in advance, rather than freaking out when we get one. I’ve only handled a couple, and it was mostly just: forward the email to the EFF lawyer. With a lead sysadmin in Germany, I don’t know the process there, nor German lawyers to refer to.
Mithrandir: I don’t think we’ve received any government requests. I could be misremembering, of course.
_hc: that would be ideal :slight_smile:
svuorela: will that be before or after the gag order?
_hc: for example, I have little idea of whether there are govt orders with gags in Germany. I know the US law pretty well, and the Austrian law somewhat.
Mithrandir: I don’t know German or Austrian law, as I’m Norwegian. :slight_smile:
_hc: can the Norwegian govt give you a secret order?
Mithrandir: not entirely sure, I’d have to check.
_hc: perhaps being friends with Nick Merrill and knowing that whole story has made me more paranoid…
Mithrandir: I think talking about gag orders is a bit pointless, since if somebody were subject to one, they couldn’t tell, and if they weren’t you wouldn’t necessarily believe them. We try to limit the amount of information we keep that’s not public, though, which would make a bunch of requests fairly easy to comply with.
_hc: well, there are warrant canaries and the like
_hc: yeah, the US “NSL” secret order can only request metadata
Mithrandir: but can you trust warrant canaries? Hard to tell
_hc: but the UK “snooper’s charter” seems to allow demands for private keys and things like that, or at least that’s what I’ve heard. and I know in China, the govt orders backdoors in services frequently.


I think we can do both @Licaon_Kter’s and @Swedneck’s ideas easily if they’re deployable with an Ansible script.
So, on those machines, we run two VMs, deployed with Ansible: one machine automatically mirrors the index and serves it over Tor, and the other basically does the same but serves it via IPFS. If those machines should fail, the Tor mirror would be offline, but the IPFS network would just lose one seed. As there are hopefully more (Tor) mirrors, the client would automatically switch to another one, so I think this is indeed a great use case for those machines. As far as I know, Tor doesn’t need a public IP address either.

Now it’s just up to someone to prepare the Ansible role for the two mirrors, so that anyone, including ourselves, can deploy them.
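A minimal sketch of what the Tor-mirror VM’s role might do. The rsync source, local paths, and port numbers here are all assumptions for illustration (the real layout would be decided in the Ansible role), and the network steps are left commented out:

```shell
#!/bin/sh
# Hedged sketch of the Tor-mirror VM. All paths, the rsync source, and the
# ports are assumptions for illustration, not F-Droid's actual setup.
set -eu

MIRROR_DIR="${MIRROR_DIR:-./fdroid-mirror}"
# In production this fragment would live under /etc/tor/:
TORRC_FRAGMENT="${TORRC_FRAGMENT:-./fdroid-mirror.torrc}"

mkdir -p "$MIRROR_DIR"

# 1. Pull the signed index and APKs from an existing public mirror
#    (hypothetical URL):
# rsync -av --delete rsync://mirror.example.org/fdroid/repo/ "$MIRROR_DIR/"

# 2. Declare an onion service. Tor only makes outbound connections, so no
#    public IP is needed, which matches these machines' constraints.
cat > "$TORRC_FRAGMENT" <<'EOF'
HiddenServiceDir /var/lib/tor/fdroid-mirror/
HiddenServicePort 80 127.0.0.1:8080
EOF

# 3. Serve the mirror on localhost; Tor forwards onion traffic to this port:
# python3 -m http.server 8080 --directory "$MIRROR_DIR"

echo "wrote $TORRC_FRAGMENT"
```

The IPFS VM would replace steps 2 and 3 with pinning the synced directory, but would otherwise follow the same pattern.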

As far as I understand it, it is not yet possible to use IPFS with fdroidclient. So I think we should hold off on discussing IPFS here until we sort out our pressing core infrastructure needs. That said, a VM set up by Ansible would be the way to do it, like @NicoAlt said, so someone could start working on that independently at any time.

@hans That’s right, fdroidclient doesn’t yet support IPFS. But if I understand @Swedneck correctly, there is another use case: people wanting to run a mirror could choose between rsync and IPFS, and might get better speeds with IPFS depending on their location. As the index is still signed, we don’t have to trust other machines seeding the index in the IPFS network.

But you’re right, anyone reading this can start working on that Ansible role, and if it works and doesn’t cause much hassle, we can deploy it there.

It’s perfectly possible to use IPFS, as long as there’s HTTP support (ironically :stuck_out_tongue:).
You only need a TXT record with the text dnslink=/ipfs/$(current hash of the fdroid repo), then you can use any public IPFS gateway.
And if you want a proper domain, you just need to add a CNAME record pointing to a public gateway with valid HTTPS certs for that subdomain.
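As a concrete sketch, the TXT record value would be built like this; the CID is a placeholder for the repo’s current hash, and the domain in the comment is hypothetical:

```shell
#!/bin/sh
# Build the DNSLink TXT record value described above.
# REPO_CID is a placeholder; in practice it is the repo's current IPFS hash,
# so the record must be updated whenever the repo content changes.
REPO_CID="QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"
printf 'dnslink=/ipfs/%s\n' "$REPO_CID"

# The record would then be published in DNS along the lines of:
#   _dnslink.fdroid.example.org.  TXT  "dnslink=/ipfs/<CID>"
```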

Also here is an ansible role for ipfs, provided by the peeps in

Here’s a working example of the fdroid repo on ipfs:
It can also be fetched from any other public gateway, to get better potential bandwidth and latency:

It should work just fine as a normal repo, although it won’t be fully up to date, since I forgot to rsync it before adding it to IPFS.

These links should also be very useful, they provide some tips for exactly this scenario:

I agree that IPFS is cool and promising, but please don’t hijack this thread to promote it. Having more mirrors is not a pressing concern at the moment.

Sorry, would it have been better to put it all in the same comment?

As I see it, the advantage of IPFS would be decentralizing who is running the infrastructure. This new hardware is for the infrastructure that cannot yet be decentralized. So I request that we stop discussing IPFS in this thread.

I’m happy with @ciaran’s current production system, so I wouldn’t mind keeping things as they are. On the other hand, doing release builds in the OSUOSL datacenters does sound very practical. So I don’t really have a strong opinion here. What’s the prevailing opinion among @contributors on this?