Sure. Were it not, I wouldn’t have invested this much time in this thread and the related reports.
While it is a popular security mantra for the general population, it is completely unusable in practice, as the issue is waaaaay more complex than that. As the saying goes: “the only truly safe computer is one that has been disconnected from the network, turned off, crushed in a hydraulic press, encased in concrete and sunk to the bottom of the ocean.”
In the end (in practice) it boils down to the ephemeral concept of trust, i.e. how much do I trust F-Droid not to sign & ship malicious software (intentionally or accidentally) to me? And the answer (for me) is “pretty much”!
To detail why the issue is so complex, let’s say you used only open-source software on your phone (do you?), and you scanned every line of that source, and of every library it uses, to verify that none of it contains malicious code (did you really, personally?). There would still be a zillion other security problems remaining:
- you might’ve missed the malicious code. Quick skimming will catch only the worst offenders, and even a hard, dedicated, multi-month source audit by security experts is not fool-proof. For skeptics sure of their ability to spot malicious code instantly, I’d highly recommend checking out the finalists of the Underhanded C Contest, where the point of the contest is to write exploitable code for a specific problem that looks completely innocuous, so that it is very hard to find even when you know the simple one-page program contains a malicious bug, with bonus points given for plausible deniability (e.g. a typical programmer mistake like an off-by-one error). A sketch of the idea follows this list.
- especially on Android, compiling code pulls other code from the net. While F-Droid tries very hard to make sure only open-source things are included, if a company hides the fact that a component is not really open source, closed-source code can still (extremely rarely, but still) sneak through. For example, Google’s ARCore managed to do exactly that relatively recently, so it is not impossible.
- the code that you can see on e.g. github.com might not be the same code that you (or someone else) check out (i.e. you might get one copy of the code, and F-Droid a different one; the second sketch after this list shows the out-of-band digest comparison that is about the best defence you have). You’d have to trust not only Microsoft (GitHub’s owner), but also the git authors and all the rest of the security stack (which is quite hard, as GitHub does not publish its source code), plus the security of the currently used crypto against MITM attacks by other actors, etc. A company might even be forced by a government to do such things, or it might be the result of a third-party hack, even if the company is benevolent (which in itself is not an attribute I’d slap on Microsoft!).
- even if the code that reaches the compiler is the same code written by the trusted author, without any intentional malware or exploitable bugs (which is a huge leap of faith in itself!), you’d still have to trust that the whole build toolchain is not compromised. Even if you check the source code of all your compilers and the rest of the toolchain (good luck with that, if you wish to accomplish anything else in your life), you can still be fooled: see Ken Thompson’s /bin/login hack, a popular teaching example sketched third after this list.
- even if all the code and the toolchain were correct, not tampered with, and bug-free (which they are not, not by a far stretch), there is still the issue of the OS itself being full of exploitable bugs and downright intentional malware. You might significantly reduce that risk by replacing the stock ROM with Replicant, if your phone is supported and you’re OK with losing (significant) hardware functionality (like WiFi, GPS, mobile data, etc.) due to missing open-source alternatives.
- even with all those pure-software issues addressed (if you somehow still believe the fable of a “secure computer” is attainable), below all of that sit spy chips, updatable things like microcode, hardware built so that other remote-controlled chips can modify the CPU, and special hardware intentionally built to override the CPU outside of user control, like Intel AMT. While those hardware issues can be significantly reduced if you go for open hardware like OpenMoko (currently still below par, technology- and feature-wise), it still does not solve the problem completely.
- and those are just the most popular issues that I (not a particularly well-invested security expert, but still somewhat above the average user) know of and can pull off the top of my head. I’m sure there are many more known to people invested in the subject, and more still to blackhats and state actors like the NSA.
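To make the first bullet concrete, here is a sketch of my own in the spirit of the Underhanded C Contest (not an actual contest entry; the function and names are invented for illustration): a length check that looks like routine defensive programming, with one plausibly-deniable “typo”, a signed length, that makes it exploitable:

```c
#include <stdio.h>
#include <string.h>

#define MAX_LEN 64

/* Stores a network-supplied record in a fixed buffer. Looks safe at
 * a glance: the length is checked before the copy. The deniable
 * "typo": len is a signed int, so a negative length passes the
 * check and is then converted to a huge size_t inside memcpy,
 * smashing the stack. */
static int store_record(const char *input, int len)
{
    char buf[MAX_LEN];

    if (len > MAX_LEN)          /* rejects oversized records... */
        return -1;              /* ...but not negative ones */

    memcpy(buf, input, len);    /* len = -1 becomes SIZE_MAX here */
    printf("stored %d bytes (first byte: %c)\n", len, buf[0]);
    return 0;
}

int main(void)
{
    /* Well-formed input behaves perfectly, so casual testing and
     * quick skimming find nothing. */
    store_record("hello", 5);
    return 0;
}
```

A reviewer who sees the bounds check tends to move on; spotting the bug requires remembering that memcpy silently converts the signed length to an unsigned size_t.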
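As for the GitHub bullet, about the best defence you realistically have is to compute a cryptographic digest of the bytes you were served and compare it, over an independent channel, with what other people (or the F-Droid build servers) got. A minimal sketch using OpenSSL’s EVP API (assumes OpenSSL is installed; compile with `cc sha256.c -lcrypto`):

```c
#include <openssl/evp.h>
#include <stdio.h>

/* Prints the SHA-256 digest of a file, so it can be compared
 * out-of-band with the digest others computed for "the same" code. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char chunk[4096];
    size_t n;
    while ((n = fread(chunk, 1, sizeof chunk, f)) > 0)
        EVP_DigestUpdate(ctx, chunk, n);

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int dlen;
    EVP_DigestFinal_ex(ctx, digest, &dlen);
    EVP_MD_CTX_free(ctx);
    fclose(f);

    for (unsigned int i = 0; i < dlen; i++)
        printf("%02x", digest[i]);
    printf("  %s\n", argv[1]);
    return 0;
}
```

Note that this only shifts the trust around: you now trust OpenSSL, your compiler, and the channel you compare digests over, which is exactly where the next bullet picks up.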
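And a heavily compressed, purely illustrative sketch of the Thompson attack (in the real thing the payloads live only inside the compiler binary, invisible to any source audit; print stubs stand in for actual code generation here):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of Ken Thompson's "Reflections on Trusting
 * Trust" attack, compressed into a toy "compiler". */

static void emit_login_backdoor(void)
{
    puts("  [also emitting code that accepts a hard-wired password]");
}

static void emit_self_replicating_trap(void)
{
    puts("  [also re-inserting both payloads into the new compiler]");
}

static void compile(const char *source_file)
{
    printf("compiling %s\n", source_file);

    /* Payload 1: when compiling the login program, add a backdoor. */
    if (strstr(source_file, "login.c"))
        emit_login_backdoor();

    /* Payload 2: when compiling the compiler itself, re-insert both
     * payloads, so rebuilding from clean, fully audited compiler
     * source still produces a poisoned binary. */
    if (strstr(source_file, "cc.c"))
        emit_self_replicating_trap();
}

int main(void)
{
    compile("hello.c");  /* behaves normally */
    compile("login.c");  /* silently backdoored */
    compile("cc.c");     /* perpetuates the attack */
    return 0;
}
```

Payload 2 is what makes the attack so pernicious: auditing and then recompiling the compiler’s clean source with the poisoned binary just reproduces the poison.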
IOW, security is hard. Really hard. You just won’t believe how vastly, hugely, mind-bogglingly hard it is. I mean, you may think it’s hard to build a spaceship and visit other stars, but that’s just peanuts to security. Listen… (with apologies to Douglas Adams)
Theoretically, one might avoid most of those problems by building the computer hardware themselves, from sourced discrete parts too simple to be rigged. So: transistors, maybe even logic gates, but definitely nothing as complex as a memory chip or ($DEITY forbid!) a microprocessor. Then proceed to use toggle switches to enter your own assembler for it (because that keyboard might be compromised too), and from there you’re ready to write your complete software stack, from the OS upwards. Provided you can do all that without any bugs or anything exploitable by side-channel and other attacks (in either software or hardware), you should be mostly safe.
In practice, however, the only way to make sure your computer is safe is not to use one. Or, if that is not an attractive option, use security management techniques (most importantly, proper risk assessment) so that, even after you have somewhat reduced the risk, whenever and whatever bad thing happens (which it eventually will, even to the most die-hard security experts), you are OK with its scope and have remedies prepared.
“Security,” said Marvin, “don’t talk to me about security.”
(sorry for the longish diatribe, but that myth had to be dispelled)