I imagine it probably is inspected, just not by the public. They probably do it themselves.
And they may have contracts with certain companies specializing in this sort of security that also inspect it.
And there are also cybersecurity companies that test it whether they’re contracted or not. At some of them, the entire job revolves around finding bugs (especially security bugs) in other companies’ software.
Just because it’s not on GitHub doesn’t mean it isn’t a good, thoroughly tested product.
Surely we’re not gullible enough to accept “we inspected ourselves and determined we are secure and you should use our services”?
That’s where the second and third paragraphs come in: other companies likely test it themselves, too.
They’ll typically report security bugs privately and then, after a set number of months, publicly announce the bug. Ideally, that pressure forces the vendor to patch the bug before the announcement; if not, the vendor ends up with a publicly known security bug that bad actors can now exploit. The announcement also lets the public (including companies) know to update their software.
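To make that timeline concrete, here’s a minimal sketch in Python. It’s purely illustrative: the 90-day window mirrors a common industry convention (e.g., Google Project Zero’s disclosure policy), and all dates and names are made up, not taken from any real report.

```python
from datetime import date, timedelta

# Illustrative coordinated-disclosure timeline.
# 90 days is a common grace period (e.g., Google Project Zero's policy);
# the dates used below are hypothetical.
DISCLOSURE_WINDOW = timedelta(days=90)

def disclosure_deadline(reported_on: date) -> date:
    """Date on which the bug goes public if the vendor hasn't shipped a fix."""
    return reported_on + DISCLOSURE_WINDOW

def status(reported_on: date, patched_on: date | None, today: date) -> str:
    """Summarize where a privately reported bug stands as of `today`."""
    deadline = disclosure_deadline(reported_on)
    if patched_on is not None and patched_on <= deadline:
        return f"patched on {patched_on}, safe to announce publicly"
    if today < deadline:
        return f"still private, vendor has until {deadline} to ship a fix"
    return f"deadline {deadline} passed, bug is now publicly known and exploitable"

# Example: a bug reported privately on 2024-01-15.
print(status(date(2024, 1, 15), patched_on=None, today=date(2024, 3, 1)))
print(status(date(2024, 1, 15), patched_on=None, today=date(2024, 6, 1)))
```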
You realize that Microsoft’s code is inspected as well, even more heavily and under more regulation… and yet they still end up with major breaches. Security evolves through open-source collaboration and inspection by experts who aren’t being paid to say you’re doing a good job.
You are making a lot of good points… But is there any other practical solution?
Seems like this is the best a normie on a budget can get.