OT: Dealing with a hosting company with its head up its rear end
aryeh.friedman at gmail.com
Fri Aug 14 19:46:45 UTC 2020
On Fri, Aug 14, 2020 at 2:48 PM Tim Daneliuk <tundra at tundraware.com> wrote:
> On 8/14/20 12:49 PM, Aryeh Friedman wrote:
> > If the controls can be circumvented they are essentially useless and
> > shouldn't be in place in the first place. Besides anyone who knows what
> > RDP or SSH is would also know how to circumvent controls designed for
> > non-technical people so that makes the blocking of them even more short
> > sighted. This is what I meant by security by obfuscation (i.e. hiding
> > obvious truths that everyone with any knowledge knows).
> I am not taking a position on whether or not blocking ssh is always good,
> bad, or irrelevant. However, I pretty fundamentally disagree with the
> position above as written. It is absolutely possible to dramatically
> reduce the technical attack surface by limiting what ports can be
> accessed on a given machine.
The question was not about blocking incoming ports, it was about blocking
outgoing ones.
> For example, suppose I have some batch process that ingests data and
> produces some sort of results. Assume that I only permit the inbound
> data and outbound results to be made available over a single mechanism -
> let's use an MQ system if you like. No other ports of any kind are open
> beyond the TCP/IP interface to the MQ system.
The issue was not the very idea of limiting ports in general (which I agree
can be useful up to a point), but rather the fact that the hosting
company's *NEW* policy is to limit ports to what *THEY* think you need, not
what you actually need, and then refuse to open what you actually need.
Also, IMO, the only reason outbound ports should be blocked is to prevent
malware/spyware from automatically/invisibly sending stuff out. I *DO NOT*
agree with or support the idea that humans should be blocked from doing
anything (anybody who really wants to get data out will find some way, even
if it is just what is between their ears).
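To be concrete, it is trivial to document which outbound ports a host's
policy actually permits. A minimal probe along these lines (nothing here is
specific to any provider; the hosts and ports you point it at are up to you):

```python
import socket

def outbound_open(host: str, port: int, timeout: float = 2.0) -> bool:
    # Return True if a full TCP handshake to host:port completes.
    # Note this only indirectly distinguishes "filtered by policy":
    # a refused or timed-out connect can also mean nothing is
    # listening at the far end.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `for p in (22, 443, 3389): print(p, outbound_open("example.org", p))`
-- the output obviously depends on the network you run it from.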
BTW, message queues rest on a fundamentally flawed assumption in many
application domains, including the one I am dealing with. First, they make
it impossible for third-party applications to be developed that interface
directly with the DB, which is unavoidable when the magic message queue is
closed source and only works with a fixed configuration (as is the case in
many such areas). Second, they give a false sense of having solved the
concurrency issues when no such solution is in place (the only real way to
solve them is with true record locking), and they give the developers of
any such system the false impression that they don't need to worry about
concurrency at all.

This is the *ROOT CAUSE* of why all the issues with the hosting came up in
the first place -- the other vendor I mentioned only in passing built just
such a system, and due to high turnover no one in their org has any idea
what concurrency issues, if any, exist in their app. That is why we need to
be paranoid about backups, and that is what caused all the flaws with the
hosting provider to become obvious and major.

Every other system I have seen based on message queues, OpenStack for
example, is a disaster waiting to happen (OpenStack even admits as much
when they say the worst possible disaster for a cloud is a power
failure?!).
> Let's further suppose that access to the MQ system, in- or outbound,
> is narrowly limited in time with dynamic firewalling/network rules.
> And let's harden this even more by making those inbound- and outbound
> payloads encrypted using one-time pad asymmetric keys.
That's the very system the law requires for us, and I can tell you from
first-hand experience that it is nowhere near secure, and anyone who says
it is has never attempted to actually use such a system. The exception is
the one-time pad, since there is no such thing in practice (not even this
idiotic idea the hosting company has of using TOTP).
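To see why a real one-time pad is unforgiving in practice, here is a
minimal sketch (the messages are made up): the math itself is trivial, but
reuse the pad even once and an eavesdropper recovers the XOR of the two
plaintexts with no key material at all. The operational burden -- truly
random pads, as long as every message, distributed securely, never reused
-- is exactly what doesn't exist in practice. TOTP, by contrast, derives
every code from the same reused shared secret: a rolling code, not a
one-time pad.

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    # XOR one-time pad: information-theoretically secure ONLY if the
    # pad is truly random, at least as long as the message, and never
    # reused.
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the message")
    return bytes(d ^ p for d, p in zip(data, pad))

msg1 = b"wire $100 to acct A"
msg2 = b"wire $900 to acct B"
pad = secrets.token_bytes(len(msg1))

ct1 = otp(msg1, pad)
ct2 = otp(msg2, pad)   # pad reuse -- the classic operational mistake

# XORing the two ciphertexts cancels the pad completely, handing an
# eavesdropper the XOR of the plaintexts with zero key material:
leak = bytes(a ^ b for a, b in zip(ct1, ct2))
print(leak == bytes(a ^ b for a, b in zip(msg1, msg2)))  # True
print(otp(ct1, pad) == msg1)                             # True
```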
> Can that system NEVER be compromised? Of course it can, but the
> compromise has to happen either at the physical server (or, by proxy,
> the hosting entity's console interface)... OR it has to happen somewhere
> *outside* the server itself.
> Think about what an attack on this system would entail:
> - Hacking access into the private network where all this runs.
Which, in a datacenter that has public components, is much easier than you
might think.
> - Figuring out how to compromise access to the MQ system at the moments
> in time it was handling traffic to/from the server AND showing up
> as a legitimate subscriber to those topics.
Completely trivial on most message queues. The mere fact that you're
holding the message at all makes it vulnerable.
> - Figuring out how to crack into a one-time pad encoded payload -
>   something known to be computationally impossible in reasonable time
>   for a sufficiently good key - at least until quantum cell phones are
>   a thing.
Relying on too many moving parts is always less secure than relying on
fewer and better designed ones. This solution has far too many moving
parts, and that is frankly the main source of the idiocy of the hosting
provider this thread is asking about. (See other replies in the thread
beyond mine to see why.)
> Is the risk zero? No. And certainly the same set of concerns has to be
> applied to the surrounding infrastructure (network, MQ system, key
> management system, ...). But the system as described above, built with
> proper rigor and skill, is really, really, REALLY hard to break into,
> in large part because the only place where the plain data lives is in a
> server that has only very brief connection with anything, and then only
> over a very narrow mechanism.
The system above increases (not reduces) your attack surface exponentially.
> My point is that the "principle of least privilege" is very much a proper
> basis for designing security hardened systems. So not allowing ssh on a
> system with a web server isn't security by obscurity. It's just limiting
> the attack surface ... a very reasonable decision for some applications.
Yes, the principle is sound, but the application you're making of it is
not, nor is any attempt to externally limit what can and can't be done
(except for passive firewalls).
> In general, security has to be seen as a risk management activity, not
> a technical one. The amount of security focus on, say, the nuclear launch
> codes had jolly well better be exponentially greater than protecting the
> grocery list on your cell phone. But *if* you need great protection,
> reduction of attack surface is entirely legit.
Security first and foremost is a technical issue, and it is a huge mistake
to say it is not. If you can't afford the right security, and/or the right
security makes the system unusable and you need to loosen it up for that
reason, that is when it becomes non-technical, in that you need to decide
where to compromise.
> The truth is that the single greatest weakness in the design above has
> nothing to do with the technology at all. It has to do with the recipient
> of the
The technical aspects of it *ARE* its single biggest weakness, because the
technical aspects are fundamentally flawed, starting with the mindset
behind them (i.e. the mindset of "I know better than the mere mortals who
actually have to use it, because they are all idiots"). It makes it
impossible to secure stuff with the only thing in the data universe that
is 100% secure, namely what is between my ears [it is impossible to force
someone who would rather die than give out their password to ever give it,
but once you write it down you have lost this last line of defense]. This
assumes they have had proper training in not falling for social
engineering (which no truly paranoid person would fall for anyway).
> report generated by our mythical server. If that recipient is a person,
> the risk is that they will "leak" the report outside the organization in
> a careless or malevolent manner. THAT is what Data Loss Prevention
> systems are
If you don't trust someone to do stuff right in the first place, *DON'T*
hire them. Once you have hired someone you don't trust, no amount of
safeguards will prevent data loss (if nothing else, there is always what's
between their ears).
> addressing (often poorly in my experience). Most companies try to reduce
> this particular threat by turning off USB access on laptops, blocking
> any form of remote access outside their own networks, dividing their
> networks into separate, hardened subnets, doing deep scans and audits on
> email traffic, and so forth. And yet, even when done with almost infinite
> money and endless paranoia, this remains one of the most intractable
> problems in information security. Two words: Edward Snowden
Like I said, if you don't trust someone, don't hire them; and if your
management can't be trusted not to piss off its employees so much that they
might turn against your org, then it is an organizational problem, not a
security one.
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org