config management Re: [nycbug-talk] A couple of security related questions

Tillman Hodgson tillman
Wed Oct 6 10:33:59 EDT 2004


On Wed, Oct 06, 2004 at 12:15:50AM -0400, George Georgalis wrote:
> On Tue, Oct 05, 2004 at 01:57:28PM -0600, Tillman Hodgson wrote:
> 
> but for now, yes I'm talking fstab. I'm not pro-cfengine either,
> maybe if I was more happy with a stock OS/Distro it'd be an option.

In theory cfengine can handle a heterogeneous environment. I'm too lazy
to make that work, though ;-)

> >Use rcp and push the critical files from a golden master. No, seriously:
> >Kerberized rcp is secure, data session encrypted (with the '-x' switch),
> >and can be easily automated from cron with the use of a keytab on the
> >golden master in place of a password (without needing to deal with the
> >mess of putting matching keys on all the clients machines). Push is
> >better than pull in this sort of situation simply because failure
> >detection and resolution is centralized.
> 
> umm, are you suggesting push passwd et al for NFS, and pull everything
> else from the gold master? but doesn't that mean compromise of any host
> can lead to compromise of all hosts -- since auth tokens for root on
> each host are on each host?

No. Kerberos isn't like SSH RSA keys -- there would be no auth tokens on
any host but the gold master. And if you're using Kerberos, the
/etc/master.passwd file has all the passwords disabled (or they can be
unique per-host as a backup if you prefer), as all authentication takes
place over the network in a secure fashion.
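
For the cron side, the golden master can grab a fresh ticket
non-interactively from a keytab before doing the push shown below. A
minimal sketch -- the principal name and keytab path here are
assumptions:

 # Hypothetical cron wrapper on the golden master: authenticate from
 # a keytab instead of a typed password, push, then drop the ticket.
 kinit -k -t /etc/krb5/push.keytab push/golden.example.org
 rcp -x /usr/local/golden/etc/fstab client1:/etc/fstab
 kdestroy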

> Isn't it important to use one of push / pull but not both?  I've always
> favored push, but after reading infrastructures.org pushpull, and
> considering a readonly CVSup, I'm thinking pull from gold....

Push the meta-data, but don't pull anything. If you have NFS mounts
you need on all clients (/usr/home, let's say) the golden master can do
this (in pseudo-scripting-code):

 rcp -x /usr/local/golden/etc/fstab $client:/etc/fstab
 rsh -x $client 'mount -a'

The idea is to push out the critical files via rcp, not via NFS. I only
mentioned NFS originally because you were talking about /etc/fstab and
it seems likely that you wanted to ensure NFS mountpoints were correct.

> kerberos, is sweet, but it's black magic to me, short term I'd rather
> maintain ssh keys. (BTW - when a box comes up, you can manually add a
> passphrase into ssh-agent then start crond with that environment and
> exit; to get passphrase ssh under cron :) )

ssh keys would work in this case too, if you don't mind the key
management issues and the potential for key compromise. Using different
key pairs per host would limit the spread of an intrusion.
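
A rough sketch of the per-host-keys approach (the host names and paths
are assumptions; you could also give each key a passphrase and use the
ssh-agent trick you describe):

 # On the golden master: one key pair per client (hypothetical names)
 ssh-keygen -t rsa -N '' -f ~root/.ssh/push_athena
 # In athena's ~root/.ssh/authorized_keys, pin the key to the master:
 #   from="golden.example.org",no-port-forwarding,no-pty ssh-rsa AAAA...
 scp -i ~root/.ssh/push_athena /usr/local/golden/etc/fstab athena:/etc/fstab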

> >The script, running on the golden master, can contain all kinds of
> >safety checks and can email details of inconsistencies to your cell
> >phone or whatever you use for notifications. Heck, if you use
> >something like rt3 to track problems you can have the script create a
> >trouble ticket for you and dump details into the ticket
> >automatically.
> 
> oh good idea, mon scripts submit rt tickets ;-)

Your help desk will /love/ it *evil grin*
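
A minimal sketch of what that push-plus-notify script might look like on
the golden master (the client list and mail address are assumptions; a
real rt3 hookup would mail the ticket queue instead):

 #!/bin/sh
 # Hypothetical push script, run from cron on the golden master.
 CLIENTS="athena caliban"              # assumed client list
 NOTIFY="root@golden.example.org"      # assumed notification address

 for client in $CLIENTS; do
     # -x encrypts the data session (Kerberized rcp/rsh)
     if ! rcp -x /usr/local/golden/etc/fstab ${client}:/etc/fstab; then
         echo "fstab push to ${client} failed" | \
             mail -s "push failure: ${client}" $NOTIFY
         continue
     fi
     rsh -x ${client} 'mount -a' || \
         echo "mount -a failed on ${client}" | \
             mail -s "mount failure: ${client}" $NOTIFY
 done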

> >You can also use mtree to check ownership/permissions and reset them
> >if necessary. The mtree master file can be rcp'ed in from a golden
> >master (and should be, as a local copy is vulnerable to tampering).
> 
> never heard of mtree, but it looks like it will "help the hack" I want
> to avoid. I would rather make a jail for each client on the gold
> master, then the clients can just no passphrase ssh rsync pull their
> junk, as root. 

mtree is part of FreeBSD; it's used during the installworld process to
build the filesystem tree and fix up permissions. So it's built into the
base OS and is designed precisely to solve the problem of verifying
filesystem permissions and fixing them if necessary.

Think of it as a Tripwire replacement built into the OS.
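
A minimal sketch, with assumed paths -- build the specification on the
golden master, push it out, and apply it on the client:

 # On the golden master: record owner/group/mode for /etc
 mtree -c -k uid,gid,mode -p /etc > /usr/local/golden/mtree/etc.spec
 # Push the spec and have the client fix any drift to match it
 rcp -x /usr/local/golden/mtree/etc.spec ${client}:/tmp/etc.spec
 rsh -x ${client} 'mtree -U -f /tmp/etc.spec -p /etc'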

> But I don't think auth is really an issue here, doesn't it really come
> down to auth tokens are on the client or not; and allowing them to get
> all the uid/gid/perms but not more than the particular files for the
> client.

I like using Kerberos with NIS because auth tokens are never on the
client (or on the "wire"/network, for that matter) as long as the
connections originate from the golden master and ticket forwarding isn't
enabled.

If you have IPs to burn, you could run a jail per client and run with a
minimal set of meta-data (passwd, group, etc) that applies only to that
jail. Maintenance probably becomes painful, though.
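
For the record, the old-style jail(8) invocation is a one-liner per
client (the path, hostname, and IP here are assumptions):

 # Hypothetical: start one jail per client on the golden master
 jail /jails/client1 client1.example.org 192.168.1.101 /bin/sh /etc/rc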

> But let me backup a little, for NFS do I need to sync any other files
> than: /etc/shadow /etc/passwd /etc/group

NIS can serve those files, though you'd probably want to combine NIS
with a central auth mechanism and run the RPC traffic over IPsec out of
healthy paranoia.

For that matter I like running all my RPC traffic over IPsec in
transport mode -- I got a free NFS speed increase out of it!

I did some testing a few months ago using a Sun Ultra 5 @360MHz running
-current and a generic Celeron 400MHz box. Here are the "plain text"
speeds (using netperf to simulate NFS):

[root@caliban ~]#  /usr/local/netperf/netperf -t UDP_STREAM -H athena
UDP UNIDIRECTIONAL SEND TEST to athena : histogram
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec
  9216    9216   10.01       13004   6587      95.81

Using Blowfish as the crypto algorithm rather than 3DES, here are the
results between the same machines:

[root@caliban ~]#  /usr/local/netperf/netperf -t UDP_STREAM -H secathena
UDP UNIDIRECTIONAL SEND TEST to secathena : histogram
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec
  9216    9216   10.01       14744      0     108.63

That 108.63 Mbit/s figure, greater than the 100Mbit maximum of the
Ethernet segment I was using, is due to the 'deflate' compression
algorithm in IPsec. That's roughly a 13% virtual "bandwidth" boost over
the plain-text run.

Naturally, latency suffers. There's always a trade-off. With faster
hardware the negatives get smaller, as the CPUs are able to do the
crypto work much faster than the network can feed them.

So, since I don't really trust RPC traffic for security, I create a VLAN
and configure all hosts to only accept IPsec transport-mode traffic on
it. I then run my RPC services on that VLAN. It's even better than
tcpwrappers/hosts.allow: I can cryptographically confirm that the host
isn't spoofing its IP.
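
The policy side of that is a couple of setkey(8) lines per peer in
/etc/ipsec.conf (the addresses are assumptions), requiring
transport-mode ESP in both directions:

 # /etc/ipsec.conf on 192.168.10.1 -- require ESP to/from 192.168.10.2
 spdadd 192.168.10.1 192.168.10.2 any -P out ipsec esp/transport//require;
 spdadd 192.168.10.2 192.168.10.1 any -P in  ipsec esp/transport//require;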

> I guess there is no practical (easy) way to have some extra accounts
> on a LAN box while keeping the global users from the NFS server synced
> too?

NIS does that. NIS users are _in addition to_ existing local users.
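
On FreeBSD that's just compat mode in the password file: local accounts
stay as ordinary lines, and one trailing line pulls in the NIS map. A
minimal sketch:

 # At the end of /etc/master.passwd (edit via vipw): local users above,
 # then this line appends every user from the NIS passwd map.
 +:::::::::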

> Ironically, you (I think) recommended infrastructures.org, but I was
> really wanting to solve my root read pull only specific config
> problem; rsync push to each and every client from gold was never an
> issue. But you win a couple drinks anyway. :-)

Yup, that was me. I'm in a different country (hey, BSD lists are few and
far between, I take what I can get) so I'll just have mine in spirit ;-)

-T


-- 
Do not consciously seek enlightenment.
	-Muso Kokushi



