From bonsaime at gmail.com Sun May 3 15:00:32 2009 From: bonsaime at gmail.com (Jesse Callaway) Date: Sun, 3 May 2009 15:00:32 -0400 Subject: [nycbug-talk] glusterfs Message-ID: Anyone having any luck with glusterfs >= 2.0 on BSD? There is a very nice source distribution which has port Makefiles and all so that a port and package can be made very easily... despite the INSANE directions in the readme. Gonna try going back to version 1.3.9 or whatever the previous stable version is. Miles, I'm sure you have something smarmy and intelligent to say about glusterfs... I'd like to hear it. Let's open a discussion if anyone's interested in it, or if anyone has any good suggestions for an alternative. Sun has a cool project called Celeste going on, but I don't think it's anywhere near fun to play with yet. Haven't even downloaded it. -jesse From carton at Ivy.NET Sun May 3 16:35:23 2009 From: carton at Ivy.NET (Miles Nordin) Date: Sun, 03 May 2009 16:35:23 -0400 Subject: [nycbug-talk] glusterfs In-Reply-To: (Jesse Callaway's message of "Sun, 3 May 2009 15:00:32 -0400") References: Message-ID: >>>>> "jc" == Jesse Callaway writes: jc> Miles, I'm sure you have something smarmy and intelligent to jc> say about glusterfs I don't. I'm not using it yet is the problem. I've been meaning to use it at work. I don't have any problem with it if that's what you mean. To me this type of thing looks like the future---not necessarily the fuse/rump side of it but the disk-[optionalredundancy]-filesystem-redundancy-filesystem layering, and secondly the idea that the storage backplane needs to be inside a network switch and not along a single link of any kind, not even if it's a single link of FC-SW. I like the pluggable back-ends and the ghetto-HSM policy stuff. I like the way they can supposedly lose whole chunks of unredundant storage bricks without completely shitting themselves, just losing some files or subdirectories. 
but I have not tried it so...if you find it doesn't work that's kind of a big negative point. jc> Sun has a cool project called Celeste going on I will look at this! I was impressed that Lustre has a plausible, rigid timetable. However it seems like ZFS's super-efficient snapshots (in space consumption and in creation/deletion time, more efficient than vmware or oracle db) are neither in Lustre nor on the Lustre timetable, which is a big loss. I'm not sure of the efficient way to cram that back into the new model. Probably need to dig into it more. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From marco at metm.org Sun May 3 16:54:14 2009 From: marco at metm.org (Marco Scoffier) Date: Sun, 03 May 2009 16:54:14 -0400 Subject: [nycbug-talk] shared disks for heavy I/O Message-ID: <49FE0476.2020004@metm.org> Hello all, I am looking for recommendations for a shared disk solution in the 3 to 5TB range that can support heavy reading and writing. Standalone ? Dedicated server ? Fiber channel ? Budget is around $4000. Has someone looked into this recently? How does Gig-ethernet performance compare to other solutions in real world situations? I am working with large data sets (presently 100G but eventually to become larger). I have to store intermediate steps of the computation to disk. Often to speed things up I run parallel computations which each read and write 1 to 10G of data at the beginning and the end of the computation. I have two 16-CPU servers with SATA disks, each of which exports its disks to the other using NFS. Very often all cpus on both systems are maxed at 100%. Lately when ramping up the computations NFS has been locking up (even when only reading remotely and writing locally). I/O has been slow (ls takes forever to return, e.g.). 
I think that we are probably asking too much of the current setup where we are both running computations and exporting NFS from the same machines. Thanks, Marco From kacanski_s at yahoo.com Mon May 4 06:34:42 2009 From: kacanski_s at yahoo.com (Aleksandar Kacanski) Date: Mon, 4 May 2009 03:34:42 -0700 (PDT) Subject: [nycbug-talk] shared disks for heavy I/O In-Reply-To: <49FE0476.2020004@metm.org> References: <49FE0476.2020004@metm.org> Message-ID: <313956.83954.qm@web53602.mail.re2.yahoo.com> Hi Marco, I am about to finish building a 30 TB raw NAS for under 9k. I can give you details if you are interested. I have not tested it yet, but I will as soon as the new RAID SATA II adapters arrive. It seems it will be very decent, but we'll see. If you are interested in fiber, GFS does perform well in that space but is not a cheap solution. I tried Lustre, but Lustre is for bigger installations and requires InfiniBand ... --sasha --Aleksandar (Sasha) Kacanski ----- Original Message ---- From: Marco Scoffier To: NYCBUG Sent: Sunday, May 3, 2009 4:54:14 PM Subject: [nycbug-talk] shared disks for heavy I/O Hello all, I am looking for recommendations for a shared disk solution in the 3 to 5TB range that can support heavy reading and writing. Standalone ? Dedicated server ? Fiber channel ? Budget is around $4000. Has someone looked into this recently? How does Gig-ethernet performance compare to other solutions in real world situations? I am working with large data sets (presently 100G but eventually to become larger). I have to store intermediate steps of the computation to disk. Often to speed things up I run parallel computations which each read and write 1 to 10G of data at the beginning and the end of the computation. I have 2 16cpu servers with SATA disks which each exports its disks to the other using NFS. Very often all cpus on both systems are maxed at 100%. Lately when ramping up the computations NFS has been locking up (even when only reading remotely and writing locally). 
I/O has been slow (ls takes forever to return, e.g.). I think that we are probably asking too much of the current setup where we are both running computations and exporting NFS from the same machines. Thanks, Marco _______________________________________________ talk mailing list talk at lists.nycbug.org http://lists.nycbug.org/mailman/listinfo/talk From pete at nomadlogic.org Mon May 4 12:37:25 2009 From: pete at nomadlogic.org (Pete Wright) Date: Mon, 4 May 2009 09:37:25 -0700 Subject: [nycbug-talk] shared disks for heavy I/O In-Reply-To: <49FE0476.2020004@metm.org> References: <49FE0476.2020004@metm.org> Message-ID: <910606F6-516E-45B1-96E2-B669A213232F@nomadlogic.org> On 3-May-09, at 1:54 PM, Marco Scoffier wrote: > Hello all, > > I am looking for recommendations for a shared disk solution in the 3 > to > 5TB range that can support heavy reading and writing. Standalone ? > Dedicated server ? Fiber channel ? Budget is around $4000. Has > someone > looked into this recently? How does Gig-ethernet performance > compare to > other solutions in real world situations? > hey marco - on that budget i'd say you should be able to get pretty fast storage for ~5TB. it may not be reliable though (i.e. not something like a netapp or isilon where you can suffer nfs server failures w/ no downtime) - which may or may not be a big deal to you. at a previous employer we were building high-resolution video playback systems (capable of playing 2048x1536 at 60fps, as well as systems capable of playing dual stream 1080p 3D video streams) for around this much. our setup was pretty simple - since we needed to stream ~300MB/s we used hardware RAID SATA controllers to our video playback systems. for what you need 1Gb nics would probably be fine...if you start saturating a single gig-nic you can always bond them for more bandwidth. i think your disk subsystem will get saturated before your network interfaces. 
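[For the NIC bonding mentioned here, a minimal sketch using FreeBSD's lagg(4) might look like the following. The interface names (em0/em1), the address, and the choice of LACP are illustrative assumptions, not from the thread, and LACP also requires the switch ports to be configured for 802.3ad.]

```shell
# Aggregate two gigabit NICs into one logical interface with LACP.
# em0/em1 and the address are hypothetical examples.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1 \
    192.0.2.10 netmask 255.255.255.0 up
```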
our setup was pretty simple:

1 dual quad-core workstation with 32GB ram
1 3ware 9000 series sata raid controller (no BBU - although that'd probably help with your use case, but it'd drive up the cost).
1 external sata JBOD (something similar to this: http://rackmountmart.stores.yahoo.net/sa3urastch10.html)
a bunch of large sata drives.

since we were a linux shop with a bunch of former SGI'ers we used the XFS filesystem which has *very* good streaming I/O performance. For your workload ZFS would probably suit you fine - and you'd get to use FreeBSD :) The only hack we did was to format the disks in such a way that we did not use any of the inside tracks of the individual disks. this ensured that we'd be laying down, and reading blocks in a contiguous manner on the outside tracks of the disk. it actually had a significant impact on the performance for us (at a slight storage penalty). > I am working with large data sets (presently 100G but eventually to > become larger). I have to store intermediate steps of the > computation to disk. Often to speed things up I run parallel > computations which each read and write 1 to 10G of data at the > beginning > and the end of the computation. I have 2 16cpu servers with SATA > disks > which each exports its disks to the other using NFS. Very often all > cpus on both systems are maxed at 100%. Lately when ramping up the > computations NFS has been locking up (even when only reading remotely > and writing locally). I/O has been slow (ls takes forever to return, > e.g.). I think that we are probably asking too much of the current > setup > where we are both running computations and exporting NFS from the same > machines. > I reckon the above setup would be good for your environment. loading up your server with tons of ram for caching should help with I/O thrashing situations like you describe above. also using a bunch of disks will help in this situation as well. 
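[The ZFS-on-FreeBSD option suggested above could be sketched roughly as follows. This is a hedged illustration, not a configuration from the thread: the pool name, the da0..da7 device names, the raidz2 layout, and the dataset name are all assumptions.]

```shell
# One double-parity (raidz2) pool across eight JBOD disks, plus a
# dataset for the intermediate computation data.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zfs create tank/scratch
zfs set atime=off tank/scratch   # skip access-time updates under heavy I/O
```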
A Battery Backup Unit on our RAID controller will further help with caching - and give you a little security in case of power failures etc. also - don't forget about tuning your NFS client options. use large read and write block sizes; think about using async writes if your data isn't *that* important. and if you can use jumbo frames use them - that'll help both the client and server. sorry for the rambling post - i've been neck deep in designing some new storage systems and have been kicking around a lot of ideas lately :) HTH, -pete From pete at nomadlogic.org Mon May 4 13:08:44 2009 From: pete at nomadlogic.org (Pete Wright) Date: Mon, 4 May 2009 10:08:44 -0700 Subject: [nycbug-talk] glusterfs In-Reply-To: References: Message-ID: <717CCFCC-7DBC-42E9-801C-7B4DA3F4109B@nomadlogic.org> On 3-May-09, at 12:00 PM, Jesse Callaway wrote: > Anyone having any luck with glusterfs >= 2.0 on BSD? There is a very > nice source distribution which has port Makefiles and all so that a > port and package can be made very easily... despite the INSANE > directions in the readme. > > Gonna try going back to version 1.3.9 or whatever the previous stable > version is. > > Miles, I'm sure you have something smarmy and intelligent to say about > glusterfs... I'd like to hear it. Let's open a discussion if anyone's > interested in it, or if anyone has any good suggestions for an > alternative. > > Sun has a cool project called Celeste going on, but I don't think it's > anywhere near fun to play with yet. Haven't even downloaded it. > hey jesse - i took a good look at glusterfs recently, and i've been keeping an eye on it pretty closely. my impression is that if you need a global filesystem it is certainly worth a look. i know that my previous employer is going to be using this pretty extensively in the near future fwiw. another gfs worth taking a look at is hadoop: http://hadoop.apache.org/core/ It is a google-filesystem workalike (i.e. uses algos similar to map-reduce for file storage etc). 
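[To give a flavor of hadoop's access model - you reach HDFS through its own shell commands or Java API rather than mounting it like NFS - a sketch using the `hadoop fs` command. The paths and file names here are hypothetical.]

```shell
# Copy a local file into HDFS, list it, and read it back out.
hadoop fs -put results.dat /user/marco/results.dat
hadoop fs -ls /user/marco/
hadoop fs -get /user/marco/results.dat ./results.copy.dat
```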
i know yahoo is a consumer/developer of this as well. -p From carton at Ivy.NET Mon May 4 14:05:59 2009 From: carton at Ivy.NET (Miles Nordin) Date: Mon, 04 May 2009 14:05:59 -0400 Subject: [nycbug-talk] glusterfs In-Reply-To: <717CCFCC-7DBC-42E9-801C-7B4DA3F4109B@nomadlogic.org> (Pete Wright's message of "Mon, 4 May 2009 10:08:44 -0700") References: <717CCFCC-7DBC-42E9-801C-7B4DA3F4109B@nomadlogic.org> Message-ID: >>>>> "pw" == Pete Wright writes: pw> another gfs worth taking a look at is hadoop: pw> http://hadoop.apache.org/core/ pw> It is a google-filesystem workalike GFS from redhat != GFS from google. The first one is a POSIX filesystem. The second one and hadoop are neither one of them POSIX filesystems. They are more like crappy databases in that the interface to them is a library with its own unique API. and unlike a database which can efficiently store smaller structured objects than a filesystem can, I think maybe you can only store really big files in hadoop/google-gfs, not small ones and maybe not randomly-accessed ones. Lustre, GlusterFS, RedHat GFS, all purport to accommodate small files, and also big files which must be randomly-accessed (like .vdi/.vmdk). And Celeste sounds insane. If they would manage to make the old Berkeley filesystem from 2002 on which it's based actually work and nothing else, it could be awesome, but they are trying to deliver a Platform again (platform == $$$, software == $). I think it'll be strangled by necktie damage. Every other ``platform'' Sun has delivered seems to be cumbersome, brittle, slow, stagnant, and behind schedule. I think they should not try to deliver the platform and its fundamental building blocks in one step---all the stuff they have now which doesn't suck is delivered separably. It looks like so far you get at least some of the source: http://src.opensolaris.org/source/xref/celeste/trunk/ also it sounds like it delivers the snapshot feature I wanted and then some. 
Maybe these distributed filesystems will categorize themselves by which ones are optimized for high-latency networks, and which for maximum throughput on low-latency preferably-lossless networks. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From marco at metm.org Mon May 4 14:10:57 2009 From: marco at metm.org (marco scoffier) Date: Mon, 04 May 2009 14:10:57 -0400 Subject: [nycbug-talk] shared disks for heavy I/O In-Reply-To: <910606F6-516E-45B1-96E2-B669A213232F@nomadlogic.org> References: <49FE0476.2020004@metm.org> <910606F6-516E-45B1-96E2-B669A213232F@nomadlogic.org> Message-ID: <49FF2FB1.9010302@metm.org> Thanks a lot for the details Pete. I actually had you in mind when I posed the question :) > on that budget i'd say you should be able to get pretty fast storage for ~5TB. it may not >be reliable though (i.e. not something like a netapp or isilon where you can suffer nfs >server failures w/ no downtime) Sorry but too many double negatives in the opener... I think I understood, netapp and isilon are good but more expensive ? But I think I am more interested in the system you describe below... Pete Wright wrote: > > our setup was pretty simple: > > 1 dual quad-core workstation with 32GB ram > 1 3ware 9000 series sata raid controller (no BBU - although that'd > probably help with your use case, but it'd drive up the cost). > 1 external sata JBOD > (something similar to this: > http://rackmountmart.stores.yahoo.net/sa3urastch10.html) > a bunch of large sata drives. > Forgive me for being a bit clueless here. I haven't done one of these external disk setups before. There are 10 cables running between the workstation and the external JBOD ? The RAID controller is in the workstation or the external ? 
> since we were a linux shop with a bunch of former SGI'ers we used the > XFS filesystem which has *very* good streaming I/O performance. For > your workload ZFS would probably suit you fine - and you'd get to use > FreeBSD :) > Of course I would like the FreeBSD with ZFS solution :) Have to see what works best with the tech who will be administering... > The only hack we did was to format the disks in such a way that we did > not use any of the inside tracks of the individual disks. this > ensured that we'd be laying down, and reading blocks in a contiguous > manner on the outside tracks of the disk. it actually had a > significant impact on the performance for us (at a slight storage > penalty). I didn't know one had access to where the tracks are on the disk. I thought the drive manufacturer could lay down tracks randomly distributed across the disk if that helped them get the performance specs they required. > a Battery Backup Unit on our RAID controller will further help with > caching - and give you a little security in case of power failures etc. > Why does a BBU help with caching? I understand that it allows a write to finish from cache in the event of a power failure, but I didn't know it could help with performance, or did I misunderstand. > also - don't forget about tuning your NFS client options. use large > read and write block sizes; think about using async writes if your > data isn't *that* important. and if you can use jumbo frames > use them - that'll help both the client and server. > Thanks for the tips. We could do some async writes but then would need some integrity checks. This is financial data so someone cares about every number :) > sorry for the rambling post - i've been neck deep in designing some new > storage systems and have been kicking around a lot of ideas lately :) > I appreciate the ramblings. 
That's why I subscribe to mailing lists, for the slightly open ended brain picking :) Marco From pete at nomadlogic.org Mon May 4 14:34:08 2009 From: pete at nomadlogic.org (Pete Wright) Date: Mon, 4 May 2009 11:34:08 -0700 Subject: [nycbug-talk] shared disks for heavy I/O In-Reply-To: <49FF2FB1.9010302@metm.org> References: <49FE0476.2020004@metm.org> <910606F6-516E-45B1-96E2-B669A213232F@nomadlogic.org> <49FF2FB1.9010302@metm.org> Message-ID: On 4-May-09, at 11:10 AM, marco scoffier wrote: > Thanks a lot for the details Pete. I actually had you in mind when > I posed the question :) > > > > > on that budget i'd say you should be able to get pretty fast > storage for ~5TB. it may not >be reliable though (i.e. not > something like a netapp or isilon where you can suffer nfs >server > failures w/ no downtime) > > Sorry but too many double negatives in the opener... I think I > understood, netapp and isilon are good but more expensive ? But I > think I am more interested in the system you describe below... > gah - that's awful typing on my part, sorry about that, man. basically i was trying to say that storage vendors like NetApp can provide you with high performance, reliable storage. But it is quite expensive, which would be well over the 4k budget. > Pete Wright wrote: >> >> our setup was pretty simple: >> >> 1 dual quad-core workstation with 32GB ram >> 1 3ware 9000 series sata raid controller (no BBU - although that'd >> probably help with your use case, but it'd drive up the cost). >> 1 external sata JBOD >> (something similar to this: http://rackmountmart.stores.yahoo.net/sa3urastch10.html) >> a bunch of large sata drives. >> > Forgive me for being a bit clueless here. I haven't done one of > these external disk setups before. There are 10 cables running > between the workstation and the external JBOD ? The RAID controller > is in the workstation or the external ? 
The idea is that the > workstation exports NFS shares through gigabit ethernet but uses all > its memory and CPU for disk access ? so in our setup what happens is you have the external drive bay with, let's say, 10 SATA drives in it. The drives connect on a backplane which concentrates some (up to 4 i believe) SATA interfaces into one external SATA cable. The cable(s) then connect to external ports on our 3ware cards. The cards see the 10 individual drives though - so you can do hardware RAID on the 3ware card, or pass them through to your OS. If I have time today I can google up the parts we were using to do this...but here's a link from 3ware that may help get ya started: http://www.3ware.com/products/cables.asp look under Cables for 9590SE and 3Ware Sidecar. we were using the 19" SATA "Multilance" CBL-IB-05M. Another configuration we've used is the 3Ware sidecar (check out the Drive Cages menu on the left hand side) - but this limits you to 4 drives. In our case we had one workstation that had the storage directly attached to it for video playback. In your case I would recommend setting up a dedicated NFS server if possible. Then you can tune your systems accordingly. >> >> The only hack we did was to format the disks in such a way that we >> did not use any of the inside tracks of the individual disks. this >> ensured that we'd be laying down, and reading blocks in a >> contiguous manner on the outside tracks of the disk. it >> actually had a significant impact on the performance for us (at a >> slight storage penalty). > I didn't know one had access to where the tracks are on the disk. > I thought the drive manufacturer could lay down tracks randomly > distributed across the disk if that helped them get the performance > specs they required. > yea this was achieved in the fdisk/parted phase of preparing the disks for a filesystem. 
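[The outer-track trick described above can be sketched with a little shell arithmetic: LBA 0 sits at the outer (fastest) edge of the platter, so partitioning only the first chunk of sectors keeps the fast tracks. The sector count, the 60% cutoff, and the device name below are hypothetical numbers for illustration, not values from the thread.]

```shell
# Hypothetical 1TB disk: 1953525168 512-byte sectors.
TOTAL_SECTORS=1953525168
OUTER_PCT=60    # keep only the fastest 60% of each disk (outer tracks)
# Integer arithmetic: end the partition OUTER_PCT of the way in.
END_SECTOR=$(( TOTAL_SECTORS * OUTER_PCT / 100 ))
echo "partition ends at sector $END_SECTOR"
# then something like:
#   parted /dev/sda unit s mkpart primary 2048 ${END_SECTOR}s
```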
it took a little math, hard drive knowledge and testing to get the correct values here :) >> a Battery Backup Unit on our RAID controller will further help with >> caching - and give you a little security in case of power failures >> etc. >> > Why does a BBU help with caching? I understand that it allows a > write to finish from cache in the event of a power failure, but I > didn't know it could help with performance, or did I misunderstand. sorry, I should have been more clear. The cache should help with performance of writes, as your disk subsystem can acknowledge a write as soon as it is in the BBU cache rather than waiting for the bits to hit the disk itself. >> also - don't forget about tuning your NFS client options. use >> large read and write block sizes; think about using async writes if >> your data isn't *that* important. and if you can use jumbo >> frames use them - that'll help both the client and server. >> > Thanks for the tips. We could do some async writes but then would > need some integrity checks. This is financial data so someone cares > about every number :) oh yea - i'd stay away from async writes then :) -p -------------- next part -------------- An HTML attachment was scrubbed... URL: From pete at nomadlogic.org Mon May 4 14:44:37 2009 From: pete at nomadlogic.org (Pete Wright) Date: Mon, 4 May 2009 11:44:37 -0700 Subject: [nycbug-talk] glusterfs In-Reply-To: References: <717CCFCC-7DBC-42E9-801C-7B4DA3F4109B@nomadlogic.org> Message-ID: <72E99B86-4131-437C-9065-9DD53EF57E60@nomadlogic.org> On 4-May-09, at 11:05 AM, Miles Nordin wrote: >>>>>> "pw" == Pete Wright writes: > > pw> another gfs worth taking a look at is hadoop: > pw> http://hadoop.apache.org/core/ > pw> It is a google-filesystem workalike > > GFS from redhat != GFS from google. > > The first one is a POSIX filesystem. 
> yes...i was using gfs in two separate ways which were unclear: Global FileSystem <-- generic Google FileSystem <-- mostly referring to the paper google released several years ago and gfs from rh sucks...gfs2 from rh is getting there but i wouldn't want anyone to go there either... > The second one and hadoop are neither one of them POSIX filesystems. > They are more like crappy databases in that the interface to them is a > library with its own unique API. and unlike a database which can > efficiently store smaller structured objects than a filesystem can, I > think maybe you can only store really big files in hadoop/google-gfs, > not small ones and maybe not randomly-accessed ones. Lustre, > GlusterFS, RedHat GFS, all purport to accommodate small files, and also > big files which must be randomly-accessed (like .vdi/.vmdk). > well i think that's the whole point, and one which is made in the "google paper". it's not a filesystem as most people think of it - it is more of a way to store large, arbitrary data sets that are not suited to RDBMS's. i think when people start going down the global filesystem route they are not going to be using them on a single node - but are using them to store *lots* of arbitrary data that will be getting accessed by *many* consumers. the side benefits being that it scales very well compared to a POSIX fs, gives you some level of redundancy for "free" and has a programmatic API that is used to access/monitor/manage data. all of which become a real concern when you start dealing with PB's worth of online data. don't forget OCFS too :) i'd personally never use a global filesystem for OS images or things of that nature. use a proper SAN or device that exports LUNs to your hypervisor as proper block devices (via either FCP, iSCSI, AoE etc) and be done with it. no need to incur the complexity of a global filesystem when all you really want is block level disk access. 
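[As one concrete instance of exporting a LUN as a plain block device, a sketch using the Linux iSCSI Enterprise Target's ietd.conf format. The IQN and the backing device /dev/sdb are made-up examples; this assumes IET is installed and is only one of several targets (FCP, AoE, etc.) that would work.]

```shell
# Append a one-LUN target definition to IET's config file.
# The IQN and backing device here are hypothetical.
cat >> /etc/ietd.conf <<'EOF'
Target iqn.2009-05.net.example:storage.disk1
    Lun 0 Path=/dev/sdb,Type=blockio
EOF
```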
-p From spork at bway.net Mon May 4 15:07:46 2009 From: spork at bway.net (Charles Sprickman) Date: Mon, 4 May 2009 15:07:46 -0400 (EDT) Subject: [nycbug-talk] sca adapter? Message-ID: Hi all, I'm in need of an SCA (80 pin) to 68 pin scsi adapter. Is there anyone in Manhattan that would stock such a thing? It's one of these doohickeys: http://www.ritzcamera.com/product/EP5111582.htm?utm_medium=productsearch&utm_source=google Thanks, Charles ___ Charles Sprickman NetEng/SysAdmin Bway.net - New York's Best Internet - www.bway.net spork at bway.net - 212.655.9344 From george at ceetonetechnology.com Mon May 4 15:47:46 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Mon, 04 May 2009 15:47:46 -0400 Subject: [nycbug-talk] sca adapter? In-Reply-To: References: Message-ID: <49FF4662.3020100@ceetonetechnology.com> Charles Sprickman wrote: > Hi all, > > I'm in need of an SCA (80 pin) to 68 pin scsi adapter. Is there anyone in > Manhattan that would stock such a thing? > > It's one of these doohickeys: > > http://www.ritzcamera.com/product/EP5111582.htm?utm_medium=productsearch&utm_source=google > *maybe* cables and chips on fulton street downtown. 212-619-3132 g From lavalamp at spiritual-machines.org Mon May 4 15:46:12 2009 From: lavalamp at spiritual-machines.org (Brian A. Seklecki) Date: Mon, 4 May 2009 15:46:12 -0400 (EDT) Subject: [nycbug-talk] sca adapter? In-Reply-To: References: Message-ID: > > http://www.ritzcamera.com/product/EP5111582.htm?utm_medium=productsearch&utm_source=google > Do you need the active terminators? I just tried two of the ~ $15 priced units off of Ebay. Both were bad. If you can't find a local distributor, the ones from GraniteDigital.com were reliable. 
~BAS > Thanks, > > Charles > > ___ > Charles Sprickman > NetEng/SysAdmin > Bway.net - New York's Best Internet - www.bway.net > spork at bway.net - 212.655.9344 > > _______________________________________________ From spork at bway.net Mon May 4 16:23:05 2009 From: spork at bway.net (Charles Sprickman) Date: Mon, 4 May 2009 16:23:05 -0400 (EDT) Subject: [nycbug-talk] sca adapter? In-Reply-To: <49FF4662.3020100@ceetonetechnology.com> References: <49FF4662.3020100@ceetonetechnology.com> Message-ID: On Mon, 4 May 2009, George Rosamond wrote: > Charles Sprickman wrote: >> Hi all, >> >> I'm in need of an SCA (80 pin) to 68 pin scsi adapter. Is there anyone in >> Manhattan that would stock such a thing? >> >> It's one of these doohickeys: >> >> http://www.ritzcamera.com/product/EP5111582.htm?utm_medium=productsearch&utm_source=google >> > > *maybe* cables and chips on fulton street downtown. No such luck. Funny, the guy said that SCSI is "old fashioned" and they don't really have much call for SCSI anything these days. The annoying thing is that I have one at home that is in a box somewhere, probably mis-packed during a move. A client sent it to me along with an Adaptec 2940 to just do drive testing before RMAing dead drives... I did just dig up an old poweredge 4300 which has SCA drive bays. I think this might be the ticket. Thanks, C > 212-619-3132 > > g > From spork at bway.net Mon May 4 19:05:55 2009 From: spork at bway.net (Charles Sprickman) Date: Mon, 4 May 2009 19:05:55 -0400 (EDT) Subject: [nycbug-talk] sca adapter? In-Reply-To: References: Message-ID: I'm set! The old Dell with an sca backplane and a gentoo boot cd is working out so far. Shockingly that old 500MHz beast is saturating our metro-e connection. Charles On Mon, 4 May 2009, Alex Pilosov wrote: > We (pilosoft) have lots of these. George knows how to find us. 
:) > > -alex > > On Mon, 4 May 2009, Charles Sprickman wrote: > >> On Mon, 4 May 2009, George Rosamond wrote: >> >>> Charles Sprickman wrote: >>>> Hi all, >>>> >>>> I'm in need of an SCA (80 pin) to 68 pin scsi adapter. Is there anyone in >>>> Manhattan that would stock such a thing? >>>> >>>> It's one of these doohickeys: >>>> >>>> http://www.ritzcamera.com/product/EP5111582.htm?utm_medium=productsearch&utm_source=google >>>> >>> >>> *maybe* cables and chips on fulton street downtown. >> >> No such luck. Funny, the guy said that SCSI is "old fashioned" and they >> don't really have much call for SCSI anything these days. >> >> The annoying thing is that I have one at home that is in a box somewhere, >> probably mis-packed during a move. A client sent it to me along with an >> Adaptec 2940 to just do drive testing before RMAing dead drives... >> >> I did just dig up an old poweredge 4300 which has SCA drive bays. I think >> this might be the ticket. >> >> Thanks, >> >> C >> >>> 212-619-3132 >>> >>> g >>> >> _______________________________________________ >> talk mailing list >> talk at lists.nycbug.org >> http://lists.nycbug.org/mailman/listinfo/talk >> > > From alex at pilosoft.com Mon May 4 18:38:34 2009 From: alex at pilosoft.com (Alex Pilosov) Date: Mon, 4 May 2009 18:38:34 -0400 (EDT) Subject: [nycbug-talk] sca adapter? In-Reply-To: Message-ID: We (pilosoft) have lots of these. George knows how to find us. :) -alex On Mon, 4 May 2009, Charles Sprickman wrote: > On Mon, 4 May 2009, George Rosamond wrote: > > > Charles Sprickman wrote: > >> Hi all, > >> > >> I'm in need of an SCA (80 pin) to 68 pin scsi adapter. Is there anyone in > >> Manhattan that would stock such a thing? > >> > >> It's one of these doohickeys: > >> > >> http://www.ritzcamera.com/product/EP5111582.htm?utm_medium=productsearch&utm_source=google > >> > > > > *maybe* cables and chips on fulton street downtown. > > No such luck. 
Funny, the guy said that SCSI is "old fashioned" and they > don't really have much call for SCSI anything these days. > > The annoying thing is that I have one at home that is in a box somewhere, > probably mis-packed during a move. A client sent it to me along with an > Adaptec 2940 to just do drive testing before RMAing dead drives... > > I did just dig up an old poweredge 4300 which has SCA drive bays. I think > this might be the ticket. > > Thanks, > > C > > > 212-619-3132 > > > > g > > > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From george at ceetonetechnology.com Mon May 4 22:12:17 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Mon, 04 May 2009 22:12:17 -0400 Subject: [nycbug-talk] sca adapter? In-Reply-To: References: Message-ID: <49FFA081.5060503@ceetonetechnology.com> Alex Pilosov wrote: > We (pilosoft) have lots of these. George knows how to find us. :) > too late, i guess, but yes. . . and chip, you have my number for next time. g From george at ceetonetechnology.com Tue May 5 17:32:23 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Tue, 05 May 2009 17:32:23 -0400 Subject: [nycbug-talk] bsdcan Message-ID: <4A00B067.4050709@ceetonetechnology.com> Anyone going to BSDCan from NYC? Please hit me offlist if possible. . . g From dan at langille.org Tue May 5 18:20:18 2009 From: dan at langille.org (Dan Langille) Date: Tue, 05 May 2009 18:20:18 -0400 Subject: [nycbug-talk] BSDCan has started Message-ID: <4A00BBA2.1000703@langille.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 BSDCan has started. 
:) http://twitter.com/bsdcan - -- Dan Langille BSDCan - The Technical BSD Conference : http://www.bsdcan.org/ PGCon - The PostgreSQL Conference: http://www.pgcon.org/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.11 (FreeBSD) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkoAu6IACgkQCgsXFM/7nTzZXQCfczbfjjnfqFRS9qqfjXrKTCHB LcwAoLB44qK3vIaPByoHxKJJDJF2Vog2 =xqta -----END PGP SIGNATURE----- From george at ceetonetechnology.com Tue May 5 22:22:36 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Tue, 05 May 2009 22:22:36 -0400 Subject: [nycbug-talk] bsdcan, again Message-ID: <4A00F46C.4050207@ceetonetechnology.com> For those who didn't know, bsdcan is this weekend in Ottawa. . . May 8/9. But if you were on the announce list, you'd know :) g From george at ceetonetechnology.com Wed May 6 15:17:58 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Wed, 06 May 2009 15:17:58 -0400 Subject: [nycbug-talk] tonight's meeting Message-ID: <4A01E266.3050308@ceetonetechnology.com> FYI, for those who aren't (but should be) on the announce list: > Due to extenuating circumstances, tonight's speaker has been forced to > postpone his GNUstep meeting until a later date. We apologize for the > last minute notice. > > We will, however, have the meeting, and will have an open discussion and > also raise an idea we have for a fall conference. > > We look forward to seeing you at the meeting tonight. 
> _______________________________________________ > announce mailing list > http://lists.nycbug.org/mailman/listinfo/announce > From jkeen at verizon.net Wed May 6 21:59:16 2009 From: jkeen at verizon.net (James E Keenan) Date: Wed, 06 May 2009 21:59:16 -0400 Subject: [nycbug-talk] Parrot Virtual Machine Links Message-ID: <029AA53B-CDB3-4196-914B-3A4B72D8C6A9@verizon.net> As a follow-up to my remarks on the Parrot VM at tonight's NYCBUG meeting, here are some Parrot links: The Parrot home page: http://www.parrot.org/ Parrot Developer Wiki: https://trac.parrot.org/parrot The Parrot mailing list (Google Groups interface): http:// groups.google.com/group/parrot-dev/topics Parrot mailing lists (sign-up): http://lists.parrot.org/mailman/ listinfo Parrot-related blog aggregator: http://planet.parrotcode.org/ #parrot IRC channel logged: http://irclog.perlgeek.de/parrot/ Parrot SVN repository (web interface): https://trac.parrot.org/ parrot/browser/ Check out head: svn co https://svn.parrot.org/parrot/trunk Parrot Smolder (smoke report server): http://smolder.plusthree.com/ app/public_projects/details/8 How to get involved: http://www.parrot.org/dev/get-involved And my own little corner of the Parrot universe: Test coverage reports: http://thenceforward.net/parrot/coverage/configure-build/ coverage.html Enjoy! Jim Keenan From jkeen at verizon.net Wed May 6 22:06:24 2009 From: jkeen at verizon.net (James E Keenan) Date: Wed, 06 May 2009 22:06:24 -0400 Subject: [nycbug-talk] On Hackathons Message-ID: At tonight's NYCBUG, I spoke of some experiences participating in, and organizing hackathons within the Perl and Parrot communities. Some of our experiences may be applicable to the organizing of barcamps. Here's a link to a tarball of an HTML-slideshow that was part of a lightning talk I gave at YAPC::NA::2007 in Houston: How to Get the Most Out of a Hackathon. 
http://thenceforward.net/perl/yapc/YAPC-NA-2007/houslight.tgz Here is a Perl Foundation page that talks about past hackathons: http://www.perlfoundation.org/hackathons Enjoy Jim Keenan From george at ceetonetechnology.com Thu May 7 10:43:29 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Thu, 07 May 2009 10:43:29 -0400 Subject: [nycbug-talk] On Hackathons In-Reply-To: References: Message-ID: <4A02F391.50802@ceetonetechnology.com> James E Keenan wrote: > At tonight's NYCBUG, I spoke of some experiences participating in, > and organizing hackathons within the Perl and Parrot communities. > Some of our experiences may be applicable to the organizing of barcamps. > > Here's a link to a tarball of an HTML-slideshow that was part of a > lightning talk I gave at YAPC::NA::2007 in Houston: How to Get the > Most Out of a Hackathon. > > http://thenceforward.net/perl/yapc/YAPC-NA-2007/houslight.tgz > > Here is a Perl Foundation page that talks about past hackathons: > > http://www.perlfoundation.org/hackathons > > Enjoy > Thanks James. . . cool stuff. g From george at ceetonetechnology.com Thu May 7 10:53:23 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Thu, 07 May 2009 10:53:23 -0400 Subject: [nycbug-talk] Wednesday meeting discussion Message-ID: <4A02F5E3.8010401@ceetonetechnology.com> We had 20 or so people show up even though Gregory couldn't do the meeting on GNUstep. James led a discussion about Parrot, then we jumped into a discussion about the bsdcamp idea for NYC in the fall. Overall, IMHO, a brilliant discussion. The point is this: how to have a non-con. We have talked about on list before. . but this should give you an idea: http://barcamp.org/ These are some ideas. . . 
some of which are mine (and maybe only mine :)

- no crazy organizing with a 6 mos lead time
- low reliance on sponsor money
- loose, organic organization with which topics are voted on "with people's feet"
- potentially something that could happen quarterly
- could also spawn more development/organizing/bug-fixing in certain projects
- could spawn more cross-project activity, such as device driver development, etc.

The vast majority of the room was engaged in it, and provided useful insight, from Ivan talking about his twice a year meetings in Bulgaria, to Bill speaking about sorting out a criteria for which sessions happen, and when they should be cut off.

I don't think anything concrete was decided, but the response live and in-person was great. Such an idea only works with that level of engagement by a broad number of people.

On that note. . . I hope that last night's discussion could translate onto the talk list . . here, now. James provided some references for his comments last night. Ultimately, an nyc bsdcamp is only what we make it. . .

George

From jkeen at verizon.net Thu May 7 07:54:10 2009
From: jkeen at verizon.net (James E Keenan)
Date: Thu, 07 May 2009 07:54:10 -0400
Subject: [nycbug-talk] Yet Another Hackathon Link
Message-ID: <3997F958-1989-420D-98C7-4E492CC6AC77@verizon.net>

Another link on how a hackathon works.
Experience of the recent Perl QA hackathon in Birmingham, UK, written by someone who is both a Perl and a Ruby developer: http://www.h-online.com/open/What-happens-at-a-hackathon--/features/112997/0

From george at ceetonetechnology.com Thu May 7 13:56:37 2009
From: george at ceetonetechnology.com (George Rosamond)
Date: Thu, 07 May 2009 13:56:37 -0400
Subject: [nycbug-talk] Wednesday meeting discussion
In-Reply-To: <4A031DB7.8000304@bsdunix.net>
References: <4A02F5E3.8010401@ceetonetechnology.com> <4A031DB7.8000304@bsdunix.net>
Message-ID: <4A0320D5.50007@ceetonetechnology.com>

Siobhan Lynch wrote: > George,

hey T

> would this be an overnight event, like rent hotel space kinda thing? > maybe a couple of meeting rooms, OR we can even host talks in our own > rooms?

we're thinking one day with a bunch of classrooms. . . not flying in anyone, no heavy overhead.

> (for another genre of study, I have been at "cons" where we've done > that... it can be a lot of fun)

agree

> Whatever the venue, though, I think this is a great idea. I could never > afford to spend time to prep for BSDCon, or even go, because of parental > responsibilities...

low overhead in multiple ways. we should do childcare onsite! :)

> Hopefully a lower key event with a lower price point would mean that I > can actually participate. > > -Trish

we're looking cheap. . . i don't know, maybe, say $10. pizza lunch included.

g

From trish at bsdunix.net Thu May 7 13:43:19 2009
From: trish at bsdunix.net (Siobhan Lynch)
Date: Thu, 07 May 2009 13:43:19 -0400
Subject: [nycbug-talk] Wednesday meeting discussion
In-Reply-To: <4A02F5E3.8010401@ceetonetechnology.com>
References: <4A02F5E3.8010401@ceetonetechnology.com>
Message-ID: <4A031DB7.8000304@bsdunix.net>

George, would this be an overnight event, like rent hotel space kinda thing? maybe a couple of meeting rooms, OR we can even host talks in our own rooms? (for another genre of study, I have been at "cons" where we've done that...
it can be a lot of fun)

Whatever the venue, though, I think this is a great idea. I could never afford to spend time to prep for BSDCon, or even go, because of parental responsibilities... Hopefully a lower key event with a lower price point would mean that I can actually participate.

-Trish

On 5/7/09 10:53 AM, George Rosamond wrote: > We had 20 or so people show up even though Gregory couldn't do the > meeting on GNUstep. > > James led a discussion about Parrot, then we jumped into a discussion > about the bsdcamp idea for NYC in the fall. > > Overall, IMHO, a brilliant discussion. > > The point is this: how to have a non-con. > > We have talked about on list before. . but this should give you an idea: > > http://barcamp.org/ > > These are some ideas. . . some of which are mine (and maybe only mine :) > > - no crazy organizing with a 6 mos lead time > - low reliance on sponsor money > - loose, organic organization with which topics are voted on "with > people's feet" > - potentially something that could happen quarterly > - could also spawn more development/organizing/bug-fixing in certain > projects > - could spawn more cross-project activity, such as device driver > development, etc. > > The vast majority of the room was engaged in it, and provided useful > insight, from Ivan talking about his twice a year meetings in Bulgaria, > to Bill speaking about sorting out a criteria for which sessions happen, > and when they should be cut off. > > I don't think anything concrete was decided, but the response live and > in-person was great. > > Such an idea only works with that level of engagement by a broad number > of people. > > On that note. . . I hope that last night's discussion could translate > onto the talk list . . here, now. > > James provided some references for his comments last night. > > Ultimately, an nyc bsdcamp is only what we make it. . .
> > George > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk >

From trish at bsdunix.net Thu May 7 14:59:41 2009
From: trish at bsdunix.net (Siobhan Lynch)
Date: Thu, 07 May 2009 14:59:41 -0400
Subject: [nycbug-talk] Wednesday meeting discussion
In-Reply-To: <4A0320D5.50007@ceetonetechnology.com>
References: <4A02F5E3.8010401@ceetonetechnology.com> <4A031DB7.8000304@bsdunix.net> <4A0320D5.50007@ceetonetechnology.com>
Message-ID: <4A032F9D.9050004@bsdunix.net>

On 5/7/09 1:56 PM, George Rosamond wrote: > > low overhead in multiple ways. > > we should do childcare onsite! > > :) >

*laugh* I almost said that in my last email and then deleted it! GMTA.

-Trish

From cwolsen at ubixos.com Thu May 7 18:06:11 2009
From: cwolsen at ubixos.com (Christopher Olsen)
Date: Thu, 07 May 2009 18:06:11 -0400
Subject: [nycbug-talk] Hello
Message-ID: <0KJA00DSFO1WPXD2@vms173009.mailsrvcs.net>

Hi Everyone, I just wanted to drop a quick note to say hello to everyone as I am new to the NYCBUG list. I attempted to make my first meeting last night, however I was held up at a client's. I do plan to make June's meeting though..

So besides all things BSD, can this group be utilized as a resource in times of need?

-Christopher

Christopher Olsen IT Builders 32 Broadway Suite 204 New York, NY 10004 212-514-6270 http://www.tuve.tv/mrolsen

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From bonsaime at gmail.com Fri May 8 09:22:45 2009
From: bonsaime at gmail.com (Jesse Callaway)
Date: Fri, 8 May 2009 09:22:45 -0400
Subject: [nycbug-talk] Hello
In-Reply-To: <0KJA00DSFO1WPXD2@vms173009.mailsrvcs.net>
References: <0KJA00DSFO1WPXD2@vms173009.mailsrvcs.net>
Message-ID:

On Thu, May 7, 2009 at 6:06 PM, Christopher Olsen wrote: > Hi Everyone, > > I just wanted to drop a quick note to say hello to everyone as I am new to the > NYCBUG list.
I attempted to make my first meeting last night, however I was > held up at a client's. I do plan to make June's meeting though.. > > So besides all things BSD, can this group be utilized as a resource in times > of need? > > -Christopher > > Christopher Olsen > IT Builders > 32 Broadway Suite 204 > New York, NY 10004 > 212-514-6270 > http://www.tuve.tv/mrolsen > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > >

Well, here's the standard mailing list response to that... sorry.... Can you be a little more specific about this resource utilization? The answer is most likely yes. But nobody pays fees to be a member of NYCBUG and there is no obligation to do anything. That said, I think most everyone follows the spirit of contributing where they can. If it's some sort of commercial proposition then there is a jobs mailing list which would be most appropriate for that sort of thing.

-jesse

From carton at Ivy.NET Fri May 8 15:10:49 2009
From: carton at Ivy.NET (Miles Nordin)
Date: Fri, 08 May 2009 15:10:49 -0400
Subject: [nycbug-talk] Hello
In-Reply-To: <0KJA00DSFO1WPXD2@vms173009.mailsrvcs.net> (Christopher Olsen's message of "Thu, 07 May 2009 18:06:11 -0400")
References: <0KJA00DSFO1WPXD2@vms173009.mailsrvcs.net>
Message-ID:

>>>>> "co" == Christopher Olsen writes:

co> can this group be utilized as a resource in times of need?

I would like to make it clear upfront that I'm not interested in massage ``meet-ups'' nor circle-jerks. I'm opting way the hell out of that.

-- READ CAREFULLY.
By reading this fortune, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL:

From trish at bsdunix.net Fri May 8 17:20:35 2009
From: trish at bsdunix.net (Siobhan Lynch)
Date: Fri, 08 May 2009 17:20:35 -0400
Subject: [nycbug-talk] Hello
In-Reply-To: References: <0KJA00DSFO1WPXD2@vms173009.mailsrvcs.net>
Message-ID: <4A04A223.6070209@bsdunix.net>

On 5/8/09 3:10 PM, Miles Nordin wrote: >>>>>> "co" == Christopher Olsen writes: >>>>>> > > co> can this group be utilized as a resource in times of need? > > I would like to make it clear upfront that I'm not interested in > massage ``meet-ups'' nor circle-jerks. I'm opting way the hell out of > that. > >

WTF did that come from?

I know you have a habit of being rude, but it's *usually* somewhat within the topic. That was completely out of left field, and plain.... offensive.

-Trish

> ------------------------------------------------------------------------ > > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From spork at bway.net Fri May 8 18:02:05 2009
From: spork at bway.net (Charles Sprickman)
Date: Fri, 8 May 2009 18:02:05 -0400 (EDT)
Subject: [nycbug-talk] Hello
In-Reply-To: <4A04A223.6070209@bsdunix.net>
References: <0KJA00DSFO1WPXD2@vms173009.mailsrvcs.net> <4A04A223.6070209@bsdunix.net>
Message-ID:

On Fri, 8 May 2009, Siobhan Lynch wrote: > On 5/8/09 3:10 PM, Miles Nordin wrote: >>>>>>> "co" == Christopher Olsen writes: >>>>>>> >> >> co> can this group be utilized as a resource in times of need? >> >> I would like to make it clear upfront that I'm not interested in >> massage ``meet-ups'' nor circle-jerks. I'm opting way the hell out of >> that. >> >> > > WTF did that come from? > > I know you have a habit of being rude, but its *usually* somewhat within the > topic. > > That was completely out of left field, and plain.... offensive.

I sensed humor in Miles's post. He expanded "times of need" to a different sort of "need" beyond consulting...

Back on-topic though, I think one-off consulting work should be fair game on -jobs. I've had the need for a quick few hours of help and I'd certainly not mind picking up a few hours of work here and there myself.

No "release" though.
C > -Trish > > >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> talk mailing list >> talk at lists.nycbug.org >> http://lists.nycbug.org/mailman/listinfo/talk >> > > From george at ceetonetechnology.com Fri May 8 20:30:40 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Fri, 08 May 2009 20:30:40 -0400 Subject: [nycbug-talk] Hello In-Reply-To: References: <0KJA00DSFO1WPXD2@vms173009.mailsrvcs.net> <4A04A223.6070209@bsdunix.net> Message-ID: <4A04CEB0.1060801@ceetonetechnology.com> Charles Sprickman wrote: > On Fri, 8 May 2009, Siobhan Lynch wrote: > >> On 5/8/09 3:10 PM, Miles Nordin wrote: >>>>>>>> "co" == Christopher Olsen writes: >>>>>>>> >>> co> can this group be utilized as a resource in times of need? >>> >>> I would like to make it clear upfront that I'm not interested in >>> massage ``meet-ups'' nor circle-jerks. I'm opting way the hell out of >>> that. >>> >>> >> WTF did that come from? >> >> I know you have a habit of being rude, but its *usually* somewhat within the >> topic. >> >> That was completely out of left field, and plain.... offensive. > > I sensed humor in Myles' post. He expanded "times of need" to a different > sort of "need" beyond consulting... > > Back on-topic though, I think one-off consulting work should be fair-game > on -jobs. I've had the need for a quick few hours of help and I'd > certainly not mind picking up a few hours of work here and there myself. > > No "release" though. > > C okay. . .take it easy everyone. Christopher, I guess an elaboration of what you mean by a 'resource' is the question. User group mailing lists can be helpful for support issues. . . 
g

From cwolsen at ubixos.com Fri May 8 20:02:21 2009
From: cwolsen at ubixos.com (Christopher Olsen)
Date: Fri, 08 May 2009 20:02:21 -0400
Subject: [nycbug-talk] Hello
Message-ID: <0KJC00AC2O3BOE3A@vms173019.mailsrvcs.net>

Well, if you kept your mind on the same track as the list, you would have taken "need" as BSD support, nothing more.

Christopher Olsen IT Builders 32 Broadway Suite 204 New York, NY 10004 212-514-6270 http://www.tuve.tv/mrolsen

-----Original Message-----
From: Miles Nordin
Sent: Friday, May 08, 2009 15:10
To: talk at lists.nycbug.org
Subject: Re: [nycbug-talk] Hello

>>>>> "co" == Christopher Olsen writes:

co> can this group be utilized as a resource in times of need?

I would like to make it clear upfront that I'm not interested in massage ``meet-ups'' nor circle-jerks. I'm opting way the hell out of that.

-- READ CAREFULLY. By reading this fortune, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

From brian.gupta at gmail.com Sun May 10 01:22:57 2009
From: brian.gupta at gmail.com (Brian Gupta)
Date: Sun, 10 May 2009 01:22:57 -0400
Subject: [nycbug-talk] Parrot Virtual Machine Links
In-Reply-To: <029AA53B-CDB3-4196-914B-3A4B72D8C6A9@verizon.net>
References: <029AA53B-CDB3-4196-914B-3A4B72D8C6A9@verizon.net>
Message-ID: <5b5090780905092222o252e4ef1rf841e3d895ebb85@mail.gmail.com>

Hey James, Parrot has been in dev for close to ten years. Is there light at the end of the tunnel? (It's beginning to be a bit Duke Nukem Forever).
Cheers, Brian

On Wed, May 6, 2009 at 9:59 PM, James E Keenan wrote:
> As a follow-up to my remarks on the Parrot VM at tonight's NYCBUG meeting, here are some Parrot links:
>
> The Parrot home page: http://www.parrot.org/
> Parrot Developer Wiki: https://trac.parrot.org/parrot
> The Parrot mailing list (Google Groups interface): http://groups.google.com/group/parrot-dev/topics
> Parrot mailing lists (sign-up): http://lists.parrot.org/mailman/listinfo
> Parrot-related blog aggregator: http://planet.parrotcode.org/
> #parrot IRC channel logged: http://irclog.perlgeek.de/parrot/
> Parrot SVN repository (web interface): https://trac.parrot.org/parrot/browser/
> Check out head: svn co https://svn.parrot.org/parrot/trunk
> Parrot Smolder (smoke report server): http://smolder.plusthree.com/app/public_projects/details/8
> How to get involved: http://www.parrot.org/dev/get-involved
>
> And my own little corner of the Parrot universe: Test coverage reports: http://thenceforward.net/parrot/coverage/configure-build/coverage.html
>
> Enjoy!
>
> Jim Keenan
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org/mailman/listinfo/talk

--
- Brian Gupta
New York City user groups calendar: http://nyc.brandorr.com/

From jkeen at verizon.net Sun May 10 10:25:28 2009
From: jkeen at verizon.net (James E Keenan)
Date: Sun, 10 May 2009 10:25:28 -0400
Subject: [nycbug-talk] Parrot Virtual Machine Links
In-Reply-To: <5b5090780905092222o252e4ef1rf841e3d895ebb85@mail.gmail.com>
References: <029AA53B-CDB3-4196-914B-3A4B72D8C6A9@verizon.net> <5b5090780905092222o252e4ef1rf841e3d895ebb85@mail.gmail.com>
Message-ID: <4511FBC5-F4D9-4548-82BD-88C3A5149341@verizon.net>

On May 10, 2009, at 1:22 AM, Brian Gupta wrote: > Hey James, Parrot has been in dev for close to ten years. Is there > light at the end of the tunnel? (It's beginning to be a bit Duke Nukem Forever).
> At the meeting the other night I discussed both what Parrot has accomplished and still has to accomplish. You can also get more information by following these links: Parrot home page: http://www.parrot.org/ Parrot blog aggregator: http://planet.parrotcode.org/ Parrot wiki: https://trac.parrot.org/parrot/wiki You can get our 1.0 current supported release, our current developer release, and packages here: http://www.parrot.org/download After you take a look at those links I'd be happy to answer more specific questions or point you to places where you can have those questions answered. Thank you very much. Jim Keenan From cwolsen at ubixos.com Tue May 12 10:37:59 2009 From: cwolsen at ubixos.com (Christopher Olsen) Date: Tue, 12 May 2009 10:37:59 -0400 Subject: [nycbug-talk] Projects Message-ID: <4A0989C7.8000006@ubixos.com> Anyone on this list working on or run any form of BSD related projects? -Christopher From nycbug at cyth.net Tue May 12 11:06:05 2009 From: nycbug at cyth.net (Ray Lai) Date: Tue, 12 May 2009 08:06:05 -0700 Subject: [nycbug-talk] Projects In-Reply-To: <4A0989C7.8000006@ubixos.com> References: <4A0989C7.8000006@ubixos.com> Message-ID: <7765c0380905120806j6c2f0e1n30d266d856582afa@mail.gmail.com> I hack OpenBSD, but I'm also a slacker. -Ray- On Tue, May 12, 2009 at 7:37 AM, Christopher Olsen wrote: > Anyone on this list working on or run any form of BSD related projects? > > -Christopher > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From spork at bway.net Tue May 12 13:30:14 2009 From: spork at bway.net (Charles Sprickman) Date: Tue, 12 May 2009 13:30:14 -0400 (EDT) Subject: [nycbug-talk] Trusted HVAC vendors Message-ID: Hello all, Looking for an HVAC contractor with experience installing air conditioning for a small server room. Anyone here have direct experience with someone trustworthy in that space? 
Thanks, Charles ___ Charles Sprickman NetEng/SysAdmin Bway.net - New York's Best Internet - www.bway.net spork at bway.net - 212.655.9344

From jonathan at kc8onw.net Tue May 12 16:02:29 2009
From: jonathan at kc8onw.net (Jonathan)
Date: Tue, 12 May 2009 16:02:29 -0400
Subject: [nycbug-talk] SAS to SATA breakout cables.
Message-ID: <4A09D5D5.4080707@kc8onw.net>

Hello all, Does anyone have a good source for SFF-8087 to 4-SATA breakout cables? I'm probably dreaming but I'm hoping to find a pair from a reliable source for $30 or less total.

Thanks, Jonathan

From matt at atopia.net Thu May 14 14:49:52 2009
From: matt at atopia.net (Matt Juszczak)
Date: Thu, 14 May 2009 14:49:52 -0400 (EDT)
Subject: [nycbug-talk] Split Horizon DNS
Message-ID:

Hi all, Right now, I've got the following setup going:

- 8 FreeBSD boxes
- 2 of them running bind, one master, one slave
- every /etc/resolv.conf set to those two servers
- two servers configured to forward onto ISP nameservers

The goal? It allows me to create a "domain name".int (i.e., server1.mydomain.int) for use internally, while still allowing everything external to resolve correctly. The reason for creating the .int was to allow us internal access to each box without overwriting the IP addresses of the .com or confusing them in any way, shape, or form.

The setup seems to work nicely (especially since I have a timeout of 1 set in /etc/resolv.conf, so failover occurs quickly if one of the DNS boxes is down). The only negative seems to be that if both boxes are down, DNS fails entirely. However, this is almost the same for any /etc/resolv.conf configuration.

What are your thoughts?
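[A sketch of the setup Matt describes, for reference. He didn't post his actual files, so the addresses, zone name, and file paths below are placeholders, not his configuration:]

```conf
# /etc/resolv.conf on each of the 8 boxes (addresses hypothetical)
nameserver 10.0.0.2
nameserver 10.0.0.3
options timeout:1

# named.conf on the master (a matching "type slave" stanza on .3):
options {
        forwarders { 198.51.100.1; 198.51.100.2; };  # ISP resolvers
        forward first;   # fall back to iterating if the forwarders fail
};
zone "mydomain.int" {
        type master;
        file "/etc/namedb/master/mydomain.int";
};
```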
Thanks, Matt From carton at Ivy.NET Thu May 14 15:54:12 2009 From: carton at Ivy.NET (Miles Nordin) Date: Thu, 14 May 2009 15:54:12 -0400 Subject: [nycbug-talk] Split Horizon DNS In-Reply-To: (Matt Juszczak's message of "Thu, 14 May 2009 14:49:52 -0400 (EDT)") References: Message-ID: >>>>> "mj" == Matt Juszczak writes: mj> -two servers configured to forward onto ISP nameservers I used to do this but don't now. It's a nice idea, but for me causes more problems than it solves. DNS turns out not to be among the big problems with making things web scale, so the elaborate tree of caching resolvers early advocates envisioned probably isn't necessary---one cache per user, or one cache per handful of users, should be fine. no cache is of course not fine at all. It's slow and it's being a bad neighbor. In fact I think it'd be good if bind wrote its cache to disk across restarts, but I don't know of any resolver that does this. But if you are using ISP forwarders instead of unleashing your recursive resolvers onto the internet directly to be neighborly, I think your etiquette is somewhat obsolete. If your ISP is nationwide and has a cluster of nameservers at ``national headquarters'' instead of spreading recursive resolvers over all their POP's, then you are much better running your own recursive resolver and not using forwarding because (a) you get lower latency on the queries themselves and (b) many CDN's will end up serving you better because they'll know where you are. mj> "domain name".int use a TLD that is not already in use on the internet. .int is in use so that's a very bad choice. also do not use .local just say 'dig int. ns' and you can see that it's in use already. mj> timeout of 1 set in /etc/resolv.conf that's a nice idea, and not necessarily a bad one. However please do understand the timeout isn't for your internal servers---it's for the whole process of resolution. 
There's no way for the resolver to tell the difference between a recursive resolver that's down and a name that honestly takes a long time to resolve. so, if you are behind a slow pipe with a big FIFO buffer (DSL, T1), or if the other guy's authoritative servers are really slow, or if one of the authoritative servers is down and BIND's recursive resolver timeout is taking longer than 1 second (yes I think it is) to move on to the next redundant nameserver, then you won't be able to resolve those slow domains at all. at least not the first time. If the data is cacheable maybe you can press ``reload'' and try again after the query finishes behind the scenes. But there seems to be some caching inside Firefox itself too which could fuck you. :(

If you had the slave use the master as a forwarder, then your timeout for the overall external process becomes 2 seconds, but I think this is a loss overall. First it's not much longer. Second if done stupidly it destroys the redundancy you wanted---may as well just list the master. Even if done smartly so the slave can be made to forward first, then try to resolve on its own, it's still a loss:

What you want to avoid most with resolver-side DNS redundancy is a situation where every lookup takes 4 - 30 seconds. (ex. what facebook was causing by blackholing AAAA queries instead of returning NXDOMAIN. but they fixed it.) This is what really pisses people off. really you need a failover strategy that's stateful lower down in the resolver tree---right now we only have state in the caches, which is too far up, beyond the point where you put your redundancy.

IMHO with most OS's you may as well just mention one nameserver because if it ever has to move onto the second, it would be better to fail outright and get attention than silently work in this severely broken/slow way. the type of resolver redundancy that would work is anycast or CARP or something like that.
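[The CARP approach mentioned at the end of that message could look roughly like this on FreeBSD of that era. This is a hypothetical sketch with made-up addresses and password, not a tested configuration from the thread:]

```conf
# /etc/rc.conf on resolver A (the preferred master; lowest advskew wins)
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass s3cret advskew 0 10.15.96.1/24"

# /etc/rc.conf on resolver B (backup; takes over the VIP if A dies)
# cloned_interfaces="carp0"
# ifconfig_carp0="vhid 1 pass s3cret advskew 100 10.15.96.1/24"

# clients then need only one line in /etc/resolv.conf:
#   nameserver 10.15.96.1
```

Both boxes would run named listening on the shared virtual IP, so clients never need a second nameserver line and never sit through a resolver timeout.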
-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL:

From carton at Ivy.NET Thu May 14 16:09:06 2009
From: carton at Ivy.NET (Miles Nordin)
Date: Thu, 14 May 2009 16:09:06 -0400
Subject: [nycbug-talk] Split Horizon DNS
In-Reply-To: (Miles Nordin's message of "Thu, 14 May 2009 15:54:12 -0400")
References: Message-ID:

sorry, also it is not split horizon that you're doing, AIUI. Split horizon DNS is a glossary word for giving different answers for the same FQDN depending on whether the querier is an internal or an external host. It's a way of dealing with DMZ's that are broken because of a combination of the netadmin not being that smart, and clumsy things having been done to work around a shortage of global IP addresses.

so, split horizon fanciness would apply only to your proper global domain name. The word exists because the feature was added to bind relatively late. Before this feature you had to run two nameservers to accomplish this.

What you are doing is just a plain old bogus TLD. Split horizon is IMHO much uglier and less self-documenting than a plain old bogus TLD. We are using split horizon (old two-nameserver style) at work, and I'm trying to fix and garbage-collect it.

-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL:

From matt at atopia.net Thu May 14 16:11:48 2009
From: matt at atopia.net (Matt Juszczak)
Date: Thu, 14 May 2009 16:11:48 -0400 (EDT)
Subject: [nycbug-talk] Split Horizon DNS
In-Reply-To: References: Message-ID:

> What you are doing is just a plain old bogus TLD.

Ah yes, didn't think of that. Though I assume a bogus TLD for internal use only isn't the worst thing in the world? I assume it's still practiced in places?
From matt at atopia.net Thu May 14 16:13:18 2009
From: matt at atopia.net (Matt Juszczak)
Date: Thu, 14 May 2009 16:13:18 -0400 (EDT)
Subject: [nycbug-talk] Split Horizon DNS
In-Reply-To: References: Message-ID:

> If your ISP is nationwide and has a cluster of nameservers at > ``national headquarters'' instead of spreading recursive resolvers > over all their POP's, then you are much better running your own > recursive resolver and not using forwarding because (a) you get lower > latency on the queries themselves and (b) many CDN's will end up > serving you better because they'll know where you are.

So you're saying that I should remove the forwarders {} block out of named entirely and just have my internal DNS servers gather data from the roots directly? This would work, except I don't believe the boxes are able to do external DNS queries (outbound firewall rules), and the other boxes are local to the network.

From carton at Ivy.NET Thu May 14 16:22:52 2009
From: carton at Ivy.NET (Miles Nordin)
Date: Thu, 14 May 2009 16:22:52 -0400
Subject: [nycbug-talk] Split Horizon DNS
In-Reply-To: (Matt Juszczak's message of "Thu, 14 May 2009 16:11:48 -0400 (EDT)")
References: Message-ID:

>>>>> "mj" == Matt Juszczak writes:

mj> a bogus TLD for internal use only isn't the worst thing in the mj> world?

in spite of the negative-sounding word bogus I have absolutely no problem with it and do it ~everywhere that I've got my shit together and rfc1918 is in use. just be sure not to bogify a real TLD and do not use the default mDNS domain .local

-------------- next part -------------- A non-text attachment was scrubbed...
From matt at atopia.net Thu May 14 16:26:52 2009
From: matt at atopia.net (Matt Juszczak)
Date: Thu, 14 May 2009 16:26:52 -0400 (EDT)
Subject: [nycbug-talk] Split Horizon DNS
In-Reply-To: 
References: 
Message-ID: 

> In spite of the negative-sounding word "bogus" I have absolutely no
> problem with it and do it ~everywhere that I've got my shit together
> and RFC 1918 is in use.

So say there are 8 servers. All boxes have:

search bogusdomain.internal
nameserver 10.15.96.2
nameserver 10.15.96.3
options timeout:1

in /etc/resolv.conf. And .2 and .3 are set up as a master and slave of bogusdomain.internal, with all other queries going to the Internet.

The servers are actually named "servername.bogusdomain.com" even though in /etc/resolv.conf search is set to "bogusdomain.internal" because, internally, you communicate on private IPs, while the boxes, when connected to publicly, use the public IPs (just the way the network is set up; I had no say in it).

So to you, that isn't a problematic setup, minus the fact that the timeout:1 may actually cause more harm than good?

From matt at atopia.net Thu May 14 17:38:00 2009
From: matt at atopia.net (Matt Juszczak)
Date: Thu, 14 May 2009 17:38:00 -0400 (EDT)
Subject: [nycbug-talk] External Authentication Implementation in FreeBSD
Message-ID: 

Hi all,

This question refers specifically to LDAP, but I assume that it would work for other services too, such as NIS. I see three possible ways these things can be implemented into pam, nss, sudoers, etc.:

1) Every 5 minutes or so, generate /etc/passwd, /etc/master.passwd, and /etc/group from the information in LDAP. Also generate a /usr/local/etc/sudoers file. The benefit is that the boxes work 100% standalone even if all LDAP servers become unavailable.

2) Go half and half: put system accounts in /etc/passwd, /etc/master.passwd, etc., and only put USERS in LDAP.
That way, it will try LDAP just for users, but otherwise the boxes function normally even if LDAP is down (perhaps a backdoor user account?). Sudoers would tie into LDAP with a failover somehow to the file system.

3) All LDAP: put all accounts, including system accounts, root, etc., into LDAP. This is my least favorite option.

Just looking for what most of you use in your FreeBSD setups. Thanks!

-M

From bobleigh at twomeeps.com Thu May 14 21:52:59 2009
From: bobleigh at twomeeps.com (Bob Leigh)
Date: Thu, 14 May 2009 21:52:59 -0400
Subject: [nycbug-talk] Split Horizon DNS
In-Reply-To: 
References: 
Message-ID: <20090515015259.GF80110@twomeeps.com>

Miles Nordin wrote:
> mj> "domain name".int
>
> use a TLD that is not already in use on the internet. .int is in use
> so that's a very bad choice. also do not use .local

That last sentence caught my interest, since I do use ".local" as an internal TLD. What's the reason not to do that?

--
Bob Leigh
bobleigh at twomeeps.com

From carton at Ivy.NET Thu May 14 22:44:24 2009
From: carton at Ivy.NET (Miles Nordin)
Date: Thu, 14 May 2009 22:44:24 -0400
Subject: [nycbug-talk] Split Horizon DNS
In-Reply-To: <20090515015259.GF80110@twomeeps.com> (Bob Leigh's message of "Thu, 14 May 2009 21:52:59 -0400")
References: <20090515015259.GF80110@twomeeps.com>
Message-ID: 

>>>>> "bl" == Bob Leigh writes:

    bl> I do use ".local" as an internal TLD. What's the reason not
    bl> to do that?

It conflicts with link-local mDNS. All Macs and I think also Ubuntu use it, and maybe Windows boxes if iTunes is installed on them; not sure. It's probably well explained on Wikipedia.
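[Matt's option 1 earlier in the thread - a cron job that regenerates the local account files from LDAP - can be sketched roughly like this. The attribute names follow the standard posixAccount schema; the LDAP query itself (e.g. via python-ldap or ldapsearch) is stubbed out, so entries here are plain dicts, and the helper names are made up for illustration:

```python
def passwd_line(entry):
    """Render one posixAccount-style entry as a master.passwd line."""
    return ":".join([
        entry["uid"],
        "*",                          # lock the local hash; auth lives elsewhere
        str(entry["uidNumber"]),
        str(entry["gidNumber"]),
        "",                           # login class
        "0", "0",                     # password change / account expire
        entry.get("gecos", ""),
        entry["homeDirectory"],
        entry.get("loginShell", "/bin/sh"),
    ])

def render_passwd(ldap_entries, system_lines):
    """Keep local system accounts first, then append LDAP users,
    matching Matt's point that the box must still work standalone."""
    lines = list(system_lines) + [passwd_line(e) for e in ldap_entries]
    return "\n".join(lines) + "\n"
```

A real job would write the result to a temp file and run pwd_mkdb(8) on it rather than editing /etc/master.passwd in place.]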
From bonsaime at gmail.com Fri May 15 09:19:55 2009
From: bonsaime at gmail.com (Jesse Callaway)
Date: Fri, 15 May 2009 09:19:55 -0400
Subject: [nycbug-talk] Split Horizon DNS
In-Reply-To: 
References: 
Message-ID: 

On Thu, May 14, 2009 at 2:49 PM, Matt Juszczak wrote:
> Hi all,
>
> Right now, I've got the following setup going:
>
>   -8 FreeBSD boxes
>   -2 of them running bind, one master one slave
>   -every /etc/resolv.conf set to those two servers
>   -two servers configured to forward onto ISP nameservers
>
> The goal? Allows me to create a "domain name".int (i.e.
> server1.mydomain.int) for use internally, while still allowing everything
> external to resolve correctly. The reason for creating the .int was to
> allow us internal access to each box without overwriting the IP addresses
> of the .com or confusing them in any way, shape, or form.
>
> The setup seems to work nicely (especially since I have a timeout of 1 set
> in /etc/resolv.conf, so failover occurs quickly if one of the DNS boxes
> is down). The only negative seems to be that if both boxes are down, DNS
> fails entirely. However, this is almost the same for any /etc/resolv.conf
> configuration.
>
> What are your thoughts?
>
> Thanks,
>
> Matt
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org/mailman/listinfo/talk

Hi Matt,

One thing I like about the DJB working model of how to do things is that the authoritative nameservers, listed in whois, for YOURDOMAIN.COM are and should be totally separate entities from anything you are doing in-house as far as name resolution is concerned. There's no need to change software to adopt this approach. Especially if you are tinkering, it might be a good idea to take the NSes which talk to the Internet and put them by themselves.
Then do all of this .localdomain stuff somewhere else. The implementation could be in jails or what have you. Personally my brain goes fuzzy when I look at all of the authorization clauses in the bind configs.

-jesse

From bschonhorst at gmail.com Fri May 15 23:08:13 2009
From: bschonhorst at gmail.com (Brad Schonhorst)
Date: Fri, 15 May 2009 23:08:13 -0400
Subject: [nycbug-talk] OpenBSD large filesystem experiences?
Message-ID: <7708fd680905152008x4b03fb09t71c6ff1a69b5a053@mail.gmail.com>

Hi all!

Wondering if anyone has experimented with large file systems under OpenBSD. It sounds like FFS2 filesystems are limited to ~1TB if you want to be able to run fsck. Have you created larger file systems, presumably by increasing the block size? Any problems or other issues I should consider?

Thanks,
Brad

From mark.saad at ymail.com Fri May 15 23:49:50 2009
From: mark.saad at ymail.com (mark.saad at ymail.com)
Date: Sat, 16 May 2009 03:49:50 +0000
Subject: [nycbug-talk] OpenBSD large filesystem experiences?
In-Reply-To: <7708fd680905152008x4b03fb09t71c6ff1a69b5a053@mail.gmail.com>
References: <7708fd680905152008x4b03fb09t71c6ff1a69b5a053@mail.gmail.com>
Message-ID: <1479372543-1242445793-cardhu_decombobulator_blackberry.rim.net-395363067-@bxe1122.bisx.prod.on.blackberry>

I hate to say it, but ZFS is your only choice at the 1TB size, and FreeBSD or Solaris on whole disks (not partitions) is the only option.

Sent from my Verizon Wireless BlackBerry

-----Original Message-----
From: Brad Schonhorst
Date: Fri, 15 May 2009 23:08:13
To: 
Subject: [nycbug-talk] OpenBSD large filesystem experiences?
_______________________________________________
talk mailing list
talk at lists.nycbug.org
http://lists.nycbug.org/mailman/listinfo/talk

From matt at atopia.net Sun May 17 00:00:35 2009
From: matt at atopia.net (Matt Juszczak)
Date: Sun, 17 May 2009 00:00:35 -0400 (EDT)
Subject: [nycbug-talk] External Authentication Implementation in FreeBSD
In-Reply-To: 
References: <0KJN0079AQPND0JW@vms173007.mailsrvcs.net>
Message-ID: 

What about "ldapifying" the LDAP servers? If server1 is LDAP primary and server2 is LDAP secondary, should you put nss_ldap/pam_ldap on those boxes, have them connect to the local instance, and have them fail over to files just in case the LDAP process is down? Or should those boxes that drive authentication and authorization, etc., be driven by local files/system only?

From cwolsen at ubixos.com Sun May 17 08:30:14 2009
From: cwolsen at ubixos.com (Christopher Olsen)
Date: Sun, 17 May 2009 08:30:14 -0400
Subject: [nycbug-talk] External Authentication Implementation in FreeBSD
In-Reply-To: 
References: <0KJN0079AQPND0JW@vms173007.mailsrvcs.net>
Message-ID: <4A100356.2010906@ubixos.com>

What I was hoping for was something similar to the way workstations work on a Windows domain: if the domain is there, they will log right onto it; if by chance it's not available, they will use cached credentials to get onto the workstation.

Matt Juszczak wrote:
> What about "ldapifying" the LDAP servers? If server1 is LDAP primary
> and server2 is LDAP secondary, should you put nss_ldap/pam_ldap on
> those boxes, have them connect to the local instance, and have it
> failover to files just in case the LDAP process is down? or should
> those boxes that drive authentication and authorization, etc. be
> driven by local files/system only?
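[The files-fallback Matt describes is usually expressed as an nsswitch.conf source ordering plus soft-bind settings in nss_ldap's configuration. A sketch under stated assumptions - the server names, base DN, and the nss_ldap.conf path are placeholders, and exact option names should be checked against the installed nss_ldap:

```
# /etc/nsswitch.conf -- consult local files first, then LDAP
passwd: files ldap
group:  files ldap

# nss_ldap configuration (e.g. /usr/local/etc/nss_ldap.conf) -- hypothetical
host ldap1.example.net ldap2.example.net   # primary, then secondary
base dc=example,dc=net
bind_policy soft       # don't hang logins when LDAP is unreachable
bind_timelimit 5
timelimit 5
```

With "files" first, root and system accounts in /etc/master.passwd keep working even when both LDAP servers are down.]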
From matt at atopia.net Sun May 17 12:08:39 2009 From: matt at atopia.net (Matt Juszczak) Date: Sun, 17 May 2009 12:08:39 -0400 (EDT) Subject: [nycbug-talk] External Authentication Implementation in FreeBSD In-Reply-To: <4A100356.2010906@ubixos.com> References: <0KJN0079AQPND0JW@vms173007.mailsrvcs.net> <4A100356.2010906@ubixos.com> Message-ID: Ah... I think the best bet would be to setup an ldap slave on each server and use it as the failover server. The other option is to generate passwd/shadow/group files from ldap so that it will always work. On Sun, 17 May 2009, Christopher Olsen wrote: > What I was hoping was if it can do something similar to way the way > workstations work from a windows domain if the domain is there they will log > right onto it if by chance it's not available it will use cached credentials > to get them onto the workstation. > > > Matt Juszczak wrote: >> What about "ldapifying" the LDAP servers? If server1 is LDAP primary and >> server2 is LDAP secondary, should you put nss_ldap/pam_ldap on those boxes, >> have them connect to the local instance, and have it failover to files just >> in case the LDAP process is down? or should those boxes that drive >> authentication and authorization, etc. be driven by local files/system >> only? > > From matt at atopia.net Sun May 17 20:00:22 2009 From: matt at atopia.net (matt at atopia.net) Date: Mon, 18 May 2009 00:00:22 +0000 Subject: [nycbug-talk] Make package-recursive problem Message-ID: <1592882363-1242604820-cardhu_decombobulator_blackberry.rim.net-163190730-@bxe1247.bisx.prod.on.blackberry> Hi all, I'm stuck. I have been building packages all day. Apache, php, cacti, nagios, etc. Each time I do this, I cd into the port and run make package-recursive. What I'm noticing is that many php extension packages that were once normal working packages are being replaced by 2k sized packages. These packages still contain the proper meta data, but are missing the .so file. 
They seem to install correctly on the build box, but with those same packages installed on a production webserver, they don't install the .so file. Any ideas? I'm stuck and its been making me pull my hair out. Matt From mark.saad at ymail.com Sun May 17 21:12:05 2009 From: mark.saad at ymail.com (mark.saad at ymail.com) Date: Mon, 18 May 2009 01:12:05 +0000 Subject: [nycbug-talk] Make package-recursive problem Message-ID: <1872579251-1242609122-cardhu_decombobulator_blackberry.rim.net-663046506-@bxe1122.bisx.prod.on.blackberry> Matt What 'ports' system are you using ? Also on what os & version ? ------Original Message------ From: matt at atopia.net Sender: talk-bounces at lists.nycbug.org To: talk at lists.nycbug.org ReplyTo: matt at atopia.net Subject: [nycbug-talk] Make package-recursive problem Sent: May 17, 2009 8:00 PM Hi all, I'm stuck. I have been building packages all day. Apache, php, cacti, nagios, etc. Each time I do this, I cd into the port and run make package-recursive. What I'm noticing is that many php extension packages that were once normal working packages are being replaced by 2k sized packages. These packages still contain the proper meta data, but are missing the .so file. They seem to install correctly on the build box, but with those same packages installed on a production webserver, they don't install the .so file. Any ideas? I'm stuck and its been making me pull my hair out. Matt _______________________________________________ talk mailing list talk at lists.nycbug.org http://lists.nycbug.org/mailman/listinfo/talk Sent from my Verizon Wireless BlackBerry From matt at atopia.net Sun May 17 21:40:15 2009 From: matt at atopia.net (matt at atopia.net) Date: Mon, 18 May 2009 01:40:15 +0000 Subject: [nycbug-talk] Make package-recursive problem Message-ID: <1089862135-1242610804-cardhu_decombobulator_blackberry.rim.net-1793152335-@bxe1247.bisx.prod.on.blackberry> Freebsd 7.1. 
The ports system that comes with it =) ------Original Message------ From: mark.saad at ymail.com Sender: talk-bounces at lists.nycbug.org To: talk at lists.nycbug.org ReplyTo: mark.saad at ymail.com Subject: Re: [nycbug-talk] Make package-recursive problem Sent: May 17, 2009 21:12 Matt What 'ports' system are you using ? Also on what os & version ? ------Original Message------ From: matt at atopia.net Sender: talk-bounces at lists.nycbug.org To: talk at lists.nycbug.org ReplyTo: matt at atopia.net Subject: [nycbug-talk] Make package-recursive problem Sent: May 17, 2009 8:00 PM Hi all, I'm stuck. I have been building packages all day. Apache, php, cacti, nagios, etc. Each time I do this, I cd into the port and run make package-recursive. What I'm noticing is that many php extension packages that were once normal working packages are being replaced by 2k sized packages. These packages still contain the proper meta data, but are missing the .so file. They seem to install correctly on the build box, but with those same packages installed on a production webserver, they don't install the .so file. Any ideas? I'm stuck and its been making me pull my hair out. 
Matt _______________________________________________ talk mailing list talk at lists.nycbug.org http://lists.nycbug.org/mailman/listinfo/talk Sent from my Verizon Wireless BlackBerry _______________________________________________ talk mailing list talk at lists.nycbug.org http://lists.nycbug.org/mailman/listinfo/talk From o_sleep at belovedarctos.com Sun May 17 22:40:12 2009 From: o_sleep at belovedarctos.com (Bjorn Nelson) Date: Sun, 17 May 2009 22:40:12 -0400 Subject: [nycbug-talk] Make package-recursive problem In-Reply-To: <1089862135-1242610804-cardhu_decombobulator_blackberry.rim.net-1793152335-@bxe1247.bisx.prod.on.blackberry> References: <1089862135-1242610804-cardhu_decombobulator_blackberry.rim.net-1793152335-@bxe1247.bisx.prod.on.blackberry> Message-ID: <4A10CA8C.1070801@belovedarctos.com> Matt, If it's a prod webserver, you might want to just copy the .so files so you can leave it for now in a working shape and spend tomorrow, rested, fixing the ports dependencies :) Otherwise, are you using lang/php4 or another? Do you have different /etc/make.conf files from your build server or are calling the make with different mark variables set (make config differences)? Maybe it would be easier to run pkgadd -r for now? -Bjorn matt at atopia.net wrote: > Freebsd 7.1. The ports system that comes with it =) > ------Original Message------ > From: mark.saad at ymail.com > Sender: talk-bounces at lists.nycbug.org > To: talk at lists.nycbug.org > ReplyTo: mark.saad at ymail.com > Subject: Re: [nycbug-talk] Make package-recursive problem > Sent: May 17, 2009 21:12 > > Matt > What 'ports' system are you using ? Also on what os & version ? > > ------Original Message------ > From: matt at atopia.net > Sender: talk-bounces at lists.nycbug.org > To: talk at lists.nycbug.org > ReplyTo: matt at atopia.net > Subject: [nycbug-talk] Make package-recursive problem > Sent: May 17, 2009 8:00 PM > > Hi all, > > I'm stuck. I have been building packages all day. 
Apache, php, cacti, nagios, etc. Each time I do this, I cd into the port and run make package-recursive. > > What I'm noticing is that many php extension packages that were once normal working packages are being replaced by 2k sized packages. These packages still contain the proper meta data, but are missing the .so file. They seem to install correctly on the build box, but with those same packages installed on a production webserver, they don't install the .so file. > > Any ideas? I'm stuck and its been making me pull my hair out. > > Matt > > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > > > Sent from my Verizon Wireless BlackBerry > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From matt at atopia.net Sun May 17 22:48:00 2009 From: matt at atopia.net (Matt Juszczak) Date: Sun, 17 May 2009 22:48:00 -0400 (EDT) Subject: [nycbug-talk] Make package-recursive problem In-Reply-To: <4A10CA8C.1070801@belovedarctos.com> References: <1089862135-1242610804-cardhu_decombobulator_blackberry.rim.net-1793152335-@bxe1247.bisx.prod.on.blackberry> <4A10CA8C.1070801@belovedarctos.com> Message-ID: > If it's a prod webserver, you might want to just copy the .so files so you > can leave it for now in a working shape and spend tomorrow, rested, fixing > the ports dependencies :) It's a webserver that will be production. It's out of production now. I attempted to fix the problem by doing: cd /usr/ports/www/php5-session && make package-recursive which did not fix the problem. 
In fact, it created more "no files" packages:

s507# pkg_info -L php5-session-5.2.9
Information for php5-session-5.2.9:

Files:
s507#

But I just went and manually built a package just for that module:

cd /usr/ports/www/php5-session && make clean && make deinstall && make package

and it worked:

s507# pkg_info -L php5-session-5.2.9
Information for php5-session-5.2.9:

Files:
/usr/local/lib/php/20060613/session.so
/usr/local/include/php/ext/session/config.h
/usr/local/include/php/ext/session/mod_files.h
/usr/local/include/php/ext/session/mod_mm.h
/usr/local/include/php/ext/session/mod_user.h
/usr/local/include/php/ext/session/php_session.h

My only question at this point is... it seems like all these modules do is make .so files, right? So if I make clean and recompile them, without creating a new php5 master package, it should still all work as long as it's the same version... correct? Or are there potential issues with building these php5-* modules separately from php5 (doing a make clean first)?

-M

From o_sleep at belovedarctos.com Sun May 17 22:53:10 2009
From: o_sleep at belovedarctos.com (Bjorn Nelson)
Date: Sun, 17 May 2009 22:53:10 -0400
Subject: [nycbug-talk] Make package-recursive problem
In-Reply-To: 
References: <1089862135-1242610804-cardhu_decombobulator_blackberry.rim.net-1793152335-@bxe1247.bisx.prod.on.blackberry> <4A10CA8C.1070801@belovedarctos.com>
Message-ID: <4A10CD96.9020002@belovedarctos.com>

> My only question at this point is... it seems like all these modules
> do is make .so files right? So if I make clean and recompile them,
> without creating a new php5 master package, it should still all work
> as long as its the same version.... correct? Or are there potential
> issues by building these php5-* modules separately from php5? (doing a
> make clean first)

It should be fine to install the ports directly instead of from the master port. I think the worst that will happen is you may need to clear out some duplicate lines in your httpd.conf.
Is pkgdb -F coming back with anything weird?

-Bjorn

From matt at atopia.net Sun May 17 22:56:32 2009
From: matt at atopia.net (Matt Juszczak)
Date: Sun, 17 May 2009 22:56:32 -0400 (EDT)
Subject: [nycbug-talk] Make package-recursive problem
In-Reply-To: <4A10CD96.9020002@belovedarctos.com>
References: <1089862135-1242610804-cardhu_decombobulator_blackberry.rim.net-1793152335-@bxe1247.bisx.prod.on.blackberry> <4A10CA8C.1070801@belovedarctos.com> <4A10CD96.9020002@belovedarctos.com>
Message-ID: 

> It should be fine to install the ports directly instead of from the master
> port. I think the worst that will happen is you may need to clear out some
> duplicate lines in your httpd.conf. Is pkgdb -F coming back with anything
> weird?

I'm not actually installing the ports. I'm creating packages from the ports and then installing them on another box. Haven't run pkgdb -F.

The thing is, these packages worked at one point. For instance, when I did "make package-recursive" inside the cacti directory, it created php and about 10 php modules, including php5-session. And the php5-session package worked fine. But then I immediately went to build a nagios package, did make package-recursive, and for some reason it re-created the dependency packages, including session. Except this time, php5-session became an empty package.

At this point, I've just started running pkg_info -L on the packages created to make sure they have files. Kind of a roundabout way of doing it, but any packages that don't have files, I go back and rebuild manually.

Like I said, my only worry is that when I do things like:

cd /usr/ports/lang/php5 && make package-recursive

and then:

cd /usr/ports/www/php5-session && make clean && make package

I wonder, since it's redownloading and recompiling the code, are there differences being created, or no? I guess if you compile something twice on the same box, you'll always have the same outcome, right?
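[Matt's manual pkg_info -L spot-check is easy to script. A small sketch - the check_pkg_files helper name is made up; it reads `pkg_info -L <pkg>` output and flags packages whose file list is empty, which is exactly the symptom in this thread:

```shell
#!/bin/sh
# Flag packages that list no installed files.
# check_pkg_files reads `pkg_info -L` output on stdin and prints MISSING
# when no installed-file paths (lines starting with "/") appear.
check_pkg_files() {
    awk '/^\// { n++ } END { if (n == 0) print "MISSING"; else print n, "files" }'
}

# Typical use on the build box (assumes the FreeBSD pkg_* tools):
#   for pkg in $(pkg_info -E 'php5-*'); do
#       printf '%s: ' "$pkg"
#       pkg_info -L "$pkg" | check_pkg_files
#   done
```

Running the commented loop over the php5-* packages would print MISSING next to any "2k sized" empty package before it ever reaches the production webserver.]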
-M From bruno at loftmail.com Mon May 18 12:18:01 2009 From: bruno at loftmail.com (Bruno Scap) Date: Mon, 18 May 2009 12:18:01 -0400 Subject: [nycbug-talk] Solaris / Linux admin Message-ID: <20090518161801.GI18102@loftmail.com> If anyone is interested in a Solaris / Linux sysadmin position, some Java/Jboss, MySQL, Artesia, Sharepoint, full time, let me know and I will put you in touch with someone who can provide you with more info. From matt at atopia.net Mon May 18 15:31:08 2009 From: matt at atopia.net (Matt Juszczak) Date: Mon, 18 May 2009 15:31:08 -0400 (EDT) Subject: [nycbug-talk] Do you guys/gals _____ify your _____ boxes? Message-ID: The subject is confusing, I know. But you can fill in almost anything: Do you guys/gals cfengineify your cfengine boxes? Do you guys/gals ldapify your ldap boxes? Do you guys/gals puppetify your puppet boxes? In other words, on the boxes where these services are running, do you set those services up? Say you have 5 boxes. box1 box2 - hosts LDAP server box3 - hosts puppet daemon box4 box5 box1, box4, and box5 would obviously be setup to authenticate to LDAP (box2) and have their configurations managed by puppet (box3). But would you have box2 authenticate to LDAP? and would you have box3 managed by puppet? Thanks for everyone's opinion :) -Matt From isaac at diversaform.com Mon May 18 15:48:01 2009 From: isaac at diversaform.com (Isaac Levy) Date: Mon, 18 May 2009 15:48:01 -0400 Subject: [nycbug-talk] Do you guys/gals _____ify your _____ boxes? In-Reply-To: References: Message-ID: <08B21C62-9360-4CA6-9D28-8B05172FF120@diversaform.com> Hi Matt, On May 18, 2009, at 3:31 PM, Matt Juszczak wrote: > The subject is confusing, I know. > > But you can fill in almost anything: > > Do you guys/gals cfengineify your cfengine boxes? > Do you guys/gals ldapify your ldap boxes? > Do you guys/gals puppetify your puppet boxes? > > In other words, on the boxes where these services are running, do > you set > those services up? 
> Say you have 5 boxes.
>
> box1
> box2 - hosts LDAP server
> box3 - hosts puppet daemon
> box4
> box5
>
> box1, box4, and box5 would obviously be setup to authenticate to LDAP
> (box2) and have their configurations managed by puppet (box3). But would
> you have box2 authenticate to LDAP? and would you have box3 managed by
> puppet?
>
> Thanks for everyone's opinion :)
>
> -Matt
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org/mailman/listinfo/talk

I would think this kind of recursion is terribly bad practice - but this would depend on your requirements. For example, I tend to see glaring problems letting the LDAP server machine auth to itself, but heck - there may be a need to provide users in LDAP some kind of access to that box. Still smells like a terrible idea.

The Puppet daemon, that seems a bit odd - unless one has many different puppet boxes to manage - but I can't really get creative enough on a Monday to think up a scenario when that'd happen.

DNS is a no-brainer - not sane... Etc... Etc... my .02?

Best,
.ike

From matt at atopia.net Mon May 18 15:51:46 2009
From: matt at atopia.net (Matt Juszczak)
Date: Mon, 18 May 2009 15:51:46 -0400 (EDT)
Subject: [nycbug-talk] Do you guys/gals _____ify your _____ boxes?
In-Reply-To: <08B21C62-9360-4CA6-9D28-8B05172FF120@diversaform.com>
References: <08B21C62-9360-4CA6-9D28-8B05172FF120@diversaform.com>
Message-ID: 

> I would think this kind of recursion is terribly bad practice - but this
> would depend on your requirements. For example, I tend to see glaring
> problems letting the LDAP server machine auth to itself, but heck - there
> may be a need to provide users in LDAP some kind of access to that box.
> Still smells like a terrible idea.

Hi .ike,

In my setup, the LDAP and puppet server actually is the same set of two boxes. So you're saying that I should keep those two boxes standalone, completely independently managed and authenticated?
-Matt

From bcully at gmail.com Mon May 18 17:20:48 2009
From: bcully at gmail.com (Brian Cully)
Date: Mon, 18 May 2009 17:20:48 -0400
Subject: [nycbug-talk] Do you guys/gals _____ify your _____ boxes?
In-Reply-To: 
References: 
Message-ID: 

On 18-May-2009, at 15:31, Matt Juszczak wrote:
> Do you guys/gals cfengineify your cfengine boxes?

When I set up cfengine I clone the complete contents of the box to every other box it manages. Thus any box can become any box with the flip of a switch, including the cfengine master. This methodology would probably apply to puppet itself.

> Do you guys/gals ldapify your ldap boxes?

I don't use LDAP, but I do use Kerberos, and in that case, no, I do not use Kerberos to manage access to the Kerberos server. I have no real reason for this except that it assuages my security-related anxiety, and if there's some issue with Kerberos I still need to get access to that box somehow.

FWIW, I consider my auth boxen to require the most restrictive kinds of security. I don't even put telnet/ssh on them. If they have issues you either need physical access or some other kind of highly secure back channel to get into them and deal with it, so in that sense the question doesn't even apply: you can't use Kerberos to auth non-existent services.

-bjc

From o_sleep at belovedarctos.com Mon May 18 21:53:47 2009
From: o_sleep at belovedarctos.com (Bjorn Nelson)
Date: Mon, 18 May 2009 21:53:47 -0400
Subject: [nycbug-talk] Do you guys/gals _____ify your _____ boxes?
In-Reply-To: <08B21C62-9360-4CA6-9D28-8B05172FF120@diversaform.com>
References: <08B21C62-9360-4CA6-9D28-8B05172FF120@diversaform.com>
Message-ID: <4A12112B.4030607@belovedarctos.com>

Isaac Levy wrote:
> DNS, is a no-brainer not sane... Etc... Etc...
>
Is there something wrong with dns pointing to itself (127.0.0.1) for dns?
-Bjorn

From kacanski_s at yahoo.com Tue May 19 10:54:20 2009
From: kacanski_s at yahoo.com (Aleksandar Kacanski)
Date: Tue, 19 May 2009 07:54:20 -0700 (PDT)
Subject: [nycbug-talk] Approach for the NFS cluster
Message-ID: <595270.63168.qm@web53602.mail.re2.yahoo.com>

Hi, I am interested in whether anyone has implemented NFS in an HA configuration, and what folks' experiences are with different scenarios. I am interested in pNFS, but there is DRBD (heartbeat), or just a plain cluster of NFS servers without sharing metadata. My requirement is that I need to make the service available and I can't have one point of failure...

Regards,
--Aleksandar (Sasha) Kacanski

From george at ceetonetechnology.com Tue May 19 11:15:42 2009
From: george at ceetonetechnology.com (George Rosamond)
Date: Tue, 19 May 2009 11:15:42 -0400
Subject: [nycbug-talk] Approach for the NFS cluster
In-Reply-To: <595270.63168.qm@web53602.mail.re2.yahoo.com>
References: <595270.63168.qm@web53602.mail.re2.yahoo.com>
Message-ID: <4A12CD1E.40703@ceetonetechnology.com>

Aleksandar Kacanski wrote:
> Hi, I am interested if anyone implemented NFS in HA configuration and
> what are folks experiences with different scenarios. I am interested
> in pNFS, but there is DRBD (harbeat) or just plain cluster of nfs
> servers without sharing metadata. My requirement is that I need to
> make service available and I can't have one point of failure...
>
> Regards, --Aleksandar (Sasha) Kacanski

AFAIK, carp was developed for file mounts. . . I think SMB though.

Worth the pursuit.
g

From skreuzer at exit2shell.com Tue May 19 12:32:19 2009
From: skreuzer at exit2shell.com (Steven Kreuzer)
Date: Tue, 19 May 2009 12:32:19 -0400
Subject: [nycbug-talk] Approach for the NFS cluster
In-Reply-To: <4A12CD1E.40703@ceetonetechnology.com>
References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com>
Message-ID: <7A6DBDC2-404F-4379-BBDB-65BFEEADC94D@exit2shell.com>

On May 19, 2009, at 11:15 AM, George Rosamond wrote:
> Aleksandar Kacanski wrote:
>> Hi, I am interested if anyone implemented NFS in HA configuration and
>> what are folks experiences with different scenarios. I am interested
>> in pNFS, but there is DRBD (harbeat) or just plain cluster of nfs
>> servers without sharing metadata. My requirement is that I need to
>> make service available and I can't have one point of failure...
>>
>> Regards, --Aleksandar (Sasha) Kacanski
>
> AFAIK, carp was developed for file mounts. . . I think SMB though.
>
> Worth the pursuit.

The biggest issue with high-availability NFS is that you need to share a common pool of storage between two servers. Take a look at geom_mirror, ggate{c,d}, and carp.

On the backup machine, you would export the storage as a block device using ggated to the primary server. On the primary server, you would mount the block device using ggatec and then use geom_mirror to mirror writes to the primary and backup server. Export the volumes via NFS on each host and configure a carp interface that clients will connect to.

This is an extremely oversimplified explanation of how you could provide HA NFS. There are a lot of things to take into consideration, such as how you would fail back, etc.

If this is mission critical, this is definitely an instance where I would solve this problem by throwing money at it. I would suggest you look at Isilon, NetApp and Sun, all of whom do exactly what you want very well.
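[Steven's recipe, spelled out as commands. This is a hedged sketch only: the device names, the 192.0.2.x addresses, and the export path are invented, and, as Miles notes later in the thread, a real deployment also needs heartbeat/failback logic that these commands do not provide:

```
# --- backup machine: export a local disk as a GEOM gate device ---
echo "192.0.2.10 RW /dev/da1" > /etc/gg.exports   # allow the primary, read-write
ggated                                            # serve /dev/da1 over TCP

# --- primary machine: import it and mirror the local disk with it ---
ggatec create -o rw 192.0.2.11 /dev/da1           # shows up as /dev/ggate0
gmirror label -v gm0 /dev/da1 /dev/ggate0         # writes hit both machines
newfs /dev/mirror/gm0
mount /dev/mirror/gm0 /export

# --- both machines: shared service address via CARP, then export ---
ifconfig carp0 create
ifconfig carp0 vhid 1 pass nfsha 192.0.2.100/24   # clients mount 192.0.2.100:/export
echo "/export -maproot=root -network 192.0.2.0/24" >> /etc/exports
```

On failover the surviving node must still take over the gmirror volume, fsck/mount it, and restart nfsd before answering on the CARP address - the part Steven calls oversimplified.]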
--
Steven Kreuzer
http://www.exit2shell.com/~skreuzer

From skreuzer at exit2shell.com Tue May 19 12:41:23 2009
From: skreuzer at exit2shell.com (Steven Kreuzer)
Date: Tue, 19 May 2009 12:41:23 -0400
Subject: [nycbug-talk] Do you guys/gals _____ify your _____ boxes?
In-Reply-To: 
References: 
Message-ID: <52865BEF-67F4-477D-B164-A0BD7EA037AD@exit2shell.com>

On May 18, 2009, at 3:31 PM, Matt Juszczak wrote:
> box1, box4, and box5 would obviously be setup to authenticate to LDAP
> (box2) and have their configurations managed by puppet (box3). But would
> you have box2 authenticate to LDAP? and would you have box3 managed by
> puppet?

If you have a master puppet server, it makes sense that all the configuration you do to the box is done via puppet. If your master puppet server dies, it will allow you to say "this is the new master puppet server" and have the box back online in a matter of minutes. If someone changes something on your master puppet server, it's better to have puppet discover it, change it back, and alert you, instead of you discovering the change weeks later.

As for LDAP, I prefer to configure every machine to first auth against the primary LDAP server, then the slave LDAP server, and then files. You keep root and system-level accounts in /etc/passwd and user accounts are stored in LDAP.
This allows you to log in to the box if you break something but keeps the auth subsystem of each server consistent. -- Steven Kreuzer http://www.exit2shell.com/~skreuzer From carton at Ivy.NET Tue May 19 13:23:00 2009 From: carton at Ivy.NET (Miles Nordin) Date: Tue, 19 May 2009 13:23:00 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <7A6DBDC2-404F-4379-BBDB-65BFEEADC94D@exit2shell.com> (Steven Kreuzer's message of "Tue, 19 May 2009 12:32:19 -0400") References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <7A6DBDC2-404F-4379-BBDB-65BFEEADC94D@exit2shell.com> Message-ID: >>>>> "sk" == Steven Kreuzer writes: sk> This is an extremely oversimplified explanation of how you sk> could provide HA NFS. yeah, though this is coming from someone who's never done it, that sounds like a good summary. Except that I don't know of any actual clustering software built over ggate, and it's not something you roll yourself with shell scripts. The volume cannot be mounted on both nodes at the same time because obviously the filesystem doesn't support that, so, like other HA stuff, there has to be a heartbeat network connection or a SCSI reservation scheme or some such magic so the inactive node knows it's time to take over the storage, fsck/log-roll it, mount it, export the NFS. It's not like they can both be ready all the time, and CARP will decide which one gets the work---not possible. Also the active node has to notice if, for some reason, it has lost control by the rules of the heartbeat/reservation scheme even though it doesn't feel crashed, and in that case it should crash itself. There may also be some app-specific magic in NFS.
The feature that lets clients go through server reboots without losing any data, even on open files, should make it much easier to clusterify than SMB: on NFS this case is explicitly supported by, among other things, all the write caches in the server filesystem and disks are kept in duplicate in the clients so they can be re-rolled if the server crashes. But there may be some tricky corner cases the clustering software needs to handle. For example, on Solaris if using ZFS, you can ``disable the ZIL'' to improve NFS performance in the case where you're opening, writing, closing files frequently, but the cost of disabling is that you lose this stateless-server-reboot feature. sk> suggest you look at Isilon, NetApp and Sun, The solaris clustering stuff may actually be $0. I'm not sure though, never run it. The clustering stuff is not the same thing as the pNFS stuff. +1 on Steven's point that you can do this with regular NFS on the clients---only the servers need to be special. But they need to be pretty special. The old clusters used a SCSI chain with two host adapters, one at each end of the bus, so there's no external terminator (just the integrated terminator in the host adapters). These days probably you will need a SAS chassis with connections for two initiators. unless the ggate thing works, but there's a need to flush write buffers deterministically when told to for the NFS corner case, and some clusters use this SCSI-2 reservation command, so...shared storage is not so much this abstract modular good-enough blob. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From riegersteve at gmail.com Tue May 19 14:05:28 2009 From: riegersteve at gmail.com (Steve Rieger) Date: Tue, 19 May 2009 11:05:28 -0700 Subject: [nycbug-talk] (a bit ot) web server weird stuff Message-ID: <4A12F4E8.1030408@gmail.com> starting yesterday i see the following in my access logs, and cant seem to figure out what the heck is going on, using lighttp, got any insight ? 77.108.102.246 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.43:443 HTTP/1.0" 501 357 "-" "-" 77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.36:443 HTTP/1.0" 501 357 "-" "-" 77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.16:443 HTTP/1.0" 501 357 "-" "-" 60.168.252.7 xml.nbcsearch.com - [19/May/2009:10:07:10 -0700] "GET http://xml.nbcsearch.com/xml.php?affiliate=searchdao&Terms=food+nutrition&IP=208%2E127%2E94 %2E89 HTTP/1.0" 404 345 "-" "Mozilla/4.0 (compatible; MSIE 5.01; Windows 95; Alexa Toolbar)" 77.108.102.246 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.31:443 HTTP/1.0" 501 357 "-" "-" 59.90.1.66 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.6:443 HTTP/1.0" 501 357 "-" "-" 59.90.1.66 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.1:443 HTTP/1.0" 501 357 "-" "-" 113.22.163.156 - - [19/May/2009:10:07:10 -0700] "GET http://n31.login.re3.yahoo.com/config/pwtoken_get?login=roseau at snet.net&src=ygodgw&passwd=bc144134bc7b611 91e8e2f6c0833364c&challenge=FqJZxsmRe5Eq__AOpETXgvYrGqMd&md5=1 HTTP/1.0" 404 345 "-" "MobileRunner-J2ME" 117.13.200.239 adserver.adtech.de - [19/May/2009:10:07:10 -0700] "GET http://adserver.adtech.de/adiframe/3.0/932/2081232/0/225/ADTECH;target=_blank;grp=%5Bgro up%5D HTTP/1.1" 404 345 "http://www.vampirefreaks.com/" "mozilla/5.0 (windows; u; win98; en-us; rv:1.8.0.7) gecko/20060909 firefox/1.5.0.7" 95.79.193.64 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.179.233:443 HTTP/1.0" 501 357 "-" "-" 
77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT 64.12.161.185:443 HTTP/1.0" 501 357 "-" "-" 117.14.247.15 adserver.adtech.de - [19/May/2009:10:07:10 -0700] "GET http://adserver.adtech.de/adiframe/3.0/932/2081232/0/225/ADTECH;target=_blank;grp=%5Bgrou p%5D HTTP/1.0" 404 345 "http://www.vampirefreaks.com/" "mozilla/4.0 (compatible; msie 6.0; windows nt 5.1; sv1; .net clr 1.1.4322)" 121.204.134.135 blueadvertise.com - [19/May/2009:10:07:10 -0700] "GET http://blueadvertise.com/publisher/____ic300250.php?cache=625 HTTP/1.0" 404 345 "-" "Moz illa/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)" 77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.31:443 HTTP/1.0" 501 357 "-" "-" 77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.21:443 HTTP/1.0" 501 357 "-" "-" 77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT 64.12.200.89:443 HTTP/1.0" 501 357 "-" "-" 59.40.58.109 adserver.adtech.de - [19/May/2009:10:07:11 -0700] "GET http://adserver.adtech.de/adiframe/3.0/932/2067462/0/170/ADTECH;target=_blank;grp=%5Bgroup %5D HTTP/1.0" 404 345 "http://www.cheatcc.com/" "mozilla/4.0 (compatible; msie 6.0; windows nt 5.1)" 79.46.69.130 - - [19/May/2009:10:07:11 -0700] "" 400 349 "-" "-" 92.243.182.98 - - [19/May/2009:10:07:11 -0700] "CONNECT login.icq.com:443 HTTP/1.0" 501 357 "-" "-" 123.118.117.2 network.realmedia.com - [19/May/2009:10:07:11 -0700] "GET http://network.realmedia.com/RealMedia/ads/adstream_sx.ads/xbox-pro/300x250/ron/gmsent 1834/ss/a at x15 HTTP/1.0" 404 345 "http://www.xbox-pro.com/" "mozilla/4.0 (compatible; msie 6.0; windows nt 5.1; sv1)" 67.19.122.146 ad.reachjunction.com - [19/May/2009:10:07:11 -0700] "GET http://ad.reachjunction.com/st?ad_type=pop&ad_size=0x0§ion=505085&banned_pop_types= 29&pop_times=1&pop_frequency=86400 HTTP/1.1" 404 345 "http%3A%2F%2Fwww.rsfox.com%2Findex.html" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1. 
1.43" 77.108.102.246 - - [19/May/2009:10:07:11 -0700] "CONNECT 64.12.161.153:443 HTTP/1.0" 501 357 "-" "-" 219.134.252.92 network.realmedia.com - [19/May/2009:10:07:11 -0700] "GET http://network.realmedia.com/RealMedia/ads/adstream_jx.ads/couponhill/728x90/ron/ents hpwmn/ss/a/1044233186 at Top1 HTTP/1.1" 404 345 "http://www.couponhill.com" "mozilla/4.0 (compatible; msie 6.0; windows nt 5.0)" 66.39.218.8 www.ticketmaster.com - [19/May/2009:10:07:11 -0700] "GET http://www.ticketmaster.com/event/040042788C0D2230?artistid=805913&majorcatid=10004&minor catid=9 HTTP/1.1" 404 345 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.0 4506.648; .NET CLR 3.5.21022)" 77.108.102.246 - - [19/May/2009:10:07:11 -0700] "CONNECT 205.188.251.36:443 HTTP/1.0" 501 357 "-" "-" 123.118.117.116 content.pulse360.com - [19/May/2009:10:07:11 -0700] "GET http://content.pulse360.com/cgi-bin/context.cgi?id=88550819&cgroup=external_content_v ideo&color=orange&format=vid300x500swf&subid=92020793 HTTP/1.0" 404 345 "http://www.spineshealth.com/" "mozilla/4.0 (compatible; msie 6.0; windows nt 5.1)" 60.26.10.79 adserver.adtech.de - [19/May/2009:10:07:11 -0700] "GET http://adserver.adtech.de/adiframe/3.0/932/2067447/0/170/ADTECH;target=_blank;grp=%5Bgroup% 5D HTTP/1.1" 404 345 "http://www.mugglenet.com/" "mozilla/4.0 (compatible; msie 6.0; windows nt 5.1; sv1; .net clr 1.1.4322)" From george at ceetonetechnology.com Tue May 19 14:40:17 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Tue, 19 May 2009 14:40:17 -0400 Subject: [nycbug-talk] (a bit ot) web server weird stuff In-Reply-To: <4A12F4E8.1030408@gmail.com> References: <4A12F4E8.1030408@gmail.com> Message-ID: <4A12FD11.4050405@ceetonetechnology.com> Steve Rieger wrote: > starting yesterday i see the following in my access logs, and cant seem > to figure out what the heck is going on, > using lighttp, got any insight ? 
> > 77.108.102.246 - - [19/May/2009:10:07:10 -0700] "CONNECT > 205.188.251.43:443 HTTP/1.0" 501 357 "-" "-" > 77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT > 205.188.251.36:443 HTTP/1.0" 501 357 "-" "-" > 77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT > 205.188.251.16:443 HTTP/1.0" 501 357 "-" "-" > 60.168.252.7 xml.nbcsearch.com - [19/May/2009:10:07:10 -0700] "GET > http://xml.nbcsearch.com/xml.php?affiliate=searchdao&Terms=food+nutrition&IP=208%2E127%2E94 > %2E89 HTTP/1.0" 404 345 "-" "Mozilla/4.0 (compatible; MSIE 5.01; Windows > 95; Alexa Toolbar)" > 77.108.102.246 - - [19/May/2009:10:07:10 -0700] "CONNECT > 205.188.251.31:443 HTTP/1.0" 501 357 "-" "-" > 59.90.1.66 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.6:443 > HTTP/1.0" 501 357 "-" "-" > 59.90.1.66 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.1:443 > HTTP/1.0" 501 357 "-" "-" > 113.22.163.156 - - [19/May/2009:10:07:10 -0700] "GET > http://n31.login.re3.yahoo.com/config/pwtoken_get?login=roseau at snet.net&src=ygodgw&passwd=bc144134bc7b611 > 91e8e2f6c0833364c&challenge=FqJZxsmRe5Eq__AOpETXgvYrGqMd&md5=1 HTTP/1.0" > 404 345 "-" "MobileRunner-J2ME" > 117.13.200.239 adserver.adtech.de - [19/May/2009:10:07:10 -0700] "GET > http://adserver.adtech.de/adiframe/3.0/932/2081232/0/225/ADTECH;target=_blank;grp=%5Bgro > up%5D HTTP/1.1" 404 345 "http://www.vampirefreaks.com/" "mozilla/5.0 > (windows; u; win98; en-us; rv:1.8.0.7) gecko/20060909 firefox/1.5.0.7" Weird. . . Just taking some stabs here, or at least stating the obvious. . . The only URLs i ever remember seeing in access.logs (in apache or lighttpd) are connected to bots and spiders for indexing. Is it possible that users can proxy through this box? Is mod_proxy enabled in the conf file? This might shed some light (doh): http://rubyforge.org/pipermail/typo-list/2005-October/000864.html (yeah, but you're not getting 200 status codes) Without looking too closely, is your access log format standard? 
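[One quick way to triage logs like the excerpt above is to tally the CONNECT attempts per source IP — a burst of CONNECTs from a handful of addresses is the classic sign of open-proxy probing. A rough Python sketch, assuming lines shaped exactly like the excerpts:]

```python
import re
from collections import Counter

# Matches the source IP and HTTP method of common-log-format lines like
# the excerpts above (the second/third fields may be a hostname or '-')
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\w+)')

def connect_counts(lines):
    """Tally CONNECT requests per source IP; a few IPs issuing many
    CONNECTs is the signature of someone hunting for an open proxy."""
    hits = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(2) == "CONNECT":
            hits[m.group(1)] += 1
    return hits

sample = [
    '77.108.102.246 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.43:443 HTTP/1.0" 501 357 "-" "-"',
    '77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT 205.188.251.36:443 HTTP/1.0" 501 357 "-" "-"',
    '60.168.252.7 xml.nbcsearch.com - [19/May/2009:10:07:10 -0700] "GET http://xml.nbcsearch.com/xml.php HTTP/1.0" 404 345 "-" "Mozilla/4.0"',
]
print(dict(connect_counts(sample)))  # {'77.108.102.246': 1, '77.66.227.146': 1}
```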
Certainly doesn't look like it's the same user proxying, not use based on http header info. . . I mean chicago hawks hockey and vampire freaks? . . (gulp) g From dave at donnerjack.com Tue May 19 14:47:54 2009 From: dave at donnerjack.com (David Lawson) Date: Tue, 19 May 2009 14:47:54 -0400 Subject: [nycbug-talk] (a bit ot) web server weird stuff In-Reply-To: <4A12FD11.4050405@ceetonetechnology.com> References: <4A12F4E8.1030408@gmail.com> <4A12FD11.4050405@ceetonetechnology.com> Message-ID: <6AFCB2F6-AA6F-4882-BC93-AA35EA458D48@donnerjack.com> On May 19, 2009, at 2:40 PM, George Rosamond wrote: > Steve Rieger wrote: >> starting yesterday i see the following in my access logs, and cant >> seem >> to figure out what the heck is going on, >> using lighttp, got any insight ? >> >> 77.108.102.246 - - [19/May/2009:10:07:10 -0700] "CONNECT >> 205.188.251.43:443 HTTP/1.0" 501 357 "-" "-" >> 77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT >> 205.188.251.36:443 HTTP/1.0" 501 357 "-" "-" >> 77.66.227.146 - - [19/May/2009:10:07:10 -0700] "CONNECT >> 205.188.251.16:443 HTTP/1.0" 501 357 "-" "-" >> 60.168.252.7 xml.nbcsearch.com - [19/May/2009:10:07:10 -0700] "GET >> http://xml.nbcsearch.com/xml.php?affiliate=searchdao&Terms=food+nutrition&IP=208%2E127%2E94 >> %2E89 HTTP/1.0" 404 345 "-" "Mozilla/4.0 (compatible; MSIE 5.01; >> Windows >> 95; Alexa Toolbar)" >> 77.108.102.246 - - [19/May/2009:10:07:10 -0700] "CONNECT >> 205.188.251.31:443 HTTP/1.0" 501 357 "-" "-" >> 59.90.1.66 - - [19/May/2009:10:07:10 -0700] "CONNECT >> 205.188.251.6:443 >> HTTP/1.0" 501 357 "-" "-" >> 59.90.1.66 - - [19/May/2009:10:07:10 -0700] "CONNECT >> 205.188.251.1:443 >> HTTP/1.0" 501 357 "-" "-" >> 113.22.163.156 - - [19/May/2009:10:07:10 -0700] "GET >> http://n31.login.re3.yahoo.com/config/pwtoken_get?login=roseau at snet.net&src=ygodgw&passwd=bc144134bc7b611 >> 91e8e2f6c0833364c&challenge=FqJZxsmRe5Eq__AOpETXgvYrGqMd&md5=1 HTTP/ >> 1.0" >> 404 345 "-" "MobileRunner-J2ME" >> 117.13.200.239 
adserver.adtech.de - [19/May/2009:10:07:10 -0700] "GET >> http://adserver.adtech.de/adiframe/3.0/932/2081232/0/225/ADTECH;target=_blank;grp=%5Bgro >> up%5D HTTP/1.1" 404 345 "http://www.vampirefreaks.com/" "mozilla/5.0 >> (windows; u; win98; en-us; rv:1.8.0.7) gecko/20060909 firefox/ >> 1.5.0.7" > > Weird. . . > > Just taking some stabs here, or at least stating the obvious. . . > > The only URLs i ever remember seeing in access.logs (in apache or > lighttpd) are connected to bots and spiders for indexing. > > Is it possible that users can proxy through this box? Is mod_proxy > enabled in the conf file? It does look very much like it's being used as an open proxy. I've seen similar stuff on Squid boxes prior to having their ACLs locked down. --Dave From riegersteve at gmail.com Tue May 19 14:48:26 2009 From: riegersteve at gmail.com (Steve Rieger) Date: Tue, 19 May 2009 11:48:26 -0700 Subject: [nycbug-talk] (a bit ot) web server weird stuff In-Reply-To: <4A12FD11.4050405@ceetonetechnology.com> References: <4A12F4E8.1030408@gmail.com> <4A12FD11.4050405@ceetonetechnology.com> Message-ID: <4A12FEFA.8060902@gmail.com> George Rosamond wrote: > > Just taking some stabs here, or at least stating the obvious. . . > > The only URLs i ever remember seeing in access.logs (in apache or > lighttpd) are connected to bots and spiders for indexing. > > Is it possible that users can proxy through this box? Is mod_proxy > enabled in the conf file? 
> mod_proxy is enabled, > i serve tomcat via lighttpd (both on localhost) From riegersteve at gmail.com Tue May 19 14:52:21 2009 From: riegersteve at gmail.com (Steve Rieger) Date: Tue, 19 May 2009 11:52:21 -0700 Subject: [nycbug-talk] (a bit ot) web server weird stuff In-Reply-To: <6AFCB2F6-AA6F-4882-BC93-AA35EA458D48@donnerjack.com> References: <4A12F4E8.1030408@gmail.com> <4A12FD11.4050405@ceetonetechnology.com> <6AFCB2F6-AA6F-4882-BC93-AA35EA458D48@donnerjack.com> Message-ID: <4A12FFE5.60103@gmail.com> David Lawson wrote: > On May 19, 2009, at 2:40 PM, George Rosamond wrote: > > > It does look very much like it's being used as an open proxy. I've > seen similar stuff on Squid boxes prior to having their ACLs locked > down. > > --Dave care to expand on that? am stabbing in the dark here.... From cwolsen at ubixos.com Tue May 19 14:53:44 2009 From: cwolsen at ubixos.com (Christopher Olsen) Date: Tue, 19 May 2009 14:53:44 -0400 Subject: [nycbug-talk] Projects In-Reply-To: <7765c0380905120806j6c2f0e1n30d266d856582afa@mail.gmail.com> References: <4A0989C7.8000006@ubixos.com> <7765c0380905120806j6c2f0e1n30d266d856582afa@mail.gmail.com> Message-ID: <002a01c9d8b3$22bd59b0$68380d10$@com> Nice. Anyone here toy with FreeNAS? -----Original Message----- From: raaaay at gmail.com [mailto:raaaay at gmail.com] On Behalf Of Ray Lai Sent: Tuesday, May 12, 2009 11:06 AM To: Christopher Olsen Cc: NYC*BUG Talk Subject: Re: [nycbug-talk] Projects I hack OpenBSD, but I'm also a slacker. -Ray- On Tue, May 12, 2009 at 7:37 AM, Christopher Olsen wrote: > Anyone on this list working on or run any form of BSD related projects?
> > -Christopher > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From slynch2112 at me.com Tue May 19 13:03:45 2009 From: slynch2112 at me.com (Siobhan Lynch) Date: Tue, 19 May 2009 13:03:45 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <595270.63168.qm@web53602.mail.re2.yahoo.com> References: <595270.63168.qm@web53602.mail.re2.yahoo.com> Message-ID: <4A12E671.4060209@me.com> On 5/19/09 10:54 AM, Aleksandar Kacanski wrote: > Hi, > I am interested if anyone implemented NFS in HA configuration and what are folks experiences with different scenarios. > I am interested in pNFS, but there is DRBD (harbeat) or just plain cluster of nfs servers without sharing metadata. > My requirement is that I need to make service available and I can't have one point of failure... > Once upon a time I would have suggested something like CODA (which I implemented in a redundant mission critical file system on FreeBSD like 5 years ago). I'm not even sure of the status of CODA on FreeBSD 6 and 7, so I couldn't recommend it now. AFS hasn't worked in a dog's age either, so there is "slim pickin's" when it comes to tried and true. If you do use any of those solutions, I would love to hear how they work out. Especially if you use pNFS, as it's been something I've been looking at for a while to replace our single-point-of-failure NFS here.
-Trish > Regards, --Aleksandar (Sasha) Kacanski > > > > > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From lists at zaunere.com Tue May 19 15:06:30 2009 From: lists at zaunere.com (Hans Zaunere) Date: Tue, 19 May 2009 15:06:30 -0400 Subject: [nycbug-talk] (a bit ot) web server weird stuff In-Reply-To: <4A12FFE5.60103@gmail.com> References: <4A12F4E8.1030408@gmail.com> <4A12FD11.4050405@ceetonetechnology.com> <6AFCB2F6-AA6F-4882-BC93-AA35EA458D48@donnerjack.com> <4A12FFE5.60103@gmail.com> Message-ID: <004f01c9d8b4$eb4bdc20$c1e39460$@com> > > It does look very much like it's being used as an open proxy. I've > > seen similar stuff on Squid boxes prior to having their ACLs locked > > down. > > > > --Dave > > care to expand on that ? > > am stabbing in the dark here.... CONNECT is a proxy verb and it appears as though someone/something is trying to use your server as a proxy. Try to disable anything proxy, assuming of course it's not really supposed to be a proxy, or at least lock things down by IP/etc. H From isaac at diversaform.com Tue May 19 15:30:14 2009 From: isaac at diversaform.com (Isaac Levy) Date: Tue, 19 May 2009 15:30:14 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <4A12CD1E.40703@ceetonetechnology.com> References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> Message-ID: <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> Wordup All, On May 19, 2009, at 11:15 AM, George Rosamond wrote: > Aleksandar Kacanski wrote: >> Hi, I am interested if anyone implemented NFS in HA configuration and >> what are folks experiences with different scenarios. I am interested >> in pNFS, but there is DRBD (harbeat) or just plain cluster of nfs >> servers without sharing metadata. My requirement is that I need to >> make service available and I can't have one point of failure... 
>> >> Regards, --Aleksandar (Sasha) Kacanski > > AFAIK, carp was developed for file mounts. . . I think SMB though. SMB read-only shares, to be persnickety. Keeping data state is always a trick. > > > Worth the pursuit. > > g Indeed.... down the rabbit hole though..., From jbaltz at 3phasecomputing.com Tue May 19 15:42:47 2009 From: jbaltz at 3phasecomputing.com (Jerry B. Altzman) Date: Tue, 19 May 2009 15:42:47 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> Message-ID: <4A130BB7.1000708@3phasecomputing.com> on 5/19/2009 3:30 PM Isaac Levy said the following: > Wordup All, > Keeping data state is always a trick. >> Worth the pursuit. > Indeed.... down the rabbit hole though..., I'd be interested to find *anyone* who's gotten a good roll-your-own HA NFS / HA NAS setup working on BSD. I've seen a lot of DRBD/Heartbeat setups in "semi-production" mode, but that was all on Linux. I know I live in a somewhat cloistered, rarefied world, but ... //jbaltz -- jerry b. altzman jbaltz at 3phasecomputing.com +1 718 763 7405 From dave at donnerjack.com Tue May 19 15:46:11 2009 From: dave at donnerjack.com (David Lawson) Date: Tue, 19 May 2009 15:46:11 -0400 Subject: [nycbug-talk] (a bit ot) web server weird stuff In-Reply-To: <4A12FEFA.8060902@gmail.com> References: <4A12F4E8.1030408@gmail.com> <4A12FD11.4050405@ceetonetechnology.com> <4A12FEFA.8060902@gmail.com> Message-ID: <7B0AEDB8-AAF2-4C2C-B645-B2439BFE79BB@donnerjack.com> On May 19, 2009, at 2:48 PM, Steve Rieger wrote: > George Rosamond wrote: >> >> Just taking some stabs here, or at least stating the obvious. . . >> >> The only URLs i ever remember seeing in access.logs (in apache or >> lighttpd) are connected to bots and spiders for indexing. 
>> >> Is it possible that users can proxy through this box? Is mod_proxy >> enabled in the conf file? >> > > mod_proxy is enabled, > i serve tomcat via lighttpd (both on localhost) I don't know enough about lighttpd or mod_proxy to offer suggestions, but I imagine it has some kind of ACL mechanism to let you tighten things up. In Squid there's a set of allowed destinations that can be defined, so I'd look into whether there's something similar for lighttpd. If you set that up and it's working, all those weird entries will return 403 status codes. --Dave From isaac at diversaform.com Tue May 19 15:46:48 2009 From: isaac at diversaform.com (Isaac Levy) Date: Tue, 19 May 2009 15:46:48 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <7A6DBDC2-404F-4379-BBDB-65BFEEADC94D@exit2shell.com> Message-ID: <7AD47D69-1B64-4A68-8F14-2099B7CAC368@diversaform.com> Steve and Miles both have the key points, On May 19, 2009, at 1:23 PM, Miles Nordin wrote: >>>>>> "sk" == Steven Kreuzer writes: > > sk> This is an extremely oversimplified explanation of how you > sk> could provide HA NFS. > > Except that I don't know of any actual > clustering software built over ggate, and it's not something you roll > yourself with shell scripts. Actually, it kind of is something one can roll with shell scripts- I've done it, (at home, no real/heavy usage, just messing around). Thing is, it sucked: > The volume cannot be mounted on both > nodes at the same time because obviously the filesystem doesn't > support that, This, in the end, is the big problem. I've shell scripted the hell out of this before- it sucked, (and not just my scripts- the act of discovering-> mounting-> cleaning-up is riddled with annoyance). Perhaps Gmirror can be set up to trick the filesystems?
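[The part that is hard to script safely is the ownership rule the thread keeps circling: a node may serve only while it can prove it still owns the storage, and must fence itself the moment it can't. A toy lease-based model of that rule — plain Python, purely illustrative, nothing like real clustering code with heartbeats or SCSI reservations:]

```python
LEASE = 3.0  # seconds an ownership claim stays valid

class Node:
    """Toy model: a node may act as the active NFS server only while it
    holds a fresh lease; once the lease lapses it must assume a peer has
    taken over and fence (crash) itself rather than risk a split brain."""

    def __init__(self, name):
        self.name = name
        self.last_renewal = None   # set each time the heartbeat succeeds
        self.fenced = False

    def renew(self, now):
        self.last_renewal = now

    def may_serve(self, now):
        if self.last_renewal is not None and now - self.last_renewal < LEASE:
            return True
        self.fenced = True         # lost the lease: stop serving, period
        return False

a = Node("primary")
a.renew(100.0)
print(a.may_serve(101.0))  # True  -- lease still fresh
print(a.may_serve(105.0))  # False -- lease lapsed, node fences itself
print(a.fenced)            # True
```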
> so, like other HA stuff, there has to be a heartbeat > network connection or a SCSI reservation scheme or some such magic so > the inactive node knows it's time to take over the storage, > fsck/log-roll it, mount it, export the NFS. It's not like they can > both be ready all the time, and CARP will decide which one gets the > work---not possible. Actually when I asked this very question, Mickey said there were some hooks in Carp somewhere to force master to flip, but I haven't particularly hacked into it... This part definitely seems to be a 'read the source' kind of maneuver. > Also the active node has to notice if, for some > reason, it has lost control by the rules of the heartbeat/reservation > scheme even though it doesn't feel crashed, and in that case it should > crash itself. > > There may also be some app-specific magic in NFS. The feature that > lets clients go through server reboots without losing any data, even > on open files, should make it much easier to clusterify than SMB: on > NFS this case is explicitly supported by, among other things, all the > write caches in the server filesystem and disks are kept in duplicate > in the clients so they can be re-rolled if the server crashes. But > there may be some tricky corner cases the clustering software needs to > handle. For example, on Solaris if using ZFS, you can ``disable the > ZIL'' to improve NFS performance in the case where you're opening, > writing, closing files frequently, but the cost of disabling is that > you lose this stateless-server-reboot feature. I was under the impression that disabling the ZIL was a developer debugging thing- it's dangerous, period.
But they need to > be pretty special. The old clusters used a SCSI chain with two host > adapters, one at each end of the bus, so there's no external > terminator (just the integrated terminator in the host adapters). > These days probably you will need a SAS chassis with connections for > two initiators. unless the ggate thing works, but there's a need to > flush write buffers deterministically when told to for the NFS corner > case, and some clusters use this SCSI-2 reservation command, > so...shared storage is not so much this abstract modular good-enough > blob. Data state. This is the big deal... I really feel that some Ggate-ish thing could be written for the Geom subsystem which allowed for multiple writes? Or something which did writes according to some transactional model- (locking files, etc...) Hrm... Rocket- .ike From isaac at diversaform.com Tue May 19 15:48:59 2009 From: isaac at diversaform.com (Isaac Levy) Date: Tue, 19 May 2009 15:48:59 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <4A130BB7.1000708@3phasecomputing.com> References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> Message-ID: On May 19, 2009, at 3:42 PM, Jerry B. Altzman wrote: > on 5/19/2009 3:30 PM Isaac Levy said the following: >> Wordup All, >> Keeping data state is always a trick. >>> Worth the pursuit. >> Indeed.... down the rabbit hole though..., > > I'd be interested to find *anyone* who's gotten a good roll-your-own > HA NFS / HA NAS setup working on BSD. I've seen a lot of DRBD/ > Heartbeat setups in "semi-production" mode, but that was all on > Linux. I know I live in a somewhat cloistered, rarefied world, but ... Per my previous post- I have got it working, but can't say it was any better than what you describe above. Super crappy hacking, and mine used ssh to do heartbeats. 
So the technical answer is yes- but the true answer is, no- and I'm interested in finding it as well... Best, .ike From pete at nomadlogic.org Tue May 19 16:46:32 2009 From: pete at nomadlogic.org (Pete Wright) Date: Tue, 19 May 2009 13:46:32 -0700 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <4A130BB7.1000708@3phasecomputing.com> References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> Message-ID: <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> On 19-May-09, at 12:42 PM, Jerry B. Altzman wrote: > on 5/19/2009 3:30 PM Isaac Levy said the following: >> Wordup All, >> Keeping data state is always a trick. >>> Worth the pursuit. >> Indeed.... down the rabbit hole though..., > > I'd be interested to find *anyone* who's gotten a good roll-your-own > HA > NFS / HA NAS setup working on BSD. I've seen a lot of DRBD/Heartbeat > setups in "semi-production" mode, but that was all on Linux. I know I > live in a somewhat cloistered, rarefied world, but ... the trick with a true HA setup for NFS is keeping your file handles in sync b/w N nodes in your cluster. this is a pretty serious issue - sure you can replicate your data b/w N nodes using tricks like DRBD, which frankly carries a pretty hefty performance penalty, or using a global filesystem behind your NFS servers (hadoop, gpfs, etc). I wouldn't consider these solutions to be highly available - highly replicated maybe, but I'm not sure they increase your "availability" in failure scenarios. this is why NetApp, EMC, Isilon and Sun get to charge you lots of money. It's a hard problem to do correctly and reliably.
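[Pete's file-handle point is worth unpacking: NFS clients cache opaque handles, and after a failover the new server must honor the exact handles the old one issued or every client sees ESTALE. A toy illustration — the field layout here is hypothetical, real handles are server-specific:]

```python
from collections import namedtuple

# Simplified NFS file handle: opaque to clients, but on the server side
# it typically encodes a filesystem id plus inode and generation numbers
Handle = namedtuple("Handle", "fsid inode generation")

class Server:
    """Toy NFS server: hands out handles and honors them only if the
    embedded fsid matches its own, mimicking ESTALE on mismatch."""

    def __init__(self, fsid):
        self.fsid = fsid

    def lookup(self, inode, generation=1):
        return Handle(self.fsid, inode, generation)

    def read(self, handle):
        if handle.fsid != self.fsid:
            return "ESTALE"   # handle minted by a different "filesystem"
        return "ok"

primary = Server(fsid=0x1234)
h = primary.lookup(inode=42)           # client caches this handle

failover_bad = Server(fsid=0x9999)     # replica that didn't preserve fsid
failover_good = Server(fsid=0x1234)    # replica kept handles identical

print(primary.read(h))        # ok
print(failover_bad.read(h))   # ESTALE -- clients would have to remount
print(failover_good.read(h))  # ok     -- failover invisible to clients
```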
Although when it works it is nice to see that you can be reading from a file, failover your NFS server, and not be disrupted :) FWIW - the new Sun 7000 "open storage" platform has the ability to be configured in an HA setup (check out the 7410 which i've just finished installing actually). This platform is built on top of OpenSolaris and ZFS. Perhaps there is some doc around opensolaris.org that describes how they handle failover. i know it's not a BSD solution - but perhaps there are some bits they use that can be ported over to BSD. oh yea...both NetApp and Isilon are built on FreeBSD too btw! So I guess I have seen this setup on BSD before :) -pete From carton at Ivy.NET Tue May 19 16:49:24 2009 From: carton at Ivy.NET (Miles Nordin) Date: Tue, 19 May 2009 16:49:24 -0400 Subject: [nycbug-talk] Projects In-Reply-To: <002a01c9d8b3$22bd59b0$68380d10$@com> (Christopher Olsen's message of "Tue, 19 May 2009 14:53:44 -0400") References: <4A0989C7.8000006@ubixos.com> <7765c0380905120806j6c2f0e1n30d266d856582afa@mail.gmail.com> <002a01c9d8b3$22bd59b0$68380d10$@com> Message-ID: >>>>> "co" == Christopher Olsen writes: co> Nice any here toy with FreeNAS? I used it to make windows boxes into iSCSI targets so I could re-image them or virus-scan them. Now I use gentoo systemrescuecd instead. It includes IET and works a lot better. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From pete at nomadlogic.org Tue May 19 16:53:00 2009 From: pete at nomadlogic.org (Pete Wright) Date: Tue, 19 May 2009 13:53:00 -0700 Subject: [nycbug-talk] OpenBSD large filesystem experiences? 
In-Reply-To: <1479372543-1242445793-cardhu_decombobulator_blackberry.rim.net-395363067-@bxe1122.bisx.prod.on.blackberry> References: <7708fd680905152008x4b03fb09t71c6ff1a69b5a053@mail.gmail.com> <1479372543-1242445793-cardhu_decombobulator_blackberry.rim.net-395363067-@bxe1122.bisx.prod.on.blackberry> Message-ID: <93B1A7BE-33DA-478A-B768-8630C59E8550@nomadlogic.org> On 15-May-09, at 8:49 PM, mark.saad at ymail.com wrote: > I hate to say it but zfs is your only choice in the 1TB size, and > its freebsd or solaris on whole disks not partitions is the only > option. ermm...UFS on FreeBSD supports file systems larger than 1TB. Although background fsck's will take a fiscal month to complete... thus said the manual: The UFS2 filesystem was introduced in 2003 as a replacement to the original UFS and provides 64 bit counters and offsets. This allows for files and filesystems to grow to 2^73 bytes (2^64 * 512) in size and hopefully be sufficient for quite a long time. UFS2 largely solved the storage size limits imposed by the filesystem. Unfortunately, many tools and storage mechanisms still use or assume 32 bit values, often keeping FreeBSD limited to 2TB. I'm not sure if obsd has merged the changes kirk et al. made to the new UFS or not, so ymmv. -pete > Sent from my Verizon Wireless BlackBerry > > -----Original Message----- > From: Brad Schonhorst > > Date: Fri, 15 May 2009 23:08:13 > To: > Subject: [nycbug-talk] OpenBSD large filesystem experiences? > > > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kacanski_s at yahoo.com Tue May 19 17:28:55 2009 From: kacanski_s at yahoo.com (Aleksandar Kacanski) Date: Tue, 19 May 2009 14:28:55 -0700 (PDT) Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <7A6DBDC2-404F-4379-BBDB-65BFEEADC94D@exit2shell.com> Message-ID: <892931.54760.qm@web53602.mail.re2.yahoo.com> >>>>> "sk" == Steven Kreuzer writes: sk> This is an extremely oversimplified explanation of how you sk> could provide HA NFS. yeah, though this is coming from someone who's never done it, that sounds like a good summary. Except that I don't know of any actual clustering software built over ggate, and it's not something you roll yourself with shell scripts. The volume cannot be mounted on both nodes at the same time because obviously the filesystem doesn't support that, so, like other HA stuff, there has to be a heartbeat network connection or a SCSI reservation scheme or some such magic so the inactive node knows it's time to take over the storage, fsck/log-roll it, mount it, export the NFS. It's not like they can both be ready all the time, and CARP will decide which one gets the work---not possible. Also the active node has to notice if, for some reason, it has lost control by the rules of the heartbeat/reservation scheme even though it doesn't feel crashed, and in that case it should crash itself. There may also be some app-specific magic in NFS. The feature that lets clients go through server reboots without losing any data, even on open files, should make it much easier to clusterify than SMB: on NFS this case is explicitly supported by, among other things, keeping duplicates in the clients of all the writes cached in the server filesystem and disks, so they can be re-rolled if the server crashes. But there may be some tricky corner cases the clustering software needs to handle. 
For example, on Solaris if using ZFS, you can ``disable the ZIL'' to improve NFS performance in the case where you're opening, writing, closing files frequently, but the cost of disabling is that you lose this stateless-server-reboot feature. sk> suggest you look at Isilon, NetApp and Sun, The solaris clustering stuff may actually be $0. I'm not sure though, never run it. The clustering stuff is not the same thing as the pNFS stuff. +1 on Steven's point that you can do this with regular NFS on the clients---only the servers need to be special. But they need to be pretty special. The old clusters used a SCSI chain with two host adapters, one at each end of the bus, so there's no external terminator (just the integrated terminator in the host adapters). These days probably you will need a SAS chassis with connections for two initiators. unless the ggate thing works, but there's a need to flush write buffers deterministically when told to for the NFS corner case, and some clusters use this SCSI-2 reservation command, so...shared storage is not so much this abstract modular good-enough blob. Miles, pNFS provides protocol support to take advantage of clustered server deployments, including the ability to provide scalable parallel access to files distributed among multiple servers, thus removing a single point of failure. Approach is different but metadata is clustered. From CITI - UM: ...enables direct client access to heterogeneous parallel file systems. Linux pNFS features a pluggable client architecture that harnesses the potential of pNFS as a universal and scalable metadata protocol by enabling dynamic support for layout format, storage protocol, and file system policies. Experiments with the Linux pNFS architecture demonstrate that using the page cache inflicts an I/O performance penalty and that I/O performance is highly subject to I/O transfer size. 
In addition, Linux pNFS can use bi-directional parallel I/O to raise data transfer throughput between parallel file systems. Either way I will not be able to experiment with NFSv4.1 and pNFS on the mission critical systems, but DBDA on linux seems the right thing at this time. From isaac at diversaform.com Tue May 19 17:49:49 2009 From: isaac at diversaform.com (Isaac Levy) Date: Tue, 19 May 2009 17:49:49 -0400 Subject: [nycbug-talk] OpenBSD large filesystem experiences? In-Reply-To: <93B1A7BE-33DA-478A-B768-8630C59E8550@nomadlogic.org> References: <7708fd680905152008x4b03fb09t71c6ff1a69b5a053@mail.gmail.com> <1479372543-1242445793-cardhu_decombobulator_blackberry.rim.net-395363067-@bxe1122.bisx.prod.on.blackberry> <93B1A7BE-33DA-478A-B768-8630C59E8550@nomadlogic.org> Message-ID: <325970F7-C345-4EFB-ADF8-DCD54C0384C3@diversaform.com> Word, On May 19, 2009, at 4:53 PM, Pete Wright wrote: >> I hate to say it but zfs is your only choice in the 1TB size , and >> its freebsd or solaris on whole disks not partitions is the only >> option . > > ermm...UFS on FreeBSD supports file systems larger than 1TB. > Although background fsck's will take a fiscal month to complete... I believe Mark must have meant 'sanely' supports. :P > > thus said the manual: > The UFS2 filesystem was introduced in 2003 as a replacement to the > original UFS and provides 64 bit counters and offsets. This allows > for files and filesystems to grow to 2^73 bytes (2^64 * 512) in size > and hopefully be sufficient for quite a long time. UFS2 largely > solved the storage size limits imposed by the filesystem. > Unfortunately, many tools and storage mechanisms still use or assume > 32 bit values, often keeping FreeBSD limited to 2TB. > > > I'm not sure if obsd has merged the changes Kirk et al. made to the > new UFS or not so ymmv. Yes- UFS2 since version 4.2, I believe. 
/salute Best, .ike From jim at zaah.com Tue May 19 17:58:00 2009 From: jim at zaah.com (Jim Cassata) Date: Tue, 19 May 2009 17:58:00 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <595270.63168.qm@web53602.mail.re2.yahoo.com> References: <595270.63168.qm@web53602.mail.re2.yahoo.com> Message-ID: We use DRBD and heartbeat with much success. We use it for MySQL database pairs as well. Thanks Jim Cassata Infrastructure Manager Zaah Technologies, Inc. Desk 631.873.2046 Cell 516.319.4267 On Tue, May 19, 2009 at 10:54 AM, Aleksandar Kacanski wrote: > > Hi, > I am interested if anyone implemented NFS in HA configuration and what are folks experiences with different scenarios. > I am interested in pNFS, but there is DRBD (harbeat) or just plain cluster of nfs servers without sharing metadata. > My requirement is that I need to make service available and I can't have one point of failure... > > Regards, --Aleksandar (Sasha) Kacanski > > > > > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From cwolsen at ubixos.com Tue May 19 18:06:21 2009 From: cwolsen at ubixos.com (Christopher Olsen) Date: Tue, 19 May 2009 18:06:21 -0400 Subject: [nycbug-talk] Audit Solution Message-ID: <200905192206.n4JM670N027070@fulton.nycbug.org> I'm sure some of you here manage multiple servers as I do. Now I get all the daily/weekly/monthly run and security audits via email however I would like to know if there is a better way to do this as a 100+ servers shooting off 2+ emails daily becomes a lot of quick reading... Is there anything that can get all of these emails and build some more effective reports? -Christopher Ubix Technologies T: 212-514-6270 C: 516-903-2889 32 Broadway Suite 204 New York, NY 10004 http://www.tuve.tv/mrolsen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matt at atopia.net Tue May 19 18:08:38 2009 From: matt at atopia.net (Matt Juszczak) Date: Tue, 19 May 2009 18:08:38 -0400 (EDT) Subject: [nycbug-talk] Audit Solution In-Reply-To: <200905192206.n4JM670N027070@fulton.nycbug.org> References: <200905192206.n4JM670N027070@fulton.nycbug.org> Message-ID: It's funny. My boss asked me JUST the other day to find a solution to this. Will be interested to find out what others think. On Tue, 19 May 2009, Christopher Olsen wrote: > I'm sure some of you here manage multiple servers as I do. Now I get all the daily/weekly/monthly run and security audits > via email however I would like to know if there is a better way to do this as a 100+ servers shooting off 2+ emails daily > becomes a lot of quick reading.. Is there anything that can get all of these emails and build some more effective > reports? > > -Christopher > > Ubix Technologies > T: 212-514-6270 > C: 516-903-2889 > 32 Broadway Suite 204 > New York, NY 10004 > http://www.tuve.tv/mrolsen > From bschonhorst at gmail.com Tue May 19 18:12:35 2009 From: bschonhorst at gmail.com (Brad Schonhorst) Date: Tue, 19 May 2009 18:12:35 -0400 Subject: [nycbug-talk] OpenBSD large filesystem experiences? In-Reply-To: <1479372543-1242445793-cardhu_decombobulator_blackberry.rim.net-395363067-@bxe1122.bisx.prod.on.blackberry> References: <7708fd680905152008x4b03fb09t71c6ff1a69b5a053@mail.gmail.com> <1479372543-1242445793-cardhu_decombobulator_blackberry.rim.net-395363067-@bxe1122.bisx.prod.on.blackberry> Message-ID: <7708fd680905191512h6ccce39ak9661cbf88b8fd74b@mail.gmail.com> On Fri, May 15, 2009 at 11:49 PM, wrote: > I hate to say it but zfs is your only choice in the 1TB size , and its > freebsd or solaris on whole disks not partitions is the only option . > Was hoping for an OpenBSD option but zfs in Free may be a good alternate. How stable is the FreeBSD port of zfs? Thought I read somewhere it was still considered "beta." 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From cwolsen at ubixos.com Tue May 19 18:13:55 2009 From: cwolsen at ubixos.com (Christopher Olsen) Date: Tue, 19 May 2009 18:13:55 -0400 Subject: [nycbug-talk] Audit Solution Message-ID: <200905192213.n4JMDi4t006586@fulton.nycbug.org> How many servers are you managing? One of my techs mentioned something I forget the name but it merely parsed for key words I was looking for something a bit more robust. -Christopher Ubix Technologies T: 212-514-6270 C: 516-903-2889 32 Broadway Suite 204 New York, NY 10004 http://www.tuve.tv/mrolsen -----Original Message----- From: Matt Juszczak Sent: Tuesday, May 19, 2009 6:08 PM To: Christopher Olsen Cc: talk at lists.nycbug.org Subject: Re: [nycbug-talk] Audit Solution It's funny. My boss asked me JUST the other day to find a solution to this. Will be interested to find out what others think. On Tue, 19 May 2009, Christopher Olsen wrote: > I'm sure some of you here manage multiple servers as I do. Now I get all the daily/weekly/monthly run and security audits > via email however I would like to know if there is a better way to do this as a 100+ servers shooting off 2+ emails daily > becomes a lot of quick reading.. Is there anything that can get all of these emails and build some more effective > reports? > > -Christopher > > Ubix Technologies > T: 212-514-6270 > C: 516-903-2889 > 32 Broadway Suite 204 > New York, NY 10004 > http://www.tuve.tv/mrolsen > From bcully at gmail.com Tue May 19 18:18:27 2009 From: bcully at gmail.com (Brian Cully) Date: Tue, 19 May 2009 18:18:27 -0400 Subject: [nycbug-talk] Audit Solution In-Reply-To: <200905192206.n4JM670N027070@fulton.nycbug.org> References: <200905192206.n4JM670N027070@fulton.nycbug.org> Message-ID: <7C098009-17D1-4D86-98B2-D6D24C7CB68F@gmail.com> My first step with this kind of problem is to first stop sending mail unless there's something I need to look at. Strangely enough, I never get to step two. 
-bjc On May 19, 2009, at 18:06, Christopher Olsen wrote: > I'm sure some of you here manage multiple servers as I do. Now I get > all the daily/weekly/monthly run and security audits via email > however I would like to know if there is a better way to do this as > a 100+ servers shooting off 2+ emails daily becomes a lot of quick > reading.. Is there anything that can get all of these emails and > build some more effective reports? > > -Christopher > > Ubix Technologies > T: 212-514-6270 > C: 516-903-2889 > 32 Broadway Suite 204 > New York, NY 10004 > http://www.tuve.tv/mrolsen > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at atopia.net Tue May 19 18:20:00 2009 From: matt at atopia.net (Matt Juszczak) Date: Tue, 19 May 2009 18:20:00 -0400 (EDT) Subject: [nycbug-talk] Audit Solution In-Reply-To: <20090519221344.4CBC422946@pluto.atopia.net> References: <20090519221344.4CBC422946@pluto.atopia.net> Message-ID: > How many servers are you managing? One of my techs mentioned something I > forget the name but it merely parsed for key words I was looking for > something a bit more robust. About 60. Are you talking about swatch? From carton at Ivy.NET Tue May 19 18:20:07 2009 From: carton at Ivy.NET (Miles Nordin) Date: Tue, 19 May 2009 18:20:07 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <7AD47D69-1B64-4A68-8F14-2099B7CAC368@diversaform.com> (Isaac Levy's message of "Tue, 19 May 2009 15:46:48 -0400") References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <7A6DBDC2-404F-4379-BBDB-65BFEEADC94D@exit2shell.com> <7AD47D69-1B64-4A68-8F14-2099B7CAC368@diversaform.com> Message-ID: >>>>> "il" == Isaac Levy writes: il> I was under the impression that disabling the ZIL was a il> developer debugging thing- it's dangerous, period. 
no, 100% incorrect. Disabling the ZIL does not increase likelihood of losing the pool at all. It does break the NFSv3 transaction/commit system if the server reboots (meaning: you lose recently written data, the client ``acts weird'' until you umount and remount the NFS shares involved), and also it breaks fsync() so it's not safe for filesystems-on-filesystems (databases, VM guest backing stores). however I think the dirty truth is that most VM's suppress sync's to win performance and are unsafe to guest filesystems if the host reboots, with or without a ZIL. databases and mail obviously it matters. The way they explained it on ZFS list, is that the ZIL is always present in RAM, even when disabled, and is part of how the POSIX abstraction in ZFS is implemented (which is layered on top of the object store, a sibling of zvol's). Normally the ZIL is committed to the regular part of the disk, the bulk part, when each TXG commits every 30 seconds or so. When you call fsync(), or when NFSv3 commits or closes a file, the relevant part of the ZIL in RAM is flushed to disk. It's flushed to a separate special disk area that acts as a log, so it is write-only unless there is a crash. Eventually when the next TXG commits, the prior ZIL flush is superseded, and the blocks both in RAM and on disk are free for reuse. Often that special area is incorrectly called ``the ZIL'', and writing to it is what you disable. so disabling it doesn't endanger data written more than 30 seconds before the crash. but it does break fsync() so you shouldn't do it just for fun. also i think it's a global setting not per filesystem, which kind of blows chunks. il> I really feel that some Ggate-ish thing could be written for il> the Geom subsystem which allowed for multiple writes? Or il> something which did writes according to some transactional il> model- (locking files, etc...) There are two kinds of multiple writers. first kind is SCSI layer. 
The better Linux iSCSI targets (like SCST, which I haven't used yet) support multiple initiators. This is a SCSI term. When you activate an SCST target, a blob of SCST springs to life in the kernel intercepting ALL scsi commands headed toward the disk. Applications and filesystems running on the same box as SCST represent one initiator and get routed through this blob. The actual iSCSI initiators out on the network become the second and further initiators. so, even if you have only one iSCSI initiator hitting your target, SCST multiple-initiator features are active. There are many multi-initiator features in the SCSI standard. I don't completely understand any of them. One is the reservation protocol, which can be used as a sort of heartbeat. However since a smart and physically-sealed device is managing the heartbeat rather than a mess of cabling and switches, split-brain is probably less likely when all nodes are checking in with one of their disks rather than pinging each other over a network. The disk then becomes single point of failure so then you need a quorum of 3 disks. I think maybe the reservation protocol can also block access to the disk, but I'm not sure. That is not its most important feature. Sun's cluster stuff has bits in the host driver stack to block access to a disk when the cluster node isn't active and isn't supposed to have access, and I suspect it can work on slice/partition level. A second kind of multi-initiator feature is to clean up all the standards-baggage of extra seldom-used SCSI features. For example if one node turns on the write cache and sets a read-ahead threshold, the change will affect all nodes, but the other nodes won't know about it. SCSI has some feature to broadcast changes to mode pages. A third multi-initiator feature is to actually support reads and writes from multiple hosts. 
SCST claims they re-implement TCQ in their multi-initiator blob, to support SATA disks which have either no exposed queues or queue commandsets which don't work with multiple initiators. but SEPARATE FROM ALL THIS MULTI-INITIATOR STUFF, is the second kind of multiple writer. Two hosts being able to write to the disk won't help you with any traditional filesystem, including ZFS. You can't mount a filesystem from two hosts over the same block device. This is certain---it won't work without completely rearchitecting the filesystem. Filesystems aren't designed to accept input from underneath them. It's possible to envision such a filesystem. like the Berkeley/Sleepycat/Oracle BDB library can have multiple processes open the same database. but they cheat! They have shared memory regions so the multiple processes communicate with each other directly, and a huge chunk of their complexity is to support this feature (since it's often why programmers turn to the library in the first place). And it's been done with filesystems, too. RedHat GFS, Oracle OCFS, Sun QFS, all work this way, and all of them also cheat: you need a ``metadata'' node which has direct and exclusive access to a little bit of local disk (the metadata node might be a single-mounter active/passive HA cluster like we're talking about before). The point of these filesystems isn't so much availability as switching density. It's not that you want an active/active cluster so you can feel better---it's that there's so much filesystem traffic it can't be funneled through a single CPU. By having clients open big files, yes granted they are all funneled to the single metadata server still, but for all the other bulk access to the meat inside the files they can go straight to the disks. It's not so much for clustered servers serving non-cluster clients as for when EVERYTHING is part of the cluster. 
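Miles' point that two hosts cannot mount a traditional filesystem over the same block device comes down to caching: each host trusts its own in-memory copy of on-disk state and writes whole blocks back. A toy Python sketch of the resulting lost update (purely illustrative -- the Host class, block layout, and file names are invented, not any real filesystem):

```python
# Toy illustration of why two hosts can't safely mount the same
# block device without coordination: each "host" keeps a private
# block cache, as a filesystem would, and writes whole blocks back.
# The second writeback silently clobbers the first (lost update).

disk = {0: {"alice.txt": 1, "bob.txt": 1}}  # one shared "block"

class Host:
    def __init__(self, name):
        self.name = name
        self.cache = {}

    def read(self, blk):
        # fill the local cache from "disk" once, then trust it forever
        if blk not in self.cache:
            self.cache[blk] = dict(disk[blk])
        return self.cache[blk]

    def update(self, blk, fname, val):
        self.read(blk)[fname] = val

    def flush(self, blk):
        # write back the whole cached block, stale entries and all
        disk[blk] = dict(self.cache[blk])

a, b = Host("a"), Host("b")
a.update(0, "alice.txt", 2)   # host a bumps one file's entry
b.update(0, "bob.txt", 2)     # host b bumps another, unaware of a
a.flush(0)
b.flush(0)                    # b's stale cache overwrites a's write

print(disk[0])  # {'alice.txt': 1, 'bob.txt': 2} -- a's update is lost
```

This is exactly the hole that GFS/OCFS/QFS-style filesystems plug by funneling metadata through a coordinating node (or, for BDB, through shared memory) instead of letting each mounter trust its own cache.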
The incentive to design filesystems this way, having clients use the SCSI multi-initiator features directly, is the possibility of extremely high-bandwidth high-density high-price FC-SW storage from EMC/Hitachi/NetApp. Keeping both nodes in an HA cluster active is only slightly helpful, because you still need to handle your work on 1 node, and because 2 is only a little bigger than 1. But if you can move more of your work into the cluster, so only the interconnect is shared and not the kernel image, it's possible to grow much further, with the interconnect joining: [black box] storage system 2 metadata nodes n work nodes The current generation of split metadata/data systems (Google FS, pNFS, GlusterFS, Lustre) uses filesystems rather than SCSI devices as the data backing store, so now the interconnect joins: m storage nodes 2 metadata nodes n work nodes and you do not use SCSI-2 style multi-initiator at all, except maybe on the 2 metadata nodes. All (not sure for GoogleFS but all others) have separate metadata servers like GFS/OCFS/QFS. The difference is that the data part is also a PeeCee with another filesystem like ext4 or ZFS between the disks and the clients, instead of disks directly. I think this approach has got the future nailed down, especially if the industry manages to deliver lossless fabric between PeeCees like infiniband or CEE, and especially because everyone seems to say GFS and OCFS don't work. I think QFS does work though. It's very old. And Sun has been mumbling about ``emancipating'' it. http://www.auc.edu.au/myfiles/uploads/Conference/Presentations%202007/Duncan_Ian.pdf http://www.afp548.com/filemgmt/visit.php?lid=64&ei=nyMTSsuQKJiG8gTmq8iBBA http://wikis.sun.com/display/SAMQFS/Home il> something which did writes according to some transactional il> model- (locking files, etc...) I've heard of many ways to take ``crash consistent'' backups at block layer. 
Without unmounting the filesystem, you can back it up while the active node is still using it, with no cooperation from the filesystem. There are also storage layers that use this philosophy to make live backups, will watch the filesystem do its work, and replicate this asynchronously over a slow connection offsite without making the local filesystem wait. (thus, better than gmirror by far) They are supposed to work fine if you accumulate hours of backlog during the day, then catch up overnight. * Linux LVM2 multiple levels of snapshot. not sure they can be writeable though. + drbd.org - replicate volumes to a remote site. not sure how integrated it is with LVM2 though, maybe not at all. * ZFS zvol's multiple levels, can be writeable + zfs send/recv for replication. good for replication, bad for stored backups! * vendor storage (EMC Hitachi NetApp) they can all do it, not sure all the quirks some ship kits you can install in windows crap to ``quiesce'' the filesystems or SQL Server stores. it is supposed to be crash-consistent on its own, always, in case you actually did crash, but i guess NTFS and SQL Server are goofy and don't meet this promise, so there is a whole mess of fud and confusing terms and modules to buy. * Sun/Storagetek AVS ii (instant image) this isn't a proper snapshot because you only get 2 layers, only ``current'' and ``snap''. There is a ``bitmap'' volume to mark which blocks are dirty. no trees of snapshots. ii is a key part of AVS block-layer replication. only with ii is it possible to safely fail back from the secondary to the primary. + AVS (availability suite, a.k.a. sun cluster geographic edition, maybe other names) is like drbd AVS is old and mature and can do things like put multiple volumes into consistency groups that share a single timeline. 
If you have a database that uses multiple volumes at once, or if you are mirroring underneath a RAID/raidz layer, then you need to put all the related volumes into a consistency group or else there's no longer such a thing as crash-consistency. If you think about it this makes sense. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From cwolsen at ubixos.com Tue May 19 18:24:22 2009 From: cwolsen at ubixos.com (Christopher Olsen) Date: Tue, 19 May 2009 18:24:22 -0400 Subject: [nycbug-talk] OpenBSD large filesystem experiences? Message-ID: <200905192224.n4JMOBJN015393@fulton.nycbug.org> I know they call it experimental, however I have a box running zfs on raid 5 with 4 1tb drives (approx 3tb avail) for some time now, running smooth. I did this after a few recommendations. -Christopher Ubix Technologies T: 212-514-6270 C: 516-903-2889 32 Broadway Suite 204 New York, NY 10004 http://www.tuve.tv/mrolsen -----Original Message----- From: Brad Schonhorst Sent: Tuesday, May 19, 2009 6:12 PM To: mark.saad at ymail.com Cc: talk at lists.nycbug.org Subject: Re: [nycbug-talk] OpenBSD large filesystem experiences? On Fri, May 15, 2009 at 11:49 PM, wrote: I hate to say it but zfs is your only choice in the 1TB size , and its freebsd or solaris on whole disks not partitions is the only option . Was hoping for an OpenBSD option but zfs in Free may be a good alternate. How stable is the FreeBSD port of zfs? Thought I read somewhere it was still considered "beta." -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carton at Ivy.NET Tue May 19 18:28:28 2009 From: carton at Ivy.NET (Miles Nordin) Date: Tue, 19 May 2009 18:28:28 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> (Pete Wright's message of "Tue, 19 May 2009 13:46:32 -0700") References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> Message-ID: >>>>> "pw" == Pete Wright writes: pw> using a global filesystem behind your NFS servers (hadoop hadoop is a java library, not a mountable POSIX filesystem. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From pete at nomadlogic.org Tue May 19 18:43:34 2009 From: pete at nomadlogic.org (Pete Wright) Date: Tue, 19 May 2009 15:43:34 -0700 Subject: [nycbug-talk] OpenBSD large filesystem experiences? In-Reply-To: <7708fd680905191512h6ccce39ak9661cbf88b8fd74b@mail.gmail.com> References: <7708fd680905152008x4b03fb09t71c6ff1a69b5a053@mail.gmail.com> <1479372543-1242445793-cardhu_decombobulator_blackberry.rim.net-395363067-@bxe1122.bisx.prod.on.blackberry> <7708fd680905191512h6ccce39ak9661cbf88b8fd74b@mail.gmail.com> Message-ID: <7963F3ED-13FC-4968-8901-89A2584852BB@nomadlogic.org> On 19-May-09, at 3:12 PM, Brad Schonhorst wrote: > > > On Fri, May 15, 2009 at 11:49 PM, wrote: > I hate to say it but zfs is your only choice in the 1TB size , and > its freebsd or solaris on whole disks not partitions is the only > option . > > Was hoping for an OpenBSD option but zfs in Free may be a good > alternate. > > How stable is the FreeBSD port of zfs? Thought I read somewhere it > was still considered "beta." 
> how good are your backups :) I've had it working as advertised on several R&D systems, seems to behave well under load, etc. There is a bit of hand tuning you'll want to do but I've had a good experience with it so far and there may be some corner cases you may run into as well. So i'd hold off using it for mission critical data where any downtime is going to get you fired. -pete -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at ceetonetechnology.com Tue May 19 18:46:41 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Tue, 19 May 2009 18:46:41 -0400 Subject: [nycbug-talk] Audit Solution In-Reply-To: References: <20090519221344.4CBC422946@pluto.atopia.net> Message-ID: <4A1336D1.90004@ceetonetechnology.com> Matt Juszczak wrote: >> How many servers are you managing? One of my techs mentioned something I >> forget the name but it merely parsed for key words I was looking for >> something a bit more robust. > > About 60. > > Are you talking about swatch? or logwatch. . . Personally, I get a bunch of dailies, etc., not to mention cron job outputs that I want to see, like statuses of RAIDs, outputs of portaudit, etc. I read everything in the am, and quickly scan for glaring problems. Which is why I don't run sshd on 22. . . since if there's no firewall, you get the zombie attempts filling up the email and miss what you need to know. But that's another discussion :) We've had this discussion before offlist, and if someone has the golden answer, well, let us know. g From cwolsen at ubixos.com Tue May 19 18:51:45 2009 From: cwolsen at ubixos.com (Christopher Olsen) Date: Tue, 19 May 2009 18:51:45 -0400 Subject: [nycbug-talk] Audit Solution Message-ID: <200905192251.n4JMpWjQ023592@fulton.nycbug.org> Its funny you mention the zombie attempts my logs get cluttered with failed attempts nothing I worry about I considered moving the port but assumed they would eventually find it. 
How's the different port working for you? -Christopher Ubix Technologies T: 212-514-6270 C: 516-903-2889 32 Broadway Suite 204 New York, NY 10004 http://www.tuve.tv/mrolsen -----Original Message----- From: George Rosamond Sent: Tuesday, May 19, 2009 6:46 PM To: Matt Juszczak Cc: Christopher Olsen ; talk at lists.nycbug.org Subject: Re: [nycbug-talk] Audit Solution Matt Juszczak wrote: >> How many servers are you managing? One of my techs mentioned something I >> forget the name but it merely parsed for key words I was looking for >> something a bit more robust. > > About 60. > > Are you talking about swatch? or logwatch. . . Personally, I get a bunch of dailies, etc., not to mention cron job outputs that I want to see, like statuses of RAIDs, outputs of portaudit, etc. I read everything in the am, and quickly scan for glaring problems. Which is why I don't run sshd on 22. . . since if there's no firewall, you get the zombie attempts filling up the email and miss what you need to know. But that's another discussion :) We've had this discussion before offlist, and if someone has the golden answer, well, let us know. g From pete at nomadlogic.org Tue May 19 18:53:51 2009 From: pete at nomadlogic.org (Pete Wright) Date: Tue, 19 May 2009 15:53:51 -0700 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> Message-ID: <1846C1F2-3044-43E6-82C3-BE22522B7186@nomadlogic.org> On 19-May-09, at 3:28 PM, Miles Nordin wrote: >>>>>> "pw" == Pete Wright writes: > > pw> using a global filesystem behind your NFS servers (hadoop > > hadoop is a java library, not a mountable POSIX filesystem. oh that sucks. i guess if you've gone through the trouble of setting up hadoop you don't need nfs... 
-p From cwolsen at ubixos.com Tue May 19 18:55:25 2009 From: cwolsen at ubixos.com (Christopher Olsen) Date: Tue, 19 May 2009 18:55:25 -0400 Subject: [nycbug-talk] Audit Solution Message-ID: <200905192255.n4JMtCbh024708@fulton.nycbug.org> No, not swatch, it was called tailsend I think. -Christopher Ubix Technologies T: 212-514-6270 C: 516-903-2889 32 Broadway Suite 204 New York, NY 10004 http://www.tuve.tv/mrolsen -----Original Message----- From: Matt Juszczak Sent: Tuesday, May 19, 2009 6:20 PM To: Christopher Olsen Cc: talk at lists.nycbug.org Subject: RE: [nycbug-talk] Audit Solution > How many servers are you managing? One of my techs mentioned something I > forget the name but it merely parsed for key words I was looking for > something a bit more robust. About 60. Are you talking about swatch? From lists at zaunere.com Tue May 19 19:31:01 2009 From: lists at zaunere.com (Hans Zaunere) Date: Tue, 19 May 2009 19:31:01 -0400 Subject: [nycbug-talk] Audit Solution In-Reply-To: <4A1336D1.90004@ceetonetechnology.com> References: <20090519221344.4CBC422946@pluto.atopia.net> <4A1336D1.90004@ceetonetechnology.com> Message-ID: <011801c9d8d9$ded42e00$9c7c8a00$@com> > Personally, I get a bunch of dailies, etc., not to mention cron job > outputs that I want to see, like statuses of RAIDs, outputs of > portaudit, etc. > > I read everything in the am, and quickly scan for glaring problems. > > Which is why I don't run sshd on 22. . . since if there's no firewall, > you get the zombie attempts filling up the email and miss what you need > to know. But that's another discussion :) > > We've had this discussion before offlist, and if someone has the golden > answer, well, let us know. It'd be pretty easy to set up a system that receives these emails (and optionally forwards to other mailboxes), parses them, and then provides a nice digest as a web site report or email. Seems as though something like this exists - or perhaps this was the discussion we had offlist? 
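The receive-parse-digest idea above can be sketched in a few lines with Python's stdlib mailbox module. This is a hypothetical sketch, assuming the nightly periodic/security mails are collected into one mbox (e.g. by a delivery rule) and that the reporting host can be read out of the From header; the ALERT keyword list is a guess, not any standard:

```python
# Sketch: boil a mbox full of daily run / security-audit mails down
# to one per-host digest.  Assumptions: one collected mbox, host name
# in the From header domain, and a hand-picked keyword list.
import mailbox
import re
from collections import defaultdict

ALERT = re.compile(r"(fail|error|denied|refused|vulnerab)", re.I)

def digest(mbox_path):
    per_host = defaultdict(list)
    for msg in mailbox.mbox(mbox_path):
        # crude host extraction: domain part of the From header
        host = (msg["From"] or "unknown").split("@")[-1].strip(">")
        body = msg.get_payload(decode=True) or b""
        for line in body.decode("utf-8", "replace").splitlines():
            if ALERT.search(line):
                per_host[host].append(line.strip())
    report = []
    for host in sorted(per_host):
        report.append("== %s: %d suspicious lines" % (host, len(per_host[host])))
        report.extend("   " + l for l in per_host[host][:5])  # top 5 per host
    return "\n".join(report) or "nothing to report"
```

Run it from cron once a day against the collected mbox and mail yourself the returned string; anything fancier (web report, per-host history) is a small database away.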
H From george at ceetonetechnology.com Tue May 19 19:32:38 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Tue, 19 May 2009 19:32:38 -0400 Subject: [nycbug-talk] another thread: sshd zombie attacks In-Reply-To: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> References: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> Message-ID: <4A134196.9000505@ceetonetechnology.com> matt at atopia.net wrote: > I just block connections after 3 failed login attempts for an hour. Works nicely. > > If anyone wants the script. I also have one that blocks after 3 attempts whether successful or not in 30 second period that only uses pf. > > ------Original Message------ > From: Christopher Olsen > To: george at ceetonetechnology.com > To: Matt Juszczak > Cc: talk at lists.nycbug.org > Subject: RE: [nycbug-talk] Audit Solution > Sent: May 19, 2009 18:51 > > Its funny you mention the zombie attempts my logs get cluttered with failed attempts nothing I worry about I considered moving the port but assumed they would eventually find it. How's the different port working for you? > moving this to a different thread, which has been beaten to death in the past. . .online and off :) There's potential issues with having sshd listening on a nonprivileged port. But the higher priority to me, at least at this point, is to be able to bypass even dealing with the zombies. I was convinced of it not because of "security by obscurity" (please, don't bait with that), but because I heard cases of disk i/o going through the ceiling under such attacks (in the ddos version of the attack), and switching the listening port quickly changed it. This is *without* various scripts, firewall rules, etc., having the hassle and the associated overhead in those respective cases. These are zombies. . .they are looking at port 22, not another port at this point. 
They aren't (yet) smart enough to find other ports listening for sshd and then adjusting from there. "Hiding" among 65535 tcp ports is looking for obscurity if you're talking about crackers. Is it defense against crackers or future mutations of the zombie attacks? No. . . but then use public/private ssh keys, strong passwds, firewall rules, etc. Measure and counter-measure, with a lot of layers before that. So, the answer is 'yes', I like it, because now I don't have the overhead, plus I read my relatively clean dailies everyday. George (please don't top-post. Reply inline or below. It makes the threads that get long easier to follow) From matt at atopia.net Tue May 19 19:19:58 2009 From: matt at atopia.net (matt at atopia.net) Date: Tue, 19 May 2009 23:19:58 +0000 Subject: [nycbug-talk] Audit Solution Message-ID: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> I just block connections after 3 failed login attempts for an hour. Works nicely. If anyone wants the script. I also have one that blocks after 3 attempts whether successful or not in 30 second period that only uses pf. ------Original Message------ From: Christopher Olsen To: george at ceetonetechnology.com To: Matt Juszczak Cc: talk at lists.nycbug.org Subject: RE: [nycbug-talk] Audit Solution Sent: May 19, 2009 18:51 Its funny you mention the zombie attempts my logs get cluttered with failed attempts nothing I worry about I considered moving the port but assumed they would eventually find it. How's the different port working for you? -Christopher Ubix Technologies T: 212-514-6270 C: 516-903-2889 32 Broadway Suite 204 New York, NY 10004 http://www.tuve.tv/mrolsen -----Original Message----- From: George Rosamond Sent: Tuesday, May 19, 2009 6:46 PM To: Matt Juszczak Cc: Christopher Olsen ; talk at lists.nycbug.org Subject: Re: [nycbug-talk] Audit Solution Matt Juszczak wrote: >> How many servers are you managing? 
One of my techs mentioned something I >> forget the name but it merely parsed for key words I was looking for >> something a bit more robust. > > About 60. > > Are you talking about swatch? or logwatch. . . Personally, I get a bunch of dailies, etc., not to mention cron job outputs that I want to see, like statuses of RAIDs, outputs of portaudit, etc. I read everything in the am, and quickly scan for glaring problems. Which is why I don't run sshd on 22. . . since if there's no firewall, you get the zombie attempts filling up the email and miss what you need to know. But that's another discussion :) We've had this discussion before offlist, and if someone has the golden answer, well, let us know. g From nylug at sky-haven.net Tue May 19 20:00:18 2009 From: nylug at sky-haven.net (nylug at sky-haven.net) Date: Tue, 19 May 2009 20:00:18 -0400 Subject: [nycbug-talk] (a bit ot) web server weird stuff In-Reply-To: <4A12F4E8.1030408@gmail.com> References: <4A12F4E8.1030408@gmail.com> Message-ID: <4A134812.4030409@sky-haven.net> Scríobh Steve Rieger: > starting yesterday i see the following in my access logs, and cant seem > to figure out what the heck is going on, > using lighttpd, got any insight ? [removed] The CONNECT commands are attempts by an HTTP user agent to interface with a potential HTTP proxy in order to establish a tunnel session. Since your access.log indicates that your server is replying with 501 "not implemented", I don't think it's necessarily a sign of a compromise or anything on your server. From riegersteve at gmail.com Tue May 19 20:04:05 2009 From: riegersteve at gmail.com (riegersteve at gmail.com) Date: Wed, 20 May 2009 00:04:05 +0000 Subject: [nycbug-talk] (a bit ot) web server weird stuff Message-ID: <1100285777-1242777853-cardhu_decombobulator_blackberry.rim.net-177735629-@bxe1277.bisx.prod.on.blackberry> Any idea how to stop em ?
------Original Message------ From: nylug at sky-haven.net To: Steve Rieger Cc: NYC BUG Subject: Re: [nycbug-talk] (a bit ot) web server weird stuff Sent: May 19, 2009 17:00 Scríobh Steve Rieger: > starting yesterday i see the following in my access logs, and cant seem > to figure out what the heck is going on, > using lighttpd, got any insight ? [removed] The CONNECT commands are attempts by an HTTP user agent to interface with a potential HTTP proxy in order to establish a tunnel session. Since your access.log indicates that your server is replying with 501 "not implemented", I don't think it's necessarily a sign of a compromise or anything on your server. -- Sent via Blackberry I can be reached at 310-947-8565 From bonsaime at gmail.com Tue May 19 20:18:07 2009 From: bonsaime at gmail.com (Jesse Callaway) Date: Tue, 19 May 2009 20:18:07 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <1846C1F2-3044-43E6-82C3-BE22522B7186@nomadlogic.org> References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> <1846C1F2-3044-43E6-82C3-BE22522B7186@nomadlogic.org> Message-ID: On Tue, May 19, 2009 at 6:53 PM, Pete Wright wrote: > > On 19-May-09, at 3:28 PM, Miles Nordin wrote: > >>>>>>> "pw" == Pete Wright writes: >> >> pw> using a global filesystem behind your NFS servers (hadoop >> >> hadoop is a java library, not a mountable POSIX filesystem. > > oh that sucks. i guess if you've gone through the trouble of setting > up hadoop you don't need nfs... > > > -p > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > I think scalable, available storage is something everyone is thinking about.
If anyone feels like presenting their experience to the group, please submit on the website's 'about' page.... maybe a few short presentations in one meeting like an X vs Y vs Z. I can see heated debate already. Who knows if it will get scheduled, but certainly good to post ideas. -jesse From kacanski_s at yahoo.com Tue May 19 19:45:48 2009 From: kacanski_s at yahoo.com (kacanski_s at yahoo.com) Date: Tue, 19 May 2009 23:45:48 +0000 Subject: [nycbug-talk] Audit Solution In-Reply-To: <200905192206.n4JM670N027070@fulton.nycbug.org> References: <200905192206.n4JM670N027070@fulton.nycbug.org> Message-ID: <101593797-1242776721-cardhu_decombobulator_blackberry.rim.net-174332611-@bxe1289.bisx.prod.on.blackberry> Sure, One way would be to use a centralized logging facility and a Python parser to comb through logs and severity levels. Sent from my Verizon Wireless BlackBerry -----Original Message----- From: Christopher Olsen Date: Tue, 19 May 2009 18:06:21 To: Subject: [nycbug-talk] Audit Solution _______________________________________________ talk mailing list talk at lists.nycbug.org http://lists.nycbug.org/mailman/listinfo/talk From matt at atopia.net Tue May 19 20:28:55 2009 From: matt at atopia.net (Matt Juszczak) Date: Tue, 19 May 2009 20:28:55 -0400 (EDT) Subject: [nycbug-talk] Audit Solution In-Reply-To: <101593797-1242776721-cardhu_decombobulator_blackberry.rim.net-174332611-@bxe1289.bisx.prod.on.blackberry> References: <200905192206.n4JM670N027070@fulton.nycbug.org> <101593797-1242776721-cardhu_decombobulator_blackberry.rim.net-174332611-@bxe1289.bisx.prod.on.blackberry> Message-ID: > Sure, > One way would be to use a centralized logging facility and a Python parser to comb through logs and severity levels. This is what I do with swatch and syslog-ng From jbaltz at 3phasecomputing.com Tue May 19 20:34:21 2009 From: jbaltz at 3phasecomputing.com (Jerry B.
Altzman) Date: Tue, 19 May 2009 20:34:21 -0400 Subject: [nycbug-talk] another thread: sshd zombie attacks In-Reply-To: <4A134196.9000505@ceetonetechnology.com> References: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> <4A134196.9000505@ceetonetechnology.com> Message-ID: <4A13500D.3010203@3phasecomputing.com> on 5/19/2009 7:32 PM George Rosamond said the following: > I was convinced of it not because of "security by obscurity" (please, > don't bait with that), but because I heard cases of disk i/o going > through the ceiling under such attacks (in the ddos version of the > attack), and switching the listening port quickly changed it. This is > *without* various scripts, firewall rules, etc., having the hassle and > the associated overhead in those respective cases. I can verify -- this happened *to me*. We had strange load spikes on machines that would otherwise be unused...and we saw *hundreds* of *simultaneous* inbound ssh attempts. Moving ssh to port .ne. 22 solved that problem in a jiffy. > Is it defense against crackers or future mutations of the zombie > attacks? No. . . but then use public/private ssh keys, strong passwds, > firewall rules, etc. Measure and counter-measure, with a lot of layers > before that. This was JUST TO FIX THE DOS problem. We didn't delude ourselves that it would deter someone committed to hacking in. > George //jbaltz -- jerry b. 
altzman jbaltz at 3phasecomputing.com +1 718 763 7405 From bcully at gmail.com Tue May 19 22:09:54 2009 From: bcully at gmail.com (Brian Cully) Date: Tue, 19 May 2009 22:09:54 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> <1846C1F2-3044-43E6-82C3-BE22522B7186@nomadlogic.org> Message-ID: <310EB9F7-EEE3-4180-AA72-9BBEFE56AB6C@gmail.com> On 19-May-2009, at 20:18, Jesse Callaway wrote: > I think scalable, available storage is something everyone is thinking > about. If anyone feels like presenting their experience to the group, > please submit on the website's 'about' page.... maybe a few short > presentations in one meeting like a X vs Y vs Z. I can see heated > debate already. Who knows if it will get scheduled, but certainly good > to post ideas. Databases have already solved many of these problems to the best they can be solved. If you need robust network-available storage, use a database and all that entails. If you want their qualities without giving up current POSIX APIs, you'll be waiting a long, long time. In other words, the difference between one-to-many and many-to-many is insurmountable: failure is not an option, it's a requirement. To re-re-rephrase, your requirements determine your solutions. Most people don't need ACID for most of the things they do, so they likewise don't need a proper database and all its requirements. By the same hand, most people are fine with simple storage solutions. What you require depends on your application and its requirements. Perhaps what the world needs is a configurable HA system, rather than one which tries to do one thing that works for everyone. To my knowledge, there is none such animal. 
-bjc From tekronis at gmail.com Tue May 19 22:49:41 2009 From: tekronis at gmail.com (H. G.) Date: Tue, 19 May 2009 22:49:41 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> <1846C1F2-3044-43E6-82C3-BE22522B7186@nomadlogic.org> Message-ID: <60131f920905191949m6e39112anbb35899d61253a60@mail.gmail.com> On Tue, May 19, 2009 at 8:18 PM, Jesse Callaway wrote: > On Tue, May 19, 2009 at 6:53 PM, Pete Wright wrote: > > > > On 19-May-09, at 3:28 PM, Miles Nordin wrote: > > > >>>>>>> "pw" == Pete Wright writes: > >> > >> pw> using a global filesystem behind your NFS servers (hadoop > >> > >> hadoop is a java library, not a mountable POSIX filesystem. > > > > oh that sucks. i guess if you've gone through the trouble of setting > > up hadoop you don't need nfs... > > > > > > -p > > _______________________________________________ > > talk mailing list > > talk at lists.nycbug.org > > http://lists.nycbug.org/mailman/listinfo/talk > > > > I think scalable, available storage is something everyone is thinking > about. If anyone feels like presenting their experience to the group, > please submit on the website's 'about' page.... maybe a few short > presentations in one meeting like a X vs Y vs Z. I can see heated > debate already. Who knows if it will get scheduled, but certainly good > to post ideas. > > -jesse > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > So..... has anyone tried DragonFly BSD's HAMMER? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mspitzer at gmail.com Tue May 19 23:06:44 2009 From: mspitzer at gmail.com (Marc Spitzer) Date: Tue, 19 May 2009 23:06:44 -0400 Subject: [nycbug-talk] Audit Solution In-Reply-To: References: <200905192206.n4JM670N027070@fulton.nycbug.org> <101593797-1242776721-cardhu_decombobulator_blackberry.rim.net-174332611-@bxe1289.bisx.prod.on.blackberry> Message-ID: <8c50a3c30905192006n39eb5c36j5b2f4951bafaf144@mail.gmail.com> On Tue, May 19, 2009 at 8:28 PM, Matt Juszczak wrote: >> Sure, >> One way would be to use centralized login facility and python parser to comb through logs and severity levels. > > This is what I do with swatch and syslog-ng splunk marc -- Freedom is nothing but a chance to be better. Albert Camus From carton at Ivy.NET Tue May 19 23:31:12 2009 From: carton at Ivy.NET (Miles Nordin) Date: Tue, 19 May 2009 23:31:12 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: Brian Cully's message of "Tue, 19 May 2009 22:09:54 -0400" References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> <1846C1F2-3044-43E6-82C3-BE22522B7186@nomadlogic.org> <310EB9F7-EEE3-4180-AA72-9BBEFE56AB6C@gmail.com> <60131f920905191949m6e39112anbb35899d61253a60@mail.gmail.com> Message-ID: >>>>> "bc" == Brian Cully writes: >>>>> "hg" == H G writes: bc> Databases have already solved many of these problems bc> to the best they can be solved. not so simple. First, my impression is, a common use of QFS (the Sun clustered FS built over multi-initiator SCSI, usually FC-SW) is as backing store for Oracle RAC. DBMS and filesystem are not quite substitutable, but rather parts of a storage stack. Maybe big chunks of the lower stack go unused when a database is running, but for now the whole stack is generally still used. 
Second, Oracle RAC is state-of-the-art for expensive databases, and it does not scale anything close to linearly. CouchDB, Hadoop, and the next round of clustered filesystems, I've the impression, scale much closer to linearly and might be far more interesting in the long run. Third, you can use BDB as a backing-store (a ``storage brick'') for GlusterFS. I think the idea is, for tiny files, but not having actually used it I am not sure if it really does make sense for certain workloads, or if it's something they implemented as a toy. But it is proof-of-concept: POSIX filesystem inside a database. It may be a pointless exercise, but I'm not sure it's a difficult or out-of-reach exercise. hg> anyone tried DragonFly BSD's HAMMER? +1, also would like to know. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From matt at atopia.net Wed May 20 01:57:39 2009 From: matt at atopia.net (Matt Juszczak) Date: Wed, 20 May 2009 01:57:39 -0400 (EDT) Subject: [nycbug-talk] Audit Solution In-Reply-To: <8c50a3c30905192006n39eb5c36j5b2f4951bafaf144@mail.gmail.com> References: <200905192206.n4JM670N027070@fulton.nycbug.org> <101593797-1242776721-cardhu_decombobulator_blackberry.rim.net-174332611-@bxe1289.bisx.prod.on.blackberry> <8c50a3c30905192006n39eb5c36j5b2f4951bafaf144@mail.gmail.com> Message-ID: > splunk splunk allows searching right?
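The swatch/syslog-ng/splunk exchange above all comes down to the same idea: centralize the logs, then scan them for a short list of interesting patterns and mail yourself a digest. A minimal sketch of that scan in awk follows; the sample log lines, the pattern list, and the per-host summary format are all invented for illustration, not taken from anyone's actual setup:

```shell
# Toy version of a swatch/logwatch-style scan: count "interesting"
# syslog lines per host.  Log contents and patterns are made up.
log=$(mktemp)
cat > "$log" <<'EOF'
May 19 18:51:02 web1 sshd[941]: Failed password for root from 10.0.0.5 port 40022 ssh2
May 19 18:51:04 web1 sshd[941]: Failed password for root from 10.0.0.5 port 40023 ssh2
May 19 18:52:10 db1 kernel: ad4: FAILURE - READ_DMA status=51 error=40
EOF
# Field 4 of a standard syslog line is the hostname; tally matches per host.
awk '/Failed password|FAILURE/ { hits[$4]++ }
     END { for (h in hits) printf "%s: %d flagged lines\n", h, hits[h] }' "$log" | sort
rm -f "$log"
```

Pointed at a real syslog-ng destination file instead of the temp file, this prints one summary line per host, which is about the granularity a morning-digest reader wants; anything fancier (severity levels, thresholds, mailing the output) is bolt-on.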
From isaac at diversaform.com Wed May 20 02:16:45 2009 From: isaac at diversaform.com (Isaac Levy) Date: Wed, 20 May 2009 02:16:45 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <7A6DBDC2-404F-4379-BBDB-65BFEEADC94D@exit2shell.com> <7AD47D69-1B64-4A68-8F14-2099B7CAC368@diversaform.com> Message-ID: <90C5EFFE-2D20-460F-AA99-100C248CAF1D@diversaform.com> On May 19, 2009, at 6:20 PM, Miles Nordin wrote: >>>>>> "il" == Isaac Levy writes: > > il> I was under the impression that disabling the ZIL was a > il> developer debugging thing- it's dangerous, period. > > no, 100% incorrect. OK- I'll concede being incorrect, (and I learned something new here), but I'd like to only concede 99% incorrect: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#ZIL Sun's got tons of docs about using non-volatile RAM (solid state drive), or a dedicated drive to increase performance of ZFS, but in reading this, none of their docs speak clearly about disabling it. > Disabling the ZIL does not increase likelihood of > losing the pool at all. It does break the NFSv3 transaction/commit > system if the server reboots (meaning: you lose recently written data, > the client ``acts weird'' until you umount and remount the NFS shares > involved), and also it breaks fsync() so it's not safe for > filesystems-on-filesystems (databases, VM guest backing stores). > however I think the dirty truth is that most VM's suppress sync's to > win performance and are unsafe to guest filesystems if the host > reboots, with or without a ZIL. databases and mail obviously it > matters. I can see that disabling the ZIL could yield better performance, but even in active filesystems (without filesystems-on-filesystems), I can see potential problems in data loss- (losing the box as file(s) are being written, or were recently written).
So to me, disabling the ZIL doesn't seem quite rational for most circumstances. > > > The way they explained it on ZFS list, is that the ZIL is always > present in RAM, even when disabled, and is part of how the POSIX > abstraction in ZFS is implemented (which is layered on top of the > object store, a sibling of zvol's). Normally the ZIL is committed to > the regular part of the disk, the bulk part, when each TXG commits > every 30 seconds or so. When you call fsync(), or when NFSv3 commits > or closes a file, the relevant part of the ZIL in RAM is flushed to > disk. It's flushed to a separated special disk area that acts as a > log, so it is write-only unless there is a crash. Eventually when the > next TXG commits, the prior ZIL flush is superseded, and the blocks > both in RAM and on disk are free for reuse. Often that special area > is incorrectly called ``the ZIL'', and writing to it is what you > disable. so disabling it doesn't endanger data written more than 30 > seconds before the crash. Hrm. I'm chewing on this. To me, 30 seconds of lost files feels funny- I can't think of an acceptable case for this happening- (but I'm not saying there is no case where this is acceptable). > > > but it does break fsync() so you shouldn't do it just for fun. also i > think it's a global setting not per filesystem, which kind of blows > chunks. > > il> I really feel that some Ggate-ish thing could be written for > il> the Geom subsystem which allowed for multiple writes? Or > il> something which did writes according to some transactional > il> model- (locking files, etc...) > > There are two kinds of multiple writers. first kind is SCSI layer. > > The better Linux iSCSI targets (like SCST, which I haven't used yet) > support multiple initiators. This is a SCSI term. When you activate > an SCST target, a blob of SCST springs to life in the kernel > intercepting ALL scsi commands headed toward the disk.
Applications > and filesystems running on the same box as SCST represent one > initiator and get routed through this blob. The actual iSCSI > initiators out on the network become the second and further > initiators. so, even if you have only one iSCSI initiator hitting > your target, SCST multiple-initiator features are active. > > There are many multi-initiator features in the SCSI standard. I don't > completely understand any of them. > > One is the reservation protocol, which can be used as a sort of > heartbeat. However since a smart and physically-sealed device is > managing the heartbeat rather than a mess of cabling and switches, > split-brain is probably less likely when all nodes are checking in > with one of their disks rather than pinging each other over a network. > The disk then becomes single point of failure so then you need a > quorum of 3 disks. > > I think maybe the reservation protocol can also block access to the > disk, but I'm not sure. That is not its most important feature. > Sun's cluster stuff has bits in the host driver stack to block access > to a disk when the cluster node isn't active and isn't supposed to > have access, and I suspect it can work on slice/partition level. > > A second kind of multi-initiator feature is to clean up all the > standards-baggage of extra seldom-used SCSI features. For example if > one node turns on the write cache and sets a read-ahead threshold, > the change will affect all nodes, but the other nodes won't know about > it. SCSI has some feature to broadcast changes to mode pages. > > A third multi-initiator feature is to actually support reads and > writes from multiple hosts. SCST claims they re-implement TCQ in > their multi-initiator blob, to support SATA disks which have either no > exposed queues or queue commandsets which don't work with multiple > initiators. > > but SEPARATE FROM ALL THIS MULTI-INITIATOR STUFF, is the second kind > of multiple writer.
Two hosts being able to write to the disk won't > help you with any traditional filesystem, including ZFS. You can't > mount a filesystem from two hosts over the same block device. This is > certain---it won't work without completely rearchitecting the > filesystem. Filesystems aren't designed to accept input from > underneath them. > > It's possible to envision such a filesystem. like the > Berkeley/Sleepycat/Oracle BDB library can have multiple processes open > the same database. but they cheat! They have shared memory regions > so the multiple processes communicate with each other directly, and a > huge chunk of their complexity is to support this feature (since it's > often why programmers turn to the library in the first place). > > And it's been done with filesystems, too. RedHat GFS, Oracle OCFS, > Sun QFS, all work this way, and all of them also cheat: you need a > ``metadata'' node which has direct and exclusive access to a little > bit of local disk (the metadata node might be a single-mounter > active/passive HA cluster like we're talking about before). The point > of these filesystems isn't so much availability as switching density. > It's not that you want an active/active cluster so you can feel > better---it's that there's so much filesystem traffic it can't be > funneled through a single CPU. By having clients open big files, yes > granted they are all funneled to the single metadata server still, but > for all the other bulk access to the meat inside the files they can go > straight to the disks. It's not so much for clustered servers serving > non-cluster clients as for when EVERYTHING is part of the cluster. > > The incentive to design filesystems this way, having clients use the > SCSI multi-initiator features directly, is the possibility of > extremely high-bandwidth high-density high-price FC-SW storage from > EMC/Hitachi/NetApp. 
Keeping both nodes in an HA cluster active is > only slightly helpful, because you still need to handle your work on 1 > node, and because 2 is only a little bigger than 1. But if you can > move more of your work into the cluster, so only the interconnect is > shared and not the kernel image, it's possible to grow much further, > with the interconnect joining: > > [black box] storage system > 2 metadata nodes > n work nodes > > The current generation of split metadata/data systems (Google FS, > pNFS, GlusterFS, Lustre) uses filesystems rather than SCSI devices as > the data backing store, so now the interconnect joins: > > m storage nodes > 2 metadata nodes > n work nodes > > and you do not use SCSI-2 style multi-initiator at all, except maybe > on the 2 metadata nodes. All (not sure for GoogleFS but all others) > have separate metadata servers like GFS/OCFS/QFS. The difference is > that the data part is also a PeeCee with another filesystem like ext4 > or ZFS between the disks and the clients, instead of disks directly. > I think this approach has got the future nailed down, especially if > the industry manages to deliver lossless fabric between PeeCees like > infiniband or CEE, and especially because everyone seems to say GFS > and OCFS don't work. > > I think QFS does work though. It's very old. And Sun has been > mumbling about ``emancipating'' it. > > http://www.auc.edu.au/myfiles/uploads/Conference/Presentations%202007/Duncan_Ian.pdf > http://www.afp548.com/filemgmt/visit.php?lid=64&ei=nyMTSsuQKJiG8gTmq8iBBA > http://wikis.sun.com/display/SAMQFS/Home > > il> something which did writes according to some transactional > il> model- (locking files, etc...) > > I've heard of many ways to take ``crash consistent'' backups at block > layer. Without unmounting the filesystem, you can back it up while > the active node is still using it, with no cooperation from the > filesystem.
There are also storage layers that use this philosophy to > make live backups, will watch the filesystem do its work, and > replicate this asynchronously over a slow connection offsite without > making the local filesystem wait. (thus, better than gmirror by far) > They are supposed to work fine if you accumulate hours of backlog > during the day, then catch up overnight. > > * Linux LVM2 > multiple levels of snapshot. not sure they can be writeable > though. > > + drbd.org - replicate volumes to a remote site. not sure how > integrated it is with LVM2 though, maybe not at all. > > * ZFS zvol's > multiple levels, can be writeable > > + zfs send/recv for replication. good for replication, bad for > stored backups! > > * vendor storage (EMC Hitachi NetApp) > > they can all do it, not sure all the quirks > > some ship kits you can install in windows crap to ``quiesce'' the > filesystems or SQL Server stores. it is supposed to be > crash-consistent on its own, always, in case you actually did > crash, but i guess NTFS and SQL Server are goofy and don't meet > this promise, so there is a whole mess of fud and confusing terms > and modules to buy. > > * Sun/Storagetek AVS ii (instant image) > > this isn't a proper snapshot because you only get 2 layers, only > ``current'' and ``snap''. There is a ``bitmap'' volume to mark > which blocks are dirty. no trees of snapshots. > > ii is a key part of AVS block-layer replication. only with ii is > it possible to safely fail back from the secondary to the primary. > > + AVS (availability suite, a.k.a. sun cluster geographic edition, > maybe other names) is like drbd > > AVS is old and mature and can do things like put multiple > volumes into consistency groups that share a single timeline.
> If you have a database that uses multiple volumes at once, or > if you are mirroring underneath a RAID/raidz layer, then you > need to put all the related volumes into a consistency group or > else there's no longer such a thing as crash-consistency. If > you think about it this makes sense. Holy moses thanks for the overview in the thread Miles! Learned 20 new things today :) Rocket- .ike From akosela at andykosela.com Wed May 20 02:15:39 2009 From: akosela at andykosela.com (Andy Kosela) Date: Wed, 20 May 2009 08:15:39 +0200 Subject: [nycbug-talk] another thread: sshd zombie attacks In-Reply-To: <4A13500D.3010203@3phasecomputing.com> References: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> <4A134196.9000505@ceetonetechnology.com> <4A13500D.3010203@3phasecomputing.com> Message-ID: <4a13a00b.rdoQ6H0r8qSsC1Js%akosela@andykosela.com> "Jerry B. Altzman" wrote: > on 5/19/2009 7:32 PM George Rosamond said the following: > > I was convinced of it not because of "security by obscurity" (please, > > don't bait with that), but because I heard cases of disk i/o going > > through the ceiling under such attacks (in the ddos version of the > > attack), and switching the listening port quickly changed it. This is > > *without* various scripts, firewall rules, etc., having the hassle and > > the associated overhead in those respective cases. > > I can verify -- this happened *to me*. We had strange load spikes on > machines that would otherwise be unused...and we saw *hundreds* of > *simultaneous* inbound ssh attempts. > Moving ssh to port .ne. 22 solved that problem in a jiffy. Fix your firewall. That issue has been discussed here before and I will state once again that it is dangerous opening 22/tcp to the whole world.
--Andy From matt at atopia.net Wed May 20 02:20:52 2009 From: matt at atopia.net (Matt Juszczak) Date: Wed, 20 May 2009 02:20:52 -0400 (EDT) Subject: [nycbug-talk] another thread: sshd zombie attacks In-Reply-To: <4a13a00b.rdoQ6H0r8qSsC1Js%akosela@andykosela.com> References: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> <4A134196.9000505@ceetonetechnology.com> <4A13500D.3010203@3phasecomputing.com> <4a13a00b.rdoQ6H0r8qSsC1Js%akosela@andykosela.com> Message-ID: > Fix your firewall. That issue has been discussed here before and I will > state once again that it is dangerous opening 22/tcp to the whole world. What if port 22 is open up to the world but it's only to certain "jump boxes" and those jump boxes are really sensitive to attacks? From akosela at andykosela.com Wed May 20 02:21:17 2009 From: akosela at andykosela.com (Andy Kosela) Date: Wed, 20 May 2009 08:21:17 +0200 Subject: [nycbug-talk] another thread: sshd zombie attacks In-Reply-To: References: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> <4A134196.9000505@ceetonetechnology.com> <4A13500D.3010203@3phasecomputing.com> <4a13a00b.rdoQ6H0r8qSsC1Js%akosela@andykosela.com> Message-ID: <4a13a15d.PmydeKBSx2pJPJpi%akosela@andykosela.com> Matt Juszczak wrote: > > Fix your firewall. That issue has been discussed here before and I will > > state once again that it is dangerous opening 22/tcp to the whole world. > > What if port 22 is open up to the world but it's only to certain "jump > boxes" and those jump boxes are really sensitive to attacks? If you must have a box with sshd(8) widely open, then I would consider running at least pf(4) on it. It has some nice features to stop these kind of attacks. 
--Andy From matt at atopia.net Wed May 20 02:24:24 2009 From: matt at atopia.net (Matt Juszczak) Date: Wed, 20 May 2009 02:24:24 -0400 (EDT) Subject: [nycbug-talk] Do you guys/gals _____ify your _____ boxes? In-Reply-To: <52865BEF-67F4-477D-B164-A0BD7EA037AD@exit2shell.com> References: <52865BEF-67F4-477D-B164-A0BD7EA037AD@exit2shell.com> Message-ID: > If you have a master puppet server, it makes sense that all the > configuration you do to the box is done via puppet. > > If your master puppet server dies, it will allow you to say this is the > new master puppet server and have the box back online in a matter of > minutes. Perhaps I'm not far enough along in the puppet configuration, but all I'm using puppet for is to manage certain configuration files. I'm still installing packages manually and setting up boxes manually. Once the boxes are set up, I tell puppet to push config files to the boxes, such as /usr/local/etc, etc. Still a tedious task to set up boxes, but once they are up, easily changeable. > If someone changes something on your master puppet server, it's better to > have puppet discover and change it back and alert you instead of > discovering the change weeks later. OK. So puppetify the master puppet server. That's fine. But if you only have one or two people that have access to the puppet server, chances are it isn't going to have problems. > As for LDAP, I prefer to configure every machine to first auth against > the primary ldap server, the slave ldap server and then files. You keep > root and system level accounts in /etc/passwd and user accounts are > stored in ldap. This allows you to login to the box if you break > something but keeps the auth subsystem of each server consistent I do this, too. In fact, it's the same setup 100% that I'm using. Are there people that don't keep root and system accounts in /etc/passwd? That's dumb in my opinion. 
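[The lookup ordering Steven describes (LDAP servers first, then local files) is normally expressed in nsswitch.conf; a minimal sketch of the relevant fragment, assuming an nss_ldap module is installed — swap the two entries to get a files-first order:]

```
# /etc/nsswitch.conf (fragment) -- try LDAP first, fall back to local files;
# root and system accounts stay in /etc/passwd either way
passwd: ldap files
group:  ldap files
```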
If I take the LDAP servers down, authentication still works, but I actually do files FIRST and then LDAP, since no accounts on the boxes exist in LDAP, and vice versa. So you're saying if I'm using that setup, it's okay to do it to the LDAP boxes too? From matt at atopia.net Wed May 20 02:25:23 2009 From: matt at atopia.net (Matt Juszczak) Date: Wed, 20 May 2009 02:25:23 -0400 (EDT) Subject: [nycbug-talk] another thread: sshd zombie attacks In-Reply-To: <4a13a15d.PmydeKBSx2pJPJpi%akosela@andykosela.com> References: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> <4A134196.9000505@ceetonetechnology.com> <4A13500D.3010203@3phasecomputing.com> <4a13a00b.rdoQ6H0r8qSsC1Js%akosela@andykosela.com> <4a13a15d.PmydeKBSx2pJPJpi%akosela@andykosela.com> Message-ID: > If you must have a box with sshd(8) widely open, then I would consider > running at least pf(4) on it. It has some nice features to stop these > kinds of attacks. Right. Exactly what I'm doing: ---/etc/pf.conf--- if = "em0" pass all table <bruteforce> persist block drop in quick on $if from <bruteforce> to any pass in quick on $if inet proto tcp from any to $if port 22 flags S/SA keep state (max-src-conn 50, max-src-conn-rate 3/30, overload <bruteforce> flush global) ---end--- From spork at bway.net Wed May 20 02:43:59 2009 From: spork at bway.net (Charles Sprickman) Date: Wed, 20 May 2009 02:43:59 -0400 (EDT) Subject: [nycbug-talk] another thread: sshd zombie attacks In-Reply-To: References: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> <4A134196.9000505@ceetonetechnology.com> <4A13500D.3010203@3phasecomputing.com> <4a13a00b.rdoQ6H0r8qSsC1Js%akosela@andykosela.com> <4a13a15d.PmydeKBSx2pJPJpi%akosela@andykosela.com> Message-ID: On Wed, 20 May 2009, Matt Juszczak wrote: >> If you must have a box with sshd(8) widely open, then I would consider >> running at least pf(4) on it. It has some nice features to stop these >> kinds of attacks. 
> > Right. Exactly what I'm doing: > > > ---/etc/pf.conf--- > > if = "em0" > pass all > table <bruteforce> persist > block drop in quick on $if from <bruteforce> to any > pass in quick on $if inet proto tcp from any to $if port 22 flags S/SA > keep state (max-src-conn 50, max-src-conn-rate 3/30, overload <bruteforce> > flush global) > ---end--- For anyone else considering this, for safety's sake drop a rule like this above the bruteforce rule: pass in quick on $ext_if proto tcp from <admin> to any port 22 flags S/SA keep state That "admin" table may contain another jump box. The bruteforce thing works well, but I have seen some weird sftp clients get tripped up on it somehow. Charles > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From mspitzer at gmail.com Wed May 20 07:41:14 2009 From: mspitzer at gmail.com (Marc Spitzer) Date: Wed, 20 May 2009 07:41:14 -0400 Subject: [nycbug-talk] Audit Solution In-Reply-To: References: <200905192206.n4JM670N027070@fulton.nycbug.org> <101593797-1242776721-cardhu_decombobulator_blackberry.rim.net-174332611-@bxe1289.bisx.prod.on.blackberry> <8c50a3c30905192006n39eb5c36j5b2f4951bafaf144@mail.gmail.com> Message-ID: <8c50a3c30905200441r48ccbef2k71d0d1a6636e50ba@mail.gmail.com> On Wed, May 20, 2009 at 1:57 AM, Matt Juszczak wrote: >> splunk > > splunk allows searching right? > yes it does, it also does graphing of results. marc -- Freedom is nothing but a chance to be better. 
Albert Camus From mspitzer at gmail.com Wed May 20 09:27:21 2009 From: mspitzer at gmail.com (Marc Spitzer) Date: Wed, 20 May 2009 09:27:21 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <90C5EFFE-2D20-460F-AA99-100C248CAF1D@diversaform.com> References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <7A6DBDC2-404F-4379-BBDB-65BFEEADC94D@exit2shell.com> <7AD47D69-1B64-4A68-8F14-2099B7CAC368@diversaform.com> <90C5EFFE-2D20-460F-AA99-100C248CAF1D@diversaform.com> Message-ID: <8c50a3c30905200627u4e634b13o5627466457a36d64@mail.gmail.com> On Wed, May 20, 2009 at 2:16 AM, Isaac Levy wrote: > Hrm. I'm chewing on this. To me, 30 seconds of lost files feels > funny- I can't think of an acceptable case for this happening- (but I'm > not saying there is no case where this is acceptable). > Ike, I think you are looking at it the wrong way. It's not a 30 sec data loss, it is a 30 sec data loss vs whatever plan B happens to be: can not transact business for 4 hours for example. Then you figure out if you want to buy the insurance. marc -- Freedom is nothing but a chance to be better. 
That issue has been discussed here before and I will > state once again that it is dangerous opening 22/tcp to the whole world. (This was a while ago, but...) Great! I'll be looking for the updates to PIXOS 6.1 that would "fix" this issue. We needed open-to-the-world ssh. Not everyone could easily have used VPN software at the time. We're talking about lesser of two evils, and we needed to stop that particular neck bleeding, and moving ssh to port .ne. 22 fixed it. I do not claim it is BEST security, I do claim that when you're being DOSed with ssh attempts, moving ssh's listening ports stems the DOS. > --Andy //jbaltz -- jerry b. altzman jbaltz at 3phasecomputing.com +1 718 763 7405 From carton at Ivy.NET Wed May 20 10:05:43 2009 From: carton at Ivy.NET (Miles Nordin) Date: Wed, 20 May 2009 10:05:43 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <90C5EFFE-2D20-460F-AA99-100C248CAF1D@diversaform.com> (Isaac Levy's message of "Wed, 20 May 2009 02:16:45 -0400") References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <7A6DBDC2-404F-4379-BBDB-65BFEEADC94D@exit2shell.com> <7AD47D69-1B64-4A68-8F14-2099B7CAC368@diversaform.com> <90C5EFFE-2D20-460F-AA99-100C248CAF1D@diversaform.com> Message-ID: >>>>> "il" == Isaac Levy writes: il> I can see potential problems in data loss- (loosing the box as il> file(s) are being written, or were recently written). Even with ZIL disabled any data written more than 30 seconds ago is safe. If ZFS recovers at all, ZFS will always recover to some state it passed through prior to the crash (modulo fsync() which I think does fuck with the timeline). Disabling the ZIL does not at all increase the likelihood that ZFS won't recover at all (pool loss). If you are not using fsync() or NFS, the ZIL is never even written at all. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From carton at Ivy.NET Wed May 20 10:11:18 2009 From: carton at Ivy.NET (Miles Nordin) Date: Wed, 20 May 2009 10:11:18 -0400 Subject: [nycbug-talk] another thread: sshd zombie attacks In-Reply-To: <4A140766.8050002@3phasecomputing.com> (Jerry B. Altzman's message of "Wed, 20 May 2009 09:36:38 -0400") References: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> <4A134196.9000505@ceetonetechnology.com> <4A13500D.3010203@3phasecomputing.com> <4a13a00b.rdoQ6H0r8qSsC1Js%akosela@andykosela.com> <4A140766.8050002@3phasecomputing.com> Message-ID: >>>>> "jba" == Jerry B Altzman writes: jba> Not everyone could easily have used VPN software at the time. according to ike-ng working group mailing list, IKEv1 is full of DoS. not that it actually gets DoS'd in practice, but just saying, if you are imagining VPN layer makes it ``proper,'' foolproof, nope. in fact just the opposite because none of your tricks to remain open to Internet but avoid DoS will work with a closed-source appliance. (auto blacklist on fail won't work, can't move IKE port numbers unless you use proprietary/slow TCP NAT-T) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From skreuzer at exit2shell.com Wed May 20 11:41:42 2009 From: skreuzer at exit2shell.com (Steven Kreuzer) Date: Wed, 20 May 2009 11:41:42 -0400 Subject: [nycbug-talk] Do you guys/gals _____ify your _____ boxes? 
In-Reply-To: References: <52865BEF-67F4-477D-B164-A0BD7EA037AD@exit2shell.com> Message-ID: <1B410E29-77FD-4FDC-BDC7-FD9CA7F3D4A3@exit2shell.com> On May 20, 2009, at 2:24 AM, Matt Juszczak wrote: >> If someone changes something on your master puppet server, it's >> better to have puppet discover and change it back and alert you >> instead of discovering the change weeks later. > > OK. So puppetify the master puppet server. That's fine. But if > you only have one or two people that have access to the puppet > server, chances are it isn't going to have problems. trust me, it will. -- Steven Kreuzer http://www.exit2shell.com/~skreuzer From jbaltz at 3phasecomputing.com Wed May 20 10:44:56 2009 From: jbaltz at 3phasecomputing.com (Jerry B. Altzman) Date: Wed, 20 May 2009 10:44:56 -0400 Subject: [nycbug-talk] another thread: sshd zombie attacks In-Reply-To: References: <638521075-1242775184-cardhu_decombobulator_blackberry.rim.net-1843881243-@bxe1247.bisx.prod.on.blackberry> <4A134196.9000505@ceetonetechnology.com> <4A13500D.3010203@3phasecomputing.com> <4a13a00b.rdoQ6H0r8qSsC1Js%akosela@andykosela.com> <4A140766.8050002@3phasecomputing.com> Message-ID: <4A141768.8030902@3phasecomputing.com> on 5/20/2009 10:11 AM Miles Nordin said the following: >>>>>> "jba" == Jerry B Altzman writes: > jba> Not everyone could easily have used VPN software at the time. > according to ike-ng working group mailing list, IKEv1 is full of DoS. Stipulated, but that is orthogonal to my original point. > not that it actually gets DoS'd in practice, but just saying, if you > are imagining VPN layer makes it ``proper,'' foolproof, nope. in fact I never believed that -- only that we couldn't apply VPN pixie-dust to stop the *ssh* DOS we were experiencing due to other constraints we had. Remember: the goal I had at the time was to stop the *ssh* DOS, not to pre-emptively fix every security hole we had. (We ended up taking more measures later.) 
We saw: - with ssh on port 22, much ssh DOS - with ssh on port !22, no ssh DOS That was my only point. //jbaltz -- jerry b. altzman jbaltz at 3phasecomputing.com +1 718 763 7405 From bcully at gmail.com Wed May 20 17:43:51 2009 From: bcully at gmail.com (Brian Cully) Date: Wed, 20 May 2009 17:43:51 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> <1846C1F2-3044-43E6-82C3-BE22522B7186@nomadlogic.org> <310EB9F7-EEE3-4180-AA72-9BBEFE56AB6C@gmail.com> <60131f920905191949m6e39112anbb35899d61253a60@mail.gmail.com> Message-ID: <5279E5B6-5608-483A-9731-8955BA7E3FBD@gmail.com> On 19-May-2009, at 23:31, Miles Nordin wrote: > Second, Oracle RAC is state-of-the-art for expensive databases, and it > does not scale anything close to linearly. CouchDB, Hadoop, and the > next round of clustered filesystems, I've the impression, scale much > closer to linearly and might be far more interesting in the long run. I was thinking of CouchDB specifically, but the reason it can do what it does are applicable to the larger context. The reason POSIX will never be able to support what people want is because what people want is impossible. The reason CouchDB works is because the application programmer has to take on the duty of handling things like aborted transactions. You simply cannot solve the problem in a generic sense and POSIX has no facilities for communicating the problem or its possible solutions. You're certainly not going to get a drop-in POSIX FS replacement with ACID qualities. If you want that stuff, you have to write the code in your own app, and nowhere anywhere near the kernel is appropriate. > Third, you can use BDB as a backing-store (a ``storage brick'') for > GlusterFS. 
I think the idea is, for tiny files, but not having > actually used it I am not sure if it really does make sense for > certain workloads, or if it's something they implemented as a toy. > But it is proof-of-concept: POSIX filesystem inside a database. It > may be a pointless exercise, but I'm not sure it's difficult or > out-of-reach exercise. You can put a POSIX FS in a DB pretty trivially, but the reverse is not true. Using a DB (with ACID qualities) requires entirely new APIs which are strictly more powerful than what you get out of POSIX. The fundamental problem here is that in an HA setup you typically have N nodes talking to the same set of M discs. Suddenly, time is a strictly localized concept and many of the operations you took for granted are no longer possible in the general sense. What do you do when two nodes try to write to the same part of disc at the same time? Who wins? What if the overwritten data is part of a complex structure and thus locking any individual set of blocks may still not prevent corruption? How do you tell the other node to go fuck itself? How can you prevent that kind of access in light of the many and varied types of topology splits that have opened before you? 
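[Brian's two-writers scenario is the classic lost update. A tiny sketch of just the failure mode, with the interleaving simulated sequentially so it is deterministic — the file path is hypothetical, standing in for a shared block:]

```shell
echo 0 > /tmp/counter              # the shared "disk block"

a=$(cat /tmp/counter)              # node A reads 0
b=$(cat /tmp/counter)              # node B reads 0, before A writes back
echo $((a + 1)) > /tmp/counter     # node A writes 1
echo $((b + 1)) > /tmp/counter     # node B writes 1, clobbering A's update

cat /tmp/counter                   # 1, not 2: one increment silently lost
```

Nothing in POSIX file semantics tells node B that its read went stale; that arbitration has to come from a lock manager somewhere.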
-bjc From carton at Ivy.NET Wed May 20 18:50:43 2009 From: carton at Ivy.NET (Miles Nordin) Date: Wed, 20 May 2009 18:50:43 -0400 Subject: [nycbug-talk] Approach for the NFS cluster In-Reply-To: <5279E5B6-5608-483A-9731-8955BA7E3FBD@gmail.com> (Brian Cully's message of "Wed, 20 May 2009 17:43:51 -0400") References: <595270.63168.qm@web53602.mail.re2.yahoo.com> <4A12CD1E.40703@ceetonetechnology.com> <4E169530-AB97-4505-A5EE-D25DB95DEF02@diversaform.com> <4A130BB7.1000708@3phasecomputing.com> <53F6C5C1-9B97-4C63-A4E8-B9C08973DA5F@nomadlogic.org> <1846C1F2-3044-43E6-82C3-BE22522B7186@nomadlogic.org> <310EB9F7-EEE3-4180-AA72-9BBEFE56AB6C@gmail.com> <60131f920905191949m6e39112anbb35899d61253a60@mail.gmail.com> <5279E5B6-5608-483A-9731-8955BA7E3FBD@gmail.com> Message-ID: >>>>> "bc" == Brian Cully writes: bc> CouchDB works is because the application programmer has to bc> take on the duty of handling things like aborted bc> transactions. You simply cannot solve the problem in a generic bc> sense and POSIX has no facilities for communicating the bc> problem or its possible solutions. yeah I guess CouchDB is very unlike BDB. I'm not ready to agree it's impossible. but I think in both cases it might be ~always silly. bc> You can put a POSIX FS in a DB pretty trivially, but bc> the reverse is not true. Using a DB (with ACID qualities) bc> requires entirely new APIs which are strictly more powerful bc> than what you get out of POSIX. This doesn't make sense. ~all existing databases run inside POSIX filesystems. Are you saying they're all broken? In what way? bc> The fundamental problem here is that in an HA setup bc> you typically have N nodes talking to the same set of M bc> discs. for HA NFS you typically have 1 node talking to M disks. The 2nd node is passive. It should be possible to make a logically-equivalent HA cluster using a robotic arm and a cold spare: 1. primary fails. 2. robot notices somehow. 3. robot unplugs primary's power cord. 4. 
robot moves all disks from primary chassis to secondary chassis. 5. robot plugs in secondary's power cord. 6. robot goes to sleep. not much magic. Most of the magic is in the NFS protocol itself, which allows servers to reboot, even cord-yank reboot, without clients noticing or losing any data at all---in fact, if the implementation is really solid, regular old NFS clients will safely regrab their POSIX advisory locks across a server reboot. SMB does not have any of this magic. Thus the HA part with NFS is relatively small, just to make the ``reboot'' step faster, and to add management so you've some idea when it's safe to fail back. It's bigger than just carp, but it's not big like Oracle, and not magical. It is worth doing to create NFS-to-mysteryFileMesh gateways, multiple gateways onto a single filesystem, so that a pool of clients can mount a single unified filesystem even if their aggregate bandwidth is too big to pass through a single CPU. This is the point of pNFS, of exporting regular NFS from multiple clustered Lustre clients, and seems to be the Isilon pitch if I read them right. but doing this involves larger and different magic, at least two layers of it (one to create this new fancy kind of filesystem itself, and then a second layer to make the multiple NFS servers present themselves to clients cohesively including moving lock custodianship from one server to another when a client switches to a different gateway) and you do not invoke said magic just for the availability improvement that the OP wanted. You do it to break through a performance wall. bc> What do you do when two nodes try to write to the same part of bc> disc at the same time? Who wins? the lockholder? bc> What if the overwritten data is part of a complex structure bc> and thus locking any individual set of blocks may still not bc> prevent corruption? The sentence unravels because ``all the blocks on the disk'' is also a set of blocks. 
You can safely update whatever you like---you just don't scale close to linearly with any obvious approach. I don't know what Oracle RAC does. The filesystems like QFS, GFS, OCFS attack this to a first order by storing only unstructured data on the shared SCSI targets. All the structured data (metadata) goes through an active/passive HA pair. The metadata server can also arbitrate locks and cache-leases, or clients can run a distributed locking protocol amongst themselves. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From skreuzer at exit2shell.com Thu May 21 09:11:03 2009 From: skreuzer at exit2shell.com (Steven Kreuzer) Date: Thu, 21 May 2009 09:11:03 -0400 Subject: [nycbug-talk] ZFS 13 was MFC-ed into 7-STABLE Message-ID: <641D65B5-592D-4E12-8C07-B02D5936E373@exit2shell.com> Lots of changes: http://svn.freebsd.org/viewvc/base?view=revision&revision=192498 Very Important: If you are running 7-STABLE w/ zfs you will need to rebuild world next time you update. You have been warned. -- Steven Kreuzer http://www.exit2shell.com/~skreuzer From matt at atopia.net Fri May 22 00:53:19 2009 From: matt at atopia.net (Matt Juszczak) Date: Fri, 22 May 2009 00:53:19 -0400 (EDT) Subject: [nycbug-talk] Make package-recursive problem In-Reply-To: <1592882363-1242604820-cardhu_decombobulator_blackberry.rim.net-163190730-@bxe1247.bisx.prod.on.blackberry> References: <1592882363-1242604820-cardhu_decombobulator_blackberry.rim.net-163190730-@bxe1247.bisx.prod.on.blackberry> Message-ID: I don't want this thread to die. If I'm correct, this is a huge bug in the ports system. I've started noticing more and more that packages I build are missing files after they are rebuilt. For instance, I have a build box (named atlantis for the sake of this email): on atlantis, I build all packages with make package-recursive, and then install them on all boxes. 
This works fine, except that over time, as I compile more packages, the ports system re-generates packages for existing built packages (for instance, if I make a nagios package, it recreates the apache package since that's a dependency. If I then install cacti, it recreates the apache package again). This is normally no big deal, as I haven't touched my source tree, config options, or anything like that. 99% of the time the packages are rebuilt consistently. However, since this point, I've had some php modules come up empty (as in my original email), and now, I'm having some other flukes as well. If you'll see below, somehow, fontconfig, mysql-client, and python25 got out of whack between my build box and a production webserver. Yet these are the same packages - I didn't change a thing, other than install them at different times. But my build tree on atlantis has not been updated or changed in any manner. This obviously occurred because make package-recursive rebuilt these packages at some point because they were dependencies for other packages being installed. Except that, obviously, it didn't build the packages 100% identically to the time before: local$ sh check2.sh barfy -> fontconfig-2.6.0,1 isn't right barfy -> mysql-client-5.0.77_1 isn't right barfy -> python25-2.5.4_1 isn't right local$ sh check3.sh Server 1: atlantis Server 2: barfy Package: fontconfig Password: Password: 65,67d64 < /usr/local/share/doc/fontconfig/fontconfig-user.html < /usr/local/share/doc/fontconfig/fontconfig-user.pdf < /usr/local/share/doc/fontconfig/fontconfig-user.txt From matt at atopia.net Fri May 22 04:32:19 2009 From: matt at atopia.net (Matt Juszczak) Date: Fri, 22 May 2009 04:32:19 -0400 (EDT) Subject: [nycbug-talk] bad file descriptor and fts_read error Message-ID: Hi all, For some reason, on one of my FreeBSD jump boxes, which has been acting up lately, I'm getting a few errors. 
This all caught my attention when I started noticing some setuid errors in my nightly report (files were disappearing as being set as setuid). First: bob# cd /usr/ports/packages/All && pkg_delete -f gettext-0.17_1 && pkg_add gettext-0.17_1.tbz mtree: bin/snmpbulkwalk: Bad file descriptor mtree: bin/snmpcheck: Bad file descriptor mtree: bin/snmpconf: Bad file descriptor mtree: bin/snmpdelta: Bad file descriptor mtree: bin/snmpdf: Bad file descriptor mtree: bin/snmpget: Bad file descriptor mtree: bin/snmpgetnext: Bad file descriptor mtree: bin/snmpinform: Bad file descriptor mtree: bin/snmpnetstat: Bad file descriptor mtree: bin/snmpset: Bad file descriptor mtree: bin/snmpstatus: Bad file descriptor mtree: man/man5/slapd-ldbm.5.gz: Bad file descriptor mtree: man/man5/slapd-ldif.5.gz: Bad file descriptor mtree: man/man5/slapd-meta.5.gz: Bad file descriptor mtree: man/man5/slapd-monitor.5.gz: Bad file descriptor mtree: man/man5/slapd-ndb.5.gz: Bad file descriptor mtree: man/man5/slapd-null.5.gz: Bad file descriptor mtree: man/man5/slapd-passwd.5.gz: Bad file descriptor mtree: man/man5/slapd-perl.5.gz: Bad file descriptor mtree: man/man5/slapd-relay.5.gz: Bad file descriptor mtree: man/man5/slapd-shell.5.gz: Bad file descriptor mtree: man/man5/slapd-sock.5.gz: Bad file descriptor Second: bob# cd /etc/periodic/security bob# ./100.chksetuid Checking setuid files and devices: find: fts_read: No such file or directory This seems to be really weird. Why I would be getting file descriptor and fts_read errors I'm not sure. Nothing in dmesg to indicate file system problems, however a running fsck shows: ** /dev/amrd0s1a (NO WRITE) ** Last Mounted on / ** Root file system ** Phase 1 - Check Blocks and Sizes 70568 DUP I=16768 70569 DUP I=16768 70570 DUP I=16768 70571 DUP I=16768 70572 DUP I=16768 70573 DUP I=16768 70574 DUP I=16768 70575 DUP I=16768 70576 DUP I=16768 70577 DUP I=16768 70578 DUP I=16768 EXCESSIVE DUP BLKS I=16768 CONTINUE? [yn] Any ideas anyone? Thanks! 
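[Those DUP blocks will not clear while / is mounted read-write; the usual repair is an offline fsck, roughly as follows — a sketch only, with the device name taken from the output above; details vary per system:]

```
# boot to single-user mode or a fixit CD so / is not in active use
fsck -y /dev/amrd0s1a   # answer yes to all repairs, including the DUP blocks
fsck -y /dev/amrd0s1a   # rerun until it reports the filesystem clean
```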
-Matt From mspitzer at gmail.com Fri May 22 09:19:06 2009 From: mspitzer at gmail.com (Marc Spitzer) Date: Fri, 22 May 2009 09:19:06 -0400 Subject: [nycbug-talk] Make package-recursive problem In-Reply-To: References: <1592882363-1242604820-cardhu_decombobulator_blackberry.rim.net-163190730-@bxe1247.bisx.prod.on.blackberry> Message-ID: <8c50a3c30905220619u7bb510efxb3d81cb986963c39@mail.gmail.com> Matt, It's not a bug with the ports system. It is doing what it should do: I am the nagios port and I am building nagios: 1: I need/depend on apache 2: apache is not installed on the system 3: I tell the apache port to build ... N: nagios is built/installed/packaged N+1: have a beer yea That is the default behavior of ports, and it is doing the right thing by installing what it was told by the port maintainer it needed installed as a precondition to build/install/package-up Now my memory is a bit fuzzy here, but I think you can tell ports to attempt to use binary packages first, then build. the thing is you need to set up your own package repository to make this work, it may be as simple as setting a var to point to /usr/ports/packages. Now there is another problem with ports, well port maintainers actually, such that ports will not build a package that will install correctly even when "make install" works. But that is a rant for another day. It also sounds like your problem is not so much ports as port maintainers. thanks, marc On Fri, May 22, 2009 at 12:53 AM, Matt Juszczak wrote: > I don't want this thread to die. If I'm correct, this is a huge bug in > the ports system. > > I've started noticing more and more that packages I build are missing > files after they are rebuilt. > > For instance, I have a build box (named atlantis for the sake of this > email): on atlantis, I build all packages with make package-recursive, and > then install them on all boxes. 
This works fine, except that over time, > as I compile more packages, the ports system re-generates packages for > existing built packages (for instance, if I make a nagios package, it > recreates the apache package since that's a dependency. If I then install > cacti, it recreates the apache package again). > > This is normally no big deal, as I haven't touched my source tree, config > options, or anything like that. 99% of the time the packages are rebuilt > consistently. However, since this point, I've had some php modules come > up empty (as in my original email), and now, I'm having some other flukes > as well. > > If you'll see below, somehow, fontconfig, mysql-client, and python25 got > out of whack between my build box and a production webserver. Yet these > are the same packages - I didn't change a thing, other than install them > at different times. But my build tree on atlantis has not been updated or > changed in any manner. This obviously occurred because make > package-recursive rebuilt these packages at some point because they were > dependencies for other packages being installed. Except that, obviously, > it didn't build the packages 100% identically to the time before: > > local$ sh check2.sh > barfy -> fontconfig-2.6.0,1 isn't right > barfy -> mysql-client-5.0.77_1 isn't right > barfy -> python25-2.5.4_1 isn't right > local$ sh check3.sh > Server 1: atlantis > Server 2: barfy > Package: fontconfig > Password: > Password: > 65,67d64 > < /usr/local/share/doc/fontconfig/fontconfig-user.html > < /usr/local/share/doc/fontconfig/fontconfig-user.pdf > < /usr/local/share/doc/fontconfig/fontconfig-user.txt > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > -- Freedom is nothing but a chance to be better. 
Albert Camus From mspitzer at gmail.com Fri May 22 09:22:43 2009 From: mspitzer at gmail.com (Marc Spitzer) Date: Fri, 22 May 2009 09:22:43 -0400 Subject: [nycbug-talk] bad file descriptor and fts_read error In-Reply-To: References: Message-ID: <8c50a3c30905220622y550d262y8f7765bdf1da0c4b@mail.gmail.com> fsck the box from CD then try it again, simplest way to get everything including root. It helps if you have a copy of /etc/fstab somewhere else to read during the process marc On Fri, May 22, 2009 at 4:32 AM, Matt Juszczak wrote: > Hi all, > > For some reason, on one of my FreeBSD jump boxes, which has been acting up > lately, I'm getting a few errors. This all caught my attention when I > started noticing some setuid errors in my nightly report (files were > disappearing as being set as setuid). > > First: > > bob# cd /usr/ports/packages/All && pkg_delete -f gettext-0.17_1 && > pkg_add gettext-0.17_1.tbz > mtree: bin/snmpbulkwalk: Bad file descriptor > mtree: bin/snmpcheck: Bad file descriptor > mtree: bin/snmpconf: Bad file descriptor > mtree: bin/snmpdelta: Bad file descriptor > mtree: bin/snmpdf: Bad file descriptor > mtree: bin/snmpget: Bad file descriptor > mtree: bin/snmpgetnext: Bad file descriptor > mtree: bin/snmpinform: Bad file descriptor > mtree: bin/snmpnetstat: Bad file descriptor > mtree: bin/snmpset: Bad file descriptor > mtree: bin/snmpstatus: Bad file descriptor > mtree: man/man5/slapd-ldbm.5.gz: Bad file descriptor > mtree: man/man5/slapd-ldif.5.gz: Bad file descriptor > mtree: man/man5/slapd-meta.5.gz: Bad file descriptor > mtree: man/man5/slapd-monitor.5.gz: Bad file descriptor > mtree: man/man5/slapd-ndb.5.gz: Bad file descriptor > mtree: man/man5/slapd-null.5.gz: Bad file descriptor > mtree: man/man5/slapd-passwd.5.gz: Bad file descriptor > mtree: man/man5/slapd-perl.5.gz: Bad file descriptor > mtree: man/man5/slapd-relay.5.gz: Bad file descriptor > mtree: man/man5/slapd-shell.5.gz: Bad file descriptor > mtree: man/man5/slapd-sock.5.gz: 
Bad file descriptor > > Second: > > bob# cd /etc/periodic/security > bob# ./100.chksetuid > > Checking setuid files and devices: > find: fts_read: No such file or directory > > > > > > This seems to be really weird. Why I would be getting file descriptor and > fts_read errors I'm not sure. Nothing in dmesg to indicate file system > problems, however a running fsck shows: > > ** /dev/amrd0s1a (NO WRITE) > ** Last Mounted on / > ** Root file system > ** Phase 1 - Check Blocks and Sizes > 70568 DUP I=16768 > 70569 DUP I=16768 > 70570 DUP I=16768 > 70571 DUP I=16768 > 70572 DUP I=16768 > 70573 DUP I=16768 > 70574 DUP I=16768 > 70575 DUP I=16768 > 70576 DUP I=16768 > 70577 DUP I=16768 > 70578 DUP I=16768 > EXCESSIVE DUP BLKS I=16768 > CONTINUE? [yn] > > > > > > Any ideas anyone? > > > Thanks! > > -Matt > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > -- Freedom is nothing but a chance to be better. Albert Camus From matt at atopia.net Fri May 22 11:49:05 2009 From: matt at atopia.net (Matt Juszczak) Date: Fri, 22 May 2009 11:49:05 -0400 (EDT) Subject: [nycbug-talk] bad file descriptor and fts_read error In-Reply-To: <8c50a3c30905220622y550d262y8f7765bdf1da0c4b@mail.gmail.com> References: <8c50a3c30905220622y550d262y8f7765bdf1da0c4b@mail.gmail.com> Message-ID: > fsck the box from CD then try it again, simplest way to get > everything including root. It helps if you have a copy of /etc/fstab > somewhere else to read during the process Can a corrupt file system be enough to cause those setuid errors? 
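[For context on the 100.chksetuid failure quoted above: the check is at heart a find(1) walk over the filesystem, which is why a corrupt directory aborts it mid-traversal with that fts_read error. A minimal sketch of the same kind of predicate against a scratch directory — paths here are hypothetical, not the real / scan:]

```shell
mkdir -p /tmp/suiddemo
touch /tmp/suiddemo/normal /tmp/suiddemo/suidbin
chmod 4755 /tmp/suiddemo/suidbin    # give one file the setuid bit

# the same sort of predicate the periodic script feeds to find(1)
find /tmp/suiddemo -type f \( -perm -4000 -o -perm -2000 \) -print
```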
From matt at atopia.net Fri May 22 11:54:56 2009 From: matt at atopia.net (Matt Juszczak) Date: Fri, 22 May 2009 11:54:56 -0400 (EDT) Subject: [nycbug-talk] Make package-recursive problem In-Reply-To: <8c50a3c30905220619u7bb510efxb3d81cb986963c39@mail.gmail.com> References: <1592882363-1242604820-cardhu_decombobulator_blackberry.rim.net-163190730-@bxe1247.bisx.prod.on.blackberry> <8c50a3c30905220619u7bb510efxb3d81cb986963c39@mail.gmail.com> Message-ID: > It's not a bug with the ports system. It is doing what it should do: > > I am the nagios port and I am building nagios: > 1: I need/depend on apache > 2: apache is not installed on the system > 3: I tell the apache port to build > ... > N: nagios is built/installed/packaged > N+1: have a beer yea > > That is the default behavior of ports, and it is doing the right thing > by installing what it was told by the port maintainer it needed > installed as a precondition to build/install/package-up I'm afraid there's a misunderstanding. I'm well aware that it's supposed to build dependencies. And I'm well aware that it's supposed to REBUILD packages already built if you make package-recursive a port that has shared dependencies with another port already built. Where my concern lies is that the packages are rebuilt with missing files! For example: I am the nagios port and I am building nagios: 1. I need/depend on apache 2. apache is not on the system 3. I tell the apache port to build ... N: nagios is built/installed/packaged THEN: I am the cacti port and I am building cacti: 1. I need/depend on apache 2. apache *is* on the system 3. I re-do the apache package at the end of my install (make package-recursive rebuilds packages) ... N. cacti is built/installed/packaged The newly overwritten apache package has the 2 startup scripts missing from /usr/local/etc/rc.d, but it had them prior to the nagios port rebuilding it. THAT's the issue. The behavior is unpredictable. 
Usually works the first time, doesn't work when the port is rebuilt. Only happens on some ports. So far, I've seen it only on: gettext python apache php5-mysql php5-pcre fontconfig Also, at this point, is there a way to fix dependencies if I removed a package item and added it again? For instance: pkg_add apache22 pkg_del -f pkg_add The package is now back, but the dependency on apache has been removed. Is the only way to reinstall the apache package? Is there a way to tell apache to only re-create its dependencies without actually re-installing files? -Matt From pete at nomadlogic.org Fri May 22 12:37:22 2009 From: pete at nomadlogic.org (Pete Wright) Date: Fri, 22 May 2009 09:37:22 -0700 Subject: [nycbug-talk] bad file descriptor and fts_read error In-Reply-To: References: <8c50a3c30905220622y550d262y8f7765bdf1da0c4b@mail.gmail.com> Message-ID: <04602883-9DDB-4646-B7A4-0BAAC89F56BF@nomadlogic.org> On 22-May-09, at 8:49 AM, Matt Juszczak wrote: >> fsck the box from CD then try it again, simplest way to get >> everything including root. It helps if you have a copy of /etc/fstab >> somewhere else to read durring the process > > Can a corrupt file system be enough to cause those setuid errors? i'd say that'd be symptomatic of a corrupt filesystem for sure. -pete From matt at atopia.net Fri May 22 12:42:31 2009 From: matt at atopia.net (Matt Juszczak) Date: Fri, 22 May 2009 12:42:31 -0400 (EDT) Subject: [nycbug-talk] bad file descriptor and fts_read error In-Reply-To: <04602883-9DDB-4646-B7A4-0BAAC89F56BF@nomadlogic.org> References: <8c50a3c30905220622y550d262y8f7765bdf1da0c4b@mail.gmail.com> <04602883-9DDB-4646-B7A4-0BAAC89F56BF@nomadlogic.org> Message-ID: > i'd say that'd be symptomatic of a corrupt filesystem for sure. > > -pete I have fsck_y_enable="YES" in rc.conf, so one would think it would fix on boot. 
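On Matt's dependency-registration question above: the old pkg_* tools record each package's metadata in /var/db/pkg/<pkgname>/+CONTENTS, where @pkgdep lines name the dependencies, so that file is the place to look when a forced pkg_delete/pkg_add has left things stale. A sketch using a fabricated database entry so it is safe to run anywhere (the package names below are just examples, not taken from this thread's boxes):

```shell
#!/bin/sh
# Fabricate a pkg database entry to show the @pkgdep bookkeeping that
# pkg_add records and that a forced delete/re-add can leave stale.
dbdir=$(mktemp -d)
mkdir -p "$dbdir/cacti-0.8.7"
cat > "$dbdir/cacti-0.8.7/+CONTENTS" <<'EOF'
@name cacti-0.8.7
@pkgdep apache-2.2.11
@pkgdep php5-5.2.9
EOF

# List what the package claims to depend on:
grep '^@pkgdep' "$dbdir/cacti-0.8.7/+CONTENTS" | awk '{print $2}'
# -> apache-2.2.11
#    php5-5.2.9

rm -rf "$dbdir"
```

On a real system the directory is /var/db/pkg, and the reverse mapping lives in each dependency's +REQUIRED_BY file; inspecting those two files shows whether the registration survived the forced delete.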
From billtotman at billtotman.com Fri May 22 13:23:43 2009 From: billtotman at billtotman.com (Bill Totman) Date: Fri, 22 May 2009 13:23:43 -0400 Subject: [nycbug-talk] bad file descriptor and fts_read error In-Reply-To: References: <8c50a3c30905220622y550d262y8f7765bdf1da0c4b@mail.gmail.com> <04602883-9DDB-4646-B7A4-0BAAC89F56BF@nomadlogic.org> Message-ID: <98e9d1d30905221023w5afe68b8md510a5cec5c519c1@mail.gmail.com> While investigating this issue you might want to check out 'smartmontools'. You have to have SMART enabled hard drive(s) (I think all are today) but it can be helpful in warning of impending drive failure. -Bill Totman On Fri, May 22, 2009 at 12:42 PM, Matt Juszczak wrote: > > i'd say that'd be symptomatic of a corrupt filesystem for sure. > > > > -pete > > I have fsck_y_enable="YES" in rc.conf, so one would think it would fix on > boot. > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at atopia.net Fri May 22 13:25:44 2009 From: matt at atopia.net (Matt Juszczak) Date: Fri, 22 May 2009 13:25:44 -0400 (EDT) Subject: [nycbug-talk] bad file descriptor and fts_read error In-Reply-To: <98e9d1d30905221023w5afe68b8md510a5cec5c519c1@mail.gmail.com> References: <8c50a3c30905220622y550d262y8f7765bdf1da0c4b@mail.gmail.com> <04602883-9DDB-4646-B7A4-0BAAC89F56BF@nomadlogic.org> <98e9d1d30905221023w5afe68b8md510a5cec5c519c1@mail.gmail.com> Message-ID: Hi Bill, > While investigating this issue you might want to check out 'smartmontools'. I've used that tool before. But forgot about it. Thanks for reminding me. 
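A note on reading smartmontools output: the attributes that most reliably precede outright drive failure are the reallocated and pending sector counts, so those raw values are the ones to watch. A sketch of pulling them out of an attribute table (the sample table below is fabricated for illustration, not captured from a real drive):

```shell
#!/bin/sh
# Fabricated excerpt of `smartctl -A` output, for illustration only.
f=$(mktemp)
cat > "$f" <<'EOF'
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   8
EOF

# Print attribute name and raw value for the two failure predictors.
awk '$2 ~ /^(Reallocated_Sector_Ct|Current_Pending_Sector)$/ {print $2, $NF}' "$f"
# -> Reallocated_Sector_Ct 0
#    Current_Pending_Sector 8

rm -f "$f"
```

A nonzero and growing raw value on either attribute is the classic "replace this disk soon" signal.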
I'll consider getting it up and running when time permits =( From carton at Ivy.NET Fri May 22 13:57:42 2009 From: carton at Ivy.NET (Miles Nordin) Date: Fri, 22 May 2009 13:57:42 -0400 Subject: [nycbug-talk] bad file descriptor and fts_read error In-Reply-To: <98e9d1d30905221023w5afe68b8md510a5cec5c519c1@mail.gmail.com> (Bill Totman's message of "Fri, 22 May 2009 13:23:43 -0400") References: <8c50a3c30905220622y550d262y8f7765bdf1da0c4b@mail.gmail.com> <04602883-9DDB-4646-B7A4-0BAAC89F56BF@nomadlogic.org> <98e9d1d30905221023w5afe68b8md510a5cec5c519c1@mail.gmail.com> Message-ID: >>>>> "bt" == Bill Totman writes: bt> it can be helpful in warning of impending drive failure. agree, very important tool, which is utterly and shamelessly missing from Solaris. also see: http://www.usenix.org/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram.pdf http://www.cs.wisc.edu/adsl/Publications/latent-sigmetrics07.ps http://labs.google.com/papers/disk_failures.html http://pages.cs.wisc.edu/~krioukov/Krioukov-ParityLost.pdf -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From akosela at andykosela.com Fri May 22 15:35:09 2009 From: akosela at andykosela.com (Andy Kosela) Date: Fri, 22 May 2009 21:35:09 +0200 Subject: [nycbug-talk] bad file descriptor and fts_read error In-Reply-To: References: Message-ID: <4a16fe6d.2ymGfTmDk/PRfQZH%akosela@andykosela.com> Matt Juszczak wrote: > Hi all, > > For some reason, on one of my FreeBSD jump boxes, which has been acting up > lately, I'm getting a few errors. This all caught my attention when I > started noticing some setuid errors in my nightly report (files were > disappearing as being set as setuid). 
> > First: > > bob# cd /usr/ports/packages/All && pkg_delete -f gettext-0.17_1 && > pkg_add gettext-0.17_1.tbz > mtree: bin/snmpbulkwalk: Bad file descriptor > mtree: bin/snmpcheck: Bad file descriptor > mtree: bin/snmpconf: Bad file descriptor > mtree: bin/snmpdelta: Bad file descriptor > mtree: bin/snmpdf: Bad file descriptor > mtree: bin/snmpget: Bad file descriptor > mtree: bin/snmpgetnext: Bad file descriptor > mtree: bin/snmpinform: Bad file descriptor > mtree: bin/snmpnetstat: Bad file descriptor > mtree: bin/snmpset: Bad file descriptor > mtree: bin/snmpstatus: Bad file descriptor > mtree: man/man5/slapd-ldbm.5.gz: Bad file descriptor > mtree: man/man5/slapd-ldif.5.gz: Bad file descriptor > mtree: man/man5/slapd-meta.5.gz: Bad file descriptor > mtree: man/man5/slapd-monitor.5.gz: Bad file descriptor > mtree: man/man5/slapd-ndb.5.gz: Bad file descriptor > mtree: man/man5/slapd-null.5.gz: Bad file descriptor > mtree: man/man5/slapd-passwd.5.gz: Bad file descriptor > mtree: man/man5/slapd-perl.5.gz: Bad file descriptor > mtree: man/man5/slapd-relay.5.gz: Bad file descriptor > mtree: man/man5/slapd-shell.5.gz: Bad file descriptor > mtree: man/man5/slapd-sock.5.gz: Bad file descriptor > > This seems to be really weird. Why I would be getting file descriptor and > fts_read errors I'm not sure. Nothing in dmesg to indicate file system > problems, however a running fdisk shows: > > ** /dev/amrd0s1a (NO WRITE) > ** Last Mounted on / > ** Root file system > ** Phase 1 - Check Blocks and Sizes > 70568 DUP I=16768 > 70569 DUP I=16768 > 70570 DUP I=16768 > 70571 DUP I=16768 > 70572 DUP I=16768 > 70573 DUP I=16768 > 70574 DUP I=16768 > 70575 DUP I=16768 > 70576 DUP I=16768 > 70577 DUP I=16768 > 70578 DUP I=16768 > EXCESSIVE DUP BLKS I=16768 > CONTINUE? [yn] It seems that some of your files are referencing the same data block. Check those files using fsdb(8) and then fsck(8) the file system from CD. Did you have a crash on this box recently? 
These type of problems indicate that your file system is corrupt. --Andy From spork at bway.net Fri May 22 19:19:08 2009 From: spork at bway.net (Charles Sprickman) Date: Fri, 22 May 2009 19:19:08 -0400 (EDT) Subject: [nycbug-talk] Make package-recursive problem In-Reply-To: References: <1592882363-1242604820-cardhu_decombobulator_blackberry.rim.net-163190730-@bxe1247.bisx.prod.on.blackberry> <8c50a3c30905220619u7bb510efxb3d81cb986963c39@mail.gmail.com> Message-ID: On Fri, 22 May 2009, Matt Juszczak wrote: > For example: > > I am the nagios port and I am building nagios: > 1. I need/depend on apachge > 2. apache is not on the system > 3. I tell the apache port to build > ... > N: nagios is built/installed/packaged > > THEN: > > I am the cacti port and I am building cacti: > 1. I need/depend on apache > 2. apache *is* on the system > 3. I re-do the apache package at the end of my install (make > package-recursive rebuilds packages) > ... > N. cacti is build/installed/packaged > > > The newly overwritten apache packages has the 2 start up scripts missing > from it from /usr/local/etc/rc.d, but it had them prior to the nagios port > rebuilding it. I've got this behavior on a 7.1 box. I'm doing roughly the same thing - building in a jail that's just for building packages and then copying them into ezjail's pkg directory. Exact same behavior - had to copy the apache startup script over by hand since it was missing from the package. I believe I've seen this elsewhere as well, but I've pretty much given up on the issue. If you bring this up on one of the freebsd lists, let me know and I'll chime in with a "me too". Charles > THAT's the issue. The behavior is unpredictable. Usually works the first > time, doesn't work when the port is rebuilt. Only happens on some ports. 
> So far, I've seen it only on: > > gettext > python > apache > php5-mysql > php5-pcre > fontconfig > > Also, at this point, is there a way to fix dependencies if I removed a > package item and added it again? For instance: > > pkg_add apache22 > > > > pkg_del -f > pkg_add > > The package is now back, but the dependency on apache has been removed. > Is the only way to reinstall the apache package? Is there a way to tell > apache to only re-create its dependencies without actually re-installing > files? > > -Matt > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From matt at atopia.net Fri May 22 19:22:04 2009 From: matt at atopia.net (Matt Juszczak) Date: Fri, 22 May 2009 19:22:04 -0400 (EDT) Subject: [nycbug-talk] Make package-recursive problem In-Reply-To: References: <1592882363-1242604820-cardhu_decombobulator_blackberry.rim.net-163190730-@bxe1247.bisx.prod.on.blackberry> <8c50a3c30905220619u7bb510efxb3d81cb986963c39@mail.gmail.com> Message-ID: > If you bring this up on one of the freebsd lists, let me know and I'll chime > in with a "me too". > > Charles Hi Charles, Was posted to freebsd-ports this morning. =) http://lists.freebsd.org/pipermail/freebsd-ports/2009-May/054764.html A "me too" would be great! If we don't get a reply, I guess I'll move up the ladder to the more common -questions. -Matt From george at ceetonetechnology.com Fri May 22 23:50:25 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Fri, 22 May 2009 23:50:25 -0400 Subject: [nycbug-talk] Intel ESB2 RAID Card Message-ID: <4A177281.2080805@ceetonetechnology.com> It doesn't officially support FBSD, and I have a 3Ware as backup, but has anyone used this card for hardware RAID? I see mostly "no's" from a quick search. . . 
g From brian.gupta at gmail.com Sat May 23 00:15:30 2009 From: brian.gupta at gmail.com (Brian Gupta) Date: Sat, 23 May 2009 00:15:30 -0400 Subject: [nycbug-talk] Audit Solution In-Reply-To: <8c50a3c30905200441r48ccbef2k71d0d1a6636e50ba@mail.gmail.com> References: <200905192206.n4JM670N027070@fulton.nycbug.org> <101593797-1242776721-cardhu_decombobulator_blackberry.rim.net-174332611-@bxe1289.bisx.prod.on.blackberry> <8c50a3c30905192006n39eb5c36j5b2f4951bafaf144@mail.gmail.com> <8c50a3c30905200441r48ccbef2k71d0d1a6636e50ba@mail.gmail.com> Message-ID: <5b5090780905222115k5e3e86bob0b0468461196137@mail.gmail.com> I've used Simple Event Correlator in the past for this. http://www.estpak.ee/~risto/sec/ It's basically swatch on crack. - Brian Gupta New York City user groups calendar: http://nyc.brandorr.com/ On Wed, May 20, 2009 at 7:41 AM, Marc Spitzer wrote: > On Wed, May 20, 2009 at 1:57 AM, Matt Juszczak wrote: >>> splunk >> >> splunk allows searching right? >> > > yes it does, it also does graphing of results. > > marc > > > -- > Freedom is nothing but a chance to be better. > Albert Camus > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From isaac at diversaform.com Sat May 23 16:31:31 2009 From: isaac at diversaform.com (Isaac Levy) Date: Sat, 23 May 2009 16:31:31 -0400 Subject: [nycbug-talk] Intel ESB2 RAID Card In-Reply-To: <4A177281.2080805@ceetonetechnology.com> References: <4A177281.2080805@ceetonetechnology.com> Message-ID: <01C7D221-7BD9-47AE-B922-BA5939116202@diversaform.com> On May 22, 2009, at 11:50 PM, George Rosamond wrote: > It doesn't officially support FBSD, and I have a 3Ware as backup, but > has anyone use this card for hardware RAID? > > I see mostly "no's" from a quick search. . . > > g :( Out of curiosity, did it boot/install? 
Rocket- .ike From isaac at diversaform.com Sat May 23 16:40:33 2009 From: isaac at diversaform.com (Isaac Levy) Date: Sat, 23 May 2009 16:40:33 -0400 Subject: [nycbug-talk] DEFCON 17, reminder, time to book rooms! Message-ID: Hi All, Just a reminder- it is about time to get rooms at the Riviera: (Still plenty of time to get plane tix.) "DEFCON 17 will be held July 31 - August 2, 2009, at the Riviera Hotel and Casino in Las Vegas! Admission is $120 USD at the door." http://defcon.org/ Anyone else going? Some ike-bits on DC: A conference where if you don't have a drink in your hand, you're a fed. Defcon is the largest hacker conference in the world, the entire Riviera Casino is taken over by the event- a great time. Talk quality varies, (like any con), there's always tons of stuff that blows my mind every year. Last year had like 7 tracks?(!)? CTF is always my favorite thing to watch, yet the Kenshoto crew is not running it this year, so who knows... As with most cons, DC is most fun to be there with people you know/like/trust etc..., it's especially a place to be paranoid, (open network and hacking war zone), so it can be *very* lonely or strange to go alone. Would love to be there with a huge crew of NYC*BUG folks this year! 
Rocket- .ike From okan at demirmen.com Sat May 23 16:58:00 2009 From: okan at demirmen.com (Okan Demirmen) Date: Sat, 23 May 2009 16:58:00 -0400 Subject: [nycbug-talk] Audit Solution In-Reply-To: <5b5090780905222115k5e3e86bob0b0468461196137@mail.gmail.com> References: <200905192206.n4JM670N027070@fulton.nycbug.org> <101593797-1242776721-cardhu_decombobulator_blackberry.rim.net-174332611-@bxe1289.bisx.prod.on.blackberry> <8c50a3c30905192006n39eb5c36j5b2f4951bafaf144@mail.gmail.com> <8c50a3c30905200441r48ccbef2k71d0d1a6636e50ba@mail.gmail.com> <5b5090780905222115k5e3e86bob0b0468461196137@mail.gmail.com> Message-ID: <20090523205800.GZ6757@clam.khaoz.org> On Sat 2009.05.23 at 00:15 -0400, Brian Gupta wrote: > I've used Simple Event Correlator in the past for this. > http://www.estpak.ee/~risto/sec/ It's basically swatch on crack. i'll second using sec, as well as risto's logpp. From george at ceetonetechnology.com Sat May 23 20:32:41 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Sat, 23 May 2009 20:32:41 -0400 Subject: [nycbug-talk] Intel ESB2 RAID Card In-Reply-To: <01C7D221-7BD9-47AE-B922-BA5939116202@diversaform.com> References: <4A177281.2080805@ceetonetechnology.com> <01C7D221-7BD9-47AE-B922-BA5939116202@diversaform.com> Message-ID: <4A1895A9.5010004@ceetonetechnology.com> Isaac Levy wrote: > On May 22, 2009, at 11:50 PM, George Rosamond wrote: > >> It doesn't officially support FBSD, and I have a 3Ware as backup, but >> has anyone use this card for hardware RAID? >> >> I see mostly "no's" from a quick search. . . >> >> g > > > :( > > Out of curiosity, did it boot/install? stuck with the hpt rocketraid. . . and all good :) vendor (gcs) had configured already with that, so didn't even try. g From spork at bway.net Sat May 23 22:33:48 2009 From: spork at bway.net (Charles Sprickman) Date: Sat, 23 May 2009 22:33:48 -0400 (EDT) Subject: [nycbug-talk] BSD News Sites? Message-ID: Just wondering, is there anything out there besides bsd.slashdot.org? 
bsdnews.com seems to be dead, kerneltrap, while linux-centric always had the occasional *BSD news but nothing much happening there now. I'm particularly interested in something as detailed as undeadly, but encompassing other *BSDs. Even something that summarizes interesting mailing list threads would be nice. Charles ___ Charles Sprickman NetEng/SysAdmin Bway.net - New York's Best Internet - www.bway.net spork at bway.net - 212.655.9344 From spork at bway.net Sat May 23 22:52:43 2009 From: spork at bway.net (Charles Sprickman) Date: Sat, 23 May 2009 22:52:43 -0400 (EDT) Subject: [nycbug-talk] Trusted HVAC vendors In-Reply-To: References: Message-ID: On Tue, 12 May 2009, Charles Sprickman wrote: > Hello all, > > Looking for an HVAC contractor with experience installing air conditioning > for a small server room. Anyone here have direct experience with someone > trustworthy in that space? Following up to my own post: We did a 180 on this. I had a Liebert sales guy come out and give a quote on their "DataMate" and "MiniMate" systems. Not too bad, but still out of our price range when factoring in the install costs. He turned out to be a very cool guy though who pointed me to a few companies that sell used/refurb equipment. I can't state strongly enough how impressed I was with Liebert sales. I was expecting the "f*** off" treatment we get from Cisco and Juniper, but ended up with very much the opposite. I'm working with someone here right now: http://www.criticalpower.com/ And both the Liebert guy and Critical Power recommended BP Air in queens: http://www.bpair.com/ Minimally-used Liebert stuff is very cheap. 5 Ton MiniMate is under $5K with a one year warranty. 
Thanks, Charles > Thanks, > > Charles > > ___ > Charles Sprickman > NetEng/SysAdmin > Bway.net - New York's Best Internet - www.bway.net > spork at bway.net - 212.655.9344 > > From dave at donnerjack.com Sun May 24 02:14:47 2009 From: dave at donnerjack.com (David Lawson) Date: Sun, 24 May 2009 02:14:47 -0400 Subject: [nycbug-talk] Trusted HVAC vendors In-Reply-To: References: Message-ID: <0A477638-F458-41CB-B1A9-CA9ADCC74D3C@donnerjack.com> Liebert is, distinctly, the gold standard in datacenter cooling, at least in my experience. The various datacenters I've worked in/with have all had them and I've always been super happy with all my interactions with them. --Dave On May 23, 2009, at 10:52 PM, Charles Sprickman wrote: > On Tue, 12 May 2009, Charles Sprickman wrote: > >> Hello all, >> >> Looking for an HVAC contractor with experience installing air >> conditioning >> for a small server room. Anyone here have direct experience with >> someone >> trustworthy in that space? > > Following up to my own post: > > We did a 180 on this. I had a Liebert sales guy come out and give a > quote > on their "DataMate" and "MiniMate" systems. Not too bad, but still > out of > our price range when factoring in the install costs. He turned out > to be > a very cool guy though who pointed me to a few companies that sell > used/refurb equipment. I can't state strongly enough how impressed > I was > with Liebert sales. I was expecting the "f*** off" treatment we get > from > Cisco and Juniper, but ended up with very much the opposite. > > I'm working with someone here right now: > > http://www.criticalpower.com/ > > And both the Liebert guy and Critical Power recommended BP Air in > queens: > > http://www.bpair.com/ > > Minimally-used Liebert stuff is very cheap. 5 Ton MiniMate is under > $5K > with a one year warranty. 
> > Thanks, > > Charles > >> Thanks, >> >> Charles >> >> ___ >> Charles Sprickman >> NetEng/SysAdmin >> Bway.net - New York's Best Internet - www.bway.net >> spork at bway.net - 212.655.9344 >> >> > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From matt at atopia.net Sun May 24 15:43:25 2009 From: matt at atopia.net (Matt Juszczak) Date: Sun, 24 May 2009 15:43:25 -0400 (EDT) Subject: [nycbug-talk] OT: OpenLDAP 23 client with 24 server? Message-ID: Hi All, A bit off topic, but I'm hoping someone knows the answer on the list. Most of our boxes are FreeBSD. FreeBSD has ports for openldap22, openldap23, and openldap24. Not using slurpd much anymore in my setups, I decided to run with openldap24 in our recent setup. Setup openldap24 server, and all the FreeBSD clients have openldap24 clients. Everything is working well. Recently, we had to introduce a few RHEL boxes into our setup. We're pointing to the redhat repositories, but they seem to only have openldap23-* client packages. I know I could potentially make my own packages, or perhaps get RPM's from the Internet, but I was wondering if by some chance openldap23 clients (and pam_ldap/nss_ldap libraries) are compatible with openldap24 servers? I would assume the other way around (openldap24 clients with openldap23 servers) would work fine. I will admit I cross-posted this entry to the ldap mailing list as well to get an expedited answer, as I'm running a bit tight on time, but I don't usually do this, so my apologies if someone received this twice. Thanks! 
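On the wire, the 2.3 and 2.4 client libraries both speak LDAPv3, so basic pam_ldap/nss_ldap lookups from the RHEL boxes against a 2.4 server should generally work; the version differences are mostly server-side features. A minimal sketch of the RHEL-style /etc/ldap.conf those openldap23-era client packages read (host, base DN, and bind credentials below are all placeholders, not values from this thread):

```
# /etc/ldap.conf -- pam_ldap/nss_ldap client settings (placeholders;
# point them at the openldap24 server)
host ldap.example.com
base dc=example,dc=com
ldap_version 3
binddn cn=proxy,dc=example,dc=com
bindpw secret
pam_password md5
nss_base_passwd ou=People,dc=example,dc=com?one
nss_base_group  ou=Group,dc=example,dc=com?one
```

A plain `ldapsearch -x` from one of the RHEL clients against the 2.4 server is the quickest sanity check before wiring up PAM/NSS.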
-Matt From george at ceetonetechnology.com Mon May 25 09:40:44 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Mon, 25 May 2009 09:40:44 -0400 Subject: [nycbug-talk] site with command line scripts Message-ID: <4A1A9FDC.6070502@ceetonetechnology.com> Caught this on fbsd chat http://www.commandlinefu.com/ g From drulavigne at sympatico.ca Mon May 25 09:43:17 2009 From: drulavigne at sympatico.ca (Dru Lavigne) Date: Mon, 25 May 2009 13:43:17 +0000 Subject: [nycbug-talk] article for BSD mag? Message-ID: Anyone have the time and interest to write an article for the upcoming issue of BSD mag? The article should be on installing and configuring OpenBSD 4.5 and the draft is due June 6th. If you're interested, ping me off list and I'll put you in touch with Karolina, the editor. Cheers, Dru -------------- next part -------------- An HTML attachment was scrubbed... URL: From akosela at andykosela.com Mon May 25 11:56:19 2009 From: akosela at andykosela.com (Andy Kosela) Date: Mon, 25 May 2009 17:56:19 +0200 Subject: [nycbug-talk] site with command line scripts In-Reply-To: <4A1A9FDC.6070502@ceetonetechnology.com> References: <4A1A9FDC.6070502@ceetonetechnology.com> Message-ID: <4a1abfa3.1EjUMBM1zW+6AoQ1%akosela@andykosela.com> George Rosamond wrote: > Caught this on fbsd chat > > http://www.commandlinefu.com/ Yeah, I have been aware of that site for quite some time now. Although most of the commands are Linux/bash oriented I found there some really nice jewels. --Andy From pete at nomadlogic.org Mon May 25 12:44:14 2009 From: pete at nomadlogic.org (Pete Wright) Date: Mon, 25 May 2009 09:44:14 -0700 Subject: [nycbug-talk] OT: OpenLDAP 23 client with 24 server? In-Reply-To: References: Message-ID: On 24-May-09, at 12:43 PM, Matt Juszczak wrote: > Hi All, > > A bit off topic, but I'm hoping someone knows the answer on the list. > > Most of our boxes are FreeBSD. FreeBSD has ports for openldap22, > openldap23, and openldap24. 
Not using slurpd much anymore in my > setups, I > decided to run with openldap24 in our recent setup. Setup openldap24 > server, and all the FreeBSD clients have openldap24 clients. > Everything > is working well. > > Recently, we had to introduce a few RHEL boxes into our setup. We're > pointing to the redhat repositories, but they seem to only have > openldap23-* client packages. I know I could potentially make my own > packages, or perhaps get RPM's from the Internet, but I was > wondering if > by some chance openldap23 clients (and pam_ldap/nss_ldap libraries) > are > compatible with openldap24 servers? I would assume the other way > around > (openldap24 clients with openldap23 servers) would work fine. > matt - it might be worth checking these alternative repos: EPEL (extra packages for enterprise linux): http://fedoraproject.org/wiki/EPEL and Dag's Yum Repo: http://dag.wieers.com/rpm/ sorry to lazy to check the versions they have - but one of them should have more recent builds of the openldap clients... > hth -p -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan at langille.org Mon May 25 19:46:37 2009 From: dan at langille.org (Dan Langille) Date: Mon, 25 May 2009 19:46:37 -0400 Subject: [nycbug-talk] BSD News Sites? In-Reply-To: References: Message-ID: <4A1B2DDD.8090504@langille.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Charles Sprickman wrote: > Just wondering, is there anything out there besides bsd.slashdot.org? > > bsdnews.com seems to be dead, kerneltrap, while linux-centric always had > the occasional *BSD news but nothing much happening there now. > > I'm particularly interested in something as detailed as undeadly, but > encompassing other *BSDs. Even something that summarizes interesting > mailing list threads would be nice. 
The PostgreSQL project does something nice: http://www.postgresql.org/community/weeklynews/ It's a smaller project, so doing this for FreeBSD, let alone the other BSDs, will probably involve more work. All it takes is one person to start it. Others will join. - -- Dan Langille BSDCan - The Technical BSD Conference : http://www.bsdcan.org/ PGCon - The PostgreSQL Conference: http://www.pgcon.org/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.11 (FreeBSD) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAkobLd0ACgkQCgsXFM/7nTzQ6ACgiMhAEfCRm4dw3tyYixB8bGOc /zMAoMGzRuSPUlKUFZ2qGWcP+6eayPC5 =5wRc -----END PGP SIGNATURE----- From netmantej at gmail.com Mon May 25 20:50:56 2009 From: netmantej at gmail.com (tim jacques) Date: Mon, 25 May 2009 20:50:56 -0400 Subject: [nycbug-talk] BSD News Sites? In-Reply-To: References: Message-ID: <1aa60f4d0905251750i438c31baucadf36e0f9effe1f@mail.gmail.com> On Sat, May 23, 2009 at 10:33 PM, Charles Sprickman wrote: > Just wondering, is there anything out there besides bsd.slashdot.org? > > bsdnews.com seems to be dead, kerneltrap, while linux-centric always had > the occasional *BSD news but nothing much happening there now. > > I'm particularly interested in something as detailed as undeadly, but > encompassing other *BSDs. Even something that summarizes interesting > mailing list threads would be nice. > > Charles > > ___ > Charles Sprickman > NetEng/SysAdmin > Bway.net - New York's Best Internet - www.bway.net > spork at bway.net - 212.655.9344 > > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > --------------------------------------------------------------------------------------------------- Good afternoon all. Here are a few that I look at often: www.freebsdnews.net forums.freebsd.org bsdtalk.blogspot.com www.unix.com Tim .. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spork at bway.net Mon May 25 22:10:35 2009 From: spork at bway.net (Charles Sprickman) Date: Mon, 25 May 2009 22:10:35 -0400 (EDT) Subject: [nycbug-talk] BSD News Sites? In-Reply-To: <1aa60f4d0905251750i438c31baucadf36e0f9effe1f@mail.gmail.com> References: <1aa60f4d0905251750i438c31baucadf36e0f9effe1f@mail.gmail.com> Message-ID: On Mon, 25 May 2009, tim jacques wrote: > On Sat, May 23, 2009 at 10:33 PM, Charles Sprickman wrote: > >> Just wondering, is there anything out there besides bsd.slashdot.org? >> >> bsdnews.com seems to be dead, kerneltrap, while linux-centric always had >> the occasional *BSD news but nothing much happening there now. >> >> I'm particularly interested in something as detailed as undeadly, but >> encompassing other *BSDs. Even something that summarizes interesting >> mailing list threads would be nice. >> >> Charles > > --------------------------------------------------------------------------------------------------- > > Good afternoon all . > > Here are a few that I look at often: > > www.freebsdnews.net Wow. I hate the blog format for news (ie: it's flat), but that guy is mighty thorough. I'll watch that one. Not much on the other *BSD projects though (I really miss my Dragonfly news, which KernelTrap used to cover well). > forums.freebsd.org I had no idea they even had official forums. :) > bsdtalk.blogspot.com A great project would be to transcribe those. Thanks for all the links, very much appreciated. Charles > www.unix.com > > Tim .. > From dcolish at gmail.com Mon May 25 22:17:41 2009 From: dcolish at gmail.com (Dan Colish) Date: Mon, 25 May 2009 19:17:41 -0700 Subject: [nycbug-talk] BSD News Sites? 
In-Reply-To: References: <1aa60f4d0905251750i438c31baucadf36e0f9effe1f@mail.gmail.com> Message-ID: <20090526021741.GA16182@BIGGLE.spiretech.com> check out undeadly.org From spork at bway.net Mon May 25 22:27:07 2009 From: spork at bway.net (Charles Sprickman) Date: Mon, 25 May 2009 22:27:07 -0400 (EDT) Subject: [nycbug-talk] BSD News Sites? In-Reply-To: <20090526021741.GA16182@BIGGLE.spiretech.com> References: <1aa60f4d0905251750i438c31baucadf36e0f9effe1f@mail.gmail.com> <20090526021741.GA16182@BIGGLE.spiretech.com> Message-ID: On Mon, 25 May 2009, Dan Colish wrote: > check out undeadly.org Love it, especially the hackathon coverage. "new ports" stories... not so much. Looking for more of the same covering all BSDs. Thanks, Charles > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From dcolish at gmail.com Mon May 25 22:41:25 2009 From: dcolish at gmail.com (Dan Colish) Date: Mon, 25 May 2009 19:41:25 -0700 Subject: [nycbug-talk] BSD News Sites? In-Reply-To: References: <1aa60f4d0905251750i438c31baucadf36e0f9effe1f@mail.gmail.com> <20090526021741.GA16182@BIGGLE.spiretech.com> Message-ID: <20090526024125.GB16182@BIGGLE.spiretech.com> > >> check out undeadly.org > > Love it, especially the hackathon coverage. "new ports" stories... not > so much. Looking for more of the same covering all BSDs. FWIW, I think there is plenty of space left for BSD advocacy. I like hearing the ways BSD is used in the field, so-to-speak. From matt at atopia.net Mon May 25 22:55:38 2009 From: matt at atopia.net (Matt Juszczak) Date: Mon, 25 May 2009 22:55:38 -0400 (EDT) Subject: [nycbug-talk] BSD News Sites? 
In-Reply-To: <20090526024125.GB16182@BIGGLE.spiretech.com> References: <1aa60f4d0905251750i438c31baucadf36e0f9effe1f@mail.gmail.com> <20090526021741.GA16182@BIGGLE.spiretech.com> <20090526024125.GB16182@BIGGLE.spiretech.com> Message-ID: > FWIW, I think there is plenty of space left for BSD advocacy. I like > hearing the ways BSD is used in the field, so-to-speak. I run bsdjobs.net. I don't really do much advertising of the site at all, but the last job added was on April 17th. The site is fairly clean and I wouldn't mind expanding it, if anyone wanted to assist in potentially creating some bsd news areas that potentially focus on enterprise freebsd use (workplace, consulting, etc.). Would only be one area, but I feel like it's an area people could really benefit from. I have and always will be open to turning bsdjobs.net into a 100% collaborative project (CVS/SVN, CMS, etc.) if others were interested. I would also gladly turn the domain name over to NYCBUG or any other organization that wanted to take part in expanding the site. -Matt From skreuzer at exit2shell.com Tue May 26 12:09:56 2009 From: skreuzer at exit2shell.com (Steven Kreuzer) Date: Tue, 26 May 2009 12:09:56 -0400 Subject: [nycbug-talk] BSD News Sites? In-Reply-To: References: Message-ID: On May 23, 2009, at 10:33 PM, Charles Sprickman wrote: > Just wondering, is there anything out there besides bsd.slashdot.org? > > bsdnews.com seems to be dead, kerneltrap, while linux-centric always > had > the occasional *BSD news but nothing much happening there now. > > I'm particularly interested in something as detailed as undeadly, but > encompassing other *BSDs. Even something that summarizes interesting > mailing list threads would be nice. 
http://planet.freebsdish.org/ - Aggregates freebsd developer blogs from all over the place http://ivoras.sharanet.org/freebsd/freebsd8.html - Documents changes that will be included in FreeBSD 8 http://www.youtube.com/bsdconferences - Videos of talks from various BSD conferences -- Steven Kreuzer http://www.exit2shell.com/~skreuzer From marylynn at blueskystudios.com Tue May 26 14:20:34 2009 From: marylynn at blueskystudios.com (ML Kirby) Date: Tue, 26 May 2009 14:20:34 -0400 Subject: [nycbug-talk] Freebsd and openldap 2.4 Message-ID: <4A1C32F2.8040402@blueskystudios.com> Hello all, Are there folks out there with experience doing sync replication on openldap 2.4? I have installed the port and confirmed that the syncprov option was checked when it was installed. In version 2.3 and below you had to put the line moduleload syncprov.la into your slapd.conf. The official sync replication docs on openldap.org for 2.4 don't show the need for that module to be loaded, however there are other docs I've been reading that insist you do have to include it. It is not however in the modulepath directory. Has anyone had experience upgrading to 2.4 and can confirm that you no longer have to explicitly load that module? Or is this a possible broken port? 
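[Editor's note on the syncprov question above: whether the moduleload line is needed depends on how slapd was built. If the port compiled the syncprov overlay statically into slapd (a common build choice when the option is checked), nothing appears in the modulepath directory and no moduleload is required; only a dynamic-module build needs it. A minimal provider-side sketch for 2.4 follows — the suffix, backend, and paths are placeholder assumptions, not taken from this thread:]

```
# dynamic-module builds only; static builds omit these two lines
modulepath      /usr/local/libexec/openldap
moduleload      syncprov.la

database        bdb
suffix          "dc=example,dc=com"

overlay         syncprov
syncprov-checkpoint     100 10
syncprov-sessionlog     100
```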
Thanks in advance, Mary Lynn -- Mary Lynn Kirby UNIX/Networking Systems Administrator Blue Sky Studios From george at ceetonetechnology.com Wed May 27 13:56:26 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Wed, 27 May 2009 13:56:26 -0400 Subject: [nycbug-talk] Internet Week NY Message-ID: <4A1D7ECA.30302@ceetonetechnology.com> A little birdie told me about this http://internetweekny.com/ g From matt at atopia.net Wed May 27 15:59:11 2009 From: matt at atopia.net (Matt Juszczak) Date: Wed, 27 May 2009 15:59:11 -0400 (EDT) Subject: [nycbug-talk] New Setup Questions In-Reply-To: <5b5090780904182056n41ac6997sd3a8414e9e12eb52@mail.gmail.com> References: <5b5090780904171937m46735746q8a34a8db72732ba9@mail.gmail.com> <5b5090780904172349y27e0eb24ycccc002bffd3a46c@mail.gmail.com> <5b5090780904182054j2fc428ffl66c3b8fcd584015a@mail.gmail.com> <5b5090780904182056n41ac6997sd3a8414e9e12eb52@mail.gmail.com> Message-ID: Hi all, Two more questions for everyone. Just an update a month later - we've been working hard on the migration project, but we're coming down to the wire. I've gotten everything setup nicely - internal DNS, everything uses LDAP, I'm using package distribution (via NFS) with custom FreeBSD packages, everything is in sync and config'd the same, tuned, etc. The two things I haven't been able to complete that I wanted to are: - A good dev environment - using puppet At this point, I need to find a temporary solution for us to keep our webserver configuration and code in sync. For now, I was thinking of: - configuring puppet on the webs so that /usr/local/etc/apache22 is managed 100% by puppet (since its 100% identical across all webservers). The other option would just be to temporarily make this directory an svn checkout, but ... eh.... - putting all of our code in an SVN repository (temporarily) and checking it out to all the webs. Somehow, I would need to tell the webs when it's ok to run "svn update" and on which directory to do that. 
I could do that with a script, or I could do it via puppet potentially? I haven't had much time to play with puppet, and we only have a few more days. Can someone with puppet experience let me know if the temporary solution I propose above is an ok idea, or if it would be better to go another route? Thanks! -Matt From pete at nomadlogic.org Wed May 27 16:34:19 2009 From: pete at nomadlogic.org (Pete Wright) Date: Wed, 27 May 2009 13:34:19 -0700 Subject: [nycbug-talk] New Setup Questions In-Reply-To: References: <5b5090780904171937m46735746q8a34a8db72732ba9@mail.gmail.com> <5b5090780904172349y27e0eb24ycccc002bffd3a46c@mail.gmail.com> <5b5090780904182054j2fc428ffl66c3b8fcd584015a@mail.gmail.com> <5b5090780904182056n41ac6997sd3a8414e9e12eb52@mail.gmail.com> Message-ID: <5D3E60CA-FEC8-4900-9FAE-4D478F316023@nomadlogic.org> On 27-May-09, at 12:59 PM, Matt Juszczak wrote: > Hi all, > > Two more questions for everyone. > > Just an update a month later - we've been working hard on the > migration > project, but we're coming down to the wire. > > I've gotten everything setup nicely - internal DNS, everything uses > LDAP, > I'm using package distribution (via NFS) with custom FreeBSD packages, > everything is in sync and config'd the same, tuned, etc. > > The two things I haven't been able to complete that I wanted to are: > > - A good dev environment > - using puppet > > At this point, I need to find a temporary solution for us to keep our > webserver configuration and code in sync. For now, I was thinking of: > > - configuring puppet on the webs so that /usr/local/etc/apache22 is > managed 100% by puppet (since its 100% identical across all > webservers). > The other option would just be to temporarily make this directory an > svn > checkout, but ... eh.... > > - putting all of our code in an SVN repository (temporarily) and > checking > it out to all the webs. Somehow, I would need to tell the webs when > it's > ok to run "svn update" and on which directory to do that. 
I could > do that > with a script, or I could do it via puppet potentially? > > I haven't had much time to play with puppet, and we only have a few > more > days. Can someone with puppet experience let me know if the temporary > solution I propose above is an ok idea, or if it would be better to go > another route? my general rule of thumb is that having all of your configuration data stored in puppet/cfengine/etc is a good thing for sure. I would not suggest having all of your servers depend upon svn to keep their configs in sync. now, having all of your puppet configs in svn is probably a good thing - and setting up some post commit triggers in svn could even be done. for example: - all of /usr/local/apache/ is managed via puppetd on client side, and all configs are in svn on the puppet server side - admin updates $PUPPET_HOME/modules/apache2/some-config-file and checks it into svn - you have a svn hook (or call back) that pushes this new config to puppet and automatically gets pushed to your web servers. i'd have a hard think about the last part though as if used improperly you can really shot yourself in the foot :) regarding puppet configuration schema it's pretty straight forward. something like this may work... define all of your httpd systems somewhere like $PUPPET_HOME/manifests/ httpd.pp node my_server { # common configs for all servers in production include generic-server-config # snmpd config files include snmpd # apache configs include apache2 } then create a httpd config class (i think puppet uses the term "module"). 
here is one we use which may help: $ find $PUPPET_HOME/modules/apache2/ apache2/ apache2/manifests apache2/manifests/init.pp apache2/files apache2/files/apache-testfile $ cat apache2/manifests/init.pp class apache2 { package { apache2: ensure => installed } file { "/etc/apache2/testfile": source => "puppet://$servername/apache2/apache-testfile" } } in this example we just have a dummy apache test file, you could obviously have your httpd.conf, sites-enabled/site-config and such in there. you also may, or may not want the package{} statement. HTH -pete ps -> i'm relatively new to puppet (coming from cfengine), so if any puppet gurus out there spot some obvious mistakes don't hesitate to let me know! From mark.saad at ymail.com Wed May 27 19:42:58 2009 From: mark.saad at ymail.com (mark.saad at ymail.com) Date: Wed, 27 May 2009 23:42:58 +0000 Subject: [nycbug-talk] New Setup Questions In-Reply-To: <5D3E60CA-FEC8-4900-9FAE-4D478F316023@nomadlogic.org> References: <5b5090780904171937m46735746q8a34a8db72732ba9@mail.gmail.com><5b5090780904172349y27e0eb24ycccc002bffd3a46c@mail.gmail.com><5b5090780904182054j2fc428ffl66c3b8fcd584015a@mail.gmail.com><5b5090780904182056n41ac6997sd3a8414e9e12eb52@mail.gmail.com><5D3E60CA-FEC8-4900-9FAE-4D478F316023@nomadlogic.org> Message-ID: <527170174-1243467776-cardhu_decombobulator_blackberry.rim.net-234641813-@bxe1122.bisx.prod.on.blackberry> Hello talk Somewhat related to this, has anyone used radmind to keep configuration files and apps in sync across multiple servers? I am now working somewhere where radmind is king and I have never touched it. Sent from my Verizon Wireless BlackBerry -----Original Message----- From: Pete Wright Date: Wed, 27 May 2009 13:34:19 To: Matt Juszczak Cc: Subject: Re: [nycbug-talk] New Setup Questions On 27-May-09, at 12:59 PM, Matt Juszczak wrote: > Hi all, > > Two more questions for everyone.
> > Just an update a month later - we've been working hard on the > migration > project, but we're coming down to the wire. > > I've gotten everything setup nicely - internal DNS, everything uses > LDAP, > I'm using package distribution (via NFS) with custom FreeBSD packages, > everything is in sync and config'd the same, tuned, etc. > > The two things I haven't been able to complete that I wanted to are: > > - A good dev environment > - using puppet > > At this point, I need to find a temporary solution for us to keep our > webserver configuration and code in sync. For now, I was thinking of: > > - configuring puppet on the webs so that /usr/local/etc/apache22 is > managed 100% by puppet (since its 100% identical across all > webservers). > The other option would just be to temporarily make this directory an > svn > checkout, but ... eh.... > > - putting all of our code in an SVN repository (temporarily) and > checking > it out to all the webs. Somehow, I would need to tell the webs when > it's > ok to run "svn update" and on which directory to do that. I could > do that > with a script, or I could do it via puppet potentially? > > I haven't had much time to play with puppet, and we only have a few > more > days. Can someone with puppet experience let me know if the temporary > solution I propose above is an ok idea, or if it would be better to go > another route? my general rule of thumb is that having all of your configuration data stored in puppet/cfengine/etc is a good thing for sure. I would not suggest having all of your servers depend upon svn to keep their configs in sync. now, having all of your puppet configs in svn is probably a good thing - and setting up some post commit triggers in svn could even be done. 
for example: - all of /usr/local/apache/ is managed via puppetd on client side, and all configs are in svn on the puppet server side - admin updates $PUPPET_HOME/modules/apache2/some-config-file and checks it into svn - you have a svn hook (or call back) that pushes this new config to puppet and automatically gets pushed to your web servers. i'd have a hard think about the last part though as if used improperly you can really shot yourself in the foot :) regarding puppet configuration schema it's pretty straight forward. something like this may work... define all of your httpd systems somewhere like $PUPPET_HOME/manifests/ httpd.pp node my_server { # common configs for all servers in production include generic-server-config # snmpd config files include snmpd # apache configs include apache2 } then create a httpd config class (i think puppet uses the term "module"). here is one we use which may help: $ find $PUPPET_HOME/modules/apache2/ apache2/ apache2/manifests apache2/manifests/init.pp apache2/files apache2/files/apache-testfile $ cat apach2/manifests/init.pp class apache2 { package { apache2: ensure => installed } file { "/etc/apache2/testfile": source => "puppet://$servername/apache2/apache- testfile" } } in this example we just have a dummy apache test file, you could obviously have your httpd.conf, sites-enabled/site-config and such in there. you also may, or may not want the package{} statement. HTH -pete ps -> i'm relatively new to puppet (coming from cfengine), so if any puppet guru's out there some obvious mistakes don't hesitate to let me know! 
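[Editor's note: Pete's third bullet — an svn hook that pushes freshly committed configs to the puppet server — could look something like the sketch below. The repository path, revision handling, and puppet directory are all hypothetical assumptions, and the export command is only echoed (a dry run), not executed:]

```shell
#!/bin/sh
# Hypothetical svn post-commit hook sketch; svn invokes it as:
#   post-commit REPOS REV
# All paths here are made-up examples, not taken from this thread.
REPOS="${1:-/var/svn/puppet}"
REV="${2:-1}"
PUPPET_HOME="/etc/puppet"

# A real hook would export the committed module tree into the
# puppet fileserver area, e.g.:
#   svn export -q -r "$REV" "file://$REPOS/modules" "$PUPPET_HOME/modules"
# Shown as a dry run so it can be reviewed before being enabled:
echo "would run: svn export -q -r $REV file://$REPOS/modules $PUPPET_HOME/modules"
```

[Whether the clients then pick the change up on their next puppetd run or are poked immediately is exactly the part Pete warns about: an automatic push of a bad commit hits every webserver at once.]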
_______________________________________________ talk mailing list talk at lists.nycbug.org http://lists.nycbug.org/mailman/listinfo/talk From spork at bway.net Wed May 27 20:44:04 2009 From: spork at bway.net (Charles Sprickman) Date: Wed, 27 May 2009 20:44:04 -0400 (EDT) Subject: [nycbug-talk] green monster server Message-ID: Hi all, I was given a pretty nice 3U box that I'd like to use for a home media tank and for some ZFS experimentation (hoping the two go well hand in hand). It's great: Dual 2.4GHz Xeon processors 2GB RAM 3Ware 7508 Escalade card (8 ATA ports) 8 250GB ATA drives Backplane for said drives/card The problem is that at idle the thing draws over 300 watts. I'm looking for some input on how to make it less power hungry. I'm assuming the power supplies are really overkill, but unplugging any one of them results in a screeching alarm that can't be disabled. I'm thinking of putting a single standard PS in there, hopefully something that's relatively efficient (pointers welcome - most power supplies are lucky to be 70% efficient). I can do without dual processors. Getting rid of one should leave me with enough to serve files on a FE network. Too many fans, at least 10. I'll remove some until I see the temp getting too warm for the drives. As I can afford it, I'll retire drives and put in larger units to bring the drive count down (see ZFS above - hopefully it's magical enough to allow this type of tinkering and migration). Any other ideas? 
Thanks, Charles ___ Charles Sprickman NetEng/SysAdmin Bway.net - New York's Best Internet - www.bway.net spork at bway.net - 212.655.9344 From carton at Ivy.NET Thu May 28 00:50:54 2009 From: carton at Ivy.NET (Miles Nordin) Date: Thu, 28 May 2009 00:50:54 -0400 Subject: [nycbug-talk] green monster server In-Reply-To: (Charles Sprickman's message of "Wed, 27 May 2009 20:44:04 -0400 (EDT)") References: Message-ID: >>>>> "cs" == Charles Sprickman writes: cs> I'm looking for some input on how to make it less power cs> hungry. cs> most power supplies are lucky to be 70% efficient). no, there's an 80plus logo consortium now. This means the power supply is over 80% efficient even when running at much less than full load. most but not all supplies on newegg have the logo. cs> 8 250GB ATA drives 2 or 3 1TB drives instead? i think dual parity is a good idea though. cs> Too many fans, at least 10. I'll remove some until I see the cs> temp getting too warm for the drives. this case is pretty quiet: http://www.servercase.com/miva/miva?/Merchant2/merchant.mv+Screen=PROD&Store_Code=SC&Product_Code=CK4020&Category_Code=MS cs> As I can afford it, I'll retire drives and put in larger units cs> to bring the drive count down (see ZFS above - hopefully it's cs> magical enough to allow this type of tinkering and migration). nope. you can't decrease drive count without destroying and recreating the pool. The only exception is to remove a mirror component---that you can do. You can increase the size of individual drives obviously, and also if you increase the size of all the drives in a stripe the pool will get bigger (proportionally to the smallest drive in the stripe). but there's no way to decrease the size of a drive, nor to migrate data off a drive and remove it. they are working on this capability. they call it bp rewrite i think. but they've been working on it for more than a year.
From spork at bway.net Thu May 28 01:19:20 2009 From: spork at bway.net (Charles Sprickman) Date: Thu, 28 May 2009 01:19:20 -0400 (EDT) Subject: [nycbug-talk] green monster server In-Reply-To: References: Message-ID: On Thu, 28 May 2009, Miles Nordin wrote: >>>>>> "cs" == Charles Sprickman writes: > > cs> I'm looking for some input on how to make it less power > cs> hungry. > > cs> most power supplies are lucky to be 70% efficient). > > no, there's an 80plus logo consortium now. This means the power > supply is over 80% efficient even when running at much less than full > load. most but not all supplies on newegg have the logo. Excellent. I got the 70% figure from some article I read a few years ago about a push for a "new" power DC power standard for datacenters that would run either 12V or 5V to cabinets rather than AC or 48V DC. The idea was to basically eliminate the waste factor inherent in every server in the datacenter. The power savings were astronomical. Chances of this thing being standardized and implemented seem minimal though... Is there any way to determine which power supplies will draw the least power when not actually loaded up? Or is part of that 80% initiative to mandate that even if the box is idle and pulling like 40W the PS is not pulling 100W out of the wall? Just nuking this cheap-ass triple-redundant sham will surely bring me under 150W at idle I suspect/hope. I can live with that power draw for now. > cs> 8 250GB ATA drives > > 2 or 3 1TB drives instead? Eventually. I got this for exactly $0. No budget beyond maybe $80 for the power supply. > i think dual parity is a good idea though. I have much reading to do on ZFS. All I've seen are the gee-whiz writeups. Now that FreeBSD has imported the latest and there are more committers involved I'm ready to learn and play. > cs> Too many fans, at least 10.
I'll remove some until I see the > cs> temp getting too warm for the drives. > > this case is pretty quiet: > > http://www.servercase.com/miva/miva?/Merchant2/merchant.mv+Screen=PROD&Store_Code=SC&Product_Code=CK4020&Category_Code=MS It's in the garage, so noise is no problem. The frontend will likely be a $250 hackintosh running Plex (http://www.plexapp.com/). I have no cable TV or satellite, just Netflix and BT for video entertainment and a sizable library of DVDs to rip. > cs> As I can afford it, I'll retire drives and put in larger units > cs> to bring the drive count down (see ZFS above - hopefully it's > cs> magical enough to allow this type of tinkering and migration). > > nope. crap. > you can't decrease drive count without destroying and recreating the > pool. The only exception is to remove a mirror component---that you > can do. You can increase the size of individual drives obviously, and > also if you increase the size of all the drives in a stripe the pool > will get bigger (proportonally to the smallest drive in the stripe). > but there's no way to decrease the size of a drive, nor to migrate > data off a drive and remove it. Not even remove it? That's weird. So if you put together an array today with 1TB drives and 5 years from now you are stuffing 5TB drives in, you're SOL? I could probably start with fewer drives, leaving some slots open. When I get the 1TB drives, I could make a new pool there and at least copy the data over. Anyhow, it should be fun to figure out. > they are working on this capability. they call it bp rewrite i think. > but they've been working on it for more than a year. I'll google around a bit. 
Thanks, Charles From bschonhorst at gmail.com Thu May 28 07:47:03 2009 From: bschonhorst at gmail.com (Brad Schonhorst) Date: Thu, 28 May 2009 07:47:03 -0400 Subject: [nycbug-talk] New Setup Questions In-Reply-To: <527170174-1243467776-cardhu_decombobulator_blackberry.rim.net-234641813-@bxe1122.bisx.prod.on.blackberry> References: <5b5090780904171937m46735746q8a34a8db72732ba9@mail.gmail.com><5b5090780904172349y27e0eb24ycccc002bffd3a46c@mail.gmail.com><5b5090780904182054j2fc428ffl66c3b8fcd584015a@mail.gmail.com><5b5090780904182056n41ac6997sd3a8414e9e12eb52@mail.gmail.com><5D3E60CA-FEC8-4900-9FAE-4D478F316023@nomadlogic.org> <527170174-1243467776-cardhu_decombobulator_blackberry.rim.net-234641813-@bxe1122.bisx.prod.on.blackberry> Message-ID: <1AC31081-8645-4C57-A348-F15D5F2F8941@gmail.com> On May 27, 2009, at 7:42 PM, mark.saad at ymail.com wrote: > Hello talk > Some what related to this , has anyone used radmind to keep > configuration files and apps in sync across mutuple servers . I am > now working somewhere where radmind is king and I have never touched > it . > Sent from my Verizon Wireless BlackBerry > Mark- I setup Radmind to manage the integrity of a medium sized LAN (75-90 OS X boxes) and it worked extremely well. Had scripts to reimage a machine nightly or when an admin logged in with a specific account. Setup was a bit of a challenge with some userland apps but for config files it should be no problem. Email me offline if you have specific questions. -brad > -----Original Message----- > From: Pete Wright > > Date: Wed, 27 May 2009 13:34:19 > To: Matt Juszczak > Cc: > Subject: Re: [nycbug-talk] New Setup Questions > > > > On 27-May-09, at 12:59 PM, Matt Juszczak wrote: > >> Hi all, >> >> Two more questions for everyone. >> >> Just an update a month later - we've been working hard on the >> migration >> project, but we're coming down to the wire. 
>> >> I've gotten everything setup nicely - internal DNS, everything uses >> LDAP, >> I'm using package distribution (via NFS) with custom FreeBSD >> packages, >> everything is in sync and config'd the same, tuned, etc. >> >> The two things I haven't been able to complete that I wanted to are: >> >> - A good dev environment >> - using puppet >> >> At this point, I need to find a temporary solution for us to keep our >> webserver configuration and code in sync. For now, I was thinking >> of: >> >> - configuring puppet on the webs so that /usr/local/etc/apache22 is >> managed 100% by puppet (since its 100% identical across all >> webservers). >> The other option would just be to temporarily make this directory an >> svn >> checkout, but ... eh.... >> >> - putting all of our code in an SVN repository (temporarily) and >> checking >> it out to all the webs. Somehow, I would need to tell the webs when >> it's >> ok to run "svn update" and on which directory to do that. I could >> do that >> with a script, or I could do it via puppet potentially? >> >> I haven't had much time to play with puppet, and we only have a few >> more >> days. Can someone with puppet experience let me know if the >> temporary >> solution I propose above is an ok idea, or if it would be better to >> go >> another route? > > my general rule of thumb is that having all of your configuration data > stored in puppet/cfengine/etc is a good thing for sure. I would not > suggest having all of your servers depend upon svn to keep their > configs in sync. > > now, having all of your puppet configs in svn is probably a good thing > - and setting up some post commit triggers in svn could even be done. 
> for example: > - all of /usr/local/apache/ is managed via puppetd on client side, and > all configs are in svn on the puppet server side > - admin updates $PUPPET_HOME/modules/apache2/some-config-file and > checks it into svn > - you have a svn hook (or call back) that pushes this new config to > puppet and automatically gets pushed to your web servers. > > i'd have a hard think about the last part though as if used improperly > you can really shot yourself in the foot :) > > regarding puppet configuration schema it's pretty straight forward. > something like this may work... > > define all of your httpd systems somewhere like $PUPPET_HOME/ > manifests/ > httpd.pp > > node my_server { > # common configs for all servers in production > include generic-server-config > # snmpd config files > include snmpd > # apache configs > include apache2 > } > > then create a httpd config class (i think puppet uses the term > "module"). here is one we use which may help: > > $ find $PUPPET_HOME/modules/apache2/ > apache2/ > apache2/manifests > apache2/manifests/init.pp > apache2/files > apache2/files/apache-testfile > $ cat apach2/manifests/init.pp > class apache2 { > > package { > apache2: ensure => installed > } > > file { "/etc/apache2/testfile": > source => "puppet://$servername/apache2/apache- > testfile" > } > > } > > > in this example we just have a dummy apache test file, you could > obviously have your httpd.conf, sites-enabled/site-config and such in > there. you also may, or may not want the package{} statement. > > > HTH > -pete > > > ps -> i'm relatively new to puppet (coming from cfengine), so if any > puppet guru's out there some obvious mistakes don't hesitate to let me > know! 
> > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk From okan at demirmen.com Thu May 28 09:42:49 2009 From: okan at demirmen.com (Okan Demirmen) Date: Thu, 28 May 2009 09:42:49 -0400 Subject: [nycbug-talk] New Setup Questions In-Reply-To: <527170174-1243467776-cardhu_decombobulator_blackberry.rim.net-234641813-@bxe1122.bisx.prod.on.blackberry> References: <5D3E60CA-FEC8-4900-9FAE-4D478F316023@nomadlogic.org> <527170174-1243467776-cardhu_decombobulator_blackberry.rim.net-234641813-@bxe1122.bisx.prod.on.blackberry> Message-ID: <20090528134249.GZ6757@clam.khaoz.org> On Wed 2009.05.27 at 23:42 +0000, mark.saad at ymail.com wrote: > Hello talk > Some what related to this , has anyone used radmind to keep configuration files and apps in sync across mutuple servers . I am now working somewhere where radmind is king and I have never touched it . radmind is kinda like rdist on crack, but it is not quite cfengine. From michael.bubb at gmail.com Thu May 28 11:08:19 2009 From: michael.bubb at gmail.com (Michael Bubb) Date: Thu, 28 May 2009 11:08:19 -0400 Subject: [nycbug-talk] nycbsdcon / certification exams Message-ID: <534a4cab0905280808q61981612ge6c16aed2c064712@mail.gmail.com> Hello - I was curious about 2 related things: 1) when is the nycbsdcon for 2009? I dont neccessarily mean exact dates, is it generally in October, etc? 2) is this the best/only time to take the certification test in the NYC area? 
thank you for any info Michael -- Michael Bubb | Hoboken, NJ | 201.736.0870 www.linkedin.com/in/mpbubb "make up yr mind you Tiresias if you know know damn well or else you dont" From alex at pilosoft.com Thu May 28 11:26:48 2009 From: alex at pilosoft.com (Alex Pilosov) Date: Thu, 28 May 2009 11:26:48 -0400 (EDT) Subject: [nycbug-talk] green monster server In-Reply-To: Message-ID: On Thu, 28 May 2009, Charles Sprickman wrote: > > Is there any way to determine which power supplies will draw the least > power when not actually loaded up? Or is part of that 80% initiative to > mandate that even if the box is idle and pulling like 40W the PS is not > pulling 100W out of the wall? Yes. Even old and crappy power supply is not likely to be <75% efficient even at low load. > Just nuking this cheap-ass triple-redundant sham will surely bring me > under 150W at idle I suspect/hope. I can live with that power draw for > now. No. What will bring it down to 150W is removing second CPU. If this is a "Nocona" generation CPU, power consumption per CPU is 120W at full load. Prestonia is 90W. Replacing PSU might get you from 75% efficient to 85% efficient, but that ain't that much. > > cs> 8 250GB ATA drives > > > > 2 or 3 1TB drives instead? > > Eventually. I got this for exactly $0. No budget beyond maybe $80 for > the power supply. Each drive uses ~5W at idle, 12W at load. From mark.saad at ymail.com Thu May 28 11:21:00 2009 From: mark.saad at ymail.com (Mark Saad) Date: Thu, 28 May 2009 08:21:00 -0700 (PDT) Subject: [nycbug-talk] New Setup Questions Message-ID: <819277.48244.qm@web43413.mail.sp1.yahoo.com> Okan When the guys here asked me if I heard of it I said no. Then when they explained it to me I called it a poor man's cfengine. It sounds interesting but I am going to have to say I agree with Ike's comments about the puppet meeting. "Do we really need this thing ? 
" -- Mark Saad mark.saad at ymail.com --- On Thu, 5/28/09, Okan Demirmen wrote: > From: Okan Demirmen > Subject: Re: [nycbug-talk] New Setup Questions > To: talk at lists.nycbug.org > Date: Thursday, May 28, 2009, 1:42 PM > On Wed 2009.05.27 at 23:42 +0000, mark.saad at ymail.com > wrote: > > Hello talk > >???Some what related to this , has > anyone used radmind to keep configuration files and apps in > sync across mutuple servers . I am now working somewhere > where radmind is king and I have never touched it . > > radmind is kinda like rdist on crack, but it is not quite > cfengine. > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From drulavigne at sympatico.ca Thu May 28 11:34:15 2009 From: drulavigne at sympatico.ca (Dru Lavigne) Date: Thu, 28 May 2009 15:34:15 +0000 Subject: [nycbug-talk] nycbsdcon / certification exams In-Reply-To: <534a4cab0905280808q61981612ge6c16aed2c064712@mail.gmail.com> References: <534a4cab0905280808q61981612ge6c16aed2c064712@mail.gmail.com> Message-ID: > 2) is this the best/only time to take the certification test in the NYC area? It is easy to get a proctor in NYC, so this does not need to be the only time to take the exam. We just need someone to provide a quiet space and at least 4 people interested in taking the exam to make it worth the proctor's time. Cheers, Dru -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.bubb at gmail.com Thu May 28 11:49:24 2009 From: michael.bubb at gmail.com (Michael Bubb) Date: Thu, 28 May 2009 11:49:24 -0400 Subject: [nycbug-talk] nycbsdcon / certification exams In-Reply-To: References: <534a4cab0905280808q61981612ge6c16aed2c064712@mail.gmail.com> Message-ID: <534a4cab0905280849n45dd5a82jaa418bf0de240878@mail.gmail.com> >> 2) is this the best/only time to take the certification test in the NYC >> area? 
> > > It is easy to get a proctor in NYC, so this does not need to be the only > time to take the exam. We just need someone to provide a quiet space and at > least 4 people interested in taking the exam to make it worth the proctor's > time. > > Cheers, > > Dru > Thank you Is there a list of people interested in taking this? ie 3 people waiting for a 4th... yrs Michael -- Michael Bubb | Hoboken, NJ | 201.736.0870 www.linkedin.com/in/mpbubb "make up yr mind you Tiresias if you know know damn well or else you dont" From bonsaime at gmail.com Thu May 28 11:51:41 2009 From: bonsaime at gmail.com (Jesse Callaway) Date: Thu, 28 May 2009 11:51:41 -0400 Subject: [nycbug-talk] nycbsdcon / certification exams In-Reply-To: References: <534a4cab0905280808q61981612ge6c16aed2c064712@mail.gmail.com> Message-ID: On Thu, May 28, 2009 at 11:34 AM, Dru Lavigne wrote: > > >> 2) is this the best/only time to take the certification test in the NYC >> area? > > > It is easy to get a proctor in NYC, so this does not need to be the only > time to take the exam. We just need someone to provide a quiet space and at > least 4 people interested in taking the exam to make it worth the proctor's > time. > > Cheers, > > Dru > > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > > Hey, I'll take the exam with you. Now we need 2 more people. -jesse From carton at Ivy.NET Thu May 28 15:38:34 2009 From: carton at Ivy.NET (Miles Nordin) Date: Thu, 28 May 2009 15:38:34 -0400 Subject: [nycbug-talk] green monster server In-Reply-To: (Charles Sprickman's message of "Thu, 28 May 2009 01:19:20 -0400 (EDT)") References: Message-ID: >>>>> "cs" == Charles Sprickman writes: cs> Not even remove it? That's weird. So if you put together an cs> array today with 1TB drives and 5 years from now you are cs> stuffing 5TB drives in, you're SOL? you can replace all eight drives with 1TB and have a pool four times larger. 
but until you replace the eighth drive, you get nothing. and you must have eight drives, forever. Another approach would be to make the 8 drives into two raidz2 stripes of 4 disks each. Then you could replace 4 drives, and that stripe becomes larger, getting you a larger pool. The problem is, with two raidz2 stripes you get only 1TB out of your eight drives. With all eight disks in one raidz2 stripe you get 1.5TB. (you should be using dual parity.) cs> if the box is idle and pulling like 40W the PS is not pulling cs> 100W out of the wall? well that's not 80%. the 80plus logo means more than a claim of 80% efficiency, because the 80%-efficient claim is surely at the power supply's most efficient point, which will be close to max power, while 80plus requires the supply to be more efficient at all points along a prescribed curve of fraction-of-capacity vs. efficiency. just read about it. It's not complicated and addresses just what you want. cs> a "new" power DC power standard for datacenters that would run cs> either 12V or 5V to cabinets that's stupid. A good standard would be 300VDC to cabinets, 252V batteries and breakers in each cabinet, and dual 12V switching supplies in each cabinet. I'd probably wire one supply to the batteries, and the second straight to the 300V input. This way the battery shelf could be removed for maintenance with minimal chance of fuckup. Each piece of equipment should take dual 12V inputs. For peecees this means eliminating the power supply entirely, not swapping it for a 12V supply: motherboards that eat 12V, period. Most chips run off 1.4V, 1.8V, or 2.5V these days, especially power-hungry stuff, so mandating that motherboards switch 12V down to whatever voltage they need will add only a tiny cost, because they are already doing this for most of the watts they consume. The lower voltages in ATX are just a waste of copper. The power inputs can go where the PS/2 keyboard used to plug in.
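[Archive note: the raidz2 capacity arithmetic at the top of this message can be sanity-checked with a short sketch. This is illustrative only, not ZFS code: the 250 GB drive size is an assumption chosen to reproduce the 1TB / 1.5TB figures (the thread never states the drive size), the helper function is hypothetical, and real raidz2 usable space is slightly lower due to metadata and padding overhead.]

```python
def raidz2_usable_gb(disks_per_vdev, vdevs, disk_gb):
    """Rough usable capacity of a pool of identical raidz2 vdevs.

    raidz2 spends two disks per vdev on parity, and a vdev only
    grows once every one of its member disks has been replaced
    with a larger drive.
    """
    if disks_per_vdev < 3:
        raise ValueError("raidz2 needs more disks than parity disks")
    return vdevs * (disks_per_vdev - 2) * disk_gb

# Eight drives, assumed 250 GB each:
two_stripes = raidz2_usable_gb(4, vdevs=2, disk_gb=250)  # 1000 GB: "only 1TB"
one_stripe = raidz2_usable_gb(8, vdevs=1, disk_gb=250)   # 1500 GB: "1.5TB"

# Replace all eight with 1TB drives: the single-stripe pool
# becomes four times larger, but only after the eighth replacement.
upgraded = raidz2_usable_gb(8, vdevs=1, disk_gb=1000)    # 6000 GB
```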
cases can include 12V-to-5/3.3V switchers on their SATA backplanes to accommodate the unfortunate SATA power standard. Another clever design might be to wire peecees in _series_. These would have some kind of hot-plug connector, and when you remove a peecee, the cabinet shorts its power input. Design a power supply to go in each peecee that cooperates with its neighbors to settle on the minimum satisfactory amount of current passing through the whole stack. If one of the peecees in the stack is using more power than the others, then all the others have to shunt power through a fat resistor to keep current flowing through the stack. You can make them use the resistor very seldom by making the current negotiation rule rather complicated---for example, allow that each peecee can operate with 6 - 24V across it, so if your neighbor needs more power you allow an increase in the stack current flowing through yourself but decrease your own resistance so you have fewer volts across yourself. Once you get down to 6V you start blowing on the resistor. With a rack filled with identical equipment, 6 - 24V might be enough to cover the difference between idle and peak load. The series stack of peecees doesn't need the fat per-cabinet 12V supplies, so you save the cost of buying them, as well as their minimal heat loss (though high-quality DC switchers are like >95% efficient). The idea is that it might be cheaper to make something that is auto-switch-itself-off tolerant of 300V than something which can actually eat 300V, and that these imaginary datacenter standards all presume racks full of identical equipment without fully exploiting the similarity. It's also funny to wire cabinets so that if one bulb pops the whole strand goes out. -- READ CAREFULLY.
By reading this fortune, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: From spork at bway.net Thu May 28 20:29:46 2009 From: spork at bway.net (Charles Sprickman) Date: Thu, 28 May 2009 20:29:46 -0400 (EDT) Subject: [nycbug-talk] New Setup Questions In-Reply-To: <20090528134249.GZ6757@clam.khaoz.org> References: <5D3E60CA-FEC8-4900-9FAE-4D478F316023@nomadlogic.org> <527170174-1243467776-cardhu_decombobulator_blackberry.rim.net-234641813-@bxe1122.bisx.prod.on.blackberry> <20090528134249.GZ6757@clam.khaoz.org> Message-ID: On Thu, 28 May 2009, Okan Demirmen wrote: > On Wed 2009.05.27 at 23:42 +0000, mark.saad at ymail.com wrote: >> Hello talk >> Some what related to this , has anyone used radmind to keep configuration files and apps in sync across mutuple servers . I am now working somewhere where radmind is king and I have never touched it . > > radmind is kinda like rdist on crack, but it is not quite cfengine. That vaguely sounds like what I've been looking for. 
:) Charles > _______________________________________________ > talk mailing list > talk at lists.nycbug.org > http://lists.nycbug.org/mailman/listinfo/talk > From drulavigne at sympatico.ca Fri May 29 04:08:37 2009 From: drulavigne at sympatico.ca (Dru Lavigne) Date: Fri, 29 May 2009 08:08:37 +0000 Subject: [nycbug-talk] nycbsdcon / certification exams In-Reply-To: <534a4cab0905280849n45dd5a82jaa418bf0de240878@mail.gmail.com> References: <534a4cab0905280808q61981612ge6c16aed2c064712@mail.gmail.com> <534a4cab0905280849n45dd5a82jaa418bf0de240878@mail.gmail.com> Message-ID: > > It is easy to get a proctor in NYC, so this does not need to be the only > > time to take the exam. We just need someone to provide a quiet space and at > > least 4 people interested in taking the exam to make it worth the proctor's > > time. > > > > Cheers, > > > > Dru > > > > Thank you > > Is there a list of people interested in taking this? ie 3 people > waiting for a 4th... If someone is willing to provide a room, we can provide a proctor the last weekend of July or first weekend of August. Should know exact date in a week or so. Dru -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at ceetonetechnology.com Fri May 29 04:59:44 2009 From: george at ceetonetechnology.com (George Rosamond) Date: Fri, 29 May 2009 04:59:44 -0400 Subject: [nycbug-talk] nycbsdcon / certification exams In-Reply-To: References: <534a4cab0905280808q61981612ge6c16aed2c064712@mail.gmail.com> <534a4cab0905280849n45dd5a82jaa418bf0de240878@mail.gmail.com> Message-ID: <4A1FA400.7020601@ceetonetechnology.com> Dru Lavigne wrote: > > > > > It is easy to get a proctor in NYC, so this does not need to be the > only > > > time to take the exam. We just need someone to provide a quiet > space and at > > > least 4 people interested in taking the exam to make it worth the > proctor's > > > time. 
> > > > > > Cheers, > > > > > > Dru > > > > > > > Thank you > > > > Is there a list of people interested in taking this? ie 3 people > > waiting for a 4th... > > > If someone is willing to provide a room, we can provide a proctor the > last weekend of July or first weekend of August. Should know exact date > in a week or so. I can always do that. . . Let me know. g From cwolsen at ubixos.com Fri May 29 06:08:48 2009 From: cwolsen at ubixos.com (Christopher Olsen) Date: Fri, 29 May 2009 06:08:48 -0400 Subject: [nycbug-talk] nycbsdcon / certification exams Message-ID: <200905291008.n4TA8ZVn026179@fulton.nycbug.org> I can provide the space if you still need it. -Christopher Ubix Technologies T: 212-514-6270 C: 516-903-2889 32 Broadway Suite 204 New York, NY 10004 http://www.tuve.tv/mrolsen -----Original Message----- From: George Rosamond Sent: Friday, May 29, 2009 4:59 AM To: Dru Lavigne Cc: talk at lists.nycbug.org; michael.bubb at gmail.com Subject: Re: [nycbug-talk] nycbsdcon / certification exams Dru Lavigne wrote: > > > > > It is easy to get a proctor in NYC, so this does not need to be the > only > > > time to take the exam. We just need someone to provide a quiet > space and at > > > least 4 people interested in taking the exam to make it worth the > proctor's > > > time. > > > > > > Cheers, > > > > > > Dru > > > > > > > Thank you > > > > Is there a list of people interested in taking this? ie 3 people > > waiting for a 4th... > > > If someone is willing to provide a room, we can provide a proctor the > last weekend of July or first weekend of August. Should know exact date > in a week or so. I can always do that. . . Let me know. 
g _______________________________________________ talk mailing list talk at lists.nycbug.org http://lists.nycbug.org/mailman/listinfo/talk From lists at stringsutils.com Fri May 29 12:36:39 2009 From: lists at stringsutils.com (Francisco Reyes) Date: Fri, 29 May 2009 12:36:39 -0400 Subject: [nycbug-talk] nycbsdcon / certification exams References: <534a4cab0905280808q61981612ge6c16aed2c064712@mail.gmail.com> Message-ID: Jesse Callaway writes: > Hey, I'll take the exam with you. Now we need 2 more people. I am interested.. Now we need only one more. :-)