From mcevoy.pat at gmail.com Fri Feb 1 11:09:54 2019
From: mcevoy.pat at gmail.com (Patrick McEvoy)
Date: Fri, 01 Feb 2019 11:09:54 -0500
Subject: [talk] Next NYCBug: 2/6, plus March meeting subject ...
Message-ID: <5C546F52.5000507@gmail.com>

Hello Folks,

Does anyone have any requests/suggestions for future NYCBug meetings?
We have a possible, but not firm speaker for March, so if anyone else
has a talk idea, we would love to hear it.

Stay warm.
P

Next NYC*BUG: Using Shell as a Deployment Tool, Ivan Ivanov
2019-02-06 @ 18:45 - Suspenders, 108 Greenwich Street, 2nd Floor (hopefully)

Abstract: Tools like Ansible provide a convenient way to deploy
software. However, they come with complexity that may not be justified
for certain tasks. The presentation will describe a real-world use case
of converting an Ansible-based deployment procedure to shell scripts in
order to simplify it. I will explain how it is done and why it is done.

More info: https://www.nycbug.org/index?action=view&id=10664


From schmonz-lists-netbsd-public-nycbug-talk at schmonz.com Fri Feb 1 13:32:37 2019
From: schmonz-lists-netbsd-public-nycbug-talk at schmonz.com (Amitai Schleier)
Date: 1 Feb 2019 12:32:37 -0600
Subject: [talk] Next NYCBug: 2/6, plus March meeting subject ...
In-Reply-To: <5C546F52.5000507@gmail.com>
References: <5C546F52.5000507@gmail.com>
Message-ID:

On 1 Feb 2019, at 10:09, Patrick McEvoy wrote:

> Does anyone have any requests/suggestions for future NYCBug meetings?
> We have a possible, but not firm speaker for March, so if anyone else
> has a talk idea, we would love to hear it.

I've been meaning to get back to NYCBUG since returning from the
midwest, but we're living in Rockland and had a kid. Being on the hook
for a talk would help me make the shlep. :-)

For instance, I gave an Ignite talk at DevOpsDays NYC last week called
"Run Your Own Email Server". I've been investing time and effort into
making this easy with qmail (really) for most any platform, thanks to
pkgsrc. If there's interest, I'd be happy to present my recent work.


From kmsujit at gmail.com Sat Feb 2 08:55:12 2019
From: kmsujit at gmail.com (Sujit K M)
Date: Sat, 2 Feb 2019 19:25:12 +0530
Subject: [talk] Next NYCBug: 2/6, plus March meeting subject ...
In-Reply-To:
References: <5C546F52.5000507@gmail.com>
Message-ID:

On Sat, Feb 2, 2019 at 12:03 AM Amitai Schleier wrote:
>
> On 1 Feb 2019, at 10:09, Patrick McEvoy wrote:
>
> > Does anyone have any requests/suggestions for future NYCBug meetings?
> > We have a possible, but not firm speaker for March, so if anyone else
> > has a talk idea, we would love to hear it.
>
> I've been meaning to get back to NYCBUG since returning from the
> midwest, but we're living in Rockland and had a kid. Being on the hook
> for a talk would help me make the shlep. :-)
>
> For instance, I gave an Ignite talk at DevOpsDays NYC last week called
> "Run Your Own Email Server". I've been investing time and effort into
> making this easy with qmail (really) for most any platform, thanks to
> pkgsrc. If there's interest, I'd be happy to present my recent work.

Always intrigued by what goes into mail server ops, like security,
encryption, deployments, etc.


From edlinuxguru at gmail.com Mon Feb 4 09:37:14 2019
From: edlinuxguru at gmail.com (Edward Capriolo)
Date: Mon, 4 Feb 2019 09:37:14 -0500
Subject: [talk] Next NYCBug: 2/6, plus March meeting subject ...
In-Reply-To:
References: <5C546F52.5000507@gmail.com>
Message-ID:

On Saturday, February 2, 2019, Sujit K M wrote:

> On Sat, Feb 2, 2019 at 12:03 AM Amitai Schleier
> wrote:
> >
> > On 1 Feb 2019, at 10:09, Patrick McEvoy wrote:
> >
> > > Does anyone have any requests/suggestions for future NYCBug meetings?
> > > We have a possible, but not firm speaker for March, so if anyone else
> > > has a talk idea, we would love to hear it.
> >
> > I've been meaning to get back to NYCBUG since returning from the
> > midwest, but we're living in Rockland and had a kid. Being on the hook
> > for a talk would help me make the shlep. :-)
> >
> > For instance, I gave an Ignite talk at DevOpsDays NYC last week called
> > "Run Your Own Email Server". I've been investing time and effort into
> > making this easy with qmail (really) for most any platform, thanks to
> > pkgsrc. If there's interest, I'd be happy to present my recent work.
>
> Always intrigued by what goes into mail server ops, like security,
> encryption, deployments, etc.
>
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org:8080/mailman/listinfo/talk
>

A while back I did qmail, LDAP, SquirrelMail, Courier IMAP, NFS (NetApp),
OpenLDAP multi-master, and FreeBSD 8 as a no-SPOF mail system. Lots of
fun stuff like patching qmail.

--
Sorry this was sent from mobile. Will do less grammar and spell check
than usual.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From njt at ayvali.org Mon Feb 4 16:46:08 2019
From: njt at ayvali.org (N.J. Thomas)
Date: Mon, 4 Feb 2019 13:46:08 -0800
Subject: [talk] zfs disk outage and aftermath
Message-ID: <20190204214608.GI67246@ayvali.org>

Well, yesterday morning I suffered my first drive failure on one of my
ZFS boxes (running FreeBSD 12.0); it actually happened on my primary
backup server.

"zpool status" showed that my regularly scheduled scrub had found (and
fixed) some errors on one of the disks in a mirrored pair.

I made sure that my replica box had up to date snapshots transferred
over, shut down the machine, and asked the datacenter team to check.
They indeed found that the drive was faulty and replaced it.

It took about 4 hours for the drive to be resilvered, and that was it.
Back to normal with almost no issues -- apart from the few minutes that
the machine was down while its drive was being replaced.

My takeaways:

- use ZFS

- take regular snapshots

- replicate your snapshots to another machine

- scrub your disks regularly (unlike fsck, this can be run while the
  drive is mounted and active)

- monitor zfs health (I use this script from Calomel.org:
  https://calomel.org/zfs_health_check_script.html)

The first three points are kinda obvious; the last two I picked up from
other, more experienced, ZFS users.

I had been waiting for this day since I first started using ZFS years
ago and am very happy with that decision to use this filesystem.

Thomas

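For anyone who wants to put those takeaways into practice, here is a
minimal sketch of the snapshot / replicate / scrub cycle on a FreeBSD
box. The pool name, replica host, and snapshot labels are placeholder
assumptions rather than details of Thomas's setup, the health check at
the end is only a bare-bones stand-in for the Calomel script he links
to, and scheduling (cron or otherwise) is left to the reader:

    #!/bin/sh
    # Sketch only: pool, host, and snapshot names below are placeholders.
    POOL=tank
    REPLICA=replica.example.net
    PREV=2019-02-03               # last snapshot already on the replica
    TODAY=$(date +%Y-%m-%d)

    # Take a recursive snapshot of the whole pool.
    zfs snapshot -r "${POOL}@${TODAY}"

    # Send everything since the previous snapshot to the replica box
    # (-R: include child datasets, -i: incremental stream;
    #  on receive, -d: keep the sent dataset names, -u: don't mount).
    zfs send -R -i "@${PREV}" "${POOL}@${TODAY}" |
        ssh "$REPLICA" zfs receive -du "$POOL"

    # Kick off a scrub; unlike fsck, this runs while the pool is in use.
    zpool scrub "$POOL"

    # Crude health check: "zpool status -x" prints "all pools are
    # healthy" when there is nothing to report, so mail the details
    # otherwise.
    if ! zpool status -x | grep -q "all pools are healthy"; then
        zpool status -x | mail -s "zfs alert on $(hostname)" root
    fi
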
From john at netpurgatory.com Mon Feb 4 16:56:59 2019
From: john at netpurgatory.com (John C. Vernaleo)
Date: Mon, 4 Feb 2019 16:56:59 -0500 (EST)
Subject: [talk] zfs disk outage and aftermath
In-Reply-To: <20190204214608.GI67246@ayvali.org>
References: <20190204214608.GI67246@ayvali.org>
Message-ID:

I read the subject of this and had that sinking feeling in my stomach,
as I'm becoming more and more reliant on ZFS and no email with 'disk'
in the subject line is ever good news. I was afraid there would be a
horror story to make me rethink my reliance on ZFS.

Was pleasantly surprised to see I was wrong. And that health check
script looks like a really good idea.

John

-------------------------------------------------------
John C. Vernaleo, Ph.D.
www.netpurgatory.com
john at netpurgatory.com
-------------------------------------------------------

On Mon, 4 Feb 2019, N.J. Thomas wrote:

> Well, yesterday morning I suffered my first drive failure on one of my
> ZFS boxes (running FreeBSD 12.0); it actually happened on my primary
> backup server.
>
> "zpool status" showed that my regularly scheduled scrub had found (and
> fixed) some errors on one of the disks in a mirrored pair.
>
> I made sure that my replica box had up to date snapshots transferred
> over, shut down the machine, and asked the datacenter team to check.
> They indeed found that the drive was faulty and replaced it.
>
> It took about 4 hours for the drive to be resilvered, and that was it.
> Back to normal with almost no issues -- apart from the few minutes that
> the machine was down while its drive was being replaced.
>
> My takeaways:
>
> - use ZFS
>
> - take regular snapshots
>
> - replicate your snapshots to another machine
>
> - scrub your disks regularly (unlike fsck, this can be run while the
>   drive is mounted and active)
>
> - monitor zfs health (I use this script from Calomel.org:
>   https://calomel.org/zfs_health_check_script.html)
>
> The first three points are kinda obvious; the last two I picked up from
> other, more experienced, ZFS users.
>
> I had been waiting for this day since I first started using ZFS years
> ago and am very happy with that decision to use this filesystem.
>
> Thomas
>
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org:8080/mailman/listinfo/talk
>
>


From george at ceetonetechnology.com Tue Feb 5 15:51:00 2019
From: george at ceetonetechnology.com (George Rosamond)
Date: Tue, 05 Feb 2019 20:51:00 +0000
Subject: [talk] mickey@ photobook
Message-ID: <5e166915-e634-04e8-de46-166105ee2f99@ceetonetechnology.com>

Greetings, on a far-off unrelated thread.

Many of you probably remember mickey@ (Mickey Shalayeff), who passed
away two summers ago. He was a long-time NYC hacker, and was with the
OpenBSD project for many years. He was as talented as he was out of his
mind.

He spent a lot of time around NYCBUG, and did a few meetings for us
including one on PAE (https://www.nycbug.org/index?action=view&id=10016)
and the always memorable one about porting OpenBSD to PA-RISC
(https://www.nycbug.org/index?action=view&id=00083).

Anyways, I'll have the book tomorrow at the meeting for others to sign.

g


From viewtiful.icchan at gmail.com Tue Feb 5 15:55:30 2019
From: viewtiful.icchan at gmail.com (Robert Menes)
Date: Tue, 5 Feb 2019 15:55:30 -0500
Subject: [talk] mickey@ photobook
In-Reply-To: <5e166915-e634-04e8-de46-166105ee2f99@ceetonetechnology.com>
References: <5e166915-e634-04e8-de46-166105ee2f99@ceetonetechnology.com>
Message-ID:

Share stories tomorrow. I'll definitely sign the book. See you tomorrow!

--Robert

On Tue, Feb 5, 2019, 15:52 George Rosamond wrote:

> Greetings, on a far-off unrelated thread.
>
> Many of you probably remember mickey@ (Mickey Shalayeff), who passed away
> two summers ago. He was a long-time NYC hacker, and was with the OpenBSD
> project for many years. He was as talented as he was out of his mind.
>
> He spent a lot of time around NYCBUG, and did a few meetings for us
> including one on PAE (https://www.nycbug.org/index?action=view&id=10016)
> and the always memorable one about porting OpenBSD to PA-RISC
> (https://www.nycbug.org/index?action=view&id=00083).
>
> Anyways, I'll have the book tomorrow at the meeting for others to sign.
>
> g
>
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org:8080/mailman/listinfo/talk
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From raulcuza at gmail.com Tue Feb 5 16:45:53 2019
From: raulcuza at gmail.com (Raul Cuza)
Date: Tue, 5 Feb 2019 16:45:53 -0500
Subject: [talk] mickey@ photobook
In-Reply-To:
References: <5e166915-e634-04e8-de46-166105ee2f99@ceetonetechnology.com>
Message-ID:

See you tomorrow!

On Tue, Feb 5, 2019, 15:56 Robert Menes wrote:

> Share stories tomorrow. I'll definitely sign the book. See you tomorrow!
>
> --Robert
>
> On Tue, Feb 5, 2019, 15:52 George Rosamond wrote:
>
>> Greetings, on a far-off unrelated thread.
>>
>> Many of you probably remember mickey@ (Mickey Shalayeff), who passed
>> away two summers ago. He was a long-time NYC hacker, and was with the
>> OpenBSD project for many years. He was as talented as he was out of
>> his mind.
>>
>> He spent a lot of time around NYCBUG, and did a few meetings for us
>> including one on PAE (https://www.nycbug.org/index?action=view&id=10016)
>> and the always memorable one about porting OpenBSD to PA-RISC
>> (https://www.nycbug.org/index?action=view&id=00083).
>>
>> Anyways, I'll have the book tomorrow at the meeting for others to sign.
>>
>> g
>>
>> _______________________________________________
>> talk mailing list
>> talk at lists.nycbug.org
>> http://lists.nycbug.org:8080/mailman/listinfo/talk
>>
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org:8080/mailman/listinfo/talk
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From mcevoy.pat at gmail.com Wed Feb 6 17:25:24 2019
From: mcevoy.pat at gmail.com (Pat McEvoy)
Date: Wed, 6 Feb 2019 17:25:24 -0500
Subject: [talk] Next NYC*BUG: tonight
Message-ID:

Just confirmed with Suspenders, we will be on the first floor in the back.

Patrick


From jonathan at kc8onw.net Wed Feb 6 18:06:04 2019
From: jonathan at kc8onw.net (Jonathan)
Date: Wed, 06 Feb 2019 18:06:04 -0500
Subject: [talk] zfs disk outage and aftermath
In-Reply-To:
References: <20190204214608.GI67246@ayvali.org>
Message-ID: <18bf2a83605bc3fc4f4fc011ceba8c06@kc8onw.net>

I've had a home setup with consumer drives in a horrible high-vibration
environment for almost 10 years now, with multiple reconfigurations and
migrations for the array. I've probably lost 6 or 7 drives over that
time and I have yet to lose data because of it. I did have a file that
had bitrot and then I lost a drive and had to restore it from original
media, but that's why scrubs are so important.

Jonathan

On 2019-02-04 16:56, John C. Vernaleo wrote:

> I read the subject of this and had that sinking feeling in my stomach,
> as I'm becoming more and more reliant on ZFS and no email with 'disk'
> in the subject line is ever good news. I was afraid there would be a
> horror story to make me rethink my reliance on ZFS.
>
> Was pleasantly surprised to see I was wrong. And that health check
> script looks like a really good idea.
>
> John
>
> -------------------------------------------------------
> John C. Vernaleo, Ph.D.
> www.netpurgatory.com
> john at netpurgatory.com
> -------------------------------------------------------
>
> On Mon, 4 Feb 2019, N.J. Thomas wrote:
>
>> Well, yesterday morning I suffered my first drive failure on one of my
>> ZFS boxes (running FreeBSD 12.0); it actually happened on my primary
>> backup server.
>>
>> "zpool status" showed that my regularly scheduled scrub had found (and
>> fixed) some errors on one of the disks in a mirrored pair.
>>
>> I made sure that my replica box had up to date snapshots transferred
>> over, shut down the machine, and asked the datacenter team to check.
>> They indeed found that the drive was faulty and replaced it.
>>
>> It took about 4 hours for the drive to be resilvered, and that was it.
>> Back to normal with almost no issues -- apart from the few minutes that
>> the machine was down while its drive was being replaced.
>>
>> My takeaways:
>>
>> - use ZFS
>>
>> - take regular snapshots
>>
>> - replicate your snapshots to another machine
>>
>> - scrub your disks regularly (unlike fsck, this can be run while the
>>   drive is mounted and active)
>>
>> - monitor zfs health (I use this script from Calomel.org:
>>   https://calomel.org/zfs_health_check_script.html)
>>
>> The first three points are kinda obvious; the last two I picked up
>> from other, more experienced, ZFS users.
>>
>> I had been waiting for this day since I first started using ZFS years
>> ago and am very happy with that decision to use this filesystem.
>>
>> Thomas
>>
>> _______________________________________________
>> talk mailing list
>> talk at lists.nycbug.org
>> http://lists.nycbug.org:8080/mailman/listinfo/talk
>>
>
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org:8080/mailman/listinfo/talk


From viewtiful.icchan at gmail.com Wed Feb 6 18:15:27 2019
From: viewtiful.icchan at gmail.com (Robert Menes)
Date: Wed, 6 Feb 2019 18:15:27 -0500
Subject: [talk] Next NYC*BUG: tonight
In-Reply-To:
References:
Message-ID:

On my way. Should be there soon.

--Robert

On Wed, Feb 6, 2019, 17:25 Pat McEvoy wrote:

> Just confirmed with Suspenders, we will be on the first floor in the back.
>
> Patrick
>
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org:8080/mailman/listinfo/talk
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From ipsens at ripsbusker.no.eu.org Wed Feb 6 18:32:03 2019
From: ipsens at ripsbusker.no.eu.org (Ipsen S Ripsbusker)
Date: Wed, 06 Feb 2019 23:32:03 +0000
Subject: [talk] Next NYC*BUG: tonight
In-Reply-To:
References:
Message-ID: <1549495923.111562.1652502888.03D29DD9@webmail.messagingengine.com>

I advise against taking the 4/5 to tonight's meeting.


From mcevoy.pat at gmail.com Wed Feb 6 16:57:48 2019
From: mcevoy.pat at gmail.com (Pat McEvoy)
Date: Wed, 6 Feb 2019 16:57:48 -0500
Subject: [talk] Next NYC*BUG: tonight
Message-ID: <862959E6-DFEA-49EB-8586-0F0E8159BF7C@gmail.com>

Just confirmed we will be on the first floor in the back.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: image1.jpeg
Type: image/jpeg
Size: 423934 bytes
Desc: not available
URL:
-------------- next part --------------
Patrick


From ipsens at ripsbusker.no.eu.org Tue Feb 12 20:55:31 2019
From: ipsens at ripsbusker.no.eu.org (Ipsen S Ripsbusker)
Date: Wed, 13 Feb 2019 01:55:31 +0000
Subject: [talk] Concisely determining exit codes of background jobs
Message-ID: <20190213015532.3D8EEE4597@mailuser.nyi.internal>

As a tangent to Ivan's talk, we discussed the merits of checking the
exit status (or other indicators of success) of parallel calls to scp.
If I recall correctly, Ivan started multiple background scp processes
with &, and then he waited for them to complete with the following
command, which always exits 0.

    wait

The following command waits for the same background processes, and
it still exits 0 if all processes exited 0. It differs in that it
exits 1 if any process exited with something other than 0.

    wait `jobs -p`

Note that these are of course the standard wait(1) and jobs(1),
which are usually builtins in sh(1).


From matthewstory at gmail.com Wed Feb 13 14:40:32 2019
From: matthewstory at gmail.com (Matthew Story)
Date: Wed, 13 Feb 2019 11:40:32 -0800
Subject: [talk] Concisely determining exit codes of background jobs
In-Reply-To: <20190213015532.3D8EEE4597@mailuser.nyi.internal>
References: <20190213015532.3D8EEE4597@mailuser.nyi.internal>
Message-ID:

On Tue, Feb 12, 2019 at 5:56 PM Ipsen S Ripsbusker <
ipsens at ripsbusker.no.eu.org> wrote:

> As a tangent to Ivan's talk, we discussed the merits of checking the
> exit status (or other indicators of success) of parallel calls to scp.
> If I recall correctly, Ivan started multiple background scp processes
> with &, and then he waited for them to complete with the following
> command, which always exits 0.
>
> wait
>
> The following command waits for the same background processes, and
> it still exits 0 if all processes exited 0. It differs in that it
> exits 1 if any process exited with something other than 0.
>

This isn't quite right. Wait with multiple pids will exit with whatever the
exit code was of the last pid specified, so:

    $ { sh -c 'sleep 1; exit 1;' & sh -c 'sleep 1; exit 0;' & }; wait `jobs -p`
    [1] 237790
    [2] 237791
    [1]-  Exit 1    sh -c 'sleep 1; exit 1;'
    [2]+  Done      sh -c 'sleep 1; exit 0;'
    $ echo $?
    0

More from the spec: "If one or more operands were specified, all of them
have terminated or were not known by the invoking shell, and the status
of the last operand specified is known, then the exit status of *wait*
shall be the exit status information of the command indicated by the
last operand specified."

source: http://pubs.opengroup.org/onlinepubs/007904975/utilities/wait.html

>
> wait `jobs -p`
>

Why not just use $!?

>
> Note that these are of course the standard wait(1) and jobs(1),
> which are usually builtins in sh(1).
>
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org:8080/mailman/listinfo/talk
>

--
regards, matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From ipsens at ripsbusker.no.eu.org Thu Feb 14 01:08:44 2019
From: ipsens at ripsbusker.no.eu.org (Ipsen S Ripsbusker)
Date: Thu, 14 Feb 2019 06:08:44 +0000
Subject: [talk] Concisely determining exit codes of background jobs
In-Reply-To:
References: <20190213015532.3D8EEE4597@mailuser.nyi.internal>
Message-ID: <20190214060845.03343E412F@mailuser.nyi.internal>

Matthew Story writes:
> This isn't quite right. Wait with multiple pids will exit with whatever the
> exit code was of the last pid specified, so:
>
> ...
>
> Why not just use $!?

Both points are wonderful! wait must be handled differently, and $! is
the correct alternative to jobs -p. I was trying to avoid $! as that
would be more lines than the other suggestions, but it is more correct,
as it avoids some race conditions.

If we include both points, the procedure is like this (untested):

    __pids=

    append_pid() {
        __pids="$__pids $!"
    }

    safe_wait() {
        status=0
        for pid in $__pids; do
            wait $pid || status=1
        done
        return $status
    }

    for remote in foo1 foo2 foo3 foo4 foo5; do
        ssh "$remote" bar baz &
        append_pid
    done
    safe_wait

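To see the difference concretely, here is a small self-contained
variation on that procedure; the sh -c jobs are stand-ins for the real
scp or ssh calls, and everything else is just the $! bookkeeping
discussed above:

    #!/bin/sh
    # Stand-ins for the parallel transfers: the first job fails,
    # the other two succeed.
    pids=
    for cmd in 'sleep 1; exit 1' 'sleep 1; exit 0' 'sleep 1; exit 0'; do
        sh -c "$cmd" &
        pids="$pids $!"
    done

    # A bare "wait" reports 0 here, and "wait $pids" reports the status
    # of the *last* pid (also 0); waiting on each pid individually is
    # what catches the failed first job.
    status=0
    for pid in $pids; do
        wait "$pid" || status=1
    done
    echo "aggregate status: $status"    # prints 1
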
Wait with multiple pids will exit with whatever the > exit code was of the last pid specified so: > > ... > > Why not just use $!? Both points are wonderful! wait must be handled differently, and $! is the correct alternative to jobs -p. I was trying to avoid $! as that would be more lines than the other suggestions, but it is more correct, as it avoids some race conditions. If we include both points, the procedure is like this (untested). __pids= append_pid() { __pids="$__pids $!" } safe_wait() { status=0 for pid in $__pids; do wait $pid || status=1 done return status } for remote in foo{1,2,3,4,5}; do ssh "$remote" bar baz & append_pid done safe_wait From spork at bway.net Tue Feb 26 23:26:09 2019 From: spork at bway.net (Charles Sprickman) Date: Tue, 26 Feb 2019 23:26:09 -0500 Subject: [talk] where to mail some stickers? Message-ID: <5C9A0D88-519F-44C3-A1C2-C63CA00B053F@bway.net> Hi all, I asked this guy for some stickers: https://twitter.com/FiLiS/status/1090940241194696704 I?m honestly not sure who he actually is, but he?s German and he sent me some stickers: https://i.imgur.com/zhcmnMw.jpg I told him that if he?s mailing from overseas to add a few so I can send them to you guys. So where can I mail these for distribution at the next meeting? I have like 20 or so beyond the 4-5 I?m keeping. :) LMK - Charles From george at ceetonetechnology.com Thu Feb 28 10:00:00 2019 From: george at ceetonetechnology.com (George Rosamond) Date: Thu, 28 Feb 2019 15:00:00 +0000 Subject: [talk] where to mail some stickers? In-Reply-To: <5C9A0D88-519F-44C3-A1C2-C63CA00B053F@bway.net> References: <5C9A0D88-519F-44C3-A1C2-C63CA00B053F@bway.net> Message-ID: <4d4c927b-2f01-dd21-c74e-1b0a1f5102a9@ceetonetechnology.com> Charles Sprickman: > Hi all, > > I asked this guy for some stickers: > > https://twitter.com/FiLiS/status/1090940241194696704 > > I?m honestly not sure who he actually is, but he?s German and he sent me some stickers: > > https://i.imgur.com/zhcmnMw.jpg > > I told him that if he?s mailing from overseas to add a few so I can send them to you guys. So where can I mail these for distribution at the next meeting? I have like 20 or so beyond the 4-5 I?m keeping. :) Cool stuff Spork! Great. You can send to me, although not positive I'll be at the Wed meeting. But they'll get around at some point... I'll ping you offlist g