[nycbug-talk] Hypothetical: the end of the sysadmin/systems engineer/DBA?

Isaac Levy isaac at diversaform.com
Wed Mar 24 11:55:22 EDT 2010


Hey All,

Hrm, regarding visions of a fairly "dystopian future for sysadmins in the cloud",

On Mar 24, 2010, at 11:12 AM, Chris Snyder wrote:

> On Wed, Mar 24, 2010 at 12:00 AM, Matt Juszczak <matt at atopia.net> wrote:
> 
>> So I wonder: will there be a time when those who have an idea simply spin up
>> some ready to go cloud servers, point and click the necessary security
>> they'd like and setup they'd like, and run with it?  At that point, would
>> the only positions remaining be developers/programmers?
> 
> You may be underestimating the experience and judgement required to
> enact effective security and tuning policies, even if (especially if?)
> the interface is point-and-click.
> 
> That kind of environment will make a good sysadmin even more valuable,
> because you have to be able to "see through the cloud" to avoid the
> gotchas that could take down or corrupt your systems. It's a different
> set of skills, perhaps, but they are still skills.
> 
> On a related note, imagine the chaos that will ensue when one of the
> big cloud providers discovers that a disgruntled sysadmin rooted
> millions of systems on his way out the door. Or can't that happen?

Good thoughts in this thread, I'll toss in one other angle:

  computational resource hype vs. reality

(Reminder for those who don't know me, I was a partner running a Jail-based Virtual Private Server hosting company, long before anyone called it a cloud...)

In the last year, I've engaged 'the cloud' in 2 variants:

---------------------
COMPUTATIONAL CLUSTER
(grid computing for a data-crunching application)
I performed a fairly exhaustive cost comparison in an attempt to move our application from racks of servers over to Amazon's EC2, and though I can't share specifics (company property), I can share the outcome:

+ Co-located servers in NJ, 32 servers per rack, many SATA disks (power constraints govern density)
vs.
+ Equivalent Amazon EC2 instances over 1 year, with instances 'turned on' 15% of the time (also with the penalty of taking about an hour to spin our app up on 32 machines)

OUTCOME: Amazon EC2 came out at just over 3x the price, even with only 15% instance uptime.

(this doesn't even *touch* the amount of application changes/work that would have been needed to mitigate the risks of putting company data into 'the cloud' within acceptable tolerances for the business)
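
For anyone who wants to sanity-check this kind of comparison themselves, here's a rough back-of-the-envelope sketch in Python.  Every number in it is a made-up placeholder (the real figures are company property), so it only shows the shape of the math, not our actual 3x result:

  # Back-of-the-envelope colo vs. EC2 comparison.  ALL figures below are
  # hypothetical placeholders -- plug in your own rack, hardware, instance,
  # and storage numbers.

  SERVERS = 32                      # one rack's worth of machines

  # --- co-located rack (hypothetical) ---
  rack_monthly     = 2500.0         # space, power, bandwidth ($/month)
  server_amortized = 100.0          # hardware spread over 3 years ($/server/month)
  colo_annual = 12 * (rack_monthly + SERVERS * server_amortized)

  # --- 'equivalent' EC2 instances (hypothetical) ---
  instance_hourly   = 2.00          # per-instance on-demand rate ($/hour)
  utilization       = 0.15          # instances 'turned on' 15% of the time
  ebs_gb_per_server = 2000          # many SATA disks' worth of EBS storage
  ebs_rate          = 0.10          # EBS ($/GB-month), billed whether on or off
  hours_per_year = 365 * 24
  ec2_annual  = SERVERS * instance_hourly * hours_per_year * utilization
  ec2_annual += 12 * SERVERS * ebs_gb_per_server * ebs_rate

  print("colo : $%9.0f / year" % colo_annual)
  print("EC2  : $%9.0f / year (at %.0f%% utilization)" % (ec2_annual, utilization * 100))
  print("ratio: %.1fx" % (ec2_annual / colo_annual))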

----------------------------
WEB APPLICATION 'OFFLOADING'
(offloading file upload/download, images, and storage to EC2)
In a preliminary test of EC2 for serving up web content, the servers simply performed sub-optimally, both for serving web content and for accepting files (via HTTP and our apps).  We geographically distributed nodes, tuned the servers, and maxed out the a-la-carte server options, and they still didn't perform anywhere close to our then-paltry web application server infrastructure.
When we engaged Amazon on the issue, their support had nothing to offer to help, since our usage was such a minor profit for them; we were on our own, and even more constrained than if we'd dropped in our own boxes.

OUTCOME: The web bandwidth did not come close to meeting expectations, and the servers themselves under-performed their advertised specs (especially in comparison to actual hardware).
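
For flavor, the kind of crude single-stream check we'd run against a node looked something like the Python sketch below.  The URL and test file are hypothetical placeholders, and our real test harness was more involved than this:

  # Crude single-stream throughput check against one node.
  # URL is a hypothetical placeholder for a large static test file.
  import time
  import urllib2   # Python 2, which is what was on the boxes at the time

  URL = "http://test-node.example.com/testfile-100mb.bin"

  start = time.time()
  data = urllib2.urlopen(URL).read()
  elapsed = time.time() - start

  mbytes = len(data) / (1024.0 * 1024.0)
  print("%.1f MB in %.1f s -> %.2f MB/s" % (mbytes, elapsed, mbytes / elapsed))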

--
Extremely disappointing in both cases. To bring it back to this thread: even in the context of better 'Enterprise' cloud offerings, shared infrastructure will always have its place, but I strongly believe it will never 'take over' or meet the expectations generated by the hype.

With that, just as a reality check based on computational needs, I don't believe we'll all 'become developers/programmers' anytime soon (or AI robots would be running entire datacenters by now, at the least :)

my .02¢

Best,
.ike




