[nycbug-talk] New Setup Questions
Brian Gupta
brian.gupta at gmail.com
Sun Apr 26 20:44:56 EDT 2009
What are these packages? Tarballs? Native OS packages?
Typically people do one of three things:
1) Use network repos for packages, e.g. ports repos, Debian apt repos, etc.
(Setting up a standards-compliant network repo and using the corresponding
puppet package provider is probably the best practice; see the sketch after
this list for all three options.)
2) Use puppet's built-in file server and serve the packages out of puppet
itself, as one would distribute config files. Puppet probably isn't the
best file server in the world, but the advantage is that you don't have to
set up separate servers. We started with this option, and then eventually,
for performance and scaling, set up custom network repos to handle custom
packages. (Which we built using puppet.) :)
3) Use some other webserver to distribute the packages, and use execs to
download the packages onto the hosts. (You can still manage the package
resources in puppet, but use the other webserver for distribution.)
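To make that concrete, here's a minimal sketch of what each option can look
like as a puppet resource (the package names, paths, and URLs are made up
for illustration, and option 3 assumes a FreeBSD host with fetch(1)):

    # Option 1: let the native package provider install from your repo.
    package { "php5":
        ensure => installed,
    }

    # Option 2: serve the package file from puppet's own file server
    # (this assumes a "files" mount in fileserver.conf).
    file { "/tmp/mypkg-1.0.tbz":
        source => "puppet:///files/packages/mypkg-1.0.tbz",
        owner  => "root",
        mode   => "644",
    }

    # Option 3: fetch the package from a separate webserver.
    exec { "fetch-mypkg":
        command => "/usr/bin/fetch -o /tmp/mypkg-1.0.tbz http://pkgs.example.com/mypkg-1.0.tbz",
        creates => "/tmp/mypkg-1.0.tbz",
    }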
One thing: I think you are probably at a stage where you want to keep your
puppet files and puppet code in SVN. Let me know if you need help with
this. (Basically you edit the files on your desktop or laptop, check them
into SVN, and then check them out on the puppetmaster server. We do this
automatically: if someone checks something into SVN, within 5 minutes it
will be on the puppetmaster. I'm not sure if this is a best practice, but
it sure does make life simple.)
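For what it's worth, that auto-checkout can itself be managed by puppet as
a cron resource on the puppetmaster. A minimal sketch, assuming the
manifests live in /etc/puppet and svn is installed in /usr/local/bin:

    cron { "puppet-svn-update":
        command => "cd /etc/puppet && /usr/local/bin/svn update -q",
        user    => "root",
        minute  => "*/5",
    }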
Cheers,
Brian
On Sun, Apr 26, 2009 at 7:04 PM, Matt Juszczak <matt at atopia.net> wrote:
> We're working on using puppet, but I had a quick question. I've set up a
> build server and built packages for everything we need. Now I'm trying to
> figure out the best way to deploy those packages. My initial reaction is to
> use an SVN repository to "check out" the packages to each box and install
> the ones needed. But does puppet include some sort of file transfer
> configuration so I can push packages with it?
>
>
>
> On Mon, 20 Apr 2009, Matt Juszczak wrote:
>
>> Makes sense :) I'm actually enjoying working with it right now. Tying it
>> into LDAP.
>>
>> On Sat, 18 Apr 2009, Brian Gupta wrote:
>>
>> Just realized a thinko in my original email:
>>>
>>> "I'm gonna talk about puppet since that's what I know. With puppet, since
>>> you are running a centralized configuration
>>> management system, you can keep your config files in puppet."
>>>
>>> was supposed to read:
>>>
>>> "I'm gonna talk about puppet since that's what I know. With puppet, since
>>> you are running a centralized configuration
>>> management system, you can keep your config files and puppet recipes in
>>> SVN."
>>>
>>> On Sat, Apr 18, 2009 at 11:54 PM, Brian Gupta <brian.gupta at gmail.com>
>>> wrote:
>>> Feel free to ping me if you have any questions, or better yet,
>>> ping the mailing list for the NY Puppet UG:
>>> http://groups.google.com/group/puppet-nyc
>>>
>>>
>>> On Sat, Apr 18, 2009 at 7:17 PM, Matt Juszczak <matt at atopia.net>
>>> wrote:
>>> Setting this up on two test servers and seeing how it does
>>> :) I had just read before that it had serious limitations
>>> working with multiple operating systems.
>>>
>>>
>>> On Sat, 18 Apr 2009, Brian Gupta wrote:
>>>
>>> Matt,
>>>
>>> I'm gonna talk about puppet since that's what I know. With
>>> puppet, since you are running a centralized configuration
>>> management system, you can keep your config files in
>>> puppet.
>>>
>>> Puppet understands a number of resource types. These
>>> include:
>>> - Files
>>> - Users
>>> - Packages
>>> - Services
>>> - Cron
>>> - sshkeys
>>>
>>> and many more. See here for a relatively full list:
>>> http://reductivelabs.com/trac/puppet/wiki/TypeReference
>>>
>>> In addition, Puppet can exec arbitrary commands in the event
>>> that what you need to do is not yet supported.
>>>
>>> Puppet lets you structure nodes and classes in an object
>>> hierarchy. Very cool when working with related machine types.
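>>>
>>> For example, a minimal sketch (the class, package, service, and node
>>> names here are made up):
>>>
>>>     class base {
>>>         package { "sudo": ensure => installed }
>>>     }
>>>
>>>     class webserver inherits base {
>>>         service { "apache22": ensure => running }
>>>     }
>>>
>>>     node "www1.example.com" {
>>>         include webserver
>>>     }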
>>>
>>> I'm curious how you found puppet limited? (Particularly as
>>> compared to your SVN proposal).
>>>
>>> Thanks,
>>> Brian
>>>
>>> On Fri, Apr 17, 2009 at 11:01 PM, Matt Juszczak
>>> <matt at atopia.net> wrote:
>>> That's what I'm trying to figure out. These two
>>> questions sort of intertwine. If we decide to
>>> go
>>> the "ports scripted" route, we'll most likely have
>>> scripts like this in SVN:
>>>
>>> ./webserver-setup.sh -h<option1> -i<option2>
>>>
>>> which will basically run cvsup with /etc/ports-supfile,
>>> install the necessary ports (all the same versions, of course),
>>> install PHP, etc. Then we'd push the configuration
>>> files via svn as well.
>>>
>>> If we decide to go a package route, we might even put
>>> the packages in SVN, so that you can "check out" the
>>> repository of packages.
>>>
>>> I've looked at puppet, and I've looked at CFEngine:
>>> puppet seems limited, and CFEngine seems complex.
>>> Seems like it's pick your poison.
>>>
>>>
>>> On Fri, 17 Apr 2009, Brian Gupta wrote:
>>>
>>> Not to start up the cfengine vs puppet debate again,
>>> but one question. How do you plan to handle
>>> package installation?
>>> That's one thing where a CMS can really help.
>>>
>>> -Brian
>>>
>>> On Fri, Apr 17, 2009 at 1:42 PM, Matt Juszczak
>>> <matt at atopia.net> wrote:
>>> We're launching an entirely new setup across
>>> FreeBSD boxes - about 50
>>> servers total. I have two things which I'm
>>> still somewhat debating, and
>>> thought I'd get a second opinion.
>>>
>>> First, instead of using CFEngine to manage the
>>> boxes, I was thinking of
>>> using an SVN-based setup. Each server would
>>> check out its appropriate
>>> files via SVN, and I would "trigger" each server
>>> when it needs an update
>>> via config files that would be fetched often via
>>> either ftp or svn. This
>>> is neat and flexible, but not as complex as
>>> CFEngine. Thoughts?
>>>
>>> Second, I'm trying to decide how to do packages.
>>> Across the 50 servers
>>> we'll have about 6 or 7 different hardware sets.
>>> Some will be Dell, some
>>> IBM, etc. Most will be 64-bit boxes (to address
>>> larger memory ranges).
>>> Should I set up a single server for each class
>>> (and do "make package" to
>>> create packages for each box), or should I just
>>> compile ports from source
>>> on each box, verifying that I'm installing the
>>> same package version each
>>> time (which would allow each box to take
>>> advantage of its specific hardware)?
>>>
>>> Those are my two questions, and I'd appreciate
>>> any input anyone can
>>> provide. Thanks!
>>>
>>> -Matt
--
- Brian Gupta
New York City user groups calendar:
http://nyc.brandorr.com/