[talk] Syslog Eats Rsyslog

Raul Cuza raulcuza at gmail.com
Wed Aug 5 14:21:02 EDT 2015

On Tue, Aug 4, 2015 at 10:47 PM, Jesse Callaway <bonsaime at gmail.com> wrote:
> The logstash "syslog input" receiver doesn't hold to either of these
> specifications. You'll have to set up a proper rsyslog receiver on the other
> end and then pipe it to a socket using the "unix input".
> For more info you are probably best to hit up the elasticsearch fora.
> On Tue, Aug 4, 2015 at 6:19 PM, Raul Cuza <raulcuza at gmail.com> wrote:
>> Hola,
>> I've been researching this too long and not getting headway. I'm
>> hoping this is a "doh!" question.
>> Unlike RFC 3195, my reading of RFC 5424 indicates that the 1024
>> message size is no longer in place. But when I try to tell rsyslog
>> (v7.4.4) this I still get my long messages broken up into 1k chunks. I
>> want to send jumbo log entries (i.e. ~4k) over the wire to a logstash
>> server that will munch it into JSON and throw it up into
>> elasticsearch.
>> Am I trying to do the impossible with rsyslog? I can't run logstash on
>> the device that is generating the logs because it is extremely
>> resource limited.
>> Thanks for any help you can provide.
>> Raúl
> -jesse
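For what it's worth, rsyslog does have its own global cap, controlled
by the `$MaxMessageSize` directive, which has to appear before any
input modules are loaded. A sketch of the relevant rsyslog.conf lines
(the 8k value, port, and hostname here are assumptions, not anything
from this thread):

```
# Raise rsyslog's global message-size cap. Must come before any
# $ModLoad of input modules or it is silently ignored for them.
$MaxMessageSize 8k

# Forward over TCP ("@@" prefix). Plain UDP ("@") would still be
# constrained by datagram size on many setups, so jumbo messages
# generally want TCP.
*.* @@logstash.example.com:5514
```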

Sorry about the OT. I exhausted my number of smart decisions for the
day and just sent an email to my default $group_of_smart_people.

My current guess is that `logger` is imposing the 1024-byte limit. My
C skills are not strong enough for me to prove it by reading the
source.
I did prove that rsyslog itself does not have a 1024-byte limit. I was
using `logger` in my shell script to generate syslog events. When I
switched my rsyslog.conf to read from a file where my shell script
wrote lines of increasing size (thanks to `curl` and the Bacon Ipsum
API), I was able to get messages over 5k in size into
`/var/log/messages`.
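For anyone wanting to repeat the test, the file-reading side was done
with rsyslog's `imfile` module. A sketch in v7 legacy syntax (file
names and tag are assumptions, not my actual paths):

```
# Watch a file that the generator script appends to; each line
# becomes a syslog event, bypassing logger entirely.
$ModLoad imfile
$InputFileName /var/tmp/bacon.log
$InputFileTag bacontest:
$InputFileStateFile stat-bacontest
$InputRunFileMonitor
```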

Thank you, again.

