matthew at FreeBSD.org
Mon Aug 7 09:22:12 UTC 2017
On 07/08/2017 07:20, Dennis Glatting wrote:
> On Sun, 2017-08-06 at 22:39 -0700, Aleksandr Miroslav wrote:
>> I'm looking for a mechanism to collect and store all logs into a
>> centralized location. I'm not looking for a fancy graphical interface
>> (a la Splunk) to search those logs just yet, just collecting them on
>> centralized server is fine for the moment.
>> Is there something available in ports/base that I can use for this
>> purpose? I took a quick look at ELK, it seems overly complicated, but
>> I've never used it.
> The simple approach is to have a central MySQL database fed from
> rsyslog across the servers of interest. Custom devices, such as HVAC,
> could point to a rsyslog server which then feeds the database.
> Periodically run scripts against the database to generate summary
> information, build firewall rule sets, and for maintenance.
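As a sketch of that rsyslog-to-MySQL feed (the server name, database name, and credentials below are placeholders; the stock table schema is created by the createDB.sql file shipped with the rsyslog-mysql package):

```
# /etc/rsyslog.conf on the central collector -- illustrative only
module(load="ommysql")          # provided by the rsyslog-mysql package
module(load="imudp")
input(type="imudp" port="514")  # accept UDP syslog from the other hosts

# server/db/uid/pwd are placeholders for your own values
*.* action(type="ommysql" server="localhost" db="Syslog"
           uid="rsyslog" pwd="changeme")
```

The summary/maintenance scripts then just run ordinary SQL queries against that table.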
> For weird things, such as netflow off the switches and routers,
> forward the flows to a server, parse them, and then stuff them into the
> database.
> You can also create multi-master databases in case one goes offline, or
> for local optimization. I was looking at Cassandra for multi-master.
You can just use the default system syslog to collect the logs onto a
central logging server, but this writes everything to flat log files,
so it is probably only satisfactory for a fairly low-traffic setup.
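On FreeBSD the base-system setup is roughly this (the loghost name and network are placeholders for your own):

```
# On each client, in /etc/syslog.conf -- forward everything:
*.*    @loghost.example.com

# On the central server, in /etc/rc.conf -- FreeBSD's syslogd
# does not accept remote messages by default, so allow your net:
syslogd_flags="-a 192.0.2.0/24"
```

See syslogd(8) for the exact `-a allowed_peer` syntax.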
rsyslog will allow you much greater flexibility in where and how you
write the logging data, including creating separate log files for each
day or hour, or writing into a database or interfacing with various ELK
type things like logstash.
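For instance, a per-host, per-day file layout in rsyslog might look something like this (the path and port are just illustrative):

```
# /etc/rsyslog.conf on the collector
module(load="imudp")
input(type="imudp" port="514")

# One file per sending host per day
template(name="PerHostDaily" type="string"
         string="/var/log/remote/%HOSTNAME%/%$YEAR%-%$MONTH%-%$DAY%.log")

*.* action(type="omfile" dynaFile="PerHostDaily")
```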
Note that anything based on (r)syslog doesn't guarantee successful
delivery of log data to your server. Anything that fails to be received
will be silently dropped. There's no concept like queuing up log
messages for later delivery should the log server be temporarily
off-line[*]. This is a fairly typical requirement: you don't want your
webserver to stop responding simply because it cannot send syslog
messages for a while.
If you want more resilience, then consider an ElasticSearch cluster --
this will work best if you use a parser on the incoming log data to
structure the messages appropriately for searching. Use Kibana as a
query tool or to generate dashboards showing live performance data.
Something like logstash will work for processing the raw log messages
into something more readily searchable in ElasticSearch. However, think
twice about running logstash clients on your frontend machines -- that's
a big fat dollop of java or python to add to the load on your already
hardworking servers. You can use (r)syslog to feed data into a remote
Logstash setup pretty well.
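A minimal sketch of that last arrangement (the hostname and port are assumptions, not recommendations):

```
# rsyslog on the frontend, forwarding over TCP:
*.* action(type="omfwd" target="logstash.example.com"
           port="5514" protocol="tcp")

# Matching Logstash pipeline config on the collector,
# using the stock syslog input plugin:
input {
  syslog {
    port => 5514
  }
}
```

That keeps the heavyweight parsing off the frontends while Logstash still gets structured events to push into ElasticSearch.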