Logging with Logstash updates (new charm branch proposals)

Samuel Cozannet samuel.cozannet at canonical.com
Wed Oct 22 07:35:08 UTC 2014


Hi Charles,

Just focusing on the logstash-agent charm, I really like how you managed
additional log files with a server-centric view. Thanks for that. Adding
new "log management workloads" is now much easier than it used to be, and
the charm has become much more "cloud opinionated", which I like.

One thing I would eventually add is a way to import inputs from an external
repository without having to update the charm itself. There are about 40
inputs available for Logstash, divided into two groups:
* Those that do remote collection (the logstash agent collects data from a
remote node), which therefore require either a specific relation or the
agent to in fact be a broker (so I would host those only on an indexer
node, or create a new class of logstash agent that is not subordinate).
==> Those can be handled later or community-based. They are collectd,
drupal_dblog, elasticsearch, ganglia... and the rest of the remote
logstash inputs from http://logstash.net/docs/1.4.2/. I would see that as
a relation exposing a name + port (+ gist?), which would basically make
logstash create a new input_<name>_<port>.conf file with the content of
the gist or a default configuration.
* Those that only talk to a local instance, and thus match the subordinate
mechanism. For those, offering a way to download a remote configuration
file would really be helpful. My personal view is that people would
probably maintain a git repo with many configuration files (one per input,
or maybe one per class), so a way to manage this would be to offer them
two options: one for the git repo, and one for a cron refresh of the
configuration.
In essence, this would trigger a download of the git repo, copy or link
the files into the logstash config folder, and restart the agent.

I think this would make the charm even more awesome :)

More feedback as I test this.
Thanks for this; it opens a new field for moving to production with Juju!
Sam



On Mon, Oct 20, 2014 at 10:57 PM, Charles Butler <
charles.butler at canonical.com> wrote:

> I just pushed up a new demonstration bundle which leverages my changes to
> the trusty logstash branch for routing all logs through the redis interface
> of the indexer (which is apparently faster than just scaling elasticsearch
> and directing the inputs there...)
>
> I'd love it if someone could pull this branch and give it a go with
> feedback:
>
> https://code.launchpad.net/~lazypower/charms/bundles/big-data-logger/bundle
>
> branch that, and deploy with:
>
> juju deployer -c bundles.yaml
> juju expose kibana
>
> You should be able to see events from every service in the cluster at:
>
> http://{ip-of-kibana}/#/dashboard/file/logstash.json
>
>
> Thanks in advance for any feedback!
>
> All the best,
>
> Charles
>
> --
> Juju mailing list
> Juju at lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>


-- 
Samuel Cozannet
Cloud, Big Data and IoT Strategy Team
Strategic Program Manager
Changing the Future of Cloud
Ubuntu <http://ubuntu.com> / Canonical <http://canonical.com> UK LTD
samuel.cozannet at canonical.com
+33 616 702 389