Yeah, ssh -t host [h]top is the existing default and rtop doesn't seem to bring anything more to the table. Maybe it's another one of those "I wanted to learn golang" projects, but there's nothing in the README.md about it.
I thought at first that this was going to be a thing where you could feed it a list of hostnames and it would go to all the hosts in parallel and show you aggregated metrics across them, like "what is the distribution of CPU load caused by my main app server" or "what is the distribution of free memory on these 20 machines over the next N minutes while I change something", sort of a halfway point between opening a few windows with "ssh -t host1 top", "ssh -t host2 top", etc., and having full-blown centralized stats gathering like with e.g. datadog.
That would be a pretty sweet tool; I haven't heard of anything in this space yet.
So the whole point of what I was getting at is that it's a halfway point between super low-tech "ssh -t host top" and full-blown metrics reporting, the kind of thing where you want to check on some things across a bunch of hosts in a one-off way,
1) when you don't have something like opsdash or datadog already set up
or
2) when you don't want to bother creating some new metrics just for this one-off thing that will live forever in your auto-complete list of metrics
Having e.g. datadog set up is strategic: knowing what is going on in your fleet over time (over long periods of time) is very useful (for noticing performance regressions, finding out cyclic traffic patterns, the list goes on). Something like what I'm describing would be tactical rather than strategic. It's bubbling up to the top of my "when I have some spare time" side projects list, just because I want to explore whether it would be useful.
It's in beta, so I imagine they want to be able to manage the support load as they tweak various bits and pieces of the system before having it go live.
That's true. I should have been more clear: If it's not using any of their resources for me to run it (which is what I assume "self hosted" means), what purpose is served by the invite only system? While the other commenters mentioned support and stuff which kind of makes sense, it feels to me like their definition of self hosted is different from my definition.
Well, "self-hosted" to me implies that they don't have anything on their end that has to run per-instance, and per-instance load on their side is the usual reason to do invite-only betas.
The agentless/agentful security question is an important one. I mean, all existing snmpd implementations suck, right? But the idea is that if you run snmpd (or another agent) on your monitored servers and an attacker takes over your monitoring server, they still have to find an exploit in snmpd or whatever agent you use; whereas in an agentless setup like this, if your attacker gains access to your monitoring server, said attacker has a (hopefully non-root) shell on all the monitored boxes.
On the other hand, installing this agentless setup is going to be dramatically easier, and it's way less likely to accidentally blow up the box being monitored than, say, snmpd.
(Clearly, with snmpd and other agents, you want to make it so that only the monitoring server can see those services, either through tunnels or what have you. Leaving an snmpd port open to the general internet is crazy.)
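Something along these lines in snmpd.conf is what I have in mind (a sketch only; the addresses are made up and the directives vary a bit between snmpd flavors):

    # bind to the management interface only, not 0.0.0.0
    agentAddress  udp:10.0.0.5:161
    # read-only community, reachable only from the monitoring server
    rocommunity   notpublic  10.0.0.1/32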
> whereas in an agentless setup like this, if your attacker gains access to your monitoring server, said attacker has a (hopefully non-root) shell on all the monitored boxes.
I think the idea with rtop in particular is not to have it running headless ever, but you'd only use it interactively. There would not be a "monitoring server", at least not running rtop.
If you wanted to use it this way, note that rtop currently sshes around places and then execs a bunch of local commands. That gives me both the heebies and the jeebies. I'd much rather deploy an rtop-server compiled binary and restrict the ssh key that rtop uses to connect with something like "command=/usr/bin/rtop-server" in the .ssh/authorized_keys file for the local user that will accept rtop-initiated ssh connections.
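To make that concrete, the restricted entry I'm picturing looks roughly like this (a sketch; rtop-server is the hypothetical agent binary from above, and the key material is elided):

    # ~/.ssh/authorized_keys on each monitored host
    command="/usr/bin/rtop-server",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... rtop-monitoring

That way, even if that key leaks, all it can be used for is running the one reporting binary.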
>I think the idea with rtop in particular is not to have it running headless ever, but you'd only use it interactively. There would not be a "monitoring server", at least not running rtop.
Aah. I misunderstood. I've not figured out how to run a cluster without giving myself ssh access to all of the boxes through one of my workstations or a jump box or something, so I'm generally okay with a program that automates some of that SSHing I do, like ansible; far more comfortable than I am with a box that runs some monitoring web frontend having any kind of access to my backend servers. (I usually try to give that box read-only access to the storage that another server, which queries the agents, writes to.)
To be clear, I'm not super comfortable with this situation, but I don't see any way around having my own login everywhere, or any way to make it so that, if I do have my own login everywhere, logging in from a compromised machine is not game over.
(As an aside, you and I mean different things by headless; usually when I say 'headless' I mean 'the video port is disabled or unused'. All the servers I use are managed via serial console, and I would call them 'headless'. I think my usage is more standard than yours, but I could be wrong.)
>If you wanted to use it this way, note that rtop currently sshes around places and then execs a bunch of local commands. That gives me both the heebies and the jeebies. I'd much rather deploy an rtop-server compiled binary and restrict the ssh key that rtop uses to connect with something like "command=/usr/bin/rtop-server" in the .ssh/authorized_keys file for the local user that will accept rtop-initiated ssh connections.
Yeah, that is what I meant by "some sort of agent." - I personally make rather extensive use of the forced-command bit in ssh.
`go get` seems to be increasingly considered harmful -- for example, its poor dependency versioning practices promote non-reproducible builds. See Dave Cheney's `gb` project for an alternative. [1]
As a fellow gopher, I've decided to consider support for `go get` a non-goal in my projects for similar reasons.
"go get" on its own isn't to blame -- the culture folks behind Go are trying to promote is that dependencies are intrinsically part of your code and thus you have to maintain them. I'm not going to say that version pinning harms this, but the lack of it is no reason to throw away good conventions around imports mapping to URLs, etc.
Instead of depending on a running agent, SSH-based remote monitors depend on the consistency of the SSH configuration across the fleet of machines. That consistency is sometimes non-trivial to achieve in heterogeneous environments (usernames, authentication, permissions, sshd_config...).
This isn't directed at this particular post, but more to HN in general. Why do I now constantly see "Tool XYZ (written in Go)" as though this is a hugely important aspect? I've never seen a "Tool ABC (written in Java)", so what's so great about Go? I know the features of the language help ensure "safe" development, but if someone was releasing software, I'd expect it to be relatively safe no matter what the language. So why is a tool being written in Go so important?
To me, "written in Go" is essentially shorthand for zero-dep, statically linked, single binary deployment.
This is a pretty big selling point for command-line utilities like rtop, as opposed to utilities written in Java/Python/Ruby/Node/etc. which require their respective runtime to be installed.
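Getting that artifact is basically a one-liner (a sketch, run from a checkout of the project):

    # CGO_ENABLED=0 keeps it from linking against the system libc
    CGO_ENABLED=0 go build -o rtop .
    file rtop    # should report a statically linked executable

Copy the resulting file to any Linux box of the same architecture and it just runs.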
I would agree with this, but rtop is not meant to be installed on every server you want to monitor. The point is that it can remotely monitor any number of servers without installing an agent on them first. So, being written in Go is kind of "useless" in this case, as you only need to install it on your C&C server, so to speak.
I'm curious to know more about these. Are any free? Are any open source? These aren't rhetorical questions, there's a ton of Java stuff I'd want to test it with.
I tried robovm, it actually worked nicely but apparently only does 32 bit. Also, the simple reference HelloWorld was nearly 10MB, the build process took 20 seconds, and running it took 15x longer than a comparable one in Go.
I don't mean to knock it, what it does is amazing, but it's not a viable replacement for simple static command line tools.
It is just as easy to achieve that (zero-dep, statically linked, single binary) in plenty of other languages. I think this is more a matter of best practices than of the language itself.
It doesn't seem that easy. How would you do that in Python or Ruby? Indeed, you can't easily do it with anything that links against glibc, as it won't do static linking for some things, so even for C code it can be non-trivial (unless you install musl libc).
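For C the musl route is something like this (a sketch; hello.c is just a stand-in):

    # musl-gcc is the wrapper shipped with musl libc
    musl-gcc -static -O2 -o hello hello.c
    ldd ./hello    # "not a dynamic executable"

whereas a glibc -static build will still complain at link time if the code touches NSS functions (getpwnam and friends), because those want to load the matching glibc dynamically at runtime.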
Most != all. Which version of python are you talking about? Has it been updated recently? Are there other python libraries required by the project? I could go on. There's a huge difference between "no dependencies" and "a simple dependency".
This is a client-side tool. You do not have to deploy it. Any such tool would have zero dependencies server side. Client-side you can pip install most things in a few seconds.
Shipping Python projects isn't easy. Even if you have a perfect package on PyPI that specifies its dependencies, it may conflict with other tools using other versions of those libraries.
Compiling Python does not feel right.
Using native packages is better[1], but you have to plan it out. You can package virtualenvs into native packages[2] - and this is probably the most solid way, but you'll need packaging setups for various package managers.
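One concrete flavor of that (not necessarily what [2] describes; names and versions are made up) is building the virtualenv in its final location and handing the whole directory to fpm:

    virtualenv /opt/myapp
    /opt/myapp/bin/pip install myapp
    fpm -s dir -t deb -n myapp -v 1.0 /opt/myapp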
In short, "written in Go" for me means that, with high probability, I can just run "go get x" - or download a binary.
Since we are at it, I'd love to hear about better shipping strategies for Python projects.
When you say shipping, do you mean deploying production apps to the cloud or just downloading/using top-like tools as a user?
I thought we were just talking about resource monitoring or something. I think deployment is a totally separate subject. But for a top-like Python tool, I really like "glances". It has way more features than this Go version.
But regarding deployments, everything works if you make your own pip packages stored locally for your project. That way if it works on your clean VM, it works when you push to the cloud.
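Concretely, something like this is what I mean (a sketch; paths and file names are made up):

    # build wheels once, from a known-good environment
    pip wheel -r requirements.txt -w ./wheels
    # install offline from that local directory, on the clean VM and in the cloud
    pip install --no-index --find-links ./wheels -r requirements.txt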
Agreed in general... But, if you DO direct it to this particular post, you can see it is basically shelling out or reading /proc/ to get its numbers, which you could just as easily do with bash + ssh. So the only benefit to using Go for this is... the runtime overhead of the Go libraries?
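Something like this gets you most of the same raw numbers (host name made up, Linux assumed):

    # roughly the data rtop displays, via plain ssh
    ssh host1 'cat /proc/loadavg; free -m; df -h'

Stick that inside watch(1) and you're most of the way there.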
I think this project was maybe just posted way too early; it doesn't even provide 'top' - there is no breakdown of CPU by PID, which is why top has its name... all the files were committed yesterday...
I don't think it's about Go itself (as if that makes the software better), but more that if you're a Go* developer, you may find the source of such a tool interesting - while Go seems particularly well-suited to writing system utils, perhaps there are parts of the codebase that seem like they would be difficult to implement in the language. Additionally, if the source is in a language one is familiar with, reading that codebase can lead to a deeper understanding of the tool itself.
* Go here being a proxy for any language. The same goes for Rust, Haskell, JavaScript, etc. - this trend certainly isn't unique to Go.
In the early days of Java, people did say "written in Java" and pure Java libraries were a thing since they're easier to ship as part of a Java app than wrappers of native libraries.
Java has never been very good for command-line tools, though, due to startup time.