
Quoting from the source code:

   [[ $# -gt 0 ]] && while [ "${1:0:2}" == '--' ]; do OPTION=${1:2}; [[ $OPTION =~ = ]] && declare "BOCKER_${OPTION/=*/}=${OPTION/*=/}" || declare "BOCKER_${OPTION}=x"; shift; done
If the ambition is to write lines like this, you can make it into ~1 line of code.
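For comparison, here is one way the whole quoted loop could be spread over multiple lines. This is a sketch, not the author's code: the wrapping function name is made up, and because `declare` inside a function creates a local variable, the sketch uses `declare -g` (bash 4+) to preserve the original's global-variable behavior.

```shell
# Hypothetical multi-line rewrite of the quoted one-liner.
parse_bocker_opts() {
    while [ "${1:0:2}" == '--' ]; do
        OPTION=${1:2}                                        # strip the leading --
        if [[ $OPTION =~ = ]]; then
            declare -g "BOCKER_${OPTION/=*/}=${OPTION/*=/}"  # --key=value
        else
            declare -g "BOCKER_${OPTION}=x"                  # bare --flag
        fi
        shift
    done
}

parse_bocker_opts --name=demo --verbose
echo "$BOCKER_name"     # demo
echo "$BOCKER_verbose"  # x
```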


I don't even read or write bash shell scripts regularly, and I can understand what that line is doing just fine. I would not be able to understand it if the entire thing were one line, so I think there is a significant difference.

Just because something doesn't follow general rules for readable code, doesn't mean it is actually unreadable.


I don't think soygul is so much challenging the readability of the line, but rather pointing out the hazards of focusing on the # of lines of code as a unit of simplicity/complexity.


Do you actually know what it's doing, or are you guessing and inferring from patterns? There's a difference when you actually need to read the source for details rather than a quick skim.

I can definitely guess the pattern and I do write bash regularly, but I see at least 2 things I'd need to double-check the man page for the behaviour.
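Those substitutions are a fair example of something worth double-checking: bash's `${var/pat/}` deletes the longest match starting at the earliest position, so when the value itself contains `=`, the two patterns split the string at different places. A quick sketch (assuming bash):

```shell
OPTION='name=a=b'
echo "${OPTION/=*/}"   # '=*' matches from the first '=' to the end, leaving: name
echo "${OPTION/*=/}"   # '*=' greedily matches up to the last '=', leaving: b
```

So a `--key=a=b` option would end up storing only `b`, which may or may not be what you expect.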


If you are asking whether I have memorized bash syntax fully and know that everything he did was valid, then the answer is no. However, the intent of the code and what each component is supposed to do is clear, which is what I'm looking for when reading code.

Heck, by this definition I'm not sure any code is really good enough. In my job I have to write PHP daily, and I still need to regularly look at docs to figure out the order of arguments for various library functions. I wouldn't be able to tell at a glance whether explode($a, $b) in PHP has the right argument order without looking it up. But I understand the intent, and I generally assume the superficial aspects of the code are right unless that part appears not to work.

And furthermore, adjusting the number of newlines isn't going to help with the question of whether that piece of code is using bash syntax correctly.


I don't see anyone claiming that the syntax is incorrect. But it definitely causes me to have to stop and do some mental processing (mainly thinking about order of operations) to make sense of it.


Admittedly, PHP is kinda legendary for having accidentally ended up with all sorts of odd argument orders.

Other languages seem to've made things at least somewhat more predictable.

(and then, being software, suck in different ways of course)


Good code should not only be "readable", it should be as easy to read as possible. Potentially a lot of people will read this code, and all of them will need to spend extra time processing it because it's hard to read. Long lines are usually harder to read, which is why most coding guidelines recommend limiting line length to 80 characters; this one is 178.


If you pick bash as your programming language, don't you already throw "readable" out of the window?

Perl gets a lot of flak for being unreadable, but I personally have more difficulty understanding bash code than perl (not implying that perl is super readable).


Let's say you write it in bash like this:

  if [[ $OPTION =~ = ]]
  then
      declare "BOCKER_${OPTION/=*/}=${OPTION/*=/}"
  else
      declare "BOCKER_${OPTION}=x"
  fi
Going from here, if you rewrite this in golang you'd probably improve readability.

But the readability has not been "thrown out of the window" yet, like in this case:

  [[ $OPTION =~ = ]] && declare "BOCKER_${OPTION/=*/}=${OPTION/*=/}" || declare "BOCKER_${OPTION}=x"


Oh I see your point.

Yeah, that was unnecessary. Ironically now I remember that I was once frustrated by someone who was using Boolean logic to emulate if/else in Python.
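For anyone following along, the reason `a && b || c` only approximates if/else is that `c` also runs when `a` succeeds but `b` fails. A minimal sketch (the function names are made up):

```shell
cond()      { true; }              # the "if" condition: succeeds
then_part() { false; }             # the "then" branch: fails
else_part() { echo 'else ran'; }   # the "else" branch

# Prints 'else ran' even though cond succeeded, because then_part failed.
cond && then_part || else_part
```

In the quoted line this is mostly harmless because `declare` rarely fails, but it's the classic trap with this idiom.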


It would take very little effort to put it in multiple lines, which would probably make it more readable.


I don't think it's that hard to read, but breaking over multiple lines probably wouldn't hurt.


That roughly corresponds to the following C-like pseudocode:

    for (i = 0; i < argc && (arg = argv[i]).startswith("--"); ++i)
        if ((sep = arg.find("=")) >= 0) opts[arg[2..sep]] = arg[sep+1..]; else opts[arg[2..]] = "x";
It might be too much for one line, but it doesn't seem to be super complicated either. In fact that line is (as of now, commit 0006330) only the second longest in that file.


I'd definitely refactor your C-like pseudocode to multiple lines :-D


Of course, in real projects I don't write such code. But that's not because it is harder to read; it's because it is harder to alter. Say you need to handle another condition, like `-v` as well as `--verbose`: then you would have to reformat it into multiple lines. The meaning of the quoted line is not hard to figure out if you know a bit of bash, and that's all I would expect from a showcase.


Of course, what you fight for is readability and only readability, especially in large projects. There will be literally 10 man-hours spent reading this code for each 1 man-hour of reading it and then altering it.


I think a perl expert can write it even shorter.


I love perl because it's fast, but it definitely follows the WORN pattern: Write Once, Read Never. No matter how many comments I leave for future me, it's always a voyage of discovery reading my own code, forget reading anyone else's :)


I wonder how you achieve this. The way I see it, Perl got that reputation mostly from a few sources: heavy use of regexes when that still wasn't as common, the default variables like $_, and of course sigils (@list, $scalar, %hash). Sure, you can golf it to have some outrageous results in your Usenet signature while remaining McQ, but C has an obfuscation contest and never got teased as much about that.

Sure, if you're coming from structured Pascal 101, that might be an issue, but in a day when deeply nested functional rat kings tend to replace the humble WHILE loop, is a Schwartzian transform that confusing?


> Heavy use of regexes when that still wasn't as common, the default variables like $_ and of course sigils (@list, $scalar, %hash).

And implicit variables. Also, implicit variables. Compound types didn't make it any easier; and references, that broke all the logic implicit on the sigils.

Oh, and did I mention implicit variables?


> and references, that broke all the logic implicit on the sigils.

They don't break the logic, they just follow different logic than many people assume. Either you're indicating a collection (a hash or array), or you're indicating an item. @ is a plurality indicator, while $ is an individual indicator. You reference the single item of the @foo array with $foo[0] for the same reason you say "the first apple of the bunch" instead of "the first apples of the bunch" when you want the first one. Yes, it's probably better not done that way (it's not something you can usefully change in Perl 5 at this point), but it does follow well defined rules, apparently just not the ones you assumed.

Implicit variables can usually be avoided (except for $_, but that's pretty normal these days, and I'm pretty sure topical variables didn't begin with Perl), and often they're lexical, so other people's code messing with them is usually contained. Except for $_ itself, which isn't lexical; that's its own story and a pain point. :/


Implicit variables all came from UNIX shells, which are in use to this day. What was the exit code of the last program? "$?".

C works similarly. What was the error from the last system call? "errno".

Certainly there are problems with this; we've all seen the programs that output "Error doing something: Success". But it isn't Perl that invented this. It's a UNIX tradition.
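The shell side of that tradition, for reference: `$?` holds the exit status of the most recent command, and it is overwritten by every command that runs, which is exactly why it has to be saved immediately if you want to report it later.

```shell
false
echo $?   # 1: exit status of 'false'
true
echo $?   # 0: exit status of 'true'
```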


Sure, the logic of the sigils is easily broken, but that just brings us back to every other language, with the added noise of the "$" sign (so pretty much to PHP).

As for implicit variables, I rarely see anything other than $?, $|, $_ and @_, of course. And I'd argue that $_ often makes code a bit easier to understand than cluttering it up with a variable declaration. No worse than point-free programming in more modern languages.

Don't get me wrong: I don't think Perl is all that well architected, and Perl 5 would've required some more courage about dropping backwards compatibility, but what I don't get is why Perl is singled out here. It's not radically different like e.g. APL or ScalaZ.

Compared to other languages of its day, this feels a bit like the "Reformed Baptist Church of God, reformation of 1915" joke. Then again, perfectly fitting into similar conflicts about brace styles or 1-based indexes…


> It's not that radically different like e.g. APL or ScalaZ.

Neither of which ever got as popular.

Perl is singled out because it was used, and too often on short scripts that grew into thousands of lines, so the problems showed up.


Re: implicit variables, I understand there are environments where "use strict" is not an option. As someone who has worked in Perl-heavy shops for about 10 years, I hardly see any Perl program that does not use strict.

Now, Python on the other hand...


Strict does not forbid implicit variables.

It fixes a whole lot of more important problems, but it's mostly aimed at problems that will make your code misbehave, not the ones that will make it hard to read.


One of the early foundational principles of Perl was "TMTOWTDI" (http://wiki.c2.com/?ThereIsMoreThanOneWayToDoIt for the unfamiliar). The intention behind this was cool: do "it" whichever way makes the most sense for your particular situation.

But the end result was horrible: everyone did "it" every possible way, and that, IMO, is the underlying reason for Perl's reputation for being unreadable.

I also think it had a great deal of impact on the development of later languages and principles, which tend to focus much more on removing freedom from the programmer and enforcing idiomatic ways of doing "it". Today, you're far more likely to encounter modern code written by different programmers which looks very similar -- at least on a line-by-line basis, anyway. At the architectural level, it's still a big game of Calvinball.


If that's really what you think and you think it's down to the language, look up Modern::Perl [0] and, if you're object inclined, Moo [1], Mouse [2] or Moose [3]. Then there's always perlcritic [4], too.

In my mind, it's down to it having been popular, attracting many amateurs. You'll find equally unmaintainable bash or PowerShell scripts and definitely as much garbage, WORN JavaScript code. Writing maintainable code requires both knowledge and effort.

[0]: https://metacpan.org/pod/Modern::Perl

[1]: https://metacpan.org/pod/Moo

[2]: https://metacpan.org/pod/Mouse

[3]: https://metacpan.org/pod/Moose

[4]: https://metacpan.org/pod/perlcritic


Definitely.

Here's a small IO::Async + Moo app that I still consider to basically be how I'd write it today - I invite people to browse the source code and tell me what they do/don't find unreadable compared to more "raw" perl.

https://metacpan.org/release/App-Procapult


ALL of the libraries listed above are complete garbage, impossible to use productively among several developers.

I worked in a company that had huge Perl codebase, which made extensive use of the Moose library. After trying to make sense of it, I gave up and used plain Perl, writing it as unidiomatic and simple as possible, so that hundreds of other devs, also new to Perl, would be able to understand the code I wrote. This was the common sentiment - most of the people followed the same path.

The library is just a nightmare: Perl is dynamically typed, there is NO adequate IDE support (compared to what statically typed languages have), so good luck working out how the library works underneath. And if I cannot understand that, how on Earth will I understand what even my code is doing? (Never mind the others')

In my mind, the amateurs are those that created the libraries without any idea on how they are going to be abused, thinking everyone should use unreadable incomprehensible syntax coupled with unapproachable internals.

I apologize for the rant, I had no idea this topic moved me so much.


Basically, your problem sounds like dynamically typed languages being hard to work with?

I mean, types do make the software engineering craft a little more tolerable, and it's not exactly a new thing to say here.

But how would this situation be any different than using Python or Clojure?

Talking of artificial bolt-ons: we are living in an era where we do 'from typing import *' and core.spec for Clojure all the time. How does this change only when it comes to Perl?


I do dislike dynamically typed languages a bit for building large systems, when there are plenty of alternatives available.

However, I still reach for Python and Bash and Perl for one-off tasks or glue scripts, and I do appreciate the brevity and clarity they bring to this sort of problem.

Except when it comes to building somewhat large systems (I am talking ~5 mil LOC here): then every kind of "abstraction", like this disaster of a library Moose, only increases the complexity of the project by a large margin and acts as job security for the original authors of the code, making most of the codebase impenetrable to the rest.

I have not worked with similarly large systems in other dynamically typed languages, so I cannot compare other languages to Perl in that regard. I do know, however, that Perl is simply a disaster to use at that scale.


> Except when it comes to building somewhat large systems (I am talking ~5 mil LOC here)

I think most Perl developers would agree that if you expect your code base to reach into the millions of lines of code (at least if it's all one code base and not an ecosystem using an API), Perl (or any dynamic language) may be stretched to the point where its benefits are outweighed by its drawbacks, much as (on the other side of the spectrum) if you used C/C++ to build a simple web app.


I work on a similarly nightmarish Perl codebase that makes use of Moose and MooseX (and all the other various plugins people have made), along with various Perl and Catalyst hacks only lifelong Perl monks can understand. The only way to figure out anything is `Pry` and `Data::Dumper` everywhere. Perl critic also conflicts with some of the other libraries in the ecosystem, like the one that provides `method` and `func` (not sure which one it is).

Perl is great for text manipulation and one offs, not large, production systems.


You might find the Dwarn command available from http://p3rl.org/Devel::Dwarn (original) and http://p3rl.org/Devel::DDCWarn (newer version with extra-compact output) helpful - it lets you change e.g.

    return $foo->bar->baz;
to

    return Dwarn $foo->bar->baz;
which will 'warn' out the dumped structure and then return it, so you can instrument code for debugging trivially.

DDCWarn also provides a DwarnT so you can add a tag that gets printed but not returned, i.e.

    return DwarnT TAG_NAME => $foo->bar->baz;
There's not really a book on Moose, but the Moose::Manual pages plus Modern Perl plus The Definitive Guide to Catalyst work out pretty well between them for bringing people up to speed.


We use Mojolicious specifically to avoid the needless complexity imposed by Catalyst. Perl is OK for small to medium production systems, but library support is quite lacking for 2020. We'd probably use something else if we had to start fresh today.


Moose is pretty much a straight-up implementation of The Art of the Metaobject Protocol, which is a seminal work on the subject (plus a few extra bits, like Roles inspired by Smalltalk traits, and CLOS-style method modifiers).

Overuse of meta stuff is something people often get tempted into when they first use Moose, and it can make things a bit more complicated, but the core syntax is honestly simpler and easier to use than raw perl OO, and I've found it much easier to cross-train non-perl devs to Moo(se)-style OO.

If you want an opinionated/terse syntax that encourages you to only be as clever as strictly necessary, I'd suggest looking at http://p3rl.org/Mu


Moo is quite nice actually. It's quite minimal, with minimal dependencies. I discard most if not all Perl modules that depend on Moose and always look for alternatives. Not very keen on dependency hell and importing heavyweight modules in order to develop trivial functionality.


> there is NO adequate IDE support ... so good luck with working out how the library works underneath

LOL, WUT? Why do you need IDE support to figure out how the library works underneath? You have the code, you have the docs, what else do you need to figure it out?


Hmmm. Not sure if that is on purpose, but I don't suffer from that; I can read code I wrote in Perl 20 years ago just fine. I'm not even sure how you would write Perl code like you suggest. It looks noisy sure, but once you know what it means, how is it hard to read?

I guess if you do deliberate golfing/obfuscation you can make anything unreadable.


    #Before enlightenment,
    use strict;
    use warnings;
    
    #After enlightenment,
    use strict;
    use warnings;


I'm more likely to have:

    use strictures 2;
    use Moo;
since strictures fatalizes most warnings as well as turning strict on for maximum "telling perl if I made a mistake to barf immediately rather than trying to be helpful and guess"


Curious to know how much Perl code you write on a day-to-day basis?


10 years ago it was my daily grind; nowadays rarely, but it's good to have in the back pocket.


This isn't really that horrible; it just condenses the boilerplate that comes with command-line argument parsing. Sure, it's quick-and-dirty, but it gets it done and moves your focus elsewhere.

Interestingly, I didn't know you could do this without an `eval`:

    declare something${something_else}=whatever
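A minimal illustration (variable names made up, assuming bash): `declare` receives its argument after parameter expansion has already happened, so the variable name can be computed without `eval`.

```shell
key=color
declare "opt_${key}=blue"   # defines the variable opt_color
echo "$opt_color"           # blue
```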



