Replacing Loops in Swift (wilshipley.com)
45 points by tambourine_man on Aug 28, 2015 | hide | past | favorite | 29 comments


Over the past few years there's been a tendency towards a more functional style of programming, like what's demonstrated in this blog post. I think there are a lot of benefits to this, but the functional style doesn't hold a monopoly on good code.

One of the advantages of the "slightly better", more imperative, method of finding a 3D piece under the mouse over the "beautiful", more functional, one is that it is extremely clear what the code is doing, while the functional case needs some thought to understand; even for someone who knows what map, filter, etc. do!

This isn't to say that functional is bad and imperative is good, or really to choose a style at all. But sometimes the cleanest idea mathematically isn't the best way to write code.


"One of the advantages of the "slightly better", more imperative, method of finding a 3D piece under the mouse over the "beautiful", more functional, one is that it is extremely clear what the code is doing, while the functional case needs some thought to understand; even for someone who knows what map, filter, etc. do!"

This is not at all self-evident. You see map and immediately think "oh, this will invoke the function once for each item and collect the results". You see filter and think "oh, this will invoke the function once for each item and collect the ones for which the function returned true".

What about that is not extremely clear?
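That reading can be written down directly; a quick sketch in Python (the thread's examples are Swift, but the semantics match):

```python
# map: invoke the function once for each item and collect the results
squares = list(map(lambda x: x * x, [1, 2, 3]))           # [1, 4, 9]

# filter: invoke the predicate once for each item and keep the ones
# for which it returned true
evens = list(filter(lambda x: x % 2 == 0, [1, 2, 3, 4]))  # [2, 4]
```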


I find this sort of functional-style code much easier to skim, because there are more obvious keywords with immediate connotations, rather than each line of code potentially doing something bespoke.


Really? For me, a chain of maps and filters is far easier to understand than nested for-loops or something.

I think it's a lot to do with exposure: imperative code is far more well-known, and most people are more exposed to it than they are to more declarative stuff. I'd say that if someone was exposed to both equally they'd find the declarative easier to quickly scan and understand.


I agree. I'm an Android developer, but I have a taste for an FP-flavored programming style - you get some of it in C# (LINQ), Kotlin, even Java (Streams, or Guava) - and I find it readable.

I've never written a line of Swift, and the examples from the article were quite clear to me, too.

Of course you can create an unreadable monstrosity with maps, filters, group-bys, etc., but that's possible in good ol' OOP just the same. Every construct can be used right or abused.


Imperative code is more explicit, but higher-level loop abstractions like map and filter are more restrictive and thus both easier to reason about and less error prone. When you program with functional patterns you remove whole classes of possible mistakes (or, when reading code, overlooked subtleties) surrounding the explicit mutation of index variables.

The APL family eschews explicit "for" loops entirely in favor of functional forms and assigns special symbols to each of them. This results in extremely dense code, but at the same time code which often has fewer "moving parts" and is less likely to be subtly incorrect.

In JavaScript I could write

    var sum=0; for(var x=0; x<100; x++) { sum += x; }
or I could write

    var sum=range(100).reduce(function(a,b) { return a+b; });
(provided an appropriate definition of `range()`)

In K I would write

    +/!100
The sum (+) over (/) the range 0 up to but not including (!) 100.

In the first example, I have to worry about all sorts of boundary conditions and typos. The second example is easier to reason about, but it's fairly wordy. The third is so simple and concise it is "obviously correct". These examples are contrived, but I think they provide some food for thought.


if you were to program in JS in a style which calls for passing operations around, you would have the operator `+` wrapped, so the second version would probably look like this

    var sum = range(100).reduce(plus, 0);
or rather

    var total = sum(range(100));
this does not take away from the beauty of APL.


   one is that it is extremely clear what the code is doing, while the functional case needs some thought to understand;
In my experience this is just familiarity, and I've experienced it in both directions.


Are you sure that it isn't just harder for you to understand because you're more familiar with the traditional imperative control flow tools? If you had spent as much time with map and filter as you likely have with for loops, maybe they would seem just as clear?


One pitfall I often fall into with map, filter, etc. is that I chain them together, forgetting each of them takes another pass through the sequence. With an explicit loop, it's often easier to reduce multiple passes into a single pass.


> One pitfall I often fall into with map, filter, etc. is that I chain them together, forgetting each of them takes another pass through the sequence.

Lazy maps, filters, etc. that do not do this are available in many languages (and the default in some.)

Transducers provide a similar benefit, from a somewhat opposite direction (composing reduction functions rather than chaining lazy iterating functions).
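A minimal sketch of the transducer idea in Python (illustrative helper names, not a real transducer library): each transformation wraps a reducing step, so the whole pipeline runs in one reduce pass with no intermediate collections.

```python
from functools import reduce

# "mapping" and "filtering" each turn a reducing step into a new reducing step.
def mapping(f):
    return lambda step: lambda acc, x: step(acc, f(x))

def filtering(pred):
    return lambda step: lambda acc, x: step(acc, x) if pred(x) else acc

def append(acc, x):
    acc.append(x)
    return acc

# Compose the transformations around one step: filter runs first, then map,
# and the source is traversed exactly once.
step = filtering(lambda x: x % 2 == 0)(mapping(lambda x: x * 2)(append))
result = reduce(step, range(10), [])   # [0, 4, 8, 12, 16]
```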

Other than familiarity for some people, I don't see a real edge to imperative iteration over functional operations here.


I guess I should have been more clear.

I'm aware that they do not add extra passes in some languages.

But in Swift, C++, Common Lisp, OCaml, Python, Ruby, Perl, Java, Scheme, and many others, adding another map/filter/etc. will add another pass through the sequence unless you're going out of the way to avoid it.


> But in Swift, C++, Common Lisp, OCaml, Python, Ruby, Perl, Java, Scheme, and many others, adding another map/filter/etc. will add another pass through the sequence unless you're going out of the way to avoid it.

I think this is wrong in the case of Python -- in Python 3, map and filter return lazy iterators, so chaining them does not result in multiple iterations of the source sequence.
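Easy to check with a quick sketch (Python 3):

```python
calls = []

def noisy(x):
    calls.append(x)   # record each visit to a source element
    return x

# map and filter return lazy iterators: building the chain does no work at all.
chain = map(lambda x: x * 2, filter(lambda x: x % 2 == 0, map(noisy, range(5))))
assert calls == []                 # nothing has been iterated yet

result = list(chain)               # one pass over the source drives every stage
assert result == [0, 4, 8]
assert calls == [0, 1, 2, 3, 4]    # each source element was visited exactly once
```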

In Ruby, it's definitely sometimes wrong, because whether or not iteration is lazy depends on the particular Enumerable object; and, in the worst case, you don't have to go very far out of your way. If the Enumerable you are operating on isn't already lazy, you just call its lazy method and chain your operations after that.


In Swift:

    (0...100)
      .filter { $0 % 2 == 0 }
      .map    { $0 * 2 }
Will pass through the sequence multiple times, yes. But avoiding that is as easy as:

    (0...100)
      .lazy
      .filter { $0 % 2 == 0 }
      .map    { $0 * 2 }


Just to add to the chorus of "well, maybe not by default, but it's pretty easy to do", the streams library in Java 8 is lazy, and there's this library for C++14[0].

[0]: https://github.com/jscheiny/Streams


how much of a performance drop have you measured in the real world? compare

    my @judges = ();
    for my $sport (@sports) {
      if (is_watersport $sport) {
        push @judges, get_judges $sport;
      }
    }
to

    my @judges = map { get_judges($_) } grep { is_watersport($_) } @sports;
is it worth fretting over an added pass? how many sports are we dealing with anyway? and IME, nested loops induce errors in the form of accidental `O(N*M)`.


Your example is a straw man.

IRL, I've seen this crop up while processing database results and large datasets. On several occasions I've shaved up to several minutes off of execution times by replacing chained map, filter, and zip calls that were iterating through the data multiple times with more complicated expressions that only performed the iteration once.

I've also written a graphics application where converting several chained map/filters into a single pass made interacting with the application noticeably less laggy.

I'm actually surprised so many people have challenged this as a potential pitfall. I didn't pull it out of the air or from trivial examples. I've seen it happen.
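The shape of that consolidation, in a deliberately small Python sketch (not the actual application code): chained eager stages each materialize an intermediate list, while a single comprehension traverses the data once.

```python
data = list(range(10))

# Two passes, with an intermediate list built between them
evens = list(filter(lambda x: x % 2 == 0, data))
doubled = list(map(lambda x: x * 2, evens))

# One pass, no intermediates: same result
single_pass = [x * 2 for x in data if x % 2 == 0]
assert doubled == single_pass      # [0, 4, 8, 12, 16]
```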


> each of them takes another pass through the sequence

Not necessarily. In C# (LINQ) you have deferred execution / lazy evaluation, when possible of course - it's possible for filtering, not possible for a group-by, but that's just reality.

Combine it with yield-return pattern, and it's very easy to avoid the problem you brought up.
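The yield-return pattern maps directly onto Python generators; a sketch reusing the sports/judges names from the Perl example upthread (the helper bodies are stand-ins):

```python
def is_watersport(sport):          # stand-in predicate for the sketch
    return sport in {"diving", "swimming"}

def get_judges(sport):             # stand-in lookup for the sketch
    return f"judges for {sport}"

def watersport_judges(sports):
    # Deferred execution: nothing runs until the caller iterates,
    # and filtering plus mapping happen in a single pass.
    for sport in sports:
        if is_watersport(sport):
            yield get_judges(sport)

judges = list(watersport_judges(["diving", "chess", "swimming"]))
```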


Some comments on this article: first of all, it's not the number of lines of code that contributes to its complexity, but the number of things that actually happen and can fail, which is what makes it fragile. A code-golfed line is not more reliable than the same logic neatly expanded and indented across several lines.

Frequently in C#, LINQ queries turn out to be very long, indiscernible chains of operations that aren't completely obvious in what they're doing. The same goes for filter / map / forEach. I use them a lot, but I often break them up into several variables ('let' in this case, of course) just to get the "one thing happening on one line" kind of code that's so much nicer to the eyes.

I never benchmarked it, but I never had to. I assume modern compilers that see a constant being used only once understand that it comes down to the same thing as chaining directly.

The best takeaway from this story is that filter / map / etcetera better show the intent of the code. That's why I love the `guard` statement so much. It doesn't do anything I couldn't do before but it simply tells the other programmer "unless this passes nothing else below this should happen".
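Both habits translate outside Swift; a hypothetical Python sketch of named intermediate steps plus a guard-style early exit (the order/price fields are made up for illustration):

```python
def order_total(order):
    # guard-style: unless this passes, nothing below should happen
    if not order:
        return 0

    # one thing happening on one line, each step named
    valid_lines = [line for line in order if line["qty"] > 0]
    line_totals = [line["qty"] * line["price"] for line in valid_lines]
    return sum(line_totals)

total = order_total([{"qty": 2, "price": 3}, {"qty": 0, "price": 5}])
```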


LINQ queries are horrible. I tend to rewrite them using the extension methods instead, it usually makes them a lot clearer.


It's nice to note, when you think about for loops, while, etc. versus map/each/reduce, that the latter offer a convenience that is not discussed enough: the ability to reason about a single element of a collection without ever having to think about the collection itself (no bounds, indices, or things like that).

This, in turn, makes it easier to think in terms of: I have a thing, I put it in a box, and a new, different thing comes out of that box. That can then lead naturally to thinking about series of connected boxes.


And after that, you can start thinking about other kinds of boxes: Maybe/Optional, Either/Try, futures, infinite collections, lazy collections, I/O streams, message passing systems... all of which can be manipulated with the same set of primitives. And at some point you realize there's a name for this fancy abstraction and it starts with an "m" and ends with a "d" and Haskellers like it very much.
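For instance, the Maybe/Optional box supports the same map primitive as a collection does; a tiny Python sketch (`map_maybe` is a made-up helper name):

```python
def map_maybe(f, boxed):
    # mapping over the Maybe/Optional box: an empty box stays empty,
    # a full box gets its contents transformed
    return None if boxed is None else f(boxed)

full = map_maybe(lambda v: v + 1, 3)        # 4
empty = map_maybe(lambda v: v + 1, None)    # None passes through untouched
```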


One downside of this approach is that you don't have as much discrete control over your access patterns.

In his intersection case he's having to hit at least some items twice; depending on his cache locality, that could be pretty painful.

That said, I do think map/filter is great for areas that aren't performance critical.


Even though he does not like Lisp, welcome to the 1960s, with map, remove, ...

Those who learned from SICP in school or university will have no problem recognizing the concepts.


"What first struck me when learning Swift was how much play the "map()" and "filter()" functions got on the web [also "flatmap()" but I'm not as excited about that one yet because I don't love LISP and it's not 1970]."

Cute. But this article demonstrates Eric Raymond's point:

"LISP is worth learning for a different reason — the profound enlightenment experience you will have when you finally get it."

Much of the history of mainstream programming languages is simply a slow progression of adopting the features of Lisp. So in this case, someone who already knew Lisp would not have to learn how to use the functional constructs in Swift fluently and idiomatically, because they would already have that experience.

(Yes, I know this does not apply to all programming languages and programming language features, but it's surprising how often it's true! Paul Graham's take on this idea is here: http://www.paulgraham.com/diff.html)


When you're standing here you're in for a real treat once flatmap starts making sense and making its way into your hear^H^H^H^H code.


I don't understand why so many people are so enthusiastic about flatmap. It's never seemed particularly difficult to understand, and in my experience is less useful than map.

The Scala community in particular seems to have a comical number of articles praising it and explaining how to use it. I just don't get it.


Flatmap is "bind" from Haskell - a part of the interface called "Monad", about which many people are excited for both good reasons and poor reasons.
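In list terms, bind/flatmap is just map followed by flattening one level; a minimal Python sketch:

```python
def flat_map(f, xs):
    # map each element to a list, then flatten one level
    return [y for x in xs for y in f(x)]

# each number expands to itself and its tenfold, flattened into one list
result = flat_map(lambda x: [x, x * 10], [1, 2, 3])   # [1, 10, 2, 20, 3, 30]
```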


Wow.. didn't know Objective-C has been around for that long (32 years) - this guy has been programming in it for 26 years..



