
I agree with you, but I see another relevant reason: in FP, you HAVE to consider side effects 1) from the beginning and 2) completely, which, as anyone can guess, is quite a task.

In imperative code you can just ignore them and produce objectively worse code, since you are not even aware of all the possible side effects. And sure, for the LED project it wouldn't even matter, but the decision FP vs. imperative is then more of a design / quality criterion in general - the notion of one being strictly better than the other is just wrong.

Also, a monad seems much more complicated if you don't really understand it, which makes judging it a bit unfair.
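To make the "effects are visible" point concrete, here is a minimal Haskell sketch (the function names are illustrative, not from the thread): a function that performs I/O must say so in its type, so callers cannot pretend the effect isn't there.

```haskell
import Data.Char (toUpper)

-- A pure function: the type promises there are no side effects.
shout :: String -> String
shout s = map toUpper s ++ "!"

-- An effectful function: printing is visible in the IO type,
-- so anyone calling this must account for the effect.
main :: IO ()
main = putStrLn (shout "hello")
```

The compiler enforces the boundary: `shout` can be called anywhere, while the `putStrLn` action can only be sequenced inside IO.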



What is a side effect? Getting the time? Pushing a result to an output channel? A debug printf? Setting a flag to cache a computation as an optimization? What about evaluating a thunk, or implicitly allocating some memory to store the result of a computation?

Haskellers are trained to have a very inflexible view of what a side effect is. It is dictated by the runtime / the type system. In my view, there are lots of things that Haskellers call "side effects" that I would just shrug my shoulders at, and also lots of things that they do not call side effects but that I do care about. It really depends on the situation.

This fixed dichotomy imposed by the language does more harm than good in my experience. NB: I'm aware that a computation that, for example, gets the system time will return a different time each time it runs. That does not mean that I _have_ to consider it a side effect. I usually do not have a good reason to run the procedure multiple times and expect the runs to be identical in every respect. In an imperative language, I have very precise control over when this procedure runs.
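For what it's worth, the "getting the time" example looks like this in Haskell (a sketch using `getCPUTime` from base, so it is also precisely controlled where each read happens - the action runs exactly where it is sequenced):

```haskell
import System.CPUTime (getCPUTime)

-- "Getting the time" is typed as an effect: IO Integer.
now :: IO Integer
now = getCPUTime

main :: IO ()
main = do
  t1 <- now            -- the action runs exactly here
  t2 <- now            -- and again here; the results may differ
  print (t2 >= t1)     -- CPU time is non-decreasing
```

So the IO type marks the effect, but sequencing in a do-block still gives the precise point of execution the comment above asks for.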


Apart from language-imposed limitations (Haskell is nowhere near the theoretical completeness of category theory; consider the bottom type), the "pure" nature of FP forces the use of abstract structures to handle effects (e.g. monads). So before you can write code containing side effects, you first need to think through even the possible ones, which is by definition a stronger criterion for catching unwanted effects than imperative programming, where you can produce whatever you want. And sure, it is in no way a guarantee of good code; it is just a stronger condition. Arguing otherwise effectively boils down to "assembly is just as good as C", and we have seen where that took us.
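A small sketch of the "forced to think it through up front" point (hypothetical function, not from the thread): partiality has to be represented in the type, here with Maybe, so every caller is obliged to handle the failure case before the code even compiles.

```haskell
import Text.Read (readMaybe)

-- Parsing can fail, and the Maybe in the type makes that
-- impossible to ignore: there is no way to get the Int out
-- without deciding what to do on failure.
parsePort :: String -> Maybe Int
parsePort s = do
  n <- readMaybe s
  if n > 0 && n < 65536 then Just n else Nothing

main :: IO ()
main = do
  print (parsePort "8080")   -- Just 8080
  print (parsePort "oops")   -- Nothing
```

An imperative version can return -1 or throw, and the caller is free to forget both; here forgetting is a type error.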

Anyone telling me they think it through just as rigorously in imperative code is, in practice, lying to themselves, unless they are actually verifying their code.


I don't know; I'm positive I'm not part of the sacred circle, but here is a data point. The Haskell applications that I've managed to produce were all uniformly slow-compiling and unmaintainable. And I promise it wasn't for lack of thinking about "side effects".

In my view, the problem is that functional languages give you a toolset to compose functions (code) by connecting them in structures. In Haskell, that is made harder by the restrictive type system (a very limited language for type-level computation) that you must champion, including a myriad of extensions, which invariably led me down dead-end paths that I didn't know how to back out of without starting all over.

But Haskell's restrictive type system aside, every programmer that I consider worth their salt has understood that it's not about the code. Good programmers worry about the data, not the code. Composing code is not a problem for me; I just write one piece of code after another, and there isn't much else that is needed. I think about aligning the data such that the final thing the machine has to do is as straightforward as possible. Then the code becomes easy to write as a result.

The possibilities for designing data structures in Haskell are obviously limited by its immutability. Which is, quite frankly, hilarious: "state" is almost by definition central to any computation, and Haskell tries to eliminate it (which of course is only an illusion; in practice we bend over backwards to achieve mutability). For Haskell in particular, which does not even have convenient record update syntax, basic straightforward programming is often just not possible in my experience. I refuse to resort to a hard-to-use library like lens to do basic operations that should be _easy_ to code.
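To illustrate the record-update pain point (hypothetical types, a minimal sketch): with plain record syntax, updating a nested field means rebuilding every enclosing record by hand, which is exactly the boilerplate that lens-style libraries exist to paper over.

```haskell
data Address = Address { city :: String, zipCode :: String }
  deriving Show

data Person = Person { name :: String, address :: Address }
  deriving Show

-- Immutable "update": each enclosing record is rebuilt by hand.
-- One nesting level is tolerable; three or four get ugly fast.
moveTo :: String -> Person -> Person
moveTo newCity p = p { address = (address p) { city = newCity } }

main :: IO ()
main = print (moveTo "Utrecht" (Person "Ada" (Address "London" "N1")))
```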

Even though Haskell is popular, and many programmers (including me) go through a Haskell phase, I haven't seen many large, mature Haskell codebases (I basically know of Pandoc, and GHC if a Haskell compiler counts). Why is that?


I think trying to eliminate state, and replacing unnecessary state with getters, is generally a good thing. One of the biggest bug categories programmers encounter, `forgot to sync XXX`, can be eliminated entirely if you never copy that state in the first place.

But eliminating all of it... just looks silly to me. You need state anyway, so why not write it in a sane way?
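The `forgot to sync` point, sketched with illustrative names: store only the source data and derive the rest on demand, so there is nothing that can fall out of sync.

```haskell
-- Instead of caching a `total` field alongside the list (and
-- having to remember to update it on every insert), derive it.
newtype Cart = Cart { items :: [Double] }
  deriving Show

total :: Cart -> Double
total = sum . items          -- always consistent with `items`

addItem :: Double -> Cart -> Cart
addItem x (Cart xs) = Cart (x : xs)   -- no cached total to sync

main :: IO ()
main = print (total (addItem 2.5 (Cart [1.0, 3.5])))
```

The essential state (the item list) stays; only the redundant copy of it is gone.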


As a concrete example, here is a video (and github link) of a program that I'm currently working on, and that I think is not a bad program.

https://vimeo.com/605017327

I already have plans for improving it (especially the layout system), but overall it works pretty well and is reasonably featureful with little code. It's not perfect but "state" is certainly not a problem at all.

I can't tell you that something like this can't be written in maintainable Haskell, but I can tell you that _I_ wouldn't have managed, and from googling around it doesn't seem like many people can.



