
Your LED example is an interesting one. In the basic model of a computer architecture, the screen is abstracted as a pixel array in memory - set those bits and the screen will render the pixels. The rest is hand-waved away as hardware.

A pixel array can be trivially modelled as a pure data structure, and then you can use the whole corpus of transformations which are the bread and butter of FP.
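As a sketch of that idea - a hypothetical 1-bit framebuffer held as a plain immutable value, so that ordinary sequence transformations become screen operations (all names here are illustrative):

```python
# A hypothetical 1-bit framebuffer as a pure value: a tuple of rows
# of booleans. Each transformation returns a new screen; nothing is
# mutated, so these compose like any other pure functions.

def invert(screen):
    """Flip every pixel."""
    return tuple(tuple(not p for p in row) for row in screen)

def flip_h(screen):
    """Mirror each row horizontally."""
    return tuple(tuple(reversed(row)) for row in screen)

def flip_v(screen):
    """Mirror the rows vertically."""
    return tuple(reversed(screen))

screen = ((True, False),
          (False, False))
mirrored = flip_h(invert(screen))  # transformations compose freely
```

Pushing the final value out to actual hardware would still be an IO step, but everything up to that point is testable pure code.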

A screen is about as IO as it gets for the average consumer of a screen; we aren't peeking into its internals.

And for me, that's the point of FP - it's not that IO is to be avoided, it's about finding ways of separating your IO from your core logic. I loosely see the monad (as used in industry) as a formalised, more generic "functional core, imperative shell".
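A minimal sketch of that separation, going back to the LED example - the core is a pure function you can test in isolation, and a hypothetical `set_led_hw` call (stubbed with `print` here) is the only place a side effect happens:

```python
# Functional core: a pure decision, trivially testable.
def next_state(state: str) -> str:
    return "off" if state == "on" else "on"

# Imperative shell: the only place IO happens. set_led_hw is a
# hypothetical hardware call, stubbed out with print for this sketch.
def set_led_hw(state: str) -> None:
    print(f"LED -> {state}")

def step(state: str) -> str:
    new = next_state(state)  # pure logic
    set_led_hw(new)          # IO confined to the shell
    return new
```

The shell stays thin enough that it barely needs testing; all the interesting logic lives in `next_state`.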

Now when it comes to pure FP languages, they keep you honest and guide you along this paradigm. That said, it's perfectly possible to write very impure, imperative Haskell - I've seen it with my own eyes in some of the biggest proprietary Haskell codebases.

But imperative languages don't generally help you in the same way; if you want to do functional core, imperative shell, you need a tonne of discipline and a predefined team consensus to commit to it.



> In the basic model of a computer architecture, the screen is abstracted as a pixel array in memory - set those bits and the screen will render the pixels. The rest is hand waved as hardware.

It was. I still remember the days.

It was nice to be able to put pixels on the screen by poking at a 2D array directly. It simplified so much. Unfortunately, it turned out that our CPUs aren't as fast as we'd like at this task - said array grew to 10^5, and then 10^6 cells - and architecture evolved to expose complex processing APIs for high-level operations, where the good ol' PutPixel() is one of the most expensive ones.

It's definitely a win for complex 3D games / applications, but if all you want is to draw some pixels on the screen, and think in pixels, it's not so easy these days.


Screen real estate in memory grew by a square law while clock and bus speeds increased only linearly, so it was pretty clear that hardware acceleration was the way forward by the mid-eighties, when the first GDPs became available. I even wrote a driver for one attached to the BBC Micro, allowing all of the VDU calls to be transparently routed to the GDP for a fantastic speed increase.


I don't think you could have made the GP's point any better for them.


I don't know. What was the GP's point? That FP people like to think too much and sometimes you just want to get stuff done?

Or that FP purists don't know how to actually build useful things? Trololol, it took Haskell until the mid-90s to figure out how to do Hello World with IO.

To be honest FP is a moving target but I see it as one of the mainstream frontiers of PLT crossing over into industry.

I can accept that to some, exploring FP is not a good fit for their business requirements today, but if companies didn't keep pushing the boat out with language adoption, we'd still be stuck writing Fortran, COBOL or even assembly.

Once upon a time lexical scoping was scoffed at as being quaint and infeasible.

Ruby and Python were also once quaint languages.

Java added lambdas in Java 8.

Rust uses HM type inference.

So what was their point? That FP people spend too much time thinking and don't know how to ship? In which case - I'm grateful that there are people out there treading alternative paths in the space of ways to write code, in search of improvement.

In any case their example was pretty spurious; anyone who's written real code in production knows IO boundaries quickly descend into a mess of exception handling, because things fail - and that's where patterns like railway-oriented programming help developers contain that complexity.
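A rough sketch of the railway idea, using tagged tuples in place of a real Result type (the port-opening scenario and all names are invented for illustration):

```python
# Each step returns ("ok", value) or ("err", message). `bind` threads
# a value through a step and short-circuits on the first error, so
# failure handling stays off the happy path - the "railway" shape.

def bind(result, fn):
    tag, value = result
    return fn(value) if tag == "ok" else result

def parse_port(text):
    try:
        n = int(text)
    except ValueError:
        return ("err", f"bad port: {text}")
    return ("ok", n) if 0 < n < 65536 else ("err", f"bad port: {text}")

def check_free(port):
    # Simulated failure: pretend 8080 is already taken.
    return ("err", "port 8080 in use") if port == 8080 else ("ok", port)

def open_socket(text):
    # The pipeline reads as the success path; errors ride the other rail.
    return bind(parse_port(text), check_free)
```

In a language with `Either`/`Result` and `>>=`, the plumbing in `bind` comes for free.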


q.e.d.


Would love to know what has been proved. Very up for an open and honest discussion.

I'm back to writing imperative after years of functional. I think it is a very pragmatic choice today to go with an imperative language, but I find class-oriented programming to be backwards, and I think functional code will yield something more robust and maintainable, given how IO and failure are treated explicitly. I'm not quite sure where the balance tips between moving fast but shipping something unmaintainable vs moving slower but having something more robust and maintainable.

Programming in a pure language is quite radical - it's a full paradigm shift, so it feels cumbersome, especially if you've invested 10+ years in doing something different. I'd liken it to trying to play table tennis with your off hand in terms of discomfort. There are plenty of impure functional languages around - OCaml, Scala, Clojure, Elixir.... And JavaScript (!?!?)

FP is relatively new as a discipline and still comparatively untrodden. What if equal amounts of investment occurred in FP - maybe an equivalent of that ease of led.turn_on will surface.

And tbh it probably just looks like a couple of bits - one for each LED - and a centralised event loop. Which just so happens to be a pattern that works quite nicely in FP, but emerged in industry to build some of the most foundational things we rely on...
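That bits-plus-event-loop shape can be sketched as a pure fold over an event stream (the event names are invented for illustration):

```python
from functools import reduce

# Two LEDs as two bits of an integer state. Each event is a pure
# state transition, and the "event loop" is just a fold; only the
# final state would ever need to touch hardware.

def step(state: int, event) -> int:
    kind, *args = event
    if kind == "toggle":
        return state ^ (1 << args[0])  # flip LED n's bit
    if kind == "all_off":
        return 0
    return state  # unknown events leave the state unchanged

def run(events) -> int:
    return reduce(step, events, 0)
```

Because `step` is pure, replaying a recorded event stream reproduces the exact LED state - the same property that makes event-loop architectures pleasant to debug.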



