acedTrex's comments

You really think the 33k people that starred a 40 line markdown file realize that?

You mean the 33k bots that created a nearly linear stars/day graph? There's a dip in the middle, but it was very blatant at the start (and now)

Stars are more akin to bookmarks and likes these days, as opposed to a show of support or "I use this"

I use them like bookmarks.

I intentionally throw some weird ones on there just in case anyone is actually ever checking them. Gotta keep interviewers guessing.

I use them as likes

The amount of cargo culting amongst AI halfwits (who seem to have a lot of overlap with influencers and crypto bros) is INSANE

I mean just look at the growth of all these "skills" that just reiterate knowledge the models already have


Sigh here we go again, model release day is always the worst day of the quarter for me. I always get a lovely anxiety attack and have to avoid all parts of the internet for a few days :/

I feel this way too. Wish I could fully understand the 'why'. I know all of the usual arguments, but nothing seems to fully capture it for me - maybe it's all of them, maybe it's simply the pace of change and having to adapt quicker than we're comfortable with. Anyway, best of luck from someone who understands this sentiment.

Really? I think it's pretty straightforward, at least for me - fear of AI replacing my profession and also fear that it will become harder to succeed with a side project.

Yeah, I can understand that, and sure, this is part of it, just not all of it. There are also broader societal issues (i.e. inequality), personal questions around meaning and purpose, and a sprinkling of existential dread (but not much). I suspect anyone surveyed would have a different formula for what causes this unease - I struggle to define it (yet think about it constantly), hence my comment above.

Ultimately when I think deeper, none of this would worry me if these changes occurred over 20 years - societies and cultures change and are constantly in flux, and that includes jobs and what people value. It's the rate of change and inability to adapt quick enough which overwhelms me.


I have some of those too, to a limited extent.

Not worried about inequality, at least not in the sense that AI would increase it; I'm expecting the opposite. Being intelligent will become less valuable than it is today, which will make the world more equal, but it may not be a net positive change for everybody.

Regarding meaning and purpose, I have some worries here too, but can easily imagine a ton of things to do and enjoy in a post-AGI world. Travelling, watching technological progress, playing amazing games.

Maybe the unidentified cause of unease is simply the expectation that the world is going to change and we don't know how and have no control over it. It will just happen and we can only hope that the changes will be positive.


> fear of AI replacing my profession

See, I don't have any of this fear. I have zero concerns that LLMs will replace software engineering, because the bulk of the work we do (not code) is not at risk.

My worries are almost purely personal.


Thank you thank you, misery loves company lol! I haven't fully pinned down what the exact cause is as well, an ongoing journey.

I felt this way from a year ago up until February 2026. Claude Code and Codex becoming the norm cemented for me that a lot of the projects people are working on (including mine) are totally obsolete. As far as I'm concerned, most code is now abstracted away, and people only want better agents - not traditional software products, except as infrastructure or platforms.

It also looks like the final form of the AI roll-out: whatever the model or application, this is the era of agents, and probably in the near-future mostly automated agents. We'll see an overflow of bespoke automation and in-house agents doing everything from personal task management to enterprise business processes, so releasing a "Personal Fitness Tracker" or a "CRO Auditor" in 2026 doesn't make any sense.

All of my anxiety around it has evaporated because I can see what it actually is: an ouroboros of AI output generating automation of more AI output. What most software engineers will be working on now is guiding that output, making it easier to inspect/configure it, optimizing it, and improving the consumer and developer experience.

Otherwise, we just have to drop our old concepts for projects and work on something else.

For the consumer the floor is rising, and for the experienced developer the ceiling is rising. I personally hate web dev anyway, and I'm glad I can work on interesting engineering problems (even with the help of an AI) instead of having to manually stitch together yet another REST API, or website, or service pipeline.


Why? Good anxiety or bad?

Ya, I'm totally sure it's a good idea to use a tool written by the creator of openclaw with a sensitive account that is closely tied to day-to-day needs.

What could possibly go wrong with that.


It's so depressing that it took widespread LLM psychosis to finally get company leadership to invest in actual CLI tooling.

No, the customers never mattered, but the mythical "LLM agent" is vitally important to cater to.


This is not what a compiler is in any sense.

If this was me you couldn't waterboard this info out of me.

Why? Is this because of shame or fear of losing your job?

Because the info is no longer in their brain.

Because it's incredibly embarrassing to admit you can no longer do very basic programming tasks as a "professional" in that field.

I think it's that what counts as "very basic programming tasks" keeps sliding across the years. Surely in the beginning, writing assembly was a "very basic programming task," but as Algol and Fortran took over, suddenly those instead became the "very basic programming tasks".

Repeat this for decades, and "very basic programming tasks" might be creating a cross-platform browser by using LLMs via voice dictation.


Skill atrophy is intrinsically embarrassing, no matter what those skills are. I am embarrassed to admit that I have forgotten a lot of how to hand-optimize C code with inline assembly, even though few people do that anymore.

And the person you are responding to is asserting that the response to incompetence of this level should be the SAME as if it were directed and intentional malice. Which is a completely valid way to view a fuckup like this.

>response to incompetence of this level should be the SAME

sure.

But this was not a deliberate attack by Microsoft employees to shut down WireGuard. That is what I was trying to say, and the essence of the quote in question.


Microsoft drove a truck through a school yard at 150mph. It was not a deliberate attack, it was just the fastest route and their map says there's a highway there. Is it malice?

A certain level of recklessness is automatically malice.


>[...] It was not a deliberate attack [...]

In that case, it certainly wouldn't be called a deliberate attack, right?

The edit in my original comment should hopefully clear up any confusion about my intended point. And, well... the comment you replied to should also make it clear that my entire point centers on something being a deliberate attack vs. ridiculous incompetence.

The deliberateness of it is the entire reason I wrote my comment. Choosing the phrase "malice vs. incompetence" was a poor choice on my part, when I should have been extremely explicit. That would have avoided all of this back-and-forth.


They are saying that "deliberate attack" or not does not matter and is not worth pointing out. The response is the same, so it's a worthless point.

Whether something is a deliberate attack or not is not worth pointing out?

It's, like, the only thing worth pointing out. If Microsoft were deliberately targeting projects and literally attacking them, that would be huge fucking news. Like crazy news. Lawsuits galore.


> whether something is a deliberate attack or not is not worth pointing out?

Correct; in cases like this, we are saying it is a meaningless distinction.


> Checking if your plugin/extension/mod works

What makes you think they do this with any of their products these days?


If this is true, then a PM's Jira tickets are an abstraction over an engineer's code. It's not necessarily wrong by some interpretations, but it is not how the majority of engineers would define the word.

It's a buggy POS though; "popular and successful" have never been indicators of quality in any sense.

I think this is a pretty interesting comment because it gets to the heart of differing views on what quality means.

For you, non-buggy software is important. You could also reasonably take a more business-centered approach, where having some number of paying customers is an indicator of quality (you've built something people are willing to pay for!). Personally I lean towards the second camp; the bugs are annoying, but there is a good sprinkling of magic in the product which overall makes it something I really enjoy using.

All that is to say, I don't think there is a straightforward definition of quality that everyone is going to agree on.


What do I care if Anthropic makes money? Do you think Oracle makes money because they have a quality product?

Ok, well, if you'd like to trade in 14 billion dollars of revenue for better quality, feel free.
