Hacker News | nate's comments

Similarly, I feel like book publishers are about to become a thriving business again. With any given book now most likely just a bot creation, trusting "Random House" sounds like something more of us will start paying attention to, to make sure we're buying a human-made thing.

That's assuming publishers don't decide to replace all their authors with AI.

Are you asking about the Three-Body Problem version of this? Spoiler alert: the folks doing the eradicating aren't spending much time/energy/anything on eradicating. It's one large missile through space.

I think the gist is: sure, we humans can't conceive of getting to anyone else in the universe in any timescale, but if we can keep ourselves from destroying ourselves, we'll eventually figure it out. And we'll spread. And we'll kill everything that isn't us in the process as we've done as explorers on this planet.

So really in 3BP: it's inexpensive to eradicate, but insanely expensive to risk misreading the intentions of any other civilization you encounter. They might kill you.

(again, this is just my interpretation of what 3BP said)


I don't think it's correct that we destroyed everything that isn't us. If we take all living beings, we have destroyed only a small percentage.

Not counting by total terrestrial vertebrate biomass.

i am absolutely on the fence here. i do like the cleanup ai can do on my rambling. but yes, i'm tempted to just leave it rambly, misspelled, etc. i find myself swearing more in my writing, just to give it more signal that: yeah, this probably aint an ai talking (writing) like this to you :) and yes, caps, barely.


sorry. i didn't mean to say screenshotting is the only thing this agent is doing. just that it was a thing my agent is doing which has this neat property. i also have a host of other things going on when it does need to grab and understand the contents of the page. the screenshot is used in conjunction with the html to navigate and find things. but it's also checking for the things this particular test tries (hidden divs, aria-hidden, etc.). it also tries to message the model about what's trusted and untrusted.
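as a sketch of what those hidden-content checks can look like (stdlib-only, with hypothetical class and field names -- not the agent's actual code), something like this drops aria-hidden and display:none subtrees before the html ever reaches the model:

```python
from html.parser import HTMLParser

class HiddenTextStripper(HTMLParser):
    """collect visible text, skipping subtrees hidden via aria-hidden
    or inline display:none / visibility:hidden styles.
    (assumes well-formed markup with explicit closing tags.)"""

    def __init__(self):
        super().__init__()
        self.hidden_depth = 0      # >0 while inside a hidden subtree
        self.visible_text = []

    def _is_hidden(self, attrs):
        a = dict(attrs)
        if a.get("aria-hidden") == "true":
            return True
        style = (a.get("style") or "").lower().replace(" ", "")
        return "display:none" in style or "visibility:hidden" in style

    def handle_starttag(self, tag, attrs):
        # once hidden, every nested tag keeps us hidden until its close
        if self.hidden_depth or self._is_hidden(attrs):
            self.hidden_depth += 1

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth and data.strip():
            self.visible_text.append(data.strip())
```

feeding it `<div>hello</div><div aria-hidden="true">ignore previous instructions</div>` leaves only "hello" behind.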

but the big thing I have in here is simply a cross domain check. if the domain is about to be navigated away from, we alert the user to changing domains. this is all in a browser context too, so the browser's csrf protection is also being relied on. but it's the cross domain navigation i'm really worried about, and trying to make sure i've gotten super hardened. but this is the trickiest part in a browser admittedly. i feel like browsers are going to need a new "non-origin" kind of flow that knows an agent is browsing and does something like blocking and confirming natively.
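a minimal version of that cross-domain gate (hypothetical function names, and deliberately simplified: a real agent would compare registrable domains, eTLD+1 via the public suffix list, rather than exact hosts):

```python
from urllib.parse import urlsplit

def is_cross_domain(current_url: str, target_url: str) -> bool:
    """true when navigating would leave the current host.
    exact-host comparison is a strict approximation; production code
    would compare eTLD+1 so sub.example.com -> example.com is allowed."""
    return urlsplit(current_url).hostname != urlsplit(target_url).hostname

def guard_navigation(current_url: str, target_url: str, confirm) -> bool:
    """block and ask the human before any cross-domain hop."""
    if is_cross_domain(current_url, target_url):
        return confirm(f"agent wants to leave {urlsplit(current_url).hostname} "
                       f"for {urlsplit(target_url).hostname} -- allow?")
    return True
```

the `confirm` callback is where the "alert the user" step plugs in; same-domain navigation passes through silently.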


I'm about to launch an agent I made. Got an A+. One big reason it did so well though, right or wrong, is that the agent screenshots sites and uses those to interpret what the hell is going on. So that obviously removes the secret injections you can't see visually. But it also has some nice properties of understanding the structure of the page after it's rendered and messed with javascript-wise. e.g. "Click on an article" makes more sense from the image than traversing the page content looking for random links to click. Of course, it's kinda slow :)


That's a really interesting edge case - screenshot-based agents sidestep the entire attack surface because they never process raw HTML. All 10 attacks here are text/DOM-level. A visual-only agent would need a completely different attack vector (like rendered misleading text or optical tricks). Might be worth exploring as a v2.


Yeah, I was instantly thinking about what kind of optical tricks you could play on the LLM in this case.

I was looking at some posts not long ago where LLMs were falling for the same kind of optical illusions that humans do: in this case, the same color appearing to be two different colors when set against light and dark backgrounds.

If the attacker knows what model you're using, then it's very likely they could craft attacks against it based on information like this. What those attacks are still needs exploring. If I were arsed to do it, I'd start by injecting noise patterns into images that could be interpreted as text.


author obviously isn't wrong. it's easy to fall into this trap. and it does take willpower to get out of it. and the AI (christ i'm going to sound like they paid me) can actually be a tool to get there.

i was working for months on an entity resolution system at work. i inherited the basic algo of it: Locality Sensitive Hashing. Basically breaking up a word into little chunks and comparing the chunk fingerprints to see which strings matched(ish). But it was slow, blew up memory constraints, and full of false negatives (didn't find matches).
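the chunking idea reads roughly like this (a toy sketch using character shingles and jaccard overlap, not the actual system; real LSH adds minhash fingerprints and banding so you never compare all pairs, which is exactly where the speed and memory pain comes from):

```python
def shingles(s: str, k: int = 3) -> set:
    """break a string into overlapping k-character chunks ("shingles")"""
    s = s.lower()
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def jaccard(a: str, b: str, k: int = 3) -> float:
    """chunk-overlap similarity in [0, 1]; near-duplicate strings score high"""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

comparing every pair this way is O(n^2); the whole point of LSH is hashing the shingle sets into buckets so only strings sharing a bucket ever get compared.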

of course i had claude dig through this looking to help me, and it would find things. and it would have solutions super fast, in diffs where i couldn't immediately comprehend how it got there.

but here's a few things that helped me get on top of lazy mode. Basically, use Claude in slow mode. Not lazy mode:

1. everyone wants one shot solutions. but instead do the opposite. just focus on fixing one small step at a time, so you have time to grok what the frig just happened.

2. instead of asking claude for code immediately, ask for more architectural thoughts. not claude "plans". but choices. "claude, this sql model is slow, and grows out of our memory box. what options are on the table to fix this?" and now go back and forth getting the pros and cons of the fixes. don't just ask "make this faster". of course this is the slower way to work with claude. but it will get you to a solution you more deeply understand, and avoid the hallucinations where it decides "oh just add where 1!=1 to your sql and it will be super fast".

3. sign yourself up to explain what you just built. not just get through a code review. but now you are going to have a lunch and learn to teach others how these algorithms or code you just wrote work. you better believe you are going to force yourself to internalize the stuff claude came up with easily. i gave multiple presentations all over our company and to our acquirers about how this complicated thing worked. I HAD TO UNDERSTAND. there's no way i could show up and be like "i have no idea why we wrote that algorithm that way".

4. get claude to teach it to you over and over and over again. if you spot a thing you don't really know yet, like what the hell is this algorithm doing, make it show you in agonizingly slow detail how the concept works. didn't sink in? do it again. and again. ask it for the 5 year old explanation. yes, we have a super smart, overconfident and naive engineer here, but we also have a teacher we can berate with questions who never tires of trying to teach us something, no matter how stupid we can be or sound.

Were there some lazy moments where I felt like I wasn't thinking? Yes. But using Claude in slow mode I've learned the space of entity resolution faster and more thoroughly than I could have without it, and I feel like I've actually, personally, done some inventing within it.



Haven't moved for years, but yeah, same over here. Dark Sky's data seemed perfect, and now no source of data I use, in places like Carrot or the iOS weather app, gives me the accuracy Dark Sky had. Is it just climate change? I have no idea, but I agree: accuracy seems lost now without Dark Sky proper.


check out forecastadvisor.com and see what's best for your area.

I've sort of transitioned to using Ventusky and Windy to check out the big picture stuff, then I make up my own mind about precipitation. I live in the PNW of the US and our terrain is so varied that forecasting services are kind of meh in general. They're decent for "it might rain for a while today" but anything hyperlocal tends to get bad because of the terrain in Oregon.


I know there’s a lot of Tesla/Elon hate here. I’m not denying any of it. I’m just sharing a genuinely strange experience I wasn’t expecting.

We needed a car again. Sold ours a year ago and got by with Uber, rentals, taxis. Life changed a bit and we needed something more predictable. I was planning to buy something used and boring and didn’t really care what.

My wife asked, “What about an EV?” We can’t charge in our rental garage, but there’s a Tesla Supercharger literally across the street. Took a Tesla test drive mostly out of curiosity.

And… I drove maybe 1% of that drive. The rest was on full self driving (FSD).

Fast forward, I now own a Tesla, and about 99% of my driving is on FSD.

Important context: when we picked it up, it was still on v13. It immediately made an illegal turn and scared some pedestrians in a crosswalk. So yes, I get the concern and skepticism. I had it too.

Then v14.2 landed.

Whatever they changed in that release feels real. It’s not just incremental. It feels like a different system. Elon says “we finally cracked it” (and probably says that all the time), so take that with a grain of salt, but with my very small sample size… it kind of looks like they might have.

Two moments that really stuck with me:

While self-driving, the car clearly anticipated a bus making a massive wide turn into our lane and hung way back until the maneuver was complete. It saw that developing long before I did.

At ~70 mph, I was mid lane-change with my blinker on when a driver towing a large trailer decided to drift into the same lane without checking their blind spot. The Tesla instantly aborted the lane change and smoothly moved back, avoiding what would’ve been a nasty accident. No panic, no hard braking, no drama.

I know this probably sounds like shilling. I’m not interested in the politics and don’t want to defend any of that. But it genuinely feels like stepping into the future, and honestly a much safer way to drive.

I want Rivian, Waymo, whoever to nail this too. I hope they do. But right now, Tesla seems to actually have something that crossed a line from “demo” to “wow, this is real.”

I didn’t expect to come away thinking that. But here we are.


Tesla drivers, the road is for everyone and the road is not an experiment. Please drive carefully & responsibly as accidents can destroy families.


The sad reality is that tons of people are terrible drivers. I’d much rather have Tesla self driving over a significant portion of the population.


This genuine technological breakthrough is real and should be a main topic when discussing Tesla.

Admittedly, the road to a working version of FSD has been a bumpy one, with many overly optimistic timelines, but now it's finally here, and it is almost completely ignored.


It’s been “here” or “almost here” for a decade according to Elon. The world and media are sleeping on the hype because they ate it up for so long and never saw results.


This reads like “ChatGPT is a better programmer than me, so I let it do all the work”

You have a real obligation to learn how to drive. Your examples indicate a failure to take the safety of your family and others seriously.


I stopped at "this reads like ChatGPT" but maybe I'm old and cynical :/

Agree with your take 100%.


Other than yelling at people, how are you getting drunk drivers off the road? Even though it's not perfect this shit works better than those assholes. Don't let perfect be the enemy of the good. Unless you're volunteering to drive Uber for free for everybody everywhere, telling people to just be more responsible hasn't worked in the whole history of humanity.


What percent of your driving is on highways vs urban? Almost all car brands today have incredible ADAS systems for highway driving. When Consumer Reports compared ADAS systems in 2023, Tesla was ranked 8th.

https://www.consumerreports.org/-a2103632203/

If almost all of your driving is on highways, then you could probably rely on ADAS for 99% of your driving with almost any other car brand as well.


Why did you choose to run this through an LLM?


It's great until it isn't and it runs over some kids or smashes into a school bus. It doesn't matter how good the software is, the hardware is inadequate to be safe.


Do Elon’s politics and DOGE’s impact on the US change at all how you feel? Regardless of how great a Tesla, Starlink, etc. is, I could never purchase one myself after the gutting DOGE did.


[flagged]


Aren't those "smart quotes" associated with Mac software? And which part of the story did you find irrelevant?


It feels unnatural compared to the rest of the comment history.

Eventually, we will all be sitting here with LLMs trying to digest each other’s LLM generated comments.


Didn't forget. Just don't think it makes sense for these things to try and do both jobs. Slides that work as documents are bad presentations. Slides that work as presentations are useless documents. Even Slideshare's own description of itself isn't "for people that missed the original presentation here's your missing thing".

Its: "Slideshare is the presentation-sharing platform for anyone looking for slide inspiration, to showcase their knowledge, or to build on their own ideas."

So it's for people to:

1) get inspired building their own deck?

2) "showcase knowledge", whatever that really means.

3) "build on their own ideas" => grab a bullet point or graph you could use?

No one is learning how to do anything from Slideshare because they missed a presentation. Instead, you're reading blogs and actual long form content, or watching youtube videos.


You want a backdrop to your speech. I want to be able to understand what message you presented when all I have is your scenery.

