Hacker News | janalsncm's comments

The problem with this argument is you can’t know or control what will happen in the future with something you built. This is the same moral dilemma the scientists faced after developing nuclear bombs.

And the future is not deterministic (or if it is, it is highly chaotic) so the existence of a thing does not have a simple relationship with what will happen in the future. Scientists who developed convolutional neural nets could not know how much good or evil was caused by image recognition technologies. The same technologies that are used to detect tumors in images can be used to target people for assassination.

There are exceptions, but my opinion is the supply chain of evil is paved with mundane inventions.


Yes, yes, true, but you've massively moved the goalpost. The original commenter was referring to people working at xAI right now. To continue your comparison, your argument would be like Oppenheimer claiming "How could I have ever known my work would be used as a weapon? I just wanted to make big explosions."

I don't know why this argument often pops up in these kinds of discussions. Approximately no one is judging people who have done their best effort to avoid doing harm. We are judging people who don't care in the first place.


Well if I moved it, consider this to be me putting it back where it was: people who continue to work on things which are concurrently being used in mostly harmful ways and have means to find a different job have no excuse.

As far as Oppenheimer is concerned, his argument is not that nukes are harmless, but that they are less harmful than Nazis, and much less harmful than Nazis with nukes.


Thanks, I can very much agree with that.

Re Oppenheimer: I know. My point was that he very much knew what his work was being used for, as should people working at xAI at the moment.


Plenty of the scientists involved in the Manhattan Project had immediate regrets. Plenty of rich people working in tech don't. That's the difference between having morals and not having morals, and the latter group needs to be judged and shunned.

Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research.

There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing.

Note this doesn’t apply to everyone. Some people just want to make money.


I would argue it was both. No doubt this company was marketing it in a way that made it seem very reliable. And all of the procedural failures afterwards made the error so much more damaging.

But imo this is why local police departments should not have access to this kind of tool. It is too powerful, and the statistical interpretation is too complicated for random North Dakota cops to use responsibly. Neither the company nor the PD have an incentive to be careful.


It's not an AI error. The face recognition AI simply said that it's a "potential match", which is correct. It's the humans' job to confirm that a potential match is in fact a match, especially when the suspect is 1,900 km away.

I read it as: she was arrested and held in Tennessee temporarily, then flown to North Dakota.

“Lipps would sit in that Tennessee jail cell for nearly four months. As a fugitive, she was held without bail”

> In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial

This is from the Sixth Amendment. Where the rubber meets the road is what "speedy" means.


I don’t know what tool they used, but it was very likely not an LLM. They probably have some database of driver’s license photos and ran a similarity search against the surveillance footage. This poor lady happened to be the top match.

Even if it also output a score, that score depends on how the model was trained. And the cops might ignore it anyways.
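A minimal sketch of how such a search might work, assuming a cosine-similarity lookup over precomputed face embeddings (all names, dimensions, and sizes here are invented for illustration, not details of the actual system):

```python
import numpy as np

# Hypothetical gallery: 10,000 license photos, each already encoded
# as a 512-dim embedding by some face model, then L2-normalized.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 512))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Embedding of the surveillance frame, normalized the same way.
probe = rng.normal(size=512)
probe /= np.linalg.norm(probe)

# Cosine similarity against every entry. Note that argmax ALWAYS
# returns somebody -- the "top match" exists no matter how weak
# the actual resemblance is, which is exactly the failure mode.
scores = gallery @ probe
best = int(np.argmax(scores))
print(f"top match: gallery index {best}, score {scores[best]:.3f}")
```

The key point is the last two lines: a nearest-neighbor search returns a best candidate unconditionally, and whatever score threshold is applied depends entirely on how the model was trained and how the humans choose to interpret it.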


LLMs have significantly reduced the time I’ve spent chasing down cryptic errors on stack overflow, old github issues, or asking in random slack channels about it. Even if that’s all they did, they would be very valuable.

If that means I’m actually coding instead of figuring out why xyz random plugin isn’t doing its job right now, some subsystem that I need but don’t care to learn the internals of, then I am happy.


If that is the case, you could consider a different website like chatgpt.com which will give you much more immediate feedback on your ideas.

I am here to express my ideas and opinions. They might not always be popular, but they are my opinions (that is the reason I have 3x less karma than you despite being here 11 years longer). And some people will debate my opinions and try to convince me that I am wrong. And sometimes I learn something.

But if we start ignoring ideas and opinions and instead focus on superficial things like how they are written or communicated, then the whole point of HN is lost.


> I am here to express my ideas and opinions

If that is true you shouldn't have any objection to a rule against letting a chatbot express your ideas and opinions for you. Express yourself, because asking a chatbot to do your thinking and writing for you is not a superficial thing.

> But if we start ignoring ideas and opinions and instead focus on superficial things like how they are written or communicated, then the whole point of HN is lost.

How a message is communicated matters and always has. Even before this rule, I could express opinions here in ways that would get me banned from this website, and I could express those exact same opinions in ways that would not. Ideas and opinions still matter, but so does how we communicate them. It's a very small ask that you express your own thoughts in your own words while participating here.


Writing is the product of thinking and understanding. An LLM can write for you but it cannot understand for you.

I tend to think these things are self correcting. Understanding still matters, I hope.


The amount of resources is not a social construct but how they are distributed is.

The mean American net worth is $620k. The median American net worth is $192k.

The global mean net worth is $95k. The median is $9k.
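The gap between mean and median is what a right-skewed distribution produces: a few very large fortunes drag the mean up while the median barely moves. A toy example with invented numbers:

```python
from statistics import mean, median

# Ten hypothetical households: nine worth $100k, one worth $5M.
# The numbers are made up purely to show the mean/median gap.
worths = [100_000] * 9 + [5_000_000]

print(mean(worths))    # 590000.0 -- pulled up by the one outlier
print(median(worths))  # 100000.0 -- what the typical household has
```

One outlier is enough to make the mean nearly six times the median, which is why median net worth is the better summary of what a typical person has.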



