
My libertarian view on discrimination (independent of the Civil Rights Act) is this:

If a service is not widely available in a region, any systematic discrimination that leads to refusing service, or a specific level of service or care, based on anything unrelated to the ability to provide it, should be illegal locally, in that community. Rules like ousting disruptive customers apply across the board.

If a service is widely available, however, then “x-only” service providers should be allowed to operate (as indeed they are with women-only gyms, Jewish-only clubs, or nightclubs that let women in first and charge the men), as long as they advertise it up front and don’t make people show up only to find out that “ladies get in free of charge, men pay $300 for a table with bottle service.”

PS: replace “ladies” and “men” with “whites” and “blacks” and hear how that sounds. And no, citing crime or violence statistics shouldn’t play a role in shaping who can get into places, whether it’s women citing man-vs-bear violence / harassment or people citing racial FBI statistics on violence / harassment. That is the prosecutor’s fallacy.
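To make the prosecutor's fallacy concrete, here is a toy calculation (every number is invented purely for illustration) showing how "group A accounts for most incidents," i.e. P(A | incident), can be large while the rate per individual member, P(incident | A), stays tiny:

```python
# Hypothetical population of two groups, A and B (made-up numbers).
n_a, n_b = 100_000, 900_000
incidents_a, incidents_b = 60, 40   # invented incident counts

# "Group A commits 60% of incidents" — P(A | incident) — sounds alarming...
p_a_given_incident = incidents_a / (incidents_a + incidents_b)

# ...but the chance that any given member of A is involved — P(incident | A) —
# is minuscule, because the group is large and incidents are rare.
p_incident_given_a = incidents_a / n_a

print(p_a_given_incident)   # 0.6
print(p_incident_given_a)   # 0.0006 — i.e. 99.94% of group A had no incident
```

Confusing the first probability for the second is the fallacy: it says nothing reliable about any individual walking up to the door.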


Yes, I think the argument that "discrimination is fine so long as it doesn't result in complete shutout of a vendor/customer" is reasonable. But that argument didn't fly for the cake controversy case, so society doesn't seem to agree.

Interesting

One of you is afraid that YOU are going to get assaulted or worse.

The other is afraid you’re going to get ACCUSED of it.

What has this society become?


A low-trust society.

I'm also gonna be honest.

Ever since LLM-generated content proliferated, we now have “This isn’t X. It’s Y” shibboleths EVERYWHERE!

A person doesn’t normally start a sentence with “This isn’t some silly minor thing,” out of the blue, purely as a setup, only to follow it with “This is a major deal worthy of your resharing and liking!”

They might do the clauses in the other order, though. “This is a huge deal! Not just business as usual.”


> Ever since LLM generated content proliferated we now have...

Or maybe, ever since you became aware of it, you've started noticing it everywhere?

See: https://en.wikipedia.org/wiki/Frequency_illusion


Nope. It is generated by LLMs, and now a few people have been influenced by it.

It isn’t like em-dashes


I've definitely been writing like that for a long time.

"Nope. It isn’t like em-dashes. It is generated by LLMs, and a few people got influenced by it now."

Your comment can be rearranged into that “it's not X, it's Y” format too.


Yes, but it's not natural to say "It's not <non sequitur thing no one was talking about>. It's <amazing globally impactful thing that should make you pay attention>." That's how LLMs write though.

Amazing as in his stuff actually works?

I just hear him promoting OBLITERATUS all day long and trying to get models to say naughty things


Yeah, but I think the philosophy is to show how precarious the guardrails are.

Previous discussion a few months ago -- same problems persist: https://news.ycombinator.com/item?id=45632336

Literally came here to write this

There are some models that everyone wants but companies discontinue or never make

iPhone 13s was the last one

Another example was the Cadillac Ciel at Pebble Beach. It only ever appeared in Entourage after that.


I wonder what people will say to that.

I personally think neither Go nor Java would be good for "agents". Better to have them sandboxed in WASM.


Sandboxing is a completely orthogonal issue and WASM is probably not a good direct target for LLMs.

Of course, writing a language that compiles to Wasm is certainly one way, but you would also have to sandbox all the other tools used during development (e.g. agents can just call grep/find/etc.).


WASM isn't a language you'd want to program in directly. You can't verify outputs, nor is there much proper training data beyond examples and such.


It isn’t about training anymore. It is about harnesses.

Just look at the new math proofs that will come out, as one example. Exploration vs. exploitation is a thing in AI, but you seem to think human creativity can’t be surpassed by harnesses and prompts like “generate 100 types of possible…”

You’re wrong. What you call creativity is often a manual application of a simple self-prompt that people do.

One can have a loop where AI generates new ideas, rejects some, ranks the rest, then prioritizes. Then it spawns workloads and sandboxes to try out and test the most highly ranked ideas. Finally, it accretes knowledge into a relational database.
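A minimal sketch of that generate/rank/test loop, with randomness standing in for the LLM and the sandbox (every function name, parameter, and scoring rule here is an invented placeholder, not a real API):

```python
import random

def generate_ideas(n):
    """Placeholder for an LLM call that proposes candidate ideas."""
    return [f"idea-{random.randint(0, 999)}" for _ in range(n)]

def score(idea):
    """Placeholder for evaluating an idea in a sandbox; returns a fitness."""
    return random.random()

def explore(rounds=3, per_round=10, keep=3):
    knowledge = []  # accreted results (the 'relational database' in spirit)
    for _ in range(rounds):
        candidates = generate_ideas(per_round)          # generate
        ranked = sorted(candidates, key=score, reverse=True)  # rank
        knowledge.extend(ranked[:keep])                 # keep only the top-ranked
    return knowledge

results = explore()
print(len(results))  # 9: three rounds, keeping the top three each time
```

The real version would replace `score` with actual sandboxed execution, but the control flow — propose, rank, prune, record — is the whole idea.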

The Germans also underestimated the USA in WW2, saying their own soldiers were superior and the USA just had technology, but the USA out-produced them in tanks and machinery and won the war through sheer automation, even though its soldiers were regular joes rather than elite troops.

Back then it was mechanized divisions. Now it is mechanized intelligence.

As Stalin put it: quantity has a quality all its own.


There are no "new ideas" with AI. Claiming the opposite is a fundamental misunderstanding of the technology.


While that’s kind of true in some sense, I think there’s an argument to be made for the contrary: that the mechanism for generating new ideas in humans is not quite as special as we would like to think.

In other words, creativity in humans is arguably just as derivative as in machines.


I think this can be falsified just by considering the history of humanity. It wasn't that long ago that human language literally did not exist, and our collective knowledge wasn't much more than 'poke him with the pointy end'. Somehow we went from that to putting a man on the Moon, unlocking the secrets of the atom, and more. And if you consider how awful we are at retaining and sharing information, and the general inefficiencies that come from being humans rather than pure information-processing machines, we did all of this in little more than the blink of an eye. That certainly seems rather special.


All that humanity has achieved happened due to the simple loop of identifying a desire/need and finding a way to satisfy it. Also known as reinforcement learning. The only thing that really differentiates humans from machines is... history. We've been learning and passing on our knowledge to successive generations over millennia. Nothing really special there; give the machines a few years to learn and see what happens.


What needs do machines have? What desires do they have?


None, yet. But you can be 100% sure it's something we'll eventually succeed in adding, as it's through the guidance of desires and needs that intelligence really expresses.


Not sure how that follows but okay


What a ridiculous take lmao.

What are your contributions again?


And what exactly is ridiculous about it?


Reinforcement learning requires a well-defined goal and a well-defined way to quantitatively measure progress toward that goal. In reality, these don't exist without a hand of God guiding you; in the case of machine learning, that hand of God is our own. Even given infinite processing power, you could not construct a reinforcement-learning system that would mimic humanity's progress. It is simply a nonstarter due to the nature of reinforcement learning itself.


Conceptually, it's really not as hard as you make it seem. There are layers, but once you peel them away there's only one thing left, which all living things share: the drive to survive (maintain internal state parameters within a certain range by accessing nutrition, protection from environmental elements, security from other survival-seeking entities, reproduction to pass on genes, etc). No need to bring God/gods into it.

There's also no need to specifically mimic humanity's progress; that's just an accident of survival facilitated by opposable thumbs and language ability. We've already made machines with the base abilities, and emulated the drive (see evolutionary algorithms[0] for example). We just need to put it all together in a few units and let them "loose" to evolve on their own for a while. It took humans ~300,000 years to get where we are today; I'm positive that it'll take machines a small fraction of that. Nothing special.

[0] https://en.wikipedia.org/wiki/Evolutionary_algorithm
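For the curious, the simplest member of the family the linked article describes, a (1+1) evolutionary algorithm, fits in a few lines. The fitness function here (maximize the number of 1-bits) and all parameter names are illustrative choices, not from any particular library:

```python
import random

def evolve(length=20, generations=200, seed=0):
    """(1+1)-EA on a toy 'count the 1-bits' fitness: mutate, then select."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        # Mutation: flip each bit independently with probability 1/length.
        child = [bit ^ (rng.random() < 1 / length) for bit in parent]
        # Selection: keep the child only if it is at least as fit.
        if sum(child) >= sum(parent):
            parent = child
    return parent

best = evolve()
print(sum(best))  # fitness only ever goes up, approaching the maximum of 20
```

The "drive" is nothing but the selection step; everything else is noise being filtered through it, which is the point the comment above is making.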


Very little, certainly approximately 0%, of what humanity has done has been driven by base survival instincts. You're describing the process to mimic a roach, not a human.


You do work, don't you (or at least are in school)? Do you do it for sheer fun? Or to be able to afford things that allow you to survive?

Even those who do something like art "for fun" do it because it sates an internal need (not all actions need to make sense, because the inherent randomness of evolution is messy and leaves artifacts). Though also the desire to create some form of legacy can be considered a kind of survival: to be remembered by others beyond one's own lifespan.


I’m not claiming an LLM is structurally or functionally equivalent to a human brain. I just said that what we call “creativity” is in fact a very derivative thing.


I hear this sentiment a lot but it doesn’t ring true for me.

What is an idea really and what’s your definition of new?

If I get an LLM to spit out, I dunno, a deployment system written in Haskell that uses BitTorrent or something, none of those bits are new, but certainly there will be unique challenges to solve in the code, and it’s a new system.

Where is the line for new? Is it in combining old ideas? If not then does any software have “new” ideas? It’s all combinations of processor instructions after all…


What I am excited about is the possibility of LLMs drawing conclusions from the last 150 years of scientific papers.

There have been lots of instances of knowledge being rediscovered even though it was previously published but sat forgotten on some shelf. LLMs' ability to digest large volumes of data will, I think, help with this issue.

We will still need to reproduce and verify conclusions, but it will be interesting to see what might come from this.


I don't think all sides of this discussion agree on what a "new idea" is. I am a very creative person, but I've never had a truly original thought, and I don't know how having one would be possible.


It depends on what layer you look at, I think; shoulders of giants and all that.


That's only partially true.

AI can innovate in the synthetic realm of novel ideas, while real-world novelty will remain untouched.

There are different types of novelty.


If AI could innovate, it wouldn't be a public product. It would be a cash cow. Why give your customers the ability to come up with new and amazing ideas when you could just keep it for yourself and launch a thousand products? The USA is a capitalist society. It doesn't share profitable ideas.

And if AI was really about productivity they'd be talking about doing more faster with the same workforce, not reducing the workforce.


if you like, the business model is called Innovation-as-a-Service :)

That's perfectly aligned with capitalistic motivations


What is a "real-world novelty" and what prevents AI from touching it?


Isn’t that exactly how humans (and even animals) operate?

Human societies look for actual major correlations and establish classifications. Except with scientific-minded humans, we often also want to know the why behind the correlations. David Hume grappled with that… https://brainly.com/question/50372476

Let me ask a provocative question. What, ultimately, is the difference between knowledge and bias?


To a certain degree yes.

https://en.wikipedia.org/wiki/Esagil-kin-apli

In this Mesopotamian text, diagnostic rules are structured as nested if-then-else rules. So I have been told; I haven't read it myself.


Let me clear it up

The Trump administration acts cartoonish and fickle. They can easily punish one group, and then agree to work with another group on the same terms, to save face, while continuing to punish the first group. It doesn't have to make consistent sense. This is exactly what they have done with tariffs, for example.

Secondly, the terms are technically different because "all lawful uses" are preserved in this OpenAI deal, and the public statement is just lawyering. Internally at the DoD, I'm sure it really was about the phrase "all lawful uses". So the lawyers were able to agree to it, and the public gets this mumbo-jumbo.

I thought mass surveillance of Americans was unlawful by the DoD, CIA and NSA? We have the FBI for that, right? :)


Sure, but OpenAI is also being disingenuous here, pretending they're operating under the same principles as Anthropic. They're not, and the things they're comfortable doing are things Anthropic has said it won't do.

