Hacker News | new | past | comments | ask | show | jobs | submit | login

Does anyone working in tech ACTUALLY think transhumanism is a good idea? It seems like a field ripe for criminal exploitation and hacking, even in an optimistic future.

Unless I lose a limb or an eye, I'm keeping my body offline.



In a way I already use my smartphone as an extension of my body. Compared to the pre-smartphone era I remember fewer facts, knowing that I can easily look them up. I outsource even moderate arithmetic to my smartphone instead of calculating it myself. I use pictures and notes to aid short-term memory.

Of course that's dangerous (just imagine the gaslighting potential), but also very useful. The only real drawback is the awfully low bandwidth between my brain and my phone. The display-to-brain connection over the visual cortex is okay-if-wasteful, but touch input is a real bottleneck. How cool would it be to have a math coprocessor implanted that I could communicate with at the speed of thought? Or a knowledge database that I could query with my inner voice?

Is it incredibly dangerous? Yes. But many things are, and hopefully we will learn to mitigate the dangers.


People are already transhuman now. I have two pieces of technology glued to the outside of my body as we speak; they are monitoring my blood glucose levels and adjusting the amount of insulin my body receives in near real time.

Could someone hack it and attempt to overdose me with insulin? Yes, it is a risk.

But it is a risk I'm a thousand percent willing to take, because life before it was a chore where you risked falling asleep and never waking up again.
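The kind of closed loop described above can be sketched in a few lines. This is a deliberately simplified illustration, not a dosing algorithm: every threshold, gain, and cap here is invented for the example, whereas real artificial-pancreas systems use clinically validated control algorithms.

```python
# Toy sketch of a closed-loop glucose controller.
# All constants are made up for illustration only.

TARGET_MG_DL = 110    # hypothetical target glucose level (mg/dL)
SUSPEND_BELOW = 70    # suspend insulin delivery below this reading
GAIN = 0.01           # hypothetical proportional gain (units/hr per mg/dL)
MAX_RATE = 2.0        # hypothetical hard cap on basal rate (units/hr)

def basal_rate(glucose_mg_dl: float) -> float:
    """Return an insulin basal rate for the current sensor reading."""
    if glucose_mg_dl < SUSPEND_BELOW:
        return 0.0                    # fail-safe: never dose while low
    error = glucose_mg_dl - TARGET_MG_DL
    rate = max(0.0, error * GAIN)     # only correct when above target
    return min(rate, MAX_RATE)        # hard cap limits overdose damage

print(basal_rate(60))    # 0.0 (suspended)
print(basal_rate(110))   # 0.0 (on target)
print(basal_rate(210))   # 1.0 (proportional correction)
```

Note that the low-glucose suspend and the hard rate cap are exactly the sort of safety invariants that bound the damage even if the connected parts of the system are compromised.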


Yes, but would you have it if you didn't need it? That was my point. I can see using transhuman augmentations to help those in need: giving the blind sight, amputees limbs, etc. But my concern is when it becomes a vanity issue rather than a needs-based one. Replacing body parts should not be a light decision unless it addresses a genuine need.


I mean, it's not a light decision currently because the risks are high. As high-risk individuals have these parts replaced over time, we'll be able to quantify those risks. The risks will shrink, and eventually people in 'no risk' scenarios with their current body parts will be able to accurately judge whether the added risks of mechanical parts are worth it. Eventually the risks will be borne by the people who do not accept augmentations. But typically these things take very long periods of time to play out.


That's a pessimistic and myopic view of transhumanism. Where it becomes a good idea is in the context that humanity is limited in its current (biological) form.

There are lots and lots of things that could kill our species over the next several thousand years, and transhumanists simply think it is worth investing in how we could use technology to adapt to those threats or enhance the human experience.


Working in tech + have a pathologist for a father.

Unequivocally yes, I think transhumanism is a good idea.

Arguing against transhumanism is arguing that you believe either (a) the body was created perfectly or (b) that we'll do more harm than good if we tinker with it.

The body is a machine. A bloody complicated machine, with many parts we don't understand, but a machine nonetheless.

I don't believe in a future where substantial body modification doesn't make us better, in all objective senses.

And we do it today: vaccinations, pacemakers, artificial hips and joints and valves, cochlear implants, the myriad of drugs to modify various bodily processes, limb replacements, mRNA drugs.

The body is programmable: it's just very complex. But that hasn't stopped us as a species from achieving other equally difficult things.


Thanks, and well put. It's amazing how people who think, "I don't want to mess up my body," will come to a different conclusion, once they are facing death or disability.

Perhaps experience really is the best teacher. ;)


My main area of concern is people who can exploit these augmentations for political, personal, and financial gain. Even in the best scenario possible, these things are going to happen. All code is flawed in some fashion. Unless we plan on keeping these augmentations completely offline, they can and WILL be exploited by hackers. Imagine how giddy North Korea would be at the chance to mess with the President's body, or a competitor hacking your eyes to see corporate secrets. Or a jilted lover wanting revenge. These are already real threats with current technology, to an extent; how much more deadly will they become when we can't just throw the tech out or swap it, because it is part of ourselves?

Worse yet, what happens when those parts of yours no longer get support? There are already people who gained sight through robotic implants, then lost it again once the company that sold them the eyes went out of business. We can't just blindly think about the best-case scenario and call it good. We need to think through worst-case scenarios to provide a safer future.


> Even in the best scenario possible, these things are going to happen. All code is flawed in some fashion. Unless we plan on keeping these augmentations completely offline, they can and WILL be exploited by hackers.

I think that companies are capable of making secure systems when they have the right incentives. The Xbox One, launched 9 years ago, has never had its security broken such that it could play copied games. That's even though customers have physical access to the hardware and ample incentives to hack it. Microsoft prioritized security because it's directly aligned with the money the ecosystem makes from game sales. Implants don't need to run all the complicated software a modern game console does (which includes a web browser and video players as well as actual games), so they could have a much smaller software attack surface in the first place.
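A minimal attack surface for an implant could amount to little more than refusing any update that isn't cryptographically authenticated. Here's a sketch of the idea; the key name and update format are invented, and a real scheme would use asymmetric signatures (so the device never stores the signing secret) rather than the HMAC used here to keep the example short:

```python
import hashlib
import hmac

# Hypothetical device key; a real implant would keep this in secure hardware
# and verify asymmetric signatures instead of sharing a symmetric secret.
DEVICE_KEY = b"factory-provisioned-secret"

def sign(firmware: bytes) -> bytes:
    """Compute the authentication tag for a firmware image."""
    return hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()

def accept_update(firmware: bytes, tag: bytes) -> bool:
    """Accept an update only if its tag verifies."""
    # compare_digest is constant-time, avoiding timing side channels.
    return hmac.compare_digest(sign(firmware), tag)

blob = b"v2.0 pump controller"
good_tag = sign(blob)
print(accept_update(blob, good_tag))                  # True
print(accept_update(b"malicious payload", good_tag))  # False
```

The point is that a device whose entire network-facing logic is "verify, then apply" is far easier to audit than a console running browsers and video players.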


While you are correct about the Xbox, it will eventually be cracked one of these days. It's not a matter of if, but when. But would you seriously trust that when it is your own body we are talking about? What is the incentive for companies to continue supporting old parts? Goodwill? Maybe if there is regulation, but if it is more profitable to pay the fines than to issue a recall or continue support, then why bother?

Even Microsoft's security isn't perfect: a group of hackers obtained the full specs and software for the Xbox One months before it was released or any details were announced. Look up "Xbox Underground." Even if the device itself is safe, there are other ways of jacking things up. Imagine if the eyes you use are safe... but the server they connect to isn't, and some creep decides to watch you and your lover having sex.


Your argument applies to biological machines too. Think of advertising. The only reason your mechanical augmentations seem more exploitable is because the technology involved seems better understood.


> Does anyone working in tech ACTUALLY think transhumanism is a good idea?

No, but... as long as technological advancement proceeds at a faster pace than evolution, human obsolescence is bound to happen someday. Transhumanism is an optimistic movement betting that humans could somehow stay in control as cyborgs (when it is much more likely that artificial life will replace humans without any human components).


I'll be keeping my body offline. But I think fusion with AGI is inevitable if we want to remain somewhat in control of our future. Otherwise we will be superseded by superintelligent AI and become their pets.


We wouldn’t even notice we’ve become pets of any kind of superintelligence; one can’t notice intelligence vastly superior to one's own, the same way your cat can’t even remotely grasp what the code on your screen is for.


Heh - this sounds kind of like theism. :)


Except that I'm not talking about some mythical "supreme beings"; I'm talking about things we've created ourselves. When you look at politics, you'll see that in many countries, from the USA to Russia, "collective interests" matter more than those of individual humans: corporations "sponsoring" regulations, steadily declining quality of life, and, at the same time, skyrocketing company income.


Oh, okay.

I personally wouldn't have called those things "intelligences" - more like clashes of unintended perverse incentives and ignored negative externalities.

That said, I didn't think you were talking about a hypothetical deity - it's just what your metaphor made me think of.


>I personally wouldn't have called those things "intelligences"

Why, though? By what criteria are they not intelligent beings?


Because in my experience these systems act dumber than most individual humans manage to.

They function much more like a drunk, demented DNN optimizing for some unknown set of variables than like a human using its intelligence to solve problems.

IMO. YMMV.


They wield power not available to individual humans. They achieve things individual humans can’t. They get people to sacrifice themselves (up to a literal karoshi) for company’s interests.

Are you sure you don’t do things that appear stupid to your cat?

>They function much more like a drunk, demented DNN optimizing for some unknown set of variables than like a human using its intelligence to solve problems.

That’s a weird sentence. It’s written as if there was some fundamental difference between the two.


> They wield power not available to individual humans. They achieve things individual humans can’t. They get people to sacrifice themselves (up to a literal karoshi) for company’s interests.

Do these have something to do with intelligence? I see power and intelligence as largely orthogonal. Ditto control and intelligence.

See Donald Trump during his presidency, for instance. Mountains of power, control of the nuclear arsenal, massive private wealth. People threw their careers away to tell lies for him.

Still an idiot.

> Are you sure you don’t do things that appear stupid to your cat?

Not at all. I see your point here and it's a valid one (though I still can't do anything but reason from the reference frame I've got).

Though I'm not sure my cats even have a category for "stupid".

> > They function much more like a drunk, demented DNN optimizing for some unknown set of variables than like a human using its intelligence to solve problems.

> That’s a weird sentence. It’s written as if there was some fundamental difference between the two.

I'm not sure there's a fundamental difference, but I'm also not convinced there isn't.

Certainly the output I've seen from image recognizers, GPT-N, image generators, and the like has seemed recognizably different to me from that of skilled human makers.

They're occasionally quite good at aping a particular style or topic, but I don't remember seeing anything so far that reminds me of the flashes of inspiration, insight, and understanding I'm used to seeing from other humans while they're problem-solving.

I've also seen people much more informed than I make cogent-seeming arguments that DNNs are necessary but not sufficient for something like human intelligence, and that we'll need other paradigms to combine with them before we really see progress.

I've seen the opposite claimed too, in a reasonable and coherent way - that DNNs are all we need and they just need more scale than we've been able to throw at them so far.

I'm not sure which stance I think is right (if either - not too sure human intelligence is actually a tractable problem, given that we don't even have a coherent definition of it, nor a clear idea of whether the not-understood nature of consciousness may have anything to do with it).


Where do people get this idea that a flat plane of silicon can ever out-compete the chemical interconnectivity of fractally folded grey matter? (Don't get me started on performance-per-watt.)

AI's greatest risk is people assuming it's superior while being little more than a charismatic magic 8-ball


Who said anything about silicon? Maybe AGI will be built on our current breed of processors, or maybe the next generation. The only way I see you being correct is if today's exponential technological advancement comes to a halt or reverses. I do not foresee a future where we do not eventually create an AGI superior to us. Maybe in 5 years, maybe in 100. But it is coming.


Because that fractally folded grey matter has only been generated via a random walk and whatever survives to breed. It is also filled with massive stupidities that can end an individual's life in a moment.

You're also thinking quite two-dimensionally and limiting yourself to one medium. Carbon-nanotube computing, chemical computing, even computing with DNA are all platforms that we can harness and have the potential to master.


There was a span of 14 years between an implant allowing control of an artificial hand, in 2005, and an implant allowing speech synthesis from brain signals, in 2019 [0]. I am not sure the pace of innovation is accelerating drastically. (Edit: Sorry, I posted the wrong URL in the previous version of my reply.)

[0] https://roboticsbiz.com/the-history-of-brain-computer-interf...


Which actually might not be all that bad.

Or we could go all conspiracy theory and suggest that we already are pets. ;)


Hah, a lot of us already are pets, but at least the masters are still human.


IMHO technology has the potential to be awesome.

It's just that we're focusing most of the engineering effort on selling ads and building dark patterns into phones.


My main area of concern is hacking. Imagine how easy it would be to jack someone's life up because they bought a "smart arm" or got AR eyes. Where technology exists, bad actors will exploit it.


Agreed. The smartphone market ruined the idea of cybernetic augmentation for me. It made it clear that any augmentation would inevitably have to be sold and maintained by some company, and I couldn't think of any companies I'd trust to support hardware in my body.


Does anyone working in tech actually think it's a PLAUSIBLE idea? The idea of a computer being able to work AT ALL similarly to a human brain is laughable to me. The brain is a physical organ that we barely understand. Simulating a human mind would probably require simulating interactions at a molecular level, which may not ever be possible.


It's also not just your brain. Like... what are you without your stomach?



