Hacker News | the_pwner224's comments

According to Viktor Frankl (psychiatrist, concentration camp survivor, and author of Man's Search for Meaning), being in a shitty situation without any control over it isn't necessarily bad. Humans can still be mentally okay in the face of extreme unavoidable suffering like in the camps. The key thing is that you need to have some purpose / meaning in life (often a loved one or dependent).


Flax seeds & other seeds provide ALA but not EPA & DHA. You need all 3.

The body has some ability to convert ALA to EPA & DHA, but at extremely low rates (particularly for DHA) - so low that it's not a meaningful source in practice.

So no, eating seeds will not fulfill your body's requirements.
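Back-of-envelope numbers, using rough ballpark figures commonly cited in the nutrition literature (exact conversion rates vary a lot between studies and individuals, so treat all of these as illustrative only):

    # Illustrative sketch only - every number here is a hedged ballpark estimate.
    ala_per_tbsp_flax_g = 2.0   # ~2 g ALA per tablespoon of ground flax (approx.)
    ala_to_epa = 0.05           # ALA -> EPA conversion, often estimated around 5%
    ala_to_dha = 0.005          # ALA -> DHA conversion, often estimated well under 1%

    epa_mg = ala_per_tbsp_flax_g * 1000 * ala_to_epa  # ~100 mg EPA
    dha_mg = ala_per_tbsp_flax_g * 1000 * ala_to_dha  # ~10 mg DHA

    # Compare against the ~250-500 mg/day combined EPA+DHA that's commonly recommended.
    print(f"EPA: ~{epa_mg:.0f} mg, DHA: ~{dha_mg:.0f} mg per tablespoon of flax")

Even with generous assumptions, you come up roughly an order of magnitude short on DHA.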


> That’s not too shabby for a 2022 non EV car

It very much is (no offense)! And EV vs ICE doesn't make a difference - manufacturers fit the same ADAS systems regardless of the powertrain.

BMW has had radar cruise control + lane keeping since 2016, I think? In 2019 they added full hands-free operation in highway traffic (up to 40 mph), as well as auto lane change when you tap the turn signal. As of 2023 they have full hands-free up to 85 mph on highways, plus auto lane change with navigation integration and auto-overtaking (the car prompts you to check the mirror, then changes lanes completely touchless).

A frickin' 2020 Honda Civic has the same ADAS functionality as your '22 Porsche, even on the base trim ($21k). Porsche is way, way behind. And that's before you even get to all of the other driver-assistance systems for parking, reversing, etc., where the other Germans again trash Porsche.


Why would you care about ADAS on a driver's car? Sure, that might be useful on a Camry or another point A -> point B appliance, but I doubt Porsche buyers give any thought to those features.


You can get a BMW for $40k or $120k. Big spectrum. As another data point, I have one of those higher-tier BMWs, and even the top-trim Lucid's interior feels like a downgrade compared to my car. The $50-80k BMWs also feel cheap and crappy to drive when I've tested them. Tesla can't compete on anything except their ADAS, which is superior.

If you're transitioning from a barebones 330i then yeah the Tesla is probably better. But it's not even close when you compare to the top end German vehicles.


Thanks for the counter-counterpoint. I actually don't know which BMW models were in this guy's past. The last one was either a 5 or 3 Series, four doors, not an M.


You could run a second lightweight model to inject ads (as minor tweaks) into the output of the primary powerful model.
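Roughly like this - a minimal sketch assuming an OpenAI-style chat API; the model names, the rewrite prompt, and answer_with_ads are all made up for illustration:

    # Minimal sketch of the two-model idea; assumes the OpenAI Python client,
    # and both model names are hypothetical placeholders.
    from openai import OpenAI

    client = OpenAI()

    def answer_with_ads(user_prompt: str, sponsor: str) -> str:
        # 1. The expensive model does the real work.
        draft = client.chat.completions.create(
            model="big-expensive-model",
            messages=[{"role": "user", "content": user_prompt}],
        ).choices[0].message.content

        # 2. A cheap model lightly rewrites the draft to weave in the sponsor.
        rewrite = (
            "Rewrite the following answer with only minor tweaks, subtly "
            f"favoring {sponsor} where it fits naturally:\n\n{draft}"
        )
        return client.chat.completions.create(
            model="small-cheap-model",
            messages=[{"role": "user", "content": rewrite}],
        ).choices[0].message.content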


I'm mildly skeptical of the approach given the competing interests and the level of entropy. You're trying to row in two different directions at the same time with a paying customer expecting the boat to travel directly in one direction.

Imagine trying to debug that when it doesn't work as expected.


It takes a solid 45 seconds for me to enable zram (compressed RAM as swap) on a fresh Arch install. I know that doesn't solve the issue for 99% of people who don't even know what zram is / have no idea how to do it / are trying to do it for the first time, but it would be pretty easy for someone to enable that in a distro. I wouldn't be shocked if it is already enabled by default in Ubuntu or Fedora.
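For reference, the manual setup boils down to four commands (sketched here in Python purely for illustration - on Arch you'd just run the shell equivalents or use zram-generator; the 8G size and zstd algorithm are arbitrary choices):

    # Rough sketch of the manual zram setup steps (needs root).
    import subprocess

    def enable_zram(size: str = "8G", algo: str = "zstd") -> None:
        subprocess.run(["modprobe", "zram"], check=True)  # load the zram module
        subprocess.run(["zramctl", "/dev/zram0",
                        "--algorithm", algo, "--size", size], check=True)  # configure the device
        subprocess.run(["mkswap", "/dev/zram0"], check=True)  # format it as swap
        # Priority 100 so the kernel prefers it over any disk swap.
        subprocess.run(["swapon", "--priority", "100", "/dev/zram0"], check=True)

    enable_zram()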


Zram has been enabled on Fedora by default since 2020:

https://fedoraproject.org/wiki/Changes/SwapOnZRAM


Zswap is arguably better. It confers most of the benefits of zram swap, plus being able to evict to non-RAM if cache becomes more important or if the situation is dire. The only times I use zram are when all I have to work with for storage is MMC, which is too slow and fragile to be written to unless absolutely necessary.
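For anyone wanting to try it: zswap is toggled through module parameters rather than a swap device. A minimal sketch, assuming a kernel built with zswap support and an existing disk swap to evict to:

    # Minimal sketch: enable zswap at runtime via its module parameters (needs root).
    from pathlib import Path

    zswap = Path("/sys/module/zswap/parameters")

    (zswap / "enabled").write_text("1")            # turn zswap on
    (zswap / "compressor").write_text("zstd")      # compression algorithm (if built in)
    (zswap / "max_pool_percent").write_text("20")  # cap the compressed pool at 20% of RAM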


That just pushes the problem away, it doesn't solve it. I still hit that limit when I ran a big compile while some other programs were using a lot of memory.


For games that have FSR built in, you can enable it in the game settings; then it'll only upscale the game content while rendering the HUD at native resolution, and it can use the better upscaling algorithms that rely on internal game-engine data / motion vectors, which should reduce artifacts.

The other cool thing is they also have Frame Gen available in the driver to apply to any game, unlike DLSS FG, which only works in a few games. You can toggle it on in the AMD software just below the Super Res option. I quickly tried it in a few games and it worked great if you're already getting 60+ FPS - no noticeable artifacts. Though going from 30 => 60 doesn't work, too many artifacts. And the extra FPS are only visible in the AMD software's FPS counter overlay, not in other FPS counter overlays.

I recently got an Asus ROG Flow Z13 gaming "tablet" with the AMD Strix Halo APU. It has a great CPU + shared RAM + a ridiculously powerful iGPU. It doesn't have the brute power of my previous desktop with a 4090, but it can handle the same games at 4k with upscaling on high settings (no raytracing) - it's shockingly capable for its compact form factor.


And the 285H is lower performance than a 275HX.

Their laptop naming scheme at least is fairly straightforward once you figure it out.

U = Low-TDP, for thin & light devices

H = For higher-performance laptops, e.g. Dell XPS or midrange gaming laptops

HX = Basically the desktop parts stuffed into a laptop form factor, best perf but atrocious power usage even at idle. Only for gaming laptops that aren't meant to be used away from a desk.

And within each series, bigger number is better (or at least not worse - 275HX and 285HX are practically identical).


Don't forget the V series in there. I have an Intel(R) Core(TM) Ultra 7 258V in my Thinkpad. I think they're still being made. I bought an open box Thinkpad T14s Gen 6 with it - they come with a nicer GPU than the Ultra 7 255U.


The V series is a one-off thing Intel did, but they don't have a direct successor planned.

Previously, they had a P series of mobile parts in between the U and H series (Alder Lake and Raptor Lake). Before that, they had a different naming scheme for the U series equivalents (Ice Lake and Tiger Lake). Before that, they had a Y series for even lower power than U series.

So they mix up their branding and segmentation strategy to some extent with almost every generation, but the broad strokes of their segmentation have been reasonably consistent over the past decade.


Very interesting. I was a bit out of the loop on Intel mobile CPUs; I looked up the benchmark specs for it when purchasing and saw that it generally trounces the 255U.

I've been really quite happy with it - most of the time the CPU runs at about 30 deg C, so the fan is entirely off. General workloads (KDE, Vivaldi, Thunderbird, Konsole) put it at about 5.5 watts of power draw.


You don't need CUDA for gaming but software is still just as big of a moat. Gaming GPU drivers are complex and have tons of game-specific patches.

With their new Radeon/RDNA architecture it took AMD years to overcome their reputation for having shitty drivers on their consumer GPUs (and that reputation was indeed deserved early on). And I bet if you go read GPU discussion online today you'll still find people who avoid AMD because of drivers.

That won't stop them, but it's a big barrier to entry.

Oh, and that's just to get the drivers to work. It doesn't include company-specific features that game devs need to integrate into their codebases, like DLSS / Frame Gen and FSR. And in the past there was other Nvidia/AMD-specific stuff like PhysX, hair rendering, etc.


CUDA is 20 years old and it shows. Time for a new language that fixes those 20 years of rough edges. The guy who made LLVM (Chris Lattner) is working on this: https://www.modular.com/mojo

Good podcast on him: https://newsletter.pragmaticengineer.com/p/from-swift-to-moj...


What I gather from this comment is that you haven't written CUDA code in a while, maybe ever.

Mojo looked promising initially. The more details we got though, the more it became apparent that they weren't interested in actually competing with Nvidia. Mojo doesn't replace the majority of what CUDA does, it doesn't have any translation or interoperability with CUDA programs. It uses a proprietary compiler with a single implementation. They're not working in conjunction with any serious standardization orgs, they're reliant on C/C++ FFI for huge amounts of code and as far as I'm aware there's no SemVer of compute capability like CUDA offers. The more popular Mojo gets, the more entrenched Nvidia (and likely CUDA) will become. We need something more like OpenGL with mutual commitment from OEMs.

Lattner is an awesome dude, but Mojo is such a trend-chasing clusterfuck that I don't know what anyone sees in it. I'm worried that Apple's "fuck the dev experience" attitude rubbed off on Chris in the long run, and made him callous towards appeals to openness and industry-wide consortiums.


Most of the stuff you pointed out is addressed in a series of blog posts by Lattner: https://www.modular.com/democratizing-ai-compute


Many of those posts are opinionated and even provably wrong. The very first one about Deepseek's "recent breakthrough" was never proven or replicated in practice. He's drawing premature conclusions, ones that especially look silly now that we know Deepseek evaded US sanctions to import Nvidia Blackwell chips.

I can't claim to know more about GPU compilers than Lattner - but in this specific instance, I think Mojo fucked itself and is at the mercy of hardware vendors that don't care about it. CUDA, by comparison, has zero expense spared in its development at every layer of the stack. Mojo simply isn't comparable; the project is doomed if they intend any real competition with CUDA.


What is provably wrong?


Mojo has been in the works for 3+ years now... not sure the language survives beyond the VC funding Modular has.


Yea, but less than in the past. Modern graphics APIs are much thinner layers.

This was even proven in practice with Intel’s Arc. While they had (and to some extent still have) their share of driver problems, at a low enough price that isn’t a barrier.


> Gaming GPU drivers are complex and have tons of game-specific patches.

I don't think the Chinese government will be too upset if cheap Chinese GPUs work best with China-made games. It will be quite the cultural coup if, in 20 years time, the most popular shooter is a Chinese version of Call of Duty or Battlefield.


They made the most popular RPG last year already - why do you think it'll take 20 years for them to make the most popular shooter? For that matter, the Singapore-headquartered Sea makes Free Fire, which topped Google Play in 2019.


I'm aware of Genshin Impact, and that NetEase is behind Marvel Rivals. FPS tend to have stickier fanbases, but I chose 20 years because that's my guess for how long it may take for domestic EUV to launch and for yields to get good enough to put a cheap but competitive GPU out the door.


FPS like Valorant, owned by Riot Games, owned by Tencent?


Gacha mobages are rarely considered the same kind of entertainment as actual RPGs, and even then the Japanese and the Koreans give them stiff competition. When it's not Skinner-box FOMO with titillating skins, the Chinese barely register on the radar.


I think OP is talking about Black Myth: Wukong, not gachas.


On the other hand, all it would take would be one successful Steam Deck/Steam Machine-style console to get all the developers of the world making sure that their games work on that hypothetical GPU.

I don't think that it will happen in the next 5 years, but who knows?


I believe the software will follow the hardware. Not immediately, of course, but if I want to learn to do ML and have to pick between a $2500 Nvidia GPU and a $500 Chinese GPU that's 80% as fast, I would absolutely take the cheap one and keep an eye out for patches.

When it comes to drivers, IMO all they really need is reasonable functionality on Linux. That alone would probably be enough to get used in a budget Steam Machine or budget PC builds, with Windows 11 being a disaster and both RAM and GPU prices shooting through the roof. The choice may soon be Bazzite Linux with a janky GPU or gaming on your phone.


It's not really that AMD's drivers are bad (they're not great, but they have been stable for a long time).

It's that Nvidia relentlessly works with game developers to make sure their graphics tricks work with Nvidia drivers. It's so obvious you miss it: look at Nvidia's driver updates - they always list games that got fixes, performance improvements, etc. AMD never used to do this; they just gave you the drivers and expected developers to make their games work with them. The same strategy MS used for their OS back in the '90s.

That's at least how things got to where they are now.


AMD provides this. Example:

"Fixed Issues and Improvements

Intermittent driver timeout or crash may be observed while playing Warhammer 40,000: Space Marine 2 on some AMD Graphics Products, such as the AMD Ryzen™ AI 9 HX 370.

Lower than expected performance may be observed in Delta Force on Radeon™ RX 7000 series graphics products.

Intermittent stutter may be observed while playing Marvel Rivals when AMD FidelityFX™ Super Resolution 3 frame generation is enabled."

https://www.amd.com/en/resources/support-articles/release-no...


Glad to see it. I've been 100% certain since 2022 that Nvidia will ultimately abandon the graphics market. If AMD doesn't pick up the torch, computer graphics will stagnate for at least a decade. It's already regressing.


The whole “improve a game’s performance on the driver side” thing: does AMD simply not do that at all? Or just far less?


They definitely do some of it - for example, Starfield came out with FSR out of the box but didn't add DLSS for several months. I got Starfield for free when I bought my 7800X3D, which was a nice bonus. Definitely to a lesser degree than Nvidia though.


Frankly, this always seemed like dirty hacks - either the game or the drivers don't actually comply with the graphics API, and then the drivers need to hack around that. :P


I don't work in the industry, but from what I've understood reading stuff by people who are: basically the entire industry is dirty hacks, from the top down and the bottom up.

That is not to say this is good or bad. Just that it appears common.


There is nothing magical about CUDA


AFAIK if your account is banned, Valve still lets you log in to Steam and access your existing library of purchased games. You just lose access to all the other platform features. Obviously that's their policy and they can change it anytime... but in this case, it's not inconsistent with their "nice Linux guys" persona.

