Hacker News
The strange case of virtual machines on telephones (apenwarr.ca)
67 points by MaysonL on Aug 26, 2010 | hide | past | favorite | 60 comments


Straw man alert!

I have multiple arguments on why I prefer virtual machines (like the JVM), and none of them is security ...

     - portability between hardware architectures
     - portability between operating systems (mostly)
     - memory management that doesn't suck
     - and yes, decent speed in return for a high-level programming language
Also, in time Java gained an extra advantage ... open-source libraries that are fucking top-notch, in some cases with no replacement in the "native" department.

Regarding Android versus iPhone ... that iPhone emulator doesn't run on anything else other than OS X, or does it?


There are pros and cons to each of your points. It just so happens Apple chose almost the exact opposite when compared to Android.

     - portability between hardware architectures
Android went with the VM solution. Apple chose fat binaries when going from PPC to x86. They also use fat binaries for ARMv6 and ARMv7.

     - portability between operating systems (mostly)
Apple handles this at the OS, framework, C language, and compiler (LLVM) level. The Unix bits of OS X and iOS are essentially the same. CoreFoundation is essentially the same across iOS and OS X. Cocoa and Cocoa Touch share the same design ethos as well as a number of classes.

If you assume Unix then a VM isn't that big of an OS portability win.

In general, Apple doesn't believe in cross-platform Apps. They believe in trans-platform Apps. Every single pixel in a Mac, iPhone, or iPad App should be pixel perfect. Java and a VM don't really give you any benefits with respect to this.

     - memory management that doesn't suck
Objective-C 2.0 on Mac OS X has garbage collection. I imagine garbage collection will come to the iOS world when Apple believes that the performance tradeoffs make sense.

In general, memory management is hard. Memory leaks are trivial to find for an experienced programmer with Instruments. Leaks are tough for a novice programmer. But hey, life is tough for a novice programmer.

Memory management involves more than just garbage collection. Running out of memory is a big problem on handheld devices. A VM can compact memory but you can't do anything about VM memory overhead.

     - and yes, decent speed in return for a high-level programming language
I'd argue that a message passing language like Objective-C is actually just as high-level (if not more) than a method calling language like Java.


memory management that doesn't suck

You can have GC and AOT compilation, as the post says in a footnote.

decent speed in return for a high-level programming language

High-level languages can be AOT compiled.

Java gained an extra advantage ... open-source libraries that are fucking top-notch

So maybe Android should have used Java, but compiled it direct to ARM rather than burning users' cycles on an interpreter.


> So maybe Android should have used Java, but compiled it direct to ARM rather than burning users' cycles on an interpreter.

It doesn't have to be an interpreter, the processor itself can be optimized for Java (or whatever bytecode you'd like), a point the article also missed.

This has been going on since 1996 ... http://bwrc.eecs.berkeley.edu/CIC/announce/1996/java-procs.h... (and the concept itself was first tried with the Lisp machines I think).

The difference between high-end phone makers, like Apple, and the low-end phone makers ... low-end phone makers cared and still care only about price, not about how snappy those Java apps run.

Again ... the article sets up a straw man.

Desktop Java applications are slow and ugly, but show me an open-source IDE that can compete with the breadth and depth of Eclipse / IntelliJ IDEA and their plugin ecosystem.

There's some merit to Java in that ... open-source developers would rather spend their time working on the features they want, rather than hunting down memory-leaks or deal with all the shit that cross-platform development brings.


> It doesn't have to be an interpreter, the processor itself can be optimized for Java (or whatever bytecode you'd like), a point the article also missed.

This is slower than compiling it to a more easily implemented ISA.


show me an open-source IDE that can compete with the breadth and depth of Eclipse

... Eclipse?


AOT in many instances is less performant than JIT compilation...


I doubt it, but, regardless, AOT burns AC power once, on some development PC. JIT/interpretation burns power forever from a tiny LiPoly cell that barely lasts 10 hrs for many people. And with 160k new devices per day, I wonder how many unnecessary tons of CO2 it produces each year?


Funny, I think mobile phones are one of the few places where VMs actually make a whole lot of sense, given that you'll need to keep your applications compatible with whatever CPU happens to run generation 'X' of the phones out there, today, and more importantly, in the future.

This way, manufacturers are not restricted by backwards compatibility. All they have to do is guarantee that the VM will work as advertised and in a compatible way.

All the apps are then automatically available for the new processor platform.


Yeah, those phones keep switching processors. They used to use ARM, but then they switched to, uh...

Processor portability? YAGNI.


Ah, but Android isn't just phones!

Settop Boxes (MIPS): http://www.mips.com/android/ (There's also a new MIPS chipset aimed at smartphones: http://www.h-online.com/open/news/item/Android-mobile-device...)

Netbooks (Intel): http://liliputing.com/2010/08/toshiba-ac100-android-netbook-...

And then there is the whole Chinese ecosystem of Android (and Android forks?) on Loongson processors. I don't know a lot about this, but here's an example: http://www.netbooknews.de/12409/lemote-open-source-netbook-m...


Just because they are ARM does not mean they are the same.

With iPhones, you must build for ARMv6 and ARMv7. With native code on Android, the situation is the same: the G1/myTouch is, CPU-wise, a different ARM than the Nexus One, which is different again from the Galaxy S.

Bytecode solves this problem nicely now, as in You Needed It Yesterday Already.



I'll believe this when it actually ships. Also, there's a kind of karmic justice to see Intel on the losing side of an instruction set monopoly.


There's a good reason for VMs (like Java/Dalvik) on mobile phones: different processors. Google can't make sure every Android device runs on the same processor, and Dalvik bytecode runs unmodified regardless of what processor is in the device.

Native binaries work for Apple because all iThings use ARM and that won't change anytime soon.


The author claims this is not a legitimate reason: "Just ask any Mac developer". I'm assuming here he means that compilers can solve this problem for you (and hence don't incur a run time performance tax).


If you've ever written cross-platform code, especially between little and big endian machines, or even worse, different 64-bit implementations, then you know that compilers can't even come close to solving everything for you.

Cross platform code needs lots of thought and hard work. THAT is why VM languages are nice.
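To illustrate the parent's point with a sketch (names and class are mine, not from any project discussed here): on the JVM, fixing the wire byte order during serialization is a one-liner via ByteBuffer, so the same bytes come out on little- and big-endian hosts alike.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Minimal, hypothetical sketch: pin the serialized byte order explicitly so
// the output is identical regardless of the host CPU's native endianness.
public class EndianSketch {
    // Serialize a 32-bit int in big-endian ("network") order.
    static byte[] toWire(int value) {
        return ByteBuffer.allocate(4)
                .order(ByteOrder.BIG_ENDIAN)
                .putInt(value)
                .array();
    }

    // Deserialize, again with an explicit byte order.
    static int fromWire(byte[] wire) {
        return ByteBuffer.wrap(wire).order(ByteOrder.BIG_ENDIAN).getInt();
    }

    public static void main(String[] args) {
        byte[] wire = toWire(0xDEADBEEF);
        // Prints true on any host architecture: the wire bytes are fixed.
        System.out.println((wire[0] & 0xFF) == 0xDE && fromWire(wire) == 0xDEADBEEF);
    }
}
```

In C you'd do the same with htonl/ntohl or manual shifts; the difference is in how easy it is to get wrong, not whether it's possible.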

As for JITs, yes, they're getting much faster and much better at recognizing code to optimize. Then again, so are static compilers. And the two technologies are slowly merging: JITs are getting ahead-of-time compilation, and static compilers are getting profile-directed compilation.

You just have to ask yourself, is a 40% boost in performance worth hours, possibly weeks, of rewriting code for a different platform?


You don't need a VM to solve endian issues. This can be handled at the framework or library level. For example, the NSCoder class handles most of this for you in the OSX/iOS world. Conform to the NSCoding protocol and endian issues during serialization are handled for you.


Many apps and libraries are ported, not written from scratch. You may not have an option to "conform to the NSCoding protocol", because you already have existing code, written before Apple Era.


It is actually a problem on Macs: there are entire libraries of applications that will never run on any new Mac. PPC/Classic is not supported, PPC/OSX may or may not run, depending on the app. Forget 68K/Classic.

(Yeah, I should have bought PC versions of Lucasfilm games, not the Mac ones.)



You can't run them on ScummVM?


And you just made the argument for using VMs ;-)

Wouldn't it be nice to have a ScummVM that can run not only games but real applications?


Yes, in this case, that's exactly what I do.

But the point is that I shouldn't have to rely on a group of volunteers to preserve compatibility. For Lucasfilm games it works; for others it doesn't (another classic game, Diablo II, has a PPC/Classic installer and a PPC/Carbon update, so it cannot be installed on current Macs). It is not just about games either: how many people were satisfied with Photoshop 7 (PPC/Classic) but had to purchase a newer version just to stay compatible with their platform? Apple specifically has had quite a lot of ABI breaks in its history.

But back to the article: it claimed that architecture-specific binaries are not a problem. I think, and many people agree, that an architecture-independent binary is a good thing and provides real benefits not just to developers (fewer builds and reduced support) but to end-users too: compatibility with a wide array of devices. You can develop for a Snapdragon/Hummingbird (ARM) device and the same binary will run on Moorestown (IA32) or some random Freescale (PPC) too.


Great ... so you're gonna build a dozen binaries (or more) and upload all of them to the App Store?

That's rich.


Actually, the compiler would output a single duodecimal-fat binary. (Analogous to quad-fat binaries from the NeXT days.)


On a phone that has limited storage space?

And there's another thing ... with Android I'm not sure if it will run correctly on a "Samsung Galaxy S" if I've only tested my app with the Nexus One, but at least I know it will execute.

If you're reuploading an update for Samsung, it will surely not be one about binary-compatibility.


iTunes could perform liposuction (seriously, `man 1 lipo`) on the app bundle when transferring to the device.


But how does the compiler create binaries for CPUs that don't even exist yet?


He is wrong.

While ARM is a good target, there are many variations, with different feature sets, added instructions and JVM acceleration, that make for really different targets (much like x86 and amd64) that he would have to upload and test independently. And I am only talking about ARM, which is merely the currently dominant mobile platform (and wasn't dominant a couple of years ago). One may also consider x86 (lpia, maybe) and its own endless SSE variations.


Universal Binaries have made multi-architecture deployment a solved problem since the NeXT days.


Really? They did ARM7 builds at the time?


While it's tempting to pick minor nits in this post, the only appropriate answer is this: so what?

For server-side web applications, VMs (and the JVM in particular) have already won. Sorry. They've brought industrial-strength GC, JIT compilation and multi-threading to the mass market.

As for phones, I really doubt whether Android's use of a VM will have anything to do with its success. There are many factors which will be much more important, like fragmentation.


Wow, I hate it when someone writes a scathing commentary on Java and I have to say they are completely wrong.

Java is still much slower than native code, and you can see it clearly just by looking at any app

Let's see some numbers. Oh, here are some:

http://shootout.alioth.debian.org/u32/benchmark.php?test=all...

2x difference. Yeah, it's slower, but do you really notice the difference between 2ms and 1ms when your web page loads? Didn't think so.

With people regularly writing server-side apps that handle millions of users a day in languages whose benchmarks run 50x slower than C++, and those apps working fine, it's not really a stretch to pick a language that runs 2x slower than C++ on a device that doesn't even do anything computationally intensive. ("Received 'incoming call interrupt'. Display something pretty on the screen." Guess what, not CPU intensive!)


> " Yeah, it's slower, but do you really notice the difference between 2ms and 1ms when your web page loads? Didn't think so."

But that's not what happens in real life. A 2x performance difference means 1000ms instead of 500ms to load a page - and that is noticeable. It's also the difference between 30 fps in a game vs. 15 fps - that is really noticeable.

The list goes on. A 2x performance penalty is huge. For desktop applications, our hardware is so ludicrously overpowered for what we do with it that this penalty largely doesn't matter (as you said, 1ms vs. 2ms), but in mobile land, where hardware constraints are still very, very real, this is absolutely still a critical issue.

Apple's platform is by far the most responsive out of all of the current smartphones, and I think it'd be foolish to discount native code as one of the primary factors in this. The responsiveness is also part of what attracts users - people want their machines to work on human time, not slower-than-human time.


but in mobile land, where hardware constraints are still very, very real, this is absolutely a critical issue still

A machine with a 1GHz processor, half a gig of RAM, dedicated video processing hardware, dedicated audio compression hardware (the GSM modem, etc.), and so on is "constrained".

The premise of the article is that it's impossible to achieve reasonable performance on a mobile device while using a virtual machine. This is obviously false, as Android performs wonderfully on my phone. A call comes in, the "slide to accept" action works smoothly. I visit a web page, it loads as fast as the network can give it to me.

Apple's OS may be more responsive, but I'm not really sure about that. The browser feels the same. Maps feels the same. Receiving a call feels the same.

We've passed the point where every bit of performance matters. Now, it's about features -- let me know when I can use irc over ssh on the iPhone or use Google Voice to make calls from the native Dialer application. "Slow" VM or not, my Android phone can do that stuff, but the iPhone can't.


>> We've passed the point where every bit of performance matters.

We really haven't. In a twitter client? Sure. In a real-time game? Hell no.

Don't assume that all software is the same, or that it will never change (See Bill Gates's famous '640k is enough memory for anyone' comment).


I've used iPhones and my HTC Incredible seems faster and more responsive.


I haven't used an iPhone enough to really know for sure. But the premise that "using a VM will make it impossible to have a usable phone" is obviously false.


My understanding regarding iOS responsiveness is that it leverages the GPU for many UI elements while Android does not, relying instead on the 2D Skia engine.

source: http://code.google.com/p/android/issues/detail?id=6914


     A 2x performance difference means 1000ms instead of 500ms to load a 
     page ... It's also the difference between 30 fps in a game vs. 15 fps - that 
     is really noticeable.
You don't know what you're talking about ...

In web apps it really depends on where your bottleneck is. A 2x performance difference could mean just a 5% difference in response time.

On the other hand, go ahead and try handling 100,000+ connections on a single server from C/C++, like these dudes right here ... http://amix.dk/blog/viewEntry/19456

Of course it can be done, but you'll need to read a whole book about it, and at the end of the day your app also has to do something useful other than returning "Hello".

And as game development goes, game developers have been using high-level languages for the business logic for years, including Lua, Python, Java and C#.

     The responsiveness is also part of what attracts users
True, but I'll bet I can get better responsiveness in Java than you can in C/C++.


On the other hand, go ahead and try handling 100,000+ connections on a single server from C/C++, like these dudes right here ... http://amix.dk/blog/viewEntry/19456

This doesn't really help make your point. With libev or libevent, it's very easy to do this in C or C++... or Perl, or Java, or Python, or Ruby. The OS does all the heavy lifting and the OS is native code.
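To make that concrete with a toy sketch (class name and structure are mine): the same readiness-notification machinery that libev/libevent wrap (epoll, kqueue, etc.) is exposed on the JVM as java.nio's Selector, so the high-connection-count trick is not C-specific.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Minimal, hypothetical sketch of one iteration of a readiness-based event
// loop in Java NIO. A real server would loop forever and register each
// accepted connection back with the selector.
public class SelectorSketch {
    // Returns true if the OS reported a pending connection on our socket.
    static boolean pollOnce() throws IOException {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            // Poke the server once so the loop has an event to report.
            try (SocketChannel client =
                     SocketChannel.open(server.getLocalAddress())) {
                selector.select(2000); // blocks until readiness or timeout
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {
                        return true; // real loop: accept() and register here
                    }
                }
            }
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(pollOnce() ? "accept ready" : "timed out");
    }
}
```

Whether this scales to 100,000+ connections is then a question of OS tuning and application design, not of which language sits on top of the poll call.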


In real life, the way applications are programmed usually has far more to do with speed than the language they are programmed in.


I think an awful lot of the code run on Android is native code wrapped in some JITed API calls. I think the parts of the browser that do the heavy lifting are all native, so this might not be the best example to use.

For the differences in responsiveness between iOS and Android look elsewhere.


> Let's see some numbers. Oh, here are some

Those are not numbers for "virtual machines on telephones".

They are numbers for Java HotSpot(TM) Server VM on a quad-core machine.


I stopped reading as soon as he trotted out the "JVM is slow" line. If you still believe this, you are ignoring reality or arguing from something some guy told you ten years ago.

Yes, you can argue that in some cases, you can get about a 2x speedup with native code, or more by using hardware features not exposed in java (stream processing), but that's not relevant to the argument this guy appears to be making.


Yeah, but I tried opening Eclipse and it took a while for my 3400 RPM disk to read it into memory. Since Eclipse is written in Java, Java is slow!

Incidentally, Visual Studio and KDevelop are native code, and they take a while to load too. Could it be that big bloated apps are big and bloated, VM or not?


Not only is Eclipse slow, but so is pretty much every Java program that I've tried. Also, a lot of code in Visual Studio is managed, especially in the UI.


Actually, the JVM is slow on startup. It has to decide whether to use the full HotSpot optimizer or something lighter for shorter runs (HotSpot won't buy you anything for a short-running app, e.g. cat'ing a file).

Given that many phone apps are going to be started, looked at and shut down within a few seconds I don't think the comment is completely out of line.


Dalvik is different.

For example, Dalvik bytecode can be adapted to match host endianness after being installed.

Running JVM bytecode, which is designed to be big-endian (SPARC was big-endian), requires some adaptation when executed on little-endian machines (x86), especially when constants are loaded from the constant pool; it seems that the JIT compiler takes care of it. However, I don't know how much this impacts JIT compilation speed.

Furthermore, Dalvik didn't even have a JIT until Android 2.2.

I assume this startup-latency issue was taken into heavy consideration when designing the Dalvik JIT. Can anyone point to any resources about this?


Lots of straw men, lots of flawed premises. Google didn't choose Java because of VM security, they chose it because everyone on the planet knows it. Can the same be said for Objective-C?

But every step of the way, they're going to have this giant anchor of [UniConf] Dalvik tied around their neck, and Apple won't

Kinda like... Objective-C and a limited library?


It's not clear that Objective-C is hampering developer uptake of the App Store.



I always thought Java's slowness is due to the scads of layers of abstraction you have to go through to get anything done. You can't do anything without hitting a mile of stack trace.


The author is clearly correct and yet he is wrong because he asked the wrong question.

How fast it runs is not as important as "is it fast enough for the customer" and "will I be able to produce this at a reasonable price?"

Everything in engineering is a trade-off.

A VM in a puny chip is the price one pays for a developer audience, to build a thriving ecosystem of apps so that consumers will want to buy the device.


A VM in a puny chip is the price one pays for a developer audience, to build a thriving ecosystem of apps so that consumers will want to buy the device.

Too bad Apple didn't pay that price, because otherwise, they might have had consumers and a thriving... oh, wait...


Apple already had a toolchain and developers before iPod Touch arrived. It made sense for them.

A newcomer with no audience does not have too many choices. The world simply has more developers who program to JVM or CLR.


The author completely devastates their core point when they say:

"Now I've told you why Android's use of a Java-like VM was demonstrably wrong (Apple demonstrated it) from the beginning"

What was demonstrated, the author claims, was that Android chose "Java" for security, where the iPhone demonstrates that native can be secure.

But...Android didn't choose Java for security. You have the full ability to use native code in your apps, courtesy of the NDK, and of course Android runs apps in an isolated process (using the inherent security functionality of Linux). Apps can't stomp on the system or walk outside of the lines and allowances granted them, Dalvik engine or not.

While you still need a Java main, you can build almost all of your app in the NDK if you really want to. But pragmatically, 99% of a typical app is a light veneer over the SDK (itself usually high-performance native code), and in such cases native or not makes absolutely no practical difference. Where you need to go native for specific cases, go native.

So why, then, does Android use Dalvik? Probably because they intend for the platform to be usable on a heterogeneous array of platforms -- you can run Android on your PC, for instance, and upcoming smartphones may feature the Intel Moorestown x86 mobile processor. By encouraging Dalvik, the platform, specifically the application portfolio, is freed from being tied to a specific processor architecture. The author tries to defuse this obvious advantage by saying that such a change is "just a recompile away", but that is simply absurd -- when the first Moorestown phone comes out, off the bat it will be able to run the majority of market apps with no change at all, including those that used the NDK (as the Android norm is to conditionally load native modules, using a managed fallback if the native one isn't appropriate).
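That conditional-native-load norm boils down to a try/catch around System.loadLibrary. A sketch (the library name "fastcodec" and these method names are hypothetical, not from any real app):

```java
// Hypothetical sketch of the "native module with managed fallback" pattern.
public class NativeFallback {
    static final boolean NATIVE_AVAILABLE = probeNative();

    static boolean probeNative() {
        try {
            System.loadLibrary("fastcodec"); // hypothetical .so, per-ABI build
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false; // no build for this CPU: use the pure-Java path
        }
    }

    static String decode(byte[] data) {
        return NATIVE_AVAILABLE ? nativeDecode(data) : javaDecode(data);
    }

    // Only ever called when the native library loaded successfully.
    static native String nativeDecode(byte[] data);

    // Portable fallback that runs on any architecture Dalvik supports.
    static String javaDecode(byte[] data) {
        return new String(data);
    }

    public static void main(String[] args) {
        System.out.println(decode("hello".getBytes()));
    }
}
```

On a new architecture with no native build, the app keeps working through the managed path, which is exactly why a Moorestown phone could run most market apps on day one.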

This is all ignoring that Android has a far more granular, and I would say more robust, security system than the iPhone: recall that the iPhone is where someone got an app approved that was ostensibly a "flashlight" app but in actuality was a tethering app, which absolutely demolishes any illusion that the review process is any sort of security check. But that isn't necessarily part of the managed/unmanaged argument (because it has been shown that that is a misdirection to begin with).


I thought the NDK just interfaces to the Android Java API, so there would still be a lot of Java code in your app?


The primary API was provided in Java, so yes, from the NDK you often have to step back to Java-world momentarily. This is simply a matter of time, however (given that Dalvik was their primary app runtime for other reasons) -- Google has been pushing more and more functionality directly to the NDK (e.g. 2.2 gives you the ability to fully use OpenGL from native code).

However that is neither here nor there. You have the full ability to fall to native, to go hog wild with memory references and uninitialized pointers and stack overflows and trying to break free of the box you are placed within (the process boundaries). You have the ability to include C code from other projects for codecs and processing, etc.



