This can happen only because of a design flaw in the security architecture of Android (including L). Unlike on iOS, and just as on traditional PCs, the disk encryption key is always in memory once the device has booted, and nothing is really protected if you get hold of a device in that state. It's an all-or-nothing proposition. On iOS, different encryption keys are used to protect different classes of information. Specifically, when an iOS device is locked, the operating system discards the key that can decrypt files of a certain protection class, files that are not supposed to be readable while the device is locked. Only by entering the password when unlocking the device (or reloading it from Touch ID memory) can that key be derived again. The effect is that, for example, all your mail can be protected and rendered unreadable while the device is locked.
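The core of that design can be sketched in a few lines of Python. This is an illustrative model only, not Apple's implementation; real iOS additionally entangles the passcode with a hardware UID key inside the Secure Enclave, which software alone cannot reproduce.

```python
import hashlib
import os

def derive_class_key(passcode: str, salt: bytes) -> bytes:
    # Stretch the passcode into a per-class file key. Real iOS also
    # mixes in a hardware UID key, so the derivation cannot be run
    # off-device; PBKDF2 alone stands in for that here.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

salt = os.urandom(16)
class_key = derive_class_key("123456", salt)

# On lock, the OS simply discards the derived key...
class_key = None

# ...and the only way to get it back is to re-run the derivation,
# which requires the user to re-enter the passcode:
class_key = derive_class_key("123456", salt)
```

No lock-screen bug can shortcut this: without the passcode there is no input to the derivation, so the class key simply does not exist in memory.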
With this security architecture, no bug in the password "gatekeeper UI" can let you read the protected information once the device has locked successfully.
Android has a similar architecture with the 'keystore' service (for keys rather than files, though). Crashing SystemUI will 'unlock' the phone, but none of the keys in the keystore will be usable. Unfortunately, no apps use the keystore in practice because its design makes it nearly unusable. You're vanishingly unlikely to notice if you're using a phone with a locked keystore.
I quote from the link you posted: "...opens up access to the phone application to listen to a user's voice mails, place calls, and view contact information. Attempting to go beyond that sends users back to the passcode screen."
That's the point: what you can access is not encrypted, but lots of stuff is on iOS 9: photos, mail, etc. I'd say the difference in what you can access is staggering.
Yes. If you reboot an iOS phone and then call it without ever entering the passcode, it will display the phone number but not look it up, because contacts are encrypted. The encryption class here is "locked until first unlock", so they stay unlocked afterwards so as not to disrupt basic expected user functionality. The whole security design is still much better, as many other things stay completely locked.
I stand corrected then. Photos are probably still encrypted under "accessible after first unlock" class. Sigh. Sorry for this mistake.
The idea of having a security architecture with different classes of data still remains, so a third party app can quite easily leverage this, for instance. Unfortunately there doesn't seem to be a comprehensive list of protection classes that each app uses for its files.
Please pardon my ignorance. I'm not a security expert.
Is there any reason this can't be fixed by just copying iOS in this regard ??? I guess the question I'm asking is, this bug is entirely fixable right ???
Not trying to diminish the seriousness of it. It sounds pretty horrible. Just saying that if Android's architecture allows something like what iOS does, but it's simply unimplemented currently, that's one thing. It would be quite another thing if Android's architecture did NOT allow something like what iOS does. As I said... I'm not a security expert... but it SOUNDS like you could implement a fix that would mirror the protections provided in iOS ???
Yes. It requires serious engineering across the OS stack, though. Also, it is generally harder and less effective to do this retroactively; existing third-party apps may not switch to leverage the new APIs when they are added after the fact. This makes it hard for PC operating systems to adopt it effectively, though I really hope they do it soon.
Right now it seems the low-hanging fruit for Android would be to encrypt devices _at all_ (by default). Different classes of encryption are a luxury.
I did hear they encrypted by default on the Nexus 6. Unfortunately, it seriously affected performance and was then turned off. I guess this may be due to not having a fast hardware encryption path to and from the flash memory.
iOS encrypts even if you don't set a passphrase, and Apple's security white paper does mention having dedicated hardware to make it seamless.
If an Android device has encrypted storage, it has to ask you for the decryption password before it will boot (ie BEFORE you get to a lock screen). If it doesn't, then it's not encrypted.
> I guess the question I'm asking is, this bug is entirely fixable right ???
Of course. The problem Android has had in the past, though, is that actually getting security updates to users could be incredibly convoluted, with every vendor and network having their own slightly tweaked (and heavily branded) versions of Android.
You're right, but the KeyStore API isn't even exposed to 3P apps. AOSP, however, does use it. Maybe the Framework could use the state of the KeyStore (which would be set to "locked") to detect a breach?
There are a few ways that can and should be fixed to "fail closed". For instance, "locked" status should get stored somewhere outside the memory of any one process, so that if the system UI crashes, when it comes back up, before it displays any application, it should notice that it was previously in the locked state and go back to the locked state.
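A minimal sketch of that fail-closed idea in Python, using a file as a stand-in for whatever system-level store Android would actually use (the file path and function names here are illustrative):

```python
import os
import tempfile

# Stand-in for a system-level store that outlives any single UI process.
STATE_FILE = os.path.join(tempfile.gettempdir(), "lock_state")

def record_lock_state(locked: bool) -> None:
    # Persist the locked state outside the UI process's own memory.
    with open(STATE_FILE, "w") as f:
        f.write("locked" if locked else "unlocked")

def is_locked_after_restart() -> bool:
    # Fail closed: if the state is missing or unreadable, assume locked.
    try:
        with open(STATE_FILE) as f:
            return f.read().strip() != "unlocked"
    except OSError:
        return True

record_lock_state(True)
# ...the system UI crashes and restarts here; on startup it checks the
# persisted state and re-locks before showing any application window...
assert is_locked_after_restart()
```

The key property is the default in the error path: any crash, corruption, or missing state lands you back on the lock screen, never on the home screen.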
Jesus, why do Android engineers keep getting security so wrong so often? And they even bragged so much about how L is (finally) getting storage encryption by default, which they promptly killed after the Nexus 6 launch (because apparently they are stupid and launched the device without hardware-accelerated encryption, even though the chip supported it... ugh).
We still haven't heard anything about it being re-enabled in Android M again (or ever). Also, I remember reading about Android <4.4 (optional) storage crypto being seriously broken as well.
It's unfortunate because many people probably still believe they are getting default storage encryption. Heck, all the news stories on encryption and the FBI still (falsely) mention how Android encrypts storage by default. It reminds me of media sites saying Skype is P2P and secure against eavesdropping years after Microsoft announced it had changed its infrastructure to a centralized, wiretap-friendly model, just like everyone else.
The gap between iOS and Android in terms of security (and privacy, too) seems to only be increasing every year. If they keep this up I may finally switch to an iPhone after years of being an Android-only user.
And that's even without mentioning the horrible update problem that Android has. I've just learned to live with that by installing custom ROMs after my phone stops being supported a year into its life-cycle, which is obviously not ideal in terms of security nor is it something I should have to do to get better security.
"Jesus, why do Android engineers keep getting security so wrong so often?"
Security is hard. Really profoundly hard, not just superficially hard.
And despite the fact that nominally I'd expect to get upvotes for stating what you'd think would be the groupthink consensus, my observation is that in the field, even engineers explicitly tasked with security still often think it's really easy and they're really good at it, so there's no great need for them to budget lots of time for it. This turns out poorly. I do not toss this around lightly because it's become cliche to just fling it around without much care, but there's a lot of real, no-foolin' Dunning-Kruger effect operative in this area.
(Your test for Dunning-Kruger on this matter is: If you're told you need to secure something and you're going to be personally responsible for its security, do you A: Say "Oh, that's easy, I'll 'just'..." followed by pretty much any series of words, or B: get the cold flop sweats, regardless of how easy it may superficially seem?)
> Security is hard. Really profoundly hard, not just superficially hard.
Indeed. There is an army of naked celebrities dancing around the internet today because of an "iOS" hole that didn't even involve the device OS at all.
Picking on specific features to announce "Apple is more secure" is very much missing the point.
That said, yeah: I'm not a big fan of Android's security architecture either, though it has been improving rapidly.
This is an important factor, and maybe the fatal flaw, in Android's carrier relationships. Google is always very fast to push a fix out, but the carriers always drag their feet in getting the fix to the actual devices.
It's really not. We're talking about security architecture here (which rolls out only with OS releases) and not security bugs (which get rapid patches on all but the most negligent of OEM devices).
Having a better security architecture works (to stretch the metaphor) like immunization: past a certain adoption level you reach a "herd immunity" state where the marginal benefit to an attacker of a specific flaw drops rapidly to zero even though there are still "old" devices out there in the market in small numbers.
Basically, the market refresh cycle is still at something like 18 months, so even given that OEMs are slow new Android releases are reaching the public in large numbers fairly promptly.
That is actually a great point: the hardware refresh cycle is such that updates get picked up regularly. Also, this exploit requires physical access to the phone, which is generally considered game over in any case.
It would be beneficial if carriers could come up with updates faster however. I've seen Verizon drag its feet on more than one occasion, especially with phones that are a generation or two behind.
It's actually a bizarre thing when anybody gets security right the first time. If Apple happened to do something well, that's a huge credit to Apple. Heck, even Android's experience here is above the baseline for our industry.
Complexity is the bane of security. Android has to work on a ton of devices with a ton of variation, and that has its upsides and downsides. In this case, many OEMs ship their own lock screens that update via the Play Store, and so, for example, none of my dozen Android devices have this issue: the stock ones are already patched and the non-stock ones didn't have it in the first place. The downside is that the code has to cater to far more variations than something simpler, and that means a higher probability of security issues.
Given the scale and complexity, I would say Google has done an acceptable job at Android security, but I do think they need to be even more serious about it. And it's not like iOS (remember the lock screen bugs?) or Mac OS X (remember password-less logins succeeding?) haven't had their fair share of issues.
> and you're going to be personally responsible for its security
Well... but Android is a years-long (8 years since the first beta) effort backed by Google (55k employees) and companies like Samsung (490k employees). They might spare a few people to come up with a scheme for data encryption that works?
And really, I don't claim it's easy, and I don't claim that I personally could come up with something even halfway resembling Apple's solution. But Android seems, unfortunately, to not even be trying.
No, encryption in Android 5.0 works really well, really. It is well designed; not state-of-the-art like the one in iOS, but still leaps ahead of any other encryption implementation currently in the industry. Performance-wise, it's great too. I have a Moto X with Android 5.1, running encryption from day one, and didn't even notice any slowdown (and it uses software encryption, with no hardware support).
Android users seem to care more about benchmarks than security. Google removed encryption by default after all the complaining that followed benchmarks showing the Nexus 6 was faster without encryption. However, there is no proof that this had any significant impact in real use, and I think Google originally made encryption the default after a careful study showing that, even with a pure software implementation, it did not hurt usability on devices. OEMs like benchmarks and like to be lazy ("no encryption by default means I don't need to create a module implementing the encryption my hardware already supports! less money to spend, more profit!"), and since benchmark scores make people buy more phones, they probably pressured Google to remove default encryption because encryption "makes performance look worse" (on paper, not in real life).
If I were part of the media, I would do a blind test running the Nexus 6 with encryption both enabled and disabled. That would show whether the performance impact was significant. But most of the media prefers the easy way, using benchmarks, so we had all these Android users crying rivers that they lost performance. Seriously, wtf.
Compared to "no encryption", a S4 mini (Dual-core 1.7 GHz. 1.5 GB RAM) is almost unusable (~30 seconds to switch to an app that's stale, i.e. not one of the last few apps used) under Cyanogenmod 12.1 with full disk encryption on. Without encryption is was ~"ok" to use, not fast, but bearable.
When speaking of disk encryption the bottleneck is the CPU. In order for encryption to work without a noticeable performance hit you need a microprocessor with hardware acceleration for AES. For example the latest Intel and AMD processors have the AES-NI instruction set. Ditto for the latest Qualcomm Snapdragon SoC.
I'm not sure if hardware acceleration is enabled in Nexus 6, but the hardware is certainly capable of acceleration. I now have an encrypted Nexus 6 and the performance is fine. My previous Nexus 4 was also encrypted and after encryption the performance hit was noticeable when booting up, but after boot it performs fine.
30 seconds to switch an app that's stale sounds like there's something really wrong either with your device or with the Cyanogenmod version you've got installed, judging by those specs at least.
In modern processors the bottleneck is generally the I/O. While it is not the same thing (desktop vs phone processors), this author benchmarks OpenSSL with an algorithm similar to what Android uses, with and without AES-NI (Intel hardware encryption): https://mjanja.ch/2013/11/disabling-aes-ni-on-linux-openssl/.
Even without AES-NI, this processor is doing more than 100MB/s of encryption. Mobile storage is a lot worse than this (like less than 50MB/s in sustained sequential writes), and you have to remember this would be the worst case (since the majority of writes for common users is small files instead of big files).
Just to put in perspective: my Moto X can write around ~20MB/s when transferring files by USB 2.0. And it can still write the same ~20MB/s with encryption on.
There is Windows Phone if you only consider "industry" mobile phones; however, I was considering the whole personal computing industry. That includes Mac OS X, Windows, Linux... you get the idea.
Or do you encrypt only your phone and leave your notebook unencrypted? That would be silly.
Everything about Android is a rushed project. Rubin's team was running scared of the OpenMoko hype, BlackBerry's success in the enterprise, and Windows Mobile everywhere else. Then Google snatched it up and ran scared of Apple. And now here we are.
Android needs an 18-24 month "gap" period of re-engineering everything they rushed, but at that point you might as well just write a new OS. I suspect there is a secret second version of Android that isn't virtual-machine based and doesn't use Java, but has whatever backwards compatibility is needed to work with the existing applications. If things get too hot from Oracle, that will be the next version of Android. Switching to ART suggests Google has some portability chops here. Incremental updates like they're doing now are the path of least resistance, but considering how much more polished iOS is, I really doubt Android will ever be on that level. It's always going to be a little quick and dirty, and I think if you buy an Android product you have to accept that.
I switched to the Nexus products from my 3GS. I knew Android was a step down in a lot of ways, but I wanted the hardware flexibility it offered. Now that I can get a big Apple phone with a swype keyboard AND it works with Android smartwatches, it seems the Android value proposition keeps getting weaker and weaker.
>>Jesus, why do Android engineers keep getting security so wrong so often?
What can you expect from a company that names its version releases after chocolates, candy and ice cream? That's literally the level of maturity with which they view their own product. So no wonder security is an afterthought.
Even with the Android/PC architecture, you can have a much better design.
For example, jwz's XScreenSaver separates the graphical process (more complex and bug-prone) from the daemon. If you manage to crash the password input screen (or the screensaver itself), the daemon will just restart it.
Not true, actually. See https://www.jwz.org/xscreensaver/toolkits.html ... (I'm assuming this hasn't changed, of course, as this was written over 10 years ago... but the fundamental reasons why he was unable to write the dialog as a separate process should still remain, since they're fairly well-known and not-particularly-surmountable issues with X.)
The password input screen uses only low-level X11 calls (and not a higher-level GUI toolkit) in an attempt to reduce the library dependency list and therefore the possibility of crasher bugs. But if you do indeed manage to find a crash bug in libX11 (or a dep) that you can trigger from xscreensaver's password entry box, that will indeed unlock the screen.
Now, the xscreensaver configuration panel is written using GTK, which is indeed a separate process, and a crash in the config panel won't affect the screen locker.
That's one of the things that I find inexcusable and also rather sad: given the amount of customization that Android systems require to run on a particular vendor's hardware, they don't seem to make use of the multitude of security features present in modern processors, or even of the ability to solder on a dedicated TPM/security processor.
But I'm genuinely curious, maybe someone here on HN knows: Which of the other mobile phone systems (Windows Phone; Firefox OS; Ubuntu for Mobile; Tizen; Blackberry; ...) have a halfway decent security architecture, similar to iOS?
The whole point of Symbian OS S60 3rd Edition (aka Symbian OS 9.1) release was to introduce DRM support and a security model based on certificates for API sets.
Many, myself included, were disappointed that they didn't take the opportunity of such a breaking change in the programming model to throw the Symbian C++ dialect out of the window.
You can do that on PC by hibernating instead of locking the screen (assuming it works on the OS and hardware being used) and then rebooting instead of shutting down after hibernation.
If you do that, you'll lose 100% of your PC's functionality, whereas iOS is quite useful in locked mode. With some neat tricks, it even fetches new mail for you: basically, it encrypts each new message with a public key whose corresponding private key is itself encrypted and unavailable while the device is locked[+]. A hibernated PC cannot do this, and lots more.
[+]: technically, it generates a new AES key, encrypts the message under it, wraps that AES key under a key derived via ECDH over Curve25519 with the system public key (whose private key is encrypted and inaccessible while locked), saves the encrypted file and the wrapped key somewhere to be decrypted after unlock, and discards the AES key.
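That scheme can be sketched end to end with only the Python standard library, substituting toy primitives for the real ones (small finite-field Diffie-Hellman in place of Curve25519, a SHA-256 keystream in place of AES). The structure is the point, not the cryptography; don't use this as real crypto.

```python
import hashlib
import os
import secrets

# Toy DH parameters: a Mersenne prime, purely illustrative and NOT secure.
# iOS uses ECDH over Curve25519 here; the wrapping structure is the same.
P = 2 ** 127 - 1
G = 3

def keystream_xor(key, data):
    # Toy stream cipher: SHA-256 in counter mode (stand-in for AES).
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# Device key pair; the private half is sealed while the device is locked.
device_priv = secrets.randbelow(P - 2) + 1
device_pub = pow(G, device_priv, P)

def encrypt_while_locked(message, device_pub):
    msg_key = os.urandom(32)                    # fresh per-message key
    ciphertext = keystream_xor(msg_key, message)
    eph_priv = secrets.randbelow(P - 2) + 1     # ephemeral DH key pair
    eph_pub = pow(G, eph_priv, P)
    shared = pow(device_pub, eph_priv, P)       # ECDH in the real design
    wrap_key = hashlib.sha256(shared.to_bytes(32, "big")).digest()
    wrapped = keystream_xor(wrap_key, msg_key)  # msg_key itself is discarded
    return eph_pub, wrapped, ciphertext

def decrypt_after_unlock(eph_pub, wrapped, ciphertext, device_priv):
    shared = pow(eph_pub, device_priv, P)       # same shared secret
    wrap_key = hashlib.sha256(shared.to_bytes(32, "big")).digest()
    msg_key = keystream_xor(wrap_key, wrapped)
    return keystream_xor(msg_key, ciphertext)

eph_pub, wrapped, ct = encrypt_while_locked(b"new mail arrives", device_pub)
assert decrypt_after_unlock(eph_pub, wrapped, ct, device_priv) == b"new mail arrives"
```

Notice that `encrypt_while_locked` never touches `device_priv`: the locked device can keep writing protected files indefinitely, and only the unlock (which releases the private key) makes them readable.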
That's not as secure as hibernation, because someone who can break into the running device can at least read all your new mail; and if the implementation isn't using an ad-hoc e-mail protocol, or implemented it carelessly, they can also read all mail and probably get the e-mail username and password too.
This also only really works for software specifically designed for this mode.
EDIT: it can actually be mostly secure if the client public key is sent before locking and then the encryption is done by the server: the attacker would only learn the number and timing of new messages and their size (assuming an ad-hoc protocol and very carefully implemented software).
It's also sort of possible to do it with generic software by simply dumping HTTPS TCP traffic to disk and decoding it once the system is unlocked (although it allows a MITM to fill up your disk unless the ciphersuite allows authentication with a key that doesn't allow decryption).
Windows 9x was never secure, and I believe that screenshot was taken from Windows 98 or 98SE (hard to tell which).
Arguably this is a bug in HP's printer driver. But the fact Microsoft exposed printing to begin with was misguided.
I will say, even then, Microsoft intended for businesses and schools to use Windows NT 4.0 for this type of scenario. Unfortunately for them, 9x became too popular for its own good, so people were buying machines with it and then having them log into NT domains (which was never really a secure scenario; NT was designed for this, 9x wasn't).
It is a moving gif, and regardless of the era, that shouldn't have been a valid entry point into Windows in the first place. Hence why I am amazed that it works.
There was a small and relatively unsuccessful F1 racing team about 10 years ago, whose name I shall not say here. Internally, such teams were known as "pit dodgers".
When it came to the design of their gearbox, the work was outsourced (as much of it is in F1 unless you are Ferrari).
The primary gearbox designer did most of the work; however, reverse gear proved to be a problem. Rather than fix it, it was left to the junior designer, who couldn't solve it either and left it pretty much non-functional. This was based on the reasoning that:
a) Reverse is rarely (if ever) needed (side note: a car is disqualified if reverse is used in the pits during a race)
b) The "pit dodger" team was unlikely to last the entire race anyway, so the gearbox didn't need to standup to a full race duration anyway. The effort put in to designing the components (i.e. precision) can be reduced to match the expectation of success and failure rate of other third party components. Design and production time can be saved and extra profit made.
Therein lies the lesson of the emergency call feature, or any other feature that designers feel doesn't really require much attention because it is rarely used. These features are often handed to junior developers and engineers, because people have a tendency to deem lesser-used features unimportant.
Which is fine, of course, until you have an emergency and you desperately need to make that emergency call because your life, or someone else's, depends on it. Or you need to reverse out of the way of a damn F1 car coming towards you at 150mph down a straight while you are sitting in the middle of the track. WTF, reverse doesn't engage... oh shiiiiiiit...
TLDR: Features that are very infrequently used are not always the features of least importance.
A Formula 1 car is disqualified if its reverse doesn't work (and this is the case for quite some time now). Not sure which team you're referring to, I mean Minardi was sold to Red Bull in 2005, so it has to be before that ;)
You are right. It has to have reverse and that can be checked at scrutineering, but it just has to eventually reverse. It doesn't say anything about having to do it quickly, or effectively:
> All cars must be able to be driven in reverse by the driver at any time during the Event.
Emergency calling is one of the features that must work for a device to pass certification (numbers: 911, 112, and a third one I cannot remember). In that sense, this is not an unimportant feature.
However, AFAIK, not being able to make regular calls without using a password isn't part of certification testing.
Safety issue, I would assume. Many pit crew members have been killed over the years in motorsports, so precautions and rules have progressively been put in place to prevent that.
How ironic that this bug affects the only really secure locking method. Neither the pattern nor the PIN is really safe for protecting a device, as both are easily snooped.
Security is not an absolute, and there is no such thing as "really/totally/completely secure". The different locking methods allow you to strike a balance between security and convenience that's appropriate for you. All of these are useless against a determined attacker, for a given level of 'determined'.
I don't think he was saying it was "absolutely" secure or totally or completely. Just that it was the "more secure" method, which is not an absolute. There may not be an absolute in security but there are definitely differences in types of security.
Pattern lock is insecure in the most analog way possible: it's really easy to read the fingerprint smear on the screen, and unless you assiduously wipe your phone screen with a lint-free cloth and a bit of isopropyl alcohol after every unlock, chances are you're at least somewhat vulnerable.
Well, internally the pattern password is literally stored as numbers, just like the PIN. So if you go top right, to middle right, to middle, to bottom middle, your pattern will be saved as 3658. It's less secure than a PIN because, with a pattern lock, a 3 can only be followed by a 2, 5 or 6, and no number can repeat, so it's easier to crack than a PIN.
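Under the simplified adjacency model described above (each dot can only be followed by an unvisited neighbouring dot; real Android pattern rules also allow longer jumps once intermediate dots are visited, so the true space is somewhat larger), it's easy to count how much smaller the pattern space is than the PIN space:

```python
# 3x3 grid, dots numbered 1..9 like the Android pattern lock.
def neighbors(dot):
    r, c = divmod(dot - 1, 3)
    result = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= nr < 3 and 0 <= nc < 3:
                result.append(nr * 3 + nc + 1)
    return result

def count_patterns(length, path=()):
    # Count self-avoiding walks of `length` dots under this model.
    if len(path) == length:
        return 1
    choices = neighbors(path[-1]) if path else range(1, 10)
    return sum(count_patterns(length, path + (d,))
               for d in choices if d not in path)

# Far fewer 4-dot patterns than the 10,000 possible 4-digit PINs.
print(count_patterns(4), "patterns vs", 10 ** 4, "PINs")
```

For example, `neighbors(3)` yields exactly `[2, 5, 6]`, matching the constraint described above.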
My pattern left a trail on my screen pretty much telling you my password. You'd have to guess on the direction a couple times, but you'd get it fairly quickly. I think the pattern lock is really just a way to keep kids off your phone. Greasy fingers on glass are way too telling.
Oh, no doubt. Realistically I don't expect my phone to be secure from the lockscreen, it's more to keep pests out. Anything secure is from full disk encryption, passworded SU, etc. As far as person-looking-over-you protection, they're not bad.
Is it a buffer overflow? I suspect the OS is killing the LockScreen task due to memory exhaustion, given the length of time it takes to trigger the bug.
Previously, any crash in the Keyguard (which used to reside in the same process boundary as the system_server) would have taken down the OS, and a (soft) reboot would have ensued. Now that the Keyguard runs in its own process, any crashes now only gets rid of the Keyguard window and exposes whatever window is behind it (usually a launcher window).
I wonder if the actual fix is to have the "watchdog" ping the Keyguard's process for a heartbeat, like it does with other services within the system_server... That way, on any flaw / crash in Keyguard, you essentially lose access to the OS too (until it comes back up from the reboot and starts from a clean slate).
NT has a neat feature for essential processes: you can tell the kernel, "if I ever exit, for whatever reason, panic the system and reboot immediately". There's little room for error with a system like that. It wouldn't be that hard to implement this feature in Linux too.
(Of course, you need to have special privileges to mark your process as critical.)
In Linux, if PID 1 ever goes down the kernel panics. You could put a watchdog in your PID 1 that will take it down if the timer isn't tickled for a certain period of time.
Systemd has a couple of nice features around this: init can open the hardware watchdog and ping it as required, so if init ever stops then the machine will reboot. At the same time, services can be set up to need to write to their own watchdog. So if a service dies, either init will notice and take appropriate action, or init will also have died and the hardware watchdog will reboot the system.
Also -- and this has been useful to me on a laptop -- init will configure the watchdog as a last-resort to kill the system when shutting down.
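Concretely, the per-service side of that looks roughly like this in a unit file (the service name and timeout are illustrative); the service must call `sd_notify(0, "WATCHDOG=1")` more often than `WatchdogSec=`, or systemd treats it as hung:

```ini
[Service]
# Hypothetical lock-screen daemon (illustrative path).
ExecStart=/usr/local/bin/lockscreend
# The service must ping systemd (sd_notify "WATCHDOG=1") at least every
# 30 seconds, or it is considered hung and the Restart= policy applies.
WatchdogSec=30
Restart=on-failure
```

The init-level hardware watchdog is configured separately, via `RuntimeWatchdogSec=` in /etc/systemd/system.conf.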
This is the number one problem I have with Android. Once I buy an Android phone, I pretty much expect 6 months of updates. Maybe longer if it is a really big bug. Sure, some OEMs have made "promises" to provide 2 years of updates, but those updates don't come out all that quickly, often arriving several months after a fix hits the main AOSP line.
If you want an iOS like experience with Android you have to buy a Nexus phone of which the Nexus 6 sucked balls due to its size.
I wish Google hadn't killed the Play Edition phones. I would love a choice of phones running stock Android, maybe with a few extra, optional, apps from the OEM similar to what Motorola does. A Galaxy S6 running stock Android with a guarantee of update within 2 weeks of it hitting a Nexus would be my perfect phone.
Just select your brand based on their ability to update phones for a couple of years past purchase. Nexus phones are kept up to date. Motorola too. LG seems to be on that track, but history will tell. Samsung has also pledged to update for two years.
That is false. As long as the OS is updated, it does not matter whether it is the OS developer, or the equipment manufacturer, publishing the update.
I wonder, if Apple delegated OS update publishing onto third parties, if you'd keep your position. The Jobs distortion field extends beyond the grave, it seems.
Yeah, Linux in general doesn't have a driver "framework", it has kernel modules, and the in-kernel API is not stable, by design - the position is that keeping a stable API is extra complexity if you want to continue improving the kernel.
If you're a driver developer, there are essentially three options: (1) submit the driver to the kernel tree, in which case (if it gets accepted) the developer making the API change will update your module, (2) keep pulling the kernel tree locally and update the module yourself and (3) simply stick with a kernel version.
Nexus phones get security updates every few weeks now. I know I got one this month and one in August on nexus 5. Apparently Samsung said they'll do the same - not sure if they did.
OS X had a curiously similar screensaver vulnerability back in the early days.
On 10.2.6 and earlier, attempting to submit a very long string (>65,537 characters if memory serves) into the screensaver password prompt would crash the screensaver and drop to the desktop.
Making exploitation much easier was how around the same time Cocoa widgets got emacs-style bindings like ctrl-k, ctrl-y, ctrl-a. Through combining these shortcuts an attacker could quickly and exponentially increase the input string length.
I never saw it documented online, but I remember applying this same trick to loginwindow and OS X dropping into the so-called "secret console mode": a full-screen terminal, Linux-style.
I know you can just put a limit on it, but a string much less than 1MB in length entered into an edit field can crash a device with probably 1GB or more RAM? If there's a fixed-length buffer somewhere that's being overflowed, it's absurdly large for a password; and if there isn't and it's actually dynamically allocated, that's still pretty sad. I just can't help but ask "what were the people who designed and wrote this code thinking?"
This bug just happens to have been brought up in a security context, but if other text edit fields elsewhere in the OS and apps will cause crashes too when fed long strings, that's not just a security issue.
Yeah. It doesn't work on my Note 3. Samsung disabled copy & paste in the Emergency Call dialer.
The author clearly didn't do his research, simply assuming that the vulnerability affects all Android devices:
> But there's no telling when it'll reach Android devices made by Samsung, LG and others. Blame the Android's fractured updating system, which is slowed down by phone manufacturers and cellphone network carriers.
"Already patched" isn't really as meaningful on Android as it is on, e.g., iOS where the average user can expect to (and frequently does) update their operating system.
How many phones are still vulnerable to Stagefright?
I wonder if my kids found something similar to this on their Kindle Fire Kids Edition. I have parental controls enabled, but sometimes the tablet is still running after the predefined cut-off time. I remember seeing one time that the PIN entry had popped up with a really long string of numbers in it. They are very, very young and wouldn't be able to Google something like this, so it was trial and error if they found it.
Did you read the article? They'd need to type somewhere in the region of 160,000 characters to get past the lock screen. If you're not intending to do so (via copying and pasting repeatedly), it seems unlikely you'd manage.
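The "exponential" growth from repeated pasting is easy to quantify. Assuming each round of copy-all-then-paste roughly doubles the field contents (a simplified model of the attack), even ~160,000 characters takes only a handful of deliberate steps:

```python
def doublings_needed(start, target):
    # Each copy-all/paste round roughly doubles the string length.
    length, rounds = start, 0
    while length < target:
        length *= 2
        rounds += 1
    return rounds

# Starting from a 10-character seed typed by hand, reaching the
# ~160,000 characters mentioned in the article takes only:
print(doublings_needed(10, 160_000))  # 14 rounds
```

Which is why the attack is practical for a deliberate attacker at the keyboard, but essentially impossible to stumble into by random mashing.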
It's interesting how most Nexus devices only have a LMY48M image https://developers.google.com/android/nexus/images?hl=en but the razorg skipped that and has LMY48P (which I flashed yesterday, phew!). Also, the 2012 Nexus 7 is still stuck at LMY47V.
What I can't understand in the first place is why copy & paste is active at all on a password screen that is supposed to lock the whole device!
Of course, deactivating copy & paste would not be a valid fix for this bug, but the paste feature could also potentially leak user data to unauthorized persons, and the copy feature could accidentally reveal the password in places where it doesn't belong.
A password manager for a locked device? Does that make any sense?
I do not have an Android phone or a password manager, but I still do not get the point here. The device is locked, so how do I access the password manager?
Those attackers could instead install a hardware-based touch screen logger, or implement a similar attack in software and flash it directly to your phone's storage. It's more complicated, but it's still true: once someone has physical access, all bets are off.
Yes, a device which has been accessed physically shouldn't be trusted anymore (theoretically) but I was talking about the case of a booted device being stolen by an attacker, not the case of one having been returned.
In practice, if you can detect the intrusion and the device has a secure boot system: wipe the device, issue it new keys, and re-encrypt all data to the new keys using some backed-up cold-stored keys.
Disk encryption is supposed to be the security measure that thwarts that, to an extent. The attacker can probably use your device, but if you have it set up correctly, they shouldn't be able to read your data.
I tried this on my phone last week when I read the article. One, I can't cut/paste on my Note 4 (Lollipop) at the emergency password screen. Two, if you even try to cut/paste, or take longer than a second, the phone locks and you have to start over on the password.
Maybe it's because I use the fingerprint reader and the password is only a fallback. On my phone this hack seems virtually impossible, after 10 minutes of trying really hard to get it to work.