BSides Zurich – Nail in the JKS coffin

On Saturday I was happy to speak at the fabulous BSides Zurich about the Java Key Store topic. You can find my slides “Nail in the JKS coffin” as a PDF here. It was my second time at a BSides format and I really like the idea of having a short talk and then some more time to discuss the topic with interested people. I also included the “after the presentation” slides we used for roughly 50% of the discussion time. I hope you enjoyed the talk and I’m looking forward to hearing some feedback. Although it was sold out, you should definitely come next year; it was one of my favorite public conferences.

cheers,
floyd

Android Nougat’s worst anti-security mechanism

If you are a pentester like me, you do mobile application reviews on Android. One of the most important things to check is the server API. On the other hand, we might also want to see what possibilities a server has to influence the Android app with its responses. For both, the easiest and most straightforward method is to do a Man-in-the-Middle attack in the lab and look at the network traffic. How do we do this if the mobile app uses TLS? Easy, just install a user CA certificate.

Before Android 7 that was a good and straightforward solution. There was a nag screen showing up in the notifications every time you started your phone (which was already a little funny), but it worked fine for everyone. However, starting with Android 7 it no longer works: I tested it, and the official announcement about this user-added certificate security is here. So let’s look at this new “security” feature of Google’s Android.

First of all, who is affected? I think only the defender side has to jump through this hoop, because every attack vector I can think of is ridiculous. For a start, a user would need to fully cooperate to let an attacker exploit this. As Android does not open the security settings automatically when you download a certificate (like iOS does), an attacker would have to convince the user to go to the settings dialog, go to security, scroll down, tap on “install certificate” and choose the correct file from the file system. Let’s say an attacker sets up a Wi-Fi access point and forces the user to do this, otherwise the user won’t get Internet access. This is the only scenario I can even think of where a user might at all consider installing such a certificate. You might say that can happen with non-technical users, but then why don’t we just add a big red warning that this is probably the worst idea ever? That would totally suffice in my opinion. If a user is stupid enough to install an unknown CA despite the warnings, everything is lost anyway. That user will also type all his passwords into any form that looks remotely like a known login form the attacker provides. Let’s also consider corporate Android phones. I can understand that administrators don’t want their users to decide on such a security-critical topic. But why doesn’t Android just implement an administrator API rule that disables installation of user CA certificates and deletes all already installed ones on managed phones?

Secondly, why the hell does Android think that a user-installed certificate is less trusted than the hundreds of preinstalled, nation-state-attacker-owned CAs?

Android, you are raising the bar for defenders, not for attackers. You don’t defend against any attack vector. You are not doing security here, you pretend to.

And yes, I know how to disassemble an app and reassemble it to circumvent this “security”. I’m even considering building an Android app for rooted phones that pulls the CA certificate of Burp, remounts the system partition and installs the CA there automatically.
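To sketch what such an app would boil down to on a rooted test phone, here is a minimal, hedged example. Everything in it is an assumption rather than a finished tool: it assumes the Burp CA has already been exported, converted to PEM, renamed to its subject_hash_old value with a “.0” extension (OpenSSL’s -subject_hash_old option gives you that hash) and pushed to the device, and the exact remount options and SELinux handling differ between devices and Android versions.

import java.io.IOException;

// Rough sketch for rooted test devices only: copy an intercepting proxy's CA
// into the system trust store. The file name and the mount commands below are assumptions.
public class SystemCaInstaller {

    // Run a single shell command as root via su
    static void su(String command) throws IOException, InterruptedException {
        Process p = Runtime.getRuntime().exec(new String[] {"su", "-c", command});
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + command);
        }
    }

    public static void main(String[] args) throws Exception {
        // hypothetical file: PEM-encoded Burp CA named <subject_hash_old>.0
        String cert = "/sdcard/9a5ba575.0";

        su("mount -o rw,remount /system");                        // make /system writable
        su("cp " + cert + " /system/etc/security/cacerts/");      // drop the CA into the system store
        su("chmod 644 /system/etc/security/cacerts/9a5ba575.0");  // same permissions as the other CAs
        su("mount -o ro,remount /system");                        // remount read-only again
    }
}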

Maybe the Android team is just sour because they are losing the rooting-detection game with SafetyNet to Magisk root (good job Magisk guys!). I seriously don’t have a better explanation.

And by the way, I’ve heard the joke “Android is open source, change it!” already.

I thought I’d seen many stupid Android security decisions, but this is exceptionally stupid. Or maybe it’s just me; please enlighten me in the comments!

Java Key Store (JKS) format is weak and insecure (CVE-2017-10356)

While preparing my talk for the marvelous BSides Zurich I noticed again how nearly nobody on the Internet warns you that Java’s JKS file format is weak and insecure. While users only need to use very strong passwords and keep the Key Store file secret to be on the safe side (for now!), I think it is important to tell people when a technology is weak. People should stop using JKS now, as I predict a very long phase-out period. JKS has been around, and has been the default, since Java had its first Key Store. Your security relies on a single SHA-1 calculation here.
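To illustrate what that single SHA-1 calculation means, here is a rough sketch of a password check against a JKS-protected private key, based on my understanding of the scheme described in the links further down; the class and method names and the number of known plaintext bytes compared are my own choices for the sketch:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of why one password guess against a JKS private key costs a single SHA-1:
// the first keystream block is SHA1(password_bytes || salt) and the plaintext is a
// DER-encoded PKCS#8 key whose first byte (0x30, an ASN.1 SEQUENCE) is known.
public class JksGuessCheck {

    // JKS encodes the password as two bytes per character (UTF-16BE)
    static byte[] passwordBytes(String password) {
        return password.getBytes(StandardCharsets.UTF_16BE);
    }

    static byte[] sha1(byte[] a, byte[] b) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(a);
        md.update(b);
        return md.digest();
    }

    // salt: first 20 bytes of the protected key blob; encrypted: the bytes after the salt.
    // In practice you would compare a few more known plaintext bytes to rule out false positives.
    static boolean looksCorrect(String guess, byte[] salt, byte[] encrypted) throws Exception {
        byte[] keystream = sha1(passwordBytes(guess), salt); // first 20-byte keystream block
        byte firstPlaintextByte = (byte) (encrypted[0] ^ keystream[0]);
        return firstPlaintextByte == 0x30;
    }
}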

Please note that I’m not talking about any other Key Store type (BKS, PKCS#12, etc.), but see the cryptosense website for articles about them.
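If you want to move away from JKS today, the standard KeyStore API can copy all entries from a JKS file into a PKCS#12 file (keytool’s -importkeystore can do the same). Here is a minimal sketch; the file names and passwords are placeholders:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.Key;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.util.Enumeration;

public class JksToPkcs12 {
    public static void main(String[] args) throws Exception {
        char[] storePass = "changeit".toCharArray();   // placeholder store password
        char[] keyPass   = "changeit".toCharArray();   // placeholder key password

        // Load the existing JKS store (the weak format)
        KeyStore jks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("keystore.jks")) {
            jks.load(in, storePass);
        }

        // Create an empty PKCS#12 store and copy every entry over
        KeyStore p12 = KeyStore.getInstance("PKCS12");
        p12.load(null, null);

        Enumeration<String> aliases = jks.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            if (jks.isKeyEntry(alias)) {
                Key key = jks.getKey(alias, keyPass);
                Certificate[] chain = jks.getCertificateChain(alias);
                p12.setKeyEntry(alias, key, keyPass, chain);
            } else if (jks.isCertificateEntry(alias)) {
                p12.setCertificateEntry(alias, jks.getCertificate(alias));
            }
        }

        try (FileOutputStream out = new FileOutputStream("keystore.p12")) {
            p12.store(out, storePass);
        }
    }
}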

I don’t want to go into the details of why JKS is insecure here; you can read all about it in the links listed in my email below.

I wrote an email to the Oracle security team, as I think assigning a CVE number would help people refer to this issue and raise awareness among developers. My original email, sent on September 18, 2017:

I would like to ask Oracle to assign a CVE Number for Java’s weak
encryption in JKS files for secure storage of private keys (Java Key
Store files). JKS uses a weak encryption scheme based on SHA1.

I think it is important to raise awareness that JKS is weak by assigning
a CVE number, even when it is going to be replaced in Java 1.9 with PKCS#12.

The details of the weakness are published on the following URLs:

– As an article in the POC||GTFO 0x15 magazine, I attached it to this
email, the full magazine can also be found on
https://www.alchemistowl.org/pocorgtfo/pocorgtfo15.pdf
– https://cryptosense.com/mighty-aphrodite-dark-secrets-of-the-java-keystore/
– https://github.com/floyd-fuh/JKS-private-key-cracker-hashcat

As the article states, no documentation anywhere in the Java world
mentions that JKS is a weak storage format. I would like to change this,
raise awareness and a CVE assignment would help people refer to this issue.

The timeline so far:

September 18, 2017: Notified the Oracle security team via email
September 18, 2017: Generic response that my email was forwarded to the Oracle team that investigates these issues
September 20, 2017: Oracle assigned a tracking number (S0918336)
September 25, 2017: Automated email status report: under investigation / being fixed in main codeline
October 10, 2017: Requested an update and asked if they could assign a CVE number
October 11, 2017: Response, they are still investigating
October 13, 2017: Oracle writes “We have confirmed the issue and will be addressing it in a future release”. In an automated email Oracle also states “The following issue reported by you is fixed in the upcoming Critical Patch Update, due to be released at 1:00 PM, U.S. Pacific Time, on October 17, 2017.”
October 17, 2017: Oracle assigned a CVE in their Oracle Critical Patch Update Advisory – October 2017: CVE-2017-10356. It seems the guys from Cryptosense got credited too. However, as far as I can see, Oracle’s documentation hasn’t changed anywhere so far.

I’ll update this post to let you know how it goes.