Activity wrap-up including polyglots, RIPS, UploadScanner and Java fuzzing

A tweet by takesako featuring a C/C++/Perl/Ruby/Python polyglot got me interested, so I created two follow-up polyglots based on his work and put them on github.

Recently I also evaluated the RIPS PHP scanner, using some randomly chosen WordPress plugins. Afterwards I manually looked at the code of the plugins to see if the scanner had missed anything. Long story short, RIPS is probably going to get two new issue definitions/checks in a future version, so hopefully it will then find PHP type-unsafe comparisons like the one I found in this WordPress plugin. Additionally, they are planning to flag when a static string is used as input for a hash function. Hashing a static string is pointless and bad from a performance perspective, but it might also indicate the creation of default or backdoor user accounts with static passwords. While discussing the idea of type-unsafe comparisons, albinowax also added a new check to the Backslash Powered Scanner Burp extension.
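To illustrate the kind of checks discussed here, a toy version can be sketched in a few lines of Python. This is not how RIPS works internally (a real scanner analyzes a proper AST, not lines of text with regexes), and the patterns below are simplified assumptions, but they capture the two ideas: flagging type-unsafe comparisons of hash outputs and flagging hashes of static strings.

```python
import re

# Toy sketch of the two checks. Real scanners like RIPS work on an AST;
# these regexes are simplified illustrations only.

# Flag PHP type-unsafe comparisons of hash outputs, e.g.
#   if (md5($input) == $stored_hash)
# where "0e1234..." == "0e9876..." evaluates to true in PHP.
# The lookahead excludes the safe === / !== operators.
UNSAFE_CMP = re.compile(r'\b(md5|sha1|hash)\s*\([^)]*\)\s*[!=]=(?!=)')

# Flag hashing of a static string, e.g. md5('secret'): pointless at best,
# a hardcoded default/backdoor credential at worst.
STATIC_HASH = re.compile(r'\b(md5|sha1|hash)\s*\(\s*[\'"]')

def scan(php_source: str):
    """Return (line number, finding description) tuples for a PHP snippet."""
    findings = []
    for lineno, line in enumerate(php_source.splitlines(), 1):
        if UNSAFE_CMP.search(line):
            findings.append((lineno, 'type-unsafe comparison of a hash'))
        if STATIC_HASH.search(line):
            findings.append((lineno, 'hash of a static string'))
    return findings
```

Running `scan` over a snippet containing `md5($pw) == $hash` flags the loose comparison, while the strict `===` variant passes.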

I will be giving a workshop on modzero’s as-yet-unreleased Burp Proxy UploadScanner extension at the area41 conference in Zurich. I’ve been developing it for more than a year and I’m really looking forward to releasing it after the workshop (it will go public on github). It can be used to test HTTP-based file uploads. The “presales” tickets are gone, but if you catch me at the conference in the morning you might be able to get one of the last seats.

I’ve also released a Java security manager policy generator, which is just a little hack but at least it works. I’m doing some research in the area of Java fuzzing at the moment, more about that later this year.

Schubser and his cookie dealing friend

I actually forgot to post this in February, so I’m a little late, but the topic is as current as it was back then. One week in February my colleague at modzero AG, Jan Girlich, and I took some time to review our tools and make three of them available on github.

Jan wrote a Proof of Concept (PoC) Android app that allows exploiting Java object deserialization vulnerabilities in Android and named this project modjoda (Modzero Java Object Deserialization on Android). To test the issue, he also wrote a vulnerable demo application to try the exploit.

I wrote mod0schubser, which provides a simple TCP- and TLS-level Man-In-The-Middle (MITM) proxy for people with Python experience. It can be used when all the other proxy tools seem too complicated and you just want to do some modifications of the traffic in Python. Additionally, I wrote mod0cookiedealer, a tool to demonstrate the impact of missing HTTP cookie flags (secure and HttpOnly). If you remember Firesheep, mod0cookiedealer is a modern implementation of Firesheep as a browser web extension.
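The core idea of a TCP-level intercepting proxy fits in a few dozen lines of Python. The sketch below is not mod0schubser’s actual code, just a minimal illustration of the concept: each direction of the connection is pumped through a `modify()` hook where you can tamper with the raw bytes (for TLS you would additionally wrap both sockets with the `ssl` module).

```python
import socket
import threading

# Minimal sketch of a TCP-level MITM proxy in the spirit of mod0schubser
# (not its actual code). Traffic in each direction passes through modify().

def modify(data: bytes, direction: str) -> bytes:
    # Put your traffic modifications here; this example rewrites
    # client-to-server traffic only.
    return data.replace(b'cat', b'dog') if direction == 'c2s' else data

def pump(src: socket.socket, dst: socket.socket, direction: str) -> None:
    """Forward bytes from src to dst until src closes, applying modify()."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(modify(data, direction))
    dst.close()

def serve(listen_port: int, target_host: str, target_port: int) -> None:
    """Accept clients on listen_port and relay them to the target."""
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(('127.0.0.1', listen_port))
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        server = socket.create_connection((target_host, target_port))
        threading.Thread(target=pump, args=(client, server, 'c2s'), daemon=True).start()
        threading.Thread(target=pump, args=(server, client, 's2c'), daemon=True).start()
```

Pointing a client at the listen port then transparently relays (and rewrites) its traffic to the real server.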

BSides Zurich – Nail in the JKS coffin

On Saturday I was happy to speak at the fabulous BSides Zurich about the Java Key Store topic. You can find my slides “Nail in the JKS coffin” as a PDF here. It was my second time at a BSides format and I really like the idea of having a short talk and then some more time to discuss the topic with interested people. I also included the “after the presentation” slides we used for roughly 50% of the discussion time. I hope you enjoyed the talk and I’m looking forward to hearing some feedback. Although it was sold out, you should definitely come next year; it was one of my favorite public conferences.


Android Nougat’s worst anti-security mechanism

If you are a pentester like me, you are doing mobile application reviews on Android. One of the most important things to check is the server API. On the other hand, we might want to see what possibilities a server has to influence the Android app with its responses. For both, the easiest and most straightforward method is to do a Man-In-The-Middle attack in the lab and look at the network traffic. How do we do this if the mobile app uses TLS? Easy, just install a user CA certificate.

Before Android 7 this was a good and straightforward solution. A nag screen showed up in the notifications every time you started your phone (which was already a little funny), but it worked fine for everyone. However, starting with Android 7 this no longer works; I tested it myself, and the official announcement about this user-added certificate security is here. So let’s look at this new “security” feature of Google’s Android.

First of all, who is affected? I think only the defender side has to jump through this hoop, because every attack vector I can think of is ridiculous. To start with, a user would need to fully cooperate to let an attacker exploit this. As Android does not open the security settings automatically when you download a certificate (unlike iOS), an attacker would have to convince the user to go to the settings dialogue, go to security, scroll down, tap on “install certificate” and choose the correct file from the file system. Let’s say an attacker sets up a Wi-Fi access point and forces the user to do this, as otherwise the user won’t get internet access. This is the only scenario I can think of where a user might even consider installing such a certificate. You might say that can happen with non-technical users, but then why don’t we just add a big red warning that this is probably the worst idea ever? That would totally suffice in my opinion. If a user is stupid enough to install an unknown CA despite the warnings, everything is lost anyway. That user will also type all his passwords into any form that looks remotely like a known login form the attacker provides. Let’s also consider corporate Android phones. I can understand that administrators don’t want their users to decide on such a security-critical topic. But why doesn’t Android just implement an Administrator API rule that disables installation of user CA certificates and deletes all already-installed ones on managed phones?

Secondly, why the hell does Android think that a user-installed certificate is less trusted than the hundreds of preinstalled, nation-state-attacker-owned CAs?

Android, you are raising the bar for defenders, not for attackers. You don’t defend against any attack vector. You are not doing security here, you pretend to.

And yes, I know how to disassemble an app and reassemble it to circumvent this “security”. I’m even considering building an Android app for rooted phones that will pull the CA certificate of Burp, remount the system partition and install the CA there automatically.

Maybe the Android team is just sour because they are losing the rooting-detection game with SafetyNet to Magisk root (good job Magisk guys!). I seriously don’t have a better explanation.

And by the way I’ve heard the joke “Android is open source, change it!” already.

I thought I’d seen many stupid Android security decisions, but this is exceptionally stupid. Or it’s me; please enlighten me in the comments!

Java Key Store (JKS) format is weak and insecure (CVE-2017-10356)

While preparing my talk for the marvelous BSides Zurich I noticed again how nearly nobody on the Internet warns you that Java’s JKS file format is weak and insecure. While users only need to use very strong passwords and keep the Key Store file secret to be on the safe side (for now!), I think it is important to tell people when a technology is weak. People should stop using JKS now, as I predict a very long phase-out period. JKS has been around, and has been the default, since Java got its first Key Store. Your security relies on a single SHA-1 calculation here.

Please note that I’m not talking about any other Key Store type (BKS, PKCS#12, etc.), but see the cryptosense website for articles about them.

I don’t want to go into the details of why JKS is insecure; you can read all about it here:

I wrote an email to the Oracle security team, as I think assigning a CVE number would help people to refer to this issue and raise awareness for developers. My original email, sent on September 18, 2017:

I would like to ask Oracle to assign a CVE Number for Java’s weak
encryption in JKS files for secure storage of private keys (Java Key
Store files). JKS uses a weak encryption scheme based on SHA1.

I think it is important to raise awareness that JKS is weak by assigning
a CVE number, even when it is going to be replaced in Java 1.9 with PKCS#12.

The details of the weakness are published on the following URLs:

– As an article in the POC||GTFO 0x15 magazine, I attached it to this
email, the full magazine can also be found on

As the article states, no documentation anywhere in the Java world
mentions that JKS is a weak storage format. I would like to change this,
raise awareness and a CVE assignment would help people refer to this issue.

The timeline so far:

  • September 18, 2017: Notified Oracle security team via email
  • September 18, 2017: Generic response that my email was forwarded to the Oracle team that investigates these issues
  • September 20, 2017: Oracle assigned a tracking number (S0918336)
  • September 25, 2017: Automated email status report: Under investigation / Being fixed in main codeline
  • October 10, 2017: Requested an update and asked if they could assign a CVE number
  • October 11, 2017: Response: they are still investigating.
  • October 13, 2017: Oracle writes “We have confirmed the issue and will be addressing it in a future release”. In an automated email, Oracle states “The following issue reported by you is fixed in the upcoming Critical Patch Update, due to be released at 1:00 PM, U.S. Pacific Time, on October 17, 2017.”.
  • October 17, 2017: Oracle assigned a CVE in their Oracle Critical Patch Update Advisory – October 2017: CVE-2017-10356. It seems the guys from Cryptosense got credited too. However, so far Oracle’s documentation hasn’t changed anywhere I could see.
  • November 16, 2017: I asked again to clarify what the countermeasures are and what they are planning to do with JKS. They seem to be mixing my CVE and the JKS issues with issues in other Key Store types.
  • November 17, 2017: Oracle replied (again mixing in issues of other Key Store types): “In JDK 9 the default keystore format is PKCS#11 which doesn’t have the limits of the JKS format — and we’ve put in some migration capability also. For all versions we have increased the iteration counts [sic!] used significantly so that even though the algorithms are weak, a brute-force search will take a lot longer. For older versions we will be backporting the missing bits of PKCS#11 so that it can be used as the keystore type.”. That was the good part of the answer, even though JKS has no iteration count. The second part, where I asked if they could add some links to their Critical Patch Update Advisory, was: “In order to prevent undue risks to our customers, Oracle will not provide additional information about the specifics of vulnerabilities beyond what is provided in the Critical Patch Update (or Security Alert) advisory and pre-release note, the pre-installation notes, the readme files, and FAQs.”.

That’s it for me for now. I’m too tired to start arguing about keeping technical details secret. So basically I have to hope that everyone finds this blog post when searching for CVE-2017-10356.

Cracking Java’s weak encryption – Nail in the JKS coffin

POC||GTFO journal edition 0x15 came out a while ago and I’m happy to have contributed the article “Nail in the JKS coffin”. You should really read the article, I’m not going to repeat myself here. I’ve also made the code available on my “JKS private key cracker hashcat” github repository.

For those who really need a TL;DR, the developed cracking technique relies on three main issues with the JKS format:

  1. Due to the unusual design of JKS, the key store password can be ignored and the private key password cracked directly.
  2. By exploiting a weakness of the Password Based Encryption scheme for the private key in JKS described by cryptosense, the effort to try a password is minimal (one SHA-1 calculation).
  3. As public keys are not encrypted in the JKS file format, we can determine the algorithm and key size of the public key to know the PKCS#8 encoded fingerprint we have to expect in step 2.
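The core of the check behind these three points can be sketched in a few lines of Python. This mirrors the idea, not the actual JksPrivkPrepare.jar or hashcat code, and the layout described in the comments is my summary of the format, so treat the details as an illustration:

```python
import hashlib

# Sketch of the single-SHA-1 password check (an illustration of the idea,
# not the actual cracker code). In a JKS protected key blob, the layout is
# roughly: salt(20) || ciphertext || checksum(20), the keystream begins with
# SHA1(password_utf16be + salt), and the decrypted key starts with a
# predictable PKCS#8 header. So one SHA-1 per candidate password suffices.

def check_password(candidate: str, salt: bytes, ct_prefix: bytes,
                   known_plain_prefix: bytes) -> bool:
    pw = candidate.encode('utf-16-be')            # JKS encodes passwords as UTF-16BE
    keystream = hashlib.sha1(pw + salt).digest()  # first 20 keystream bytes
    plain = bytes(c ^ k for c, k in zip(ct_prefix, keystream))
    return plain.startswith(known_plain_prefix)   # does a PKCS#8 header appear?
```

Iterating `check_password` over a wordlist is essentially what the hashcat mode does at scale, which is why JKS cracking is so cheap compared to a proper, iterated key derivation function.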

For a practical TL;DR, see the github repository on how JksPrivkPrepare.jar can be used together with the hashcat password cracking tool to crack passwords.

Other key store file formats such as JCEKS, PKCS12 or BKS are not affected by the described issues. It is recommended to use the PKCS12 format to store private keys and to store the files in a secure location. For example, Android app release key store files should be kept somewhere other than a source code repository such as git.

Crash bash

Fuzzing Bash 4.4 patch 12 with AFL mainly fork-bombed the fuzzing machine, but it also found the following crashing inputs (they all have the same root cause):


It also works on Bash 3.2.57, but some friends told me that they needed the following to reproduce:

echo -ne '<&-<${}'|bash

An Ubuntu user told me it was not reproducible at all, but I rather suspect his whoopsie didn’t want him to see it. Edit: As pointed out by Matthew in the comments, it also works on Ubuntu.

It looks like a null pointer dereference to me:

Program received signal SIGSEGV, Segmentation fault.
0x000912a8 in buffered_getchar () at input.c:565
565	  return (bufstream_getc (buffers[bash_input.location.buffered_fd]));
(gdb) bt
#0  0x000912a8 in buffered_getchar () at input.c:565
#1  0x0002f87c in yy_getc () at /usr/homes/chet/src/bash/src/parse.y:1390
#2  0x000302cc in shell_getc (remove_quoted_newline=1) at
#3  0x0002e928 in read_token (command=0) at
#4  0x00029d2c in yylex () at /usr/homes/chet/src/bash/src/parse.y:2675
#5  0x000262cc in yyparse () at
#6  0x00025efc in parse_command () at eval.c:261
#7  0x00025de8 in read_command () at eval.c:305
#8  0x00025a70 in reader_loop () at eval.c:149
#9  0x0002298c in main (argc=1, argv=0xbefff824, env=0xbefff82c) at
(gdb) p bash_input.location.buffered_fd
$1 = 0
(gdb) p buffers
$2 = (BUFFERED_STREAM **) 0x174808
(gdb) x/10x 0x174808
0x174808:	0x00000000	0x00000000	0x00000000	0x00000000
0x174818:	0x00000000	0x00000000	0x00000000	0x00000000
0x174828:	0x00000000	0x00000000

The maintainers of bash were notified.

iOS TLS session resumption race condition (CVE-2016-10511)

Roughly three months ago, when iOS 9 was still the newest version available for the iPhone, we encountered a bug in the Twitter iOS app. When doing a transparent proxy setup for one of our iOS app security tests, a Twitter HTTPS request turned up in the Burp proxy log. This should never happen, as the proxy’s HTTPS certificate is not trusted on iOS and therefore connections should be rejected. Shocked, we double-checked that we had not installed the CA certificate of the proxy on the iPhone, and verified with a second, non-jailbroken iPhone. The bug was reproducible on iOS 9.3.3 and 9.3.5.

After opening a Hackerone bug report with Twitter, I took some time to further investigate the issue. Changing the seemingly unrelated location of the DHCP server in our test setup from the interception device to the WiFi access point made the bug non-reproducible. Moving the DHCP server back to the interception device, the issue was reproducible again. This could only mean it was a bug that required exact timing of certain network-related packets. After a lot of back and forth, I was certain that this had to be a race condition/thread safety problem.

Dissecting the network packets with Wireshark, I was able to spot the bug. It seems that if the server certificate in the server hello packet is invalid, the TLS session is not removed fast enough, or in a thread-safe manner, from the TLS connection pool. If the race condition is triggered, this TLS session will be reused for another TLS connection (TLS session resumption). During the TLS session resumption the server hello packet will not include a server certificate: the TLS session is already trusted and the client has no second opportunity to check the server certificate. If an attacker is able to conduct such an attack, the authentication mechanism of TLS is broken, allowing extraction of sensitive OAuth tokens, redirecting the Twitter app via HTTP redirect messages, and other traffic manipulations.
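The suspected flaw can be modeled in a few lines. The sketch below is purely a conceptual illustration of a non-thread-safe session cache (it is in no way Apple’s actual code, and all names here are invented): a session for a connection whose certificate check fails stays in the pool for a short window, during which a concurrent connection can resume it without any certificate check.

```python
import threading
import time

# Conceptual model of the suspected race: sessions enter the cache before
# certificate validation completes, and invalid ones are evicted too late.

session_cache = {}  # host -> established TLS session (simplified)

def connect(host: str, cert_valid: bool) -> str:
    if host in session_cache:
        # Session resumption path: no certificate is re-checked here!
        return 'resumed'
    session_cache[host] = 'session'
    if not cert_valid:
        time.sleep(0.1)          # window during which the race can be won
        del session_cache[host]  # invalidation happens too late
        return 'rejected'
    return 'full handshake'
```

If a second `connect()` for the same host lands inside that 0.1 s window, it returns `'resumed'` even though the only prior handshake for that host was rejected, which is exactly the broken-authentication outcome described above.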

I was not able to reproduce the issue on iOS 10. Twitter additionally fixed the issue on their side in Twitter iOS version 6.44, but noted that this was probably related to an Apple bug. We did not further investigate the issue, but the assumption seems plausible.

The issue was rated high severity by Twitter. The entire details are published on Hackerone.

Update: CVE-2016-10511 was assigned to this security issue.

Activity wrap-up including AFL, CRASS and Burp

Here’s a little overview of my last few months:


Rating conference submissions

Hi everyone

This blog post is about something I had in mind for quite a while and is a topic from the “meta” corner. I think this topic will become more important with new forms of conference submission ratings such as Open CFPs. This blog post is about IT security conferences, but might apply to other conferences too.

A few years ago I was asked (as were many others) to review talk submissions for a (the biggest?) IT security conference in Europe, the CCC congress. As a reviewer you are able to access the material the speakers submitted in written form, including links and papers. Usually you are only part of one reviewer team, which will only rate a certain track. You rate submissions between 1 and 5 stars (half stars allowed) and write a review comment explaining your decision. Rating a talk without giving a reason in the review comment is possible, but in my opinion plain rude. I did review a couple of talks in the last few years, but I wasn’t always comfortable with the way I did it. This blog post approaches that by reflecting on how I could do reviews differently. I hope this helps others to do the same.

Should I really care that much about my “criteria” and whether I’m “doing it right”? That’s one of the first questions I asked myself. Maybe the whole point is that I throw in my opinion? I see two main aspects here: someone on the conference organisation team chose me to review submissions, so it’s probably desired that I throw in my own opinion. On the other hand, it’s important to question one’s own methods. I decided it’s worth taking some time to think about how I review talks. I encourage you to think about the questions in this blog post and reflect on your own ratings, but you probably will and should disagree with some of my opinions.

The goal of reviewing submissions is choosing high-quality talks for the conference. But should the talks be high quality to me, or to what I guess is the average conference participant? That’s probably hard to answer, but I usually try to adapt to the conference participants and especially to the conference purpose. But what is quality? I thought about some criteria that might make up “quality” regarding the content of a talk:

  • What does the talk contribute to the overall IT security field? I know this is a very broad question. But maybe you should write in your review if you don’t see what the talk will contribute.
  • Novelty/creativity of research area/topic. For example the novelty of the target. I think this criterion is overrated, a talk shouldn’t be rated high just because it is about car hacking or hacking an IoT barbie. However, this criterion can contribute to an interesting talk.
  • Novelty/creativity of used techniques/developed tools/analysis approach. For me this is way more important than a fancy research topic. I guess the first talk about DOM based XSS was pretty cool, but if you start to explain that to people nowadays, not so much. In the past I ran into questions like “Is threat visualization a helpful feature or just a fancy gimmick?”. These questions aren’t always easy to answer.
  • Novelty/creativity of the talk in general. I’ve heard a lot of malware talks, but I was often bored by the “new” obfuscation techniques that malware writers invented. Although I couldn’t really say they weren’t new, they just didn’t feel new at all. But then maybe I’m just not a malware analyst.
  • The people’s/conference’s/personal relation to the topic and relevance. If the conference is about hardware hacking, an SQL injection talk is maybe not the thing people are after. But if they talk about a new CPU security feature of an exotic CPU architecture it might not be of relevance for everyone. However, due to my personal preferences I might still give a high rating.
  • Focus. I think you can often spot bad talks that use a lot of buzz words and do not talk about anything specific, but about IT security in general. These talks are often combined with humor. Nearly everybody can tell a funny security story or two, but is it really relevant?
  • Completeness. Is the research finished and covers all topics you would expect? Is the speaker biased and therefore not mentioning certain topics?
  • Ability to understand the talk. If it’s only understandable for the 0.2% of people who did manual chip decapping themselves, this might be just too hardcore. Again, it depends on the conference’s focus. Maybe it’s important that there are at least some of these talks, so people don’t forget what the security community is working on.
  • Learning/knowledge/stimulation. Can I/people learn from the talk? Is the talk stimulating and people want to work on the topic after hearing all the details?
  • Everyday usefulness. Can people apply it right away at home? I guess it is important that there are some of these talks, but it’s not too important.
  • Is the information well written? Adds to the overall impression.
  • Was the research presented before at other conferences? I think you should mention in the comments if you’ve heard a talk before.
  • Personal overall feeling in three categories (and the amount of talks I rate that way): Accept (20%), undecided (60%) and reject (20%).
  • Would I go to the talk?

But then there is as well a more human component in this entire conference talk thing:

  • Speaker’s presence. There are a lot of people who talk a lot, are nice to listen to, and afterwards I do think the talk was good. But sometimes it still feels like they didn’t say anything I didn’t know before. A good example is a TED talk about nothing. Maybe I was blinded by the speaker being able to make me feel good, because I had that “oh, I thought that before!” moment. Keynotes often make me feel this way. I think that’s fine for keynotes.
  • Humor. I never rate a talk better because it is funny, and I think humor shouldn’t be part of the submission text (but may be part of the presentation). Humor very often makes a good talk brilliant, because hard topics are easier to digest this way. It allows repeating important information without the repetition seeming boring. Fun talks can be very entertaining; doing a hacker jeopardy is hilarious when everybody knows what’s coming. But humor can never replace good content.
  • Entertainment. Exactly like humor, the dose is important. I think it shouldn’t be part of the submission text.
  • Do I rate talks of people I personally know/dislike/admire? Do I rate talks better, because the speaker is well-known? Because I heard good things about his talks? Sometimes I do, sometimes I don’t, but I write about it in the review comment. Being honest is the key.
  • Equality, gender neutrality, quotas. I try to treat everyone the same.
  • What are red flag criteria? For me the most important red flag criterion is talking about research results but not releasing the developed tool as open source. If the speaker is not Aleph One, a talk should never have a title with “for fun and profit”. For me it is important to spot pure marketing stunts: it’s not only corporations trying to do this, but also open source tool maintainers who simply love their project and want to promote it. What’s the reason this topic should get a time slot?
  • When do I intervene with the conference board? For example if a research is obviously fake or plagiarism or in the wrong track.
  • Which talks should I rate? I start rating submissions for topics I’m very familiar with, starting with those I did research myself. If I have time I try to rate all talks I was asked to rate. I try to be honest in the comments and write if I’m not too familiar with the topic but I’m rating anyway.
  • Did I understand the submission’s topic? Maybe read it again? Maybe I shouldn’t rate it if I didn’t get it?

It’s a complicated topic, and I couldn’t find very much further reading on it. If you know something or have a different opinion, leave it in the comments. Here are a couple of links: