New year, new host, same hacks

Wow, it’s been a while, the last post was in 2018. I’ve been very busy. Back in 2019 my co-founder Martin and I created Pentagrid. Many things also went dormant during the pandemic, as social interaction got harder.

I’ve moved this blog to a different hosting provider, so let me know if anything broke. If you are into old, weird stuff, check out the TI Voyage 200 calculator functions I wrote more than ten years ago, which I also moved over for nostalgic reasons.

I still do some open-source-related things. For example, stuff that nerd-snipes me, such as balancing memory usage, disk usage and the number of HTTP Range requests by creating a Python file-like object that performs HTTP Range requests in its read function (see the sketch below). My CRASS project is still alive and maintained. I’ve been looking a little more at Windows domain things lately, but mostly using awesome tools from other people. I’m also very much looking forward to the new Burp Extension API and multi-language extensibility, and I still love the tool, even when it sometimes seems that I’m constantly, complaining, a lot, about Burp. I’ve been blogging mostly on the Pentagrid blog, so head over there and check out the new Response Overview Burp Suite extension or how we broke AWS Cognito, for example. Going forward I’ll try to still blog from time to time, even if it takes five years.
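In case you are wondering what that looks like, here is a minimal sketch of the idea (my simplified illustration, not the actual implementation; the example URL is made up and it uses the requests library): a read-only, seekable file-like object that translates read() and seek() into HTTP Range requests. The real version additionally has to balance chunk sizes, caching and the number of requests, which is left out here.

import requests

class HTTPRangeFile:
    """A read-only, seekable file-like object backed by HTTP Range requests."""

    def __init__(self, url):
        self.url = url
        self.pos = 0
        # One HEAD request up front to learn the total size
        head = requests.head(url, allow_redirects=True)
        self.size = int(head.headers["Content-Length"])

    def tell(self):
        return self.pos

    def seek(self, offset, whence=0):
        if whence == 0:    # from the start of the file
            self.pos = offset
        elif whence == 1:  # relative to the current position
            self.pos += offset
        else:              # whence == 2: relative to the end of the file
            self.pos = self.size + offset
        return self.pos

    def read(self, length=-1):
        if length < 0:
            length = self.size - self.pos
        if length == 0 or self.pos >= self.size:
            return b""
        end = min(self.pos + length, self.size) - 1  # Range end is inclusive
        resp = requests.get(self.url,
                            headers={"Range": "bytes=%d-%d" % (self.pos, end)})
        data = resp.content
        self.pos += len(data)
        return data

# Example: read 16 bytes from the middle of a remote file without
# downloading the whole thing first
# f = HTTPRangeFile("https://example.com/big.bin")
# f.seek(1024 * 1024)
# print(f.read(16))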

Activity wrap-up including polyglots, RIPS, UploadScanner and Java fuzzing

A tweet by takesako including a C/C++/Perl/Ruby/Python polyglot got me interested, so I created two follow-up polyglots based on his work and put them on github.
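To illustrate the basic trick behind such polyglots (this is just a tiny example of mine, not one of the polyglots from the tweet or my follow-ups): you look for syntax that several languages parse identically but evaluate differently. The following snippet is valid Python and valid Ruby at the same time and prints a different string depending on the interpreter, because 0 is falsy in Python but truthy in Ruby.

# Run it with "python3 polyglot.py" and with "ruby polyglot.py"
# Python: 0 and ... evaluates to 0, so the "or" branch wins
# Ruby: 0 and ... evaluates to the string, which short-circuits the "or"
print((0 and "this file was run by Ruby" or "this file was run by Python"))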

Recently I also evaluated the RIPS PHP scanner, using some randomly chosen WordPress plugins. Afterwards I manually looked at the code of the plugins to see if the scanner missed anything. Long story short, RIPS is probably going to get two new issue definitions/checks in a future version, so hopefully it will find PHP type-unsafe comparisons like the one I found in this WordPress plugin in the future. Additionally, they are planning to flag when a static string is used as input for a hash function. Hashing a static string is pointless and bad from a performance perspective, but it might also indicate the creation of default or backdoor user accounts with static passwords. While discussing the idea of type-unsafe comparisons, albinowax also added a new check to the Backslash Powered Scanner Burp extension.

I will be giving a workshop on my yet unreleased Burp Proxy UploadScanner extension at the Area41 conference in Zurich. I’ve been developing it for more than a year and I’m really looking forward to releasing it after the workshop (it will go public on github). It can be used to test HTTP-based file uploads, roughly along the lines of the sketch below. The “presale” tickets are gone, but if you catch me at the conference in the morning you might be able to get one of the last seats.
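To give a rough idea of the kind of checks such an extension automates, here is a minimal hand-rolled sketch (my own illustration with a made-up target URL and form field name, not UploadScanner’s actual logic): upload harmless marker content under different filename/content-type combinations and compare how the server reacts.

import requests

UPLOAD_URL = "https://target.example/upload"   # hypothetical upload endpoint
FIELD_NAME = "file"                            # hypothetical form field name

# Harmless marker content (a GIF header plus a unique string to search for later)
payload = b"GIF89a upload-scanner-marker-1337"

test_cases = [
    ("image.gif", "image/gif"),         # baseline upload
    ("image.php", "image/gif"),         # script extension, image content type
    ("image.gif.php", "image/gif"),     # double extension
    ("image.php;.gif", "image/gif"),    # semicolon trick on some older stacks
]

for filename, content_type in test_cases:
    files = {FIELD_NAME: (filename, payload, content_type)}
    resp = requests.post(UPLOAD_URL, files=files)
    print("%-18s %-10s -> HTTP %d, %d bytes" %
          (filename, content_type, resp.status_code, len(resp.content)))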

I’ve also released a Java security manager policy generator, which is just a little hack but at least it works. I’m doing some research in the area of Java fuzzing at the moment, more about that later this year.

Activity wrap-up including AFL, CRASS and Burp

Here’s a little overview of my last few months:

cheers,
floyd

Rating conference submissions

Hi everyone

This blog post is about something I had in mind for quite a while and it is a topic from the “meta” corner. I think this topic will become more important with new forms of conference submission ratings such as Open CFPs. This blog post is about IT security conferences, but might apply to other conferences too.

A few years ago I was asked (as were many others) to review talk submissions for a (the biggest?) IT security conference in Europe, the CCC congress. As a reviewer you are able to access the material the speakers submitted in written form, including links and papers. Usually you are only part of one reviewer team, which will only rate a certain track. You rate submissions between 1 and 5 stars (half stars allowed) and you write a review comment explaining your decision. Rating a talk without giving a reason in the review comment is possible, but in my opinion plain rude. I did review a couple of talks in the last few years, but I wasn’t always comfortable with the way I did it. This blog post approaches that by reflecting on how I could do reviews differently. I hope this helps others to do the same.

Should I really care that much about my “criteria” and whether I’m “doing it right”? That’s one of the first questions I asked myself. Maybe the whole point is that I throw in my opinion? I see two main aspects here: Someone on the conference organisation team chose me to review submissions, so it’s probably desired that I throw in my own opinion. On the other hand it’s important to question one’s own methods. I decided it’s worth taking some time to think about how I review talks. I encourage you to think about the questions in this blog post and reflect on your ratings, but you probably will and should disagree with some of my opinions.

The goal of reviewing submissions is choosing high-quality talks for the conference. But should the talks be high quality to me, or rather to what I guess is the average conference participant? That’s probably hard to answer, but I usually try to adapt to the conference participants and especially to the conference’s purpose. But what is quality? I thought about some criteria that might make up “quality” regarding the content of a talk:

  • What does the talk contribute to the overall IT security field? I know this is a very broad question. But maybe you should write in your review if you don’t see what the talk will contribute.
  • Novelty/creativity of the research area/topic. For example the novelty of the target. I think this criterion is overrated: a talk shouldn’t be rated highly just because it is about car hacking or hacking an IoT Barbie. However, this criterion can contribute to an interesting talk.
  • Novelty/creativity of the used techniques/developed tools/analysis approach. For me this is way more important than a fancy research topic. I guess the first talk about DOM-based XSS was pretty cool, but if you start to explain that to people nowadays, not so much. In the past I ran into questions like “Is threat visualization a helpful feature or just a fancy gimmick?”. These questions aren’t always easy to answer.
  • Novelty/creativity of the talk in general. I’ve heard a lot of malware talks, but I was often bored by the “new” obfuscation techniques that malware writers invented. Although I couldn’t really say that they weren’t new, they just didn’t feel new at all. But then maybe I’m just not a malware analyst.
  • The audience’s/conference’s/my personal relation to the topic, and its relevance. If the conference is about hardware hacking, an SQL injection talk is maybe not the thing people are after. And a talk about a new CPU security feature of an exotic CPU architecture might not be relevant for everyone. However, due to my personal preferences I might still give it a high rating.
  • Focus. I think you can often spot bad talks that use a lot of buzzwords and do not talk about anything specific, but about IT security in general. These talks are often combined with humor. Nearly everybody can tell a funny security story or two, but is it really relevant?
  • Completeness. Is the research finished and does it cover all the topics you would expect? Is the speaker biased and therefore not mentioning certain topics?
  • Ability to understand the talk. If it’s only understandable for the 0.2% of people who did manual chip decapping themselves, it might just be too hardcore. Again, it depends on the conference’s focus. Maybe it’s important that there are at least some of these talks, so people don’t forget what the security community is working on.
  • Learning/knowledge/stimulation. Can I/people learn from the talk? Is the talk stimulating, so that people want to work on the topic after hearing all the details?
  • Everyday usefulness. Can people apply it right away at home? I guess it is important that there are some of these talks, but it’s not too important.
  • Is the submission well written? It adds to the overall impression.
  • Was the research presented before at other conferences? I think you should mention in the comments if you’ve heard a talk before.
  • Personal overall feeling in three categories (and the amount of talks I rate that way): Accept (20%), undecided (60%) and reject (20%).
  • Would I go to the talk?

But then there is also a more human component to this entire conference talk thing:

  • Speaker’s presence. There are a lot of people who talk a lot, are nice to listen to, and afterwards I do think the talk was good. But sometimes it still feels like they didn’t say anything I didn’t know before. A good example is a TED talk about nothing. Maybe I was blinded by the speaker being able to make me feel good, because I had that “oh, I thought that before!” moment. Keynotes often make me feel this way. I think that’s fine for keynotes.
  • Humor. I never rate a talk better because it is funny, and I think it shouldn’t be part of the submission text (but maybe of the presentation). I think humor very often makes a good talk brilliant, because hard topics are easier to digest this way. It allows repeating an important piece of information without the repetition seeming boring. I think fun talks can be very entertaining; a Hacker Jeopardy is hilarious when everybody knows what’s coming. Humor can never replace good content.
  • Entertainment. Exactly like humor, the dose is important. I think it shouldn’t be part of the submission text.
  • Do I rate talks of people I personally know/dislike/admire? Do I rate talks better because the speaker is well-known? Because I heard good things about their talks? Sometimes I do, sometimes I don’t, but I write about it in the review comment. Being honest is the key.
  • Equality, gender neutrality, quotas. I try to treat everyone the same.
  • What are red flag criteria? For me the most important red flag is talking about research results without releasing the developed tool as open source. If the speaker is not Aleph One, a talk should never have a title containing “for fun and profit”. For me it is important to spot pure marketing stunts: it’s not only corporations trying to do this, it is also open source tool maintainers who simply love their project and want to promote it. What’s the reason this topic should get a time slot?
  • When do I intervene with the conference board? For example, if the research is obviously fake, plagiarised or submitted to the wrong track.
  • Which talks should I rate? I start rating submissions for topics I’m very familiar with, beginning with those I did research on myself. If I have time I try to rate all the talks I was asked to rate. I try to be honest in the comments and write if I’m not too familiar with the topic but am rating anyway.
  • Did I understand the submission’s topic? Maybe read it again? Maybe I shouldn’t rate it if I didn’t get it?

It’s a complicated topic. If you would like to do some further reading: I couldn’t find very much myself. If you know something or have a different opinion, leave it in the comments. Here are a couple of links:

cheers,
floyd

What I’ve been up to: a lot

Hi there

Yes, I know, you haven’t heard from me for quite a while (apart from the usual Twitter noise). But I wasn’t lazy! Actually, I feel like I need to get rid of a lot of information. Here’s what I was up to in the last few months:

  • Released the code review audit script scanner (CRASS) on github, which is basically a much improved version of the grep script I talked about in one of my earlier blog posts. It is still heavy on the Android side, but supports a lot more now. Additionally, it includes some other helpful scripts as well.
  • For historical reasons I released some code for the mona.py Unicode buffer overflow feature on github, which I also wrote two blog posts about in the past. By now the entire code is part of mona.py (which is what you should actually use). It’s on github in case someone wants to understand and refactor my code (more comments, a standalone version, etc.).
  • I released a very simple SSL MITM proxy in a couple of lines of bash script on github. To be honest, I was surprised myself that it worked so nicely. It probably doesn’t work in all cases. I’m actually planning to write something about all the options pentesters have for SSL MITM proxies. There is also a Reddit discussion going on about it and I should definitely check those comments.
  • I was teaching some very basic beginner classes in Python (and learned a lot while doing it). Some of my students are going to use IBM WebSphere and its wsadminlib, so I had a look at that code and it honestly shocked me a little. My code is sometimes messy too, but for an official script, that’s just wow. As I’m not very familiar with IBM WebSphere apart from post exploitation, I don’t think I’m the right guy to fix the code (I don’t even have access to an IBM WebSphere server). So I tried to be helpful on github. Meh.
  • I’ve analyzed how Android can be exploited on the UI level to break its sandbox and gave a talk about it at an event in Zurich (“Android apps in sheep’s clothing”). I developed an overlay proof-of-concept exploit (which is on github). When I emailed back and forth with the Android security team about it, they had lame excuses like “we check apps that are put on Google Play”. That’s why I put malware on the Google Play Store (edit: removed the link, as over time I wasn’t in the mood to accept the new fine print for malware in the Google Play Store, but it used to be on https://play.google.com/store/apps/details?id=ch.example.dancingpigs) and of course they didn’t detect it. But Google doesn’t seem to care, it’s still on there. We publicly wrote about it in April 2015, which is six months ago at the time of writing. Nearly no downloads so far, but you get the point, right? Regarding whether the overlay issue is considered a bug, Android only acknowledged that “apps shouldn’t be able to detect which other app is in the foreground”. So when I sent them a link to a Stack Overflow posting showing them that they failed at that in Android 5.0, they opened Android bug ANDROID-20034603. It ended up in the (finally!) newly introduced security bulletins (August 2015), referenced as “CVE-2015-3833: Mitigation bypass of restrictions on getRecentTasks()”. I didn’t get credited because I wasn’t the author of the Stack Overflow posting. Whatever.
  • I’ve released and updated my AFL crash analyzer scripts (Python) and other AFL scripts (mostly bash) on github (see the triage sketch after this list for the basic idea).
  • I have to be a bit more realistic about the heap buffer overflow exploits I said I was “writing”: currently I’m mostly failing at exploiting them (which is very good, I’m learning a lot at the moment). It seems I found crashes (with AFL) that are pretty hard to exploit. I’m currently looking at something that needs to be exploited through a free call (I guess). Anyway, not a problem, I’ll just dig deeper. I just have to make sure that I keep doing crash analysis rather than setting up new fuzzers all the time… so much fun!
  • We went full disclosure on Good Technology and released an XSS from 2013 that enabled a regular user to wipe all mobile devices of their company (just one example). Additionally, I found a new issue, an exported Android intent (aka insecure IPC mechanism) that can be exploited under certain conditions.
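Since the AFL scripts came up in the list above, here is a minimal sketch of what the crash triage step boils down to (a simplified illustration of mine, not the actual crash analyzer; the crash directory and target command line are placeholders): re-run the target on every crash file AFL saved and bucket the crashes by the signal that killed the process.

import os
import signal
import subprocess
from collections import defaultdict

CRASH_DIR = "findings/crashes"          # hypothetical AFL output directory
TARGET_CMD = ["./target_binary", "@@"]  # "@@" gets replaced by the crash file

buckets = defaultdict(list)

for name in sorted(os.listdir(CRASH_DIR)):
    if name == "README.txt":  # AFL drops a README into the crashes directory
        continue
    crash_file = os.path.join(CRASH_DIR, name)
    cmd = [crash_file if arg == "@@" else arg for arg in TARGET_CMD]
    try:
        proc = subprocess.run(cmd, stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL, timeout=10)
    except subprocess.TimeoutExpired:
        buckets["TIMEOUT"].append(name)
        continue
    if proc.returncode < 0:
        # A negative return code means the process was killed by a signal,
        # e.g. -11 for SIGSEGV
        buckets[signal.Signals(-proc.returncode).name].append(name)
    else:
        buckets["EXIT_%d" % proc.returncode].append(name)

for bucket, crashes in sorted(buckets.items()):
    print("%s: %d crash files, e.g. %s" % (bucket, len(crashes), crashes[0]))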

cheers,
floyd

New year – Vallader app, fuzzing and advisories

Happy new year everybody,

As some of you know, I’m learning a new language (Vallader Romansh) and because that language is only spoken by a few tens of thousands of people, there is no dictionary Android app. So hey, here is a version I coded in half a day, on github and on Google Play. I never took the time to improve it, so I thought I’d simply release it today (which took me another half a day). The app isn’t very stable and not well tested, but I guess some app is better than no app at all. Send me pull requests 😉

Moreover, I’ve been fuzzing quite a lot in the last few months and the results are crazy, thanks to AFL. I’m writing heap buffer overflow exploits and I hope I’ll write some more posts about it soon.

If you haven’t seen it, we’ve been releasing a few advisories in 2014.

Additionally, I just changed some settings on this page. You won’t be bothered with third party JavaScript includes on this domain anymore.

Shellshock fix – bash compiling for OSX

By now probably all of you have heard of the Shellshock vulnerability. Just as a small heads-up, I wasn’t able to compile bash version 4.3 on Mac OSX, as the last few patches simply don’t work for me. But here’s how you can compile, test and install version 4.2 on OSX:

#adopted from an original post (that was deleted) from http://www.linus-neumann.de/2014/09/26/clean-your-mac-from-shellshock-by-updating-bash/

PATCH_COMMAND=patch
#No better results with gnu-patch from mac ports -> /opt/local/bin/gpatch


#VERSION_TO_COMPILE=4.1
#VERSION_TO_COMPILE_NO_DOT=41
#VERSION_NUMBER_OF_PATCHES=17

VERSION_TO_COMPILE=4.2
VERSION_TO_COMPILE_NO_DOT=42
VERSION_NUMBER_OF_PATCHES=53

#patches starting from 029 don't work for me in version 4.3
#VERSION_TO_COMPILE=4.3
#VERSION_TO_COMPILE_NO_DOT=43
#VERSION_NUMBER_OF_PATCHES=30


echo "* Downloading bash source code"
wget --quiet http://ftpmirror.gnu.org/bash/bash-$VERSION_TO_COMPILE.tar.gz
tar xzf bash-$VERSION_TO_COMPILE.tar.gz 
cd bash-$VERSION_TO_COMPILE

echo "* Downloading and applying all patches"
for i in $(seq -f "%03g" 1 $VERSION_NUMBER_OF_PATCHES); do
   echo "Downloading and applying patch number $i for bash-$VERSION_TO_COMPILE"
   wget --quiet http://ftp.gnu.org/pub/gnu/bash/bash-$VERSION_TO_COMPILE-patches/bash$VERSION_TO_COMPILE_NO_DOT-$i
   $PATCH_COMMAND -p0 < bash$VERSION_TO_COMPILE_NO_DOT-$i
   #sleep 0.5
done

echo "* configuring and building bash binary"
sleep 1
./configure
make

echo "* writing bash test script"
#The following script will only work when your cwd has the bash binary,
#so you can execute ./bash
#mostly taken from shellshocker.net:
#quote the heredoc delimiter so the payloads below (backslashes, $() etc.)
#are written to the file verbatim instead of being expanded right here
cat << 'EOF' > /tmp/tmp-bash-test-file.sh
    #CVE-2014-6271
    echo "* If the following lines contain the word 'vulnerable' your bash is not fixed:"
    env x='() { :;}; echo vulnerable' ./bash -c "echo no worries so far"
    #CVE-2014-7169
    echo "* If the following lines print the actual date rather than the string 'date' you are vulnerable:"
    env X='() { ()=>\' ./bash -c "echo date"; cat echo;
    #unknown
    echo "* If the following lines contain the word 'vulnerable' your bash is not fixed:"
    env X=' () { }; echo vulnerable' ./bash -c 'echo no worries so far'
    #CVE-2014-7186
    echo "* If the following lines contain the word 'vulnerable' your bash is not fixed:"
    ./bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' || echo "vulnerable CVE-2014-7186 , redir_stack"
    #CVE-2014-7187
    echo "* If the following lines contain the word 'vulnerable' your bash is not fixed:"
    (for x in {1..200} ; do echo "for x$x in ; do :"; done; for x in {1..200} ; do echo done ; done) | ./bash || echo "vulnerable CVE-2014-7187 , word_lineno"
    #CVE-2014-6278
    echo "* If the following lines contain the word 'vulnerable' your bash is not fixed:"
    shellshocker='() { echo vulnerable; }' ./bash -c shellshocker
    #CVE-2014-6277
    echo "* If the following lines contain the word 'vulnerable' your bash is not fixed:"
    ./bash -c "f() { x() { _;}; x() { _;} <<a; }" 2>/dev/null || echo vulnerable
    #more tests, probably often testing the same as above, but better safe than sorry
    echo "* If the following lines contain the word 'vulnerable' your bash is not fixed:"
    env X='() { _; } >_[$($())] { echo vulnerable; }' ./bash -c : 
    echo "* If the following lines contain the word 'vulnerable' your bash is not fixed:"
    foo='() { echo vulnerable; }' ./bash -c foo
EOF

echo ""
echo "* Starting a new bash process to check for vulnerabilities"
echo ""
sleep 1
./bash /tmp/tmp-bash-test-file.sh

echo ""
echo "* If the compiled bash binary is not vulnerable, you want to install that binary in your system:"
echo "cd bash-$VERSION_TO_COMPILE"
echo "sudo make install"
echo "sudo mv /bin/bash /bin/old_vulnerable_bash && sudo ln /usr/local/bin/bash /bin/bash"

cheers,
floyd

DNS zone transfer

Today I thought it would be cool to have a list of all domains that exist in Switzerland. As it turns out, the Swiss registrar (Switch) has configured their nameservers correctly, so you cannot do a DNS zone transfer 🙁 . But I found out that a lot of other TLDs allow zone transfers. I don’t know if it’s on purpose, but I don’t think so, because not all of their authoritative nameservers allow the transfer… Try it yourself (bash script):

tlds="AC AD AE AERO AF AG AI AL AM AN AO AQ AR ARPA AS ASIA AT AU AW AX AZ BA BB BD BE BF BG BH BI BIZ BJ BM BN BO BR BS BT BV BW BY BZ CA CAT CC CD CF CG CH CI CK CL CM CN CO COM COOP CR CU CV CX CY CZ DE DJ DK DM DO DZ EC EDU EE EG ER ES ET EU FI FJ FK FM FO FR GA GB GD GE GF GG GH GI GL GM GN GOV GP GQ GR GS GT GU GW GY HK HM HN HR HT HU ID IE IL IM IN INFO INT IO IQ IR IS IT JE JM JO JOBS JP KE KG KH KI KM KN KP KR KW KY KZ LA LB LC LI LK LR LS LT LU LV LY MA MC MD ME MG MH MIL MK ML MM MN MO MOBI MP MQ MR MS MT MU MUSEUM MV MW MX MY MZ NA NAME NC NE NET NF NG NI NL NO NP NR NU NZ OM ORG PA PE PF PG PH PK PL PM PN PR PRO PS PT PW PY QA RE RO RS RU RW SA SB SC SD SE SG SH SI SJ SK SL SM SN SO SR ST SU SV SY SZ TC TD TEL TF TG TH TJ TK TL TM TN TO TP TR TRAVEL TT TV TW TZ UA UG UK US UY UZ VA VC VE VG VI VN VU WF WS XN XXX YE YT ZA ZM ZW"
    
for tld in $tlds
do
   echo "Doing TLD $tld"
   #Get the TLD's authoritative nameservers, then try a zone transfer (AXFR)
   #against each of them ("dig +short" avoids fragile parsing of the full output)
   for f in $(dig +short ns "$tld.")
   do
       echo "$tld : $f"
       dig axfr "$tld." "@$f" >> output.txt
   done
done

For me it worked for the following TLDs: an, bi, ci, cr, er, et, ga, ge, gy, jm, km, mc, mm, mo, mw, ni, np, pg, pro, sk, sv, tt, uk, uy, ye, zw. Might change in the future. For me the winner is… Slovakia (sk)! Never seen so many DNS entries in one file 😀

Update: I just uploaded my results here. When I talked to Max he decided to put his treasures (.DE for example!) up as well, you’ll find his domains here.

Extracting Windows Hashes

Extracting Windows hashes for password cracking is pretty basic, right? If you try to copy the SAM and SYSTEM files from C:\WINDOWS\system32\config\ on a running Windows 2003 server, you get an error message saying that they are already in use. So before you start using shadow copies or ntbackup or any other tools, consider just copying C:\WINDOWS\repair\SAM and SYSTEM. They are basically the same files, although it seems that the repair folder is not always up to date.
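Once you have copies of both hives, extracting the hashes offline can be done with whichever tool you prefer. As one possible example (assuming impacket's secretsdump.py is installed and the copied hives sit in the current directory; this is just my illustration, not part of the trick above), a tiny wrapper:

import subprocess

# Copies of the hives taken from C:\WINDOWS\repair\ (or via any other method)
SAM_COPY = "SAM"
SYSTEM_COPY = "SYSTEM"

# Offline dump of the local account hashes, ready for your password cracker;
# equivalent to running: secretsdump.py -sam SAM -system SYSTEM LOCAL
subprocess.run(["secretsdump.py",
                "-sam", SAM_COPY,
                "-system", SYSTEM_COPY,
                "LOCAL"],
               check=True)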

Update: There is some more research going on at pauldotcom.