Rating conference submissions

Hi everyone,

This blog post is about something I’ve had in mind for quite a while, a topic from the “meta” corner. I think it will become more important with new forms of conference submission ratings such as Open CFPs. The post is about IT security conferences, but it might apply to other conferences too.

A few years ago I was asked, like many others, to review talk submissions for a (the biggest?) IT security conference in Europe, the CCC congress. As a reviewer you can access the material the speakers submitted in written form, including links and papers. Usually you are only part of one reviewer team, which rates a certain track. You rate submissions between 1 and 5 stars (half stars allowed) and you write a review comment explaining your decision. Rating a talk without giving a reason in the review comment is possible, but in my opinion plain rude. I reviewed a couple of talks in the last few years, but I wasn’t always comfortable with the way I did it. This blog post approaches that by reflecting on how I could do reviews differently. I hope this helps others to do the same.
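
To make those mechanics concrete, here is a minimal sketch of what a single review could look like as a data structure, assuming the scale described above. The field names and validation logic are my own invention for illustration, not the conference’s actual review system.

```python
from dataclasses import dataclass

# Hypothetical model of a single review; the field names are mine,
# not taken from the actual conference review system.
@dataclass
class Review:
    submission_id: str
    stars: float  # 1 to 5, half stars allowed
    comment: str  # the reasoning behind the rating

    def validate(self) -> None:
        # Enforce the 1-to-5 scale in half-star steps.
        if not 1.0 <= self.stars <= 5.0 or (self.stars * 2) % 1 != 0:
            raise ValueError("stars must be between 1 and 5 in half-star steps")
        # An empty comment is allowed by the system, but plain rude.
        if not self.comment.strip():
            print("warning: rating without a reasoning comment is plain rude")

review = Review(
    submission_id="example-talk",
    stars=3.5,
    comment="Solid research, but the tool is not released as open source.",
)
review.validate()
```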

Should I really care that much about my “criteria” and whether I’m “doing it right”? That’s one of the first questions I asked myself. Maybe the whole point is that I throw in my opinion? I see two main aspects here: someone on the conference organisation team chose me to review submissions, so it’s probably desired that I throw in my own opinion. On the other hand, it’s important to question one’s own methods. I decided it’s worth taking some time to think about how I review talks. I encourage you to think about the questions in this blog post and reflect on your own ratings, but you probably will, and should, disagree with some of my opinions.

The goal of reviewing submissions is choosing high quality talks for the conference. But should the talks be high quality to me, or rather to what I guess is the average conference participant? That’s probably hard to answer, but I usually try to adapt to the conference participants and especially to the conference’s purpose. But what is quality? I thought about some criteria that might make up “quality” regarding the content of a talk:

  • What does the talk contribute to the overall IT security field? I know this is a very broad question, but if you don’t see what the talk will contribute, you should probably say so in your review.
  • Novelty/creativity of the research area/topic, for example the novelty of the target. I think this criterion is overrated: a talk shouldn’t be rated highly just because it is about car hacking or hacking an IoT Barbie. However, this criterion can contribute to an interesting talk.
  • Novelty/creativity of the techniques used, the tools developed or the analysis approach. For me this is way more important than a fancy research topic. I guess the first talk about DOM-based XSS was pretty cool, but if you start to explain that to people nowadays, not so much. In the past I ran into questions like “Is threat visualization a helpful feature or just a fancy gimmick?”. These questions aren’t always easy to answer.
  • Novelty/creativity of the talk in general. I’ve heard a lot of malware talks, but I was often bored by the “new” obfuscation techniques that malware writers invented. Although I couldn’t really say they weren’t new, they just didn’t feel new at all. But then maybe I’m just not a malware analyst.
  • The audience’s/conference’s/my personal relation to the topic, and its relevance. If the conference is about hardware hacking, an SQL injection talk is maybe not what people are after. But a talk about a new security feature of an exotic CPU architecture might not be relevant for everyone either. However, due to my personal preferences I might still give it a high rating.
  • Focus. I think you can often spot bad talks because they use a lot of buzzwords and don’t talk about anything specific, just about IT security in general. These talks are often combined with humor. Nearly everybody can tell a funny security story or two, but is it really relevant?
  • Completeness. Is the research finished, and does it cover all the topics you would expect? Is the speaker biased and therefore leaving out certain topics?
  • Ability to understand the talk. If it’s only understandable to the 0.2% of people who have done manual chip decapping themselves, it might just be too hardcore. Again, it depends on the conference’s focus. Maybe it’s important that there are at least some of these talks, so people don’t forget what the security community is working on.
  • Learning/knowledge/stimulation. Can I, or the audience, learn from the talk? Is the talk stimulating, so that people want to work on the topic after hearing all the details?
  • Everyday usefulness. Can people apply it right away at home? I guess it’s good that there are some of these talks, but it’s not a major criterion for me.
  • Is the submission well written? This adds to the overall impression.
  • Was the research presented at other conferences before? I think you should mention it in the comments if you’ve already heard the talk elsewhere.
  • Personal overall feeling in three categories (and the share of talks I rate that way): accept (20%), undecided (60%) and reject (20%). See the sketch after this list.
  • Would I go to the talk?
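
As a rough sanity check on that gut-feeling criterion, here is a small sketch that buckets star ratings into accept/undecided/reject and computes the resulting distribution, which I can compare against my usual 20/60/20 split. The star thresholds are an assumption of mine; the review system defines no such mapping.

```python
from collections import Counter

# Hypothetical thresholds mapping the 1-to-5 star scale to my three
# gut-feeling categories; the review system defines no such mapping.
def bucket(stars: float) -> str:
    if stars >= 4.0:
        return "accept"
    if stars <= 2.0:
        return "reject"
    return "undecided"

def distribution(ratings: list[float]) -> dict[str, float]:
    # Share of ratings per category, to compare against the
    # 20% accept / 60% undecided / 20% reject gut feeling.
    counts = Counter(bucket(s) for s in ratings)
    return {cat: counts[cat] / len(ratings) for cat in ("accept", "undecided", "reject")}

print(distribution([4.5, 3.0, 2.5, 1.5, 3.5, 3.0, 4.0, 2.0, 3.5, 3.0]))
# -> {'accept': 0.2, 'undecided': 0.6, 'reject': 0.2}
```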

But there is also a more human component to this entire conference talk thing:

  • Speaker’s presence. There are a lot of people who talk a lot, are nice to listen to, and afterwards I do think the talk was good. But sometimes it still feels like they didn’t say anything I didn’t know before. A good example is a TED talk about nothing. Maybe I was blinded by the speaker being able to make me feel good, because I had that “oh, I thought that before!” moment. Keynotes often make me feel this way. I think that’s fine for keynotes.
  • Humor. I never rate a talk better because it is funny, and I think humor shouldn’t be part of the submission text (but maybe of the presentation). Humor very often makes a good talk brilliant, because hard topics are easier to digest this way. It allows the speaker to repeat an important piece of information without the repetition seeming boring. Fun talks can be very entertaining; a Hacker Jeopardy is hilarious when everybody knows what’s coming. But humor can never replace good content.
  • Entertainment. As with humor, the dose is important, and I think it shouldn’t be part of the submission text.
  • Do I rate talks of people I personally know/dislike/admire? Do I rate talks better because the speaker is well known, or because I’ve heard good things about their talks? Sometimes I do, sometimes I don’t, but I write about it in the review comment. Being honest is the key.
  • Equality, gender neutrality, quotas. I try to treat everyone the same.
  • What are red flag criteria? For me the most important red flag criterion is presenting research results without releasing the developed tools as open source. And unless the speaker is Aleph One, a talk should never have a title with “for fun and profit”. It is also important to spot pure marketing stunts: it’s not only corporations that try this, but also open source tool maintainers who simply love their project and want to promote it. What’s the reason this topic should get a time slot?
  • When do I intervene with the conference board? For example, if research is obviously fake or plagiarised, or a submission is in the wrong track.
  • Which talks should I rate? I start with submissions on topics I’m very familiar with, beginning with those where I did research myself. If I have time, I try to rate all the talks I was asked to rate. I try to be honest in the comments and write if I’m not too familiar with a topic but am rating it anyway.
  • Did I understand the submission’s topic? Maybe read it again? Maybe I shouldn’t rate it at all if I didn’t get it?

It’s a complicated topic, and when I looked for further reading I couldn’t find very much. If you know of something or have a different opinion, please leave it in the comments.

cheers,
floyd