We’ve recently been hit by several waves of DDoS attacks of various kinds. Layer 3/4 is obviously something our hosting provider needs to deal with, but we need to defend ourselves against Layer 7. One pattern that frequently presents itself is WordPress pingback abuse. This is nothing new: there are blog posts about it dating back as far as 2012, maybe even earlier, from when the WordPress community in its infinite wisdom decided to enable XML-RPC by default for the sake of the WordPress app and some Jetpack features.

Now this opens the very interesting vector of bouncing requests off any WordPress installation that hasn’t explicitly disabled pingbacks (for example via the Disable XML-RPC Pingback plugin). For details on what’s going on, you may wish to read the Incapsula blog post from April 30, 2013, or the Akamai blog post from March 31, 2014, on the same issue.

TL;DR: A single compromised attack zombie issues a lot of requests to vulnerable WordPress sites, asking each of them to send a pingback verification request to the intended target. This camouflages the zombie’s IP address from anything scanning traffic at the network level, since the traffic hitting the target originates from the WordPress sites, not from the zombie. Et voilà: lots of fresh IPs available to the DDoS attack without even compromising those origins directly.

Only by inspecting the User-Agent would the target notice the nature of these requests. This User-Agent has the form WordPress/[version]; http://[wordpress-ip]; verifying pingback from [zombie-ip], so for example “WordPress/4.3.6; http://555.555.555.555; verifying pingback from 666.666.666.666”.

CloudFlare offers to block these requests altogether with rule number 100047WP. It is disabled by default and needs to be enabled under Firewall, Web Application Firewall, Package: CloudFlare Rule Set, Rule details, CloudFlare Specials, by switching on “Block WordPress User-Agent completely (Pingback attack)”, currently on page 9 of this long list of rules. If you have no WordPress installed and/or you don’t care about this pingback nonsense, there’s no good reason not to turn this on.

If you wish to block these requests on your Apache webserver, you could use the following Rewrite statements in your VirtualHost:
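A minimal sketch along these lines; the exact pattern is an assumption, matching only the “verifying pingback” marker rather than the whole WordPress User-Agent:

```apache
# Requires mod_rewrite (a2enmod rewrite); place inside the <VirtualHost> block.
RewriteEngine On
# Match only the pingback-verification marker in the User-Agent, case-insensitively,
# so ordinary WordPress UserAgents (e.g. feed fetchers) pass through untouched.
RewriteCond %{HTTP_USER_AGENT} "verifying pingback" [NC]
# Answer any matching request with 403 Forbidden and stop processing further rules.
RewriteRule .* - [F,L]
```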

This would still allow WordPress UserAgents to read your RSS feed, for example, but the pingback crap would be dropped. You’ll still be hit by the malicious requests though, and they may still make an impact by consuming traffic, filling the logs and tying up webserver resources until the Forbidden response is sent, so applying a filter on the User-Agent string at any earlier stage where that’s possible would probably be beneficial.

If you wish to check that this is working, you can test it with a simple wget directly on your webserver:
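A sketch of such a test; the Host header value www.example.com is a placeholder for your own VirtualHost, and the User-Agent is the example string from above:

```sh
# Send a fake pingback-verification request to the local webserver.
wget -O - --header="Host: www.example.com" \
  --user-agent="WordPress/4.3.6; http://555.555.555.555; verifying pingback from 666.666.666.666" \
  http://127.0.0.1/
```

If the rule is active, wget should report a 403 Forbidden instead of fetching the page.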

If you haven’t got shell access, you can drop the header parameter and target your site directly instead of http://127.0.0.1/, but please don’t test this against anything but your own website.

Oh, btw: if you’re reading this and you’re the one responsible for us digging around knee-deep in our server logs and wasting lots and lots of time and effort just to keep our websites running – I’m quite fantastically curious about your motivation in all this. Maybe you could do us the courtesy of at least letting us know somehow? That would be terrific!

Every now and then you need a cron job that runs at intervals of less than a minute, without having to cobble together a Bash script with a loop first. The trick is the “sleep” command in the crontab, combined with adding the entry multiple times.

If you want to run a script every 30 seconds, you add the following:
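A sketch of the two crontab entries; /path/to/script.sh is a placeholder:

```crontab
# runs at second 0 of every minute
* * * * * /path/to/script.sh
# the same entry again, delayed by 30 seconds
* * * * * sleep 30; /path/to/script.sh
```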

If you want to run a script every 5 seconds, it already gets somewhat more extensive, for example:
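Again with /path/to/script.sh as a placeholder, this takes twelve staggered entries per minute:

```crontab
* * * * * /path/to/script.sh
* * * * * sleep 5; /path/to/script.sh
* * * * * sleep 10; /path/to/script.sh
* * * * * sleep 15; /path/to/script.sh
* * * * * sleep 20; /path/to/script.sh
* * * * * sleep 25; /path/to/script.sh
* * * * * sleep 30; /path/to/script.sh
* * * * * sleep 35; /path/to/script.sh
* * * * * sleep 40; /path/to/script.sh
* * * * * sleep 45; /path/to/script.sh
* * * * * sleep 50; /path/to/script.sh
* * * * * sleep 55; /path/to/script.sh
```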

A clean URL structure with real permalinks is certainly an important element of on-site optimization. But for the construct to stay stable against outside influences (i.e. users typing URLs by hand, strings getting mangled in forums, e-mail clients and the like), every requested page should, for one thing, send its canonical URL either in the HTTP header or as a canonical tag. For another, on each request it should check the URL it was called with, test it against the canonical value, and react appropriately to any discrepancies, as in the sketch below.
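One way to implement the “react to discrepancies” part is a permanent redirect at the webserver level. A minimal mod_rewrite sketch, assuming https://www.example.com is the canonical host (scheme and hostname are placeholders); finer-grained checks against the full canonical URL (path, trailing slash, query string) would be handled analogously in the application itself:

```apache
RewriteEngine On
# If the request arrived under any host other than the canonical one...
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
# ...301-redirect to the same path on the canonical host.
RewriteRule ^/?(.*)$ https://www.example.com/$1 [R=301,L]
```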
