The revised and compressed OWASP Top 3 Web Application Vulnerabilities

I love Top 10s. They’re everywhere and about everything: Top 10 Fascinating Facts About Neanderthals, Top 10 Crazy Bridal Preparation Customs, Top 10 Alleged Battles Between Humans And Aliens, etc.

But my question was always: why 10? Why not 11? Or 9? Or whatever else? I guess 10 sounds more important than 11 or 9. It’s the decimal system, 10 fingers, easy to visualize. What would you trust more, a Top 11 or a Top 10? So the pressure is on the list’s creator to add, eliminate, or combine items to end up with 10 and keep the list credible.

Let’s get back to our InfoSec sheep. I prefer simplification, and that’s why I started a quest to see if I could end up with a shorter version of the OWASP Top 10.

"The OWASP Top Ten is a powerful awareness document for web application security [...] represents a broad consensus about what the most critical web application security flaws are. [...] Adopting the OWASP Top Ten is perhaps the most effective first step towards changing the software development culture within your organization into one that produces secure code." [link]

The OWASP Top 10 is a versatile project and can be used in multiple ways. But as you work with it, you realize that it is a little bit bloated.

Short URLs are Harmful for Cloud Data Sharing

I was never a big fan of sharing cloud data through a unique link rather than nominating the specific people who can access the data. To me, it feels like security through obscurity.

It looks something like this:

https://{cloud_storage_provider}/?secret_token={some_unique_token}

All the security of this model relies on the randomness and length of the secret token; essentially, the data is exposed to anyone who obtains (or guesses) the link. Google (Drive) is doing it, Microsoft (OneDrive) is doing it.

Now the really silly part comes in. Because the URL is quite lengthy, a decision was made to use URL shorteners (goo.gl, bit.ly, etc.) to distribute the above-mentioned links. Which essentially means that the entropy of the secret link is now reduced to just a few characters (usually around 6).
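To put rough numbers on it, assuming base62 tokens: a 6-character shortened URL has only about 2^36 possible values, while even a (hypothetical) 16-character original token has about 2^95. bc makes the comparison easy:

# echo "62^6" | bc      # 56800235584, roughly 2^36 short URLs
# echo "62^16" | bc     # about 4.8*10^28, roughly 2^95 tokens

A 2^36 space is small enough to enumerate by brute-force scanning, which is exactly what the research below exploited.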

Martin Georgiev and Vitaly Shmatikov from Cornell Tech did some interesting research on these shortener services to see how much data they could gather, and the results were impressive/scary. They were able to trace Google Maps searches back to individuals and gain access to confidential data.

A slightly more complex 3D printing project – The Pirate

I bought a Prusa i3 kit some time ago, in an effort to experiment a bit with 3D printing.

In parallel, I had a discussion about everything with Dani (like most of our discussions). Things like Kickstarter, games, prototyping, and USB sticks were predominant in that particular one. A few days later he came back with a set of pirate characters and a storyline. I decided to focus on the main character and build a prototype.

The initial character sheet

Another, more colorful version


Hacking the Wii remote control

You know that sensation when you are ready to make the winning move but the Wii Remote thinks otherwise and refuses to move as you intended? I know it well, and I had strong negative feelings about my controller(s). You might have noticed that I never considered it might be my lack of skill; the controller is always to blame! And I kept changing them.

My feelings for the Wii Remote changed after I saw what Johnny Lee can do with it:

  • Tracking Your Fingers
  • Multi-point Interactive Whiteboards
  • Head Tracking for Desktop VR Displays

It’s pretty impressive for a $15 piece of hardware and some additional components that you can get for a couple of bucks. Not to mention that if you already have a Wii console, it’s free.

Johnny published all the software on his site so that you can replicate (and maybe extend?) his work.

He also delivered a presentation at TED demonstrating some of his work.

Updating Kali Linux from behind a restrictive proxy

I installed Kali Linux from the mini ISO, so I ended up with a fully functioning Linux system but with little to no tools (just nmap and ncat).

In order to install the tools that make Kali what it is, I had to install the metapackages. For me, the easiest option was to install all of them (kali-linux-all).

It sounds simple:

# apt-get install kali-linux-all

but it was failing constantly:

Failed to fetch http://http.kali.org/kali/pool/main/##whatever_package## Size mismatch

A bit of research, plus trying to download one of the actual packages from the host machine, made me realize that the proxy was blocking access to the packages.
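The check is easy to reproduce: fetch one of the failing packages directly (with ##whatever_package## standing in for an actual package name, as in the error above) and the proxy's interference becomes visible when the downloaded size doesn't match the expected one:

# wget http://http.kali.org/kali/pool/main/##whatever_package##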

I decided to check whether Tor traffic was allowed. Luckily, it was. So I installed it:

# apt-get install tor

started it:

# tor &
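If the proxy really does allow Tor, the client's startup messages printed to the terminal should show the bootstrap completing; expect something along these lines (the exact wording varies between Tor versions):

[notice] Bootstrapped 100% (done)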

and used torify to route apt's traffic through Tor:

# torify apt-get install kali-linux-all

A few more minutes (and 6+ GB of downloads) later, I had my fully featured Kali installation.

http vs https performance

A while ago I had a huge argument with a development team regarding the usage of https. Their major concern was that the impact on performance would be so big that their servers wouldn’t be able to handle the load.

Their approach was to use https just for the login sequence and plain-text communication for everything else. And it wasn’t that they didn’t understand the underlying problem of sending session cookies over an unencrypted channel; they just thought https was too much for the servers to deal with.

Doing some research back then, I found a paper from the ’90s stating that the performance impact was between 10 and 20%, and that mainly because of the hardware (chiefly the CPU) available at the time. With the advances in computational power, that figure should have decreased over time.

And indeed, as of 2010, Gmail switched to using HTTPS for everything by default. Google's calculations showed that SSL/TLS accounted for less than 1% of the CPU load, less than 10 KB of memory per connection, and less than 2% of network overhead. Of course, there were some tweaks involved, but no rocket science.

1%, 2%, 10 KB. Nothing. I remember somebody supposedly saying that 640 KB ought to be enough for anyone :) Maybe he knew something. (Although, as it turns out, Bill Gates didn’t actually say that.)

Five more years have passed since then; hardware is more capable and cheaper, so there’s no excuse not to use https.

I’ve seen poor implementations where all the traffic was passed over a secure channel except the .js files. Needless to say, a MitM attacker can easily modify the .js on the fly and run code in the victim’s browser.
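A crude way to spot scripts still being loaded over plain http (example.com is just a stand-in for the site under test):

# curl -s https://example.com/ | grep -Eo 'src="http://[^"]*"'

Any output means a MitM attacker has an injection point, regardless of how well the rest of the site is encrypted.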

As a closing note: use https for everything and don’t invoke performance issues; there’s no reason, in the current era, not to do so.

Is application security an agile process?

No. Judging by the way it is marketed and sold today, application security is not, by any means, agile.

Can it be? Well, Microsoft says so. When it comes to security, Microsoft has changed a lot in the past decade. The development frameworks they offer have built-in security features nowadays. So, if they say security can be built into an agile development methodology, maybe they know something.


In the old days of development, when the waterfall model was the sine qua non, application security developed alongside it and followed the same waterfall approach.

Let’s look at the major interactions between application security and the software development process in a waterfall model:

  1. Requirements – AppSec defines the non-functional requirements, a.k.a. security requirements. High-level risk and threat analysis is also performed during this phase
  2. Design – secure architecture analysis and finer-grained risk analysis
  3. Construction – source code analysis
  4. Testing – penetration testing
  5. Debugging – follow-up on the security defect mitigation process
  6. Deployment – retesting if needed
  7. Maintenance – regular retesting

If we consider the Agile Manifesto, an agile methodology raises multiple challenges. Let’s take them one by one:

  1. Requirements – In an agile environment, changing requirements is welcomed. While the high-level security requirements stay the same, specific requirements based on the functionality of the application are needed. New functionality may open new threats, so a threat analysis should be performed. Also, each functional requirement should go through a risk analysis process
  2. Design – if the new requirements force a change in the application’s design, a new architecture analysis should be performed to cover the change
  3. Construction – things are no different here compared to the waterfall model; however, because sprints are usually very short (a few weeks or even less), automation is a must
  4. Testing – this is usually one of the major concerns: not only running a penetration test on the changes, but also assessing the overall security implications
  5. Debugging – same as above, but at a much faster pace
  6. Deployment – similar
  7. Maintenance – in an agile environment, periodic retesting becomes crucial

So, what is there to be done to implement application security in an agile environment?

Here are some things to consider:

  • Security training: training the agile team in information and application security means they will make more security-conscious decisions
  • Have a full-time security expert in the agile team
  • Implement automation in the source code analysis: use a solution fully integrated with the development environment, so that whenever a piece of code lands in the repository it gets scanned and potential security defects are sent to the bug tracking system for triage (a rough sketch follows after this list)
  • Implement as much automation as possible in the testing phase; liaise with the QA team and implement security checks during that phase
  • Perform the remaining periodic activities at certain gates in the process (as opposed to every sprint)
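To illustrate the commit-triggered scanning idea, here is a minimal sketch of a server-side git post-receive hook; code-scanner and file-bug are hypothetical placeholders for an actual SAST tool and bug-tracker client, and a real setup would use the vendor's own integrations instead.

#!/bin/sh
# post-receive hook (sketch): scan the files changed by every pushed ref.
# "code-scanner" and "file-bug" are hypothetical stand-in commands.
while read oldrev newrev refname; do
    # list the files touched by this push and feed them to the scanner
    git diff --name-only "$oldrev" "$newrev" \
        | xargs -r code-scanner --output /tmp/scan-results.json
    # send any findings to the bug tracking system for triage
    file-bug --import /tmp/scan-results.json
done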

It all boils down to the exact configuration of the development environment and the chosen methodology and processes, but application security can and should be mapped onto them, with very good results.

WordPress Security Implementation Guideline

I (finally) managed to complete my project on WordPress Security. You can find it here:

https://www.owasp.org/index.php/OWASP_Wordpress_Security_Implementation_Guideline

I also delivered a presentation on this topic at the OWASP Romania InfoSec Conference 2014; the slides are on SlideShare.

Yours truly in action.

Installing Raspbian from scratch without a keyboard or a monitor

So, you got your Raspberry Pi and a nice SD card, but you can’t remember the last time you saw a keyboard, and the only thing around you is a laptop with Windows. Don’t worry, there’s a simple solution.

Download the latest version of Raspbian and Win32 Disk Imager.

Install Win32 Disk Imager and run it as Administrator (“Run as Administrator”). Select your SD card and the Raspbian image you downloaded earlier, write the image, and sit back for a few minutes.

Since you don’t have any other means to access Raspbian than SSH, you need to figure out the IP address.

You can set up your router to reserve a fixed IP address via DHCP for the MAC address of your Raspberry Pi.

Or you can scan for open SSH ports in your LAN:

# nmap -sT -p 22 -v 192.168.x.1-255
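As a small refinement, nmap’s --open flag limits the output to hosts that actually have the port open, which makes the Raspberry easier to spot among the other devices:

# nmap -sT -p 22 --open -v 192.168.x.1-255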

Once you’ve identified the IP of your Raspberry, SSH into it.

The default user/password is pi/raspberry. Needless to say, you should change the default password. You can also set a root password: just run “sudo su” from the command line, then run “passwd” once you have root privileges.
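Spelled out, the password changes look like this (each passwd run prompts for the new password):

$ passwd        # change the default password for the pi user
$ sudo su
# passwd        # set the root password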

Now it’s time to set a static IP address. While logged in over SSH, do the following:

# sudo cp /etc/network/interfaces /etc/network/interfaces.old
# sudo nano /etc/network/interfaces

In the end, the configuration file should look like this:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.x.222
    gateway 192.168.x.1
    netmask 255.255.255.0
    network 192.168.x.0
    broadcast 192.168.x.255

allow-hotplug wlan0
iface wlan0 inet manual
    wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

You just need to restart the networking service:

# sudo /etc/init.d/networking restart

and then you can SSH to the new static IP address.