Friday, November 8, 2013

Healthcare.gov Doesn't Use an EV Certificate?

I was attempting to work my way through the US Government's Health Insurance Marketplace tonight*, and in addition to the horrible load times, JavaScript timeouts, and strange behavior, I also noticed that the website does not use an Extended Validation (EV) Certificate. Granted, for many businesses, paying the extra couple of hundred dollars for an EV Certificate for their SSL-protected websites is an unnecessary expense. However, for a website that may be used by a sizable percentage of the adult US population, this seems like a large mistake.

It seems to me that a website used by a large number of individuals who aren't tech-savvy is the perfect application for an EV Certificate. These certificates have a stricter application process and are therefore much more difficult to forge or impersonate using standard phishing techniques -- at least for now. In addition, every major web browser released in roughly the last three years provides a clear visual indicator for sites using EV Certificates. For example:

An SSL-protected page using a standard certificate as it appears in Firefox**:

An SSL-protected page using an EV Certificate as it appears in Firefox:

An EV Certificate makes it much clearer to the end user that they are on the correct website. From a technical perspective, the list of EV Certificate issuers is compiled into the browser code itself, preventing malware from modifying the certificate store.

Given all the advantages, one has to wonder why Healthcare.gov decided not to use one. Verisign, the most well-known (and most expensive) provider, sells EV Certificates for about $2000. Expensive, but a drop in the bucket compared to the hundreds of millions of dollars spent on this website so far. I would imagine that the US Government might be able to strike up some kind of deal with Verisign, in any case.

Some might argue that the EV Certificate is unnecessary, and that may be true. However, let's look at the 'competition' -- some of the private insurance company websites.




Now, to be fair, many of the other sites did not use EV Certificates either, but I doubt they see anywhere near the volume of users the Government's site will. Most customers of those healthcare plans are managed entirely through their employer, and probably only a small number of users log in through the web portal.

Given the amount of personal data the website collects, I believe that they should acquire an EV Certificate to protect users. Any effort made to reduce the likelihood of phishing attacks seems like a worthwhile investment for a site that may be used by millions of citizens.

*This is a discussion of a technical oversight, not a political statement.
**Facebook does not use an EV Certificate because they use a wildcard certificate, and EV Certificates cannot be issued for wildcard domains.

Thursday, January 10, 2013

Password Managers & Lastpass

This post is a collection of forum posts that I made on the topic of password managers and LastPass. As a developer interested in security software and encryption, I have a number of issues with LastPass, and I recommend KeePass instead. KeePass is free and open source. Download here


I don't really trust LastPass, and I find their security to be lacking (convenience over security.)

A system (like KeePass) where you can use a master password plus a keyfile to encrypt the database is the most secure option. It should be noted that LP (and possibly other cloud services) only uses the second factor for access control. That is to say, if someone grabbed your password DB from LP's servers through some attack, all they would need is your master password.
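To illustrate the difference, here is a minimal Python sketch of how a master password and a keyfile can be combined into one encryption key (a simplified KeePass-style scheme; the real derivation has more steps):

```python
import hashlib

def composite_key(master_password: bytes, keyfile_bytes: bytes) -> bytes:
    # Hash each factor separately, then hash the concatenation.
    # An attacker holding the encrypted DB needs BOTH factors to
    # reconstruct this key -- not just the master password.
    pw_hash = hashlib.sha256(master_password).digest()
    kf_hash = hashlib.sha256(keyfile_bytes).digest()
    return hashlib.sha256(pw_hash + kf_hash).digest()

key = composite_key(b"my master password", b"random keyfile contents")
```

Contrast this with a service that checks the second factor only at its front door: there, the key protecting the stolen database never depends on the second factor at all.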

I've never been cool with this. Say an (ex)employee wanted to inject some JavaScript into the login page that sent them your master password. If they also grabbed the DB through some other mechanism, you're screwed. The access control provided by multi-factor is great and should be used more often, but it does nothing to protect the security of your actual password DB should it fall into the wrong hands. The fact that LP admitted to suspicious behavior on their network, and the fact that they weren't using key transforms to make bruteforcing harder, removes any trust I may have given them.

The key transforms issue is the standout for me. I'm a software developer who has written security/encryption code. Shipping a system that doesn't perform these transforms is very sloppy. I'd unfortunately expect as much from typical commercial websites, but for a company focused on security, this is a huge failure.
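For the curious, a key transform (key stretching) just means running the password through many chained rounds before using the result as a key. A Python sketch, with an illustrative password and iteration count of my own choosing:

```python
import hashlib

password = b"hunter2"
salt = b"per-user-random-salt"  # would be random per user in practice

# Without a transform: one cheap hash per brute-force guess
weak_key = hashlib.sha256(salt + password).digest()

# With a transform (PBKDF2): each guess now costs 100,000 chained
# HMAC rounds, slowing an offline attack by roughly the same factor
strong_key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
```

The legitimate user pays the cost once per unlock; an attacker trying billions of candidate passwords pays it billions of times.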

For what it's worth, I inspected the KeePass source code a few versions ago, and it passed my examination. Note: the packaged versions (for download) could have malicious code added, but the developer has been at this a while and personally signs his releases.


I haven't looked at 1Password in any depth. When I looked into LastPass, I was trying to find a service that would allow other members of my business to easily use common passwords. I did a deep dive into the service, including purchasing the full version and a YubiKey. The YubiKey is very cool, but that's a story for another day.

In the end, I use KeePass with 2 databases. One for 'work' and one for my personal passwords. I also use Firefox's password manager (with a master password for encryption support) for my less sensitive passwords (forums, shopping sites that don't have my CC info, etc.) This is a decent compromise between convenience and security for passwords that wouldn't be disastrous if lost.


(On the YubiKey device)

The YubiKey is a cool little USB device that has multiple security functions. It is small and thin, and it only has one 'button' on the top, which is really just a gold-plated finger contact. Anyway, LastPass uses one of its modes, called OATH OTP (one-time password.) Using their code or your own, you can add a form entry field (textbox) to your site that expects a code. Once plugged in, the YubiKey functions as a USB keyboard that automatically 'types' a string of characters into the input field when the user presses the button. Using a couple of algorithms, the server can determine that yes, you have the key in your possession. This satisfies multi-factor authentication: something you know (password) and something you have (YubiKey.)
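As a sketch of the idea (not LastPass' or Yubico's actual code), the OATH HOTP algorithm from RFC 4226 fits in a few lines of Python. The server runs the same computation with its copy of the shared secret and compares:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Both sides increment the counter after a successful login, so each code is valid only once. (TOTP, used by apps like Google Authenticator, is the same construction with the current time slice standing in for the counter.)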

Unfortunately, this provides access control only, though more web portals should use it to increase security (Vanguard, you listening?) What would be ideal is if the YubiKey were actually used to encrypt your password database. Well... it turns out you can do just that. I wrote proof-of-concept code to do it with the YubiKey, as it also supports a mode called 'challenge-response'. You can 'feed' the YK a value, and it will return a value constructed from that input plus a secret internal value that can't be read (but can be reprogrammed.) So, for a password database: when saving, the software could come up with a random value, feed it to the YK, and encrypt the DB with the resultant value (a crypto hash.) The value initially passed to the YK would need to be stored with the database file (it's not sensitive.) On the next open, that value would be read, passed to the YK, and the resulting hash used to decrypt the DB.
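A minimal Python sketch of that flow, with the device's HMAC-SHA1 challenge-response simulated in software (on a real YubiKey the secret lives inside the device and can never be read out):

```python
import hashlib
import hmac
import os

DEVICE_SECRET = b"\x01" * 20  # stand-in for the key's internal secret

def device_response(challenge: bytes) -> bytes:
    # The real device computes HMAC-SHA1 over the challenge internally
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha1).digest()

def derive_db_key(challenge: bytes) -> bytes:
    # Hash the device's response down to a symmetric key for the database
    return hashlib.sha256(device_response(challenge)).digest()

# On save: pick a fresh random challenge and store it beside the DB file
challenge = os.urandom(16)
key_at_save = derive_db_key(challenge)

# On next open: read the stored challenge back and re-derive the key
key_at_open = derive_db_key(challenge)
assert key_at_open == key_at_save  # same device + challenge -> same key
```

Without the physical device, the stored challenge is useless: the attacker cannot compute the response, and therefore cannot derive the key.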

This would be one of the most secure processes I can think of. Of course (from LastPass' perspective), there are downsides. Loss of the device would be problematic. A new device could be reprogrammed to function the same as the lost one, but it would require using tools offered by Yubico. No problem for a local IT department, but hard to tell your customers when you aren't local (LastPass.) You are also forced to use a computer/device that has drivers for the YubiKey. So, understandably (from a business perspective), LastPass made a pro/con decision to be as secure as they can be given the desired functionality: convenience over security.

If you do use LastPass, I would highly recommend one of their options for multi-factor. I believe they now offer the free Google Authenticator method, which uses a small program running on your cell phone as the 'what you have' portion.


(On malware that can read your clipboard)

KeePass tries to get around this in a number of ways. For one, it 'hooks' clipboard events to try to stop other programs (malware) from knowing that the clipboard just received data. This is not totally foolproof, but it's the best that is possible for the clipboard in Windows. Another option is auto-type: KeePass can 'type' your password using a mix of clipboard and key presses. Again, not unbeatable, but it makes things much more difficult.

In any case, many local spyware apps don't bother with the clipboard. They can simply attach to the network stack and see your raw HTTP (web) data as it's transmitted over the wire. It's easier for the hacker to figure out data from the network that looks like:

 POST /login.jsp HTTP/1.1
User-Agent: Mozilla/4.0
Content-Length: 27
Content-Type: application/x-www-form-urlencoded

than massive random clipboard data:

this is just a story that I am writing in my word processor(click 35,43)(enter)(click 45,65)(enter)joe(click 45,43)uhoh

Thursday, August 30, 2012

Hiking Loop

I got a full GPS track of my favorite hiking loop. Fed into Google Earth it looks like this:

Again, this was done with a 25lb backpack. The one rough point in the hike, at around the 1.25-mile mark, is a nasty hill. I'd probably rank it as a Class 2 (some people may need to use their hands to balance.) There's actually a different attack point on that hill that's a Class 3. I use trees and the ground for hand-holds, and I have to choose my footholds very carefully.

A fun little 'walk' with the backpack. About 660ft of cumulative elevation change, 2.19 miles, in 40 minutes.

Saturday, August 25, 2012


I have recently started adding hiking as my complement to weight training (I used to run), as there is a decent wooded path near my house. A recent enhancement to this steady state cardio was a weighted backpack. It is low tech: about 25 pounds of old cement barbell weights in a beat-up backpack. It doesn't seem like much at first, but it really starts to make you work on elevation changes.

Speaking of elevation changes, I recently tried using a GPS app on my phone that allows me to save a KML file (GPS coordinates over time) that can be imported into Google Earth. If you're a runner/walker/hiker you *must* start using something like this. The detail GEarth shows you is amazing. For example, here's an elevation/speed graph of a recent hike (I only turned on the GPS halfway through):

The cumulative elevation gain is about 320ft, which really makes that backpack start to get heavy. I estimate that my whole path is about 3 miles and probably closer to 1000ft CEG, but I need to run the full path in GPS to check.

Sunday, August 19, 2012

tDCS Update

I've started to fill in my tDCS page with the background theory and application. Check it out here. Coming up, I will discuss safety concerns, a DIY tDCS device, electrodes, and experimental results.

Tuesday, August 7, 2012


I created a page for my open source backup software: ZipBackup

The full source code is available as a Mercurial repository over at Bitbucket

Monday, August 6, 2012

Prevent Computer Sleep in C#

I am wrapping up development on a backup utility program (more on this later), and I needed a way to prevent the computer from entering a 'sleep' state while the backup process was running. Though there is no .Net wrapper for the function, one can call the native method in kernel32.dll to achieve the desired result. PInvoke.Net has the details: SetThreadExecutionState

I recommend creating a static class containing the PInvoke to the native code and friendly wrappers for the functions you wish to perform. Here is the full content of my helper class:
using System;
using System.Runtime.InteropServices;

internal static class NativeMethods {

        public static void PreventSleep() {
            SetThreadExecutionState(ExecutionState.EsContinuous | ExecutionState.EsSystemRequired);
        }

        public static void AllowSleep() {
            // Clearing the system-required flag restores normal sleep behavior
            SetThreadExecutionState(ExecutionState.EsContinuous);
        }

        [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        private static extern ExecutionState SetThreadExecutionState(ExecutionState esFlags);

        [Flags]
        private enum ExecutionState : uint {
            EsAwaymodeRequired = 0x00000040,
            EsContinuous = 0x80000000,
            EsDisplayRequired = 0x00000002,
            EsSystemRequired = 0x00000001
        }
}
From there, just call NativeMethods.PreventSleep() when your program starts. It's wise, though not strictly required, to call NativeMethods.AllowSleep() as your program exits.

Monday, July 30, 2012

Transcranial Direct-Current Stimulation

The other day I watched the Morgan Freeman-hosted series 'Through the Wormhole', and the topic of transcranial direct-current stimulation (tDCS) came up. According to recent research, tDCS is a simple, safe, and remarkably effective way of enhancing or controlling various brain functions. By applying a small current to specific locations on the outside of the head, conditions such as depression can be controlled, memory and spatial reasoning can be enhanced, and confidence can be boosted.

I've been reading as many research papers as I can find on this fascinating subject, and it looks like something that is well within the purview of a DIY experimenter. Over the next few blog posts I will lay out a summary of my research and a plan to construct and test a tDCS device.

Wikipedia has a good introductory summary here:

View my research here

Wednesday, June 15, 2011

I recently noticed a huge increase in the volume of blog-comment spam entries. Though some provided a few laughs -- poor grammar, misspellings, and insults -- it was clear that the only purpose they served was to provide a link-back to the spammer's website. After a few late-night emails woke me up, I decided to implement Google's reCAPTCHA service. BlogEngine.Net makes this very easy, but it isn't difficult for other websites to use the reCAPTCHA control, as it is provided as a web service.
The coolest thing about reCAPTCHA, besides it being free, is that every time someone completes a CAPTCHA, they also help improve optical character recognition. In a nutshell, two obfuscated words are presented to the user. In most cases, one is known, and the other comes from a scanned book or newspaper. Assuming the user gets the known word right, their 'guess' at the other word is used as data for the OCR statistical-analysis engine. Collect enough data this way, and the OCR software 'learns' to read much more like a human.
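The gating logic is simple enough to sketch in Python (the names and structure here are my own illustration, not Google's implementation):

```python
def grade_captcha(known_answer, known_guess, unknown_guess, ocr_votes):
    # The user passes only if they solve the word we already know
    if known_guess.strip().lower() != known_answer.lower():
        return False
    # Their reading of the scanned word becomes a 'vote'; once enough
    # independent users agree, the word is considered digitized
    word = unknown_guess.strip().lower()
    ocr_votes[word] = ocr_votes.get(word, 0) + 1
    return True

votes = {}
grade_captcha("overlooks", "overlooks", "morning", votes)
```

Since the user can't tell which of the two words is the control word, they have to answer both honestly.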

Stopping spam and adding to the effort to digitize older, printed knowledge. What could be better than that?

Friday, June 3, 2011

Using the Wrong Version Control System is Costing You (Part 1 of 2)

Almost every developer worth his or her salt recognizes the importance of using a version control (revision control) solution to manage software source code. To the uninitiated, version control software allows files – typically text-based files – to be managed in terms of revisions. The job of the version control software is to keep a running log of any changes or additions to the files under its control.

The benefits to using version control for software code are enormous. Fixes or modifications can be made and ‘checked in’ along with comments as to why the change was made. If any new bugs are introduced by the change, the versions can be compared (along with the comments about the change) to determine what additional changes need to be made.

Version control is very useful for a single developer, but it becomes an absolute necessity once a project has multiple contributing developers. Most version control systems (VCS) include powerful merging tools to allow changes made to the same file, by separate developers, to be combined with relative ease. In addition, the running log of changes and comments by other developers facilitates a tightly integrated team.

So where’s the debate?

One of the most widely used version control systems is Subversion, and with good reason. It’s free and open source, has great client-tool support, and is easy to get started with. Subversion, like many other previous-generation version control systems, employs a client-server model. That is, a central server holds the master database of files and changes (known as the repository), and individual clients are able to pull/push changes as necessary. The server is always the master copy, and all changes must ultimately run through it.

Initially, this sounds like the perfect model. All developers must ensure they push changes to a centrally managed location. There is never any doubt as to which ‘copy’ of a file is considered the master copy; it’s on the central server.

Unfortunately, this model doesn’t work well when developers don’t have quick and reliable access to the central repository – say on a local LAN – and instead work from remote locations, sometimes disconnected. In addition, large modifications to the code base may require multiple days or weeks to complete. During this period, the code may not compile correctly, or it may exhibit strange behavior. So if I, a developer tasked with a multi-day project, have made some code modifications and wish to call it a day, I’m now faced with a no-win choice. If I do not commit the code to the central repository, I have given up one of the primary benefits of using version control. If, on the other hand, I commit my incomplete changes to the server, everyone else who pulls a recent copy will not be able to build the software correctly. Fortunately (somewhat), I can ‘branch’ the code I am working on so that the existing code (commonly called the ‘trunk’) is not left in a state of disarray.

In a perfect world, branching, and the task of merging multiple branches back into the trunk, would be an easy task. Subversion does not make it so. Anyone that has spent enough time with Subversion has an understandable fear of reintegrating multiple branches. It can be done, but it takes effort and planning. It becomes clear that the central model adds a level of complexity and rigor to branching, a feature that is essential to a multi-member development team.

Even if a reliable approach to branching is determined, Subversion, or more specifically the client-server model, still does not work well for a distributed, sometimes disconnected, team. It was clear that a different approach to version control was required, one that was designed, from the ground up, with distributed development in mind…

A follow-up post will discuss distributed version control systems, and why they work better for real-world development.