[Rubygems-developers] Need to release 0.9.1 due to security exploit

Paul Duncan pabs at pablotron.org
Wed Jan 17 19:44:09 EST 2007

* Hugh Sasse (hgs at dmu.ac.uk) wrote:
> On Tue, 16 Jan 2007, Paul Duncan wrote:
> > The _really_ bad news is this type of attack became a whole lot easier
> > once RubyGems started using mirrors.  Instead of worrying about a
> > malicious user breaking into one machine (rubyforge.org), we now have to
> > worry about them breaking into N machines, where N is the number of
> > Gem mirrors.
> That's "breaking into any one of N machines" I take it?  Having to
> hack more than one would make it stronger, but 1 of N weaker...

What I'm saying is that instead of having only one door to knock on, a
malicious user now has N doors to knock on, where N is the number of
gem mirrors.

Even though having mirrors reduces the number of users that would be
affected by a break-in, it still reduces overall security: it
increases the number of critical machines and, by extension, the
chance that one of them could be compromised.

Let's say the malicious user finds a way to break into one of the
mirrors (through a bug in OpenSSH, Apache, or whatever).  Now they place
a trojaned gem in the mirror's gem list, and muddle with the configuration
a little bit so the file isn't properly overwritten when RubyForge pushes
out updates.

Since RubyForge uses a round-robin DNS entry for deploying gems, that
means that roughly 1 in N users who install the gem in question will
now get a malicious, trojaned version of that gem file.

> > The only way to completely eliminate this type of attack would be to
> > force gem authors to sign their gems, create an author certificate
> > distribution mechanism (or tie it to some sort of existing trust
> > mechanism like X.509 or PGP keyservers [1]), remove all the unsigned
> > gems from the Gem repositories, and (finally) enable signature checking
> > by default in RubyGems. 
> > 
> > Frankly, I don't see all of that (or any of it, really) happening any
> > time soon.
> > 
> I'd prefer to not get into all that as well.  Doing cryptography right,
> keeping the keys somewhere useful but sufficiently inaccessible, coping
> with the (changes in) cryptography legislation: it's all rather horrible.

RubyGems already includes cryptographic package signing (written by
yours truly).  There's also an entire chapter of the RubyGems manual
dedicated to creating a certificate and signing your gems (linked
below), so the next step is really raising developer awareness.  Here's
the chapter on RubyGems package signing:


So the actual crypto part is already mostly complete; support for CRL
checking and OCSP validation are missing, along with a couple of other
useful bits, but nothing that's too hard to add and nothing that's
absolutely essential for cryptographically signing and verifying packages
right now.  

Unfortunately, the actual crypto isn't the hard part.  

The hard part is getting developers to adopt it.  I feel like the
documentation is adequate, and I also posted an entry on my web site
that has a relatively automagic gem signing blurb that can be dropped
into a Rakefile or Gem specification.  Here it is:

  (Ignore the first paragraph about the Rake patch and skip to the later
  bit about gem signing).
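For concreteness, the conditional-signing idea can be sketched in a gem
specification like this (the gem name and key/cert paths below are
hypothetical examples, not the exact blurb from my site):

```ruby
require 'rubygems'

# Sketch of a conditionally-signed gemspec.  The gem name and the
# key/certificate paths are placeholder examples.
spec = Gem::Specification.new do |s|
  s.name    = 'mygem'
  s.version = '0.1.0'
  s.summary = 'Example of a signed gem'
  s.authors = ['Example Author']
  s.files   = ['lib/mygem.rb']

  # Only sign when the private key is actually present, so people
  # without the key can still build the gem from a checkout.
  key  = File.expand_path('~/.gem/gem-private_key.pem')
  cert = File.expand_path('~/.gem/gem-public_cert.pem')
  if File.exist?(key) && File.exist?(cert)
    s.signing_key = key
    s.cert_chain  = [cert]
  end
end
```

The `File.exist?` guard is the important part: the same gemspec works for
everyone, and signing only kicks in on the machine that holds the key.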

Another "hard" aspect is trust (I alluded to this in the paragraph you
quoted above).  Specifically, how can a user be sure a particular
certificate (or public key) is associated with the author of a given

X.509 public key infrastructure (PKI) addresses trust with a comparatively
rigid hierarchy of signed certificates (a "trust chain") that ultimately
ends with an end user-trusted certificate, while PGP addresses trust by
having both certificates and certificates' cross-signatures [1]
distributed widely across PGP keyservers (aka, the "web of trust").

Traversing the path between the signing certificate and a known, trusted
certificate allows users to cryptographically validate the authenticity
of the message they've received and (in theory, anyway), be reasonably
certain [2] the message contents haven't been tampered with by an
intermediate party, malicious or not.

So, in order for a RubyGems end user to "trust" a package, we need
either an established X.509 PKI trust hierarchy (including pre-packaged,
root issuing certificates, some sort of security policy, and preferably
a CRL distribution point and OCSP responder as well) or a bridge to
PGP's web of trust.

Since RubyGems currently has neither, here's what the one developer who
regularly signs his gems (me) does to provide a reasonable trust path
for end users:

* Signed my gems using my gem signing certificate.  The chapter in the
  RubyGems documentation I mentioned above has instructions for this
  step.  It's just a standard PEM-encoded X.509 certificate, so you can
  build your own using OpenSSL, TinyCA (Unix variants), MyCert
  (Windows), or whatever you've got handy.
* PGP-signed the certificate for my issuing CA and my Gem signing
  certificate, and posted both certificates and their signatures on my
  web site (at http://pablotron.org/files/certs/).
* Added extra documentation to several packages explaining how to verify
  these certificates against my PGP public key and to verify the
  package signature when installing the Gem (see the last section of
  for an example).
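If you'd rather script the first step than use TinyCA or MyCert, a
self-signed signing certificate can also be built with Ruby's OpenSSL
bindings.  This is only a sketch; the subject name, lifetime, and output
filenames are placeholder values:

```ruby
require 'openssl'

# Sketch: build a self-signed X.509 signing certificate with Ruby's
# OpenSSL bindings.  Subject name and lifetime are placeholders.
key  = OpenSSL::PKey::RSA.new(2048)
name = OpenSSL::X509::Name.parse('/CN=you/DC=example/DC=com')

cert = OpenSSL::X509::Certificate.new
cert.version    = 2               # X.509v3
cert.serial     = 1
cert.subject    = name
cert.issuer     = name            # self-signed, so issuer == subject
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 365 * 24 * 60 * 60   # one year

cert.sign(key, OpenSSL::Digest.new('SHA256'))

# PEM-encode both halves; RubyGems expects PEM files.
File.write('gem-private_key.pem', key.to_pem)
File.write('gem-public_cert.pem', cert.to_pem)
```

The private key file is the thing to guard; see footnote [2] below for
what "reasonably secure" storage can mean in practice.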

While this doesn't completely eliminate the trust issues, I feel as
though it mitigates them enough for "reasonable" [2] use: there are over
half a decade of messages written by me and signed by my public key
stored across multiple email archives, and a user can easily verify
the validity of the signature on any of them against the PGP public key
on my web site or any of the bajillion [3] PGP keyservers that are
available world-wide.

Obviously this is more work than most gem authors should be expected to
do, which is why it'd be nice to have the aforementioned trust mechanism
in place.  Even something as simple as a button to upload your signing
key(s) to RubyForge and an ominous-sounding warning from RubyGems when
installing unsigned gems would be better than what we've got now, which
is nothing at all.

> Is there any merit in this suggestion, below?

No.  It has several problems:

* It doesn't actually say whether a particular gem is legitimate; it
  only says that the gem matches the copies on the other servers.  A
  malicious user who broke in to a server would almost certainly
  disable this check immediately.  And it doesn't address malicious gems
  that are uploaded directly to the master RubyForge server.
* It has a time dependency; there's no way for any of the mirrors to
  actually verify the digest of a particular package until the other
  mirror servers have the package, and each of them has the same problem
  in turn.
* It requires both network access and knowledge of several gem mirrors
  on the client-side in order to "verify" a gem.
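For concreteness, here's roughly what the proposed client-side check
would look like (the method and argument names are hypothetical).  Note
what it actually proves: that the local copy matches most mirrors'
copies, not that the gem itself is legitimate.

```ruby
require 'digest'

# Sketch of the proposed majority-vote check.  Method and argument
# names are hypothetical.  It confirms agreement between mirrors --
# NOT that the gem is legitimate.
def majority_agrees?(gem_path, mirror_digests)
  local = Digest::SHA256.file(gem_path).hexdigest
  votes = mirror_digests.count { |d| d == local }
  votes > mirror_digests.size / 2
end
```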

> <suggestion>
> SHA256 hashes of the gems are kept on at least 3 servers.  A gem is accepted
> from a mirror if its hash agrees with a majority of the responses from
> those servers.
> Pluses:
>   Ruby contains the code for SHA256 in the standard library and it's better
>   than md5 (harder work to fake).  Maybe both together would be stronger.

SSL interpolates the output of multiple hash algorithms in several
places.

>   To plant a fake gem would mean breaking into a majority of the hash
>   servers, rather than just one machine

Or subverting DNS on the client side and redirecting requests for known
gem repositories to a site the attacker controls.

>   It would be somewhat less prone to network/host outages iff there are
>   >3 servers
> Minuses:
>   It's still pretty weak.

Agreed (see my notes above).

>   we need some mechanism to prevent the data being replicated (falsely)
>   across a majority/all the servers
>   How do we keep people up to date with where the servers are, which
>   can be trusted, etc, without weakening security?  "Key under the
>   doormat" problem.  (security = 1/convenience)
> Interesting:
>   Could be applied to more than just the gems.
> </suggestion>
> I'm sure this is really rather naive, and what I've read about NOT
> implementing security systems from the ground up leads me to believe
> that this idea should probably be killed sooner rather than later,
> but in the hope that it is an improvement over where we are now, I
> will risk my global display of ignorance....

Agreed on the sooner part, for the reasons mentioned above and a couple
more that I've omitted for the sake of brevity.

(Yes, that's right; I actually left a bunch of stuff out of this email.)

>         Hugh

[1] PGP actually has several additional useful concepts that don't
    translate directly to X.509, like "trust level" and "sub keys", but
    we'll pretend they don't exist for the duration of this email.
    Unless you want a book instead of just a really lengthy response,
    that is. :)

[2] I keep throwing around this "reasonable" term.  Reasonably certain 
    or reasonably secure is a concept that's entirely relative to the
    situation at hand.  
    An end user being "reasonably sure" a certificate is valid can range
    from verifying a key obtained from a PGP keyserver against a handful
    of randomly picked messages from different mail archives, all the
    way up to meeting the person face to face and personally verifying
    the accuracy of the person's driver's license and passport, along
    with their PGP fingerprint.

    On the signing side, keeping a private signing key "reasonably
    secure" can range from keeping it encrypted on a thumb drive all the
    way up to keeping the key on a smart card in a locked cabinet of a
    non-network-accessible lab that's behind a 6-inch steel door which
    requires badge access and is monitored by cameras and protected by a
    metal detector and several armed security guards. 

    Personally, I keep the private key for the root certificate of
    my signing CA encrypted and saved on an encrypted partition (different
    algorithms and passphrases), which is stored on a locked-down
    machine that's firewalled off from the rest of the world.  I
    consider that arrangement more than "reasonably" secure for the CA's
    purposes: issuing the certificates for my home VPN and for my signed
    RubyGems packages.

[3] Okay, maybe not a bajillion.  But a reasonably large number.  Damn,
    there's that "reasonable" word again.

PS. I don't usually toot my own horn, but if you're still reading this
far and find this kind of stuff interesting, there are a couple
additional posts I've written in the last week or so that deal with
security, identity, and trust. The posts are available at the following

  (for the last one, scroll down to see my response to Sean's post)

Paul Duncan <pabs at pablotron.org>        OpenPGP Key ID: 0x82C29562
http://www.pablotron.org/               http://www.paulduncan.org/
