[Rubygems-developers] What is right and wrong with dependencies definitions?
thewoolleyman at gmail.com
Fri Nov 9 19:08:30 EST 2007
On 11/9/07, Eric Hodel <drbrain at segment7.net> wrote:
> > What is the best approach for this? Include everything as dependency?
> Including everything is the best approach.
> Things that need to be discussed are:
> * should developer dependencies be installed by default?
> * what does the command-line option look like?
> * what happens on uninstall?
> * what should `gem check` do if all dependencies aren't installed?
> It's really easy to say "gems should have developer dependencies".
> It's a lot of work to make it a reality.
>  http://www.bikeshed.org/
>  http://rubyforge.org/tracker/index.php?
<disclaimer>I'm really trying to NOT bikeshed, and know I won't have
the time to do anything about it other than pimp my tool I wrote to
address this issue.</disclaimer>
I agree that this will probably be really hard to implement in
rubygems, and I'll take your word that it's not worth it. However,
there are still valid reasons on both sides of the argument.
On one hand, it can be considered dangerous to include unused
dependencies on a production box. Here's a real case in point that
just happened to us on a shared box. There's a bug in the latest edge
Rails gem where the mere existence of an unused soap4r dependency
causes failures: http://dev.rubyonrails.org/ticket/10001 We were
able to handle this by auto-installing and locking down the soap4r
version (using GemInstaller), and we were proactive enough to find it
on a demo box instead of prod, but it's still an example of a
completely unrelated dependency causing problems.
Also, in some cases, it just obviously doesn't make sense to include
all dependencies. For example, GemInstaller is itself a tool to
manage dependencies, and I don't want to force everyone who uses it
to install the dozen-plus build-, deploy-, and test-time
dependencies I have. That would freak some people out when they
installed it and make them not want to use the gem. Plus, it's very
likely (especially in the case of RSpec) that one of my dependency
versions conflicts with something else on their system, and unless
they have their RSpec version locked down everywhere (unlikely), the
mere installation of my gem will screw them if it auto-installs a
newer version of RSpec that's incompatible with their old specs.
Now, playing advocate for the other side, it may be a GOOD thing to
always include all dependencies on all platforms. By NOT doing so,
you may open yourself up to unexpected behavior on non-development
platforms. This could happen if the absence of a 'development'
dependency somehow breaks your application through the dynamic magic
of Ruby. This is very possible, and I think I just convinced myself
that I agree with you, at least in the case of Rails apps (because
this is what we've done for dozens of projects and it works fine).
Regardless, there are valid points on both sides. If building this
support into RubyGems really is too hard (and I believe you), then we
need alternate approaches.
Ok, pimp-mode on: I think that my GemInstaller tool
(http://geminstaller.rubyforge.org) is perfect for this - it centrally
manages all your dependencies via an ERB-parsed file (or files).
This allows you to explicitly manage your dependencies, and have them
behave differently based on whatever criteria you want - RAILS_ENV,
hostname, environment variables, whatever. If you have a startup
check in your app that installs and loads the appropriate gems in
development or test mode (rake or test_helper), then you can include
only actual runtime dependencies in your gem while still ensuring that
anyone who wants to build or test your app has the gems they need.
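To make the ERB-parsed-file idea concrete, here's a minimal sketch of
how a templated gem list can branch on environment. This is only an
illustration of the general mechanism - the file format and the
`gems_for` helper are hypothetical, not GemInstaller's actual config
schema or API:

```ruby
require 'erb'
require 'yaml'

# Hypothetical ERB-templated gem list. Development-only dependencies
# appear in the rendered YAML only when env warrants it.
template = <<~TMPL
  gems:
    - name: rails
      version: 1.2.5
  <% if env == 'development' || env == 'test' %>
    - name: rspec
      version: 1.0.8
  <% end %>
TMPL

# Render the template for a given environment and return the gem list.
def gems_for(template, env)
  YAML.load(ERB.new(template).result(binding))['gems']
end
```

The same file could just as easily branch on hostname or an
environment variable; the point is that the dependency policy lives in
one centrally managed place instead of being baked into the gemspec.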