According to Slashdot, savannah.gnu.org is going to migrate over to GForge. It seems like a big, sudden change until you realize that GForge, Savannah, XoopsForge, etc. are all forks of the old SourceForge 2.0 codebase (codenamed Alexandria) and aren't all that different at heart.
It's kind of frustrating to see things splinter like this, but really, that's what it's all about. Everyone grabs the source and tailors it to their own needs--just like what we've done in JDE Design Studio, where we needed to lock out anonymous public access from all project services, so we just forked off the Savannah codebase. A single solution will not simultaneously meet all needs.
Info on embedding arbitrary files as resources is scarce, so I thought I'd contribute some. It's often desirable to store, say, images in external files rather than throw them all into a resx XML resource file. So you add them to your VS.NET C# project--oftentimes a resource-only dll--as embedded resources. Works great until you want to compile it directly from the command line.
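The answer is csc's /res (a.k.a. /resource) option, which takes a file and an optional manifest identifier; a command line along these lines does the trick (the file and assembly names here are just illustrative):

    csc /target:library /out:MyResources.dll /res:image.png,Resources.image.png AssemblyInfo.cs

At runtime you can then pull the image back out of the assembly with Assembly.GetManifestResourceStream("Resources.image.png").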
The tricky thing here is that the MS docs for the /res option only mention using it with .resources files, which are compiled resx files. In fact, though, you can embed any sort of file that way. The "Resources.image.png" part identifies the resource and puts it under the Resources namespace, which is what VS.NET does when you specify a default namespace for the project.
What do you look for in a build system? Ease of use, maintainability, portability, extensibility, simplicity, scalability, reproducibility and performance are all valid things to worry about. All are important. But what I find I'm looking for more & more are the latter two: build reproducibility and build performance.
Both come down to saving time. If your builds aren't reproducible you're going to end up wasting hours tracking down strange differences that somehow crept into your software product. Omitting a file or getting an incorrect version of a file in your product could cost you months of support effort.
I also hate sitting around waiting for builds to finish. That's wasting my time as a developer or tester. So it's important to me that my build system take as many shortcuts as possible to reduce build time (without sacrificing correctness or reproducibility of course). Correctly managing dependencies is the primary way a build system does this. More exotic schemes can be employed in a team environment, where derived objects built under the exact same conditions can be shared across developer sandboxes, or by doing distributed builds. But dependency management takes the biggest bite out of build time & should be tackled first.
Cons is the latest build system I've fooled around with, and on these two counts it seems a cut above the rest.
Cons is very anal with respect to reproducibility--you have to be. By default it builds everything inside a clean system environment. You must manually specify the paths to any tools you're invoking, and there is a well-defined mechanism for importing and exporting these "construction environments" from a parent build into a sub-build. Compare this to GNU make, which by default is sloppy about these things.
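A minimal sketch of what that looks like in practice, in C terms since that's what Cons's built-in builders speak (the paths and flags are made up for illustration): the top-level Construct defines a construction environment and exports it, and each sub-build's Conscript imports it instead of picking anything up from your login shell.

    # Construct (top of the build tree)
    $env = new cons(
        CC     => '/usr/bin/gcc',              # tools are named explicitly...
        CFLAGS => '-O2 -Wall',
        ENV    => { PATH => '/usr/bin:/bin' }, # ...and so is the environment
    );
    Export qw( env );
    Build qw( src/Conscript );

    # src/Conscript (a sub-build)
    Import qw( env );
    Program $env 'hello', 'hello.c';

Nothing leaks in from the shell you happened to start the build from unless you explicitly put it in ENV.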
Cons is also very precise in its dependency management. Rather than use timestamps to determine out-of-dateness--which there are innumerable problems with--it does MD5 checksums of dependencies to see whether they've changed and thus necessitate a rebuild of their targets. This makes builds more reproducible too. But Cons doesn't just checksum the files: it goes so far as to checksum the construction environment and the command line used to perform a build action as well. If any of these things change, the target must be rebuilt.
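Conceptually, a target's signature covers everything that could affect its contents. This little Perl illustration of the idea is mine, not Cons's actual internals:

    use Digest::MD5;

    # A target is up to date only if this signature matches the one recorded
    # the last time it was built: source contents, the exact command line,
    # and the construction variables all feed into it.
    sub content_signature {
        my ($sources, $command, $env) = @_;
        my $md5 = Digest::MD5->new;
        for my $file (@$sources) {
            open my $fh, '<', $file or die "can't read $file: $!";
            $md5->addfile($fh);
        }
        $md5->add($command);
        $md5->add("$_=$env->{$_}") for sort keys %$env;
        return $md5->hexdigest;
    }

Timestamps never enter into it, so touching a file without changing it costs you nothing, while reverting a file to an older version correctly triggers a rebuild.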
Now this might seem a little severe. What if you change the command line or the environment in ways which don't actually affect the build product? Won't Cons waste time rebuilding things it doesn't need to?
Look at it this way though: overall, the fact that you can rely on Cons to do correct, if severe, dependency management is actually going to save you tons of time. How many times have you not been sure that everything out-of-date got rebuilt correctly, and wound up cleaning the entire build tree & kicking off a full rebuild just to be certain? Once you get Cons set up, a top-level "make clean" is going to be very, very unlikely. And this is where the time savings are biggest.
Another cool thing that Cons does is global build sequencing. Rather than examine some dependencies, execute some build actions, examine some more dependencies, and so on, Cons gathers all dependency information and the corresponding build actions over the whole active build tree before doing anything. By deferring execution it is able to put all build actions in the correct order--topological with respect to the dependency graph. This is sometimes a problem under other build systems when you have a large, hierarchical build in which there is some dependency interaction between sub-builds.
Cons pays a lot of attention to hierarchical builds, and for this reason appears to be really, really scalable.
Cons has other cool features: it can do repository builds (naturally this pales in comparison to clearmake's wink-in), it has source file scanners to auto-detect dependencies in common file types, and it is very extensible (of course, it's written in Perl).
That's not to say that Cons is all goodness and light though. Probably the biggest problems with Cons are in the areas of simplicity and maintainability. Writing a Conscript file is much trickier than writing a Makefile if you don't know Perl--there's a steep learning curve there.
Also, I've always liked the idea of storing everything about the build in some structured format like XML--Ant's got the right idea in this respect--since really, if done right, a build is mostly data about what's going in and what's coming out, and their dependencies. The process can be abstracted out. But with Cons all this information is jumbled up in scripts. There's definitely an opportunity for someone to write a framework in Cons that lets you store the dependency information, at least, in some consumable format.
There are also some specific features in Cons that need work. The Conscript_chdir setting, which is supposed to make Cons chdir down into sub-build directories, is one of them (as of cons-2.3.0).
Finally, I'm not sure how active the Cons development community is. Things seem pretty quiet. And certainly there isn't much of a userbase, so one's hacking abilities are put to the test whenever a problem comes up.
Overall, Cons scores pretty high on most of the metrics for judging a build system, and it certainly nails what I consider the big ones. I'm a brand-new Cons user, though, so we'll see if I'm still this excited about it in a few months.
Well, I'll be damned...I certainly didn't expect this. I've just finished writing a Perl module that assists in building VS.NET C# projects using Cons. As I said before, there are lots of reasons why I'm fed up with VS.NET's devenv as a build tool and have decided to go with Cons, but slow first-time builds wasn't among them. All along I assumed my Cons build would probably always be slower than devenv the first time: there's lots of Perl to be interpreted, and each time it fires off csc or resgen those executables have to be reloaded, while devenv instantiates a single compiler object per build. Mainly I was just thinking of the time I'd save on successive builds through good dependency management.
However my first-time Cons build is running almost twice as fast as my first-time devenv build! Currently, the devenv build takes me 1 minute 4 seconds, while Cons finishes in 36 seconds. This is definitely cool. I'm guessing that there are some big fixed costs at play here for devenv, and that it would catch up and pass Cons on larger projects, but the fact remains: clean builds are infrequent and usually unnecessary if you've got a solid & trustworthy build system. Next I'm gonna talk about some features in Cons that make it both.
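In the meantime, just to give a flavor of what the Conscript side looks like if you drive csc by hand through Cons's generic Command builder (my module wraps this sort of thing up; the file names and flags here are simplified):

    # Conscript for a C# library target
    Import qw( env );

    @sources = qw( AssemblyInfo.cs Widget.cs WidgetForm.cs );

    # %> expands to the target, %< to all of the sources
    Command $env 'obj/MyLib.dll', @sources,
        'csc /nologo /target:library /out:%> /res:image.png,Resources.image.png %<';

Since the whole command line is part of the target's signature, changing a flag or adding a /res is enough to get the dll rebuilt.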
If I haven't already given you enough, here's another reason why VS.NET is a Fisher Price development tool that you shouldn't waste any more time with.
The csharp command line compiler (csc) itself is crashing on me with this:
I have a feeling this one's gonna take a while to track down. Now, the thing is, I can compile this project fine from within the IDE--if only it showed me the compile command lines in the build output window, I'd be set. But unfortunately the IDE does not actually invoke csc. Instead, it creates an instance of the C# compiler class and feeds it the source files directly.
Cute I suppose. But anyway, I wouldn't care if only VS.NET had kept around the export makefile option from VC6. Instead they eliminated it, the word being that it was just too confusing for users & was the source of too many problems. What? We wouldn't want to confuse...the...developers? Who the hell is using this thing anyway? (A rhetorical question.)
Again, I'd like to state that if VS.NET's devenv had acceptable command-line build functionality I wouldn't even be here. But it is slow, incomplete, and wrong. When I build a solution it always rebuilds the whole thing--regardless of whether targets are really out of date. Clean is not implemented. I can't specify where intermediate files go, so they end up in my source tree--ack! And it is slow to load. Just running csc myself is much more responsive.
Suppose you start using Tangram on a project. You make your first release, and then realize you want to change your class (and hence database) schema...you want to get rid of some fields, add some others, or maybe even refactor classes. But you want users to be able to upgrade without losing all the data they've stored under the old schema.
Usually you'd be SOL. You could say, "Sorry, userbase: if you want new features X, Y, and Z, you'll just have to lose all your data and upgrade." If you want to continue to have a userbase, though, you'd better start writing migration scripts, which will no doubt be long and painful. Or you could just extinguish your burning desire to redesign and adopt a backwards-compatible coding policy: adding new fields is okay, deprecating fields is okay, add new classes if you like, but never refactor. And simultaneously watch your codebase turn into a heap of dung.
This is a super common scenario with data-centric apps. Have I motivated this problem sufficiently? Okay.
So the solution I'm proposing is a way to define a mapping from one Tangram schema to another so you can have your cake and eat it too. You can redesign all you want and migration will be taken care of automatically, once you've defined the mapping.
Some thoughts. You should only have to mention the things that changed in the mapping; all other fields and classes stay the same, saving you keystrokes. Mere field changes would be trivial to deal with; class relationship changes, on the other hand, might get really tricky. As for how to actually do the migration, maybe this: set up a database with the new schema, query all the objects out of the old one, transform them into new objects, and insert them into the new database. Or you could somehow use the mapping to generate SQL that would do the transformation. If possible, this latter approach has distinct advantages: better performance (it's all server-side) and no need for a second database alongside the first. However, SQL sucks pretty badly as a programming language.
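To make the first approach concrete, here's a rough sketch of the object-by-object migration loop I have in mind. The %mapping hash and transform_object routine stand in for whatever the mapping definition would generate, and the Tangram calls are from memory, so treat this as a sketch rather than working code:

    use Tangram;

    # $old_schema and $new_schema are Tangram::Schema objects built elsewhere
    my $old = Tangram::Relational->connect($old_schema, $old_dsn, $user, $password);
    my $new = Tangram::Relational->connect($new_schema, $new_dsn, $user, $password);

    # for each class covered by the mapping, pull every object out of the
    # old store, run it through the mapping, and insert the result
    for my $class (keys %mapping) {
        my $remote = $old->remote($class);
        for my $obj ($old->select($remote)) {
            $new->insert( transform_object($mapping{$class}, $obj) );
        }
    }

    $old->disconnect;
    $new->disconnect;

Dumb and slow, but it only has to run once per upgrade, and it keeps all the transformation logic in Perl, right next to where the mapping is defined.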
Holy crap, this is braindead. VS.NET's command-line C# compiler, csc, compiles directly to the final binary target with no separate linking step. What this means is that all the files which go into a dll must be compiled with a single command line.
Okay, now what's so bad about this, you ask? Well, say there are 10 source files compiled into my dll and I change one of them. I have to recompile 9 source files which don't need recompiling in order to rebuild that dll, rather than compile 1 file and link it (a far cheaper operation) with 9 existing object files. It's just not granular enough, dammit!
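To spell it out, every rebuild of the dll means running the entire command line again, versus the compile-then-link cycle you get from a C-style toolchain (file names here are made up):

    rem C#: the whole dll in one shot, every single time
    csc /target:library /out:MyLib.dll Class1.cs Class2.cs Class3.cs Class4.cs

    rem C/C++: recompile only the file that changed, then relink
    cl /nologo /c Class2.c
    link /nologo /dll /out:MyLib.dll Class1.obj Class2.obj Class3.obj Class4.obj

With csc, the finest-grained unit a build system can work with is the whole assembly.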
That's it. Screw this, I am not going to try to improve our build any--a hopeless task. The C# app was the one target I had hoped would bring some sanity to the build under a real build system like Cons. Besides sanity, I had hoped to improve build time, specifically by managing build dependencies correctly--they're not managed at all right now. Every time I build it's a full rebuild, which takes about 3 minutes. Big bummer when you're testing and verifying that developers have indeed fixed certain bugs. Those 3 minutes are an intolerable slowdown for me.
So I will just stick with my current piece-of-crap pile of dirty batch files. Dirty tools beget dirty builds.