Scott Hanselman

Alphabetizing your .NET Resource (RESX) files

April 09, 2005 - Posted in ASP.NET | Corillian | PDC | XML | Tools

We internationalize all our ASP.NET web applications, and usually end up with hundreds, if not thousands, of strings in our RESX files. The names are all very hierarchical, like "AccountSummary.DataGrid.Columns.AvailableBalance", so you can see why it's important to keep them alphabetized. Additionally, there are usually a dozen or more of these RESX files spread across multiple levels of directories.

So, a little snazzy batch file action:

for /f "Tokens=*" %%i in ('dir /b /s *.resx') do nxslt "%%i"  alpharesx.xslt -o temp.xml & copy temp.xml "%%i" /y

The "dir /b /s" means "directory, bare format, full path, all subdirectories," which provides the iterator for the FOR/DO batch loop. I had to use a temp.xml file because the nxslt tool didn't allow me to use the same filename for both the input and the output.
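For anyone outside Windows batch land, the recursive file discovery half of that one-liner can be sketched in Python. This is just a hypothetical stand-in for the `dir /b /s *.resx` iteration; the XSLT transform step itself isn't shown:

```python
# Rough cross-platform equivalent of "dir /b /s *.resx": walk the tree
# and yield the full path of every RESX file, which a caller could then
# feed to an XSLT processor one file at a time.
import os

def find_resx(root="."):
    """Yield the full path of each .resx file under root, recursively."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith(".resx"):
                yield os.path.join(dirpath, name)
```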

"nxslt" is a fantastic command-line .NET XSLT front end by Oleg Tkachenko. There are many command-line XSLT tools out there, but Oleg's is by far the most powerful and flexible.

The alpharesx.xslt is this XSLT stylesheet. It copies (xsl:copy-of) the RESX header stuff, then sorts the data nodes by name. There are, no doubt, even 'terser' XSLT-y ways to do this, but this is a start. Thanks to Travis Illig for the idea and the starting chunk of XSLT.

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsl:template match="/" xml:space="preserve">
        <root>
            <xsl:copy-of select="root/xsd:schema"/>
            <xsl:copy-of select="root/resheader"/>
            <xsl:apply-templates select="root/data">
                <xsl:sort select="@name" />
            </xsl:apply-templates>
        </root>
    </xsl:template>
   
    <xsl:template match="data" xml:space="preserve">
        <data><xsl:attribute name="name"><xsl:value-of select="@name" /></xsl:attribute>
            <value><xsl:value-of select="value" /></value>
        </data>
    </xsl:template>
</xsl:stylesheet>
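For comparison, the same idea can be sketched in Python with the standard library. This assumes a simplified RESX with only header and data elements; it leaves anything else (like the inline xsd:schema) where it sits and just re-appends the data nodes in sorted order:

```python
# A rough Python equivalent of the XSLT above: keep the header elements
# where they are and re-append the <data> elements sorted by name.
import xml.etree.ElementTree as ET

def alphabetize_resx(resx_text):
    root = ET.fromstring(resx_text)
    data = root.findall("data")
    for el in data:                      # detach the data nodes...
        root.remove(el)
    for el in sorted(data, key=lambda e: e.get("name", "")):
        root.append(el)                  # ...and re-add them in order
    return ET.tostring(root, encoding="unicode")
```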

I'm also sure there are all sorts of SED-style ways to accomplish this, but this seems to be the simplest solution, short of writing a little custom C# program. It was also faster to write than a new program.

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

facebook bluesky subscribe
About   Newsletter
Hosting By
Hosted on Linux using .NET in an Azure App Service

Overheard - Friday, April 8, 2005

April 08, 2005 - Posted in Javascript | Bugs

Overheard while debugging some client-side JavaScript today at work...

"How come your body is bigger than your documentElement?"

"That's what she said!"


CVS and Subversion vs. VSS/SourceSafe

April 08, 2005 - Posted in DasBlog | Tools

Here's a response I gave to a fellow on an email list I'm on yesterday. He was having trouble understanding that CVS (and other source control systems) doesn't "lock" files (checkout with reservation), as he had been a VSS person for his whole career. This is what I said (a bit is oversimplified, but the essence is there):

With VSS you:

Checkout with reservations – meaning that a developer has an exclusive lock on that file.

With CVS you:

Edit and Merge optimistically – developers can make a change to any file any time. However, CVS is the authoritative source and they must Update and Commit their changes to the repository.

It sounds scary, but it’s HIGHLY productive and very powerful. It works more often than not, as most devs don’t work on the exact same function. It is largely this concept that makes Continuous Integration work.

It’s a FANTASTICALLY powerful way to manage source. No need to wait for that “locked” file. Source files can be edited at any time. For example, if there is a file with two functions:
 
Line1: public void a()
Line2: {}
Line3:
Line4: public void b()
Line5: {}

Two developers can edit the same file, different functions: 

Line1: public void aChanged() <- DEV1
Line2: {}
Line3:
Line4: public void bDifferent() <- DEV2
Line5: {}

These changes are called “non-conflicting.” CVS will automatically merge them. Now, we don’t know if the files will still compile (that’s the job of the developer and the build system), but since different lines of text were changed, they don’t conflict.

If two devs change the SAME LINE, then CVS holds both changes with “conflict markers” and demands that the developer RECONCILE the changes before committing. I manage all the dasBlog development worldwide using CVS. How could I do that if I allowed you in India to LOCK a file for days at a time? Instead, when you are done working in your sandbox, it’s your responsibility to merge your changes in with the mainline.
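The merge behavior described above can be sketched in a few lines of Python. To be clear, this is not CVS's actual algorithm (CVS does diff3-style merging against a common ancestor); it's a naive line-by-line illustration that assumes both edits keep the same line count as the base:

```python
# Naive illustration of CVS-style optimistic merging: take whichever
# side changed a line; flag a conflict when both changed it differently.
def merge3(base, dev1, dev2):
    merged, conflicts = [], []
    for i, (b, a, c) in enumerate(zip(base, dev1, dev2)):
        if a == c:                # both agree (or neither changed it)
            merged.append(a)
        elif a == b:              # only DEV2 touched this line
            merged.append(c)
        elif c == b:              # only DEV1 touched this line
            merged.append(a)
        else:                     # both changed the same line: conflict
            merged.append("<<<<<<<\n%s\n=======\n%s\n>>>>>>>" % (a, c))
            conflicts.append(i)
    return merged, conflicts
```

Running this on the five-line example above, DEV1's change to line 1 and DEV2's change to line 4 both land in the merged result with no conflicts; only when both devs edit the same line does a conflict marker appear.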

From http://www.atpm.com/7.03/voodoo-personal.shtml

"CVS (Concurrent Versions System) is an open-source version control tool that’s extremely popular in the Unix/Linux world. Unlike Projector and VOODOO Personal, CVS is a client-server system, which makes it easy for multiple users, possibly scattered across the Internet, to collaborate. Although only the client runs on Classic Mac OS and Windows, the server runs on Mac OS X. In fact, Apple uses it internally to manage the development of Mac OS X, and to make their sources for the Darwin kernel available to the world. CVS’s signature feature is its use of "optimistic" locking to let multiple people work on the same file at the same time. It then automatically merges their changes and signals whether it thinks human intervention will be required to complete the merge. (It seems like it would take magic for CVS to do this reliably, but in practice it has worked very well for me.)"

From http://www.xpro.com.au/Presentations/UsingCVS/Document%20Version%20Control%20with%20CVS.htm

"By default, CVS uses an optimistic multiple writers approach. Everybody has write permission on a file, and changes are merged as the editors commit their changes to the repository. Changes won’t commit without merging. At first, this protocol appears problematic to most people who haven’t used it. In practice, manual intervention is only required for lines which have been modified by both editors involved in the merge. As with file-locking, this protocol is not without risk. It is possible to introduce logical errors to a file which has merged without incident, as added or deleted lines can change the semantics of the prior version. This technique is most effective when changes are committed frequently and the number of simultaneous writers is minimised."


Updating an old iMac's Firmware while running OSX

April 06, 2005 - Posted in Musings

Here's some randomness. I had to do some testing with Safari 1.2, which is apparently only available on Mac OS X 10.3. Version 10.2 has Safari 1.0. Why I need to upgrade the OS to get a new browser, I don't know.

So, here's what went down.

  • Boot up iMac that has OS X 10.2
  • Put in OS X 10.3 disk
  • I'm informed that I need to "update my firmware" and I should visit http://www.apple.com/support/downloads. I'm unable to continue and install OS X 10.3
  • I go to the Apple site and search for "imac firmware"
  • I'm greeted with KB articles from 1999. Apparently "relevance" is low on the priority list there.
  • I finally find some firmware from 2001 and download it.
  • When I run the firmware updater, I'm told it will only run on OS 9. Lovely.
  • I find a copy of OS 9 and install it side-by-side. (What if I DIDN'T have it around?)
  • I go to System Preferences|Startup Disk and tell it to dual boot into OS 9.
  • I boot up, download the firmware again.
  • I run it, and I'm told that I have to reboot AND I have to find a secret recessed button on the iMac's ass and push it in with a pen. Stunning.
  • I reboot, push the funny button. The iMac screams in pain. It then sits for a minute doing nothing.
  • It restarts, again into OS 9 and tells me that it's flashed the firmware successfully.
  • Now, I'm just tired. I can't figure out how to get back into OS X 10.2...
  • I boot off the OS X 10.3 disk and just format the hard drive completely with the new OS X 10.3.

All this to update Safari on a 2-year-old iMac.

Anyone who says it's all peaches and cream in the Mac world is nuts.


How do you organize your code?

April 01, 2005 - Posted in ASP.NET | Corillian | NUnit | NCover | Nant | Tools

Organizedfolders1

How do we organize our code? Well, it all depends on the project and what we're trying to accomplish. We're also always learning. Our big C++ app builds differently than our lightweight C# stuff. However, for the most part we start with some basic first principles.

  • A repeatable build
    • Obvious, maybe, but if you can't build it n times and get the same result, you may have some problems.
  • Continuous Integration
    • When possible and appropriate, have builds automated and build mail sent out.
    • We started with building on every check-in, and for the most part we do that.
    • We added a daily build, so now we will build on every check-in, but there's a 5pm build also.
  • Tools
    • CruiseControl.NET - the goal being a company-wide dashboard.
    • NAnt - building
    • NUnit - testing
    • NCover - code coverage, not formalized yet.
    • FxCop - For the obvious stuff, CLS compliance, etc.
    • NDoc - the shiznit. Use it to automate CHM file creation. That auto-generated CHM is then sucked into a larger 'prose' CHM.
    • libcheck - looking into it. Using it to measure API churn.

As far as layout (picture above), there's no right way. We've got folks who've been doing builds since before Windows, so we have a little *nix-y style in a few things. Again, if it works, use it. I like:

  • source - for source. duh.
  • tools - This CVS module is the source for any tools/utils the project needs. Shared between projects.
  • install - XCopy deployment would be nice, but we do versioned MSIs for everything significant.
  • test - Not Unit-tests per se, mostly larger projects for integration and regression testing.

As far as the build directory is concerned:

Organizedfolders3

Everything under build is 'important.' It kind of speaks for itself. Bin for bins, doc for the CHM.

In source, we have:

Organizedfolders2

Every 'subsystem' has a folder, and usually (not always) a subsystem is a bigger namespace. In this picture there's "Common" and it might have 5 projects under it. There's a peer Common.Test. Under CodeGeneration, for example, there are 14 projects. Seven are sub-projects within the CodeGeneration subsystem, and seven are the NUnit tests for each. The reason you see Common and Common.Test at this level is that they are the 'highest.' But the pattern follows as you traverse the depths. Each subsystem has its own NAnt build file, and these are used by the primary default.build.

In this source directory we've got things like build.bat and buildpatch.bat, the goal being that folks can get the stuff from source control, type BUILD, and be somewhere useful. It's VERY comforting to be able to reliably and simply build an entire system. Recently I had to patch something I'd worked on 18 months ago, but that had used this principle. I did a Get, did a BUILD, and I was cool. We then used Reflector and Assembly Diff to confirm that the patch surface area was minimal.

We DO build things in VS.NET, but for the vast majority of projects, we prefer a command-line build to get real repeatability. With .user files and per-user reference paths, I just don't trust VS.NET to do the right thing.

Junctions

Another thing we make a lot of use of is what we call Junctions. These are NTFS reparse points, similar to Unix symlinks.

Organizedfolders4

In this screenshot of a standard DIR from the command prompt, directories show up as <DIR> while junctions say <JUNCTION>. These junctions point to other directories. But NTFS makes it transparent; you can CD into them, and even Explorer has no clue. Certainly, they can be dangerous, as you can clean out a directory and not realize that it's the authoritative source for something and is being referenced by other projects.

Organizedfolders5

You can download Junction from Sysinternals. You can also use LINKD, which shipped with the Windows 2000 Resource Kit. In this screenshot, the relative SDK folder points to an installed version of our SDK. This allows our CSPROJ files to refer to our assemblies with relative paths, gives developers a lot more flexibility, and makes the NAnt builds simpler.

It also allows us to swap out different versions of things. For example, I could point the SDK folder to a previous (or future!) build of the SDK and build again. This flexibility comes with a cost though - complexity. People have to think a little more. Fortunately, our build guys are pretty savvy and will (shell script-like) go looking for preferred versions of things in standard locations and set up Junctions for folks. 


Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.