I can't speak for 500GB sizes, but these are the results I got putting ext3 on
three USB externals. (I discovered BitTorrent.)
120 GB advertised == 113 GB free; difference = 7
250 GB advertised == 230 GB free; difference = 20
320 GB advertised == 294 GB free; difference = 26
Looks reasonable to me. If I had a 500 (two 320s are the same price or cheaper
at the moment) I would first try formatting it without partitioning and see what
happens. Obviously I crossed some kind of threshold between 120 and 250, but
250 to 320 scales roughly in proportion.
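For anyone who wants to try the "format without partitioning" experiment, here is a minimal sketch. It runs against a file-backed image so nothing real is touched; the `/dev/sdX` name in the comment is a placeholder for an actual whole-disk device, not a partition.

```shell
# Create a small scratch image standing in for a raw disk.
dd if=/dev/zero of=disk.img bs=1M count=64

# Format the whole thing directly, no partition table.
# -F: proceed even though the target is a plain file, not a block device.
mkfs.ext3 -F -q disk.img
# On a real drive this would be e.g.:  mkfs.ext3 /dev/sdX
# (the whole device, not /dev/sdX1)

# Inspect the result: block counts, features, journal info.
dumpe2fs -h disk.img
```

`dumpe2fs -h` is a handy way to compare the reserved/overhead figures before and after such experiments.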
The only second thought I have had is that if the external is only for storage
then ext2 should be fine, with none of that space lost to the journal. I would
only use ext3 on drives with frequent reads and writes. Without regular
accesses there should be no reason for fsck-ing except in the rare case of a
crash during an access. I was very happy with ext3 even at the 40GB disk size,
as the fsck took on the order of 20 minutes with ext2. That is just something I
have thought of doing if I get another drive, but I probably won't try to do
"better" since I already have "good enough."
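You don't even have to reformat to reclaim the journal space: tune2fs can drop the journal from an existing ext3 filesystem, leaving it mountable as ext2. A sketch, again on a file-backed image (a real target would be a partition like `/dev/sdX1`, which is a placeholder here):

```shell
# Build a throwaway ext3 filesystem in a file.
dd if=/dev/zero of=disk.img bs=1M count=64
mkfs.ext3 -F -q disk.img

# Remove the has_journal feature: ext3 becomes plain ext2,
# and the space the journal occupied is freed.
tune2fs -O ^has_journal disk.img

# The features line should no longer list has_journal.
dumpe2fs -h disk.img | grep -i features
```

The reverse (`tune2fs -j`) adds a journal back later if the drive's usage pattern changes.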
Instead I am getting around to looking up how to take better control of the
journal size, since plain storage cannot possibly need one as big as a
potentially multi-user system with many operations going at one time. I will be
looking to get the journal down to the bare minimum, which is presumably no
larger than twice the largest file that might be mid-access during a power
outage or some such. But that would be something like 4.6GB x 2, which isn't
much different from the default. So it is not very high on my list of study
priorities.
Also, my experience says that if you do anything other than the simplest
defaults, take good notes on paper and put them where you will not lose them.
Being too clever by half is a pain sometimes.
Larry Shiff http://www.yqcomputer.com/