StorEdge Enterprise Backup vs Veritas NetBackup

Post by Julian Reg » Mon, 14 Jul 2003 06:05:24


Hi.

I have been tasked with investigating a backup/restore solution that will
be used to back up a Sun 3310 NAS with up to 1.3TB of data. At the moment,
the L25 tape library with SDLT320 drive(s) looks like the right hardware
for us, but I am interested in opinions with regard to software.

At the moment, we back up our Sun servers (an E450, an E3000 and a V880)
using ufsdump. I am keen to keep doing this for the operating system,
using local DDS4 drives, as it will make a system restore in the event of
a disaster much quicker (or that's my understanding, at least). Over time
we intend to migrate much of our "non-operating system" data off the
servers and onto the NAS.
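
For the OS dumps, something along these lines is what I have in mind (the
device and filesystem names are just examples, not our actual layout):

  # Level 0 dump of the root slice to a local DDS4 drive
  # (u updates /etc/dumpdates, f names the dump device)
  ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s0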

Question is, Sun are selling both StorEdge Enterprise Backup (formerly
Solstice Backup (a rebadged Legato NetWorker?)) and Veritas NetBackup. Are
they equivalent products? Is one aimed at the low end and the other at the
high end? What are the pros and cons of each?

Any help appreciated.

JR

StorEdge Enterprise Backup vs Veritas NetBackup

Post by Michael Vi » Mon, 14 Jul 2003 13:38:11

In article < XXXX@XXXXX.COM >, Julian Reg wrote:

They both have their pluses and minuses. I haven't seen the current
versions, but neither of them did "vaulting" the last time I looked, some
years ago. NetBackup used to have an add-on to let you keep track of
off-site media that aren't in the jukebox. If your tapes are going
off-site, you'll need some way to track where they are.
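
Short of a real vaulting feature, even a flat file is better than nothing;
something like this (the path and volume label here are made up for the
example):

  # Record which volumes went off-site and when (illustrative only;
  # a proper vaulting add-on automates this and the recalls)
  echo "VOL0042 offsite `date`" >> /var/adm/tape.locations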

The big problem with all the backup products is the catalog. While media
is "active" and contains files, the catalog holds this information. The
longer you retain the backups on the media, the larger the catalog
becomes. Be prepared to have this catalog on storage that has _lots_ of
room to grow. Calculate how many files you back up daily and how long
they'll be around. Both vendors have formulas to figure out how much
overhead the catalog will take. Another big problem is that when backups
"expire" out of the catalog, maintenance must be run regularly to reclaim
the expired space. In a couple of the products I've used with very large
catalogs (and yours will be right up there), this can take over 8 hours,
during which backups cannot run and the process cannot be halted or
interrupted. Corrupting this database is _very_ bad, as it can leave you
unable to do restores. Be sure there are tools aplenty to repair it. Be
sure you can run backups while the catalog is in maintenance mode, or do
maintenance on only part of the catalog. If it's multiple files rather
than a single monolithic file, that's also better.
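
As a back-of-the-envelope example (the 200 bytes per index entry is only
a guess; use the vendor's own formula for real numbers):

  # files backed up per day * days retained * rough bytes per entry
  echo "500000 * 30 * 200" | bc
  # -> 3000000000, i.e. about 3 GB of catalog for a 30-day retention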

Some of the products just "forget" the files on an expired tape and
there's no way to recover them. I know you can get them off with Legato,
but I don't know about NetBackup. This could be a problem if you're doing
archiving. I'd stay with ufsdump for that.
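
With ufsdump tapes you can always browse and pull files back by hand,
e.g. (device name assumed):

  # Interactive restore session from a ufsdump tape
  ufsrestore if /dev/rmt/0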

Contact both companies and get references for accounts that back up
amounts of data similar to yours. Find out what they do.

--
DeeDee, don't press that button! DeeDee! NO! Dee...


StorEdge Enterprise Backup vs Veritas NetBackup

Post by len » Tue, 15 Jul 2003 08:56:21

In comp.unix.admin, Michael Vi wrote:

This used to be an issue for us with SBU, but it isn't anymore. When the
index format changed between versions 5.x and 6, the indexes shrank quite
a bit. I used to run nsrck regularly to reclaim space, but now the index
stays at 100% used (i.e., no gaps in the data, a good thing). Usage in
the index filesystem dropped by close to 50% with the change.

You're right about the time. With ~90 clients, some with few and some
with many (xxx,xxx) files, it generally took us about 6 hours to check
them all. With the new format, it's much more efficient. In general,
we've been advised by Sun not to run the index utilities as a maintenance
tool, but only as needed as a repair tool. These days, the indexes pretty
much take care of themselves. However, nsrck can be interrupted without
harm. In fact, until I upped the data size and file handle limits with
ulimit, it would regularly die about halfway through, but without any
damage.
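
For the record, what I do before kicking off a check is roughly this (the
limit values are just what worked for us, and the exact nsrck options vary
by release, so check the man page):

  # Raise the per-process limits in the (ksh) shell that spawns nsrck
  ulimit -d unlimited     # data segment size
  ulimit -n 1024          # open file descriptors
  nsrck                   # check the client file indexes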

SBU (Legato) has two levels of indexing (catalogs): the media index
contains which savesets (typically filesystems) are on which media, and
the file index contains which files are in a given saveset. When the
Browse Policy is passed, the file indexes expire and it's no longer
possible to browse for files during a restore. However, you can still
recover entire filesystems and (I think) even a given file if you know
the file's location. When the Retention Policy is passed, nothing changes
except that the saveset is considered recyclable. When all savesets on a
given volume pass the Retention Policy, that volume is recyclable and
will be automatically reused if physically available. However, the media
index entries for that volume (tape) and its savesets remain intact until
the volume is relabeled (erased). To get the file index back for an
expired volume, you have to scan it in, which is generally about a 3-4
hour process for a DLT7000 volume.
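
If you do have to get a file index back, scanner is the tool, along the
lines of (device name assumed; see scanner(1m) for your release):

  # Re-read the volume and rebuild the media and file index entries
  scanner -i /dev/rmt/0cbn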

For simplicity, we generally set both policies to the same value.

I understand your point, but see above re: the media index and scanning.
The data will stay on the tape, and even if the indexes are deleted,
scanning the tape in will rebuild all that. Still, having backups on more
than one piece of media never hurts. In fact, before I upgrade Solaris
and Veritas Volume Manager on our production server next week, I'll run
backups to both DLT and 8mm Mammoth tape.

--

-- Len Philpot ><> --
-- XXXX@XXXXX.COM http://philpot.org/ --
-- XXXX@XXXXX.COM (alternate email) --