Discussion:
xfs_repair memory requirements per TB
Rene Salmon
2008-01-22 23:01:22 UTC
Hi,

Reading the "Repairing a possibly incomplete xfs_growfs command?" thread
this month makes me wonder if there is some type of rough formula or
guesstimation cheat sheet to figure out how much memory and swap one
would need for an xfs_repair given a file system with many terabytes.


Say I have an 8TB LUN that needs an xfs_repair. What would be the rough
memory requirements and swap space?

Thanks
Rene
Barry Naujok
2008-01-23 02:51:41 UTC
Post by Rene Salmon
Hi,
Reading the "Repairing a possibly incomplete xfs_growfs command?" thread
this month makes me wonder if there is some type of rough formula or
guesstimation cheat sheet to figure out how much memory and swap one
would need for an xfs_repair given a file system with many terabytes.
Say I have an 8TB LUN that needs an xfs_repair. What would be the rough
memory requirements and swap space?
General rule of thumb at the moment is 128MB of RAM/TB of filesystem
plus 4MB/million inodes on that filesystem.
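Put as a quick sketch (the 10-million inode count below is made up for illustration; `df -i` reports the real figure on a mounted filesystem):

```python
# Rough xfs_repair memory estimate from the rule of thumb above:
# 128 MB of RAM per TB of filesystem, plus 4 MB per million inodes.
def repair_ram_mb(fs_size_tb, inode_count):
    return 128 * fs_size_tb + 4 * (inode_count / 1_000_000)

# Rene's 8 TB LUN with, say, 10 million inodes (hypothetical count):
print(repair_ram_mb(8, 10_000_000))  # -> 1064.0, i.e. roughly 1 GB of RAM
```

So on this rule an 8 TB filesystem needs on the order of 1 GB, plus some swap headroom for a badly damaged filesystem.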

Barry.
Ralf Gross
2008-01-23 08:53:39 UTC
Post by Barry Naujok
Post by Rene Salmon
Reading the "Repairing a possibly incomplete xfs_growfs command?" thread
this month makes me wonder if there is some type of rough formula or
guesstimation cheat sheet to figure out how much memory and swap one
would need for an xfs_repair given a file system with many terabytes.
Say I have an 8TB LUN that needs an xfs_repair. What would be the rough
memory requirements and swap space?
General rule of thumb at the moment is 128MB of RAM/TB of filesystem
plus 4MB/million inodes on that filesystem.
Did this change lately? I found the rule of thumb: 2 GB RAM for 1 TB
of disk storage + some RAM per x inodes.

http://oss.sgi.com/archives/xfs/2005-08/msg00045.html

I'm interested in this too, because we'll get an additional 15 TB of
disk space soon. Until now I have created filesystems of at most 7-8 TB
on a server with 16 GB RAM. The 15 TB (mostly 2-5 GB large files) will
be used on a server with only 6 GB RAM (Debian etch, xfsprogs 2.8.11-1).
We plan to expand the RAM to 12 GB. What size would be safe for an XFS
filesystem on a server with 12 GB RAM (given mostly large files)?

Ralf
David Chinner
2008-01-24 00:28:28 UTC
Post by Ralf Gross
Post by Barry Naujok
Post by Rene Salmon
Reading the "Repairing a possibly incomplete xfs_growfs command?" thread
this month makes me wonder if there is some type of rough formula or
guesstimation cheat sheet to figure out how much memory and swap one
would need for an xfs_repair given a file system with many terabytes.
Say I have an 8TB LUN that needs an xfs_repair. What would be the rough
memory requirements and swap space?
General rule of thumb at the moment is 128MB of RAM/TB of filesystem
plus 4MB/million inodes on that filesystem.
Did this change lately? I found the rule of thumb: 2 GB RAM for 1 TB
of disk storage + some RAM per x inodes.
http://oss.sgi.com/archives/xfs/2005-08/msg00045.html
That figure was based on reported usage during live repair runs.

I think Barry discovered the difference to be things external
to repair such as heap fragmentation and has since corrected
the worst of the issues so requirements are, in general,
much closer to the theoretical numbers now.

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
Barry Naujok
2008-01-24 01:26:34 UTC
Post by David Chinner
Post by Ralf Gross
Post by Barry Naujok
Post by Rene Salmon
Reading the "Repairing a possibly incomplete xfs_growfs command?" thread
this month makes me wonder if there is some type of rough formula or
guesstimation cheat sheet to figure out how much memory and swap one
would need for an xfs_repair given a file system with many terabytes.
Say I have an 8TB LUN that needs an xfs_repair. What would be the rough
memory requirements and swap space?
General rule of thumb at the moment is 128MB of RAM/TB of filesystem
plus 4MB/million inodes on that filesystem.
Did this change lately? I found the rule of thumb: 2 GB RAM for 1 TB
of disk storage + some RAM per x inodes.
http://oss.sgi.com/archives/xfs/2005-08/msg00045.html
That figure was based on reported usage during live repair runs.
I think Barry discovered the difference to be things external
to repair such as heap fragmentation and has since corrected
the worst of the issues so requirements are, in general,
much closer to the theoretical numbers now.
Yes, quite a few memory improvements have been made.

Right now, I can repair a 9TB filesystem with ~150 million inodes
in 2GB of RAM without going to swap using xfs_repair 2.9.4 and
with no custom/tuning/config options.
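(For comparison, this data point lines up with the 128 MB/TB + 4 MB/million-inode rule of thumb quoted earlier in the thread; a quick back-of-the-envelope check:)

```python
# Rule of thumb: 128 MB per TB plus 4 MB per million inodes,
# applied to Barry's 9 TB / ~150 million inode repair run.
estimate_mb = 128 * 9 + 4 * 150
print(estimate_mb)  # -> 1752, i.e. just under the 2 GB of RAM he used
```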

Regards,
Barry.
Rene Salmon
2008-01-24 15:50:16 UTC
Post by Barry Naujok
Post by David Chinner
Post by Ralf Gross
Post by Barry Naujok
General rule of thumb at the moment is 128MB of RAM/TB of filesystem
plus 4MB/million inodes on that filesystem.
Did this change lately? I found the rule of thumb: 2 GB RAM for 1 TB
of disk storage + some RAM per x inodes.
http://oss.sgi.com/archives/xfs/2005-08/msg00045.html
That figure was based on reported usage during live repair runs.
I think Barry discovered the difference to be things external
to repair such as heap fragmentation and has since corrected
the worst of the issues so requirements are, in general,
much closer to the theoretical numbers now.
Yes, quite a few memory improvements have been made.
Right now, I can repair a 9TB filesystem with ~150 million inodes
in 2GB of RAM without going to swap using xfs_repair 2.9.4 and
with no custom/tuning/config options.
Thanks. That is great news about the memory improvements. We currently
run SLES 10 SP1 which comes with:

hpcxe005:# xfs_repair -V
xfs_repair version 2.8.16

others come with:

hpcxe001:~ # xfs_repair -V
xfs_repair version 2.9.2


Did the memory improvements make it into 2.8.16? How about 2.9.2? If not,
I take it we can download the latest source and just keep a 2.9.4
xfs_repair binary lying around in case we ever need to use it. Would
using a 2.9.4 xfs_repair binary on a 2.8.16-created XFS file system
cause any problems?

Thanks
Rene
Ralf Gross
2008-01-24 16:07:14 UTC
Post by Rene Salmon
...
Post by Barry Naujok
Right now, I can repair a 9TB filesystem with ~150 million inodes
in 2GB of RAM without going to swap using xfs_repair 2.9.4 and
with no custom/tuning/config options.
Thanks. That is great news about the memory improvements. We currently
run SLES 10 SP1 which comes with:
hpcxe005:# xfs_repair -V
xfs_repair version 2.8.16
hpcxe001:~ # xfs_repair -V
xfs_repair version 2.9.2
Did the memory improvements make it into 2.8.16? How about 2.9.2? If not,
I take it we can download the latest source and just keep a 2.9.4
xfs_repair binary lying around in case we ever need to use it. Would
using a 2.9.4 xfs_repair binary on a 2.8.16-created XFS file system
cause any problems?
I did this today: compiled 2.9.5 on Debian etch and put it in
/opt/xfsprogs, just in case I need it.

On #xfs I got the answer that xfs_check and xfs_repair should be fine
for an fs that was created with an older mkfs.xfs.

Ralf
Barry Naujok
2008-01-25 00:01:09 UTC
Post by Rene Salmon
Post by Barry Naujok
Post by David Chinner
Post by Ralf Gross
Post by Barry Naujok
General rule of thumb at the moment is 128MB of RAM/TB of filesystem
plus 4MB/million inodes on that filesystem.
Did this change lately? I found the rule of thumb: 2 GB RAM for 1 TB
of disk storage + some RAM per x inodes.
http://oss.sgi.com/archives/xfs/2005-08/msg00045.html
That figure was based on reported usage during live repair runs.
I think Barry discovered the difference to be things external
to repair such as heap fragmentation and has since corrected
the worst of the issues so requirements are, in general,
much closer to the theoretical numbers now.
Yes, quite a few memory improvements have been made.
Right now, I can repair a 9TB filesystem with ~150 million inodes
in 2GB of RAM without going to swap using xfs_repair 2.9.4 and
with no custom/tuning/config options.
Thanks. That is great news about the memory improvements. We currently
run SLES 10 SP1 which comes with:
hpcxe005:# xfs_repair -V
xfs_repair version 2.8.16
hpcxe001:~ # xfs_repair -V
xfs_repair version 2.9.2
Did the memory improvements make it into 2.8.16? How about 2.9.2? If not,
I take it we can download the latest source and just keep a 2.9.4
xfs_repair binary lying around in case we ever need to use it. Would
using a 2.9.4 xfs_repair binary on a 2.8.16-created XFS file system
cause any problems?
The memory improvements showed up in 2.9.2.

As Ralf said, xfs_repair can always fix older mkfs'ed filesystems.

Regards,
Barry.
