Incorrect RAID size
If anyone could offer any advice, it would be much appreciated.
I am running Ubuntu Server 18.04.2 LTS with 9 x 3 TB WD Red hard drives in a RAID 5 configuration, using mdadm.
Is there any reason why I am seeing a difference in total RAID size capacity? I have included the relevant command output below. If I'm missing anything, please let me know.
Many thanks.
df -T /srv/RAID
Filesystem     Type      1K-blocks        Used   Available Use% Mounted on
/dev/md127     ext4    20428278764 16279337724  4148924656  80% /srv/RAID
mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Mon Jul 23 20:38:24 2018
        Raid Level : raid5
        Array Size : 23441080320 (22355.16 GiB 24003.67 GB)
     Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
      Raid Devices : 9
     Total Devices : 9
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sun Mar 10 10:31:32 2019
             State : clean
    Active Devices : 9
   Working Devices : 9
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : bitmap
              Name : kenny:RAID (local to host kenny)
              UUID : d293081d:91d806d2:d70f301d:a4023e4a
            Events : 45623

    Number   Major   Minor   RaidDevice State
       0       8      144        0      active sync   /dev/sdj
       1       8      128        1      active sync   /dev/sdi
       2       8      112        2      active sync   /dev/sdh
       3       8       96        3      active sync   /dev/sdg
       4       8       48        4      active sync   /dev/sdd
       5       8       32        5      active sync   /dev/sdc
       6       8       16        6      active sync   /dev/sdb
       7       8        0        7      active sync   /dev/sda
       8       8       64        8      active sync   /dev/sde
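
For context on the figures above: RAID 5 usable capacity is (number of devices − 1) × per-device size. A rough sanity check in the shell, using the Used Dev Size reported by mdadm (a sketch, not output from the original post):

# RAID 5 usable capacity = (devices - 1) x per-device size, in 1K blocks
echo $(( (9 - 1) * 2930135040 ))   # 23441080320 KiB ~ 21.8 TiB: matches the mdadm Array Size
echo $(( (8 - 1) * 2930135040 ))   # 20510945280 KiB ~ 19.1 TiB: close to df's 20428278764 KiB total

The df total is roughly what an eight-drive array would provide (less ext4 overhead), which would be consistent with a filesystem that was never grown after the array was expanded.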
raid

Asked Mar 10 at 4:12 by Arron Rutland (new contributor). Edited Mar 10 at 11:31 by Thomas.
Please do not post screenshots of text. Rather use formatting styles to highlight e.g. shell output. Please also add the output of mdadm --detail /dev/md127 and df -T /srv/RAID.
– Thomas, Mar 10 at 10:25

Thank you for responding.
– Arron Rutland, Mar 10 at 11:03

Could you please specify what size differences you are talking about?
– Soren A, Mar 10 at 11:16

The file system shows 19.03 TB and the RAID shows 21.8 TB.
– Arron Rutland, Mar 10 at 11:18

Thank you. I'll get on it as soon as I get home. I noticed the issue when I added the 9th hard drive. I expanded the RAID, which grew in size once the reshape completed, but the file system size didn't seem to change.
– Arron Rutland, Mar 10 at 12:00
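
Given the last comment (the array was grown but the filesystem stayed the same size), the usual follow-up for ext4 on an mdadm array is to grow the filesystem itself. A minimal sketch, assuming the reshape has already finished and /dev/md127 carries the ext4 filesystem:

sudo resize2fs /dev/md127    # grow ext4 to fill the enlarged array (works online)
df -T /srv/RAID              # confirm the new 1K-blocks total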
0 answers