Incorrect RAID size

If anyone could offer any advice, it would be much appreciated.



I am running Ubuntu Server 18.04.2 LTS with nine 3 TB WD Red hard drives in a RAID 5 array managed by mdadm.
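
For reference, RAID 5 stores one drive's worth of parity, so nine 3 TB drives should give a usable capacity of about 8 × 3 TB. A quick back-of-the-envelope check in plain shell (treating each drive as exactly 3 × 10^12 bytes, which is only an approximation):

    # RAID 5 usable capacity = (number of drives - 1) x drive size
    # 3 TB drives are ~3 * 10^12 bytes; 1 TiB = 2^40 bytes
    echo "scale=2; (9 - 1) * 3 * 10^12 / 2^40" | bc    # ~21.8 TiB usable

Tools that report sizes in binary units will therefore show a "24 TB" array as roughly 21.8 TiB.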



Is there any reason why I am seeing a difference between the total RAID capacity and the file system size? I have attached some screenshots with (hopefully) the relevant information, and the command output is pasted below. If I'm missing anything, please let me know.



Many thanks.



(Screenshots attached: File System Mount, MDADM Settings, lsblk output.)



df -T /srv/RAID:

    Filesystem     Type      1K-blocks        Used   Available Use% Mounted on
    /dev/md127     ext4    20428278764 16279337724  4148924656  80% /srv/RAID




mdadm --detail /dev/md127:

    /dev/md127:
               Version : 1.2
         Creation Time : Mon Jul 23 20:38:24 2018
            Raid Level : raid5
            Array Size : 23441080320 (22355.16 GiB 24003.67 GB)
         Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
          Raid Devices : 9
         Total Devices : 9
           Persistence : Superblock is persistent
         Intent Bitmap : Internal
           Update Time : Sun Mar 10 10:31:32 2019
                 State : clean
        Active Devices : 9
       Working Devices : 9
        Failed Devices : 0
         Spare Devices : 0
                Layout : left-symmetric
            Chunk Size : 512K
    Consistency Policy : bitmap
                  Name : kenny:RAID (local to host kenny)
                  UUID : d293081d:91d806d2:d70f301d:a4023e4a
                Events : 45623

        Number   Major   Minor   RaidDevice State
           0       8      144        0      active sync   /dev/sdj
           1       8      128        1      active sync   /dev/sdi
           2       8      112        2      active sync   /dev/sdh
           3       8       96        3      active sync   /dev/sdg
           4       8       48        4      active sync   /dev/sdd
           5       8       32        5      active sync   /dev/sdc
           6       8       16        6      active sync   /dev/sdb
           7       8        0        7      active sync   /dev/sda
           8       8       64        8      active sync   /dev/sde
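
Converting the two figures above to the same units makes the gap explicit. Both df's 1K-blocks column and mdadm's Array Size are counts of 1 KiB blocks, so dividing by 1024^3 gives TiB (a quick check with bc, using the numbers copied from the output above):

    echo "scale=2; 20428278764 / 1024^3" | bc    # ext4 size reported by df     -> ~19.0 TiB
    echo "scale=2; 23441080320 / 1024^3" | bc    # Array Size reported by mdadm -> ~21.8 TiB

So the md array itself is about 21.8 TiB, while the ext4 filesystem sitting on it is only about 19.0 TiB.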

  • Please do not post screenshots of text. Rather, use formatting styles to highlight e.g. shell output. Please also add the output of mdadm --detail /dev/md127 and df -T /srv/RAID.

    – Thomas
    Mar 10 at 10:25

  • Thank you for responding

    – Arron Rutland
    Mar 10 at 11:03

  • Could you please specify what size differences you are talking about?

    – Soren A
    Mar 10 at 11:16

  • The file system shows 19.03 TB and the RAID shows 21.8 TB.

    – Arron Rutland
    Mar 10 at 11:18

  • Thank you. I'll get on it as soon as I get home. I noticed the issue when I added the 9th hard drive. I expanded the RAID (which grew in size) once it completed, but the file system size didn't seem to change.

    – Arron Rutland
    Mar 10 at 12:00
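
As the last comment describes, the array was grown from eight to nine drives but the filesystem was never resized afterwards: growing an md array only enlarges the underlying block device, and the ext4 filesystem on top of it has to be expanded separately. The ~19.0 TiB filesystem is roughly the size the old eight-drive array provided. A minimal sketch of the usual follow-up, assuming the array is /dev/md127 mounted at /srv/RAID as shown above, the reshape has finished, and a backup exists:

    # make sure the reshape/resync is complete before touching the filesystem
    cat /proc/mdstat

    # grow the ext4 filesystem to fill the enlarged md device;
    # ext4 can be grown online, so /srv/RAID can stay mounted
    sudo resize2fs /dev/md127

    # confirm the new size
    df -hT /srv/RAID

After this, df should report a capacity close to the 21.8 TiB shown by mdadm, minus ext4's own metadata overhead.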