How do I expand the root's volume size?
I have Ubuntu Server 18.04 LTS running off a 16GB SanDisk USB pendrive in my server. From what I can remember, I enabled LVM support when I installed Ubuntu on it. For some reason, when I SSH into my server, it says / is using 99.6% of 3.87GB, but sudo fdisk -l /dev/sdm says:
Disk /dev/sdm: 14.6 GiB, 15664676864 bytes, 30595072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 7DF91AEG-9DE1-43B2-A7C7-EB564B51FEB2
Device       Start      End  Sectors  Size Type
/dev/sdm1     2048     4095     2048    1M BIOS boot
/dev/sdm2     4096  2101247  2097152    1G Linux filesystem
/dev/sdm3  2101248 30593023 28491776 13.6G Linux filesystem
And running print at the (parted) prompt with /dev/sdm selected gives me:
Model: SanDisk Cruzer Glide (scsi)
Disk /dev/sdm: 15.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 
Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  1076MB  1074MB  ext4
 3      1076MB  15.7GB  14.6GB
Running sudo df -h gives me:
Filesystem                         Size  Used Avail Use% Mounted on
udev                                12G     0   12G   0% /dev
tmpfs                              2.4G  241M  2.2G  10% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  3.9G  3.9G     0 100% /
tmpfs                               12G     0   12G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                               12G     0   12G   0% /sys/fs/cgroup
/dev/loop0                          90M   90M     0 100% /snap/core/6034
/dev/loop1                          90M   90M     0 100% /snap/core/6130
/dev/sdm2                          976M  155M  755M  17% /boot
tmpfs                              2.4G     0  2.4G   0% /run/user/1000
/dev/loop3                          92M   92M     0 100% /snap/core/6259
I have left my ZFS volume out of the listing above, as it isn't relevant here.
Running sudo vgs gives me:
  VG        #PV #LV #SN Attr   VSize  VFree
  ubuntu-vg   1   1   0 wz--n- 13.58g 9.58g
And, lastly, running sudo lvs gives me:
  LV        VG        Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- 4.00g                                                    
/dev/sdm is my root drive, by the way. Any insight into this would be helpful. I do have ZFS installed managing other disks, but / is ext3 or ext4.
One other thing to note: I enabled LVM because, if my USB drive were ever to go bad, I wanted to be able to restore the data to a new drive, whether smaller or larger than 16GB, and utilize the whole disk.
Tags: ubuntu, filesystems, lvm, zfs, ubuntu-18.04
Found out that the journal logs were taking up about 290 MB. Purging them with sudo journalctl --vacuum-time=2d gave me enough free space to perform the actions Michael suggested.
– leetbacoon
1 hour ago
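For reference, a minimal sketch of checking and trimming the journal before retrying, assuming systemd's journalctl is available (these are standard journalctl options; the space actually reclaimed will vary):

# See how much disk space the systemd journal currently occupies
sudo journalctl --disk-usage

# Drop journal entries older than two days (the command used in the comment above)
sudo journalctl --vacuum-time=2d

# Or cap the journal at a fixed size instead, e.g. 100 MB
sudo journalctl --vacuum-size=100M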
1 Answer
As you can see, there's about 9.58 GiB free in your volume group, so that's how much space you can add to the logical volume.
First, use lvextend to grow the logical volume into the remaining free space:
sudo lvextend --extents +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
Now, resize the ext4 filesystem inside that logical volume; resize2fs can grow it online while it is mounted:
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
Finally, you can see the end result:
sudo df -h /
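As a side note (not part of the original answer), lvextend can also grow the filesystem for you in the same step via its --resizefs option, which runs the appropriate resize tool (resize2fs for ext4) after extending the LV:

# One-step alternative: extend the LV and resize the filesystem together
sudo lvextend --resizefs --extents +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv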
Thank you, but I tried sudo lvextend --extents +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv and it says: /etc/lvm/archive/.lvm_ubuntu-server_21283_417376161: write error failed: No space left on device
– leetbacoon
2 hours ago
@leetbacoon Yes, you do need at least a tiny bit of free space available to complete this. You should delete some files you don't need, such as old logs or whatever.
– Michael Hampton♦
2 hours ago
Beautiful. Worked like a charm. Thank you!! :)
– leetbacoon
2 hours ago
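If it isn't obvious what to delete, as suggested in the comment above, a couple of generic commands can help free a little space first (one possible approach, not from the original thread):

# Show which top-level directories on the root filesystem use the most space
sudo du -xh --max-depth=1 / | sort -h

# Clear the local APT package cache
sudo apt-get clean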