Ubuntu HowTo: How to use the autoinstall config “fill disk” option on Ubuntu 20.04 Automated Server Install?

I’m trying to install 20.04 with an auto install config file like this one:

user-data file:

version: 1
identity:
    hostname: hostname
    username: username
    password: $crypted_pass

But the automated install process (leaving everything at defaults) does NOT partition the disk to use all space, even though that seems to be the default setting when I run through the installer manually.

After manually selecting all the defaults, I got this storage section from /var/log/installer/autoinstall-user-data:

  storage:
    config:
    - {ptable: gpt, serial: INTEL SSDPEKKF256G8L_BTHH85121P8H256B, wwn: eui.5cd2e42c81a42d1d,
      path: /dev/nvme0n1, wipe: superblock-recursive, preserve: false, name: '', grub_device: true,
      type: disk, id: disk-nvme0n1}
    - {device: disk-nvme0n1, size: 1048576, flag: bios_grub, number: 1, preserve: false,
      type: partition, id: partition-0}
    - {device: disk-nvme0n1, size: 256057016320, wipe: superblock, flag: '', number: 2,
      preserve: false, type: partition, id: partition-1}
    - {fstype: ext4, volume: partition-1, preserve: false, type: format, id: format-0}
    - {device: format-0, path: /, type: mount, id: mount-0}

However, it’s not clear to me what I need to include from this in my user-data file to simply select the “fill disk” option.

I have not tried this, but the docs suggest a negative value will ‘fill’.

Source: https://wiki.ubuntu.com/FoundationsTeam/AutomatedServerInstalls/ConfigReference#storage

the server installer allows sizes to be specified as percentages of the containing device. Also, a negative size can be used for the final partition to indicate that the partition should use all the remaining space.
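Applied to the disk in the question, that suggests a storage section like the following (a sketch based on the logged config above, untested at this point; the final partition’s size: -1 is the only change):

```yaml
storage:
  config:
  - {ptable: gpt, path: /dev/nvme0n1, wipe: superblock-recursive,
     preserve: false, grub_device: true, type: disk, id: disk-nvme0n1}
  - {device: disk-nvme0n1, size: 1048576, flag: bios_grub, number: 1,
     preserve: false, type: partition, id: partition-0}
  # size: -1 should make the final partition use all remaining space
  - {device: disk-nvme0n1, size: -1, wipe: superblock, number: 2,
     preserve: false, type: partition, id: partition-1}
  - {fstype: ext4, volume: partition-1, preserve: false, type: format, id: format-0}
  - {device: format-0, path: /, type: mount, id: mount-0}
```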

edit

I tried this out. Using size: -1 for the final partition did fill the disk. I tried using size: 100% and size: -1 for an LVM logical volume to use all the available space, and neither worked. The installer errored in align_down in subiquity/models/filesystem.py.

I also tried 100%FREE, but subiquity errored in dehumanize_size.

I also tried removing the size property for the lvm_partition because the curtin docs say (at https://curtin.readthedocs.io/en/latest/topics/storage.html)

If the size key is omitted then all remaining space on the volgroup will be used for the logical volume.

This does not work, as subiquity errors if there is no size property.

This is unfortunate, as using a percentage for an LVM volume would be a pretty basic use case.

This is the full storage config I tried:

  storage:
    grub:
      reorder_uefi: False
    swap:
      size: 0
    config:
    - {ptable: gpt, path: /dev/sda, preserve: false, name: '', grub_device: false,
      type: disk, id: disk-sda}
    - {device: disk-sda, size: 512M, wipe: superblock, flag: boot, number: 1,
      preserve: false, grub_device: true, type: partition, id: partition-sda1}
    - {fstype: fat32, volume: partition-sda1, preserve: false, type: format, id: format-2}
    - {device: disk-sda, size: 1G, wipe: superblock, flag: linux, number: 2,
      preserve: false, grub_device: false, type: partition, id: partition-sda2}
    - {fstype: ext4, volume: partition-sda2, preserve: false, type: format, id: format-0}
    - {device: disk-sda, size: -1, flag: linux, number: 3, preserve: false,
      grub_device: false, type: partition, id: partition-sda3}
    - name: vg-0
      devices: [partition-sda3]
      preserve: false
      type: lvm_volgroup
      id: lvm-volgroup-vg-0
    - {name: lv-root, volgroup: lvm-volgroup-vg-0, size: 100%, preserve: false,
      type: lvm_partition, id: lvm-partition-lv-root}
    - {fstype: ext4, volume: lvm-partition-lv-root, preserve: false, type: format,
      id: format-1}
    - {device: format-1, path: /, type: mount, id: mount-2}
    - {device: format-0, path: /boot, type: mount, id: mount-1}
    - {device: format-2, path: /boot/efi, type: mount, id: mount-3}

edit 2

I kept digging into this, and it seems that subiquity sometimes stores disk sizes as a float, which led to the uncaught exception. I was able to work around this by not using human-readable sizes. E.g. instead of size: 512M, use size: 536870912.
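Shell arithmetic is an easy way to produce the exact byte values (the 512M and 1G figures below are the ones used in this config):

```shell
# Compute exact byte counts for the human-readable sizes used above,
# since subiquity's size parser choked on the suffixed values here.
echo $(( 512 * 1024 * 1024 ))    # 512M -> 536870912
echo $(( 1024 * 1024 * 1024 ))   # 1G   -> 1073741824
```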

This is a sample storage config that uses the fill option (size: -1 for the last partition) and also configures a logical volume to fill its volume group (size: 100%):

  storage:
    grub:
      reorder_uefi: False
    swap:
      size: 0
    config:
    - {ptable: gpt, path: /dev/sda, preserve: false, name: '', grub_device: false,
      type: disk, id: disk-sda}
    - {device: disk-sda, size: 536870912, wipe: superblock, flag: boot, number: 1,
      preserve: false, grub_device: true, type: partition, id: partition-sda1}
    - {fstype: fat32, volume: partition-sda1, preserve: false, type: format, id: format-2}
    - {device: disk-sda, size: 1073741824, wipe: superblock, flag: linux, number: 2,
      preserve: false, grub_device: false, type: partition, id: partition-sda2}
    - {fstype: ext4, volume: partition-sda2, preserve: false, type: format, id: format-0}
    - {device: disk-sda, size: -1, flag: linux, number: 3, preserve: false,
      grub_device: false, type: partition, id: partition-sda3}
    - name: vg-0
      devices: [partition-sda3]
      preserve: false
      type: lvm_volgroup
      id: lvm-volgroup-vg-0
    - {name: lv-root, volgroup: lvm-volgroup-vg-0, size: 100%, preserve: false,
      type: lvm_partition, id: lvm-partition-lv-root}
    - {fstype: ext4, volume: lvm-partition-lv-root, preserve: false, type: format,
      id: format-1}
    - {device: format-1, path: /, type: mount, id: mount-2}
    - {device: format-0, path: /boot, type: mount, id: mount-1}
    - {device: format-2, path: /boot/efi, type: mount, id: mount-3}

It looks like the float bug may have been fixed with this commit, and it might be avoided if the automatic installer update feature is used:

https://github.com/CanonicalLtd/subiquity/commit/8a84e470c59e292138482a0b1bd7144fbb4644db#diff-1ca44bce35f59e931cbe850119e630db

Negative size does work. The second partition is set to size: -1 and uses all available space:

  storage:
    config:
    - grub_device: true
      id: disk-sda
      path: /dev/sda
      ptable: gpt
      type: disk
      wipe: superblock-recursive
    - device: disk-sda
      flag: bios_grub
      id: partition-0
      number: 1
      size: 1048576
      type: partition
    - device: disk-sda
      id: partition-1
      number: 2
      size: -1
      type: partition
      wipe: superblock
    - fstype: ext4
      id: format-0
      type: format
      volume: partition-1
    - device: format-0
      id: mount-0
      path: /
      type: mount

Ubuntu HowTo: Windows won’t boot after Installing Ubuntu 20.04

Noob here. Installed Ubuntu 20.04 to dual boot on my laptop. However, now in grub the Windows Boot Manager doesn’t load Windows.

Looking at GParted, I’m booting from /dev/nvme0n1p1, but the OS is on /dev/nvme0n1p3.

When I select the Windows Boot Manager option, a circle of dots appears, it soon becomes a black screen followed by the GRUB menu again.

I’m not sure why I have 7 partitions – this is what happened after Ubuntu’s default installation process.

So far, I’ve tried boot-repair, updating grub, and changing the default OS in /etc/default/grub. Running update-grub finds /dev/nvme0n1p1 as the Windows Boot Manager and displays it, but Windows crashes when launched from this entry.

Any help would be appreciated!

Linux HowTo: Flash drive (ExFAT) shows as empty on Mac but not Windows

I have an ExFAT drive that I recently reformatted & erased, and also used disk utility repair on Mac. When I pop it in a Windows machine, I’m able to view and add files. But the drive appears empty on Mac.

Disk Utility shows:

  • External Physical disk: 15.8 GB used (all)
  • External Physical volume: 5.9 GB used, 15.8 GB free

I’m wondering what is causing this discrepancy (how can there be 15.8 GB free if 5.9/15.8 GB are used)? How can I fix this issue (e.g. make files appear on the drive on Mac)?

I’m guessing here because of a lack of detailed information, but I have seen similar problems before.

I think your external drive has a mix of GPT- and MBR-based partition information, which confuses MacOS (and Disk Utility repair) but, so far, hasn’t caused problems in Windows.
I say so far… Windows might start acting up as well, especially as you fill the disk with more data, which might be written but can’t be read back properly.

You’d best take the safe route here to prevent loss of data.

  • Copy all data off to another medium. (Hopefully nothing is corrupted yet.)
  • In Windows, erase the whole disk with the CLEAN command in the DISKPART utility. This ensures there is a valid, non-ambiguous partition table on the disk.
  • After that you can create a new volume on the disk and format it fresh.
    That new volume should also be safe to use on MacOS.
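For the DISKPART step, a session might look like this (the disk number 2 is illustrative; check the output of list disk carefully, because clean wipes everything on the selected disk):

```
C:\> diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> clean
DISKPART> create partition primary
DISKPART> format fs=exfat quick
DISKPART> assign
DISKPART> exit
```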

Disk Utility on MacOS shows you the volume and the partition table. As you provided neither the MacOS release nor the Windows release, I cannot investigate further.

But it may be that your external drive lacks a partition table, or has a Windows-specific partition table.
Maybe even LVM? I’m unsure whether Windows is able to handle that.

Only Mojave/Catalina are able to use something that looks like LVM on MacOS.

Linux HowTo: How to calculate exact size of partition and number of inodes to write a directory

I need to write a directory of files (specifically, a Linux chroot) into a file containing an LVM image. The background of the task is silly, but for now I want to understand what is going on.
I calculate the size of the directory with du:

# du -s --block-size=1 chroot
3762733056  chroot

I round it up and create a file large enough to encompass it:

# fallocate -l 4294967296 image.lvm
# ls -lah
drwxr-xr-x 23 root    root    4.0K мая 27 20:59 chroot
-rw-r--r--  1 root    root    4.0G мая 28 09:59 image.lvm

I attach the file as a loop device (sorry, not sure of the right term) and create an LVM partition on it. I will use an ext4 filesystem; I know that ext4 reserves 5% of space for root (which I can tune) and some space for the inode table, so I create a partition about 10% bigger than my actual directory (4139006362 bytes) and round it up to a multiple of 512 (4139006464 bytes) for LVM’s needs:

# losetup -f --show image.lvm
/dev/loop0
# pvcreate /dev/loop0
  Physical volume "/dev/loop0" successfully created.
# vgcreate IMAGE /dev/loop0
  Volume group "IMAGE" successfully created
# lvcreate --size 4139006464B -n CHROOT IMAGE
  Rounding up size to full physical extent <3.86 GiB
  Logical volume "CHROOT" created.

I then create an ext4 filesystem on this LV:

# mkfs.ext4 /dev/IMAGE/CHROOT
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done                            
Creating filesystem with 1010688 4k blocks and 252960 inodes
Filesystem UUID: fb3775ff-8380-4f97-920d-6092ae0cd454
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

# mount /dev/IMAGE/CHROOT mnt
# df --block-size=1 mnt
Filesystem                   1B-blocks         Used    Available Use% Mounted on
/dev/mapper/IMAGE-CHROOT    4007591936     16179200   3767648256   1% /mnt

While 3767648256 is greater than 3762733056 that I got from du, I still tune it up a notch:

# tune2fs -m 0 /dev/IMAGE/CHROOT
tune2fs 1.45.6 (20-Mar-2020)
Setting reserved blocks percentage to 0% (0 blocks)
# df --block-size=1 mnt
Filesystem                1B-blocks     Used  Available Use% Mounted on
/dev/mapper/IMAGE-CHROOT 4007591936 16179200 3974635520   1% /mnt

So far so good, let’s write some data to it:

# cp -a chroot/. mnt/
...
cp: cannot create regular file 'mnt/./usr/portage/profiles/hardened/linux/powerpc/ppc64/32bit-userland/use.mask': No space left on device

Bang. Let’s see what df shows:

# df --block-size=1 mnt
Filesystem                1B-blocks       Used Available Use% Mounted on
/dev/mapper/IMAGE-CHROOT 4007591936 3587997696 402817024  90% /mnt

So there is actually space available. After googling a bit, I found that you can run out of inodes on a partition, which seems to be exactly my case:

# df -i mnt
Filesystem               Inodes IUsed IFree IUse% Mounted on
/dev/mapper/IMAGE-CHROOT   248K  248K     0  100% /mnt

And now, the question! I could easily use a bigger file, create a 1.5× larger partition, write my files there, and it would work. But being a pedantic developer who wants to conserve space: how do I calculate precisely how many bytes and inodes I will need to write my directory? I am also fairly certain I screwed up with --block-size=1 somewhere along the way.

The “why LVM” context: it is used for its snapshot capabilities. Basically, other scripts create a 20G snapshot from said 4G chroot, do their work in the snapshot, and then remove it, leaving the original contents of the chroot untouched. So the base filesystem may be considered read-only. Simple, stupid “docker containers” invented long before Docker, which cannot easily be replaced with Docker itself or its overlayfs.

mkfs.ext4 gives you three interesting options (see the man page for full details).

  • -i bytes-per-inode

    • Specify the bytes/inode ratio.
    • mke2fs creates an inode for every bytes-per-inode bytes of space on the disk.
    • The larger the bytes-per-inode ratio, the fewer inodes will be created.
  • -I inode-size

    • Specify the size of each inode in bytes.
    • The default inode size is controlled by the mke2fs.conf(5) file. In the mke2fs.conf file shipped with e2fsprogs
      • The default inode size is 256 bytes for most file systems,
      • Except for small file systems where the inode size will be 128 bytes.
  • -N number-of-inodes

    • Overrides the default calculation of the number of inodes that should be reserved for the filesystem (which is based on the number of blocks and the bytes-per-inode ratio).
    • This allows the user to specify the number of desired inodes directly.

Using a combination of these, you can precisely shape the filesystem. If you are confident that you’ll never need to create any additional files or are mounting the filesystem as read-only, then you could theoretically give -N ${number-of-entities}.

$ truncate -s 10M ino.img
$ mkfs.ext4 -N 5 ino.img
mke2fs 1.44.1 (24-Mar-2018)
Discarding device blocks: done
Creating filesystem with 10240 1k blocks and 16 inodes
Filesystem UUID: 164876f1-bbfa-405f-8b2d-704830d7c165
Superblock backups stored on blocks:
        8193

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

$ mount -o loop ino.img ./mnt
$ df -i mnt
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/loop0         16    11     5   69% /home/attie/box/mnt
$ touch ./mnt/1
$ touch ./mnt/2
$ touch ./mnt/3
$ touch ./mnt/4
$ touch ./mnt/5
$ touch ./mnt/6
touch: cannot touch './mnt/6': No space left on device
$ df -B1 mnt
Filesystem     1B-blocks   Used Available Use% Mounted on
/dev/loop0       9425920 176128   8516608   3% /home/attie/box/mnt
$ df -i mnt
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/loop0         16    16     0  100% /home/attie/box/mnt

Remember that directories will take an inode too:

$ mkfs.ext4 -N 5 ino.img
mke2fs 1.44.1 (24-Mar-2018)
ino.img contains a ext4 filesystem
        last mounted on /home/attie/box/mnt on Thu May 28 09:08:41 2020
Proceed anyway? (y/N) y
Discarding device blocks: done
Creating filesystem with 10240 1k blocks and 16 inodes
Filesystem UUID: a36efc6c-8638-4750-ae6f-a900ada4330f
Superblock backups stored on blocks:
        8193

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

$ mount -o loop ino.img ./mnt
$ mkdir mnt/1
$ mkdir mnt/2
$ touch mnt/a
$ touch mnt/b
$ touch mnt/1/c
$ touch mnt/2/d
touch: cannot touch 'mnt/2/d': No space left on device

You can get a count of entities using find or similar, remembering to count directories too! (i.e: don’t use -type f or -not -type d).

find "${source_dir}" | wc -l

Now that you know (or can specify) the inode size as well, you can determine much more precisely how much headroom you’ll need to allocate, and you can avoid “wasting” space on unused inodes.
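As a rough sketch (the helper names and the 10% headroom figure are my own, matching the estimate used in the question rather than an exact ext4 metadata formula), the sizing could be scripted like this:

```shell
# pad_size BYTES: add ~10% headroom for ext4 metadata and the journal,
# then round up to a multiple of 512 bytes for LVM alignment.
pad_size() {
    local bytes=$1
    local padded=$(( bytes + bytes / 10 ))
    echo $(( (padded + 511) / 512 * 512 ))
}

# count_inodes DIR: files *and* directories each consume one inode.
count_inodes() {
    find "$1" | wc -l
}

pad_size 3762733056   # the du figure from the question -> 4139006464
# then, e.g.: lvcreate --size "$(pad_size "$bytes")B" ...
#             mkfs.ext4 -N "$(count_inodes chroot)" ...
```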


If you are using the filesystem read-only, then another option could be to look into squashfs instead of ext4, which will allocate a contiguous (and compressed) block based specifically on the input files… rather than creating a container that you hope is big enough and filling it.

And unless you’re really after something from LVM, you can easily get away without it, as shown above (and I’d recommend not using it too). You might like / want an MBR, depending on how you’ll be deploying the image.

Making Game: grub rescue: All partitions have unknown filesystem

Similar to this post; however, it did not solve my issue. I have made a Windows recovery USB, but it does not show up in the ls command, and I do not see a way to search for new partitions.

Weirdly, I’m not able to run any commands other than ls and set (of the commands that I’ve tested).

For some background: I had Windows 10 and Manjaro on my system. I stupidly deleted the Manjaro partition and now I’m here. If I can remove the Manjaro boot loader in the EFI partition, will that solve the issue?

Ubuntu HowTo: how do you expand the boot partition of a standard 14.04 install

I have read several solutions to this problem, but they did not address a non-dual-boot standard 14.04 install. Below is the structure of my disk. I tried to paste a picture of GParted, but apparently I don’t have enough moxie to use pictures. I had intended to use GParted to fix the problem, if it can be fixed. I need more boot partition space to do a standard upgrade of Ubuntu.

Filesystem                  1K-blocks     Used Available Use% Mounted on
/dev/mapper/ubuntu--vg-root 475802696 20276728 431333560   5% /
none                                4        0         4   0% /sys/fs/cgroup
udev                          1953252       12   1953240   1% /dev
tmpfs                          392888     1388    391500   1% /run
none                             5120        0      5120   0% /run/lock
none                          1964428     5028   1959400   1% /run/shm
none                           102400       76    102324   1% /run/user
/dev/sda2                      241965   226665      2808  99% /boot
/dev/sda1                      523248     3428    519820   1% /boot/efi

I am at a loss to translate what I have read to what my disk configuration is. Since this is a result of taking the defaults during the install can the boot partition be expanded and how is that done?

That’s because you’re looking in the wrong direction. To see your disk partition information, use sudo fdisk -l /dev/sda.

Rather than risking fiddling with disk partitions (although I’ll get to that), I suggest that you find out what is using 99% of /boot.

See the question and my answer at this post

And now, how to risk destroying your disk by “adjusting” partitions.

Step 1.

Make absolutely sure you have a backup that you can restore.

Step 2.

Download and burn to a USB key a “GParted Live” distribution. Google will help you find one.

Step 3.

Shut down your system the official way, not by pulling the plug or the Vulcan nerve pinch. This lets the system get the filesystems on the disk into stable, up-to-date states.

Step 4.

Boot from the GParted Live USB key

Step 5.

Tell GParted which disk you want to “fix”. Once it’s selected, GParted will show you a picture of the disk layout.

Step 6.

You can use GParted to resize partitions into adjacent Free Space, or shrink them (don’t go below the size the filesystem thinks is used) to create Free Space.

You can use GParted to move partitions, but you cannot change their order.

GParted will store your intended actions, and do them all at the end, once it gives you a final Go/No-Go choice.

Step 7.

As your penultimate action, tell GParted to run fsck on each and every partition you’ve touched.

Step 8.

If you are happy with what you have set up (and be sure to check and recheck – are you sure you can restore the backup?) tell Gparted to proceed.

Step 9.

Wait. Interrupting partition moves, GParted at work, etc. leads straight to doorstop mode. (Your disk becomes more useful as a doorstop than connected to a computer.)

Step 10.

Shut down GParted, remove the USB key and reboot.

Step 11.

YMMV, not responsible, Linux is a powerful tool, THIS IS RISKY!

Ubuntu HowTo: Black screen after installing rEFInd in Ubuntu on Mac Mini

I created a partition on my Mac Mini and installed Ubuntu to it. The installation worked, but it would automatically boot into Ubuntu whenever I turned on the computer, so I installed rEFInd using the PPA in Ubuntu. I turned the computer off, and when I turned it back on, it just showed a black screen. It did say there was a system problem or something like that when I was turning it off, but I didn’t think much of it and just ignored it. I’ve tried the Mac startup key combinations, but none of them seem to work. What should I do to fix my problem?

Ubuntu HowTo: What is the difference between Filesystem and “Extended Partition” in Disks?

I am struggling to understand the output from the “Disks” program. Here I see two different partitions, both pointing to the same disk space.

One is devices /dev/sda3 (highlighted) and the other is /dev/sda5.

Can someone tell me what is the difference?
It is slightly confusing because I do not have two partitions and my windows partition only shows up as one (Partition 2).

Disks output

An Extended Partition is an artifact of MBR ‘legacy’ disk partitioning, as the MBR system only allows a maximum of four (4) partitions. To have more than four partitions, an Extended Partition is used to hold multiple Logical Partitions.

This was obsoleted by the GPT Partition Table, which removed the cap of four (4) partitions on a disk, but there are still plenty of folks out there with the MBR ‘legacy’ partition tables, as Windows 7 and earlier versions of Windows defaulted to the MBR ‘legacy’ partition tables. I suspect that’s what I see in your picture above.

BTW, Partition 1 of that drive is a Windows Recovery Partition. It is a Primary Partition, just like the NTFS partition (Partition 2).

For more on filesystems, see this article.

An extended partition is a partition that can be further divided to create additional partitions.

Basically, the extended partition allows you to have more partitions on a physical drive than you could otherwise have.

Here is an example of a disk with five partitions and a link to a definition of an extended partition.

http://www.info.org/extended_partition.html

Disks display of a hard disk with five partitions

$ df -a /dev/sda*
Filesystem     1K-blocks      Used Available Use% Mounted on
udev             3910716         4   3910712   1% /dev
/dev/sda1      493235028  36226812 431930208   8% /
udev             3910716         4   3910712   1% /dev
/dev/sda5      151058636 107940272  35421980  76% /media/stephen/Hitachi72101Ptn5
udev             3910716         4   3910712   1% /dev
/dev/sda7      157554484    609108 148918960   1% /media/stephen/Hitachi72101Ptn7
/dev/sda8      151058636     60884 143301368   1% /media/stephen/Hitachi72101Ptn8
