Well, just a small issue: my two external 4TB drives are missing. I get a message when I power them down that both were successfully removed. One drive showed as offline in Windows Disk Management; I set it back to online and tried again, but LE 11.01 does not seem to recognize them. Something in the update process is likely the cause, as this also happened with the 11.0 upgrade. Here's my log.
ext HDs not mounted after 11.01 update
-
Reddirt - March 31, 2023 at 6:01 AM
Thread is Resolved
-
Code
Mar 30 21:44:02.661572 LibreELEC kernel: usb 1-1: new high-speed USB device number 2 using ehci-pci
Mar 30 21:44:02.661646 LibreELEC kernel: usb 3-3: new high-speed USB device number 2 using xhci_hcd
Mar 30 21:44:02.661717 LibreELEC kernel: usb 2-1: new high-speed USB device number 2 using ehci-pci
Mar 30 21:44:02.661791 LibreELEC kernel: hub 1-1:1.0: USB hub found
Mar 30 21:44:02.661858 LibreELEC kernel: hub 1-1:1.0: 4 ports detected
Mar 30 21:44:02.661924 LibreELEC kernel: scsi host4: uas
Mar 30 21:44:02.662003 LibreELEC kernel: scsi 4:0:0:0: Direct-Access JMicron Disk0 0105 PQ: 0 ANSI: 6
Mar 30 21:44:02.662073 LibreELEC kernel: hub 2-1:1.0: USB hub found
Mar 30 21:44:02.662140 LibreELEC kernel: hub 2-1:1.0: 6 ports detected
Mar 30 21:44:02.662216 LibreELEC kernel: scsi 4:0:0:1: Direct-Access JMicron Disk1 0105 PQ: 0 ANSI: 6
Mar 30 21:44:02.662290 LibreELEC kernel: sd 4:0:0:0: Attached scsi generic sg0 type 0
Mar 30 21:44:02.662362 LibreELEC kernel: sd 4:0:0:1: Attached scsi generic sg1 type 0
Mar 30 21:44:02.662433 LibreELEC kernel: sd 4:0:0:0: [sda] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
Mar 30 21:44:02.662504 LibreELEC kernel: sd 4:0:0:1: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
Mar 30 21:44:02.662574 LibreELEC kernel: sd 4:0:0:0: [sda] Write Protect is off
Mar 30 21:44:02.662644 LibreELEC kernel: sd 4:0:0:0: [sda] Mode Sense: 67 00 10 08
Mar 30 21:44:02.662713 LibreELEC kernel: sd 4:0:0:1: [sdb] Write Protect is off
Mar 30 21:44:02.662784 LibreELEC kernel: sd 4:0:0:1: [sdb] Mode Sense: 67 00 10 08
Mar 30 21:44:02.662854 LibreELEC kernel: sd 4:0:0:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
Mar 30 21:44:02.662924 LibreELEC kernel: sd 4:0:0:1: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
Mar 30 21:44:02.662998 LibreELEC kernel: sd 4:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Mar 30 21:44:02.663182 LibreELEC kernel: sd 4:0:0:0: [sda] Optimal transfer size 33553920 bytes not a multiple of preferred minimum block size (4096 bytes)
Mar 30 21:44:02.663255 LibreELEC kernel: sd 4:0:0:1: [sdb] Preferred minimum I/O size 4096 bytes
Mar 30 21:44:02.663325 LibreELEC kernel: sd 4:0:0:1: [sdb] Optimal transfer size 33553920 bytes not a multiple of preferred minimum block size (4096 bytes)
Mar 30 21:44:02.663334 LibreELEC kernel: Alternate GPT is invalid, using primary GPT.
Mar 30 21:44:02.663341 LibreELEC kernel: sda: sda1 sda2
Mar 30 21:44:02.663409 LibreELEC kernel: sd 4:0:0:0: [sda] Attached SCSI disk
Mar 30 21:44:02.663417 LibreELEC kernel: sdb: sdb1 sdb2
Mar 30 21:44:02.663485 LibreELEC kernel: sd 4:0:0:1: [sdb] Attached SCSI disk
Mar 30 21:44:02.663556 LibreELEC kernel: usb 3-5: new full-speed USB device number 3 using xhci_hcd
Mar 30 21:44:02.664086 LibreELEC kernel: usb 3-6: new high-speed USB device number 4 using xhci_hcd
Mar 30 21:44:02.664231 LibreELEC kernel: usb-storage 3-6:1.0: USB Mass Storage device detected
Mar 30 21:44:02.664297 LibreELEC kernel: scsi host5: usb-storage 3-6:1.0
Mar 30 21:44:02.664454 LibreELEC kernel: scsi 5:0:0:0: Direct-Access SanDisk Cruzer Blade 1.26 PQ: 0 ANSI: 5
Mar 30 21:44:02.664525 LibreELEC kernel: sd 5:0:0:0: Attached scsi generic sg2 type 0
Mar 30 21:44:02.664595 LibreELEC kernel: sd 5:0:0:0: [sdc] 31266816 512-byte logical blocks: (16.0 GB/14.9 GiB)
Mar 30 21:44:02.664664 LibreELEC kernel: sd 5:0:0:0: [sdc] Write Protect is off
Mar 30 21:44:02.664735 LibreELEC kernel: sd 5:0:0:0: [sdc] Mode Sense: 43 00 00 00
Mar 30 21:44:02.664805 LibreELEC kernel: sd 5:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Mar 30 21:44:02.664814 LibreELEC kernel: sdc: sdc1 sdc2
Mar 30 21:44:02.664882 LibreELEC kernel: sd 5:0:0:0: [sdc] Attached SCSI removable disk
Mar 30 21:44:02.664891 LibreELEC fsck: fsck.fat 4.2 (2021-01-31)
Mar 30 21:44:02.664900 LibreELEC fsck: /dev/sdc1: 12 files, 38232/65501 clusters
Mar 30 21:44:02.664907 LibreELEC fsck: Storage: clean, 5120/944704 files, 233249/3776128 blocks
Mar 30 21:44:02.664919 LibreELEC kernel: EXT4-fs (sdc2): mounted filesystem with ordered data mode. Quota mode: disabled.
Mar 30 21:44:04.083037 LibreELEC kernel: ntfs3: Max link count 4000
Mar 30 21:44:04.086573 LibreELEC kernel: ext3: Unknown parameter 'fmask'
Mar 30 21:44:04.086628 LibreELEC kernel: ext3: Unknown parameter 'fmask'
Mar 30 21:44:04.109703 LibreELEC kernel: ext2: Unknown parameter 'fmask'
Mar 30 21:44:04.109778 LibreELEC kernel: ext2: Unknown parameter 'fmask'
Mar 30 21:44:04.109802 LibreELEC kernel: ext4: Unknown parameter 'fmask'
Mar 30 21:44:04.109819 LibreELEC kernel: ext4: Unknown parameter 'fmask'
Mar 30 21:44:04.109839 LibreELEC kernel: squashfs: Unknown parameter 'fmask'
Mar 30 21:44:04.113106 LibreELEC kernel: squashfs: Unknown parameter 'fmask'
Mar 30 21:44:04.113141 LibreELEC kernel: F2FS-fs (sda2): Magic Mismatch, valid(0xf2f52010) - read(0x8e0e8966)
Mar 30 21:44:04.113157 LibreELEC kernel: F2FS-fs (sda2): Can't find valid F2FS filesystem in 1th superblock
Mar 30 21:44:04.119707 LibreELEC kernel: F2FS-fs (sda2): Magic Mismatch, valid(0xf2f52010) - read(0x0)
Mar 30 21:44:04.119759 LibreELEC kernel: F2FS-fs (sda2): Can't find valid F2FS filesystem in 2th superblock
Mar 30 21:44:04.119778 LibreELEC kernel: F2FS-fs (sdb2): Magic Mismatch, valid(0xf2f52010) - read(0x8e0e8966)
Mar 30 21:44:04.119793 LibreELEC kernel: F2FS-fs (sdb2): Can't find valid F2FS filesystem in 1th superblock
Mar 30 21:44:04.123273 LibreELEC kernel: F2FS-fs (sdb2): Magic Mismatch, valid(0xf2f52010) - read(0x0)
Mar 30 21:44:04.123330 LibreELEC kernel: F2FS-fs (sdb2): Can't find valid F2FS filesystem in 2th superblock
Mar 30 21:44:04.230206 LibreELEC udevil[814]: mount: mounting /dev/sdb2 on /media/4TB failed: Invalid argument
Mar 30 21:44:04.230376 LibreELEC udevil[813]: mount: mounting /dev/sda2 on /media/NewVolume failed: Invalid argument
Mar 30 21:44:04.230440 LibreELEC udevil[710]: mount: mounting /dev/sda2 on /media/NewVolume failed: No such device
Mar 30 21:44:04.230549 LibreELEC udevil[711]: mount: mounting /dev/sdb2 on /media/4TB failed: No such device
The update process will have absolutely zero impact. The completely different software stack that you're running post-update *can* have an impact (as we bumped ~95% of the packages in the distro; stuff has changed). The log shows both drives have issues mounting. It's unclear to me which filesystem is being used on the drives; there are messages about ext2/3/4, ntfs3 and f2fs.
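If it helps to narrow that down, the partition types can be read straight off the devices from the LE SSH console. A minimal sketch, assuming the usual util-linux tools (blkid, lsblk) are available in the LE image and using the device names from the log above:
Code
# print the filesystem type, label and UUID detected on the two data partitions
blkid /dev/sda2 /dev/sdb2
# overview of all block devices with filesystem type and mount point
lsblk -f
# re-check the mount errors recorded in the journal
journalctl --no-pager | grep -iE 'udevil|ntfs|f2fs'
If both partitions come back as ntfs, the ext/F2FS complaints in the log are most likely just the auto-mounter probing other filesystem types before failing.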
-
Don't take me wrong, but as posts about NTFS-formatted HDDs not mounting are popping up all over the forum, does it really look like we have to go the ext4 way, or is this some issue that might get fixed one day?
-
I'm using NTFS-formatted HDDs (4-8 at the moment) on several Pi4s without any issues, but maybe I'm just lucky.
Most people use Windows at home and at work, so a fix would probably be wanted (and needed) by those who do run into problems.
-
Don't take me wrong, but as posts about NTFS-formatted HDDs not mounting are popping up all over the forum, does it really look like we have to go the ext4 way, or is this some issue that might get fixed one day?
NTFS has always been troublesome on Linux and was never a good choice. It's best to use ext4 and copy stuff over via SMB or, if you really, really need to connect the drive to other non-Linux devices, use exFAT.
No idea if/when anything gets fixed, and so far we don't even know what the users with problem reports did (or what happened on their devices) that caused the NTFS partitions to become dirty, so we are completely in the dark here. My guess is that some percentage of the reports are caused by user errors like unplugging the disk or pulling the plug while the drive was mounted - in that case any filesystem will get marked dirty.
so long,
Hias
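For reference, the dirty state mentioned above can at least be checked from the Linux side without going back to Windows. A minimal sketch, assuming a desktop Linux machine with the ntfs-3g package installed (LE itself may not ship these tools) and with /dev/sdX2 standing in for the actual NTFS partition:
Code
# dry run: report problems, including the dirty flag, without writing anything
ntfsfix --no-action /dev/sdX2
# clear only the dirty bit; a full chkdsk on Windows is still the proper repair
ntfsfix --clear-dirty /dev/sdX2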
-
The last time I wrote code was in the eighties, so tech has passed me by since then. I can repair the drives and hopefully offer my experience here in hopes of a better solution. This issue occurred on my x86 HTPC, an i3 Gigabyte system with two internal 1TB SSDs, one dedicated to LE (default boot) and the second to Win10, both unaffected. Mass storage is a StarTec dual-drive external bay with a USB3 interface and two 4TB 7200rpm drives. Both are GPT-partitioned and NTFS-formatted. The StarTec drives were not powered when I did the 11.0 or 11.01 updates. Hope this sheds a bit of light on the issue.
-
If the drives worked before, on LE10, and failed at the first attempt with LE11, then one possibility is that the filesystems were already unclean but the old driver didn't care about that, whilst the new driver is more cautious (just a guess, though).
Run the filesystem checks/repairs on Windows and keep an eye on it. If it happens again, we would need to know what triggered it.
so long,
Hias
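For anyone following along, the Windows-side check suggested above is essentially chkdsk run against whatever drive letters the enclosure gets. A minimal sketch from an elevated Command Prompt, with E: and F: standing in for the two 4TB volumes:
Code
:: scan and fix filesystem errors (on success this should also clear the NTFS dirty flag)
chkdsk E: /f
chkdsk F: /f
The same check is available through the GUI via the drive's Properties -> Tools -> Check.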
-
I repaired the drives in Win10 and now they are both mounted as expected. As this is the second time this has happened after an update, I can only assume something in the process is the cause. I'm good till the next update (just my guess).
-
I repaired the drives in Win10 and now they are both mounted as expected. As this is the second time this has happened after an update, I can only assume something in the process is the cause. I'm good till the next update (just my guess).
Maybe if you provided a little more background on the circumstances prior to this issue, it would help to identify and isolate the problem. Was the drive attached to a Windows system and then transferred to a Linux system? Was it in a hibernated state? Is the Windows system set up for fast startup, etc.?
Users stating their drives no longer mount in LE without providing any background will not help resolve this issue, and you couldn't expect the developers to do so if they can't replicate the problem.
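On the fast-startup point: Windows Fast Startup shuts down into a hibernation-like state that leaves volumes that were mounted at shutdown flagged as in use, which is one well-known way NTFS drives end up looking dirty to Linux. A minimal sketch for checking and, if wanted, turning it off from an elevated Command Prompt (disabling hibernation also disables Fast Startup):
Code
:: list which sleep states, including Fast Startup, are currently available
powercfg /a
:: disable hibernation, which also turns off Fast Startup
powercfg /hibernate off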
-
As I mentioned in post #7, the storage drives were not powered at the time of the LE update. They were powered on the next day and failed to mount at that time. I have two separate systems on this unit; each occupies a 1TB SSD, with LE 11.01 on the default boot drive and Win10 on the other. The Win10 drive is always connected via SATA. The 4TB storage drives are exact copies and are connected to the system via a USB3 interface. Hope that helps.
-
As I mentioned in post #7, the storage drives were not powered at the time of the LE update. They were powered on the next day and failed to mount at that time. I have two separate systems on this unit; each occupies a 1TB SSD, with LE 11.01 on the default boot drive and Win10 on the other. The Win10 drive is always connected via SATA. The 4TB storage drives are exact copies and are connected to the system via a USB3 interface. Hope that helps.
It's certainly a start, and hopefully others who are experiencing this issue will be a little more expansive than the proverbial "I'm having the same problem", as if by magic the developers will then be able to address it.
-
One additional item that may or may not be relevant: when I booted up Win10 and powered up the two drives, I saw a brief message about a hard drive collision, which I understand to mean the drives were being accessed at the same time, something that should never happen. This also occurred after the last update.
-
One additional item that may or may not be relevant: when I booted up Win10 and powered up the two drives, I saw a brief message about a hard drive collision, which I understand to mean the drives were being accessed at the same time, something that should never happen. This also occurred after the last update.
Every little detail helps, and hopefully that type of information will be as forthcoming from others. Logs are important for developers to determine the cause of an issue, but behaviour and habits unfortunately are not recorded in them. While I don't use NTFS drives with LE, I too am curious what is causing this issue for some and not for others.
-
Out of pure curiosity I powered up my RPi4 8GB running LE 11.01, then powered up a USB-connected 2TB external HD (GPT/NTFS), and have no issues to report. Could the drive size be a factor?
-
This is worth a read https://wiki.archlinux.org/title/NTFS-3G
-
Out of pure curiosity I powered up my RPi4 8GB running LE 11.01, then powered up a USB-connected 2TB external HD (GPT/NTFS), and have no issues to report. Could the drive size be a factor?
I have a Pi4 with 4x6TB running without any issues, even one with 8x4TB at the moment that on occasion runs with 16x4TB; I haven't seen any problems with any of them yet.
-
This is worth a read https://wiki.archlinux.org/title/NTFS-3G
That is outdated information about the FUSE driver (NTFS-3G) as used in LE10.
LE11 uses ntfs3, the in-kernel driver added in kernel 5.15: https://wiki.archlinux.org/title/NTFS
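If anyone wants to confirm which driver their LE11 box is actually using, the mount type gives it away: the in-kernel driver shows up as ntfs3, while the old FUSE driver shows up as fuseblk. A minimal sketch from the LE SSH console:
Code
# list mounted NTFS volumes and the driver/type used to mount them
mount | grep -iE 'ntfs|fuseblk'
# confirm the ntfs3 driver is registered with the kernel
grep ntfs /proc/filesystems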
-