External HDs not mounted after 11.01 update

  • Well, just a small issue: my two external 4TB drives are missing. When I power them down I get a message that both were successfully removed. One drive showed as offline in Windows Disk Management; I set it back online and tried again, but LE 11.01 still does not seem to recognize them. Something in the update process is likely the cause, as this also happened with the 11.0 upgrade. Here's my log.

    http://ix.io/4sie

  • The update process will have absolutely zero impact. The completely different software stack that you're running post-update *can* have an impact (as we bumped ~95% of the packages in the distro; stuff has changed). The log shows both drives have issues mounting. It's unclear to me which filesystem is being used on the drives; there are messages about ext2/3/4, ntfs3 and f2fs.
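    If in doubt, the actual filesystem type can be checked from the LE console over SSH; a quick sketch (the device names below are taken from the log and may differ on your box):

    Code
    # print the detected filesystem type and label for both data partitions
    blkid /dev/sda2 /dev/sdb2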

  • Code
    Mar 30 21:44:04.219708 LibreELEC kernel: ntfs3: sda2: volume is dirty and "force" flag is not set!
    Mar 30 21:44:04.226378 LibreELEC kernel: ntfs3: sdb2: volume is dirty and "force" flag is not set!

    Connect the drives to a Windows box and perform a filesystem check/repair.
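    For reference, that's roughly the following on Windows (the drive letter E: is a placeholder; run it from an elevated command prompt):

    Code
    rem check the NTFS volume and fix errors, clearing the dirty flag
    chkdsk E: /f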

    so long,

    Hias

  • Don't take me wrong, but posts about NTFS-formatted HDDs not mounting are popping up all over the forum. Does it really look like we have to go the ext4 way, or is this an issue that might get fixed one day?

  • I'm using NTFS-formatted HDDs (4-8 at the moment) on several Pi4s without any issues, but maybe I'm just lucky.

    Most people use Windows at home and at work, so a fix would probably be wanted (and needed) by those who do run into problems.

  • NTFS has always been troublesome on Linux and was never a good choice. It's best to use ext4 and copy stuff over via SMB or, if you really need to connect the drive to other non-Linux devices, use exFAT.
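    For anyone going the ext4 route, a minimal sketch (this erases the partition, so copy your data off first; /dev/sdX1 is a placeholder for the target partition):

    Code
    # create an ext4 filesystem with a volume label on the target partition
    mkfs.ext4 -L storage /dev/sdX1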

    No idea if or when anything gets fixed. So far we don't even know what the users with problem reports did (or what happened on their devices) to cause the NTFS partitions to become dirty, so we are completely in the dark here. My guess is that some percentage of the reports are down to user errors such as unplugging the disk or pulling the power while the drive was mounted; in that case any filesystem will be marked dirty.
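    For what it's worth, the safe sequence is to unmount before cutting power (the mount path below is an example; LE mounts external drives under /var/media):

    Code
    # flush pending writes and cleanly detach the volume before unplugging
    umount /var/media/MyDrive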

    so long,

    Hias

  • The last time I wrote code was in the eighties, so tech has passed me by since then. I can repair the drives and hopefully offer my experience here in hopes of a better solution. The issue occurred on my x86 HTPC, an i3 Gigabyte system with two internal 1TB SSDs, one dedicated to LE (the default boot) and the second to Win10, both unaffected. Mass storage is a StarTec dual-drive external bay with a USB3 interface holding two 4TB 7200rpm drives, both GPT-partitioned and NTFS-formatted. The StarTec drives were not powered when I did the 11.0 or 11.01 updates. Hope this sheds a bit of light on the issue.

  • If the drives worked before on LE10 and failed at the first attempt with LE11, then one possibility is that the filesystems were already unclean but the old driver didn't care, whilst the new driver is more cautious (just a guess, though).
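    (If you want to verify which driver mounted a volume, something like this shows it; the type field reads ntfs3 for the new kernel driver and fuseblk for the old FUSE-based ntfs-3g one:)

    Code
    # list mounts made by the kernel ntfs3 driver or the FUSE-based ntfs-3g
    mount | grep -Ei 'ntfs|fuseblk'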

    Run the filesystem checks/repairs on Windows and keep an eye on it. If it happens again, we would need to know what triggered it.

    so long,

    Hias

  • I repaired the drives in Win10 and now they both mount as expected. As this is the second time this has happened after an update, I can only assume something in the process is the cause. I'm good till the next update (just my guess) ;)

  • Maybe if you provided a little more background on the circumstances prior to this issue, it would help to identify and isolate the problem. Was the drive attached to a Windows system and then transferred to a Linux system? Was it in a hibernated state? Is the Windows system set up for Fast Startup, etc.?
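    (Aside: Fast Startup rides on hibernation and is a common way for NTFS volumes to be left marked dirty; if it turns out to be the culprit, it can be switched off from an elevated command prompt:)

    Code
    rem disable hibernation, which also disables Fast Startup
    powercfg /h off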

    Users stating that their drives no longer mount in LE without providing any background will not help resolve this issue, and you can't expect the developers to fix it if they can't replicate the problem.

  • As I mentioned in post #7, the storage drives were not powered at the time of the LE update. They were powered on the next day and failed to mount at that point. I have two separate systems on this unit, each occupying a 1TB SSD: LE 11.01 on the default boot drive and Win10 on the other. The Win10 drive is always connected via SATA. The 4TB storage drives are exact copies of each other and are connected via a USB3 interface. Hope that helps.

  • It's certainly a start, and hopefully others who are experiencing this issue will be a little more expansive than the proverbial "I'm having the same problem", as if the developers could address it by magic.

  • One additional item that may or may not be relevant: when I booted up Win10 and powered up the two drives, I saw a brief message about a hard drive collision, which I understand to mean the drives were being accessed at the same time, something that should never happen. This also occurred after the last update.

  • Every little detail helps, and hopefully that type of information will be as forthcoming from others. While logs are important for developers in determining the cause of an issue, behaviour and habits are unfortunately not recorded in them. While I don't use NTFS drives with LE, I too am curious why this issue affects some users and not others.

  • Out of pure curiosity, I powered up my RPi4 8GB running LE 11.01, then powered up a USB-connected 2TB external HD (GPT/NTFS), and there were no issues to report. Could drive size be a factor?

  • I have a Pi4 with 4x6TB running without any issues, and even one currently with 8x4TB that on occasion runs with 16x4TB; I haven't seen any problems with any of them yet.