Attached USB Flash Drive Storage Not Reporting Accurate Available Space

garycnew

Senior Member
I have a USB flash drive attached to my Asuswrt-Merlin router that shows only 57% of its space used (per df -h), but I've started receiving "No space left on device" errors when trying to write to it. I've even rebooted the router, and it still reports the same usage and the same "No space left on device" error.

If I remove a few unnecessary files from the USB flash drive, it still reports the same available space, but I can then write to it until it is full again.

Thanks.
 
This can happen when a process is/was writing a large amount of data to a file (e.g. a log file) and is still holding the file open. When the file is closed, the filesystem stats are updated and df reports the correct information.
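
If lsof is available (e.g. from Entware), a quick way to confirm this is to list open files whose on-disk link count is zero, i.e. deleted but still held open and still consuming space. A minimal sketch; the mount point below is a placeholder:

Code:
# Files deleted while a process still holds them open (link count < 1):
lsof +L1 /tmp/mnt/<your-mount>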
 
@ColinTaylor I wouldn't be surprised if that were exactly the case. What's the best way to locate the process and file?

Thanks!
 
You can find the PIDs using fuser or lsof if you have them installed via Entware.

Code:
# fuser -cu /tmp/mnt/TOSHIBA1
/tmp/mnt/TOSHIBA1:    3523e(admin)
# ps | grep [3]523
 3523 admin     5048 S    vnstatd -d
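
If neither tool is installed, a rough busybox-only fallback is to scan /proc for processes whose working directory or open file descriptors point under the mount. A sketch only, reusing the mount point from the example above:

Code:
# List PIDs whose cwd or open fds live under the mount point (busybox ash):
for p in /proc/[0-9]*; do
    ls -l "$p/cwd" "$p"/fd 2>/dev/null | grep -q '/tmp/mnt/TOSHIBA1' &&
        echo "$p: $(tr '\0' ' ' < "$p/cmdline")"
done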
 
@ColinTaylor To provide more clarity... The USB flash drive in question is what I use for my Entware installation. Using lsof, it appears that the Nginx error.log was consuming about 1 GB of space.

Code:
# lsof /tmp/mnt/HitachiHDD/
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
sh       5362 admin  cwd    DIR    8,1     4096 522243 /tmp/mnt/HitachiHDD/tmp/home/root
nginx    6715 admin  txt    REG    8,1  2230772 789907 /tmp/mnt/HitachiHDD/entware/sbin/nginx
nginx    6715 admin  mem    REG    8,1   142760 783396 /tmp/mnt/HitachiHDD/entware/lib/ld-2.23.so
nginx    6715 admin  mem    REG    8,1    30148 783404 /tmp/mnt/HitachiHDD/entware/lib/libcrypt-2.23.so
nginx    6715 admin  mem    REG    8,1   135024 789862 /tmp/mnt/HitachiHDD/entware/lib/liblua.so.5.1.5
nginx    6715 admin  mem    REG    8,1    76132 788943 /tmp/mnt/HitachiHDD/entware/lib/libz.so.1.2.11
nginx    6715 admin  mem    REG    8,1    92648 783434 /tmp/mnt/HitachiHDD/entware/lib/libpthread-2.23.so
nginx    6715 admin  mem    REG    8,1    44456 783389 /tmp/mnt/HitachiHDD/entware/lib/libgcc_s.so.1
nginx    6715 admin  mem    REG    8,1    38432 783415 /tmp/mnt/HitachiHDD/entware/lib/libnss_files-2.23.so
nginx    6715 admin  mem    REG    8,1     9676 783406 /tmp/mnt/HitachiHDD/entware/lib/libdl-2.23.so
nginx    6715 admin  mem    REG    8,1   673212 783408 /tmp/mnt/HitachiHDD/entware/lib/libm-2.23.so
nginx    6715 admin  mem    REG    8,1  1121180 789880 /tmp/mnt/HitachiHDD/entware/lib/libxml2.so.2.9.10
nginx    6715 admin  mem    REG    8,1   422688 783682 /tmp/mnt/HitachiHDD/entware/lib/libpcre.so.1.2.13
nginx    6715 admin  mem    REG    8,1   434880 788950 /tmp/mnt/HitachiHDD/entware/lib/libssl.so.1.1
nginx    6715 admin  mem    REG    8,1  1999808 788949 /tmp/mnt/HitachiHDD/entware/lib/libcrypto.so.1.1
nginx    6715 admin  mem    REG    8,1  1219224 783400 /tmp/mnt/HitachiHDD/entware/lib/libc-2.23.so
nginx    6715 admin    2w   REG    8,1        0 790426 /tmp/mnt/HitachiHDD/entware/var/log/nginx/error.log
nginx    6715 admin   10w   REG    8,1        0 790431 /tmp/mnt/HitachiHDD/entware/var/log/nginx/access.log
nginx    6715 admin   11w   REG    8,1        0 790426 /tmp/mnt/HitachiHDD/entware/var/log/nginx/error.log
nginx    6717 admin  txt    REG    8,1  2230772 789907 /tmp/mnt/HitachiHDD/entware/sbin/nginx
nginx    6717 admin  mem    REG    8,1   142760 783396 /tmp/mnt/HitachiHDD/entware/lib/ld-2.23.so
nginx    6717 admin  mem    REG    8,1    30148 783404 /tmp/mnt/HitachiHDD/entware/lib/libcrypt-2.23.so
nginx    6717 admin  mem    REG    8,1   135024 789862 /tmp/mnt/HitachiHDD/entware/lib/liblua.so.5.1.5
nginx    6717 admin  mem    REG    8,1    76132 788943 /tmp/mnt/HitachiHDD/entware/lib/libz.so.1.2.11
nginx    6717 admin  mem    REG    8,1    92648 783434 /tmp/mnt/HitachiHDD/entware/lib/libpthread-2.23.so
nginx    6717 admin  mem    REG    8,1    44456 783389 /tmp/mnt/HitachiHDD/entware/lib/libgcc_s.so.1
nginx    6717 admin  mem    REG    8,1    38432 783415 /tmp/mnt/HitachiHDD/entware/lib/libnss_files-2.23.so
nginx    6717 admin  mem    REG    8,1     9676 783406 /tmp/mnt/HitachiHDD/entware/lib/libdl-2.23.so
nginx    6717 admin  mem    REG    8,1   673212 783408 /tmp/mnt/HitachiHDD/entware/lib/libm-2.23.so
nginx    6717 admin  mem    REG    8,1  1121180 789880 /tmp/mnt/HitachiHDD/entware/lib/libxml2.so.2.9.10
nginx    6717 admin  mem    REG    8,1   422688 783682 /tmp/mnt/HitachiHDD/entware/lib/libpcre.so.1.2.13
nginx    6717 admin  mem    REG    8,1   434880 788950 /tmp/mnt/HitachiHDD/entware/lib/libssl.so.1.1
nginx    6717 admin  mem    REG    8,1  1999808 788949 /tmp/mnt/HitachiHDD/entware/lib/libcrypto.so.1.1
nginx    6717 admin  mem    REG    8,1  1219224 783400 /tmp/mnt/HitachiHDD/entware/lib/libc-2.23.so
nginx    6717 admin    2w   REG    8,1        0 790426 /tmp/mnt/HitachiHDD/entware/var/log/nginx/error.log
nginx    6717 admin   10w   REG    8,1        0 790431 /tmp/mnt/HitachiHDD/entware/var/log/nginx/access.log
nginx    6717 admin   11w   REG    8,1        0 790426 /tmp/mnt/HitachiHDD/entware/var/log/nginx/error.log
nginx    6718 admin  txt    REG    8,1  2230772 789907 /tmp/mnt/HitachiHDD/entware/sbin/nginx
nginx    6718 admin  mem    REG    8,1   142760 783396 /tmp/mnt/HitachiHDD/entware/lib/ld-2.23.so
nginx    6718 admin  mem    REG    8,1    30148 783404 /tmp/mnt/HitachiHDD/entware/lib/libcrypt-2.23.so
nginx    6718 admin  mem    REG    8,1   135024 789862 /tmp/mnt/HitachiHDD/entware/lib/liblua.so.5.1.5
nginx    6718 admin  mem    REG    8,1    76132 788943 /tmp/mnt/HitachiHDD/entware/lib/libz.so.1.2.11
nginx    6718 admin  mem    REG    8,1    92648 783434 /tmp/mnt/HitachiHDD/entware/lib/libpthread-2.23.so
nginx    6718 admin  mem    REG    8,1    44456 783389 /tmp/mnt/HitachiHDD/entware/lib/libgcc_s.so.1
nginx    6718 admin  mem    REG    8,1    38432 783415 /tmp/mnt/HitachiHDD/entware/lib/libnss_files-2.23.so
nginx    6718 admin  mem    REG    8,1     9676 783406 /tmp/mnt/HitachiHDD/entware/lib/libdl-2.23.so
nginx    6718 admin  mem    REG    8,1   673212 783408 /tmp/mnt/HitachiHDD/entware/lib/libm-2.23.so
nginx    6718 admin  mem    REG    8,1  1121180 789880 /tmp/mnt/HitachiHDD/entware/lib/libxml2.so.2.9.10
nginx    6718 admin  mem    REG    8,1   422688 783682 /tmp/mnt/HitachiHDD/entware/lib/libpcre.so.1.2.13
nginx    6718 admin  mem    REG    8,1   434880 788950 /tmp/mnt/HitachiHDD/entware/lib/libssl.so.1.1
nginx    6718 admin  mem    REG    8,1  1999808 788949 /tmp/mnt/HitachiHDD/entware/lib/libcrypto.so.1.1
nginx    6718 admin  mem    REG    8,1  1219224 783400 /tmp/mnt/HitachiHDD/entware/lib/libc-2.23.so
nginx    6718 admin    2w   REG    8,1        0 790426 /tmp/mnt/HitachiHDD/entware/var/log/nginx/error.log
nginx    6718 admin   10w   REG    8,1        0 790431 /tmp/mnt/HitachiHDD/entware/var/log/nginx/access.log
nginx    6718 admin   11w   REG    8,1        0 790426 /tmp/mnt/HitachiHDD/entware/var/log/nginx/error.log
tar     13042 admin  cwd    DIR    8,1     4096 522243 /tmp/mnt/HitachiHDD/tmp/home/root
gzip    13043 admin  cwd    DIR    8,1     4096 522243 /tmp/mnt/HitachiHDD/tmp/home/root
lsof    16532 admin  txt    REG    8,1   154532 788981 /tmp/mnt/HitachiHDD/entware/bin/lsof
lsof    16532 admin  mem    REG    8,1    92648 783434 /tmp/mnt/HitachiHDD/entware/lib/libpthread-2.23.so
lsof    16532 admin  mem    REG    8,1   142760 783396 /tmp/mnt/HitachiHDD/entware/lib/ld-2.23.so
lsof    16532 admin  mem    REG    8,1    44456 783389 /tmp/mnt/HitachiHDD/entware/lib/libgcc_s.so.1
lsof    16532 admin  mem    REG    8,1   118088 788987 /tmp/mnt/HitachiHDD/entware/lib/libtirpc.so.3.0.0
lsof    16532 admin  mem    REG    8,1  1219224 783400 /tmp/mnt/HitachiHDD/entware/lib/libc-2.23.so
lsof    16532 admin  mem    REG    8,1  2916416 783769 /tmp/mnt/HitachiHDD/entware/usr/lib/locale/locale-archive
lsof    16536 admin  txt    REG    8,1   154532 788981 /tmp/mnt/HitachiHDD/entware/bin/lsof
lsof    16536 admin  mem    REG    8,1    92648 783434 /tmp/mnt/HitachiHDD/entware/lib/libpthread-2.23.so
lsof    16536 admin  mem    REG    8,1   142760 783396 /tmp/mnt/HitachiHDD/entware/lib/ld-2.23.so
lsof    16536 admin  mem    REG    8,1    44456 783389 /tmp/mnt/HitachiHDD/entware/lib/libgcc_s.so.1
lsof    16536 admin  mem    REG    8,1   118088 788987 /tmp/mnt/HitachiHDD/entware/lib/libtirpc.so.3.0.0
lsof    16536 admin  mem    REG    8,1  1219224 783400 /tmp/mnt/HitachiHDD/entware/lib/libc-2.23.so
lsof    16536 admin  mem    REG    8,1  2916416 783769 /tmp/mnt/HitachiHDD/entware/usr/lib/locale/locale-archive

After clearing the error.log, it reports another 1 GB of space free. I don't see much else in the way of processes holding large files open.

Code:
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/root                29.1M     29.1M         0 100% /
devtmpfs                124.7M         0    124.7M   0% /dev
tmpfs                   124.8M      1.7M    123.2M   1% /tmp
/dev/mtdblock4           62.8M     62.0M    768.0K  99% /jffs
/dev/sda2               436.5G    381.9G     54.6G  87% /tmp/mnt/Time_Capsule
/dev/mtdblock4           62.8M     62.0M    768.0K  99% /usr/sbin/dnsapi
/dev/mtdblock4           62.8M     62.0M    768.0K  99% /usr/sbin/acme.sh
/dev/sda1                28.8G     14.8G     12.5G  54% /tmp/mnt/HitachiHDD
/dev/mtdblock4           62.8M     62.0M    768.0K  99% /www/Main_LogStatus_Content.asp
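
One caveat when clearing a log a daemon still holds open: truncating it in place frees the blocks immediately, whereas rm only removes the name and the space stays allocated until the process closes the file. A minimal sketch, using the error.log path from the lsof listing above:

Code:
# Truncate in place; nginx keeps writing to the same, now-empty inode:
: > /tmp/mnt/HitachiHDD/entware/var/log/nginx/error.log
# By contrast, "rm error.log" would leave the blocks allocated until
# nginx exits or reopens its logs.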
 
Hmm. Does tune2fs -l /dev/sda1 provide any clues?

You might have to unmount the drive and fsck it to reclaim orphaned space.
 
Code:
# df -h
Filesystem                Size      Used Available Use% Mounted on
...
/dev/mtdblock4           62.8M     62.0M    768.0K  99% /jffs
/dev/mtdblock4           62.8M     62.0M    768.0K  99% /usr/sbin/dnsapi
/dev/mtdblock4           62.8M     62.0M    768.0K  99% /usr/sbin/acme.sh
/dev/mtdblock4           62.8M     62.0M    768.0K  99% /www/Main_LogStatus_Content.asp

Your JFFS partition is almost full at 99%. You need to find out why. Any large files stored there? Or thousands of small files?

UPDATE:
Use the following command to list the 100 largest files and directories over 100 KB (if you happen to have that many):
Bash:
du -axk /jffs | sort -nr -t ' ' -k 1 | awk -v minKB="100" -F ' ' '{if ($1 > minKB) print $0}' | head -n 100
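
If you'd rather restrict the listing to regular files only, a variant with find should work too (a sketch assuming Entware's findutils; busybox find may not accept -size +100k):

Code:
# Files under /jffs larger than 100 KB, largest first:
find /jffs -xdev -type f -size +100k -exec du -k {} + | sort -nr | head -n 100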
 
Hmm. Does tune2fs -l /dev/sda1 provide any clues?

You might have to unmount the drive and fsck it to reclaim orphaned space.

@ColinTaylor

Code:
# tune2fs -l /dev/sda1
tune2fs 1.42.13 (17-May-2015)
Filesystem volume name:   HitachiHDD
Last mounted on:          /tmp/mnt/HitachiHDD
Filesystem UUID:          b47903cb-b92a-49fa-9062-f0a327fd30bf
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file uninit_bg dir_nlink extra_isize
Filesystem flags:         unsigned_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1917600
Block count:              7669021
Reserved block count:     383451
Free blocks:              3480550
Free inodes:              0
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1022
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8160
Inode blocks per group:   510
Flex block group size:    16
Filesystem created:       Sun May 15 01:19:11 2022
Last mount time:          Sun Mar 24 13:40:55 2024
Last write time:          Sun Mar 24 13:40:55 2024
Mount count:              69
Maximum mount count:      -1
Last checked:             Sun May 15 01:19:11 2022
Check interval:           0 (<none>)
Lifetime writes:          1683 MB
Reserved blocks uid:      0 (user admin)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      6618767b-9c8b-4654-81de-918bfe1e8026
Journal backup:           inode blocks
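
One line in that output that may be worth a second look is "Free inodes: 0". ext4 returns "No space left on device" when inodes run out, even while free blocks remain, and df -h reports only blocks. A quick cross-check, assuming this busybox/Entware df build supports the -i flag:

Code:
# Inode usage rather than block usage; IUsed/IFree can reveal
# exhaustion that df -h hides:
df -i /tmp/mnt/HitachiHDD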

I'm assuming I'll have to disable Entware and the associated mount point, reboot the device, fsck the volume, then re-enable Entware and the mount point, and reboot the device again?

How does one temporarily disable Entware? I don't see any mount points in /etc/fstab. Where are the mount points located under Asuswrt?

Thanks, again!


Gary
 
The tune2fs output looks OK. It says the filesystem is clean and has roughly the same free space as df is showing.

As per post #7, are you sure the error is for the USB drive and not your /jffs filesystem, which is 99% full? What are you doing that generates this error message? Do you get any errors in syslog?
 
If I remove a few unnecessary files from the USB flash drive, it still reports the same available space, but I can then write to it until it is full again.

If this isn't a red flag, I'm not sure what is...

Stop writing to the drive, and recover what you can...
 
The tune2fs output looks OK. It says the filesystem is clean and has roughly the same free space as df is showing.

As per post #7, are you sure the error is for the USB drive and not your /jffs filesystem, which is 99% full? What are you doing that generates this error message? Do you get any errors in syslog?

@ColinTaylor and @sfx2000

I'm positive that it's the /tmp/mnt/HitachiHDD drive. As previously reported, if I remove some unnecessary files and then try writing to the HitachiHDD, it succeeds until the drive is full again, but the available space is still not accurately reported by df.

I don't see any errors in syslog.log:

Code:
# grep -i hitachi /tmp/syslog.log
Mar 24 13:14:41 Data_Center-D448-CA04B43-R syslog: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD
Mar 24 13:14:41 Data_Center-D448-CA04B43-R usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.
Mar 24 13:14:42 Data_Center-D448-CA04B43-R custom_script: Running /jffs/scripts/post-mount (args: /tmp/mnt/HitachiHDD)
Mar 24 13:14:43 Data_Center-D448-CA04B43-R kernel: Adding 2097148k swap on /tmp/mnt/HitachiHDD/myswap.swp.  Priority:-1 extents:6 across:2260988k
Mar 24 13:14:44 Data_Center-D448-CA04B43-R Entware: Starting Entware services on /tmp/mnt/HitachiHDD
Mar 24 13:14:47 Living_Room-C293-CA04B43-R syslog: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD
Mar 24 13:14:47 Living_Room-C293-CA04B43-R usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.
Mar 24 13:15:17 Wiring_Closet-D7A6-CA04B43-R syslog: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD
Mar 24 13:15:17 Wiring_Closet-D7A6-CA04B43-R usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.
Mar 24 13:15:17 Wiring_Closet-D7A6-CA04B43-R custom_script: Running /jffs/scripts/post-mount (args: /tmp/mnt/HitachiHDD)
Mar 24 13:15:18 Wiring_Closet-D7A6-CA04B43-R kernel: Adding 2097148k swap on /tmp/mnt/HitachiHDD/myswap.swp.  Priority:-1 extents:6 across:2260988k
Mar 24 13:15:19 Wiring_Closet-D7A6-CA04B43-R Entware: Starting Entware services on /tmp/mnt/HitachiHDD
Mar 24 13:15:32 Wiring_Closet-AE61-CA04B43-R syslog: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD
Mar 24 13:15:32 Wiring_Closet-AE61-CA04B43-R usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.
Mar 24 13:15:32 Wiring_Closet-AE61-CA04B43-R custom_script: Running /jffs/scripts/post-mount (args: /tmp/mnt/HitachiHDD)
Mar 24 13:15:33 Wiring_Closet-AE61-CA04B43-R kernel: Adding 2097148k swap on /tmp/mnt/HitachiHDD/myswap.swp.  Priority:-1 extents:6 across:2260988k
Mar 24 13:15:34 Wiring_Closet-AE61-CA04B43-R Entware: Starting Entware services on /tmp/mnt/HitachiHDD
Mar 24 13:40:55 usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.
Mar 24 13:40:56 custom_script: Running /jffs/scripts/post-mount (args: /tmp/mnt/HitachiHDD)
Mar 24 13:40:57 Entware: Starting Entware services on /tmp/mnt/HitachiHDD

It appears that disk checks are being performed on the sda1 volume:

Code:
# grep -i sda1 /tmp/syslog.log
Mar 24 13:14:31 Data_Center-D448-CA04B43-R amtm disk-check: Probing 'ext4' on device /dev/sda1
Mar 24 13:14:31 Data_Center-D448-CA04B43-R amtm disk-check: Running disk check v3.0, with command 'e2fsck -p' on /dev/sda1
Mar 24 13:14:41 Data_Center-D448-CA04B43-R amtm disk-check: Disk check done on /dev/sda1
Mar 24 13:14:41 Data_Center-D448-CA04B43-R syslog: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD
Mar 24 13:14:41 Data_Center-D448-CA04B43-R usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.
Mar 24 13:14:41 Data_Center-D448-CA04B43-R kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: user_xattr
Mar 24 13:14:43 Living_Room-C293-CA04B43-R amtm disk-check: Probing 'ext4' on device /dev/sda1
Mar 24 13:14:43 Living_Room-C293-CA04B43-R amtm disk-check: Running disk check v2.9, with command 'e2fsck -p' on /dev/sda1
Mar 24 13:14:47 Living_Room-C293-CA04B43-R amtm disk-check: Disk check done on /dev/sda1
Mar 24 13:14:47 Living_Room-C293-CA04B43-R syslog: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD
Mar 24 13:14:47 Living_Room-C293-CA04B43-R usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.
Mar 24 13:14:47 Living_Room-C293-CA04B43-R kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: user_xattr
Mar 24 13:14:54 Wiring_Closet-61EC-CA04B43-R custom_script: Running /jffs/scripts/pre-mount (args: /dev/sda1 ext4)
Mar 24 13:14:54 Wiring_Closet-61EC-CA04B43-R amtm disk-check: Probing 'ext4' on device /dev/sda1
Mar 24 13:14:54 Wiring_Closet-61EC-CA04B43-R amtm disk-check: Running disk check v3.2, with command 'e2fsck -p' on /dev/sda1
Mar 24 13:15:16 Wiring_Closet-D7A6-CA04B43-R amtm disk-check: Disk check done on /dev/sda1
Mar 24 13:15:17 Wiring_Closet-D7A6-CA04B43-R syslog: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD
Mar 24 13:15:17 Wiring_Closet-D7A6-CA04B43-R kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: user_xattr
Mar 24 13:15:17 Wiring_Closet-D7A6-CA04B43-R usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.
Mar 24 13:15:22 Wiring_Closet-AE61-CA04B43-R amtm disk-check: Probing 'ext4' on device /dev/sda1
Mar 24 13:15:22 Wiring_Closet-AE61-CA04B43-R amtm disk-check: Running disk check v3.0, with command 'e2fsck -p' on /dev/sda1
Mar 24 13:15:31 Wiring_Closet-AE61-CA04B43-R amtm disk-check: Disk check done on /dev/sda1
Mar 24 13:15:32 Wiring_Closet-AE61-CA04B43-R syslog: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD
Mar 24 13:15:32 Wiring_Closet-AE61-CA04B43-R usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.
Mar 24 13:15:32 Wiring_Closet-AE61-CA04B43-R kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: user_xattr
Mar 24 13:16:17 Data_Center-5DA5-CA04B43-R amtm disk-check: Disk check done on /dev/sda1
Mar 24 13:16:17 Data_Center-5DA5-CA04B43-R syslog: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/SanDiskSDCZ
Mar 24 13:16:17 Data_Center-5DA5-CA04B43-R usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/SanDiskSDCZ.
Mar 24 13:16:17 Data_Center-5DA5-CA04B43-R kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: user_xattr
Mar 24 13:40:47 custom_script: Running /jffs/scripts/pre-mount (args: /dev/sda1 ext4)
Mar 24 13:40:50 amtm disk-check: Probing 'ext4' on device /dev/sda1
Mar 24 13:40:50 amtm disk-check: Running disk check v2.9, with command 'e2fsck -p' on /dev/sda1
Mar 24 13:40:55 amtm disk-check: Disk check done on /dev/sda1
Mar 24 13:40:55 usb: USB ext4 fs at /dev/sda1 mounted on /tmp/mnt/HitachiHDD.

Why does the tune2fs output state that the sda1 volume was last checked in May 2022?
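
Most likely because amtm invokes e2fsck -p: in preen mode a filesystem already marked clean is skipped rather than fully checked, and the superblock's "Last checked" stamp is only updated by a check that actually completes. A sketch of how to refresh it, assuming the volume can be unmounted first:

Code:
# Run only with /dev/sda1 unmounted; a completed forced check
# updates the "Last checked" timestamp:
e2fsck -f /dev/sda1
tune2fs -l /dev/sda1 | grep -i 'last checked'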

FYI... I'm not worried about the data; I regularly back up the HitachiHDD. I'm more concerned with the disk incorrectly reporting available free space.

Thanks, again, for the assistance.


Gary
 
Hitachi HDDs were great in the '90s.

Time to replace it?
 
LOL @L&LD Nice to see you're still around making smart remarks. 😂

BTW... The Hitachi HTS725050A9A364 was manufactured in the 2000s.
 
That is still well past the best use-by date. ;)
 
@L&LD I think best-use-by dates are a gimmick. I have a server with a hard drive that has been in operation for almost 25 years and still functions properly. I do make regular backups of the data, as I expect that it will fail someday.

In the case of this USB drive attached to my Asuswrt router, I'd like to confirm whether it's the operating system misreporting the space or the drive itself. This drive has only been in use for about a year.

Thank you for your input.
 
@garycnew So this is actually a hard drive and not a "USB Flash Drive"? Your system log is confusing because it appears to show five different machines that all have a drive with the same name.

To answer your earlier question: it looks like the only running process on that drive is nginx. So if you shut that down (and cd into a different filesystem) you should be able to unmount it. Then you can run e2fsck -f /dev/sda1.
 
This drive has only been in use for about a year.

Yes, in that case, I agree 100% with you. That 25-year-old HDD must be obsolete by now, no?
 
Your JFFS partition is almost full at 99%. You need to find out why. Any large files stored there? Or thousands of small files?

UPDATE:
Use the following command to list the 100 largest files and directories over 100 KB (if you happen to have that many):
Bash:
du -axk /jffs | sort -nr -t ' ' -k 1 | awk -v minKB="100" -F ' ' '{if ($1 > minKB) print $0}' | head -n 100

@Martinski @ColinTaylor @sfx2000 @Tech9 @L&LD

It's nice to see that the whole gang is still around. But where is @SomeWhereOverTheRainBow?

I went through and cleaned up the /jffs volume, wondering if it being 99% full was somehow related to the issue.

Code:
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/root                29.1M     29.1M         0 100% /
devtmpfs                124.7M         0    124.7M   0% /dev
tmpfs                   124.8M      1.8M    123.1M   1% /tmp
/dev/mtdblock4           62.8M      9.1M     53.6M  15% /jffs
/dev/sda2               436.5G    381.5G     55.0G  87% /tmp/mnt/Time_Capsule
/dev/mtdblock4           62.8M      9.1M     53.6M  15% /usr/sbin/dnsapi
/dev/mtdblock4           62.8M      9.1M     53.6M  15% /usr/sbin/acme.sh
/dev/sda1                28.8G     14.8G     12.5G  54% /tmp/mnt/HitachiHDD
/dev/mtdblock4           62.8M      9.1M     53.6M  15% /www/Main_LogStatus_Content.asp

It seems cleaning up the /jffs volume had no effect on the /tmp/mnt/HitachiHDD issue; even opkg, whose temp directory lives on that drive, still hits "No space left on device":

Code:
# opkg install fuser
Collected errors:
 * opkg_conf_load: Creating temp dir /opt/tmp/opkg-l5sYlD failed: No space left on device.

@ColinTaylor Correct... I transitioned from a SanDiskSDHC to the HitachiHDD a little over a year ago. I'll shut down Nginx and see if I can force-check the drive.

Any ideas other than the router hiding space and replacing the drive? ;)

Thanks, again.


Gary
 
All:

I stumbled across @latenitetech's wiki article, which answered many of my questions on the subject.


I ran the following recommended commands to stop all Entware services and disable swap on the Hitachi drive:

Code:
# /opt/etc/init.d/rc.unslung stop
 Checking nginx...              dead.
 Checking tor...              dead.
 Checking rpcbind...              dead.
 Checking syslog-ng...              dead.

# swapoff /tmp/mnt/HitachiHDD/myswap.swp

# umount /dev/sda1

# e2fsck /dev/sda1
e2fsck 1.42.13 (17-May-2015)
HitachiHDD: clean, 1917594/1917600 files, 4003673/7669021 blocks
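
That summary line is itself a clue: "1917594/1917600 files" is used/total inodes, i.e. only six inodes free, consistent with the "Free inodes: 0" in the earlier tune2fs output. A way to read the same figure straight from the superblock, assuming dumpe2fs shipped alongside tune2fs:

Code:
# Superblock summary only (-h); look for the "Free inodes" line:
dumpe2fs -h /dev/sda1 | grep -i 'free inodes'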

However, I ran into "Memory allocation failed" errors when I attempted to force-check the sda1 volume:

Code:
# e2fsck -f /dev/sda1
e2fsck 1.42.13 (17-May-2015)
Pass 1: Checking inodes, blocks, and sizes
Failed to iterate extents in inode 1766638
    (op EXT2_EXTENT_DOWN, blk 0, lblk 0): Memory allocation failed
Clear inode<y>? yes
Restarting e2fsck from the beginning...
Pass 1: Checking inodes, blocks, and sizes
Error while reading over extent tree in inode 1837665: Memory allocation failed
Clear inode<y>? yes
Inode 1837665, i_blocks is 8, should be 0.  Fix<y>? yes
Error while reading over extent tree in inode 1837898: Memory allocation failed
Clear inode<y>? yes
ext2fs_write_inode: Cannot allocate memory while writing inode 1837898 in check_blocks_extents

HitachiHDD: ***** FILE SYSTEM WAS MODIFIED *****
e2fsck: aborted

HitachiHDD: ***** FILE SYSTEM WAS MODIFIED *****

Code:
# e2fsck -f /dev/sda1
e2fsck 1.42.13 (17-May-2015)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Error allocating icount structure: Memory allocation failed
e2fsck: aborted

A plain e2fsck (without -f) still reports that the HitachiHDD drive is clean, though.

Code:
# e2fsck /dev/sda1
e2fsck 1.42.13 (17-May-2015)
HitachiHDD: clean, 1917594/1917600 files, 4003673/7669021 blocks

The sda1 volume is only 32 GB in total.

Any ideas why I'm running into these "Memory allocation failed" errors?
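
e2fsck builds its inode and block tracking structures in RAM, and a router has little of it, especially with the swap file disabled above (it lives on the very drive being checked). One documented escape hatch is the [scratch_files] section of e2fsck.conf, which tells e2fsck to spill those tables to disk. A sketch under assumptions: this build reads /etc/e2fsck.conf (writable tmpfs on Asuswrt), and the scratch directory below is a placeholder on the other, still-mounted drive:

Code:
# Placeholder scratch dir on the second (still-mounted) drive:
mkdir -p /tmp/mnt/Time_Capsule/e2fsck-scratch
cat > /etc/e2fsck.conf <<'EOF'
[scratch_files]
directory = /tmp/mnt/Time_Capsule/e2fsck-scratch
EOF
e2fsck -f /dev/sda1

Alternatively, temporarily enabling a swap file on the other drive before the check may give e2fsck enough memory.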

Thanks, again.


Gary
 