
I'm using an Airport Extreme 802.11n (2nd generation), running firmware version 7.6.3.

I'm connecting to it from a Mac mini running OS X 10.8.4.

I'm trying to diagnose a backup software issue where the software wants to open a number of files in the directory structure of a disk attached to and shared from the Airport Extreme. There are around 420 files in the directory structure.

I replicated the directory structure on a local disk and wrote a test program (in Java, for what it's worth) that opened each file in the structure, i.e. obtained a file descriptor for it and kept it open.
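
For reference, the test program boils down to something like this. It's a simplified sketch of what AFPTest does rather than the exact code; it just walks the directory tree and holds a FileInputStream open for every file it finds:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class AFPTest {
    // Streams are held open deliberately, so every file keeps its descriptor.
    private static final List<FileInputStream> openStreams = new ArrayList<FileInputStream>();

    public static void main(String[] args) throws IOException {
        // args[0] is the root of the replicated directory structure
        // (either the local copy or the mounted Airport Extreme volume).
        openAll(new File(args[0]));
        System.out.println("Opened " + openStreams.size() + " files");
    }

    private static void openAll(File dir) throws IOException {
        File[] entries = dir.listFiles();
        if (entries == null) {
            return;
        }
        for (File entry : entries) {
            if (entry.isDirectory()) {
                openAll(entry);
            } else {
                // This is the call that eventually fails with "Too many open files".
                openStreams.add(new FileInputStream(entry));
            }
        }
    }
}

I run it first against the local copy of the directory structure and then against the same structure on the mounted volume.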

I've already raised the default launchctl limits and ulimit settings on OS X.

Running the test program against the local disk worked, but it failed against the Airport Extreme volume over AFP, and also when I mounted the Airport Extreme volume over SMB.

This is the failure message over SMB (when mounting over AFP, the error was "Too many open files in system"):

Exception in thread "main" java.io.FileNotFoundException: /Volumes/XXX/Crashplan    Backup/485670197192556908/cpbf0000000000006343267/cpbdf (Too many open files)
  at java.io.FileInputStream.open(Native Method)
  at java.io.FileInputStream.<init>(FileInputStream.java:120)
  at java.io.FileInputStream.<init>(FileInputStream.java:79)
  at AFPTest.main(AFPTest.java:105)

Is this a known issue, and is there any way I can increase the number of open files the Airport Extreme can serve simultaneously?

My ulimit -a settings are:

$ ulimit -a
Maximum size of core files created                           (kB, -c) 0
Maximum size of a process’s data segment                     (kB, -d) unlimited
Maximum size of files created by the shell                   (kB, -f) unlimited
Maximum size that may be locked into memory                  (kB, -l) unlimited
Maximum resident set size                                    (kB, -m) unlimited
Maximum number of open file descriptors                          (-n) 1000000
Maximum stack size                                           (kB, -s) 8192
Maximum amount of cpu time in seconds                   (seconds, -t) unlimited
Maximum number of processes available to a single user           (-u) 709
Maximum amount of virtual memory available to the shell      (kB, -v) unlimited

And launchctl limit:

$ launchctl limit
cpu         unlimited      unlimited
filesize    unlimited      unlimited
data        unlimited      unlimited
stack       8388608        67104768
core        0              unlimited
rss         unlimited      unlimited
memlock     unlimited      unlimited
maxproc     709            1064
maxfiles    1000000        1000000
  • Just to confirm, what do you get for 'ulimit -a' and 'launchctl limit maxfiles' – AllInOne Aug 02 '13 at 19:24
  • I added the output of those commands to the original posting. – cjs Aug 02 '13 at 21:02
  • Found this discussion which might give some insight (no solution tho): http://help.bombich.com/discussions/questions/4245-help-an-error-occurred-while-ccc-was-informationen-fr-dieses-objekt-auf-der-quelle-erhalten-receiving-information-for-an-object-on-the-source-or-sth-like-that-and-other-errors – AllInOne Aug 02 '13 at 21:11
  • Is there any way you can modify your app to attempt to perform that transfer in smaller batches? As a troubleshooting step, can you modify your app to try to copy just one file? If so, does it succeed? – AllInOne Aug 02 '13 at 21:14
  • I can alter my test app, but I'm having an issue with CrashPlan (and have worked with them on this). CrashPlan, during its synchronization phase, opens all of the archive files, which, in my case, numbered in the 400s. They advised not using the NAS feature of the AE as a backup archive. My ultimate problem was solved with a direct attach of the disk, and utilization of the network backup features of CrashPlan. – cjs Aug 05 '13 at 19:39
  • Is this still something you can reproduce on 10.10? – bmike Apr 20 '15 at 19:10
