
Bug 1512714

Summary: fs.go:382] Stat fs failed. Error: no such file or directory
Product: OpenShift Container Platform
Component: Storage
Version: 3.7.0
Target Release: 3.9.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Reporter: Eric Paris <eparis>
Assignee: Tomas Smetana <tsmetana>
QA Contact: Jianwei Hou <jhou>
CC: aos-bugs, aos-storage-staff
Type: Bug
Last Closed: 2017-12-11 10:21:52 UTC

Description Eric Paris 2017-11-13 22:43:41 UTC
atomic-openshift-node-3.7.4-1.git.0.472090f.el7.x86_64

I see numerous of these back to back, and then every 60 seconds or so another batch of log-spam messages like:

Nov 13 22:36:21 ip-172-31-71-195.us-east-2.compute.internal atomic-openshift-node[106479]: E1113 22:36:21.428420  106479 fs.go:382] Stat fs failed. Error: no such file or directory

I have no idea what it means or what to do about it. What does it mean? What should I do about it? I'm sure I can find the message on more nodes than the one described above, if needed.

Comment 1 Tomas Smetana 2017-11-14 09:53:55 UTC
The only thing that can emit this error is cadvisor. I wish it would at least say which "file or directory" it is missing...

Comment 2 Tomas Smetana 2017-11-14 15:44:50 UTC
There seems to be a similar issue discussed upstream in Kubernetes: https://github.com/kubernetes/kubernetes/issues/35062

Comment 3 Tomas Smetana 2017-11-15 09:04:22 UTC
Are there some nodes in the cluster where you *don't* see those log messages? We might find what is different there. It looks like a kubelet change/restart might make cadvisor start logging those errors...

Comment 5 Tomas Smetana 2017-12-11 10:21:52 UTC
The cadvisor problem was fixed with https://github.com/kubernetes/kubernetes/pull/17883; closing. This needs someone who knows cadvisor...

*** This bug has been marked as a duplicate of bug 1511576 ***