Bug 1062584 - CVE-2014-0069 kernel: cifs: uncached writes don't handle bad user addresses correctly [fedora-20]
Summary: CVE-2014-0069 kernel: cifs: uncached writes don't handle bad user addresses correctly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 20
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On: 1062578
Blocks: 1062585
 
Reported: 2014-02-07 11:30 UTC by Jeff Layton
Modified: 2014-06-18 07:43 UTC
CC List: 6 users

Fixed In Version: kernel-3.13.3-201.fc20
Doc Type: Release Note
Doc Text:
Clone Of: 1062578
Cloned To: 1062585
Environment:
Last Closed: 2014-02-17 21:04:40 UTC



Description Jeff Layton 2014-02-07 11:30:58 UTC
+++ This bug was initially created as a clone of Bug #1062578 +++

Al Viro discovered the following problem in cifs.ko while overhauling some of the VFS layer write code:

>         Take a look at cifs_iovec_writev().  We have decided that
> the next chunk to send should cover nr_pages worth of data, allocated
> the pages and
>                 save_len = cur_len;
>                 for (i = 0; i < nr_pages; i++) {
>                         copied = min_t(const size_t, cur_len, PAGE_SIZE);
>                         copied = iov_iter_copy_from_user(wdata->pages[i], &it,
>                                                          0, copied);
>                         cur_len -= copied;
>                         iov_iter_advance(&it, copied);
>                 }
>                 cur_len = save_len - cur_len;
> tried to copy from iovec into said pages.  What happens if iovec spans
> an munmapped area?  It'll copy as much as it can and from that point on
> iov_iter_copy_from_user() will be returning 0.  And iov_iter_advance(&it, 0)
> will, of course, do nothing.
>
> So we'll end up with cur_len well less than nr_pages * PAGE_SIZE.  Then we
> start to populate a structure that will be passed to (async) write request:
>                 wdata->sync_mode = WB_SYNC_ALL;
>                 wdata->nr_pages = nr_pages;
>                 wdata->offset = (__u64)offset;
>                 wdata->cfile = cifsFileInfo_get(open_file);
>                 wdata->pid = pid;
>                 wdata->bytes = cur_len;
>                 wdata->pagesz = PAGE_SIZE;
>                 wdata->tailsz = cur_len - ((nr_pages - 1) * PAGE_SIZE);
>                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>                 rc = cifs_uncached_retry_writev(wdata);
>
> and we have a negative number in wdata->tailsz.  I don't have a setup
> ready for testing, so this is strictly from RTFS, but AFAICS we'll send
> nr_pages-1 full pages out (some - with valid data, some - with whatever
> page_alloc() has left there) and then hit the last one.  We kmap it and
> set an iovec with base at the address we'd mapped it on and length
> a bit under 4Gb.  Then we call kernel_sendmsg() on that turd.  Note that
> verify_iovec() isn't called - iovec is already kernel-side anyway and
> kernel_sendmsg() assumes it to be valid.
>
> Again, I hadn't tested it, but it looks like a user-triggerable oops at
> the very least.  All that is needed is writable file on cifs volume
> mounted with cache=strict...  

He's quite correct (though it's even easier to reproduce if you mount with cache=none). I have a reproducer that can trivially crash the kernel when run as an unprivileged user and a patch that fixes the problem. I'll attach those in a bit.

Since fixing this is holding up some important work in other areas, I'm going to propose that we disclose this in one week (February 14th).

--- Additional comment from Jeff Layton on 2014-02-07 06:22:36 EST ---

This is the reproducer for the bug. Simply mount up a cifs share with 'cache=none', and then run this with the path to a scratch file on the mount as the first argument.

--- Additional comment from Jeff Layton on 2014-02-07 06:29:23 EST ---

This patch fixes the bug. It should apply fairly cleanly to most recent kernels (including RHEL6/7).

Comment 1 Josh Boyer 2014-02-14 17:45:22 UTC
Fixed in git.

Comment 2 Fedora Update System 2014-02-15 15:24:13 UTC
kernel-3.13.3-201.fc20 has been submitted as an update for Fedora 20.
https://admin.fedoraproject.org/updates/FEDORA-2014-2576/kernel-3.13.3-201.fc20

Comment 3 Fedora Update System 2014-02-16 23:23:14 UTC
Package kernel-3.13.3-201.fc20:
* should fix your issue,
* was pushed to the Fedora 20 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing kernel-3.13.3-201.fc20'
as soon as you are able to, then reboot.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2014-2576/kernel-3.13.3-201.fc20
then log in and leave karma (feedback).

Comment 4 Fedora Update System 2014-02-17 21:04:40 UTC
kernel-3.13.3-201.fc20 has been pushed to the Fedora 20 stable repository.  If problems still persist, please make note of it in this bug report.

