
IRC log for #gluster, 2016-09-05

All times shown according to UTC.

Time Nick Message
00:10 johnmilton joined #gluster
00:13 primehaxor joined #gluster
00:18 armyriad joined #gluster
00:43 victori joined #gluster
00:49 Alghost_ joined #gluster
01:04 shdeng joined #gluster
01:08 Alghost joined #gluster
01:25 beemobile joined #gluster
01:36 kpease joined #gluster
01:38 aj__ joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 victori joined #gluster
02:00 ZachLanich joined #gluster
02:02 hgichon joined #gluster
02:22 beemobile Hi, I'm looking for some advice regarding rebalancing. We initially provisioned 2 volumes (distributed/replicated) with bricks evenly spread across 12 nodes. First volume with 12 x 54TB bricks and a second with 24 x 54TB bricks. We're ready to add another 12 x 54TB bricks to the first volume.
02:23 beemobile Is it advisable and more efficient to add all of the 12 bricks and rebalance all at once?
02:23 beemobile Or is it preferable to add/rebalance more gradually?
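
(A minimal sketch of the expand-then-rebalance sequence being asked about, assuming a volume named vol1 and replica 2; the hostnames, brick paths and replica count are placeholders, not details from this log.)

    # add the new bricks in replica-set-sized groups (pairs for replica 2)
    gluster volume add-brick vol1 server13:/data/brick1 server14:/data/brick1
    # optionally fix the directory layout first without moving data...
    gluster volume rebalance vol1 fix-layout start
    # ...then migrate data and watch progress
    gluster volume rebalance vol1 start
    gluster volume rebalance vol1 status
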
02:24 d0nn1e joined #gluster
02:25 Lee1092 joined #gluster
02:53 ahino joined #gluster
02:59 victori joined #gluster
03:10 victori joined #gluster
03:24 arcolife joined #gluster
03:43 B21956 joined #gluster
03:50 gem joined #gluster
03:55 riyas joined #gluster
04:22 rafi joined #gluster
04:58 atinmu joined #gluster
05:16 victori joined #gluster
05:22 om joined #gluster
05:24 aravindavk joined #gluster
05:36 gem joined #gluster
05:45 jiffin joined #gluster
05:46 Philambdo joined #gluster
05:47 jiffin1 joined #gluster
05:53 [diablo] joined #gluster
06:02 ankitraj joined #gluster
06:10 mhulsman joined #gluster
06:14 mhulsman joined #gluster
06:16 mhulsman1 joined #gluster
06:19 mhulsman joined #gluster
06:20 mhulsman1 joined #gluster
06:22 mhulsman joined #gluster
06:33 victori joined #gluster
06:38 jtux joined #gluster
06:39 plarsen joined #gluster
06:56 atinmu joined #gluster
06:59 deniszh joined #gluster
06:59 om joined #gluster
07:01 aravindavk joined #gluster
07:16 mbukatov joined #gluster
07:17 jiffin1 joined #gluster
07:20 jiffin joined #gluster
07:23 jri joined #gluster
07:25 mhulsman joined #gluster
07:30 jiffin1 joined #gluster
07:33 auzty joined #gluster
07:37 mhulsman1 joined #gluster
07:39 hchiramm joined #gluster
07:44 creshal joined #gluster
07:45 Dave joined #gluster
07:46 fsimonce joined #gluster
07:48 ivan_rossi joined #gluster
07:51 morse joined #gluster
07:54 cloph_away joined #gluster
08:01 Mallo joined #gluster
08:01 jiffin1 joined #gluster
08:02 Mallo 'morning everyone !
08:03 Mallo quick question. I'm evaluating glusterfs for a production app, and I need to create snapshots hourly. The snapshot creation seems to be blocking. Am I right ? With an rsync running, it takes approximately 30 seconds.
08:04 Mallo Will the snapshot creation time increase with server load and/or amount of data ?
08:05 jri_ joined #gluster
08:05 marbu joined #gluster
08:06 Mallo (by "blocking" I mean that no data seems to be written, and rsync seems to be "waiting" during that time)
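
(One way to answer the "does it grow with load/data" question is simply to time a snapshot under a representative workload; a rough sketch, assuming the bricks sit on LVM thin pools, which gluster snapshots require, and a volume named vol1.)

    # run while the rsync workload is active and compare against an idle run
    time gluster snapshot create hourly-test vol1
    gluster snapshot list vol1
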
08:06 jiffin joined #gluster
08:10 jiffin1 joined #gluster
08:12 jiffin1 joined #gluster
08:14 armyriad joined #gluster
08:15 armyriad joined #gluster
08:24 jiffin joined #gluster
08:25 jiffin1 joined #gluster
08:25 [fre] Dear Gluster-geeks
08:26 [fre] we encounter performance issues on samba running on gluster.
08:27 sflfr joined #gluster
08:27 sflfr left #gluster
08:27 sflfr joined #gluster
08:28 sflfr left #gluster
08:28 [fre] having almost 200 tcp-connections opened by gluster, rngd eating up more than 30% cpu.
08:28 sflfr joined #gluster
08:28 sflfr left #gluster
08:34 jiffin1 joined #gluster
08:38 jiffin joined #gluster
08:42 victori joined #gluster
08:55 harish joined #gluster
09:10 congpine joined #gluster
09:13 congpine good morning, I have a quick question regarding using hard links with GlusterFS (Distributed-Replica volume). Do you experience any issues like split-brain or slow file lookups ?
09:13 congpine Our apps seem to use hardlinks a lot and then they will copy and remove this hard link.
09:13 congpine i'm not sure if glusterfs "likes" those operations.
09:19 aj__ joined #gluster
09:34 Piotr123 joined #gluster
09:34 harish joined #gluster
09:35 TZaman Anyone tried to build Gluster on Alpine linux?
09:36 TZaman `CPPFLAGS="-I/usr/include/tirpc" LDFLAGS="-L/usr/include/tirpc" CFLAGS="-L/usr/include/tirpc" ./configure` this is the command I use because Alpine has its own "tirpc" package
09:36 TZaman however, I get this error when building:
09:36 TZaman rpcsvc.c:973:20: error: implicit declaration of function 'xdr_sizeof'
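
(One thing that stands out above: -L/usr/include/tirpc points the linker, and CFLAGS, at an include directory. A hedged sketch of a more conventional invocation, assuming libtirpc's library lives under /usr/lib; the implicit-declaration error itself suggests Alpine's tirpc headers may simply not declare xdr_sizeof, so the flags alone may not be enough.)

    # headers from tirpc, library path and -ltirpc passed to the linker
    CPPFLAGS="-I/usr/include/tirpc" LDFLAGS="-L/usr/lib" LIBS="-ltirpc" ./configure
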
09:37 marbu joined #gluster
09:39 beemobile joined #gluster
09:44 jkroon joined #gluster
09:49 victori joined #gluster
09:51 jiffin joined #gluster
09:54 thomas_ joined #gluster
09:55 Guest84899 left #gluster
09:56 Arrfab ndevos: online ?
09:57 ndevos Arrfab: _o/
09:58 Arrfab ndevos: any idea why the procedure for gluster vol rename doesn't work ?
09:59 Arrfab it *seems* to work but when mounted, the vol seems now divided by two (from a disk size pov)
09:59 Arrfab and also, what can be the root cause for gluster to be really so slow (unusable btw)
10:01 ndevos Arrfab: I would think renaming the volume should work fine... you just need to make sure there are no glusterd processes running, and you need to restart all glusterfsd/brick processes too
10:02 Arrfab ndevos: all is stopped, and only glusterd is restarted (but it seems it brings glusterfsd online automatically)
10:02 ndevos Arrfab: yes, glusterd starts the glusterfsd processes for the volumes that are not in 'stopped' state
10:03 ndevos Arrfab: glusterd tries to sync the volume configuration to others, if there was a glusterd still running, the old volume may still be around somewhere?
10:03 Arrfab does that procedure need to be done on only one node ?
10:04 Arrfab but everything is stopped on all nodes
10:04 ndevos no, all the rename operations need to be done on all servers, and none of the glusterd may be running
10:04 Arrfab modified on all of them, and glusterd restarted on all nodes
10:04 Arrfab yeah, what I did
10:04 ndevos ok, sounds correct then
10:04 Arrfab and it seems it worked, as gluster volume list shows only the new name
10:04 ndevos thats good too
10:04 Arrfab but when mounted, size isn't the correct one
10:05 Arrfab it's showing the size of only one brick
10:05 ndevos maybe some glusterfsd processes did not get started?
10:05 Arrfab why would gluster volume status show all the bricks active then ? (with their pid)
10:05 ndevos if it is a distributed volume, and only some of the distributed bricks are there, the size of the volume is rediced
10:05 ndevos *reduced
10:05 Arrfab distributed+replicated
10:06 ndevos can the client connect to all the bricks?
10:06 Arrfab how can you test that ?
10:06 ndevos if it can not connect, it should complain about that in the log
10:06 ndevos in /var/log/glusterfs/<mount-point>.log
10:06 jiffin1 joined #gluster
10:06 skoduri joined #gluster
10:08 Arrfab failed to get the port number for remote subvolume
10:11 ndevos hmm, and 'gluster volume status <volume>' shows the right ports?
10:11 Arrfab yes
10:11 Arrfab well, a port, don't know if that's the right one :-)
10:11 Arrfab the other issue is also the speed that is awful
10:12 ndevos you could check with 'ss' if the brick actually listens on it
10:12 ndevos speed will be horrible if the volume (from client pov) is in some degraded state
10:13 ndevos once the client can connect to all bricks, it should be faster
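
(The port check being suggested, as a sketch; vol1 and the port number 49152 are placeholders, use whatever 'volume status' actually reports.)

    gluster volume status vol1            # ports each brick claims to listen on
    ss -lntp | grep 49152                 # confirm a glusterfsd is really listening there
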
10:13 Arrfab worth noting that clients are the servers too
10:14 Arrfab and how can I see if volume is in degraded state ?
10:14 ndevos thats the case when clients can not connect to all bricks
10:14 ndevos the logic is all done in the client, so the clients view of the volume is important
10:15 Arrfab ndevos: so how can you see that oo ?
10:15 Arrfab s/oo/too/
10:15 glusterbot What Arrfab meant to say was: ndevos: so how can you see that too ?
10:15 ndevos you can only really see that in the logs
10:15 Arrfab aarrrrrgh
10:15 jiffin1 joined #gluster
10:15 Arrfab yeah, it would be too good to be true if "gluster volume status" would show that
10:16 ndevos but you can mount the volume on a different path, and pass "-o log-level=DEBUG" to get more logging
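
(A sketch of that debug mount, with placeholder server/volume names; the client log name is derived from the mount path with slashes turned into dashes.)

    mkdir -p /mnt/gluster-debug
    mount -t glusterfs -o log-level=DEBUG server1:/vol1 /mnt/gluster-debug
    less /var/log/glusterfs/mnt-gluster-debug.log
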
10:16 [fre] ndevos, Arrfab, guys... do you have any idea how to troubleshoot performance issues with gluster & samba?
10:17 ndevos the gluster command itself is just talking to the management daemon... it would be quite an extension to have it do client-side stuff too
10:17 Arrfab ndevos: do you have a list of things that needs to be modified for the gluster rename ?
10:18 ndevos [fre]: not really, there are many things that can improve performance for Samba, but I think it depends on the kind of workload and features used
10:19 ndevos Arrfab: no, I dont have that... a 'find' and 'grep' for the volume name should point it out
10:20 Arrfab ndevos: I thought that it would work, but obviously something is missing : http://pastebin.centos.org/53056/
10:20 glusterbot Title: #53056 • CentOS Pastebin (at pastebin.centos.org)
10:22 ndevos Arrfab: that definitely is the idea, but there may be some other subdirs under /var/lib/glusterd/vols/<volume>/ ?
10:22 ndevos Arrfab: also, does pastebin.centos.org allow posting through 'nc' like termbin.com ?
10:22 Arrfab don't think so
10:23 Arrfab also, that's the reason why it goes into /vols/$gluster_vol_new_name/
10:24 [fre] ndevos, loads seem quite high. rngd running between 35 & 45 % of cpu. smbd similar. glusterfsd nearly 50%.
10:25 ndevos Arrfab: is the name of the volume part of the path that is used for bricks? maybe the directory for the bricks was changed in the configuration, but not on the system?
10:25 beemobile joined #gluster
10:25 Arrfab ndevos: hmm, maybe I found something strange : service glusterfsd stop doesn't stop glusterfs processes :-(
10:25 [fre] can't imagine why the randomNumberStuff must be that high.
10:25 Arrfab ndevos: brick paths will remain the same, reason why the sed commands only replace volumes and subvolumes
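
(A rough reconstruction of the offline rename being discussed here, not a supported procedure; oldvol/newvol are placeholders, and the blind sed only works because, as noted, the brick paths do not contain the volume name.)

    # on every server, with glusterd stopped and brick processes killed
    systemctl stop glusterd && pkill glusterfsd
    cd /var/lib/glusterd/vols && mv oldvol newvol
    # the volume name appears in volfile names and contents; brick paths stay untouched
    grep -rl oldvol newvol/ | xargs sed -i 's/oldvol/newvol/g'
    find newvol -depth -name '*oldvol*' | while read -r f; do mv "$f" "${f//oldvol/newvol}"; done
    systemctl start glusterd
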
10:26 Arrfab ndevos: trying to find what's the issue
10:26 ndevos Arrfab: yeah, stopping the service only stops glusterd, the intention (and requirement from some users) is that a service restart does not affect the data-path
10:26 Arrfab but so service glusterfsd doesn't kill processes ?
10:26 Arrfab a "pkill glusterfs" is enough and safe ?
10:26 ndevos Arrfab: the service is called 'glusterd', and not all processed get killed indeed, the brick processes keep running
10:27 ndevos yea, pkill does the trick
10:27 Arrfab reason why I mentioned also "service glusterfsd stop" :-)
10:27 Arrfab which doesn't stop anything
10:27 ndevos if you 'pkill glusterfs', it will also kill the client-mounts on the system
10:27 Arrfab nothing is mounted
10:27 ndevos is there a glusterfsd-service?
10:28 Arrfab yes, from your pkg ;-)
10:29 Arrfab ndevos: updated to gluster 3.8.3-1 built on cbs btw
10:29 Arrfab and got another undocumented issue when updating from 3.6.1 to 3.8.3 too : https://bugzilla.redhat.com/show_bug.cgi?id=1191176
10:29 ndevos [fre]: operations like directory listing is pretty heavy through Samba, but where the rngd load comes from, I dont know
10:29 glusterbot Bug 1191176: urgent, unspecified, ---, bugs, CLOSED CURRENTRELEASE, Since 3.6.2: failed to get the 'volume file' from server
10:29 Arrfab so much fun today with gluster :-p
10:30 ndevos Arrfab: and I have fun with Gluster *every*day*
10:30 [fre] ndevos, are there fs-parameters I could set to enhance performance for a lot of small files?
10:30 Arrfab ndevos: well, not sure we have the same definition of "fun" though ..
10:31 ndevos [fre]: it's mostly options in the Samba configuration, and maybe some for the Gluster volume, but I can not find any docs for it
10:31 ndevos [fre]: oh, maybe check the 'red hat gluster storage' docs on access.redhat.com, they might be more complete
10:32 ndevos Arrfab: my 'fun' mostly is about reviewing code changes, commenting and grading them with -1's
10:33 * ndevos hits head against table
10:38 Arrfab ndevos: tested again that procedure and I see the volume, but only half the size
10:46 ndevos Arrfab: weird... I cant think of anything that would cause that
10:46 ndevos Arrfab: can you email me a tarball with the glusterd configuration? I'll run the script and check the outcome here
10:47 Philambdo joined #gluster
10:49 Arrfab ndevos: actually I have to rollback and restart the whole thing on previous gluster vol : have lost enough time right now
10:50 Arrfab ndevos: I'll replicate the same setup elsewhere and we can work on this
10:50 ndevos Arrfab: ok, sure
10:55 Arrfab ndevos: diving into quick lunch mode, but apart from that gluster vol rename operation, I'd like to understand why gluster operations can be so slow
10:55 Arrfab let's see after my lunch
10:55 ndevos Arrfab: enjoy your lunch!
10:57 Philambdo joined #gluster
10:59 beemobile joined #gluster
11:02 prth joined #gluster
11:05 armin joined #gluster
11:08 victori joined #gluster
11:09 Anarka joined #gluster
11:10 rafi joined #gluster
11:12 harish joined #gluster
11:13 [diablo] joined #gluster
11:24 Arrfab ndevos: wrt gluster vol rename, if that doesn't work, I think I'll use something else (if possible)
11:24 Arrfab but how one can troubleshoot speed issue in the gluster setup ?
11:25 ndevos Arrfab: the rename should work, or at least we need a description of how it can be done
11:25 ndevos the speed issue depends a lot on the workload...
11:26 Arrfab ndevos: nothing was active on the volumes, but only one node accessing the vol
11:26 Arrfab write perf at the brick level is ok, but awful when accessed through gluster
11:27 Arrfab sometimes only writing at ~10 MiB/sec
11:27 Arrfab and that's the single client accessing the whole cluster
11:27 Arrfab (well in that case the client is also one of the gluster servers)
11:27 ndevos Arrfab: is that a single file, or like rsync?
11:27 Arrfab ndevos: tried with cp and faster than with rsync
11:28 Arrfab rsync is really something that doesn't work
11:28 ndevos also, Gluster scales out, so multiple files and parallel access will improve things...
11:28 ndevos rsync is mostly synchronous and with lots of stat() calls, those are expensive
11:29 ndevos if you 'cp' different files through different mountpoints, things could speed up
11:29 Arrfab ndevos: even a dd if=/dev/zero of=/path/to/gluster is slow
11:29 ndevos you could also see if using glusterfs-coreutils improves
11:31 ndevos when you do a transfer, the gluster client will do the replication, so bandwidth is effectively halved when using replica-2
11:31 Arrfab I know that
11:31 ndevos but 10 MB/s sounds rather low
11:31 Arrfab but if I have 10Gbit/s network, and writing at 10MB/s , there is an issue
11:31 Arrfab I can write on the underlying xfs bricks at ~160MB/s
11:32 Arrfab so you see why I'd like to know what's the bottleneck through gluster
11:33 ndevos and network is functioning well? something like iperf shows good results?
11:33 Arrfab ndevos: yes, tested: iperf3 indeed shows ~10Gbit/s between all nodes, point to point, individually
11:33 Arrfab in fact that IPoIB, but you get the point
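
(For reference, the raw-network versus through-the-mount comparison being made here, with placeholder names; don't write inside a brick directory of a live volume, test on the same filesystem next to it.)

    iperf3 -s                                   # on server1
    iperf3 -c server1                           # on another node
    # through the fuse mount, flushed at the end
    dd if=/dev/zero of=/mnt/vol1/ddtest bs=1M count=1024 conv=fdatasync
    # on the brick's xfs filesystem, outside the brick directory
    dd if=/dev/zero of=/data/ddtest bs=1M count=1024 conv=fdatasync
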
11:33 ndevos hmm
11:34 ndevos are there any glusterfs processes having high load when you do such transfers?
11:35 Arrfab ndevos: some glusterfs are taking ~25% of a cpu core
11:36 ndevos that sounds rather high
11:36 Arrfab we see that often : gluster is really cpu killer on our setup
11:37 Arrfab reason also why we even thought about moving to something else that would be simpler/faster/less resource-hungry
11:37 ndevos is that only the glusterfs client/mount process, or some other pid?
11:37 ndevos gluster shouldnt be very resource hungry, it only peaks when recovering/healing
11:39 ndevos Arrfab: whats the full commandline of the process(es) that have a high CPU load?
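
(A quick way to answer that, showing the full command line of the busiest gluster processes; the interpretation comments describe the usual patterns, not something taken from this log.)

    ps -eo pid,pcpu,args --sort=-pcpu | grep '[g]luster' | head
    # glusterfs ... /mnt/...      -> typically a fuse client mount
    # glusterfsd ... --brick-name -> a brick process
    # glusterd                    -> the management daemon
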
11:42 Arrfab ndevos: can it be that previous volumes are keeping old ip somewhere ? (so before they were migrated to Infiniband)
11:43 ndevos Arrfab: I guess that is possible, depends on how the migration was done
11:43 ndevos Arrfab: you can 'grep' for the old ip-address under /var/lib/glusterd/ and see
11:43 Arrfab yep, nothing
11:43 Arrfab and we used hostnames since day1 too
11:44 ndevos if only hostnames are used, things should be fine
11:44 * ndevos is picked up for lunch now, will be back later
11:46 * Arrfab is lost : when copying from a brick to the gluster vol, write perf is ok
11:49 [diablo] joined #gluster
11:53 TvL2386 joined #gluster
12:01 Arrfab ndevos: trying to dd= to test write speed and I see one node sending at ~1Gbit/s and the two other ones receiving at ~500Mbit/s (so in that case the replica)
12:01 Arrfab what's blocking gluster on that IPoIB ?
12:01 Arrfab remote nodes not being able to write to underlying disks as fast ?
12:02 Sebbo1 joined #gluster
12:03 Arrfab I'm now testing on the individual xfs volumes for those bricks, and both can write at ~160MiB/s
12:03 Arrfab so what's the issue ?
12:11 Arrfab ndevos: testing gfcp and have very different results for the same file
12:11 Arrfab so would fuse be really the blocking factor ?
12:12 jwd joined #gluster
12:26 jri joined #gluster
12:28 jri_ joined #gluster
12:31 jri joined #gluster
12:36 ndevos Arrfab: fuse definitely is impacting performance, it does a lot of context switching that gfcp does not need to do
12:38 ndevos Arrfab: other users have improved the performance for their environment by using multiple fuse mounts on the same client systems, the bottleneck can get reduced that way
12:38 ndevos bur glusterfs-coreutils would be better, specially if you do not really need to mount and can just gfcp files around
12:39 ndevos s/bur/but/
12:39 glusterbot What ndevos meant to say was: but glusterfs-coreutils would be better, specially if you do not really need to mount and can just gfcp files around
12:49 Arrfab ndevos: well, for my data that needs to move from one vol to the other I can use gfcp
12:49 Arrfab and getfacl/setfacl to restore permissions on the files
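
(The ACL round-trip mentioned here could look roughly like this, assuming both volumes are fuse-mounted for the permissions pass.)

    cd /mnt/oldvol && getfacl -R . > /tmp/oldvol-acls.txt
    cd /mnt/newvol && setfacl --restore=/tmp/oldvol-acls.txt
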
12:50 Arrfab ndevos: so when can one expect a proper kernel module for gluster, so fuse can go away ? :-)
12:51 Philambdo joined #gluster
12:55 ndevos Arrfab: oh, gfcp doesnt do ACLs? you should file a bug for that
12:55 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
12:55 ndevos actually at https://github.com/gluster/glusterfs-coreutils/issues/ , I think
12:55 glusterbot Title: Issues · gluster/glusterfs-coreutils · GitHub (at github.com)
12:56 Arrfab ndevos: hmm, and it doesn't accept directory either :-(
12:56 Arrfab as I have to sync a bunch of dir
12:57 ndevos isnt there a -r switch or something?
12:58 Arrfab ndevos: not for gfcp, only for gfput
12:59 ndevos :-(
13:00 Arrfab seems that there is a need for a script/wrapper around this for a data move
13:00 Arrfab but it's really strange that it depends on the file/location, as it (glusterfs-coreutils) can help, but not always
13:01 Arrfab rsync would have been easier for the sync/perms/etc
13:01 Arrfab but it takes ages
13:01 ndevos @rsync
13:01 glusterbot ndevos: normally rsync creates a temp file, .fubar.A4af, and updates that file. Once complete, it moves it to fubar. Using the hash algorithm, the tempfile hashes out to one DHT subvolume, while the final file hashes to another. gluster doesn't move the data if a file is renamed, but since the hash now points to the wrong brick, it creates a link pointer to the brick that actually has the data. to avoid this use the
13:01 glusterbot ndevos: rsync --inplace option
13:02 ndevos Arrfab: ^ that is worth a try?
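
(The bot's suggestion as a concrete command; --whole-file is an extra assumption that often helps on fast links by skipping the delta algorithm.)

    rsync -a --inplace --whole-file /source/dir/ /mnt/vol1/dir/
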
13:04 Philambdo joined #gluster
13:04 Arrfab ndevos: I tested that already, but write speed is awful :-(
13:06 ndevos yeah, I would not expect much improvement, only fewer operations
13:10 Arrfab still I'd like to know why fuse would be so much of a blocker/bottleneck
13:10 Arrfab I even found people on list/blog posts arguing to only use nfs with gluster as "native" (so through fuse) was really too slow/bad
13:11 ndevos yes, nfs can be faster
13:11 Arrfab ndevos: if using nfs, how does the replica/distribute work ? as normally everything happens at the client side
13:11 Arrfab but with nfs client, it doesn't reach the gluster metadata, so would the server then internally redirect those to the bricks ?
13:12 Arrfab and I browsed the official doc, but couldn't find what needs to be done to allow nfs on gluster :-)
13:12 ndevos Arrfab: the nfs-server is the gluster client in that case
13:13 rastar joined #gluster
13:13 ndevos you can not really use nfs on gluster servers, or mount with '-o nolock'
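
(For the "what needs to be done to allow nfs" part: depending on the version, the built-in gnfs server may be disabled by default, so a hedged sketch with placeholder names; NFS-Ganesha is the other, newer option.)

    gluster volume set vol1 nfs.disable off
    # from a client that is not one of the gluster servers (else add -o nolock)
    mount -t nfs -o vers=3,tcp server1:/vol1 /mnt/vol1-nfs
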
13:15 Arrfab ndevos: so would you recommend a wrapper script with gfcp/gfcat/gfput to move data between volumes ?
13:15 raghug joined #gluster
13:15 Arrfab if all the rest is slow (because of gluster, not the disks/network)
13:18 ndevos Arrfab: yes, a script would do, maybe create the directories over a fuse mount
13:19 Arrfab ndevos: yeah, working on this .. or a rsync from only directory structure, then gfcp all the files in correct path
13:22 ndevos Arrfab: yes, that'll be the easiest I guess
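
(A rough sketch of that two-pass copy, which also covers running several copies in parallel as suggested later; the glfs:// destination form and the -P4 level are assumptions.)

    # pass 1: directory tree only, over the fuse mounts
    rsync -a --include='*/' --exclude='*' /mnt/oldvol/ /mnt/newvol/
    # pass 2: file payloads via glusterfs-coreutils, four at a time
    cd /mnt/oldvol
    find . -type f -print0 | xargs -0 -P4 -I{} gfcp {} glfs://server1/newvol/{}
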
13:24 Dave___ joined #gluster
13:25 clyons|2 joined #gluster
13:25 lkoranda joined #gluster
13:27 kramdoss_ joined #gluster
13:31 ndevos Arrfab: I've filed https://github.com/gluster/glusterfs-coreutils/issues/22 and https://github.com/gluster/glusterfs-coreutils/issues/23 now
13:31 glusterbot Title: Add support for POSIX ACLs and other extended attributes · Issue #22 · gluster/glusterfs-coreutils · GitHub (at github.com)
13:33 muneerse joined #gluster
13:34 eMBee joined #gluster
13:36 muneerse2 joined #gluster
13:46 rafi joined #gluster
13:48 Arrfab ndevos: just watching my perf (over network graphs) and it seems that using glusterfs-coreutils isn't really worth doing either
13:48 Arrfab so still a bottleneck issue somewhere :-(
13:49 ndevos Arrfab: no? but you would be able to run multiple processes that way, not only one
13:49 Arrfab hmm so launching gfcp in parallel and not serial ?
13:50 ndevos yeah
13:50 Arrfab but still I'd like to know why a single cp operation doesn't use all the available disk/network
13:50 Arrfab tempted to revert to plain nfs storage and raid0/vg for the underlying disks (and replicate with something else like drdd)
13:50 Arrfab drbd even
13:50 Arrfab :-p
13:52 ndevos size of the read/write could be an issue, some profiling/network-capture would be needed to see more
13:53 ndevos heh, from what I've heard, drbd will have other issues ;-)
13:53 Arrfab ndevos: curious, tell me more
13:53 Arrfab apart that it's not packaged anymore for rhel/Centos
13:54 Lee1092 joined #gluster
13:56 ndevos Arrfab: we regularly have users moving from drbd to gluster, and others are migrating to ceph every now and then
13:57 ndevos but on the other hand, gluster or ceph users may move to drbd too, they might not be vocal about it in channels I'm in
14:06 wushudoin joined #gluster
14:07 devyani7 joined #gluster
14:09 Klas hmm, is there any good way to check, from client, if the mounted volume is read-only?
14:10 Klas (I prefer to not write data if at all possible)
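
(Two non-destructive checks from the client side, assuming a fuse mount at /mnt/vol1; note they only reflect the client mount flags, a server-side read-only volume option may not show up here and would only surface as EROFS on write.)

    findmnt -no OPTIONS /mnt/vol1         # look for 'ro' vs 'rw'
    grep ' /mnt/vol1 ' /proc/mounts       # same information without findmnt
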
14:26 aj__ joined #gluster
14:41 csaba joined #gluster
14:41 victori joined #gluster
14:45 hagarth joined #gluster
14:47 Guest_84847 joined #gluster
14:47 Guest_84847 allah is doing
14:47 Guest_84847 sun is not doing allah is doing
14:47 Guest_84847 moon is not doing allah is doing
14:48 Guest_84847 stars are not doing allah is doing
14:48 Guest_84847 planets are not doing allah is doing
14:48 Guest_84847 galaxies are not doing allah is doing
14:48 Guest_84847 oceans are not doing allah is doing
14:49 Guest_84847 mountains are not doing allah is doing
14:49 Guest_84847 trees are not doing allah is doing
14:49 Guest_84847 mom is not doing allah is doing
14:49 Guest_84847 dad is not doing allah is doing
14:50 Guest_84847 boss is not doing allah is doing
14:50 devyani7 joined #gluster
14:50 Guest_84847 job is not doing allah is doing
14:50 Guest_84847 dollar is not doing allah is doing
14:51 Guest_84847 medicine is not doing allah is doing
14:51 Guest_84847 customers are not doing allah is doing
14:51 Guest_84847 you can not get married without the permission of allah
14:53 Guest_84847 you can not get married without the permission of allah
14:53 Guest_84847 nobody can get angry at you without the permission of allah
14:53 Guest_84847 light is not doing allah is doing
14:53 Guest_84847 fan is not doing allah is doing
14:53 Guest_84847 businessess are not doing allah is doing
14:54 Guest_84847 america is not doing allah is doing
14:54 Guest_84847 fire can not burn without the permission of allah
14:54 Guest_84847 knife can not cut without the permission of allah
14:54 Guest_84847 rulers are not doing allah is doing
14:55 Guest_84847 governments are not doing allah is doing
14:55 Guest_84847 sleep is not doing allah is doing
14:55 Guest_84847 hunger is not doing allah is doing
15:01 legreffier wow, that was intense.
15:04 emitor_uy gluster is not doing alla is doing :P
15:06 Arrfab can he make my gluster volume fly fast instead ?
15:17 johnmilton joined #gluster
15:28 creshal "inshallah" certainly seems to be the unofficial motto of gluster's consensus algorithm :P
15:35 ninkotech joined #gluster
15:35 ninkotech_ joined #gluster
15:44 john51_ joined #gluster
15:44 clyons|3 joined #gluster
15:45 Gnomethrower joined #gluster
15:45 lh_ joined #gluster
15:47 billputer_ joined #gluster
15:47 Trefex_ joined #gluster
15:48 MmikeM joined #gluster
15:48 portante_ joined #gluster
15:48 marlinc_ joined #gluster
15:48 crag_ joined #gluster
15:48 ndk- joined #gluster
15:48 xMopxShe- joined #gluster
15:48 nohitall1 joined #gluster
15:48 arif-ali_ joined #gluster
15:48 Iouns_ joined #gluster
15:48 juhaj_ joined #gluster
15:48 social_ joined #gluster
15:48 delhage_ joined #gluster
15:49 jkroon joined #gluster
15:49 auzty joined #gluster
15:50 johnmilton joined #gluster
15:50 loadtheacc joined #gluster
15:53 TZaman I'm trying to play with the official docker image (gluster/gluster-centos), but the container fails to start due to an error: "Couldn't find an alternative telinit implementation to spawn."
15:58 skoduri joined #gluster
15:59 ndevos TZaman: there are centos/<something>gluster docker images available, I think it would be better to use those
15:59 ninkotech joined #gluster
16:00 ninkotech_ joined #gluster
16:00 ndevos TZaman: see https://wiki.centos.org/ContainerPipeline for some more details
16:00 glusterbot Title: ContainerPipeline - CentOS Wiki (at wiki.centos.org)
16:00 ndevos at the bottom of the page....
16:00 TZaman thanks, found it
16:04 prth joined #gluster
16:05 victori joined #gluster
16:23 Gambit15 joined #gluster
16:24 Guest_84847 joined #gluster
16:32 ivan_rossi left #gluster
16:37 creshal joined #gluster
16:43 Chinorro joined #gluster
16:47 samikshan joined #gluster
17:09 plarsen joined #gluster
17:11 Arrfab ndevos: other "interesting" feature : gfcp doesn't honor sparse files
17:11 Arrfab while with cp you can use --sparse=always|never|auto
17:12 Arrfab and with rsync -S if you want to honor sparse
17:12 Arrfab was wondering why new volume seems to use more disk space than previous one :-)
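
(For comparison, the sparse-aware variants over a fuse mount mentioned here, with placeholder paths.)

    cp --sparse=always /mnt/oldvol/vm.img /mnt/newvol/vm.img
    rsync -aS /mnt/oldvol/images/ /mnt/newvol/images/
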
17:17 victori joined #gluster
17:17 tdasilva joined #gluster
17:21 d0nn1e joined #gluster
17:23 arcolife joined #gluster
17:26 eKKiM joined #gluster
17:45 ZachLanich joined #gluster
18:16 owlbot joined #gluster
18:24 victori joined #gluster
18:36 mhulsman joined #gluster
19:12 StarBeast joined #gluster
19:29 aj__ joined #gluster
19:34 ahino joined #gluster
19:35 victori joined #gluster
19:59 Arrfab ndevos: fyi : I played with gfcat/gfput and piped it through pv and it's interesting (to see the write speed)
19:59 Arrfab so the gluster vol seems "fine" when writing to it .. but if you want to cp files from one vol to the other (and so reading/writing to the same disks) that's where it all hurts
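
(Roughly the pipeline being described, handy for watching throughput with pv; the glfs://host/volume/path URL form used by glusterfs-coreutils is an assumption here.)

    gfcat glfs://server1/oldvol/images/vm.img | pv | gfput glfs://server1/newvol/images/vm.img
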
20:00 clyons|2 joined #gluster
20:04 kkeithle joined #gluster
20:04 nhayashi joined #gluster
20:09 mlhess- joined #gluster
20:11 galeido joined #gluster
20:11 davidj joined #gluster
20:11 telius joined #gluster
20:11 PotatoGim joined #gluster
20:11 scubacuda joined #gluster
20:11 d4n13L joined #gluster
20:11 jockek joined #gluster
20:11 cholcombe joined #gluster
20:11 Ulrar joined #gluster
20:13 twisted` joined #gluster
20:15 beemobile joined #gluster
20:21 scubacuda joined #gluster
20:23 prth joined #gluster
20:26 davidj joined #gluster
20:26 PotatoGim joined #gluster
20:27 telius joined #gluster
20:46 victori joined #gluster
20:57 emitor_uy joined #gluster
21:03 Ulrar Help
21:03 Ulrar I just added 3 bricks to a volume, and the VMs disks are corrupted now
21:11 arcolife joined #gluster
21:26 Gambit15 Ulrar, you'll need to provide more info on your setup than that Ulrar!
21:26 Ulrar Gambit15: I think everything is dead now
21:27 Ulrar I removed the brick hoping it'd go back to normal, and it doesn't
21:27 Ulrar There are still shards in the removed bricks for some reason
21:28 Ulrar It's a 1x3 sharded volume I tried to bump to 2x3
21:28 prth joined #gluster
21:28 Ulrar As soon as I added the 3 new bricks, everything started throwing I/O errors
21:28 Ulrar I can't boot the VMs anymore, kvm says the files are corrupted ..
21:29 Ulrar I don't know what to do
21:29 Gambit15 Not debugged VHD files before. Can you open them with fdisk or parted?
21:30 Ulrar Yeah but they are corrupted, it can't see anything
21:30 Gambit15 I'm afraid I can't help with anything further than "check your logs". I disabled sharding due to awful performance
21:31 Gambit15 AFAIK, adding new bricks shouldn't affect your standing shards unless you rebalance
21:32 Ulrar Yeah, it shouldn't
21:32 Ulrar But it destroyed everything
21:32 Ulrar I really don't know how to get out of this
21:32 Gambit15 Backups...
21:32 * Gambit15 eyes Ulrar
21:33 Ulrar Yeah, this night's backups
21:33 Ulrar But that's hours lost
21:33 Gambit15 If you don't have backups, do you have any snapshots?
21:34 Gambit15 You could also trying piecing the shards back together manually
21:35 Gambit15 Another reason I'm not using sharding yet also...facilitates regular snapshotting of all of the volumes. Every 15 mins on some of the busier ones
21:36 Ulrar Yeah but that's a lot of VMs, really don't have the space for that
21:36 Ulrar That's why I was addind new bricks
21:37 Gambit15 Snapshots are differential - they only use the amount of space used by subsequently changed blocks
21:39 Ulrar Gambit15: I did a remove-brick start on my new bricks, but there's still 880M of data on them at the end
21:39 Ulrar Looks like the VMs wrote stuff on the new servers instead of the old ones and I now end up with different shards on both sides ..
21:39 Ulrar For some reason the heal doesn't feel that's a problem
21:40 Gambit15 Re-add the bricks & try to heal again?
21:40 Ulrar Yeah I just did that
21:40 Ulrar Doesn't do anything, the heal status seems to say everything's fine
21:41 Ulrar They don't even have the same size on both replica set
21:41 Ulrar The shards are supposed to be 64M, how could they have different size
21:41 Ulrar That's ridiculous
21:42 Gambit15 Not sure it'll help fix the problem, but the logs might shed some light on what happened.
21:43 Gambit15 Is there a way to force gluster to do an integrity check on all of the volumes?
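
(The closest built-in checks, as a hedged sketch; they verify replica consistency, not the integrity of the disk images themselves, for which something like qemu-img can help if the images are qcow2.)

    gluster volume heal vol1 info
    gluster volume heal vol1 info split-brain
    gluster volume heal vol1 full            # crawl everything, not just the dirty list
    qemu-img check /mnt/vol1/images/vm.img   # assumption: a qcow2 image reachable via the mount
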
21:48 ZachLanich joined #gluster
21:50 victori joined #gluster
22:07 sanoj joined #gluster
22:11 cyberbootje joined #gluster
22:37 sanoj joined #gluster
23:04 sanoj joined #gluster
23:06 beemobile joined #gluster
23:11 johnmilton joined #gluster
23:15 congpine joined #gluster
23:16 victori joined #gluster
23:46 Jacob843 joined #gluster
