
IRC log for #gluster, 2016-06-06


All times are shown in UTC.

Time Nick Message
00:38 JesperA- joined #gluster
01:03 Philambdo1 joined #gluster
01:16 arcolife joined #gluster
01:20 natarej joined #gluster
01:31 PaulCuzner joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 Lee1092 joined #gluster
02:45 kshlm joined #gluster
02:55 David_H__ joined #gluster
03:02 yoavz joined #gluster
03:09 ramteid joined #gluster
03:12 yoavz joined #gluster
03:21 nishanth joined #gluster
03:26 anil joined #gluster
03:51 itisravi joined #gluster
03:54 nbalacha joined #gluster
03:56 overclk joined #gluster
04:01 atinm joined #gluster
04:10 ppai joined #gluster
04:22 javi404 joined #gluster
04:35 kdhananjay joined #gluster
04:36 kotreshhr joined #gluster
04:38 aspandey joined #gluster
04:46 prasanth joined #gluster
04:49 kdhananjay joined #gluster
04:51 kdhananjay joined #gluster
04:54 nehar joined #gluster
04:55 shubhendu joined #gluster
05:00 dgbaley joined #gluster
05:04 pocketprotector joined #gluster
05:08 pocketprotector joined #gluster
05:08 anoopcs joined #gluster
05:20 gem joined #gluster
05:23 karthik___ joined #gluster
05:28 ndarshan joined #gluster
05:36 poornimag joined #gluster
05:46 Bhaskarakiran joined #gluster
05:48 Apeksha joined #gluster
05:48 hgowtham joined #gluster
05:49 jiffin joined #gluster
05:49 hchiramm joined #gluster
06:05 DV joined #gluster
06:07 mbukatov joined #gluster
06:08 ppai joined #gluster
06:17 ramky joined #gluster
06:18 ashiq joined #gluster
06:19 rafi joined #gluster
06:22 karnan joined #gluster
06:22 karnan_ joined #gluster
06:24 jtux joined #gluster
06:29 ju5t joined #gluster
06:32 aravindavk joined #gluster
06:33 DV joined #gluster
06:37 [Enrico] joined #gluster
06:40 anil_ joined #gluster
06:45 Manikandan joined #gluster
06:49 nehar joined #gluster
06:49 pur__ joined #gluster
07:00 Guest71295 joined #gluster
07:16 jri joined #gluster
07:19 deniszh joined #gluster
07:22 rastar joined #gluster
07:25 Saravanakmr joined #gluster
07:28 Slashman joined #gluster
07:30 auzty joined #gluster
07:42 ivan_rossi joined #gluster
07:47 Jules- joined #gluster
07:51 Jules- can anyone tell me what this log line means: [afr.c:97:fix_quorum_options] 0-netshare-replicate-0: quorum-type none overriding quorum-count 1
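
Jules-'s question went unanswered in the channel. For context, the two settings named in that log line are AFR volume options that can be inspected and changed from the CLI; a minimal sketch, assuming the volume is called "netshare" as the "0-netshare-replicate-0" prefix suggests:

    # show the current client-quorum settings for the replicated volume
    gluster volume get netshare cluster.quorum-type
    gluster volume get netshare cluster.quorum-count
    # switch to automatic quorum (a majority of replicas must be reachable)
    gluster volume set netshare cluster.quorum-type auto
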
07:52 anil_ joined #gluster
07:57 kovshenin joined #gluster
07:57 muneerse2 joined #gluster
07:58 zzal joined #gluster
08:03 karthik___ joined #gluster
08:04 sakshi joined #gluster
08:05 jri_ joined #gluster
08:05 shersi joined #gluster
08:07 Pintomatic joined #gluster
08:07 Lee1092 joined #gluster
08:07 shersi Hi All, I'm experiencing an issue with mounting a glusterfs volume using the fuse client. If I enable SELinux, the glusterfs volume does not mount automatically after reboot. Please advise me.
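
No answer followed in the channel. For anyone with the same symptom, two common first checks are whether the fstab entry waits for the network and whether SELinux is denying the mount helper at boot; a sketch with a hypothetical volume and mount point:

    # /etc/fstab -- _netdev delays the mount until the network is up
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0

    # look for recent SELinux denials involving gluster
    ausearch -m avc -ts recent | grep -i gluster
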
08:38 aspandey joined #gluster
08:39 itisravi_ joined #gluster
08:40 kdhananjay joined #gluster
08:41 atalur joined #gluster
08:44 ivan_rossi left #gluster
08:47 mkzero joined #gluster
08:49 mkzero left #gluster
08:50 anoop_ joined #gluster
08:52 anoop_ joined #gluster
08:53 anoop_ joined #gluster
08:55 hackman joined #gluster
08:58 anoop_ joined #gluster
08:58 anoop_ left #gluster
09:05 [Enrico] joined #gluster
09:06 post-factum should setting performance.cache-max-file-size to 0 disable the client-side cache?
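
post-factum's question also went unanswered. For reference, the client-side caching translators can be inspected and switched off explicitly rather than sized down; an illustrative sketch with a placeholder volume name:

    # inspect the current io-cache sizing
    gluster volume get myvol performance.cache-max-file-size
    gluster volume get myvol performance.cache-size
    # disable the client-side caching translators outright
    gluster volume set myvol performance.io-cache off
    gluster volume set myvol performance.quick-read off
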
09:14 the-me joined #gluster
09:32 raghu joined #gluster
09:33 mkzero joined #gluster
09:33 raghug xavih: you there?
09:34 xavih raghug: hi
09:35 raghug xavih: Hi. Wanted to discuss with you on pre-op checks during dentry operations
09:35 raghug I just saw your mail
09:35 raghug give me a minute, I'll try to give you an example
09:35 xavih raghug: yes, I still don't understand when it can fail...
09:36 xavih raghug: ok
09:37 raghug xavih: my concern with patch #13885 is that the pre-op check (xattr comparison) is done only at storage/posix
09:37 raghug but in a clustered scenario like afr/ec, the brick (by itself) doesn't know the "correctness" of the xattrs present on it
09:38 raghug So whatever decision the brick makes can't be trusted
09:38 xavih raghug: yes, it only knows its own state, and it should act depending on it
09:39 raghug xavih: yes.
09:39 anil_ left #gluster
09:39 raghug xavih: consider this scenario
09:39 xavih raghug: in this case afr/ec will combine the answers as appropriate and give a "good" answer to the upper layer. Bricks not matching the "good" answer will be marked as bad
09:40 raghug xavih: yes. Would afr_mkdir/ec_mkdir (cbk) do that?
09:41 raghug is it possible to implement that behaviour?
09:42 raghug xavih: just for clarification, here is an example I gave out in the bug report of bz 1341429
09:42 raghug [#gluster] xavih: what about rollback? If the posix has already created the directory, but ec/afr considers the answer as bad, we need to delete the directory
09:42 raghug raghug: sorry ignore last statement
09:42 raghug For Preop checks like [1], dentry operations like mkdir etc. would rely on xattrs on the parent (like the dht layout). However, a "bad" subvolume of afr cannot make a correct decision during the preop check, as its xattrs are not guaranteed to be correct. Imagine the following scenario:
09:42 raghug a. mkdir succeeds on the non-readable subvolume
09:42 raghug b. fails on the readable subvolume (maybe because the layout xattrs didn't match).
09:42 raghug afr here would report mkdir as a success to the parent xlators (here dht). However, since the non-readable subvolume is not guaranteed to have correct xattrs, mkdir shouldn't have succeeded. Worse still, if mkdir on the readable subvolume had failed because the preop check failed (the client's in-memory layout xattrs and the layout xattrs persisted on disk didn't match), we would be ignoring a genuine failure and transforming it into a success.
09:43 raghug you can use readable/good interchanged
09:43 raghug same with non-readable/bad subvols
09:43 xavih raghug: as I understand the changes in storage/posix, mkdir *cannot* succeed on a bad brick
09:45 xavih raghug: if the xattr check fails, it returns an error without creating the directory
09:45 raghug xavih: what if a "bad" brick has the "stale" xattrs coincidentally cached by the client as well?
09:46 raghug it can happen with frequent changes of xattrs, I suppose. thinking of a concrete example..
09:47 atinm joined #gluster
09:47 xavih raghug: do you mean that directories with the same xattr can be good and bad ? shouldn't that xattr value be unique in some way ?
09:47 xavih raghug: otherwise this preop check seems weak...
09:49 raghug For simplicity, Lets consider the case of 3 way replica b1, b2 and b3
09:49 xavih raghug: ok
09:49 raghug Lets say a client c1 has cached xattr value x1
09:50 xavih raghug: when you say "cached" do you mean that it has stored that value on the three bricks ?
09:50 raghug xavih: no. The client has stored the value in its memory
09:50 raghug in the inode-ctx of directory/file lets say
09:51 xavih raghug: without sending the request to the bricks and check the answer ?
09:51 xavih raghug: I think this is not possible
09:52 raghug xavih: it would have sent the request (say a getxattr/lookup). But let's say there is a future fop (say mkdir) that relies on this xattr. In this time window, there is a possibility that this value can change on the brick
09:52 raghug that is the "staleness" #13885 is addressing
09:53 xavih raghug: yes, this is possible, but I still don't see the problem...
09:53 raghug yes.. here is the scenario
09:54 raghug 15:19 < raghug> For simplicity, Lets consider the case of 3 way replica b1, b2 and b3
09:54 raghug 15:19 < xavih> raghug: ok
09:54 raghug 15:19 < raghug> Lets say a client c1 has cached xattr value x1
09:54 raghug are you ok till now?
09:54 raghug For Preop checks like [1], dentry operations like mkdir etc would rely on xattrs on parent (like dht layout). However, a "bad" subvolume of afr cannot make correct decision during preop check as the xattrs are not guaranteed to be correct. Imagine the following scenario:
09:54 raghug a. mkdir succeeds on non-readable subvolume
09:54 raghug b. fails on readable subvolume (may be because of layout xattrs didn't match).
09:54 raghug xavih: sorry :)
09:54 raghug ignore the comment from "For Preop checks"
09:55 raghug so, we've three bricks b1, b2 and b3
09:55 raghug all three part of a replica
09:55 raghug Now imagine a client c1 has read an xattr with value x1 and it has cached that value in its memory
09:55 xavih raghug: I don't understand the xattr being cached. AFAIK neither afr nor ec caches regular xattrs beyond those needed for its own internal use...
09:56 xavih raghug: anyway, let's assume it's cached...
09:56 raghug xavih: dht is caching them
09:56 xavih raghug: because dht needs it...
09:56 raghug the xattr being cached is layout of the directory
09:56 raghug xavih: yes
09:56 DV joined #gluster
09:56 raghug xavih: any doubts till now?
09:56 xavih raghug: afr and ec don't need it, so they don't cache it
09:57 xavih raghug: no, it's ok
09:57 raghug xavih: ok. So, now another client c2 changes the same xattr value to x2. But it succeeds only on b2 and b3
09:57 xavih raghug: ok
09:57 raghug so, b1 has x1. b2 and b3 have x2
09:58 raghug also dht on client c1 has cached value x1
09:58 xavih raghug: ok
09:59 raghug so now when dht (from client c1) tries to do a mkdir using the same xattr, mkdir succeeds on "bad" brick b1 but fails on b2 and b3
09:59 xavih raghug: in that case (I will speak for ec, not so sure for afr), when dht on c1 tries to create a directory it will receive an EIO error with the xattr indicating that preop failed
10:00 xavih raghug: from the ec side, it will see b2 and b3 failing, and b1 succeeding
10:00 xavih raghug: b1 will be marked as bad, and self-heal will take care of it
10:01 raghug xavih: yes. With Quorum this problem is mitigated
10:01 raghug But with two way replica, this would be a problem
10:01 raghug I was also worried about "rollback"
10:01 raghug (here deleting directory from b1)
10:02 xavih raghug: I can't talk for afr. Pranith should know better
10:02 xavih raghug: the removal of directory will happen, but it could be delayed
10:02 raghug xavih: thanks. Let me think over what we discussed. Will get back to you if I find any issues. Thanks :)
10:03 xavih raghug: anyway, since the parent directory has been marked as bad, future readdirs won't read contents from that brick
10:04 xavih raghug: so the "bad" newly created directory won't be visible to the user (nor dht)
10:04 raghug xavih: ok. got it
10:05 raghug xavih: what if there is a quorum of "bad" bricks?
10:05 xavih raghug: this self-heal will also heal the "outdated" xattr that caused the incorrect creation of the directory in the first place
10:05 xavih raghug: there cannot be a quorum of "bad" bricks
10:05 raghug in this case lets say for some reason b2 also had the same "stale" xattr and mkdir succeeded on it
10:06 raghug xavih: hmm..
10:06 xavih raghug: if there were a quorum of bad bricks, it would mean that another client has failed on more than half of the bricks
10:06 raghug then, I suppose that op itself is a failure
10:06 raghug (from another client)
10:06 xavih raghug: but in that case, the "bad" bricks will be the ones on which the mkdir from the other client succeeded
10:07 xavih raghug: following your example, to try to have a majority of "bad" bricks, c2 should have failed on b1 and b2, and succeeded only on b3, right ?
10:07 raghug xavih: yes
10:08 xavih raghug: in that case, c2 will receive an error, indicating that the mkdir has failed
10:08 raghug ok
10:08 xavih raghug: this means that the good bricks are really b1 and b2
10:08 raghug xavih: yes. Got it
10:09 xavih raghug: so c1 won't see them as bad, and it will succeed on them without problems
10:09 xavih raghug: everything is consistent on both clients
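
The scenario just discussed hinges on the three bricks holding different values of the layout xattr. For readers following along, that xattr lives on each brick's copy of the directory and can be compared directly with getfattr (brick paths here are hypothetical):

    # print the DHT layout xattr for the same directory on every brick
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b1/dir
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b2/dir
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b3/dir
    # b1 reporting a value different from b2/b3 corresponds to the "stale xattr" case above
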
10:09 raghug xavih: on a slightly unrelated issue, I am planning to move the locking to bricks as future work. So, EC wouldn't be witnessing lock that synchronizes modification of the xattr (this lock is again taken by dht). Do you think there would be any problem with that?
10:09 raghug (ref: patch 13885)
10:10 titansmc joined #gluster
10:11 raghug with this lock, no dht will be able to modify xattr for the duration of lock
10:11 raghug (no dht from other clients)
10:11 xavih raghug: that might be problematic
10:11 raghug xavih: ...
10:14 xavih raghug: ec takes the necessary inodelks to guarantee integrity, but if you move the locking from dht to the bricks, it means that two clients could try to change the same xattr. EC guarantees that all bricks will be updated in the same order, but it cannot guarantee that a lock taken after it (in the brick) will be honored
10:14 xavih raghug: how do you plan to move locking to the bricks ? maybe this could be used by ec as well.
10:14 titansmc Hi all, I have a short question... I've tried to follow the documentation, but it seems I am missing something.
10:14 titansmc I've got a 4T drive that I've mounted to make the bricks available to Gluster, and everything's OK, but when it comes to mounting the gluster volume, when I map it to /glusterfs-data, since this is within the / directory which is only 120G, I can only see 120G of my GlusterFS
10:14 titansmc gluster1:/volume1  119G  2.0G  116G   2% /glusterfs-data
10:14 titansmc /dev/sda3          119G  4.7G  113G   4% /
10:14 titansmc Mount:
10:14 titansmc gluster1:/volume1  /glusterfs-data      glusterfs
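
titansmc's question was not picked up in the channel. The sizes a gluster mount reports come from the filesystems backing the bricks, so a 119G figure usually means the brick directory sits on the root filesystem rather than on the mounted 4T drive; a quick way to verify, with a placeholder brick path:

    # which filesystem actually backs the brick directory?
    df -h /path/to/brick
    # per-brick capacity as gluster sees it
    gluster volume status volume1 detail
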
10:14 xavih raghug: I doubt this could be transparent to ec
10:15 raghug xavih: dht's directory self-heal takes lock on _all_ its subvols
10:16 raghug so, a lock on one subvol would prevent self-heal
10:16 raghug but the catch is whether an EC subvol would choose the same brick as lock server
10:17 raghug In the context of #13885..
10:17 atalur joined #gluster
10:17 raghug mkdir would've been wound to b1, b2 and b3
10:18 aspandey_ joined #gluster
10:18 xavih raghug: from the point of view of DHT, an EC subvolume should be seen as a single brick, however if a feature crosses EC "hidden" in an xdata, special care must be taken...
10:18 raghug and all three bricks would 1. acquire an inodelk on the same brick, 2. do the pre-op check (xattr comparison), 3. release the inodelk
10:19 shubhendu joined #gluster
10:19 raghug note that each brick is acquiring an inodelk only on itself
10:19 xavih raghug: in this special case for mkdir, it shouldn't be a problem, but the locking issue should be analyzed again
10:19 DV joined #gluster
10:19 om joined #gluster
10:20 xavih raghug: oh, I see
10:20 raghug xavih: it's not just mkdir. We've plans to expand this logic to all dentry operations that depend on the layout (create, unlink, rename, rmdir etc)
10:20 xavih raghug: I should analyze it deeper. At first glance, I think there could be some deadlocks
10:20 itisravi joined #gluster
10:20 raghug xavih: no problem. We can meet later to discuss
10:21 raghug It's sufficient for now that you understand the scope of the problem/solution we are trying to implement in dht
10:21 xavih raghug: when you have something more detailed, I can look at it.
10:22 raghug xavih: roughly, you can expect patches similar to #13885 for other dentry operations
10:22 DV joined #gluster
10:22 raghug create, link, unlink, rmdir, symlink, mknod etc
10:23 xavih raghug: good. I don't foresee any problem with them, but you can add me as a reviewer so that I'll be able to check each of them
10:23 raghug there is also special case of lk, which I think we can take up once you are clear about dentry ops
10:24 raghug when lk is issued on a directory, dht has to choose a "lock-server", which all the clients agree upon
10:24 raghug (as directory is present on all subvols)
10:24 raghug we are planning to have a "hashed-subvol" as the lock-server
10:24 raghug and this hashed-subvol is dependent on layout xattr
10:25 xavih raghug: wouldn't this cause problems if that subvol dies ?
10:25 raghug xavih: yes. its a problem
10:25 raghug xavih: ideally we should acquire lock on _all_ subvols
10:25 xavih raghug: yes
10:25 raghug but its complicated and poses scalability issues
10:25 ndarshan joined #gluster
10:25 raghug so, I assume that's a tradeoff we've to live with
10:26 xavih raghug: and using a subset of the subvolumes ? (2 or more)
10:26 raghug xavih: yes, that's also a variant we are thinking of
10:26 raghug but bare minimum would be to agree on at-least one subvol as "lock-server"
10:27 xavih ok
10:27 raghug the current code is broken there too
10:27 raghug and can sometimes lead to different "lock-server(s)" even when all subvols are up
10:27 nishanth joined #gluster
10:28 raghug note that these are corner cases
10:28 xavih raghug: one thing to consider (for future versions probably)... if directories are converted to files, this problem disappears, as it will be the same as locking a file (i.e. a single subvolume from the point of view of DHT)
10:29 raghug xavih: how do you "convert" a directory to file?
10:29 raghug note that directories have HA requirement
10:29 xavih raghug: coding the directory structure into a file instead of relying on the underlying fs to store them
10:29 raghug with non-metadata-server design of current dht
10:30 xavih raghug: the HA is given by afr/ec
10:30 raghug xavih: I think that approach is followed by dht2
10:30 kdhananjay joined #gluster
10:30 raghug with MDS model
10:30 xavih raghug: ok
10:32 raghug xavih: thanks for your time. Will get back to you for any future questions :).
10:32 xavih raghug: yw :)
10:40 om2 joined #gluster
10:49 lanning joined #gluster
10:52 atalur joined #gluster
11:05 karthik___ joined #gluster
11:06 shubhendu joined #gluster
11:07 johnmilton joined #gluster
11:08 ndarshan joined #gluster
11:11 nishanth joined #gluster
11:13 atalur joined #gluster
11:21 siel joined #gluster
11:32 aspandey joined #gluster
11:35 ggarg joined #gluster
11:41 ahino joined #gluster
11:44 om2 joined #gluster
11:47 ppai joined #gluster
11:48 ninkotech joined #gluster
11:48 ninkotech_ joined #gluster
11:52 atinm joined #gluster
11:59 fedele joined #gluster
12:03 fedele Good morning, I have a problem: it systematically happens that files just written to a glusterfs volume disappear, but they are present on a brick... Can anyone help me?
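
fedele received no reply. When files written through the mount are visible on a brick but not on the mount, the usual first checks are whether every brick is online and whether self-heal has pending or split-brain entries; a sketch with a placeholder volume name:

    # are all brick processes and self-heal daemons running?
    gluster volume status myvol
    # anything pending heal or in split-brain?
    gluster volume heal myvol info
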
12:07 kotreshhr left #gluster
12:11 ppai joined #gluster
12:30 unclemarc joined #gluster
12:33 om2 joined #gluster
12:37 julim joined #gluster
12:51 ira joined #gluster
12:57 plarsen joined #gluster
12:57 aphorise joined #gluster
13:03 nbalacha joined #gluster
13:14 siel joined #gluster
13:24 chirino_m joined #gluster
13:25 chirino_m joined #gluster
13:29 fedele joined #gluster
13:32 jiffin joined #gluster
13:33 David_H_Smith joined #gluster
13:34 shyam joined #gluster
13:43 rwheeler joined #gluster
13:44 luizcpg joined #gluster
13:50 pur__ joined #gluster
13:52 gem joined #gluster
13:57 julim joined #gluster
13:58 om2 joined #gluster
14:03 DV joined #gluster
14:04 donaldinou joined #gluster
14:05 donaldinou hi there
14:06 donaldinou I need some help if anyone is kind enough
14:06 monotek joined #gluster
14:10 donaldinou I want to grow my replicated gluster volume from 2 to 3 bricks. So I did:
14:10 donaldinou gluster peer probe new-empty-disk-server-volume -> OK
14:10 donaldinou on new-empty-disk-server-volume I did:
14:10 donaldinou gluster volume add-brick volume-name replica 3 new-empty-disk-server-volume:/path/to/the/brick -> OK
14:10 donaldinou if I do a gluster volume info, all three bricks are displayed
14:11 donaldinou but my new brick still stays empty (no files, no folders)
14:11 donaldinou so I've done a gluster volume heal... It seems to be OK, but my new server with the empty disk stays empty, no heal...
14:13 arcolife joined #gluster
14:13 test joined #gluster
14:13 donaldinou plz help :P
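
donaldinou's question went unanswered here. After raising the replica count, the new brick is normally populated by the self-heal daemon; if nothing happens on its own, a full heal can be requested explicitly and its progress watched (volume name as in the commands above):

    # trigger a full self-heal so the new replica gets populated
    gluster volume heal volume-name full
    # list entries still waiting to be healed
    gluster volume heal volume-name info
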
14:16 nage joined #gluster
14:20 kpease joined #gluster
14:25 kpease joined #gluster
14:28 pdrakeweb joined #gluster
15:00 F2Knight joined #gluster
15:03 hchiramm joined #gluster
15:07 ju5t joined #gluster
15:07 JesperA joined #gluster
15:07 wushudoin joined #gluster
15:09 wushudoin joined #gluster
15:16 JesperA- joined #gluster
15:17 shyam left #gluster
15:20 JesperA joined #gluster
15:21 aspandey joined #gluster
15:32 JesperA- joined #gluster
15:40 JesperA joined #gluster
15:42 nathwill joined #gluster
15:44 shyam joined #gluster
15:51 nathwill joined #gluster
15:52 tertiary joined #gluster
15:54 fedele joined #gluster
15:56 JesperA joined #gluster
16:03 shaunm joined #gluster
16:07 Slashman joined #gluster
16:10 donaldinou damned!
16:10 donaldinou left #gluster
16:18 DV joined #gluster
16:23 nehar joined #gluster
16:27 tyler274 so glusterd is unable to start due to timeout
16:28 tyler274 on any of my servers
16:29 Slashman joined #gluster
16:34 jri joined #gluster
16:35 tyler274 "glusterd.service: Start operation timed out. Terminating."
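
That message comes from systemd's start timeout rather than from gluster itself; the unit status and glusterd's own log usually show where startup stalls (a generic sketch):

    systemctl status glusterd
    journalctl -u glusterd.service -b --no-pager
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
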
16:37 anoopcs tyler274, Check logs.
16:38 tyler274 https://www.irccloud.com/pastebin/l0m1YJ1P/
16:38 glusterbot Title: Pastebin | IRCCloud (at www.irccloud.com)
16:40 tyler274 https://www.irccloud.com/pastebin/KVq1QoJL/
16:40 glusterbot Title: Pastebin | IRCCloud (at www.irccloud.com)
16:40 tyler274 something about rdma missing?
16:40 JesperA- joined #gluster
16:46 jri joined #gluster
16:47 johnmilton joined #gluster
16:48 anoopcs tyler274, rpm installed?
16:49 tyler274 archlinux not rhel/fedora
16:51 jri joined #gluster
16:55 om joined #gluster
16:56 anoopcs tyler274, Have you tried installing other versions?
16:56 anoopcs Is this an update to previous 3.7 version?
16:57 tyler274 3.7.11
16:57 tyler274 specific package here journalctl -xe
16:57 tyler274 * https://www.archlinux.org/packages/community/x86_64/glusterfs/
16:57 glusterbot Title: Arch Linux - glusterfs 3.7.11-2 (x86_64) (at www.archlinux.org)
17:01 anoopcs tyler274, Can you please paste the last few lines from glusterd log?
17:02 tyler274 tail -f -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
17:02 anoopcs tyler274, That would be enough I guess.
17:04 tyler274 https://www.irccloud.com/pastebin/qm4eUJpi/
17:04 glusterbot Title: Pastebin | IRCCloud (at www.irccloud.com)
17:05 johnmilton joined #gluster
17:06 tyler274 always just times out
17:09 tyler274 also leaves a few running processes
17:09 tyler274 which need to be manually killed
17:10 anoopcs tyler274, rdma.so error can be ignored
17:10 tyler274 anoopcs: ok, so its not that then...
17:10 anoopcs tyler274, the tcp transport has succeeded. Can you try running glusterd in debug mode?
17:10 anoopcs as in `glusterd -L DEBUG`
17:12 anoopcs and paste the last few lines as before.
17:12 tyler274 running that as root it just sits in the terminal, logs are as follows
17:13 tyler274 https://www.irccloud.com/pastebin/Vxsr7KjN/
17:13 glusterbot Title: Pastebin | IRCCloud (at www.irccloud.com)
17:15 anoopcs tyler274, You had some peers before?
17:15 tyler274 they are all suffering the same issue
17:17 anoopcs tyler274, All peers are in reachable state, right?
17:19 tyler274 of the 4, one of them is powered off due to a hardware issue, two refuse to start glusterd, and the fourth seems to be misbehaving as well
17:19 tyler274 although the last one has no bricks on it anyway
17:22 karnan joined #gluster
17:23 RameshN joined #gluster
17:26 tyler274 anoopcs: This issue suddenly appeared within the last two weeks or so
17:27 anoopcs tyler274, OK. I don't have any clue as of now.
17:28 anoopcs tyler274, So you did update glusterfs on all nodes?
17:28 tyler274 I believe so
17:29 anoopcs Hm.
17:33 anoopcs tyler274, Do we have anything in brick logs?
17:34 tyler274 lots of stuff
17:34 tyler274 https://www.irccloud.com/pastebin/ENabGIfl/
17:34 glusterbot Title: Pastebin | IRCCloud (at www.irccloud.com)
17:36 plarsen joined #gluster
17:39 atalur joined #gluster
17:39 anoopcs tyler274, "Failed to fetch volume file" is something we need to look into.
17:47 anoopcs tyler274, Anyway drop a mail to gluster-users@gluster.org with all these details and hopefully someone will help you thereafter.
17:48 shubhendu joined #gluster
17:56 Guest76557 joined #gluster
18:06 julim joined #gluster
18:19 gem joined #gluster
18:21 om joined #gluster
18:21 om2 joined #gluster
18:21 PaulCuzner joined #gluster
18:22 Guest12872 joined #gluster
18:24 om2 joined #gluster
18:27 guhcampos joined #gluster
18:38 F2Knight joined #gluster
18:54 gem joined #gluster
18:55 jiffin joined #gluster
19:03 luizcpg joined #gluster
19:18 deniszh joined #gluster
19:19 swebb joined #gluster
19:20 nishanth joined #gluster
19:28 arcolife joined #gluster
19:44 shyam left #gluster
19:46 shyam joined #gluster
20:11 DV joined #gluster
20:26 wushudoin joined #gluster
20:45 hackman joined #gluster
20:51 kovsheni_ joined #gluster
21:05 deniszh joined #gluster
21:25 om joined #gluster
21:26 om2 joined #gluster
21:26 kovshenin joined #gluster
21:28 Guest58723 left #gluster
21:52 om2 left #gluster
21:53 julim joined #gluster
22:00 julim joined #gluster
22:06 klaxa joined #gluster
22:07 DV joined #gluster
22:15 luizcpg joined #gluster
22:29 jbrooks joined #gluster
23:05 gbox joined #gluster
23:29 plarsen joined #gluster
23:57 chirino joined #gluster
