
IRC log for #gluster, 2015-09-03


All times shown according to UTC.

Time Nick Message
00:29 papamoose joined #gluster
00:42 shyam joined #gluster
00:48 bennyturns joined #gluster
00:54 n-st joined #gluster
01:02 johndescs joined #gluster
01:02 n-st joined #gluster
01:05 julim joined #gluster
01:12 rafi joined #gluster
01:20 julim joined #gluster
01:30 Lee1092 joined #gluster
01:33 weijin joined #gluster
01:34 n-st joined #gluster
01:45 tg2 joined #gluster
01:54 badone_ joined #gluster
02:11 harish_ joined #gluster
02:12 nangthang joined #gluster
02:13 amye joined #gluster
02:26 julim joined #gluster
02:39 haomaiwa_ joined #gluster
02:39 overclk joined #gluster
02:57 maveric_amitc_ joined #gluster
02:59 haomaiwang joined #gluster
03:09 vishvendra joined #gluster
03:19 TheSeven joined #gluster
03:23 overclk joined #gluster
03:27 mrrrgn joined #gluster
03:31 calisto joined #gluster
03:37 overclk joined #gluster
03:39 bharata-rao joined #gluster
03:42 nishanth joined #gluster
03:44 overclk joined #gluster
03:52 shubhendu joined #gluster
04:01 ppai joined #gluster
04:04 ashiq joined #gluster
04:09 kanagaraj joined #gluster
04:11 baojg joined #gluster
04:12 sakshi joined #gluster
04:19 neha joined #gluster
04:22 overclk joined #gluster
04:26 yazhini joined #gluster
04:28 Manikandan joined #gluster
04:31 itisravi joined #gluster
04:38 RameshN joined #gluster
04:41 overclk joined #gluster
04:43 aravindavk joined #gluster
04:48 ndarshan joined #gluster
04:53 kotreshhr joined #gluster
04:55 rafi joined #gluster
04:59 Saravana_ joined #gluster
05:00 ppai joined #gluster
05:03 Bhaskarakiran joined #gluster
05:10 vimal joined #gluster
05:12 skoduri joined #gluster
05:18 deepakcs joined #gluster
05:18 maveric_amitc_ joined #gluster
05:21 side_control joined #gluster
05:30 rjoseph joined #gluster
05:31 jiffin joined #gluster
05:32 nbalacha joined #gluster
05:39 poornimag joined #gluster
05:43 pppp joined #gluster
05:45 kshlm joined #gluster
05:47 overclk joined #gluster
05:47 vmallika joined #gluster
05:49 kdhananjay joined #gluster
05:49 hgowtham joined #gluster
05:56 JC__ joined #gluster
05:58 TvL2386 joined #gluster
06:02 barius2333 joined #gluster
06:02 yazhini joined #gluster
06:03 skoduri joined #gluster
06:06 vishvendra1 joined #gluster
06:08 jtux joined #gluster
06:15 jp009 joined #gluster
06:17 ndarshan joined #gluster
06:17 mhulsman joined #gluster
06:18 atalur joined #gluster
06:18 shubhendu joined #gluster
06:33 ramky joined #gluster
06:37 dusmant joined #gluster
06:41 spalai joined #gluster
06:43 mhulsman joined #gluster
06:43 hgowtham joined #gluster
06:56 Philambdo joined #gluster
06:59 anil joined #gluster
07:01 Philambdo joined #gluster
07:04 [Enrico] joined #gluster
07:05 jwd joined #gluster
07:08 auzty joined #gluster
07:10 ppai joined #gluster
07:31 ndarshan joined #gluster
07:32 shubhendu joined #gluster
07:33 RameshN joined #gluster
07:34 fsimonce joined #gluster
07:46 Philambdo joined #gluster
07:55 DV joined #gluster
08:05 jwaibel joined #gluster
08:26 DV joined #gluster
08:27 DocGreen joined #gluster
08:36 ctria joined #gluster
08:38 weijin joined #gluster
08:40 arcolife joined #gluster
08:40 Bhaskarakiran joined #gluster
08:44 RameshN joined #gluster
08:44 LebedevRI joined #gluster
08:46 svalo joined #gluster
08:46 skoduri ndevos, ping..
08:47 ndevos pong skoduri
08:47 skoduri please merge http://review.gluster.org/#/c/12089/ .. it has passed regressions :)
08:47 glusterbot Title: Gerrit Code Review (at review.gluster.org)
08:47 RayTrace_ joined #gluster
08:47 ndevos skoduri: done!
08:47 skoduri ndevos, thanks :)
08:47 skoduri ndevos++
08:47 glusterbot skoduri: ndevos's karma is now 21
08:56 s19n joined #gluster
08:57 dusmant joined #gluster
08:58 s19n Hi all. Does it matter if the reserved blocks count on an ext4 brick is different from 0? I think it doesn't, since gluster daemons run as root; is that correct?
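For reference, the ext4 reserved-blocks setting s19n asks about can be inspected and cleared with tune2fs; a minimal sketch, with /dev/sdb1 standing in for the brick device (hypothetical):

    # Show the current reserved block count for the brick filesystem
    tune2fs -l /dev/sdb1 | grep -i 'reserved block'
    # Setting it to 0% only reclaims space; since gluster daemons run as root,
    # the reservation does not block them either way
    tune2fs -m 0 /dev/sdb1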
09:02 nishanth joined #gluster
09:08 natarej joined #gluster
09:12 kovshenin joined #gluster
09:15 raghu joined #gluster
09:21 arcolife joined #gluster
09:23 deniszh joined #gluster
09:25 Intensity joined #gluster
09:26 spalai joined #gluster
09:35 RameshN joined #gluster
09:37 skoduri_ joined #gluster
09:37 poornimag joined #gluster
09:45 cristov joined #gluster
09:49 Trefex joined #gluster
09:53 overclk joined #gluster
09:55 shubhendu joined #gluster
09:56 spalai joined #gluster
09:58 dusmant joined #gluster
10:04 social joined #gluster
10:06 Trefex joined #gluster
10:13 ndarshan joined #gluster
10:13 Manikandan joined #gluster
10:15 kbyrne joined #gluster
10:23 rjoseph|afk joined #gluster
10:24 rjoseph|afk joined #gluster
10:26 overclk joined #gluster
10:46 Debloper joined #gluster
10:47 Bhaskarakiran joined #gluster
10:48 ndarshan joined #gluster
10:58 skoduri joined #gluster
10:59 ira joined #gluster
11:00 Pupeno joined #gluster
11:01 poornimag joined #gluster
11:07 shubhendu joined #gluster
11:11 kkeithley1 joined #gluster
11:12 arcolife joined #gluster
11:15 dusmant joined #gluster
11:18 weijin joined #gluster
11:29 skoduri joined #gluster
11:36 B21956 joined #gluster
11:37 julim joined #gluster
11:43 Bhaskarakiran joined #gluster
11:46 dusmant joined #gluster
11:54 cyberswat joined #gluster
12:04 pdrakeweb joined #gluster
12:05 elico joined #gluster
12:05 rgustafs joined #gluster
12:08 Mr_Psmith joined #gluster
12:12 jtux joined #gluster
12:12 atalur joined #gluster
12:17 ws2k3 joined #gluster
12:18 dusmant joined #gluster
12:19 bennyturns joined #gluster
12:21 shubhendu joined #gluster
12:21 arcolife joined #gluster
12:22 kotreshhr left #gluster
12:32 amye joined #gluster
12:32 aravindavk joined #gluster
12:33 spalai left #gluster
12:40 unclemarc joined #gluster
12:40 jcastill1 joined #gluster
12:44 ashiq joined #gluster
12:45 jcastillo joined #gluster
12:46 Trefex joined #gluster
12:48 weijin joined #gluster
12:49 kanagaraj joined #gluster
12:53 shyam joined #gluster
12:53 Bhaskarakiran joined #gluster
13:04 rwheeler joined #gluster
13:04 mbukatov joined #gluster
13:06 amye joined #gluster
13:11 rgustafs joined #gluster
13:14 maveric_amitc_ joined #gluster
13:14 harish_ joined #gluster
13:15 ashiq joined #gluster
13:16 shyam joined #gluster
13:18 amye joined #gluster
13:19 plarsen joined #gluster
13:21 cristov joined #gluster
13:22 s19n another question... There is a rebalance fix-layout running after a volume expansion; is it expected that the directories it fixes (as reported in volname-rebalance.log) get their mtime updated?
13:24 Slashman joined #gluster
13:29 rwheeler joined #gluster
13:29 calisto joined #gluster
13:32 RayTrace_ joined #gluster
13:33 klaxa joined #gluster
13:36 dgandhi joined #gluster
13:37 harold joined #gluster
13:41 RayTrace_ joined #gluster
13:43 RayTrace_ joined #gluster
13:44 firemanxbr joined #gluster
13:46 marbu joined #gluster
13:50 RayTrac__ joined #gluster
13:50 s19n maybe this behaviour is related to https://bugzilla.redhat.com/show_bug.cgi?id=884594 ?
13:50 glusterbot Bug 884594: medium, medium, ---, nsathyan, ASSIGNED , handle mtime updates properly in dht
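For context, the operation under discussion is fix-layout, which rewrites directory layout ranges after an add-brick without migrating file data; a minimal sketch of the commands involved, with myvol as a hypothetical volume name:

    # Recalculate directory layouts across the expanded brick set (no data moved)
    gluster volume rebalance myvol fix-layout start
    # Check progress; fixed directories are logged in myvol-rebalance.log
    gluster volume rebalance myvol status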
13:57 mbukatov joined #gluster
13:59 maveric_amitc_ joined #gluster
14:05 aravindavk joined #gluster
14:08 RayTrace_ joined #gluster
14:09 wehde joined #gluster
14:09 cholcombe joined #gluster
14:11 RayTrace_ joined #gluster
14:24 DocGreen joined #gluster
14:25 shyam joined #gluster
14:27 Trefex joined #gluster
14:29 Merlin_ joined #gluster
14:29 spalai joined #gluster
14:32 Iodun joined #gluster
14:33 nishanth joined #gluster
14:35 DocGreen joined #gluster
14:36 x_merlin_x Hi everybody. I do have a question on how to backup glusterfs snapshots.
14:37 x_merlin_x I would like to create a cron job that does backup daily. Create snapshot, make img of snapshot and upload to ftp.
14:37 x_merlin_x How do I identify the glusterfs snapshot file?
14:39 wehde anyone using gluster with iscsi?
14:43 ndevos x_merlin_x: a glusterfs snapshot creates lvm-snapshots for each brick, you would need to read /dev/<volumegroup>/<snapshot-lv> and upload that for each brick
14:44 ndevos wehde: iscsi which side? formatted iscsi block devices for the bricks, or storing images and configuring scsi-target-utils to export those?
14:46 rwheeler joined #gluster
14:46 wehde i was looking at running vm's on glusterfs and then creating iscsi blocks on top of the gluster volume as well. then mounting the iscsi block devices in my windows vm's
14:47 ndevos wehde: there is a gluster plugin for scsi-target-utils that you should be able to use for that
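A sketch of the simpler variant ndevos mentioned earlier (a disk image stored on a FUSE-mounted volume, exported with scsi-target-utils); the mount point, image size, and IQN are hypothetical, and the gluster-native plugin he refers to would avoid the FUSE hop but its exact syntax is not shown here:

    # Create a sparse disk image on the mounted gluster volume
    truncate -s 100G /mnt/glustervol/win-lun0.img

    # Export it as an iSCSI LUN via tgtd
    cat >> /etc/tgt/targets.conf <<'EOF'
    <target iqn.2015-09.org.example:gluster-lun0>
        backing-store /mnt/glustervol/win-lun0.img
    </target>
    EOF
    service tgtd restart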
14:50 x_merlin_x @ndevos: would that be: mount /dev/gluster/2d828e6282964e0e89616b297130aa1b_0 as an example? How could I automate that after issuing gluster snapshot create snap1 via cron?
14:51 ndevos x_merlin_x: sorry, I dont know how snapshot names can be mapped to bricks
14:51 wehde ndevos, is there a better way that you know of to let windows access gluster
14:52 ndevos x_merlin_x: maybe there is a "gluster snapshot info snap1" command or the like?
14:52 vmallika joined #gluster
14:52 x_merlin_x this seems to work: sudo dd if=/dev/gluster/d0c254908dca451d8f566be77437c538_0 | gzip > snap1.gz
14:52 ndevos wehde: it really depends on what you need, you could export a volume over Samba too, or configure Windows to use NFS
14:52 wehde ndevos, i would ideally like to store the entirety of my file server natively on gluster but unfortunately its a windows 2012 machine
14:53 wehde ndevos, windows using nfs doesn't maintain active directory permissions
14:54 wehde ndevos, samba exporting the shares in linux doesn't work either because samba4 doesn't play nicely with the new active directory schema
14:55 wehde i know that vmware for the longest time had a 2tb vmdk file size limit and they were able to get around that using iscsi to the storage volume natively
14:55 ndevos wehde: hmm, I would think Samba should work nicely, if that is not the case, I would suggest to check with the Samba folks
14:55 x_merlin_x is there a command that returns the volume name of a snapshot?
14:55 * ndevos isnt a snapshot expert
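A rough sketch of the cron job x_merlin_x describes, under the unconfirmed assumption (per the exchange above) that the newest LV in the snapshot volume group belongs to the snapshot just taken; volume, VG, and FTP details are hypothetical:

    #!/bin/sh
    # Daily snapshot backup for a single-brick volume
    SNAP="backup-$(date +%F)"
    gluster snapshot create "$SNAP" myvol
    # Map snapshot to its brick LV - here simply the newest LV in the VG;
    # verify this mapping on your own setup (e.g. via 'gluster snapshot info')
    LV=$(ls -t /dev/gluster | head -n 1)
    dd if="/dev/gluster/$LV" bs=1M | gzip > "/backup/$SNAP.img.gz"
    curl -T "/backup/$SNAP.img.gz" ftp://backup.example.com/ --user user:pass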
14:58 x_merli__ joined #gluster
14:59 kanarip joined #gluster
14:59 nbalacha joined #gluster
14:59 rafi joined #gluster
14:59 Merlin_ joined #gluster
15:03 Merlin_ joined #gluster
15:05 calisto joined #gluster
15:09 bennyturns joined #gluster
15:11 calisto joined #gluster
15:13 spcmastertim joined #gluster
15:19 Merlin__ joined #gluster
15:21 kanarip joined #gluster
15:24 Merlin_ joined #gluster
15:25 dbruhn joined #gluster
15:26 s19n is a "mismatching layouts for /" issue something which will be resolved once the fix-layout finishes its job?
15:27 JoeJulian Right, that's fixed with fix-layout.
15:27 papamoose left #gluster
15:29 Merlin__ joined #gluster
15:29 jiffin joined #gluster
15:29 s19n JoeJulian: thanks
15:31 RedW joined #gluster
15:33 calisto joined #gluster
15:34 s19n I am in the same situation as the one described here: http://www.gluster.org/pipermail/gluster-users/2014-October/019292.html
15:34 glusterbot Title: [Gluster-users] gluster 3.4.5: lots of permission problems after add-brick/rebalance (at www.gluster.org)
15:38 s19n lots of SETATTR(): [path] => -1 (Operation not permitted) on the client logs
15:38 JoeJulian s19n: Are you using geo-rep?
15:39 vishvendra joined #gluster
15:40 s19n JoeJulian: no, it's a replicated distributed volume, just expanded from 4x2 to 6x2 on three nodes (after having been 4x3 for a while to avoid replace-brick).
15:41 JoeJulian Well then it's not the other bug I found.
15:44 s19n JoeJulian: thanks anyway, that was just one of a few "weird" things I am seeing now
15:52 rotbeard joined #gluster
15:53 weijin joined #gluster
15:56 Merlin_ joined #gluster
16:01 Merlin__ joined #gluster
16:04 Asako joined #gluster
16:04 JoeJulian s19n: I can't find any other place for an EPERM... Can you share your volume info?
16:04 Asako hey, has anybody noticed that the yum packages for CentOS 7 have broken dependencies?
16:04 JoeJulian Oh?
16:05 kkeithley_ Asako: which version? What's missing?
16:05 kkeithley_ userspace_rcu? Get that from EPEL.
16:06 Asako I grabbed http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/glusterfs-epel.repo
16:06 Asako yum install glusterfs-server fails
16:06 kkeithley_ what's the error? (use fpaste if it's big)
16:07 Asako one sec
16:07 s19n JoeJulian: you mean just the output of 'gluster volume info'?
16:07 JoeJulian yes
16:07 Asako http://pastebin.com/5XVmDjfL
16:07 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:08 Asako pb has ads?  I never noticed.
16:08 Asako http://fpaste.org/263176/44129650/ there
16:08 glusterbot Title: #263176 Fedora Project Pastebin (at fpaste.org)
16:08 Merlin_ joined #gluster
16:09 kkeithley_ you've already got 3.4.7 installed?
16:09 JoeJulian Even if you block the ads, it has a fixed display width that's terrible.
16:09 coredump joined #gluster
16:09 s19n JoeJulian: sure, http://fpaste.org/263177/
16:09 glusterbot Title: #263177 Fedora Project Pastebin (at fpaste.org)
16:10 weijin joined #gluster
16:10 JoeJulian Weird.
16:10 Asako no, 3.4.7 is in the default repos
16:10 kkeithley_ and actually it's trying to use the client-side packages from RHEL/CentOS base instead of from download.gluster.org. Something is very strange
16:10 Asako Available Packages
16:10 Asako glusterfs.x86_64    3.6.0.29-2.el7    system-base
16:11 spalai left #gluster
16:11 squizzi joined #gluster
16:11 Asako maybe I can exclude them in yum.conf
16:12 s19n using 3.4.2-ubuntu2~precise6 on nodes, 3.4.7-ubuntu1~trusty1 on clients
16:12 Asako hmm
16:12 Asako I think somebody has altered our yum repos
16:13 Pupeno_ joined #gluster
16:13 Merlin_ joined #gluster
16:13 kkeithley_ Well, RHEL and CentOS have gluster client-side RPMs. (No server). That's what the 3.6.0-29-2 RPMs are
16:14 Asako adding exclude=gluster* to yum.system.repo fixed it
16:14 kkeithley_ yup
16:15 Asako thanks, now I can get some work done
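The fix Asako applied, spelled out: keep the client-only RPMs in the distro repo from shadowing the download.gluster.org packages. A minimal sketch, assuming the repo file and section names from the paste above:

    # In /etc/yum.repos.d/yum.system.repo, under the [system-base] section, add:
    #   exclude=gluster*
    # then refresh metadata and retry the install
    yum clean all
    yum install glusterfs-server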
16:15 Merlin__ joined #gluster
16:17 s19n at the moment my most urgent issue is the directory mtime being mangled on fix-layout; I could not determine if it's a known bug or not...
16:17 JoeJulian It is
16:18 JoeJulian It's been long since fixed.
16:20 s19n JoeJulian: great, if it wasn't for https://bugzilla.redhat.com/show_bug.cgi?id=1168897 I would have updated before expanding the volume
16:20 glusterbot Bug 1168897: medium, medium, ---, bugs, NEW , Attempt remove-brick after node has terminated in cluster gives error: volume remove-brick commit force: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600.
16:22 JoeJulian s19n: yeah, you would have needed to have been using >= 3.6.3 iirc.
16:24 kkeithley_ JoeJulian: https://bugzilla.redhat.com/show_bug.cgi?id=1251821 Any specifics?
16:24 glusterbot Bug 1251821: unspecified, unspecified, ---, kkeithle, ASSIGNED , /usr/lib/glusterfs/ganesha/ganesha_ha.sh is distro specific
16:28 JoeJulian kkeithley_: yeah... I wish I'd been more specific too. What a lame bug report.
16:29 Merlin_ joined #gluster
16:34 Merlin_ joined #gluster
16:36 winterMyt joined #gluster
16:40 Merlin_ joined #gluster
16:42 amye joined #gluster
16:44 RayTrace_ joined #gluster
16:54 RameshN joined #gluster
17:03 Rapture joined #gluster
17:04 Saravana_ joined #gluster
17:12 calisto joined #gluster
17:14 DocGreen joined #gluster
17:15 amye joined #gluster
17:18 amye joined #gluster
17:21 bennyturns joined #gluster
17:28 skoduri joined #gluster
17:29 Pupeno joined #gluster
17:31 Trefex joined #gluster
17:35 amye joined #gluster
17:49 unclemarc joined #gluster
17:58 julim joined #gluster
18:04 Gill joined #gluster
18:18 bfoster joined #gluster
18:32 daMaestro joined #gluster
18:35 wushudoin| joined #gluster
18:38 amye joined #gluster
18:39 Asako [2015-09-03 18:38:40.763833] E [socket.c:2276:socket_connect_finish] 0-management: connection to 10.39.136.171:24007 failed (No route to host)
18:39 Asako is this a firewall error?
18:40 wushudoin| joined #gluster
18:43 JoeJulian Asako: usually
18:43 Asako I can ping the IP
18:43 Asako so there is a route
18:44 JoeJulian But the port's probably blocked and sends that icmp message when connections are attempted.
18:44 JoeJulian @ports
18:44 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
18:47 Asako thanks
18:48 Merlin_ joined #gluster
18:48 bene2 joined #gluster
18:55 Asako yup, it's firewalld
18:58 JoeJulian Asako: fyi bug 1253967
18:58 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1253967 unspecified, unspecified, ---, anekkunt, POST , glusterfs doesn't include firewalld rules
18:59 Asako I just manually added the ports to the zone
19:04 JoeJulian Asako: which will work until they change.
19:04 JoeJulian The brick ports are dynamic.
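Until glusterfs ships firewalld rules (bug 1253967 above), the ports glusterbot listed can be opened by hand; a minimal sketch for firewalld, where the width of the brick-port range (100 here) is an arbitrary assumption since, as JoeJulian notes, brick ports are assigned dynamically from 49152 up:

    # Management (and rdma) ports
    firewall-cmd --permanent --add-port=24007-24008/tcp
    # Brick ports: dynamic from 49152, so open a range sized for your brick count
    firewall-cmd --permanent --add-port=49152-49251/tcp
    # Built-in NFS, NLM, and rpcbind, if gluster's NFS server is used
    firewall-cmd --permanent --add-port=38465-38468/tcp
    firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
    firewall-cmd --permanent --add-port=2049/tcp
    firewall-cmd --reload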
19:05 Asako oh boy
19:11 Gill joined #gluster
19:14 shaunm joined #gluster
19:14 amye joined #gluster
19:16 svalo joined #gluster
19:19 jobewan joined #gluster
19:23 ben____ joined #gluster
19:24 ben____ hello
19:24 glusterbot ben____: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:25 ben____ I have a couple of glusterfs 3.7.0 servers with a volume of one brick replicated on each server. When a client copies a big file and you ls the target directory in another shell, it hangs. It also hangs when the client removes the file
19:26 ben____ i wanted to request your opinion
19:26 JoeJulian My first opinion is upgrade to 3.7.4. There were some (imho) show-stopping bugs in the 3.7.0 release.
19:27 JoeJulian Second, you're writing these files to a mounted volume, right? Not directly to the bricks?
19:27 magamo Is there a changelog between patchlevel versions of gluster?
19:27 ben____ yes
19:28 ben____ to the mounted volume
19:28 ben____ client and servers are 3.7.0
19:28 ben____ but i also observed it with 3.6.3.
19:28 JoeJulian magamo: yes
19:28 ben____ thats why i thought it may be a more generic issue
19:31 Trefex joined #gluster
19:32 JoeJulian magamo: Ok, I'm more disappointed with my response than I'd hoped. http://gluster.readthedocs.org/en/latest/release-notes/ You can also, of course, read the commit logs (which is what I usually do) https://github.com/gluster/glusterfs/commits/v3.7.4
19:32 glusterbot Title: index - Gluster Docs (at gluster.readthedocs.org)
19:36 magamo Thanks, JoeJulian.  The commit logs come close enough for my tastes.
19:36 JoeJulian ben____: This sounds interesting. Have you checked your client logs during this hang to see if there's any clues there?
19:36 Merlin_ joined #gluster
19:36 ben____ it doesnt report much really
19:37 ben____ transport endpoint dissconnections
19:37 ben____ i must say the machines are virtual servers of the c3.8xlarge type in AWS
19:37 ben____ tho they have enhanced networking, so does the client
19:38 jbautista- joined #gluster
19:39 ben____ lets say you go to the client and copy a big 5Gb file from /root to /myshare which is a gluster share, if you open another shell in the client and cd to /myshare and ls, hangs forever
19:39 ben____ and at some point the copy hangs too
19:39 ben____ but it eventually returns
19:39 ben____ a rm also hangs
19:39 ben____ tho they get done but after some time
19:39 ben____ network is checked, it's all apparently fine, btw thanks for your opinions
19:42 JoeJulian The transport disconnection sounds like resource starvation causing the server not to respond and the tcp connection times out.
19:43 ben____ exactly, we had a smaller instance as a client with no enhanced networking and we upgraded it thinking that would help
19:43 ben____ it still shows the hangs
19:43 jbautista- joined #gluster
19:49 Merlin__ joined #gluster
19:51 JoeJulian I don't know very much about aws.
19:52 chuz04arley joined #gluster
19:52 ben____ i will do more tests see if i get some more info
19:53 ben____ Thanks JoeJulian for your comments
19:55 Merlin_ joined #gluster
19:56 tertiary joined #gluster
19:56 amye left #gluster
19:57 tertiary Any suggestions on a 107 error when probing a new server? The server log states "rejecting management handshake request from unknown peer". Firewalls are off, I can ping it fine, and the other identical new nodes connected to the pool just fine...
20:00 cristov joined #gluster
20:08 Merlin_ joined #gluster
20:13 JoeJulian 107 = Transport endpoint is not connected
20:13 JoeJulian (I still don't know why it doesn't say that)
20:14 JoeJulian tertiary:
20:14 JoeJulian The "unknown peer" message makes me believe that the server that's being probed is already part of a trusted pool.
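A common way out of the state JoeJulian describes is to wipe the stale pool membership on the node being probed; a sketch, assuming the node holds no volume data worth keeping (this destroys its gluster configuration):

    # On the node being probed
    service glusterd stop
    # Drop peers/vols state but keep glusterd.info so the node UUID survives
    find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
    service glusterd start
    # Then retry from an existing pool member:
    gluster peer probe newnode.example.com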
20:16 Merlin_ joined #gluster
20:20 amye joined #gluster
20:21 jwd joined #gluster
20:21 julim joined #gluster
20:21 jwd_ joined #gluster
20:23 shyam joined #gluster
20:23 Merlin_ joined #gluster
20:26 amye joined #gluster
20:28 rwheeler joined #gluster
20:31 Merlin_ joined #gluster
20:39 Merlin_ joined #gluster
20:40 tertiary @JoeJulian: I ended up reformatting and reinstalling the OS and the problem has gone away. Very strange. BTW, thanks, you've helped me before and I greatly appreciate your suggestions
20:42 julim joined #gluster
20:46 shyam joined #gluster
20:46 JoeJulian Oh, good. I'm glad you were able to get it sorted out.
20:47 Merlin_ joined #gluster
20:48 shyam joined #gluster
20:49 amye joined #gluster
20:51 jwd joined #gluster
20:54 Merlin__ joined #gluster
20:54 shyam joined #gluster
20:59 Pupeno joined #gluster
21:01 Merlin_ joined #gluster
21:02 deniszh joined #gluster
21:02 julim joined #gluster
21:09 Merlin_ joined #gluster
21:10 Pupeno_ joined #gluster
21:11 Trefex joined #gluster
21:15 Merlin_ joined #gluster
21:15 elico joined #gluster
21:24 Merlin_ joined #gluster
21:24 amye left #gluster
21:26 badone_ joined #gluster
21:30 Merlin_ joined #gluster
21:33 Pupeno joined #gluster
21:36 Merlin_ joined #gluster
21:37 Pupeno__ joined #gluster
21:43 Merlin_ joined #gluster
21:44 Pupeno joined #gluster
21:45 badone_ joined #gluster
21:50 Merlin_ joined #gluster
21:52 jcastill1 joined #gluster
21:55 Merlin_ joined #gluster
21:57 jcastillo joined #gluster
22:02 m0zes joined #gluster
22:10 amye joined #gluster
22:23 Mr_Psmith joined #gluster
22:25 corretico joined #gluster
23:15 cyberswat joined #gluster
23:18 badone__ joined #gluster
23:22 gildub joined #gluster
23:33 amye joined #gluster
23:55 MugginsM joined #gluster
23:57 Merlin_ joined #gluster
23:58 MugginsM hi all. So our gluster server (3.6.5, Ubuntu, 2 node replicated cluster) has started spewing several hundred log entries a second from [rpc-clnt-ping.c:145:rpc_clnt_ping_cbk] "socket or ib related error"
23:58 MugginsM it's rapidly filling our drive (10G of log messages in the last couple of hours already)
23:58 MugginsM anyone seen this before? can't find much on the Googles, and the code seems to suggest it's failing to connect to its peer sometimes, but not all the time
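While the root cause is hunted down, one stopgap for the disk-filling symptom MugginsM reports is aggressive log rotation; a minimal sketch of a logrotate drop-in, with the size and rotate counts as arbitrary assumptions:

    cat > /etc/logrotate.d/glusterfs-emergency <<'EOF'
    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        size 100M
        rotate 3
        compress
        missingok
        copytruncate
    }
    EOF
    # Force an immediate rotation to reclaim space
    logrotate -f /etc/logrotate.d/glusterfs-emergency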
