IRC log for #gluster, 2015-01-29


All times are shown in UTC.

Time Nick Message
00:06 gildub joined #gluster
00:17 T3 joined #gluster
00:30 wkf joined #gluster
00:50 zerick joined #gluster
00:51 zerick_ joined #gluster
00:52 _zerick_ joined #gluster
01:00 zerick joined #gluster
01:07 rwheeler joined #gluster
01:18 T3 joined #gluster
01:31 bala joined #gluster
01:37 PeterA i am having a VERY strange error with sqlplus with tnsnames and wallets files on Gluster NFS
01:37 PeterA http://pastie.org/9870008
01:37 PeterA getting the "ptrace: umoven: Input/output error"
01:37 PeterA when sqlplus tries to read the tnsnames.ora
01:39 PeterA any clue?
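The paste itself isn't reproduced here, but "ptrace: umoven: ..." is usually strace complaining that it couldn't read the traced process's memory, rather than a GlusterFS error as such. A rough way to narrow down which call actually fails when sqlplus reads tnsnames.ora over the Gluster NFS mount; the connect string and output path below are placeholders, not taken from PeterA's setup:

    strace -f -e trace=file,read -o /tmp/sqlplus.trace sqlplus user/pass@SOME_ALIAS
    grep -n 'tnsnames.ora' /tmp/sqlplus.trace   # shows the open/read calls and their errno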
02:07 Rapture joined #gluster
02:11 T3 joined #gluster
02:14 haomaiwa_ joined #gluster
02:39 bala joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 eightyeight joined #gluster
03:02 zerick joined #gluster
03:09 wkf joined #gluster
03:12 bala joined #gluster
03:16 deniszh1 joined #gluster
03:16 zerick joined #gluster
03:22 bennyturns joined #gluster
03:30 rejy joined #gluster
03:42 bala joined #gluster
03:49 kanagaraj joined #gluster
03:49 spandit joined #gluster
03:52 itisravi joined #gluster
03:55 nbalacha joined #gluster
04:05 bala joined #gluster
04:08 atinmu joined #gluster
04:15 rafi joined #gluster
04:19 nishanth joined #gluster
04:24 nangthang joined #gluster
04:29 pp joined #gluster
04:30 kshlm joined #gluster
04:31 shubhendu joined #gluster
04:42 rjoseph|afk joined #gluster
04:45 anoopcs joined #gluster
04:48 Micromus joined #gluster
04:52 ndarshan joined #gluster
04:53 sakshi joined #gluster
04:57 MacWinner joined #gluster
04:57 gem_ joined #gluster
05:00 hchiramm joined #gluster
05:03 anrao joined #gluster
05:05 jiffin joined #gluster
05:06 Manikandan joined #gluster
05:13 timbyr_ joined #gluster
05:18 anil joined #gluster
05:18 foster joined #gluster
05:19 julim_ joined #gluster
05:20 DV joined #gluster
05:22 kdhananjay joined #gluster
05:23 deepakcs joined #gluster
05:26 anoopcs joined #gluster
05:28 jriano joined #gluster
05:31 jriano joined #gluster
05:34 hagarth joined #gluster
05:36 smohan joined #gluster
05:36 soumya_ joined #gluster
05:38 jriano joined #gluster
05:40 glusterbot News from newglusterbugs: [Bug 1186993] "gluster volume set help" for server.statedump-path has wrong description <https://bugzilla.redhat.com/show_bug.cgi?id=1186993>
05:47 shubhendu joined #gluster
05:48 ppai joined #gluster
05:49 kumar joined #gluster
06:00 anil joined #gluster
06:01 meghanam joined #gluster
06:02 hchiramm joined #gluster
06:10 dusmant joined #gluster
06:10 anrao joined #gluster
06:14 Telsin joined #gluster
06:19 timbyr_ joined #gluster
06:24 Pupeno joined #gluster
06:27 Pupeno_ joined #gluster
06:29 smohan joined #gluster
06:30 aravindavk joined #gluster
06:34 Pupeno joined #gluster
06:37 Pupeno_ joined #gluster
06:41 Pupeno joined #gluster
06:41 Pupeno joined #gluster
06:43 sadbox joined #gluster
06:45 jriano joined #gluster
06:46 fubada joined #gluster
06:48 kshlm joined #gluster
06:53 dusmant joined #gluster
06:53 nbalacha joined #gluster
06:53 shubhendu joined #gluster
06:54 mikedep333 joined #gluster
06:56 atinmu joined #gluster
07:05 fubada joined #gluster
07:07 Bardack joined #gluster
07:07 stickyboy joined #gluster
07:22 jtux joined #gluster
07:23 mbukatov joined #gluster
07:29 nbalacha joined #gluster
07:43 bala joined #gluster
07:50 ralala joined #gluster
07:54 atinmu joined #gluster
07:55 dusmant joined #gluster
08:05 shaunm joined #gluster
08:11 DV joined #gluster
08:11 glusterbot News from newglusterbugs: [Bug 1075417] Spelling mistakes and typos in the glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1075417>
08:11 glusterbot News from newglusterbugs: [Bug 1187021] Geo-replication not replicating ACLs and xattrs to target <https://bugzilla.redhat.com/show_bug.cgi?id=1187021>
08:15 timbyr_ joined #gluster
08:23 ctria joined #gluster
08:32 semiosis joined #gluster
08:39 liquidat joined #gluster
08:43 anoopcs joined #gluster
08:46 dusmant joined #gluster
08:51 hagarth joined #gluster
08:51 ndarshan joined #gluster
08:55 nbalacha joined #gluster
08:56 Fen1 joined #gluster
09:30 fattaneh1 joined #gluster
09:30 ramteid joined #gluster
09:33 karnan joined #gluster
09:33 tanuck joined #gluster
09:35 deepakcs joined #gluster
09:44 stickyboy joined #gluster
09:44 anrao joined #gluster
09:51 DV joined #gluster
09:51 dusmant joined #gluster
09:56 fattaneh1 left #gluster
10:02 kovshenin joined #gluster
10:09 deniszh joined #gluster
10:14 nshaikh joined #gluster
10:15 deniszh1 joined #gluster
10:19 Fen1 joined #gluster
10:19 nishanth joined #gluster
10:20 anrao joined #gluster
10:22 dusmant joined #gluster
10:24 fattaneh1 joined #gluster
10:28 nbalacha joined #gluster
10:29 DV joined #gluster
10:32 bala joined #gluster
10:46 T0aD joined #gluster
10:51 dusmant joined #gluster
10:51 anoopcs joined #gluster
10:56 LordFolken g'day guys, I've just updated to 3.6.2
10:56 LordFolken replace brick is now sorta working on my disperse volume
10:56 LordFolken command completed successfully
10:56 LordFolken new data is being written to the new brick
10:57 LordFolken if I read old data it appears on the new brick also
10:57 LordFolken is there an easy way to get it to sync up
10:57 bala joined #gluster
11:00 Norky joined #gluster
11:00 hagarth joined #gluster
11:01 xavih LordFolken: to force a full synchronization you should execute "find <mount point> -d -exec getfattr -h -n trusted.ec.heal {} \;"
11:01 T3 joined #gluster
11:03 LebedevRI joined #gluster
11:04 quantum Hi! Can i config hybrid gre and vlan in neutron ml2 plugin?
11:12 LordFolken xavih: sweet, only possible issue, if somebody does a replace-brick on a disperse volume, they might assume the data automatically syncs up
11:12 LordFolken xavih: and be a bit upset if one of the two remaining bricks dies and data is lost
11:19 xavih LordFolken: currently self-heal is reactive. It's planned to have self-heal daemon working with dispersed volumes in 3.7
11:22 LordFolken xavih: it's all good ;-p
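For reference, xavih's heal trigger written out against a concrete mount point (the path is an example). Reading the trusted.ec.heal xattr on each entry is what appears to kick the disperse translator into rebuilding the missing fragments on the replaced brick, and -d (depth-first) visits directory contents before the directories themselves:

    # run on a client that has the dispersed volume mounted
    find /mnt/datapoint -d -exec getfattr -h -n trusted.ec.heal {} \;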
11:29 Norky joined #gluster
11:32 nishanth joined #gluster
11:34 purpleidea fubada: i remember a while back you had issues getting data in modules (hiera data) thing working... what was the ultimate fix? do you remember?
11:34 kkeithley1 joined #gluster
11:36 purpleidea fubada: user ccard is having a problem with my puppet-ipa module. this issue is their data in modules setup isn't working, if you remember what your issue was, please let us know, so ccard can get it working, and so we can add the answer to the FAQ.
11:39 ccard joined #gluster
11:39 ccard purpleidea: I'm here
11:40 purpleidea ccard: wait here, and when fubada sees the messages, perhaps fubada will know the answer
11:41 theron joined #gluster
11:44 Slashman joined #gluster
11:45 calisto joined #gluster
11:54 meghanam joined #gluster
11:56 ira joined #gluster
12:04 ira joined #gluster
12:04 anrao joined #gluster
12:06 siel joined #gluster
12:12 glusterbot News from newglusterbugs: [Bug 1187128] Value displayed in slave column of geo-replication status command is inconsistent. <https://bugzilla.redhat.com/show_bug.cgi?id=1187128>
12:12 glusterbot News from newglusterbugs: [Bug 1187140] [RFE]: geo-rep: Tool to find missing files in slave volume <https://bugzilla.redhat.com/show_bug.cgi?id=1187140>
12:14 shaunm joined #gluster
12:15 siel joined #gluster
12:18 smohan_ joined #gluster
12:18 itisravi_ joined #gluster
12:25 soumya_ joined #gluster
12:26 siel joined #gluster
12:35 elico joined #gluster
12:41 purpleidea JoeJulian: testing...
12:41 purpleidea JoeJulian: ping
12:41 glusterbot purpleidea: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
12:41 purpleidea JoeJulian: i <3 your bot... i thought you had added that, but wanted to test.
12:41 purpleidea THANK YOU, you're fixing the internet
12:41 samppah :O
12:42 purpleidea now someone make me an irssi script that, as long as i've talked to you before, sends that out automatically if you are 'ping-ed'
12:43 julim joined #gluster
12:56 badone joined #gluster
12:59 vikumar joined #gluster
13:01 anoopcs joined #gluster
13:02 chirino joined #gluster
13:02 rjoseph|afk joined #gluster
13:15 psilvao joined #gluster
13:15 psilvao hi
13:15 glusterbot psilvao: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:15 psilvao I have a big problem
13:16 psilvao when i try to run glusterd at boot in centos 7 it doesn't work, but when i start it by hand it works. in the log file i can see the problem.. ---> that is   [common-utils.c:227:gf_resolve_ip6] 0-resolver: getaddrinfo failed
13:16 glusterbot psilvao: -'s karma is now -342
13:17 psilvao why gluster use ipv6?
13:17 psilvao How can I force it to use IPv4 to resolve the name?
13:17 psilvao thanks in advance
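An aside on the error above: gf_resolve_ip6 appears to be glusterd's generic resolver despite its name, so this is usually not an IPv6-vs-IPv4 problem but glusterd starting before name resolution is available at boot. A hedged workaround on CentOS 7 is to order the service after network-online (assumes the stock glusterd.service unit and NetworkManager-managed interfaces):

    mkdir -p /etc/systemd/system/glusterd.service.d
    printf '[Unit]\nWants=network-online.target\nAfter=network-online.target\n' \
        > /etc/systemd/system/glusterd.service.d/wait-for-network.conf
    systemctl daemon-reload
    systemctl enable NetworkManager-wait-online.service   # makes network-online.target meaningful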
13:21 Fen1 joined #gluster
13:25 vikumar joined #gluster
13:25 eychenz joined #gluster
13:27 eychenz Hi , can someone help me with page 16 of this documentation http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
13:27 eychenz in the figure it shows two servers each having two bricks
13:28 eychenz but the command doesn't seem right! it is mentioning server 3 and 4
13:29 eychenz My situation is that I have two servers
13:30 eychenz each has two volumes /dev/sdb and /dev/sdc
13:31 eychenz Now I want to create a distributed & replicated volume , like the one in the figure on page 16 in the mentioned document
13:31 eychenz What I did by now is entering the commands below :
13:32 eychenz sudo gluster volume create gv0 replica 2 gluster1:/export/sdb1/brick gluster2:/export/sdb1/brick
13:32 eychenz sudo gluster volume create gv1 replica 2 gluster1:/export/sdc1/brick gluster2:/export/sdc1/brick
13:32 eychenz and now I have the following :
13:33 eychenz ubuntu@gluster1:~$ sudo gluster volume info
13:33 eychenz
13:33 eychenz Volume Name: gv0
13:33 eychenz Type: Replicate
13:33 eychenz Volume ID: ce406330-9c7e-480e-9c2c-97927fd8c37e
13:33 eychenz Status: Started
13:33 eychenz Number of Bricks: 1 x 2 = 2
13:33 eychenz Transport-type: tcp
13:33 eychenz Bricks:
13:33 eychenz Brick1: gluster1:/export/sdb1/brick
13:33 eychenz Brick2: gluster2:/export/sdb1/brick
13:33 eychenz
13:33 eychenz Volume Name: gv1
13:33 eychenz Type: Replicate
13:33 eychenz Volume ID: 12833b98-f951-45ac-bda9-3455904cab0b
13:33 eychenz Status: Started
13:33 eychenz Number of Bricks: 1 x 2 = 2
13:33 eychenz Transport-type: tcp
13:33 eychenz Bricks:
13:33 eychenz Brick1: gluster1:/export/sdc1/brick
13:33 eychenz Brick2: gluster2:/export/sdc1/brick
13:33 eychenz and on the other node is the same :
13:33 eychenz ubuntu@gluster2:~$ sudo gluster volume info
13:33 eychenz
13:33 eychenz Volume Name: gv0
13:33 eychenz Type: Replicate
13:33 eychenz Volume ID: ce406330-9c7e-480e-9c2c-97927fd8c37e
13:33 eychenz Status: Started
13:33 eychenz Number of Bricks: 1 x 2 = 2
13:33 eychenz Transport-type: tcp
13:33 eychenz Bricks:
13:34 eychenz Brick1: gluster1:/export/sdb1/brick
13:34 eychenz Brick2: gluster2:/export/sdb1/brick
13:34 eychenz
13:34 eychenz Volume Name: gv1
13:34 eychenz Type: Replicate
13:34 eychenz Volume ID: 12833b98-f951-45ac-bda9-3455904cab0b
13:34 eychenz Status: Started
13:34 eychenz Number of Bricks: 1 x 2 = 2
13:34 eychenz Transport-type: tcp
13:34 eychenz Bricks:
13:34 eychenz Brick1: gluster1:/export/sdc1/brick
13:34 eychenz Brick2: gluster2:/export/sdc1/brick
13:36 gothos eychenz: next time please use something like: http://fpaste.org/
13:37 eychenz sure, sorry for that
13:39 smohan joined #gluster
13:43 LordFolken eychenz: whats the issue?
13:43 LordFolken what does error 17 mean
13:43 LordFolken as in
13:43 LordFolken [2015-01-29 13:41:42.357671] W [ec-common.c:164:ec_check_status] 2-datapoint-disperse-0: Operation failed on some subvolumes (up=7, mask=6, remaining=0, good=6, bad=1)
13:43 LordFolken [2015-01-29 13:41:42.357737] W [ec-common.c:131:ec_heal_report] 2-datapoint-disperse-0: Heal failed (error 17)
13:43 LordFolken [2015-01-29 13:41:42.357671] W [ec-common.c:164:ec_check_status] 2-datapoint-disperse-0: Operation failed on some subvolumes (up=7, mask=6, remaining=0, good=6, bad=1)
13:43 LordFolken [2015-01-29 13:41:42.357737] W [ec-common.c:131:ec_heal_report] 2-datapoint-disperse-0: Heal failed (error 17)
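The "error 17" in those heal messages is a raw errno value; on Linux errno 17 is EEXIST ("File exists"). A quick, gluster-agnostic way to translate any numeric error the logs print:

    python -c 'import errno, os; print(errno.errorcode[17], os.strerror(17))'
    # -> EEXIST File exists   (printed as a tuple on Python 2)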
13:44 eychenz well in the document, the figure shows two servers but the command includes 4 servers...
13:45 eychenz page 16 on http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
13:45 eychenz I'm not sure if the command is right for that purpose
13:46 eychenz So I'm asking how can I create a replicated & distributed volume while having two servers and each having two volumes
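To spell out the answer to eychenz's question: the guide's four-server figure maps onto two servers with two bricks each, and with "replica 2" each consecutive pair of bricks in the create command becomes a replica set, so the pairs should be ordered to span both servers. A sketch using the hostnames and brick paths from the paste above (the existing gv0/gv1 volumes would first have to be stopped and deleted, and the brick directories cleared of their old volume attributes):

    gluster volume create gv0 replica 2 \
        gluster1:/export/sdb1/brick gluster2:/export/sdb1/brick \
        gluster1:/export/sdc1/brick gluster2:/export/sdc1/brick
    gluster volume start gv0
    gluster volume info gv0   # should report Type: Distributed-Replicate, Number of Bricks: 2 x 2 = 4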
13:47 meghanam joined #gluster
13:48 harish joined #gluster
13:52 tdasilva joined #gluster
13:58 bene2 joined #gluster
14:02 nbalacha joined #gluster
14:06 shubhendu joined #gluster
14:09 virusuy joined #gluster
14:09 virusuy joined #gluster
14:25 kanagaraj joined #gluster
14:35 wkf joined #gluster
14:37 meghanam joined #gluster
14:38 dgandhi joined #gluster
14:39 dgandhi joined #gluster
14:40 dgandhi joined #gluster
14:41 hagarth joined #gluster
14:42 psilvao LordFolken: is it possible to make glusterd work with IPv4 only?
14:48 mikedep333 joined #gluster
14:51 DV joined #gluster
14:52 psilvao Is it a bug? --> [common-utils.c:227:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Nombre o servicio desconocido, i.e. Name or service not known)
14:52 psilvao why ip v6?
14:53 jmarley joined #gluster
14:53 jmarley joined #gluster
14:54 theron joined #gluster
14:55 rwheeler joined #gluster
14:59 kshlm joined #gluster
15:02 Intensity joined #gluster
15:06 harish joined #gluster
15:16 jobewan joined #gluster
15:17 shaunm joined #gluster
15:17 wushudoin joined #gluster
15:24 lmickh joined #gluster
15:29 Bardack joined #gluster
15:29 ira joined #gluster
15:34 Inflatablewoman joined #gluster
15:35 Inflatablewoman hi.  Question with regard to a replication-type volume.  It appears that my data is not actually totally replicated but split across the two bricks.  Is that normal?
15:38 semiosis Inflatablewoman: sounds like you forgot to use the [replica N] option in your volume create command.  without that, using two bricks will make a distributed volume.
15:38 Inflatablewoman ahhhhh
15:39 Inflatablewoman gluster volume create gv0 replica 2 gluster1.qmirm.test2:/export/sdb1/brick gluster2.qmirm.test2:/export/sdb1/brick
15:39 Inflatablewoman replica 2
15:40 Inflatablewoman using glusterfs 3.5.2
15:42 Inflatablewoman hmm how can I see what the replica state thinks it is?
15:43 kkeithley_ gluster volume info
15:43 semiosis kkeithley_++
15:43 glusterbot semiosis: kkeithley_'s karma is now 1
15:43 semiosis kkeithley++
15:43 glusterbot semiosis: kkeithley's karma is now 23
15:43 anrao joined #gluster
15:43 Inflatablewoman Type: Replicate
15:43 Inflatablewoman Well, it says replicate but not the count
15:43 Inflatablewoman Status: Started Number of Bricks: 1 x 2 = 2
15:44 * kkeithley_ wishes for a dogmabot
15:44 Inflatablewoman thanks
15:46 jmarley joined #gluster
15:46 Inflatablewoman any ideas ?
15:47 semiosis whats the question?
15:47 kkeithley_ files aren't replicated on both bricks
15:47 Inflatablewoman I created the volume with replica 2.  The data though looks split.
15:47 Inflatablewoman I was wondering if that was normal...
15:47 kkeithley_ no, it's not normal
15:47 Inflatablewoman ahhh ok
15:48 Inflatablewoman So if one node goes down, then I cant access the data?
15:48 Inflatablewoman sorry, these questions might seem dumb. :)
15:48 kkeithley_ until it's healed, yes.  try healing the volume and see if it fixes itself.
15:48 kkeithley_ @splitbrain
15:48 glusterbot kkeithley_: I do not know about 'splitbrain', but I do know about these similar topics: 'split brain', 'split-brain'
15:48 semiosis try a 'gluster volume heal full' or something like that
15:49 kkeithley_ @split brain
15:49 glusterbot kkeithley_: To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
15:49 semiosis doesnt sound like split brain to me, yet
15:49 semiosis maybe a connectivity issue
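The exact syntax behind semiosis' suggestion, plus the usual checks that both replicas are actually connected (the volume name is taken from the create command quoted above):

    gluster volume heal gv0 full    # trigger a full self-heal across the replica pair
    gluster volume heal gv0 info    # list entries still pending heal
    gluster volume status gv0       # both bricks and the self-heal daemon should show Online: Y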
15:49 Inflatablewoman if I kill a node manually
15:50 Inflatablewoman I was under the impression I can still get to the data.
15:50 semiosis Inflatablewoman: did you allow the right ,,(ports) between the hosts?
15:50 glusterbot Inflatablewoman: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
15:50 semiosis Inflatablewoman: your replication is not working right. once it is you will have high availability
15:50 Inflatablewoman ahhh ok
15:50 Inflatablewoman I mounted the volume in osx
15:50 Inflatablewoman all looks good from the OSX
15:51 Inflatablewoman then when I check out the bricks
15:51 Inflatablewoman the data is split between them
15:51 Inflatablewoman I was wondering if I had done something obvious.  I was expecting replication to mirror the two bricks.
15:51 semiosis Inflatablewoman: check the client log file, usually /var/log/glusterfs/the-mount-point.log but no idea where that would be on osx.  i suspect you'll find some connection failures in the log
15:51 semiosis and i'm betting it's a firewall issue
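If it does turn out to be a firewall problem, these are example iptables rules for the ports glusterbot lists above, assuming a post-3.4 volume with up to two bricks per server; confirm the brick ports with "gluster volume status" and persist the rules with the distro's usual mechanism:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only for rdma)
    iptables -A INPUT -p tcp --dport 49152:49153 -j ACCEPT   # brick daemons (glusterfsd)
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT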
15:52 harish joined #gluster
15:52 Inflatablewoman the connection is fine.  I killed the node on purpose to test the principle
15:52 Inflatablewoman I was under the impression that if I have 2 nodes with the same data, if one goes down, then I can still access the data.
15:52 semiosis you keep repeating yourself
15:52 kkeithley_ up to the point you killed the node, files should be present on both nodes
15:53 kkeithley_ and when you kill one node you should still be able to read, from the remaining node
15:53 semiosis find that log file, put it on pastie.org, that should help us help you
15:53 Inflatablewoman the structure is exactly the same on both bricks
15:53 Inflatablewoman but the data is split
15:54 semiosis you've said that three times now
15:54 semiosis still not enough information for us to help you.  get the log file.
15:54 * semiosis gbtw
15:55 plarsen joined #gluster
15:58 Inflatablewoman I am trying not to talk about specifics, but the principles.
15:59 Inflatablewoman if I have a replication volume, I thought (it seems incorrectly) that the data is replicated between the bricks.  It seems that is not true, and that the data is split.
16:04 bennyturns joined #gluster
16:06 semiosis Inflatablewoman: we can help you fix your replication problem, to get it working correctly. but we need logs.
16:06 semiosis Inflatablewoman: aside from that i'm not sure there's anything else we can do to help you.
16:06 semiosis s/not sure/doubtful/
16:07 glusterbot What semiosis meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
16:07 semiosis yay glusterbot
16:07 Inflatablewoman I think you already have
16:10 Inflatablewoman Can you confirm that redundancy is only supported by the Gluster client and not the NFS mount?
16:15 semiosis when using an nfs client replication is handled by the server.  the files are stored redundantly (on a replicated volume) but the client connection is not HA, unless you have set up a virtual IP between your servers.
16:15 kkeithley_ With a gluster native mount, the data is replicated by the client.
16:15 kkeithley_ yeah, what semiosis said
16:16 kkeithley_ semiosis++
16:16 glusterbot kkeithley_: semiosis's karma is now 2000010
16:16 Inflatablewoman semiosis++
16:16 glusterbot Inflatablewoman: semiosis's karma is now 2000011
16:17 Inflatablewoman thanks.
16:17 Inflatablewoman kkeithley_++
16:17 glusterbot Inflatablewoman: kkeithley_'s karma is now 2
16:21 * kkeithley_ needs a karma balance transfer
16:24 semiosis talk to JoeJulian
16:25 semiosis thats how i got a 2000000 karma mortgage
16:25 T3 joined #gluster
16:40 JoeJulian Mortgage... pfft... you earned those but there was no tracking method.
16:41 _Bryan_ joined #gluster
16:44 partner i have some way to go to those numbers.. need to get active myself, might need to quit the day job, too :)
16:44 JoeJulian Hehe
16:45 T3 joined #gluster
16:50 kovshenin joined #gluster
16:51 tanuck joined #gluster
16:55 harish joined #gluster
17:01 bbreton joined #gluster
17:03 calisto joined #gluster
17:06 bbreton I was wondering if anyone had problems with glusterfs on centos 7.  I have a centos 6 installation that I use and configured a centos 7 image the same aside from a few minor differences due to the OS differences. Gluster is working, but when I hit it with a cluster I am getting transport endpoint not connected.  Admittedly it is getting hit fairly hard, but this same config on centos 6 with the same workload worked
17:07 partner selinux? firewall?
17:08 JoeJulian Check firewalld/iptables, selinux, and locale
17:09 JoeJulian partner++
17:09 glusterbot JoeJulian: partner's karma is now 9
17:11 samppah one by one :)
17:11 JoeJulian Need to bump your numbers too, samppah. You used to help out a lot in the early days.
17:11 JoeJulian (when it was most critical)
17:11 samppah Yeah, but I have been slacking too much :O
17:12 partner samppah: as i'm ignoring all the parts/joins, has any new faces from the meetup joined the channel?
17:12 JoeJulian We wouldn't be where we are if it wasn't for your help back then.
17:12 samppah partner: haven't noticed any.. I'm trying to look for joins from .fi domains
17:13 bbreton thanks Joe, firewall was already disabled at least for now... this is on AWS so I double checked that security groups are permissive at least for within the cluster, selinux is enabled by default on this image, I will try disabling it... locale is saying en_US.UTF-8 for everything which I assume is good
17:13 tanuck joined #gluster
17:13 partner on what exact version is that locale issue present? i haven't bumped into it, but then again i'm running an old version
17:13 partner locale thing..
17:14 JoeJulian It's probably an selinux bug.
17:14 partner ah
17:14 JoeJulian The locale thing came around 3.5.2ish. Non english locales seem to be effected.
17:14 JoeJulian affected too.
17:15 partner might turn into rh/centos dude on monday as the platform i'm supposed to take care of is running on such.. :o
17:15 JoeJulian Excellent. I look forward to your conversion.
17:16 gem_ joined #gluster
17:16 partner haha
17:16 DV joined #gluster
17:17 partner it's been a while.. red hat 6 or so from ~2000 was the last one i've had on any of my machines
17:19 bbreton BTW just to make clear, for small jobs everything works with no errors... it is only when I start hitting it hard, maxing out the cluster, that I am getting the issue. That said, CPU/ram/process runtime seem equivalent in 7 vs. 6, so I'm assuming the load on gluster is equivalent (maybe a dangerous assumption)
17:20 JoeJulian Oh, not clear...
17:20 bbreton sorry yea I realized after that I didn't make it clear that it was at least working and this was only under load that it starts failing
17:21 JoeJulian so ENOTCONN on heavy load...
17:21 JoeJulian Check the client logs. Look for ping-timeout.
17:21 JoeJulian Then take that timestamp and look at the bricks.
17:22 JoeJulian And no, nobody's complained of such an issue as far as I've seen.
17:23 bbreton would that with default settings be in: /var/log/glusterfs/gluster.log ?
17:24 JoeJulian If your mount point is /gluster
17:24 bbreton yea it is
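A sketch of the procedure JoeJulian describes: pull the disconnect messages out of the client log, note their timestamps, then read the brick logs on the servers around the same times. Paths assume a /gluster mount point and default log locations:

    # on the client
    grep -E 'has not responded|disconnect' /var/log/glusterfs/gluster.log
    # on each server, around the timestamps found above
    less /var/log/glusterfs/bricks/*.log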
17:25 T3 ndevos: awesome work at http://www.gluster.org/community/documentation/images/7/71/Debugging-with-wireshark-niels-de-vos.pdf
17:25 T3 ndevos: thank you so much
17:25 JoeJulian ndevos++
17:25 glusterbot JoeJulian: ndevos's karma is now 7
17:26 T3 well deserved
17:26 JoeJulian ikr
17:32 PeterA joined #gluster
17:35 bbreton I'm going to try disabling selinux before going further... I confirmed it was disabled when i was running centos6 and I see googling that selinux can impact performance... if my cluster load was close to the tipping point even on the current setup I could see this pushing it over the edge
17:36 zerick joined #gluster
17:42 T3 I'm about to run the following command to capture gluster packages, and would appreciate if anyone could give a feedback about it:
17:42 T3 # tcpdump -w glusterfs1.pcap -i any -s 0 (tcp and portrange 24007-24008 and portrange 49152-49153 and portrange 38465-38467 and port 2049 and port 111) and (udp port 111)
17:43 JoeJulian Looks fine to me, assuming those are the right ports for your bricks. Check with "gluster volume status".
17:43 T3 actually:
17:43 T3 # tcpdump -w glusterfs1.pcap -i any -s 0 '(tcp and portrange 24007-24008 and portrange 49152-49153 and portrange 38465-38467 and port 2049 and port 111) or (udp port 111)'
17:43 T3 yeah, only 2 bricks
17:44 T3 will check as soon as possible
17:44 T3 # gluster volume status
17:44 T3 Another transaction could be in progress. Please try again after sometime.
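If the CLI stays locked like that, the brick ports can also be read from the listening brick processes on each server; a workaround sketch, not gluster's own interface for this:

    ss -tlnp | grep glusterfsd        # newer systems
    netstat -tlnp | grep glusterfsd   # older tooling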
17:47 nishanth joined #gluster
17:49 deniszh joined #gluster
18:02 harish joined #gluster
18:08 timbyr_ joined #gluster
18:10 fattaneh joined #gluster
18:13 glusterbot News from newglusterbugs: [Bug 1187296] No way to gracefully rotate the libgfapi Samba vfs_glusterfs logfile. <https://bugzilla.redhat.com/show_bug.cgi?id=1187296>
18:14 Gill joined #gluster
18:20 kkeithley_ semiosis: know anything about non-functional georep in wheezy pkgs?
18:20 Rapture joined #gluster
18:20 JoeJulian the libexec directory doesn't exist
18:22 kkeithley_ can you say a bit more about that? ;-)
18:26 JoeJulian iirc, /usr/libexec doesn't exist in that distro so /usr/libexec/glusterfs/gsyncd isn't there... or something like that...
18:27 JoeJulian dammit... I actually spun up VMs to figure this out and now I can't find the occasion...
18:28 fattaneh left #gluster
18:28 JoeJulian Found it
18:28 kkeithley_ ugh.
18:28 JoeJulian [2015-01-08 02:28:15.933542] E [resource(monitor):207:logerr] Popen: ssh> bash: /usr/libexec/glusterfs/gsyncd: No such file or directory
18:28 JoeJulian https://botbot.me/freenode/gluster/2015-01-08/?msg=29017476&page=1
18:33 kkeithley_ google's first two links would seem to suggest that on Debian-based distros we should be using /usr/lib/glusterfs/ as per FHS
18:41 kkeithley_ or maybe /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
18:48 fandi joined #gluster
18:49 kkeithley_ hmmm. autoconf seems to be the culprit.  I guess we're supposed to git --libexecdir= to the configure script on Debian
18:49 kkeithley_ s/git/pass/
18:49 glusterbot What kkeithley_ meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
18:50 kkeithley_ hmmm. autoconf seems to be the culprit.  I guess we're supposed to pass --libexecdir= to the configure script on Debian
18:55 semiosis kkeithley_: i do know about that
18:56 kkeithley_ mkay
18:56 semiosis there was a bug, let me try to find it
18:57 semiosis bug 1132766
18:57 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1132766 unspecified, unspecified, ---, glusterbugs, ASSIGNED , ubuntu ppa: 3.5 missing hooks and files for new geo-replication
18:58 kkeithley_ okay
18:58 jmarley joined #gluster
18:59 semiosis nathan's irc nick was nated. he was extremely helpful
18:59 semiosis https://botbot.me/freenode/gluster/2014-08-28/?msg=20573136&page=6
19:01 semiosis according to my last comment, we fixed one issue (the post hook script) but never solved the path problem you're talking about
19:01 semiosis https://bugzilla.redhat.com/show_bug.cgi?id=1132766#c7
19:01 glusterbot Bug 1132766: unspecified, unspecified, ---, glusterbugs, ASSIGNED , ubuntu ppa: 3.5 missing hooks and files for new geo-replication
19:03 semiosis kkeithley_: https://github.com/semiosis/glusterfs-debian/commit/2d6ed903ecefa292ce32f4a70da380f515370bf0
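For reference, the --libexecdir approach kkeithley floats above would look roughly like this in the Debian packaging's configure call; a sketch only, since the multiarch path is an assumption and the real packaging fix is what bug 1132766 and the commit linked above track (glusterfs seems to install its helpers, including gsyncd, under a glusterfs/ subdirectory of libexecdir):

    ./configure --prefix=/usr \
                --sysconfdir=/etc \
                --localstatedir=/var \
                --libexecdir=/usr/lib/x86_64-linux-gnu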
19:25 lpabon joined #gluster
19:27 MacWinner joined #gluster
19:29 mnbvasd joined #gluster
19:33 harish joined #gluster
19:36 harish joined #gluster
19:41 n-st joined #gluster
19:48 tdasilva joined #gluster
19:50 nage__ joined #gluster
19:50 JustinClift joined #gluster
19:50 AaronGreen joined #gluster
19:51 bbreton No luck with disabling selinux still getting the transport errors.  Looking at the logs the first message with something wrong says: server XX.XX.XX.XX:49153 has not responded in the last 42 seconds, disconnecting
19:52 Nuxr0 joined #gluster
19:52 bbreton I also tried cutting the load by 1/2 and still getting the error though not as quickly... monitoring doesn't show networking being down or any other issue with the failing server
19:52 dgandhi1 joined #gluster
19:53 georgeh__ joined #gluster
19:53 kkeithley_ semiosis: I don't see that that will fix the /usr/libexec/ vs /usr/lib/ issue. Am I missing something?
19:54 tobias- joined #gluster
19:54 semiosis kkeithley_: you're correct.  [14:01] <semiosis> according to my last comment, we fixed one issue (the post hook script) but never solved the path problem you're talking about
19:54 semiosis last comment in the bug report
19:54 kkeithley_ haha, yes, I suppose I should have read all the way to the bottom.
19:54 abyss^_ joined #gluster
19:55 dgandhi1 joined #gluster
19:55 social_ joined #gluster
19:56 dgandhi1 joined #gluster
19:56 marcoceppi joined #gluster
19:57 _br_ joined #gluster
19:57 dgandhi joined #gluster
19:58 al joined #gluster
19:58 dgandhi joined #gluster
19:58 bbreton nevermind server is unreachable I was looking at the wrong thing
20:00 bbreton ach... i terminated that node, it didn't die on its own... got mixed up on which test cluster i was looking at
20:02 fattaneh1 joined #gluster
20:02 semiosis i hate it when that happens
20:03 fattaneh1 left #gluster
20:09 bbreton I was all hopeful... if the issue was an entire node crashing, that would be more straightforward to debug, at least for me
20:13 glusterbot News from newglusterbugs: [Bug 1187347] RPC ping does not retransmit <https://bugzilla.redhat.com/show_bug.cgi?id=1187347>
20:28 calum_ joined #gluster
20:51 ricky-ticky joined #gluster
21:00 tanuck joined #gluster
21:02 tdasilva joined #gluster
21:25 elico joined #gluster
21:26 Guest24754 joined #gluster
21:47 natgeorg joined #gluster
22:05 natgeorg joined #gluster
22:07 T3 joined #gluster
22:14 glusterbot News from newglusterbugs: [Bug 1187372] Samba "use sendfile" is incompatible with GlusterFS libgfapi vfs_glusterfs. <https://bugzilla.redhat.com/show_bug.cgi?id=1187372>
22:14 Rapture joined #gluster
22:19 tdasilva joined #gluster
22:23 partner hmm, please remind me again of the reason these pop into the log each second:
22:24 partner [2015-01-29 22:22:30.137812] W [client-rpc-fops.c:1994:client3_3_setattr_cbk] 0-dfs-client-12: remote operation failed: Operation not permitted
22:24 partner [2015-01-29 22:22:30.137851] E [dht-linkfile.c:213:dht_linkfile_setattr_cbk] 0-dfs-dht: setattr of uid/gid on /filedata/6431a17fc6f0aa5fa851c7ee4b4b414f1cc9176bc :<gfid:00000000-0000-0000-0000-000000000000> failed (Operation not permitted)
22:38 vimal joined #gluster
22:57 dgandhi joined #gluster
22:58 wkf joined #gluster
23:12 n-st joined #gluster
23:13 T3 joined #gluster
23:15 gildub joined #gluster
23:20 mkzero joined #gluster
