
IRC log for #gluster, 2015-08-06


All times shown according to UTC.

Time Nick Message
00:05 jbautista- joined #gluster
00:14 nzero joined #gluster
00:14 gildub joined #gluster
00:24 wushudoin| joined #gluster
00:27 cliluw joined #gluster
00:28 wushudoin| joined #gluster
00:54 skoduri joined #gluster
01:28 PaulCuzner joined #gluster
01:28 PaulCuzner left #gluster
01:33 shyam joined #gluster
01:40 nangthang joined #gluster
01:42 nzero joined #gluster
01:45 victori joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:55 calisto joined #gluster
01:55 gildub joined #gluster
01:59 pranithk joined #gluster
01:59 pranithk left #gluster
02:14 RameshN joined #gluster
02:16 zwevans joined #gluster
02:17 zwevans left #gluster
02:27 pppp joined #gluster
02:30 davidself joined #gluster
02:42 harish joined #gluster
02:53 corretico_ joined #gluster
02:58 calavera joined #gluster
03:12 dusmant joined #gluster
03:20 Pupeno joined #gluster
03:21 skoduri joined #gluster
03:24 bharata-rao joined #gluster
03:26 Byreddy joined #gluster
03:26 Twistedgrim joined #gluster
03:27 aravindavk joined #gluster
03:28 ccha joined #gluster
03:35 shubhendu joined #gluster
03:40 calisto joined #gluster
03:48 dusmant joined #gluster
03:54 itisravi joined #gluster
03:56 TheSeven joined #gluster
04:03 ppai joined #gluster
04:08 hagarth joined #gluster
04:14 atinm joined #gluster
04:20 yosafbridge joined #gluster
04:24 jwd joined #gluster
04:26 nbalacha joined #gluster
04:27 jwaibel joined #gluster
04:28 RameshN joined #gluster
04:29 sakshi joined #gluster
04:30 kotreshhr joined #gluster
04:34 yazhini joined #gluster
04:39 surabhi joined #gluster
04:42 deepakcs joined #gluster
04:48 ndarshan joined #gluster
04:49 bennyturns joined #gluster
04:50 bennyturns joined #gluster
04:51 kanagaraj joined #gluster
04:52 howtostart joined #gluster
04:53 howtostart left #gluster
04:54 aravindavk joined #gluster
04:58 poornimag joined #gluster
05:00 kovshenin joined #gluster
05:00 jiffin joined #gluster
05:02 overclk joined #gluster
05:02 skoduri joined #gluster
05:04 ppai joined #gluster
05:05 surabhi joined #gluster
05:07 gem joined #gluster
05:08 kshlm joined #gluster
05:13 vimal joined #gluster
05:16 hgowtham joined #gluster
05:17 ashiq joined #gluster
05:19 vmallika joined #gluster
05:20 jiffin1 joined #gluster
05:24 raghu joined #gluster
05:24 aravindavk joined #gluster
05:25 edualbus joined #gluster
05:29 rafi joined #gluster
05:32 Manikandan joined #gluster
05:36 anil joined #gluster
05:37 jwd joined #gluster
05:42 dusmant joined #gluster
05:43 poornimag joined #gluster
05:44 TvL2386 joined #gluster
05:45 ppai joined #gluster
05:46 pppp joined #gluster
05:46 skoduri joined #gluster
05:46 Manikandan joined #gluster
05:49 ndarshan joined #gluster
05:49 shubhendu joined #gluster
05:50 Bhaskarakiran joined #gluster
05:51 woakes070048 joined #gluster
05:52 prabu joined #gluster
05:57 itisravi joined #gluster
06:07 autoditac joined #gluster
06:08 schandra joined #gluster
06:09 schandra hchiramm: there?
06:09 kdhananjay joined #gluster
06:10 schandra hchiramm:  readthedocs URLs need not have file extensions, only specify using the filenames (may it be any file)
06:10 hchiramm schandra, yes
06:10 chirino joined #gluster
06:11 hchiramm schandra, pm ?
06:16 vimal joined #gluster
06:28 martinetd joined #gluster
06:30 RameshN joined #gluster
06:32 kshlm joined #gluster
06:37 nangthang joined #gluster
06:39 prabu joined #gluster
06:46 neha joined #gluster
06:46 ndarshan joined #gluster
06:46 shubhendu joined #gluster
06:48 sakshi joined #gluster
06:52 nbalacha joined #gluster
06:57 TvL2386 joined #gluster
07:03 itisravi_ joined #gluster
07:06 Manikandan joined #gluster
07:07 poornimag joined #gluster
07:10 dusmant joined #gluster
07:10 atinm joined #gluster
07:13 sakshi joined #gluster
07:13 itisravi_ joined #gluster
07:15 kshlm joined #gluster
07:17 schandra joined #gluster
07:17 [Enrico] joined #gluster
07:18 ctria joined #gluster
07:27 jiffin1 joined #gluster
07:30 Manikandan joined #gluster
07:41 papamoose joined #gluster
07:43 shaunm joined #gluster
07:44 DV_ joined #gluster
07:48 whattodo joined #gluster
07:52 fsimonce joined #gluster
07:55 jvandewege joined #gluster
08:02 ashiq joined #gluster
08:02 DV joined #gluster
08:03 RedW joined #gluster
08:04 poornimag joined #gluster
08:04 dusmant joined #gluster
08:12 woakes070048 hey, does anyone here have their infiniband cards bonded?
08:12 jcastillo joined #gluster
08:18 Trefex joined #gluster
08:19 ashiq- joined #gluster
08:20 ashiq- joined #gluster
08:22 rjoseph joined #gluster
08:29 ashiq- joined #gluster
08:29 vipul_ joined #gluster
08:30 vipul_ left #gluster
08:32 rafi1 joined #gluster
08:34 kaushal_ joined #gluster
08:36 nbalacha joined #gluster
08:38 harish joined #gluster
08:42 woakes070048 joined #gluster
08:47 atinm joined #gluster
08:49 shubhendu joined #gluster
08:50 ndarshan joined #gluster
08:51 mattmulhern joined #gluster
08:54 dusmant joined #gluster
08:56 Philambdo joined #gluster
09:01 martinetd Hi. Is there a recommendation (for rhel7) to automount a gluster volume on localhost? fstab with _netdev is still too early because the brick itself is off a _netdev...
09:02 martinetd I get this in logs, as expected: "[2015-08-06 08:40:19.762452] E [socket.c:2332:socket_connect_finish] 0-glu0-client-0: connection to <ip>:<port> failed (Connection refused)"
09:02 DV joined #gluster
09:02 Manikandan joined #gluster
09:03 kshlm joined #gluster
09:03 martinetd Doesn't matter if I try to mount with server:vol or .vol file syntax, I'm guessing fetch-attempts=X could help but it's marked as "Deprecated"... Couldn't get a mount unit file depending on glusterd.service to work either (my guess is that glusterd forks before the volume is ready so mount is still too early)
09:04 martinetd Only thing that kind-of works is using automounter but it's kind of silly to rely on that for something I want always mounted
09:05 LebedevRI joined #gluster
09:06 arcolife joined #gluster
09:07 elico joined #gluster
09:07 eljrax bennyturns: Hey, sorry I disappeared the other day
09:07 eljrax did you mention you saw a slow-down with distributed volumes as well?
09:08 eljrax martinetd: On RHEL7 you've got systemd, you could write your own mount unit that uses dependencies
09:08 jcastill1 joined #gluster
09:10 mattmulhern joined #gluster
09:10 martinetd It's what I meant by "Couldn't get a mount unit file depending on glusterd.service to work either (my guess is that glusterd forks before the volume is ready so mount is still too early)"
09:10 martinetd I made the mount unit file depend on glusterd.service
09:10 eljrax Oh sorry, missed that
09:10 martinetd Sorry for writing too much at once ;)
09:12 ramky joined #gluster
09:13 eljrax I'm not exactly a systemd ninja, but did you try setting Before in glusterd.service ?
09:14 jcastillo joined #gluster
09:15 martinetd No haven't tried that, but I'd wager it should be the same as setting "After=glusterd.service" in the mount unit?
09:15 martinetd Given glusterd.service is of type forking systemd can't tell when the service is really "ready" either
09:16 martinetd I could try making it run with --no-daemon first
09:16 eljrax Yeah, that sounds like it'd do the same thing
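[Editor's note: the mount-unit approach martinetd and eljrax discuss above could be sketched roughly as below. This is a hypothetical reconstruction, not from the log — the volume name "glu0" (borrowed from martinetd's earlier error message), the mount point, and the unit filename are all placeholders. As martinetd points out, After=glusterd.service may still fire too early because glusterd forks before the bricks are up.]

```ini
# /etc/systemd/system/mnt-glu0.mount  (hypothetical; names are placeholders)
[Unit]
Description=Local mount of gluster volume glu0
After=glusterd.service network-online.target
Wants=network-online.target

[Mount]
What=localhost:/glu0
Where=/mnt/glu0
Type=glusterfs
Options=_netdev

[Install]
WantedBy=multi-user.target
```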
09:17 mattmulhern joined #gluster
09:17 eljrax bennyturns: Btw, I did try to set up a whole new volume with distributed from teh get-go, rather than add two bricks to a replicated one. Still getting the same results - that 2x2 is markedly slower than 1x2
09:17 kd1 joined #gluster
09:18 laxdog_ Hi, I'm having a problem with the speed of my gluster mount. It's over RDMA, but only writing at around 10% of the speed of the disk and nowhere near the speed of the network. Tested independently, the disk and network are fine. I'm kind of stuck now. Does anyone have an idea where I could go with this?
09:20 jiffin rafi1: can u look into laxdog_ issue
09:20 jiffin laxdog_: which gluster version are u using?
09:21 mattmulhern joined #gluster
09:21 laxdog_ jiffin: 3.7.3
09:22 martinetd BTW I haven't spent much time trying RDMA yet, but we're at least missing a selinux policy to allow access to /dev/infiniband stuff
09:22 laxdog_ I've set it to permissive, do I need to disable it completely?
09:23 anoopcs laxdog_, What was the workload?
09:23 anoopcs laxdog_, I mean, was that a dd based test?
09:24 laxdog_ dd writing from /dev/zero to the mount
09:24 laxdog_ Yea
09:24 laxdog_ 1G file
09:24 rafi1 laxdog_: and the volume configuration ?
09:24 atinm joined #gluster
09:24 laxdog_ replica 2
09:24 laxdog_ tested with duplicate as well and had the same issue
09:24 * anoopcs and we have our RDMA specialist live here :)
09:26 rafi1 laxdog_: I guess your IPoIB is configured correctly ?
09:26 autoditac joined #gluster
09:27 rafi1 laxdog_: did you look into the qperf output to get the line speed?
09:27 laxdog_ I ran tests with iperf between the hosts (I'm only using two at the minute for the PoC) and I was getting about 13GB/s, which is about what I'd expect.
09:28 laxdog_ qperf said it was 4x and 10Gb I think. I can check again. Though most of what I checked around the IB stuff was looking ok.
09:29 laxdog_ Actually, I'm thinking of another tool. I haven't used qperf. I'll poke at that now.
09:30 coredump joined #gluster
09:30 jcastill1 joined #gluster
09:31 rafi1 laxdog_: did you check the same thing over tcp? over IPoIB?
09:32 rafi1 laxdog_: and also may be you can use RAM disk to make sure your disk is not playing around ;)
09:33 laxdog_ The qpef tests look ok. RDMA tests ~ 1.73 GB/s and tcp a bit lower.
09:34 laxdog_ I don't think I need the RAM disk. The write speed is so low that I don't think it's an issue there. The RAID array was testing around 130 MB/s with dd and the GFS mount was testing around 10 MB/s
09:35 jcastillo joined #gluster
09:36 DV joined #gluster
09:39 rafi1 laxdog_: what about the tcp protocol? can you make sure you are getting the expected speed with tcp?
09:39 rafi1 laxdog_: to narrow down the possible scenarios ?
09:40 Manikandan joined #gluster
09:41 DV joined #gluster
09:44 shubhendu joined #gluster
09:44 laxdog_ Yea, I tested that when you mentioned qperf. My qperf results: TCP 1.69 GB/s, RDMA 1.72 GB/s.
09:45 laxdog_ I have also tested the volume with RDMA and IPoIB and got the same results.
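[Editor's note: the measurement sequence laxdog_ describes — qperf line-rate checks over both transports, then dd against the raw brick versus the gluster mount — would look roughly like the commands below. This is a hypothetical reconstruction; hostnames, brick paths, and mount points are placeholders taken loosely from the conversation, and qperf must be running with no arguments on the remote peer first.]

```shell
# Line speed between the two peers, RDMA vs. TCP
# (start a bare "qperf" on gluster-02-ib0 first):
qperf gluster-02-ib0 rc_bw tcp_bw

# Raw throughput of the backing RAID array (direct I/O, 1G file):
dd if=/dev/zero of=/data/brick/gv1/ddtest bs=1M count=1024 oflag=direct

# Same write through the gluster mount, for comparison:
dd if=/dev/zero of=/mnt/gv1/ddtest bs=1M count=1024 oflag=direct
```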
09:47 ndarshan joined #gluster
09:48 dusmant joined #gluster
09:49 jcastill1 joined #gluster
09:54 jcastillo joined #gluster
10:13 cuqa_ joined #gluster
10:22 Slashman joined #gluster
10:32 cuqa_ joined #gluster
10:34 shyam joined #gluster
10:35 maveric_amitc_ joined #gluster
10:44 kkeithley1 joined #gluster
10:44 ashiq joined #gluster
10:44 ashiq- joined #gluster
10:45 DV joined #gluster
10:46 ramky joined #gluster
10:47 gildub joined #gluster
11:00 ajames-41678 joined #gluster
11:01 nsoffer joined #gluster
11:06 firemanxbr joined #gluster
11:09 jcastill1 joined #gluster
11:14 jcastillo joined #gluster
11:16 Saravana_ joined #gluster
11:17 DV joined #gluster
11:21 shyam joined #gluster
11:26 ninkotech joined #gluster
11:26 ninkotech_ joined #gluster
11:29 arcolife joined #gluster
11:30 B21956 joined #gluster
11:30 B21956 left #gluster
11:32 Manikandan joined #gluster
11:33 martinetd hmm, this is fun, I now have two nodes in a pool; one sees the other as disconnected, the other sees the first as connected..
11:34 martinetd Tried restarting one glusterd after the other and the state didn't change
11:34 martinetd the glusterd.vol.log is pretty scary "readv on gluster socket failed (Invalid argument)" - could this be due to an upgrade?
11:37 harish joined #gluster
11:37 anoopcs laxdog_, Since you have observed the same speed with both TCP and RDMA protocol, I guess it has nothing to do with the protocol side. Can you please provide details regarding your volume set options, if any?
11:40 unclemarc joined #gluster
11:42 anoopcs s/nothing/something
11:45 anoopcs laxdog_, ^^
11:52 rafi joined #gluster
11:53 ashiq joined #gluster
11:56 ndarshan joined #gluster
12:01 jcastill1 joined #gluster
12:03 DV_ joined #gluster
12:06 jcastillo joined #gluster
12:06 gem joined #gluster
12:10 itisravi joined #gluster
12:17 deefy joined #gluster
12:18 rafi1 joined #gluster
12:18 deefy Hello, we're trying to mount a glusterfs drive through fstab on a centos7 but the system hangs, probably because the network is not up yet. The _netdev option doesn't seem to have any effect on this. We've been googling around for hours but we can't be the first ones to have this problem
12:24 poornimag joined #gluster
12:26 DV joined #gluster
12:32 l0uis deefy: i automount my gluster volume (on Ubuntu 12.04, mind you) from fstab w/ nobootwait and _netdev, no issues at all
12:32 aravindavk joined #gluster
12:33 arcolife joined #gluster
12:35 ajames-41678 joined #gluster
12:37 rafi joined #gluster
12:51 theron joined #gluster
12:52 deefy I get an error if I use nobootwait
12:52 deefy now I'm using a systemd .mount file and it works fine
12:53 deefy this was probably centos7-specific
12:53 l0uis yep glad its working
12:54 deefy but thanks for your help
13:00 cholcombe joined #gluster
13:00 bennyturns joined #gluster
13:01 DV_ joined #gluster
13:02 Manikandan joined #gluster
13:03 bennyturns joined #gluster
13:09 DV joined #gluster
13:09 mattmulhern joined #gluster
13:09 portante joined #gluster
13:09 portante has anybody played with logstash and gluster?
13:10 portante And have folks seen packetbeat for ElasticSearch? https://github.com/elastic/packetbeat
13:10 glusterbot Title: elastic/packetbeat · GitHub (at github.com)
13:10 portante I'd like to see if somebody here could help me get packetbeat to track gluster protocol events like packetbeat does for mysql, pgsql, mongodb, etc.
13:17 shyam joined #gluster
13:18 ramky joined #gluster
13:23 aaronott joined #gluster
13:31 DV joined #gluster
13:39 arcolife joined #gluster
13:41 muneerse2 joined #gluster
13:41 muneerse2 hi
13:41 glusterbot muneerse2: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:41 dgandhi joined #gluster
13:43 cholcombe joined #gluster
13:46 chirino joined #gluster
13:47 plarsen joined #gluster
13:52 DV_ joined #gluster
13:52 siel joined #gluster
14:03 _maserati joined #gluster
14:04 _maserati good morning fellas
14:08 victori joined #gluster
14:14 laxdog_ anoopcs: gluster volume create gv1 transport rdma gluster-01-ib0:/data/brick/gv1/ gluster-02-ib0:/data/brick/gv1/
14:15 laxdog_ and the same without the transport rdma for the tcp one. I also tried it in replica 2.
14:16 laxdog_ I didn't set any other volume options after the creation.
14:17 laxdog_ It was mounted with: mount -o transport=rdma -t glusterfs 10.1.10.1:/gv1 /mnt/gv1
14:22 saltsa joined #gluster
14:27 Lee1092 joined #gluster
14:30 R0ok_ joined #gluster
14:42 victori joined #gluster
14:49 kd1 joined #gluster
14:53 timotheus1 joined #gluster
14:59 shyam joined #gluster
15:21 calavera joined #gluster
15:33 _Bryan_ joined #gluster
15:34 bearydjay joined #gluster
15:35 victori joined #gluster
15:38 shaunm joined #gluster
15:42 shaunm joined #gluster
15:44 spcmastertim joined #gluster
15:49 s19n joined #gluster
15:52 harish joined #gluster
16:09 squizzi joined #gluster
16:14 skoduri joined #gluster
16:25 s19n left #gluster
16:29 hagarth joined #gluster
16:32 jcastill1 joined #gluster
16:33 kbyrne joined #gluster
16:37 jcastillo joined #gluster
16:37 ttkg joined #gluster
16:44 calavera joined #gluster
16:57 rafi joined #gluster
16:58 saltsa joined #gluster
16:59 victori joined #gluster
17:01 Rapture joined #gluster
17:02 jobewan joined #gluster
17:03 plarsen joined #gluster
17:15 jiffin joined #gluster
17:19 jcastill1 joined #gluster
17:24 jcastillo joined #gluster
17:27 JoeJulian portante: I have (I'm not currently but will again). I haven't even looked to packetbeat yet. Sounds cool.
17:27 papamoose1 joined #gluster
17:30 cholcombe joined #gluster
17:31 psycobass joined #gluster
17:32 papamoose1 joined #gluster
17:32 calisto joined #gluster
17:33 psycobass Are there any problems upgrading from 3.6.3 to 3.7.3 on CentOS 6?
17:34 JoeJulian psycobass: I haven't heard of any, but I have heard of issues with 3.7.3 client and 3.7.2 servers. I would test your upgrade process first.
17:35 JoeJulian psycobass: Also, there are upgrade instructions if you're using quota or geo-replication when upgrading from 3.6 to 3.7.
17:35 psycobass JoeJulian: Thanks.
17:39 portante JoeJulian: do you have pointers to any past work?
17:40 JoeJulian Unfortunately, no. Everything's at my old job. semiosis had something that I started with, iirc.. Let me see if I can find it. I think it was just a gist.
17:41 JoeJulian lol... Ok, so did I.
17:41 JoeJulian https://gist.github.com/joejulian/5982126
17:41 glusterbot Title: gist:5982126 · GitHub (at gist.github.com)
17:42 semiosis https://gist.github.com/semiosis/1499710
17:42 glusterbot Title: logstash parser for glusterfs logs · GitHub (at gist.github.com)
17:42 semiosis but i'm sure that won't work with current logstash :(
17:43 semiosis @logstash
17:43 glusterbot semiosis: semiosis' logstash parser for glusterfs logs: https://gist.github.com/1499710
17:43 semiosis ha!
17:43 JoeJulian heh, forgot about that one.
17:43 semiosis probably no good for current glusterfs either for that matter
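[Editor's note: gluster log lines follow the shape martinetd pasted earlier, e.g. "[2015-08-06 08:40:19.762452] E [socket.c:2332:socket_connect_finish] 0-glu0-client-0: ...". A logstash grok filter for that shape might look like the sketch below. This is not semiosis's actual parser from the gist above — the field names are invented for illustration.]

```
filter {
  grok {
    match => {
      # [timestamp] LEVEL [file.c:line:function] translator-name: message
      "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{WORD:level} \[%{DATA:source}\] %{DATA:xlator}: %{GREEDYDATA:msg}"
    }
  }
}
```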
17:44 portante semiosis, JoeJulian, thanks!
17:44 semiosis good luck
17:44 jwd joined #gluster
17:59 nsoffer joined #gluster
18:01 shyam joined #gluster
18:01 hagarth joined #gluster
18:03 nexus joined #gluster
18:18 captainflannel joined #gluster
18:20 techsenshi running gluster 3.5.5 and a distributed-replicate volume.  planning to add several bricks and i guess a rebalance...
18:20 techsenshi saw some posts that there were some issues back in 3.5.3, would it be safer to upgrade to 3.6 before rebalancing?
18:25 l0uis techsenshi: not sure if there are any issues, but i recently added 2 bricks to my dist-repl volume and everything went fine w/ rebalance
18:27 techsenshi l0uis, what gluster version were you running?
18:27 l0uis 3.5.5
18:27 l0uis on ubuntu 12.04
18:27 techsenshi my volume is about 50TB, trying to ensure nothing goes crazy.. when adding another 50TB
18:28 techsenshi are you running xfs for your bricks?
18:28 l0uis we took a smaller step going from 5x2 (27TB) to 6x2 (33TB)
18:29 l0uis we'll continue to add 6TB bricks as we need em
18:29 l0uis yes
18:29 l0uis xfs
18:30 papamoose1 joined #gluster
18:31 techsenshi okay running pretty similar setup just on centos7
18:32 papamoose1 joined #gluster
18:34 papamoose1 joined #gluster
18:36 l0uis techsenshi: i will add that we did the entire thing during a maintenance window with the volume disabled.
18:36 l0uis techsenshi: just to avoid any possibility of something whacky happening. probably not required, but we wanted to be safe :)
18:37 papamoose1 joined #gluster
18:38 techsenshi yea that makes sense, not sure how long it would take to rebalance our existing 5x2 50TB..
18:39 l0uis techsenshi: once we saw the rebalance was running ok and nothing looked out of whack cpu wise we let the clients remount. we don't have many clients, only about 20, so it wasn't a big deal to keep 'em out
18:40 l0uis techsenshi: but all in all the activity on the volume was low during the rebalance as we didn't run any jobs or do anything out of the ordinary
18:41 techsenshi very nice, the volume is not very active, its mainly used to share out via SMB/CIFS via node1
18:42 techsenshi i'll plan to have it down while I start the rebalance.  I read somewhere that the 3.6.x rebalance engine had a major change but I'm not sure it's worth upgrading to 3.6 just for that
18:43 l0uis techsenshi: i contemplated going to 3.6 prior to the upgrade but decided against it. 3.5 has been rock solid for us
18:44 techsenshi yea i have 3.6 in mirror for a smaller web farm, but this 50tb volume is pretty critical to us
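[Editor's note: the expansion techsenshi is planning — adding replica pairs to a distributed-replicated volume, then rebalancing — uses the stock gluster CLI, roughly as below. The volume name, server names, and brick paths here are placeholders, not taken from the log.]

```shell
# Add one new replica pair to an existing Nx2 distributed-replicated volume:
gluster volume add-brick myvol replica 2 \
    server7:/bricks/b1 server8:/bricks/b1

# Start the rebalance and monitor its progress:
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```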
18:54 victori joined #gluster
19:07 siel joined #gluster
19:28 shyam joined #gluster
19:37 finknottle joined #gluster
19:42 nzero joined #gluster
20:01 Trefex joined #gluster
20:10 DV joined #gluster
20:11 ackjewt joined #gluster
20:12 bennyturns joined #gluster
20:30 jwaibel joined #gluster
20:50 elico joined #gluster
20:54 Pupeno_ joined #gluster
21:05 jon__ joined #gluster
21:07 Telsin joined #gluster
21:17 badone_ joined #gluster
21:49 nzero joined #gluster
22:12 nsoffer joined #gluster
22:13 daMaestro joined #gluster
22:17 _Bryan_ joined #gluster
22:43 Ramereth joined #gluster
22:48 plarsen joined #gluster
22:48 shyam joined #gluster
22:48 theron_ joined #gluster
22:48 shyam left #gluster
22:51 aaronott joined #gluster
23:04 Ramereth joined #gluster
23:16 Ramereth joined #gluster
23:22 Ramereth joined #gluster
23:29 capri joined #gluster
23:29 dgandhi joined #gluster
23:31 techsenshi joined #gluster
