IRC log for #gluster, 2015-01-12


All times shown according to UTC.

Time Nick Message
00:34 RicardoSSP joined #gluster
00:34 RicardoSSP joined #gluster
00:41 SOLDIERz joined #gluster
01:11 Pupeno joined #gluster
01:11 Pupeno joined #gluster
01:16 lyang0 joined #gluster
01:32 diegows joined #gluster
01:35 eka joined #gluster
01:40 harish joined #gluster
01:58 wgao joined #gluster
02:00 bala joined #gluster
02:09 nangthang joined #gluster
02:20 bharata-rao joined #gluster
02:49 kdhananjay joined #gluster
03:09 smohan joined #gluster
03:10 nrcpts joined #gluster
03:10 MugginsM joined #gluster
03:18 eka joined #gluster
03:19 soumya_ joined #gluster
03:25 julim joined #gluster
03:33 smohan joined #gluster
03:34 hagarth joined #gluster
03:48 rejy joined #gluster
03:48 vimal joined #gluster
03:52 ubungu joined #gluster
03:56 kumar joined #gluster
03:58 itisravi joined #gluster
03:58 SOLDIERz joined #gluster
04:00 lalatenduM joined #gluster
04:02 haomaiwa_ joined #gluster
04:08 kshlm joined #gluster
04:08 nishanth joined #gluster
04:10 ppai joined #gluster
04:21 spandit joined #gluster
04:22 siel joined #gluster
04:24 zerick joined #gluster
04:26 shubhendu joined #gluster
04:27 nbalacha joined #gluster
04:31 jiffin joined #gluster
04:31 ndarshan joined #gluster
04:33 nishanth joined #gluster
04:34 anoopcs joined #gluster
04:39 atinmu joined #gluster
04:44 rafi1 joined #gluster
04:50 kanagaraj joined #gluster
04:54 suman_d joined #gluster
04:54 atalur joined #gluster
04:57 rjoseph joined #gluster
05:14 bharata-rao Which package will have glusterd in Fedora 21 ?
05:15 bharata-rao never mind, figured out
05:26 prasanth_ joined #gluster
05:27 gem joined #gluster
05:35 meghanam joined #gluster
05:40 dusmant joined #gluster
05:42 rastar_afk joined #gluster
05:44 hagarth joined #gluster
05:46 Manikandan joined #gluster
05:46 glusterbot News from newglusterbugs: [Bug 1163543] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1163543>
05:52 nbalacha joined #gluster
05:55 aravindavk joined #gluster
06:00 anil joined #gluster
06:12 Micromus joined #gluster
06:14 overclk joined #gluster
06:17 dusmant joined #gluster
06:19 ndarshan joined #gluster
06:20 shubhendu joined #gluster
06:23 soumya_ joined #gluster
06:35 nshaikh joined #gluster
06:36 elico joined #gluster
06:37 raghu joined #gluster
06:37 ubungu joined #gluster
06:44 fandi joined #gluster
06:47 fandi joined #gluster
06:54 shubhendu joined #gluster
06:54 ndarshan joined #gluster
06:55 ubungu joined #gluster
06:57 nshaikh joined #gluster
07:00 ctria joined #gluster
07:00 Philambdo joined #gluster
07:02 dusmant joined #gluster
07:03 Manikandan joined #gluster
07:06 saurabh joined #gluster
07:09 nangthang joined #gluster
07:11 raghu joined #gluster
07:12 maveric_amitc_ joined #gluster
07:12 jporterfield joined #gluster
07:15 mbukatov joined #gluster
07:15 bala joined #gluster
07:16 y4m4_ joined #gluster
07:17 rgustafs joined #gluster
07:18 jtux joined #gluster
07:24 fandi joined #gluster
07:24 bharata-rao joined #gluster
07:25 marcoceppi joined #gluster
07:25 marcoceppi joined #gluster
07:25 plarsen joined #gluster
07:32 jporterfield joined #gluster
07:38 kdhananjay joined #gluster
07:39 mbukatov joined #gluster
07:44 [Enrico] joined #gluster
07:46 rjoseph joined #gluster
07:50 chen joined #gluster
07:52 jporterfield joined #gluster
08:01 mbukatov joined #gluster
08:03 mbukatov joined #gluster
08:06 soumya_ joined #gluster
08:13 mbukatov joined #gluster
08:13 Philambdo joined #gluster
08:17 deniszh joined #gluster
08:23 mbukatov joined #gluster
08:26 RameshN joined #gluster
08:27 rafi1 joined #gluster
08:30 dusmant joined #gluster
08:30 vimal joined #gluster
08:38 harish joined #gluster
08:39 ProT-0-TypE joined #gluster
08:40 fsimonce joined #gluster
08:40 [Enrico] joined #gluster
08:43 anoopcs joined #gluster
08:44 Manikandan joined #gluster
08:46 jiffin1 joined #gluster
08:49 anil joined #gluster
08:57 prg3 joined #gluster
09:04 Slashman joined #gluster
09:04 meghanam joined #gluster
09:12 Norky joined #gluster
09:14 ppai joined #gluster
09:17 glusterbot News from resolvedglusterbugs: [Bug 1099955] self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1099955>
09:19 anil joined #gluster
09:19 RameshN hchiramm: ping
09:19 glusterbot RameshN: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:26 ubungu joined #gluster
09:33 soumya joined #gluster
09:34 harish joined #gluster
09:37 jamesc joined #gluster
09:37 misch joined #gluster
09:39 jamesc I have a gluster setup with nfs on top. Listing directories can take 0 to 2 seconds sometimes. What can I do to speed this up? The hardware is not a bottleneck.
09:44 mbukatov joined #gluster
09:44 vimal joined #gluster
09:45 prg3 joined #gluster
09:45 mbukatov joined #gluster
09:46 tryggvil joined #gluster
09:46 ubungu joined #gluster
09:47 glusterbot News from newglusterbugs: [Bug 1181048] lockless lookup cause disk to be kicked out <https://bugzilla.redhat.com/show_bug.cgi?id=1181048>
09:48 tryggvil joined #gluster
09:54 soumya joined #gluster
10:00 nangthang joined #gluster
10:01 anil joined #gluster
10:06 mbukatov joined #gluster
10:06 nshaikh joined #gluster
10:09 Micromus joined #gluster
10:16 shubhendu joined #gluster
10:17 ndarshan joined #gluster
10:18 deepakcs joined #gluster
10:22 bala joined #gluster
10:23 rgustafs_ joined #gluster
10:26 nangthang joined #gluster
10:27 nrcpts joined #gluster
10:32 jamesc I have a gluster setup with nfs on top. Listing directories can take 0 to 2 seconds sometimes. What can I do to speed this up? The hardware is not a bottleneck.
10:37 jiffin joined #gluster
10:38 DV joined #gluster
10:46 anil joined #gluster
10:56 bala joined #gluster
10:59 pcaruana joined #gluster
11:00 Norky glusterfs metadata performance is not brilliant, having many files in a directory will be slow to browse
11:01 Norky that's quite a variance though, is this with different directories with different file counts?
11:01 * Norky pokes jamesc
11:02 Norky I've seen the same myself, using NFS rather than the 'native' glusterfs-on-FUSE client improved matters for directory enumeration
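The two client paths Norky contrasts, as a minimal illustration; the hostname and volume name are placeholders, and the built-in Gluster NFS server of this era speaks NFSv3 only.

    mount -t glusterfs server1:/myvol /mnt/myvol          # native FUSE client
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol  # Gluster's built-in NFS server;
                                                          # kernel NFS attribute caching often
                                                          # speeds up directory listings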
11:07 eightyeight joined #gluster
11:07 purpleidea fubada: for review: https://github.com/purpleidea/puppet-gluster/tree/bug/folder-purges
11:08 purpleidea fubada: if it fixes the folder purge issues, i'll push it to git master :) ... i didn't test it, but i think it's straightforward.
11:08 purpleidea fubada: thanks again for testing and reporting issues!
11:12 stickyboy joined #gluster
11:16 ctria joined #gluster
11:21 badone joined #gluster
11:21 DV joined #gluster
11:26 mbukatov joined #gluster
11:28 ndarshan joined #gluster
11:29 dusmant joined #gluster
11:29 shubhendu joined #gluster
11:30 jamesc Norky: HI, thanks I am using nfs on gluster, currently turning off dir times on all nfs clients to see if this helps.
11:31 bala joined #gluster
11:31 gem joined #gluster
11:32 Norky how large are your directories?
11:32 Norky I'm no expert in Gluster btw, just a user same as you
11:33 anoopcs joined #gluster
11:36 T3 joined #gluster
11:39 Manikandan joined #gluster
11:41 nshaikh joined #gluster
11:44 ppai joined #gluster
11:53 gem joined #gluster
12:00 polychrise_ joined #gluster
12:00 Manikandan joined #gluster
12:01 Dw_Sn joined #gluster
12:01 Dw_Sn any idea about a good benchmark method for GlusterFS +oVirt ?
12:02 lpabon joined #gluster
12:02 polychrise joined #gluster
12:02 R0ok_ joined #gluster
12:14 hchiramm_ joined #gluster
12:18 ppai joined #gluster
12:22 itisravi joined #gluster
12:28 al joined #gluster
12:37 dusmant joined #gluster
12:39 diegows joined #gluster
12:41 SOLDIERz joined #gluster
12:47 ppai joined #gluster
12:48 glusterbot News from newglusterbugs: [Bug 1175617] Glusterd gets killed by oom-killer because of memory consumption <https://bugzilla.redhat.com/show_bug.cgi?id=1175617>
12:49 atalur joined #gluster
12:50 DV joined #gluster
12:50 plarsen joined #gluster
12:52 harish joined #gluster
12:56 m0zes joined #gluster
13:00 hagarth joined #gluster
13:02 RameshN joined #gluster
13:03 LebedevRI joined #gluster
13:05 calisto joined #gluster
13:08 DV joined #gluster
13:15 T0aD joined #gluster
13:18 ctria joined #gluster
13:18 necrogami joined #gluster
13:19 rgustafs joined #gluster
13:23 mbukatov joined #gluster
13:24 mbukatov joined #gluster
13:30 doubt joined #gluster
13:37 Gill joined #gluster
13:45 davidhadas joined #gluster
13:47 Gill Good morning everyone. I have a quick question. I’m trying to mount a gluster drive after my OpenVPN starts; I tried running the mount as the up script but it won't run. If I run the script manually it works right away, but from the up command it doesn't. Are there any special permissions/environment variables I need? Thanks!
13:49 partner Gill: is the script executable, and what exact command have you defined there? With full paths?
13:50 Gill yes its executable and has full paths
13:50 Gill i see the script starting but eventually times out
13:51 misch Gill: Perhaps the VPN tunnel is not up yet. Perhaps you have to add a test loop to check if you can ping the server.
13:51 Gill I get this message: I [glusterfsd.c:1493:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.2.7 but then nothing until it times out
13:51 Gill I put a sleep in the script but I can try the test loop
13:52 Gill this is my current script without the sleep: #!/bin/sh
13:52 Gill script: /bin/mount -t glusterfs nfs01.az.internal:/fs_storage  /mnt/nfs/storage/
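misch's test-loop idea, spelled out as a sketch: wait until glusterd answers before mounting. The 30-try limit and the use of nc against the default management port 24007 are assumptions, not something from the log.

    #!/bin/sh
    # poll the gluster management port, then mount
    i=0
    while [ $i -lt 30 ]; do
        nc -z nfs01.az.internal 24007 && break
        sleep 1
        i=$((i+1))
    done
    /bin/mount -t glusterfs nfs01.az.internal:/fs_storage /mnt/nfs/storage/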
13:52 dgandhi joined #gluster
13:53 davidhadas__ joined #gluster
13:54 partner have you set the security level to 2 on openvpn? i assume so as the script is called..
13:54 dgandhi joined #gluster
13:54 Gill yes sir
13:54 bene joined #gluster
13:55 dgandhi joined #gluster
13:56 dgandhi joined #gluster
13:57 rwheeler joined #gluster
13:57 partner Gill: have you tried --route-up ? i still think you're trying to mount before everything is ready (there's things happening between up and route-up)
13:58 Gill ill give it a try
13:58 dgandhi joined #gluster
13:58 partner that is run _after_ the routes are all up
13:58 partner so otherwise similar, just run a bit later
13:59 Gill perfect!
13:59 Gill thanks partner!! I spent way too much time on that :(
14:00 partner np, i know the feeling, glad you found your way here and asked :)
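The change partner describes comes down to one directive in the OpenVPN client config; a sketch with a hypothetical script path (script-security 2, which Gill confirmed above, is what allows external scripts at all):

    script-security 2
    # "up" fires before routes are installed; "route-up" fires after them,
    # so the gluster server should be reachable when the mount script runs
    route-up /etc/openvpn/mount-gluster.sh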
14:00 dgandhi joined #gluster
14:00 Gill I have 1 more question which i researched and saw it as a bug. I have 2 peers and 1 says connected, the other says disconnected, but everything seems to be working. Should i be worried?
14:02 dgandhi joined #gluster
14:02 partner so you have State: Peer in Cluster (Disconnected) ?
14:02 ctria joined #gluster
14:02 Gill yes
14:02 Gill but only on one node
14:02 partner if you have 2 servers you should only see 1 peer
14:03 theron joined #gluster
14:03 dgandhi joined #gluster
14:03 Gill yes I see one node on each server
14:03 Gill i can pastebin it may be easier
14:03 virusuy joined #gluster
14:04 partner probably. also, are all the servers running the management daemon glusterd (i.e. it hasn't died for any reason)? the bricks would stay online anyway even if the management daemon were down
14:04 dgandhi joined #gluster
14:04 Gill http://pastebin.com/jsxWnTgu
14:04 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:05 Gill this is the 2nd server http://pastebin.com/LpKSvhYj
14:05 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:05 Gill yes both should be running it i’ll double check
14:05 dgandhi joined #gluster
14:05 mbukatov joined #gluster
14:05 Gill yep glusterd is running on both systems
14:06 dgandhi joined #gluster
14:06 partner some basic questions again: iptables on either box? any /etc/hosts entries? any actions performed prior to getting into this state (such as updating glusterfs)?
14:06 DV joined #gluster
14:07 Gill this is a new install. I have iptables but they are set to allow all from the other system, and no /etc/hosts entries because I have a private DNS server running
14:07 partner also how does your output look on: gluster volume status
14:08 DV joined #gluster
14:08 dgandhi joined #gluster
14:08 Gill on gluster volume info it says started and number of bricks 2
14:08 partner so you are sure on either side there is no firewall blocking the traffic, no drops on logs?
14:08 partner "gluster volume status" gives a bit different view
14:08 dgandhi joined #gluster
14:09 Gill status gives me an error
14:09 partner alright, maybe a good thing
14:09 Gill gluster volume status
14:09 Gill unrecognized word: status (position 1)
14:09 dgandhi joined #gluster
14:09 Gill im sure nothing is being blocked
14:09 partner hmm, works for me. oh, i think you had 3.2 version?
14:10 partner how about gluster volume status your_volume detail?
14:10 dgandhi joined #gluster
14:10 Gill same error
14:10 partner ok, probably due to version
14:11 Gill i have profile
14:11 Gill im looking under help
14:11 Gill cant use status
14:11 Gill i can top the volume
14:11 kkeithley (,,ppa) | Gill
14:11 partner 3.2 is pretty ancient, i'm not an expert of that particular version
14:11 kkeithley @ppa
14:11 glusterbot kkeithley: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
14:11 kkeithley Gill: ^^^
14:11 Gill ah ill upgrade
14:11 partner might be even some known issue but just not aware of any of them
14:12 Gill didnt realize it was so old
14:12 partner which os?
14:12 Gill debian
14:12 Gill should i go with 3.6?
14:12 dgandhi joined #gluster
14:12 kkeithley @debian
14:12 glusterbot kkeithley: I do not know about 'debian', but I do know about these similar topics: 'files edited with vim become unreadable on other clients with debian 5.0.6'
14:13 kkeithley @rpm
14:13 glusterbot kkeithley: The official community glusterfs packges for RHEL/CentOS/SL (and Fedora 17 and earlier) are available here http://goo.gl/s077x
14:13 partner http://download.gluster.org/pub/gluster/glusterfs/3.4/ there's a link to start with, one level up you can find 3.5 and 3.6 and debian packages
14:13 kkeithley partner++
14:13 glusterbot kkeithley: partner's karma is now 3
14:13 Gill awesome! thanks partner and kkeithley
14:13 Gill partner ++
14:13 Gill partner++
14:13 kkeithley partner's karma should be much higher than it is. ;-)
14:13 glusterbot Gill: partner's karma is now 4
14:13 dgandhi joined #gluster
14:13 Gill yea it should!
14:14 Gill ill upgrade and report back!
14:14 Gill Thanks again!
14:14 Gill so so much!
14:14 partner just don't tell my employer i spent some time for community ;)
14:14 Gill what time ;)
14:14 Gill lol
14:14 dgandhi joined #gluster
14:15 julim joined #gluster
14:15 partner i am preparing for storage move that will happen tomorrow, some real test for the gluster, can't wait for any surprise issue that will arise :)
14:15 bennyturns joined #gluster
14:15 partner so stick around, i might need you ;D
14:15 partner all of you :o
14:16 Gill sounds good!
14:16 Gill good luck man!!
14:16 dgandhi joined #gluster
14:16 partner good luck and bottle of glue or how did it go?-) thanks
14:17 partner Gill: anyways, a slight update and get back to us on how it went. brave ones go with 3.6, i'm personally still sticking with the 3.4 series
14:17 Gill oh 3.6 is dev?
14:17 Gill ill go to 3.4 then as well :)
14:18 dgandhi joined #gluster
14:19 dblack joined #gluster
14:19 radez left #gluster
14:20 dgandhi joined #gluster
14:22 dgandhi joined #gluster
14:22 partner honestly i don't know which exactly version would be "best" at the moment, 3.4.6 and 3.5.3 are latest releases, i'm trying to balance somehow between the stability and functionality (and also supportability, hence 3.2 isn't exactly kosher selection anymore)
14:23 dgandhi joined #gluster
14:23 kkeithley 3.6 isn't dev, not really. If you want to give 3.6 a try I suggest waiting for 3.6.2, which should be out later this week or next.  Otherwise 3.5.3 and 3.4.6 are the stable/legacy releases
14:23 kkeithley Maybe use 3.5 unless you need the features that are in 3.6 if you want to be super conservative
14:24 dgandhi joined #gluster
14:24 deniszh joined #gluster
14:24 partner i agree, 3.5 is probably the way to go. and if you went with 3.4 there is a good opportunity to rehearse the upgrade, too :)
14:24 Gill haha ok i am upgrading to 3.4 now but when it's done ill go to 3.5
14:26 dgandhi joined #gluster
14:26 kkeithley 3.4 only has a couple more months of supported life. 3.7 will be out in a couple months, and then 3.4 will be EOL.
14:26 Gill wow ok so ill go with 3.5 for now and upgrade again down the line I guess
14:27 dgandhi joined #gluster
14:28 partner only (known) issue that i have with 3.4 currently is the memory leak on rebalance. and even that isn't so critical as it does not make much sense to rebalance volumes this size, though i would still like to balance the free disk space there..
14:28 [Enrico] joined #gluster
14:28 dgandhi joined #gluster
14:29 partner not sure if 3.4.6 addressed that or not
14:29 RameshN joined #gluster
14:29 tdasilva joined #gluster
14:30 dgandhi joined #gluster
14:30 Gill whoops that cleared my configs
14:31 Gill ill reprobe and redo the volume
14:32 dgandhi joined #gluster
14:32 Gill both sides say connected now :)
14:32 Gill should i just do a volume create or should i pass options to it for best performance?
14:33 dgandhi joined #gluster
14:34 kshlm joined #gluster
14:34 kshlm joined #gluster
14:35 dgandhi joined #gluster
14:35 partner you can tune various parameters later on
14:35 Gill ok cool thanks! Before I had defaults. Is there much to tune?
14:36 dgandhi joined #gluster
14:36 partner i haven't done much tuning as the bottleneck is elsewhere anyways
14:36 partner on my environment that is
14:36 Dw_Sn joined #gluster
14:37 partner better generate some baseline first and only then tune if needed, otherwise you can't see if it got better or worse
14:37 dgandhi joined #gluster
14:37 Gill ok cool
14:37 nshaikh left #gluster
14:38 dgandhi joined #gluster
14:39 partner so in that sense i'm not any expert on performance tunings, others might be able to provide you with such info
14:39 Gill Failed to perform brick order check. Do you want to continue creating the volume?  (y/n) should i continue?
14:40 Gill ah i had a typo
14:40 dgandhi joined #gluster
14:40 partner sounds like replica setup
14:41 Gill yes sir
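For reference, a replica-2 create of the kind being discussed; the second hostname and the brick paths are made up, and the brick-order warning Gill hit above appears when both bricks of a replica pair would land on the same server.

    gluster volume create fs_storage replica 2 \
        nfs01.az.internal:/export/brick1 nfs02.az.internal:/export/brick1
    gluster volume start fs_storage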
14:41 Gill cool i have gluster volume status now
14:42 Gill and both sides say connected :)
14:42 Gill partner++
14:42 glusterbot Gill: partner's karma is now 5
14:42 dgandhi joined #gluster
14:42 partner alright, one more happy "customer" :)
14:42 partner np, glad to be of assistance. i'll head offline now for some time, cya ->
14:43 Gill thanks so much! Have a great day
14:43 eka_ joined #gluster
14:43 dgandhi joined #gluster
14:45 maveric_amitc_ joined #gluster
14:45 squizzi joined #gluster
14:45 dgandhi joined #gluster
14:46 dgandhi joined #gluster
14:48 neofob joined #gluster
14:48 dgandhi joined #gluster
14:51 dgandhi joined #gluster
14:51 nbalacha joined #gluster
14:52 dgandhi joined #gluster
14:52 nbalacha joined #gluster
14:54 dgandhi joined #gluster
14:57 nbalacha joined #gluster
14:58 Gill Hey partner still around? I’m back to the client issue again its not connecting again
15:00 raghug joined #gluster
15:03 Gill is the mount command different between gluster version 3.2 and 3.5?
15:03 julim_ joined #gluster
15:04 mbukatov joined #gluster
15:07 ubungu joined #gluster
15:08 jbrooks joined #gluster
15:11 wushudoin joined #gluster
15:15 kkeithley Gill: no, same command:   mount -t glusterfs $host:$volname $mntpt
15:16 Gill ok cool its working on 1 client after the upgrade but on the other one, which has to connect to the OpenVPN first, its not working anymore
15:16 Gill kkeithley:  would you recommend using SSL even though its all on a VPN?
15:19 glusterbot News from newglusterbugs: [Bug 1181203] glusterd/libglusterfs: Various failures when multi threading epoll, due to racy state updates/maintenance <https://bugzilla.redhat.com/show_bug.cgi?id=1181203>
15:20 jiku joined #gluster
15:24 xaeth joined #gluster
15:24 Pupeno joined #gluster
15:24 Pupeno joined #gluster
15:25 xaeth hey all.  it was suggested that i hop in here to ask about 2 use cases i'm putting together a POC for with gluster
15:26 rafi1 joined #gluster
15:26 xaeth 1: data store for syslog server (so streaming writes) and 2: content storage for Katello/Pulp (which uses lots of hardlinks)
15:31 jobewan joined #gluster
15:32 Pupeno joined #gluster
15:32 Pupeno joined #gluster
15:33 neofob left #gluster
15:35 nishanth joined #gluster
15:44 elico joined #gluster
15:49 misch joined #gluster
15:50 neofob joined #gluster
15:54 kkeithley Gill: I know the trend is toward using secure comm everywhere, because you never know. On one hand I wouldn't recommend SSL unless you really need it.  On the other hand....
15:55 Gill kkeithley: makes sense. is there much overhead with enabling SSL?
15:56 Gill I have 2/3 clients connecting
15:56 Gill this is weird though same issue as before but i am using route-up now as partner suggested earlier and it was working before the upgrade. if i run the mount script manually it works also
15:57 kkeithley There is overhead, no two ways about it. I don't know if there's (too) much overhead. I suggest you try it with your workload and see if it's going to work for you.
15:57 Gill ok cool
15:57 kkeithley See if it's going to be acceptable for you.
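If Gill does try SSL later, the knobs for that era look roughly like the sketch below; the option names and certificate paths are taken from the upstream SSL docs rather than from this conversation, so verify them against the installed version.

    # each server and client needs /etc/ssl/glusterfs.key, glusterfs.pem and glusterfs.ca
    gluster volume set fs_storage client.ssl on
    gluster volume set fs_storage server.ssl on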
15:57 Gill thanks kkeithley I need to figure out this last mount first
15:57 Gill then ill give SSL a try
15:58 Gill kkeithley++
15:58 glusterbot Gill: kkeithley's karma is now 22
15:58 lmickh joined #gluster
16:00 sysadmin1138 joined #gluster
16:03 coredump joined #gluster
16:05 sysadmin1138 I have a tricky snapshot problem on 3.6.1. I'm trying to set them up, but the create only goes part way before erroring out with "Commit failed on localhost." A snapshot volume is created, but not cleaned up, and the other node in this pair never creates one.
16:05 bennyturns joined #gluster
16:13 Dw_Sn joined #gluster
16:18 Dw_Sn_ joined #gluster
16:21 partner Gill: back
16:22 Gill hey partner
16:22 Gill after the upgrade the original issue returned :(
16:22 roost joined #gluster
16:23 partner did you upgrade clients also?
16:23 sysadmin1138 Some extracted logs from my issue: http://fpaste.org/168705/79739142/
16:24 m0ellemeister joined #gluster
16:24 Gill yep i upgraded the clients too
16:24 Gill 2/3 clients work
16:25 Gill the 2 that are on the openvpn networks
16:25 Gill the remote one on openvpn isnt
16:25 Gill script works manually
16:25 Gill and its being called and started but times out
16:25 jonb1 joined #gluster
16:26 partner so the openvpn client started to fail again after the upgrade?
16:27 partner or the mount part of it, still with --route-up ?
16:29 Gill the mount is failing
16:29 Gill and its still on route-up
16:30 partner Gill: maybe, as was earlier discussed, you try to add some loop or add delay as i am suspecting its still due to trying to mount too quickly
16:30 Gill i put a sleep in ill put a longer one and give it a try
16:30 partner easiest would be to add some sleep n to the start of the script, long enough for starters to see if it makes any difference
16:30 Gill brb thanks partner!
16:35 Gill i feel like a cowboy when i say thanks partner lol
16:36 Gill ok partner I put a 60 seconds sleep in
16:42 wushudoin joined #gluster
16:43 y4m4_ joined #gluster
16:45 jamesc joined #gluster
16:47 misch joined #gluster
16:48 coredump joined #gluster
16:48 _Bryan_ joined #gluster
16:54 partner Gill: you went to take a nap aswell?-)
16:55 Gill sorry
16:55 Gill just had an emergency
16:55 Gill one of my data centers telco room lost power momentarily
16:55 Gill sorry about that
16:55 Gill things are back up now
16:56 Gill im going to start the test over now
16:57 Gill same issue
16:57 Gill ill pastebin
16:58 partner use fpaste.org to save some bot noise
16:59 partner and do check your logs under /var/log/glusterfs/ - there should be some named based on your mountpoint
16:59 Gill ok
17:00 partner hmm maybe not.. anyways check the available logs for any hints
17:00 partner nvm, should be there, confusing myself here :o
17:01 Gill https://fpaste.org/168718/21082039/
17:01 Gill yea those are my logs on the client side
17:02 bet_ joined #gluster
17:02 partner so connection timed out to the management daemon (port 24007)
17:02 partner it obviously cannot reach any servers and as it doesn't get any volume files it cannot proceed
17:03 partner the line 11 is something to concentrate on, can you even telnet to it?
17:03 Gill yep telnet works
17:03 partner uh, i need to background, need to give some time for the daughter right now, bbl
17:03 Gill and when I run my mount script or just mount -t it mounts perfectly
17:04 Gill ok cool thanks man
17:05 PeterA joined #gluster
17:07 _Bryan_ joined #gluster
17:13 coredump joined #gluster
17:17 saurabh joined #gluster
17:26 cfeller joined #gluster
17:29 jonb1 Hello all, I am looking into geo-replication and have a question. I've read through the disaster-recovery section of the documentation a few times now and something I'm not clear on is how clients fail-over to use the slave volume.
17:29 jonb1 Do all the clients need to go through an "un-mount master" & "re-mount slave" maneuver?
17:31 misch joined #gluster
17:32 shylesh__ joined #gluster
17:35 eka joined #gluster
17:39 JoeJulian There is no specific process for that, jonb1, and I haven't seen anyone publish their solution to that question either.
17:41 virusuy joined #gluster
17:44 jonb1 Ok, Thank you. Good to know I didn't just skip over some huge section of the documentation.
17:46 jonb1 Also JoeJulian, thank you for numerous posts and guides, they got me up and running with Gluster about a year ago and we've been thrilled with it ever since.
17:46 JoeJulian Thanks for the feedback. I'm happy to have been some help.
17:47 partner now go and write some more ;)
17:47 jonb1 If we do end up using geo-rep we will likely attempt to invent an auto-failover mechanism of some kind. If it works I will do what I can to get it posted back to the community.
17:47 JoeJulian Hehe, I'm working on some ceph vs gluster posts.
17:48 JoeJulian That would be great, jonb1. Let us know and we'll add it to gluster.org.
17:49 Dw_Sn joined #gluster
17:50 partner JoeJulian: is it in positive tone or greatest flame of all times towards ceph?-)
17:50 JoeJulian partner: neutral.
17:50 JoeJulian You know me... I'm all about choices and application to use cases.
17:50 partner i'm kind of neutral on that area aswell, we opted for ceph for some cloud stuff, though we do have some gluster with Other(tm) cloud too
17:52 JoeJulian (as long as one of those choices is not ubuntu) ;)
17:52 m0ellemeister joined #gluster
17:54 partner i can guarantee its neither is :)
17:54 bennyturns joined #gluster
17:57 SOLDIERz joined #gluster
17:58 jdarcy joined #gluster
18:01 Pupeno_ joined #gluster
18:04 partner Gill: anyways, i'm a bit puzzled now as it worked already and now does not. brick ports changed but the management daemon port did not. the paste indicates 24007 would not be responding.. so, shouldn't be any firewall issue unless the vpn client would happen to have some different ruleset, confirmed no drops after the upgrade?
18:04 partner and manually works.. hmm
18:05 partner maybe we downgrade back to 3.2 and be happy with a working system :D
18:06 partner i would add something more to debug into that script, pings and what not to see if the remote is responsive before the mount is attempted
18:08 partner JoeJulian: haha, just found my Ubuntu cup from dishwasher, got that when i visited canonical office at london :D
18:09 misch joined #gluster
18:09 Gill partner: if i downgrade i get the other issue
18:09 Gill maybe i should just mount it manually
18:09 Gill the only times i’ll really have to remount are on reboots
18:10 partner the downgrade was a bit of a joke.. this must be some very simple thing, it worked already..
18:10 JoeJulian Save me the scrollback... mount problem on bootup in ubuntu/
18:10 JoeJulian ?
18:10 partner then we upgraded servers and clients and you to my understanding created the volume from scratch
18:11 partner JoeJulian: mount -t gluster after openvpn has made up connection
18:11 JoeJulian Ah, that one.
18:11 partner that failed earlier, got it working but as the version was 3.2 it was upgraded to hmm 3.5 eventually? 3.4 at least
18:11 lalatenduM joined #gluster
18:11 JoeJulian 3.2 uses different ,,(ports) from 3.4+
18:11 glusterbot glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
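In firewall terms the list above maps to rules roughly like these (illustrative iptables only; narrow the source addresses and size the brick-port range to the number of bricks you actually run):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+ rdma)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT   # bricks (glusterfsd), 3.4 and later
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS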
18:12 JoeJulian Could it be firewall?
18:12 partner yeah, that's what i noted above but error is: connection to 10.113.114.99:24007 failed (Connection timed out) - mgmt port never changed
18:12 Gill im on 3.5 now
18:12 JoeJulian ok
18:12 Gill manual works the script runs in route-up fails
18:12 partner Gill: did you check the firewall after the upgrade?
18:12 JoeJulian selinux?
18:12 partner oh yeah, manual works, can't be fw then either
18:12 Gill yep i just checked it and its OK
18:12 Gill im running debian
18:13 Gill i can try to kill selinux
18:13 JoeJulian well there's your problem... ;)
18:13 Gill should i just recreate the servers on ubuntu 14.04?
18:13 JoeJulian debian doesn't use selinux does it?
18:13 Gill i can do that if its simpler
18:13 Gill not sure
18:13 JoeJulian I'm just kidding.
18:13 JoeJulian I'm not a deb fan.
18:13 JoeJulian that includes ubuntu
18:13 Gill it does
18:14 partner selinux is not around by default
18:14 Gill oh
18:14 semiosis :O
18:14 JoeJulian Just to be sure, "setenforce 0"
18:14 JoeJulian And nobody tell mhayden I said that.
18:14 Gill yea its not enabled
18:16 partner i'm "happily" in between there aswell with centos boxes with selinux and debians without it, not too hard to guess on which side life is way easier..
18:17 partner if i had the time i would set up a test env for this as no mystery can be left unsolved
18:18 JoeJulian If I were figuring it out, I would add stuff to the route-up script to test pings, maybe a nmap, etc to see what the network state is.
18:18 Gill JoeJulian: I added a sleep of 60 seconds
18:18 JoeJulian I think Sony would agree that setenforce 0 is way easier.
18:18 Gill i can do manual pings while its running
18:19 Gill haha
18:20 Gill yea no pings are coming back
18:20 Gill and the script is running
18:20 partner 20:06 < partner> i would add something more to debug into that script, pings and what not to see if the remote is responsive before the mount is attempted
18:21 Gill ok ill start adding debug
18:21 Gill waiting for it to timeout now to see if the pings start coming back
18:21 partner maybe some telnet / nc to 24007 port, anything that would tell something is working or not
18:21 JoeJulian Maybe the route isn't established at that point? Add "ip route > /tmp/routestate" to the script and see if that's it.
18:21 partner --route-up is _after_ routes has been set up, that fixed the issue earlier (with --up failing)
18:21 vimal joined #gluster
18:22 partner but agree there might be still some delays and what not
18:22 Gill its weird that it worked pre-upgrade
18:22 partner it is :(
18:22 partner case was closed already but then there was this peer thingy
18:22 Gill ok timeout happened and pings started a few seconds after
18:23 partner hmm does it prevent openvpn from finalizing things up..
18:23 Gill going for the same test now with no sleep
18:23 Gill see if the results are the same
18:23 partner maybe if you background the mount command with a bit of a sleep.. hacky but i cannot test the exact behaviour right now
18:25 Gill timed out then a few seonds later pings started
18:26 Gill so looks like th eopenvpn doesnt come up fully when the mount script is there
18:26 Gill gona try again without the mount script
18:26 Gill yep right away pings start
18:27 Gill to make it background I add bg like this right? /bin/mount -t glusterfs nfs01.az.internal:/fs_storage bg /mnt/nfs/storage/
18:30 partner try just adding & at the end of the line, lets make it more funky once we've resolved the root cause
18:30 Gill ok cool
18:31 Gill that worked!
18:31 Gill right away
18:31 JoeJulian reasonably nice hack.
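The hack, roughly as it ends up; a sketch only, and the short sleep inside the backgrounded subshell is an assumption rather than something settled here.

    #!/bin/sh
    # route-up helper: background the mount so openvpn can finish coming up
    ( sleep 5; /bin/mount -t glusterfs nfs01.az.internal:/fs_storage /mnt/nfs/storage/ ) &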
18:32 Gill awesome! so can I leave it like that?
18:32 partner the purpose of route-up is (originally) to be able to add any additional routes and what not which you cannot do yet at the --up stage
18:32 partner for the record, what openvpn version are we talking about?
18:33 Gill openvpn 2.2.1
18:36 partner there are also couple of delay options you might want to try such as --route-delay which delays the --route-up command. we have no visibility to your vpn so its lots of guessing here
18:36 Gill they are up to 2.3 already
18:36 Gill debian has such old packages!!
18:37 partner old but stable ;)
18:37 Gill true
18:37 Gill maybe ill upgrade openVPN also
18:38 Gill think I can leave it with this hack?
18:38 partner i would reboot / reinitiate the vpn multiple times to see if it works for real.
18:39 partner also you might want to consider adding umount to the --down part
18:39 semiosis pre-route-down
18:39 partner --down-pre
18:39 Gill ok
18:40 partner as weird things will happen otherwise as your mount is "gone"
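Putting that suggestion into config form, with a hypothetical script path:

    down-pre                             # run the down script before the tunnel is torn down
    down /etc/openvpn/umount-gluster.sh  # e.g. a one-liner: umount /mnt/nfs/storage/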
18:44 Gill cool just added them and did a reboot
18:45 partner not the most elegant solution but if your usage isn't that huge its probably enough, you even were ready to opt for manual mounts so if its automatic and works i guess its perfect :)
18:46 Gill partner: exactly
18:47 Gill just wish it would work a bit quicker
18:47 Gill gluster seems a little slow but its probably tuning right?
18:47 T3 joined #gluster
18:52 Gill awesome seems to be working
18:52 Gill thanks so much guys!!!!
18:54 JoeJulian partner++
18:54 glusterbot JoeJulian: partner's karma is now 6
18:54 Gill partner+++++++
18:54 glusterbot Gill: partner+++++'s karma is now 1
18:54 Gill partner++
18:54 glusterbot Gill: partner's karma is now 7
18:54 partner every happy glusterfs user more is good for the project
18:59 tryggvil joined #gluster
19:19 tom[] i have a script called checknode that the load balancer probes frequently. i'm thinking about adding to the script some monitoring of the status of the node's glusterfs service. the output of the script to the LB is either: "node OK" or "do not use this node"
19:19 tom[] what should the script look for?
19:20 deniszh joined #gluster
19:23 tom[] it should return "node OK" if the cluster is all in sync, or if other nodes are missing, or if the cluster is out of sync but the node is deemed authoritative, but not if the node's state needs to be synced
19:24 tom[] i'm using general terms, not gluster-specific because i don't know them
19:24 partner i'm unsure what is the gluster part here. say if you have 10 nodes that do something and aside that provide some gluster storage
19:25 partner so if one of the gluster processes is down somewhere
19:25 partner alert?
19:25 partner are you asking how to monitor glusterfs healthy?
19:29 tom[] partner: that is what i am asking
19:29 partner last time i checked i couldn't find a single perfect plugin for gluster (was searching for nagios)
19:30 partner nothing even decent to my taste.. things might have changed since but i haven't looked, as i specified what i want and my colleague wrote the plugin, as he is way more talented with programming than i am
19:30 tom[] i want the load balancer to remove the node from rotation if the node's gluster thinks it is "out of whack", to use the technical term
19:31 partner do you have some glusterfs replica there or just distributed volume?
19:32 partner i mean, if you just go and remove random glusterfs nodes there will be plenty of files missing and i don't want to even think where that ends up eventually
19:32 tom[] replica
19:32 tom[] it is a small HA cluster
19:33 partner somehow i feel your HA is trying to fight against the gluster "ha"
19:33 partner but. you could query the gluster for the bricks being online for example
19:33 tom[] for example, it turned out that checking if my galera node is ok was a matter of testing if the value of wsrep_local_state is either 2 or 4. http://www.percona.com/doc/percona-xtradb-cluster/5.5/wsrep-status-index.html
19:35 partner perhaps 1) get all the volumes 2) check the bricks of each volume and see if their status is Y for online
19:36 partner could also check for peer status, though the HA is attempting to do that already; the view from within the gluster daemon is just different
19:36 partner this is pretty much a simplified version of what our check for gluster does: it will tell if all the bricks of the volume are available or if all the peers are connected
19:36 sadbox joined #gluster
19:37 partner be it distributed or replicated or so forth
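A minimal sketch of the two-step check partner outlines, phrased in tom[]'s "node OK" / "do not use this node" terms; the parsing of the CLI output is approximate and version dependent, so treat it as a starting point rather than a finished plugin.

    #!/bin/sh
    # 1) get all the volumes  2) flag any brick the status output does not report online
    for vol in $(gluster volume list); do
        if gluster volume status "$vol" detail | grep '^Online' | grep -qv ': Y'; then
            echo "do not use this node"
            exit 1
        fi
    done
    echo "node OK"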
19:38 tom[] for example. if a node fails and is repaired/replaced, it comes back online some time later and will spend a while in a condition in which gluster is working properly but it is still catching up with recovering state from the other nodes. until it is ready and "in sync" it should not be used by the LB
19:39 JoeJulian remove the word "node" from your thought process and use "client" and "server".
19:39 partner i'm sorry but i don't understand this picture, the gluster handles all the selfhealing by itself, it probably does more harm not to let clients access the files
19:40 partner though split-brain would be a bit of a different situation
19:40 JoeJulian A "client" which is what your LB is working against, is never "out of whack".
19:40 tom[] JoeJulian: ok
19:40 tom[] so if the client's service is available, the node can be used?
19:41 tom[] when i say client's service, i guess i mean the mounted fs
19:41 JoeJulian The client connects to all servers that are part of the volume. If one server is stale, the client will use the data from the other replica.
19:41 tom[] JoeJulian: what's a sensible way to probe if the client "feels ok"?
19:42 JoeJulian The server's storage would be the only thing that gets stale, and nothing but glusterfsd ever touches that.
19:42 JoeJulian gluster volume heal $volname info split-brain
19:42 JoeJulian That's the only thing that, if it has new entries, would designate a specific problem.
19:44 JoeJulian To avoid problems with split-brain, see ,,(quorum)
19:44 glusterbot JoeJulian: Error: No factoid matches that key.
19:44 JoeJulian @meh
19:44 glusterbot JoeJulian: I'm not happy about it either
19:44 JoeJulian @lucky glusterfs quorum
19:44 glusterbot JoeJulian: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Quorum.html
19:44 JoeJulian close enough.
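For the per-volume knobs that page covers, the commands look like this (the $volname placeholder follows JoeJulian's notation above; the cluster-wide server-quorum ratio is set separately, see the doc):

    gluster volume set $volname cluster.quorum-type auto            # client-side quorum
    gluster volume set $volname cluster.server-quorum-type server   # server-side quorum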
19:47 tom[] is there documentation for gluster volume heal <VOL> info split-brain ?
19:47 tom[] i searched and the best i found is this crummy blog article ;) http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
19:48 partner that domain holds the best docs :)
19:49 johnnytran joined #gluster
19:52 tom[] is there a way to get/display volume options, to complement `gluster volume set <VOLNAME> <KEY> <VALUE>` ?
19:53 partner gluster volume set help
20:03 johnnytran joined #gluster
20:05 ProT-0-TypE joined #gluster
20:06 JoeJulian ^ shows the defaults. If they're in "gluster volume info $vol" then they've been changed from default.
20:07 partner yeah but what is nice is that there are descriptions too, not just blunt list of possible options
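So the two complementary views, for reference:

    gluster volume set help        # every tunable with its default value and a description
    gluster volume info $volname   # options changed from default appear under "Options Reconfigured:"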
20:13 JoeJulian Gill: http://www.gluster.org/consultants/
20:14 Gill thanks JoeJulian
20:14 jporterfield joined #gluster
20:17 johnnytran joined #gluster
20:29 sadbox joined #gluster
20:29 JordanHackworth joined #gluster
20:30 a1 joined #gluster
20:30 masterzen joined #gluster
20:30 owlbot joined #gluster
20:33 Bosse joined #gluster
20:44 JordanHackworth joined #gluster
20:56 lpabon joined #gluster
21:00 calum_ joined #gluster
21:12 crashmag_ joined #gluster
21:43 sysadmin1138 left #gluster
21:56 Pupeno joined #gluster
22:08 badone joined #gluster
22:12 LebedevRI joined #gluster
22:30 badone joined #gluster
22:30 smohan joined #gluster
22:53 _Bryan_ joined #gluster
23:02 tryggvil joined #gluster
23:06 Pupeno_ joined #gluster
23:16 jaank joined #gluster
23:23 tryggvil_ joined #gluster
23:30 edong23 joined #gluster
23:39 lmickh joined #gluster
