
IRC log for #gluster, 2014-02-06


All times shown according to UTC.

Time Nick Message
00:06 mattf joined #gluster
00:23 inodb_ joined #gluster
00:43 rastar joined #gluster
01:12 tdasilva left #gluster
01:13 theron joined #gluster
01:16 diegows joined #gluster
01:30 vpshastry joined #gluster
01:36 harish joined #gluster
01:37 theron joined #gluster
01:40 tokik joined #gluster
01:45 jflilley1 joined #gluster
01:49 vpshastry joined #gluster
02:10 gdubreui joined #gluster
02:11 bala joined #gluster
02:15 hflai joined #gluster
02:18 harish joined #gluster
02:22 jag3773 joined #gluster
02:26 Fenuks|2 joined #gluster
02:31 nueces joined #gluster
02:42 zapotah joined #gluster
02:55 P0w3r3d joined #gluster
02:58 davinder joined #gluster
02:59 chirino joined #gluster
03:05 gdubreui joined #gluster
03:20 shubhendu joined #gluster
03:29 vpshastry joined #gluster
03:29 jag3773 joined #gluster
03:36 itisravi joined #gluster
04:06 joaquim__ joined #gluster
04:13 saurabh joined #gluster
04:19 bharata-rao joined #gluster
04:21 vpshastry joined #gluster
04:25 kdhananjay joined #gluster
04:31 Ark_explorys joined #gluster
04:33 cfeller has anyone seen the issue where your system takes an extremely long time to boot when _netdev is used in fstab?
04:34 cfeller I just upgraded two of my boxes to Fedora 20, from Fedora 18 (with an intermediate upgrade to Fedora 19 for good measure), and my machines are suddenly taking 15+ minutes to boot.
04:34 Guest64113 joined #gluster
04:35 cfeller of course, _netdev is needed for Gluster volumes to make sure that the system doesn't try to mount them before the network is available.
04:35 cfeller removing that option allows the system to boot quickly, but my shares don't get mounted automatically.
04:36 cfeller I see all kinds of stuff like this in the logs:
04:36 cfeller [  304.896023] systemd[1]: Job sockets.target/start deleted to break ordering cycle starting with basic.target/start
04:37 cfeller and this type of "ordering cycle" breaking and changing repeats for 15 minutes until systemd gets things the way it likes it, and then it boots.
04:38 cfeller Just wondering if anyone else had seen this, as I'm surely not the only one running Fedora 20 with their volumes in /etc/fstab.
04:39 cfeller and I can easily reproduce this.
04:39 cfeller and it wasn't an issue in earlier Fedora releases.
04:45 cfeller oh, interesting... it appears to only be an issue when there is also a bind mount from the Gluster volume in the picture....
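
A minimal sketch of the fstab layout under discussion: a native gluster mount plus a bind mount taken from it. The host, volume, and path names are invented for illustration.

    # /etc/fstab (sketch)
    server1:/myvol    /mnt/myvol  glusterfs  defaults,_netdev  0 0
    /mnt/myvol/data   /srv/data   none       bind              0 0

The plain bind mount carries no network-ordering hint of its own, which is consistent with the cycle only appearing when a bind mount is in the picture.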
04:51 shubhendu joined #gluster
04:54 RameshN joined #gluster
04:54 ndarshan joined #gluster
04:55 flrichar joined #gluster
05:01 shylesh joined #gluster
05:02 aravindavk joined #gluster
05:10 bala joined #gluster
05:14 ngoswami joined #gluster
05:18 prasanth joined #gluster
05:19 nshaikh joined #gluster
05:19 cfeller if anyone is curious, here you go: https://bugzilla.redhat.com/show_bug.cgi?id=1062056
05:19 glusterbot Bug 1062056: unspecified, unspecified, ---, systemd-maint, NEW , System takes an extremely long time to boot when _netdev is listed in mntops in fstab
05:22 ndarshan joined #gluster
05:23 sarkis joined #gluster
05:29 hagarth joined #gluster
05:45 rjoseph joined #gluster
05:46 lalatenduM joined #gluster
05:52 davinder joined #gluster
05:55 mohankumar joined #gluster
06:04 shubhendu joined #gluster
06:06 tg2 joined #gluster
06:07 bala joined #gluster
06:07 davinder2 joined #gluster
06:12 Guest90431 joined #gluster
06:12 rastar joined #gluster
06:16 Philambdo joined #gluster
06:16 kanagaraj joined #gluster
06:16 JoeJulian cfeller: Thanks. I'm running F20 on all my desktops and never encountered that since I don't use bind mounts. What if you add _netdev to the bind mount?
06:17 inodb joined #gluster
06:18 vimal joined #gluster
06:19 ppai joined #gluster
06:19 tg2 joined #gluster
06:20 CheRi joined #gluster
06:21 Humble joined #gluster
06:30 surabhi joined #gluster
06:30 raghu joined #gluster
06:35 ktosiek joined #gluster
06:55 ndarshan joined #gluster
06:56 crazifyngers joined #gluster
07:04 stigchristian joined #gluster
07:04 tg2 joined #gluster
07:05 jurrien_ joined #gluster
07:15 ron-slc_ joined #gluster
07:15 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <https://bugzilla.redhat.com/show_bug.cgi?id=969461>
07:21 rossi joined #gluster
07:24 ndarshan joined #gluster
07:24 ghghz joined #gluster
07:24 ghghz Hello
07:24 glusterbot ghghz: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:25 ghghz Have a trouble starting gluster in one node..
07:25 ghghz [2014-02-06 07:17:58.656462] I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.2 (/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid)
07:25 ghghz ./var/run/glusterd.pid exists
07:25 ghghz but no processes forked.
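
One hedged way to see why glusterd starts but never forks is to run it in the foreground; --debug implies no-daemon, DEBUG log level, and logging to the console. The log path below is the usual 3.4-era default and may differ per distribution:

    glusterd --debug
    # or inspect the daemon log:
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log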
07:26 jtux joined #gluster
07:33 keytab joined #gluster
07:42 ekuric joined #gluster
07:42 ctria joined #gluster
07:48 ghghz left #gluster
08:04 eseyman joined #gluster
08:27 hybrid512 joined #gluster
08:35 dusmantkp_ joined #gluster
08:39 spandit joined #gluster
08:45 bharata-rao joined #gluster
08:45 glusterbot New news from newglusterbugs: [Bug 1062118] GlusterFS 3.5 <https://bugzilla.redhat.com/show_bug.cgi?id=1062118>
08:51 askb_ joined #gluster
09:09 pk1 joined #gluster
09:11 rwheeler joined #gluster
09:13 andreask joined #gluster
09:25 tokik joined #gluster
09:30 tokik joined #gluster
09:33 harish joined #gluster
09:36 Humble joined #gluster
09:40 aurigus joined #gluster
09:46 glusterbot New news from newglusterbugs: [Bug 1062118] Not able to install GlusterFS 3.5 in Ubuntu <https://bugzilla.redhat.com/show_bug.cgi?id=1062118>
09:54 olisch joined #gluster
09:59 matclayton joined #gluster
10:00 social kkeithley: Is there a way to poke the Gluster Build System to retry the regression tests on a patch? http://review.gluster.org/#/c/6872/ and http://review.gluster.org/#/c/6873/
10:00 glusterbot Title: Gerrit Code Review (at review.gluster.org)
10:01 RicardoSSP joined #gluster
10:03 tokik joined #gluster
10:12 inodb joined #gluster
10:13 calum_ joined #gluster
10:14 Slash joined #gluster
10:16 elyograg joined #gluster
10:16 elyograg bug 873763
10:16 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=873763 is not accessible.
10:16 elyograg i guess glusterbot doesn't have more permission than I do.
10:17 elyograg so I can't tell whether bug 921084 is fixed.
10:17 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=921084 unspecified, high, ---, pkarampu, CLOSED DUPLICATE, gluster nfs process gets OOM killed
10:27 elyograg I found the bug number in release notes for Red Hat Storage, and there's a workaround listed there - enable the NFS write behind cache.
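
The workaround referred to is a volume option. A sketch of setting and verifying it, with myvol standing in for the real volume name:

    gluster volume set myvol performance.nfs.write-behind on
    gluster volume info myvol | grep write-behind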
10:28 ctria joined #gluster
10:37 pk1 elyograg: ping
10:38 pk1 elyograg: I was working on that bug 921084....
10:38 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=921084 unspecified, high, ---, pkarampu, CLOSED DUPLICATE, gluster nfs process gets OOM killed
10:38 vpshastry joined #gluster
10:39 pk1 elyograg: what happened?
10:40 shyam joined #gluster
10:42 shubhendu joined #gluster
10:44 psharma joined #gluster
10:44 NuxRo joined #gluster
10:49 blook joined #gluster
10:49 ppai joined #gluster
10:50 spandit joined #gluster
10:50 rwheeler joined #gluster
10:51 blook hi gluster folks, today i had to disable one of six bricks to do a fsck on it. everything went fine until i started the gluster-server + bricks on this node again. all of my nfs-clients began to hang
10:52 blook touch and rm were working in a short test case, but perhaps there was just one replica pair involved, which didn't respond anymore
10:52 blook any ideas about it?
10:52 ndarshan joined #gluster
10:54 blook restarting glusterd processes on every node could have been the solution, but i don't know because i disabled the brick after a short time again by stopping the process
10:55 lanning joined #gluster
10:58 saltsa joined #gluster
11:02 Nuxr0 joined #gluster
11:03 jmarley joined #gluster
11:04 klaxa|work joined #gluster
11:16 ctria joined #gluster
11:19 kdhananjay joined #gluster
11:19 shubhendu joined #gluster
11:20 ndarshan joined #gluster
11:23 DV__ joined #gluster
11:24 jtux joined #gluster
11:28 kanagaraj joined #gluster
11:28 dusmantkp_ joined #gluster
11:28 bala joined #gluster
11:29 RameshN joined #gluster
11:49 pk1 left #gluster
11:54 kkeithley1 joined #gluster
11:56 RameshN joined #gluster
12:02 MarkR_ joined #gluster
12:05 kdhananjay joined #gluster
12:05 itisravi joined #gluster
12:16 glusterbot New news from resolvedglusterbugs: [Bug 1062118] Not able to install GlusterFS 3.5 in Ubuntu <https://bugzilla.redhat.com/show_bug.cgi?id=1062118>
12:17 MarkR_ I use https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4?field.series_filter=precise to get my GlusterFS packages for Ubuntu 12.04. But last night, 3.4.2-ubuntu1~quantal6 (listed as a precise package!) got installed. Should I revert to 3.4.2-ubuntu1~precise6?
12:17 glusterbot Title: ubuntu-glusterfs-3.4 : semiosis (at launchpad.net)
12:18 CheRi joined #gluster
12:22 klaxa|work left #gluster
12:24 lalatenduM MarkR_, If you are using 12.04 then u should use packages for 12.04
12:25 MarkR_ Hi lalatenduM, that's what I did. To my surprise I got a quantal (12.10) labeled package. Something went wrong in the PPA repository...
12:26 inodb joined #gluster
12:26 eseyman joined #gluster
12:27 marcoceppi joined #gluster
12:29 harish joined #gluster
12:44 ktosiek Hi! I'm just starting to play with glusterfs, do I need a virtual IP for a HA cluster, or can I just pass multiple IPs to client and let it connect to whichever?
12:50 samppah ktosiek: are you going to use native glusterfs or nfs?
12:51 ktosiek native, I only have Linux clients and full control over them
12:53 samppah ok, clients connect to all servers defined in gluster volume configuration
12:53 samppah so basically you only need a virtual IP to fetch volume information when mounting
12:53 samppah or you can use rr DNS which points to different servers
12:54 samppah also there is backupvolfile-server mount option
12:55 ktosiek cool, backupvolfile-server looks exactly like what I want
12:55 hagarth joined #gluster
12:56 rfortier1 joined #gluster
12:56 ktosiek samppah: thanks :-D
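
A sketch of the backupvolfile-server option as an fstab entry for the native client; the server and volume names are invented, and in 3.4 the option takes a single fallback server:

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0

The fallback is only consulted while fetching the volfile at mount time; once mounted, the client talks to all bricks directly.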
12:56 18WAFPW3T joined #gluster
12:59 rfortier joined #gluster
13:00 MarkR_ Okay, it appears that Semiosis made a mistake when building the latest Quantal package (http://tinyurl.com/l94yc9k). Accidentally, he labeled it as a Precise package (http://tinyurl.com/npppy63). Hopefully he'll fix it soon (and I must not install this package automatically ;) )
13:00 glusterbot Title: Packages in “ubuntu-glusterfs-3.4” : ubuntu-glusterfs-3.4 : semiosis (at tinyurl.com)
13:05 rfortier1 joined #gluster
13:07 P0w3r3d joined #gluster
13:10 matclayton joined #gluster
13:13 kkeithley_ @later tell semiosis to please check Quantal/Precise packaging
13:13 glusterbot kkeithley_: The operation succeeded.
13:29 social hi, is gluster rebalance fix-layout still required when I add bricks?
13:29 social or simple rebalance?
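
For reference, a sketch of the two variants (volume and brick names invented): fix-layout only rewrites directory layouts so new files can land on the added bricks, while a plain rebalance also migrates existing data.

    gluster volume add-brick myvol server3:/export/brick1
    gluster volume rebalance myvol fix-layout start   # layout only
    gluster volume rebalance myvol start              # layout + data migration
    gluster volume rebalance myvol status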
13:30 MarkR_ Hm, Launchpad removed glusterfs - 3.4.2-ubuntu1~precise6 as it was superseded by glusterfs - 3.4.2-ubuntu1~quantal6 :(
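
If the goal is just to stop apt from pulling in the mislabeled build automatically, one hedged option is the classic dpkg hold; the package names assume the usual PPA split:

    echo "glusterfs-server hold" | dpkg --set-selections
    echo "glusterfs-client hold" | dpkg --set-selections
    echo "glusterfs-common hold" | dpkg --set-selections
    # revert later with e.g.: echo "glusterfs-server install" | dpkg --set-selections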
13:30 DV joined #gluster
13:31 vimal joined #gluster
13:35 edward3 joined #gluster
13:37 edward3 joined #gluster
13:42 Ark_explorys joined #gluster
13:49 eryc joined #gluster
13:49 eryc joined #gluster
13:51 sroy joined #gluster
13:52 Ark_explorys joined #gluster
13:53 kanagaraj joined #gluster
13:53 edward2 joined #gluster
13:54 edward2 joined #gluster
13:54 oxae Hi, if I remove-brick with an invalid brick name, the gluster command silently fails
13:55 dusmantkp_ joined #gluster
13:55 oxae it shows up in the log file though
13:56 RameshN joined #gluster
13:56 awheeler_ joined #gluster
13:57 bennyturns joined #gluster
13:59 oxae glusterfs is version 3.4.2
14:00 edward2 joined #gluster
14:02 japuzzo joined #gluster
14:04 plarsen joined #gluster
14:04 plarsen joined #gluster
14:05 diegows joined #gluster
14:07 sarkis joined #gluster
14:10 theron joined #gluster
14:11 B21956 joined #gluster
14:12 DV joined #gluster
14:12 jmarley joined #gluster
14:17 ctria joined #gluster
14:18 ppai joined #gluster
14:18 cfeller JoeJulian: if I add _netdev to the bind mount, then the bind mount doesn't get mounted at boot, but the main gluster volume does (and the machine boots in a normal timeframe).
14:21 gmcwhistler joined #gluster
14:26 Slash joined #gluster
14:28 gmcwhistler joined #gluster
14:32 itisravi joined #gluster
14:33 benperiton joined #gluster
14:35 vpshastry left #gluster
14:41 social kkeithley_: JoeJulian: I have gluster in a broken state: gluster volume replace-brick was the last operation running before the node where the replace-brick took place was restarted. It seems to be in quite a broken state now, any ideas/links on tickets?
14:41 jmarley joined #gluster
14:42 social replace-brick: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) :/
14:44 social ok google just answered me that this is a known issue and I did wrong by running replace-brick, but is there a way to recover the cluster? it seems to be in a global lock now
14:44 ctria joined #gluster
14:47 social ok I see I have a rb_dst_brick.vol file in my volfile directory, but I don't think I can simply remove this
14:47 glusterbot New news from newglusterbugs: [Bug 1062255] remove-brick with invalid brick name, gluster command silently fails <https://bugzilla.redhat.com/show_bug.cgi?id=1062255>
14:51 NuxRo joined #gluster
14:51 NuxRo joined #gluster
14:53 failshell joined #gluster
14:59 itisravi joined #gluster
15:00 theron joined #gluster
15:01 theron joined #gluster
15:01 Slash__ joined #gluster
15:03 dneary joined #gluster
15:05 social kkeithley_: JoeJulian: ugly: stop the daemon on the affected node and remove rb_dst_brick.vol from the volfile directory :/
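
Spelling out that workaround as a sketch. The path is an assumption about where glusterd keeps volume files on a typical 3.4 install, so locate the file before removing anything:

    service glusterd stop        # or systemctl stop glusterd, as appropriate
    find /var/lib/glusterd -name 'rb_dst_brick.vol'
    # assumed location; keep a backup rather than deleting outright:
    mv /var/lib/glusterd/vols/myvol/rb_dst_brick.vol /root/rb_dst_brick.vol.bak
    service glusterd start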
15:06 dbruhn joined #gluster
15:14 bit4man joined #gluster
15:16 NuxRo joined #gluster
15:17 ctria joined #gluster
15:17 glusterbot New news from newglusterbugs: [Bug 1062287] On breaking the connection between replicated volumes certain files return -ENOTCONN <https://bugzilla.redhat.com/show_bug.cgi?id=1062287>
15:18 Nuxr0 joined #gluster
15:19 georgeh|workstat joined #gluster
15:22 bugs_ joined #gluster
15:27 bala joined #gluster
15:28 Gluster joined #gluster
15:33 theron joined #gluster
15:36 vpshastry joined #gluster
15:46 lpabon joined #gluster
15:51 pravka_ joined #gluster
15:51 Gluster1 joined #gluster
15:52 elyograg ok, turning on performance.nfs.write-behind is NOT fixing the problem with the NFS service process being killed by the OOM killer.
15:53 elyograg yesterday we added another gluster volume.  expanding the existing volume results in a need to rebalance, which is unworkable in the short term due to bugs in 3.3.1, and unworkable in the long term due to the extreme amount of data that has to be moved every time we expand.
15:55 FilipeCifali joined #gluster
15:59 _Bryan_ joined #gluster
15:59 DV joined #gluster
16:01 Gluster joined #gluster
16:02 ktosiek_ joined #gluster
16:03 Gluster1 joined #gluster
16:05 jag3773 joined #gluster
16:07 Gluster joined #gluster
16:08 Gluster2 joined #gluster
16:09 Gluster1 joined #gluster
16:11 Gluster3 joined #gluster
16:11 Gluster4 joined #gluster
16:18 mattapperson joined #gluster
16:20 ctria joined #gluster
16:24 dusmant joined #gluster
16:25 bennyturns joined #gluster
16:28 georgeh|workstat joined #gluster
16:30 rotbeard joined #gluster
16:48 blook joined #gluster
17:02 shyam joined #gluster
17:04 portante left #gluster
17:05 portante joined #gluster
17:05 portante left #gluster
17:13 vpshastry left #gluster
17:13 JoeJulian social: See if there's a bug already regarding the ugly workaround and if not, please file one.
17:14 JoeJulian elyograg: My understanding is that 3.4 changes the mappings such that less overall data has to be moved to rebalance.
17:16 mattappe_ joined #gluster
17:26 hchiramm_ joined #gluster
17:27 social JoeJulian: I'm still not sure about it, I don't think the volume is in correct state yet
17:33 elyograg JoeJulian: that would be good. is there a url, readme, or anything else I can jump to on the issue?
17:35 jobewan joined #gluster
17:36 Mo___ joined #gluster
17:36 JoeJulian Oh sure... now you want me to remember where I got that info... :P
17:38 zerick joined #gluster
17:38 RameshN joined #gluster
17:41 mattapperson joined #gluster
17:44 sac`away joined #gluster
17:45 Humble joined #gluster
17:45 ngoswami joined #gluster
17:52 theron joined #gluster
17:52 kaptk2 joined #gluster
17:58 recidive joined #gluster
17:59 mattf left #gluster
17:59 JoeJulian elyograg: Ok, maybe he (whoever "he" was) was referring to commit dafd31b7188057367cb9fb780f921f4bb8a930fb in 3.5.
18:01 rossi_ joined #gluster
18:02 elyograg found the commit with google.  says that it adds filename pattern support.  that's a good thing, but if the rebalance itself doesn't get a lot smarter, i don't think that specific commit does much.
18:04 JoeJulian yeah
18:05 JoeJulian Jeff Darcy did a presentation on a "concentric ring" rebalance that he wanted to implement. He never got management approval to spend time on that, apparently.
18:08 elyograg is there any info anywhere that I can look at on that?  I guess if it didn't get approval it won't be coming, but I'm curious about the details.
18:08 sac`away joined #gluster
18:08 elyograg i googled for myself but didn't find anything other than IRC logs for this channel and stuff about Pride and Prejudice. :)
18:11 lalatenduM joined #gluster
18:12 JoeJulian I think it might have been this one, but I can't access it: https://access.redhat.com/site/videos/215123
18:13 glusterbot Title: Red Hat Summit 2012 - A Deep Dive Into Red Hat Storage - Red Hat Customer PortalRed Hat Customer Portal (at access.redhat.com)
18:13 elyograg that would probably be an awesome video to watch.  if you don't have access, I definitely won't.
18:14 JoeJulian Ah, here's his slides: https://access.redhat.com/site/videos/215123
18:14 glusterbot Title: Red Hat Summit 2012 - A Deep Dive Into Red Hat Storage - Red Hat Customer PortalRed Hat Customer Portal (at access.redhat.com)
18:14 JoeJulian er, no...
18:14 lalatenduM semiosis, ping
18:14 semiosis lalatenduM: yo
18:14 semiosis kkeithley_: got your message
18:15 JoeJulian elyograg: http://pl.atyp.us/papers/summit-glusterfs-2012.odp
18:15 lalatenduM semiosis, need you help on this bug https://bugzilla.redhat.com/show_bug.cgi?id=1062118
18:15 glusterbot Bug 1062118: high, unspecified, ---, lmohanty, ASSIGNED , Not able to install GlusterFS 3.5 in Ubuntu
18:15 semiosis lalatenduM: ok
18:15 lalatenduM semiosis, can I assign to you ? :)
18:15 lalatenduM s/to/ it to/
18:15 glusterbot What lalatenduM meant to say was: semiosis, can I assign  it to you ? :)
18:15 SFLimey joined #gluster
18:15 rossi__ joined #gluster
18:15 semiosis lalatenduM: fine with me. (i didnt know bugs could be assigned to me!)
18:16 lalatenduM semiosis, I think it can be done , if you an account in bugzilla
18:16 lalatenduM s/an/ have an/
18:16 glusterbot What lalatenduM meant to say was: semiosis, I think it c have an be done , if you an account in bugzilla
18:17 semiosis oh i think i know what the problem is
18:17 semiosis let me check some things
18:17 semiosis please do assign to me, if you can
18:18 sac`away joined #gluster
18:18 ngoswami joined #gluster
18:18 Humble joined #gluster
18:18 lalatenduM semiosis, awesome!! what email id do you have in bugzilla?
18:18 semiosis lalatenduM: pm'd you email, also dropped a comment on the ticket
18:20 lalatenduM semiosis, perfect, thanks a lot. I have assigned the bug to you
18:20 semiosis cool!
18:26 ^rcaskey joined #gluster
18:26 KyleG joined #gluster
18:26 KyleG joined #gluster
18:34 glusterbot New news from resolvedglusterbugs: [Bug 1062118] Not able to install GlusterFS 3.5 in Ubuntu Raring Ringtail (13.04) <https://bugzilla.redhat.com/show_bug.cgi?id=1062118>
18:34 semiosis lalatenduM: ^^
18:35 semiosis lalatenduM: summary: impossible to publish packages for ubuntu raring because it's no longer supported
18:35 lalatenduM semiosis, checking
18:35 rcaskey joined #gluster
18:36 lalatenduM semiosis, perfect!! :)
18:36 semiosis tyvm :)
18:37 lalatenduM semiosis, I missed that part , as Ubuntu intermediate releases are supported only for 9 months right?
18:38 semiosis right, and that policy change went into effect after quantal quetzal (12.10), so that intermediate release is still supported even tho raring, which is 6 months *newer*, is not, because it came out after the policy change
18:39 lalatenduM semiosis, got it. Thanks a lot for your help :) really appreciate it semiosis++
18:39 semiosis my pleasure
18:47 ndk joined #gluster
18:54 tyrfing_ joined #gluster
19:04 rotbeard joined #gluster
19:06 theron joined #gluster
19:07 mattappe_ joined #gluster
19:08 mattapp__ joined #gluster
19:12 aixsyd joined #gluster
19:12 aixsyd dbruhn: heya buddy - got an idea to pass by ya
19:12 aixsyd or jclift_ .
19:12 dbruhn sup
19:13 aixsyd so im having a helluva time trying to find an IB switch with an *internal* subnet manager. its either 9024's or they dont exist until the $1000+ price point
19:13 aixsyd and as we spoke before - if i run a single subnet manager on a node and that node goes offline, all my clusters' fabric goes away
19:14 aixsyd but I had a thought.
19:16 dbruhn aixsyd, I was doing some reading the other day and apparently opensm can run on multiple servers, the first one up becomes the master and the rest go into standby
19:16 dbruhn http://pkg-ofed.alioth.debian.org/howto/infiniband-howto-4.html
19:16 glusterbot Title: Infiniband HOWTO: Setting up a basic infiniband network (at pkg-ofed.alioth.debian.org)
19:16 dbruhn section 4.5
19:17 aixsyd dbruhn: really? so in theory, opensm can run on all 5?
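
A sketch of the standby arrangement from that howto: start opensm on every candidate node and optionally bias the election with a priority (0-15, higher wins). The flags are from the opensm man page; packaging details vary:

    # on the preferred master:
    opensm -B --priority 15
    # on each standby candidate:
    opensm -B --priority 1

Only the elected master actively sweeps the fabric; the others sit in standby and take over if it disappears.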
19:18 glusterbot New news from newglusterbugs: [Bug 1059833] Crash in quota <https://bugzilla.redhat.com/show_bug.cgi?id=1059833>
19:18 CheRi joined #gluster
19:18 aixsyd interesting. because i thought of another way around it
19:18 aixsyd and maybe my idea is still the best - lemme know your thoughts.
19:19 aixsyd so i have 2x 10gb ports on each server. i have 5 servers, 2 sets of clusters and a VM node.
19:19 aixsyd http://i.imgur.com/c95UsCE.jpg
19:24 aixsyd so each cluster set would run two IB networks. one thats dedicated to just gluster, and the other like the "outside" network. opensm would run on all 5 nodes - but cluster A would only be listening on ports ib0 and ib1 on that cluster node. same with cluster set B. subnet C would have opensm running on the 5th node, and be seen on both ports for that.
19:25 aixsyd or am i getting ovrly complicated? >.<
19:34 denaitre joined #gluster
19:39 tyrfing_ joined #gluster
19:48 Matthaeus joined #gluster
19:51 xymox joined #gluster
19:59 xymox joined #gluster
20:00 theron joined #gluster
20:02 rossi_ joined #gluster
20:07 andreask joined #gluster
20:11 georgeh|workstat joined #gluster
20:11 dbruhn aixsyd, sorry was away from my desk. I am having a hard time wrapping my head around what you are proposing at first glance. Have a diagram?
20:12 dbruhn oh never mind it's there. Overly complicated in my opinion.
20:17 KyleG left #gluster
20:20 matclayton joined #gluster
20:23 JoeJulian he's gone
20:24 dbruhn Ahh poo, I'll catch him next time
20:26 jobewan joined #gluster
20:38 KyleG joined #gluster
20:38 KyleG joined #gluster
20:43 mattappe_ joined #gluster
20:43 blook hi there anyone here? :)
20:44 blook im really upset - i had to take one brick down for a day and wanted to bring it back right now
20:45 blook but the whole storage/gluster was not really responsive anymore and now i had to disable it again
20:45 blook im using 3.4
20:45 NuxRo joined #gluster
20:46 blook it had poor performance for more than 3 hours and hasn't been able to resync everything yet due to massive cpu load
20:46 blook as far as i could see a lot of context switches :(
20:47 Nuxr0 joined #gluster
20:49 mattappe_ joined #gluster
20:49 nikk_ joined #gluster
20:51 nikk_ i'm messing with rhel7, i only see glusterfsd packages in the repo, no gluster.. any ideas?  can't create new volumes with just the daemon :[
20:52 mattappe_ joined #gluster
20:53 JoeJulian Damned impatient people... blook might have liked this answer: Ooh, this might be a good reason to use purpleidea's suggestion the other day of using cgroups. http://www.andrewklau.com/controlling-glusterfsd-cpu-outbreaks-with-cgroups/
20:53 glusterbot Title: Controlling glusterfsd CPU outbreaks with cgroups | (at www.andrewklau.com)
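
A minimal libcgroup-style sketch of what that article describes, capping glusterfsd's CPU weight so a resync can't starve everything else. The group name and share value are arbitrary choices:

    cgcreate -g cpu:/glusterfsd
    cgset -r cpu.shares=256 glusterfsd      # relative weight; default is 1024
    for pid in $(pidof glusterfsd); do
        cgclassify -g cpu:glusterfsd "$pid"
    done

cpu.shares is a relative weight, so the cap only bites when other processes are competing for CPU.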
20:53 JoeJulian @yum repo
20:53 glusterbot JoeJulian: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
20:53 mattapp__ joined #gluster
20:54 JoeJulian ^ that's for you nikk_
20:56 nikk_ oh, there's already a rhel7 branch?
20:57 nikk_ haha no shit, *facepalm*
20:57 JoeJulian :D
20:58 nikk_ thankyou sir :]
20:58 JoeJulian You're welcome
20:58 Nuxr0 joined #gluster
21:00 chirino joined #gluster
21:01 Nuxr0 joined #gluster
21:01 Matthaeus1 joined #gluster
21:02 Nuxr0 joined #gluster
21:04 dramagods joined #gluster
21:05 RedShift joined #gluster
21:05 mattappe_ joined #gluster
21:07 rossi_ joined #gluster
21:09 Nuxr0 joined #gluster
21:13 Nuxr0 joined #gluster
21:15 nikk_ hm.. on rhel7 $releasever is set to "Everything" for me - i've never seen it be anything other than client, server, or workstation
21:15 nikk_ which makes the glusterfs-epel.repo no worky
21:17 JoeJulian Well that's weird.
21:18 JoeJulian Just replace $releasever by hand.
21:18 nikk_ yeah i am
21:18 nikk_ might be a new rhel7'ism
21:18 JoeJulian ... and file a bug report.
21:18 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:18 JoeJulian might be
21:18 JoeJulian kkeithley_: ^^
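
A sketch of the by-hand workaround while $releasever misbehaves; the repo filename matches the one mentioned above, and keeping a backup copy is cheap:

    sed -i.bak 's/\$releasever/7/g' /etc/yum.repos.d/glusterfs-epel.repo
    yum clean metadata
    yum install glusterfs-server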
21:22 ktosiek joined #gluster
21:28 recidive joined #gluster
21:34 ktosiek joined #gluster
21:41 kkeithley_ JoeJulian, nikk_: you want an epel-7Everything, a la epel-6Server, in the yum repos?
21:42 kkeithley_ I presume yes. I've made symlinks
21:43 VerboEse joined #gluster
21:47 mattap___ joined #gluster
21:50 rcaskey what's the appropriate syntax for listing multiple servers for mounting?
21:50 rcaskey (in fstab)
21:50 semiosis rcaskey: use ,,(rrdns)
21:50 glusterbot rcaskey: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
21:50 semiosis put the rr address in fstab
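
A sketch of the rrdns approach end to end, with example names and documentation-range addresses: several A records on one name, and that name in fstab.

    ; zone file fragment (BIND syntax)
    gluster  IN  A  192.0.2.11
    gluster  IN  A  192.0.2.12

    # /etc/fstab
    gluster.example.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0

The mount only needs any one of the records to answer for the volfile fetch; after that the client connects to every brick itself.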
21:53 portante joined #gluster
21:54 rcaskey semiosis, I think i've got a dep issue because ultimately the vm holding the on-site dns server may live in gluster
21:54 rcaskey and the router, so the off-site one would also be inaccessible
21:55 dbruhn That sounds messy, and maybe a reason for a server off of the vm host relying on gluster
21:56 semiosis +1
21:56 rcaskey yeah trying to keep the # of physical boxen onsite to a minimum
21:56 semiosis throw openwrt or ddwrt on a netgear :)
21:56 JoeJulian rcaskey: You can list one backupvolfile-server in the mount options.
21:56 semiosis or that
21:57 rcaskey i think i gots it
21:57 JoeJulian but we don't like that. ;)
21:57 JoeJulian And if you do it, we'll talk about you at the Christmas party.
21:57 rcaskey nooooo
21:58 rcaskey for my final step i'm gonna virtualize my vm hosts and they will exist in pure nirvana
21:58 rcaskey safe from io with the corrupted physical world
21:59 dbruhn I like where this is going.
21:59 dbruhn I mean going to the christmas party and listening to Nirvana
22:00 JoeJulian +1
22:02 rcaskey no i meant the state of like...detachment
22:03 JoeJulian That's like just south of the state of California, right?
22:09 rcaskey semiosis, you aware of any issues that cause gluster volumes not to mount properly at boot via fstab?
22:09 rcaskey perhaps fuse being too late in the dep chain?
22:09 semiosis *painfully* aware of those issues!
22:09 rcaskey semiosis, is there an easy fix?
22:10 semiosis yes
22:10 semiosis many
22:10 semiosis all kludgy
22:10 semiosis what distro are you on?
22:10 rcaskey ubuntu 13.04ish i think
22:10 semiosis raring ringtail?
22:10 rcaskey I lied, this one is 12.04, but i can dist-upgrade it
22:10 semiosis no, stay on precise!
22:10 rcaskey k, staying on precise then :P
22:11 Ark_explorys joined #gluster
22:11 semiosis what's 'dpkg -l | grep gluserfs-server' say?
22:11 rcaskey empty
22:11 semiosis sorry
22:12 semiosis s/gluser/gluster/
22:12 glusterbot What semiosis meant to say was: what's 'dpkg -l | grep glusterfs-server' say?
22:12 rcaskey I did btw
22:12 rcaskey its just empty anyway
22:12 semiosis try -client
22:12 rcaskey 3.2.5-1ubuntu1
22:12 semiosis there's yer problem right there
22:12 semiosis use my ,,(ppa) packages!
22:12 glusterbot The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k
22:13 semiosis 3.4.2 is the current release of glusterfs, with all those pesky boot-time mount problems fixed the right way
22:13 rcaskey thought i was, I think I forgot to apt-get update after adding your ppa
22:14 semiosis make sure all your gluster hosts have the same version
22:14 semiosis 3.2 & 3.4 are not compatible
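
A sketch of getting onto the 3.4 PPA on precise; the PPA name is taken from the launchpad URL earlier in the log:

    apt-get install python-software-properties   # provides add-apt-repository on 12.04
    add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
    apt-get update
    apt-get install glusterfs-server glusterfs-client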
22:17 dbruhn So apparently the other day when freenode was totally a mess everywhere, it was because GCHQ was ddos'ing it to keep anonymous from communicating....
22:18 semiosis i saw an article about gchq ddosing irc, but it stopped short of drawing the freenode connection
22:18 semiosis i assumed as much though
22:18 semiosis dbruhn: got a link?
22:19 JoeJulian Ah damn... I should go back and look at the mailllogs and see if spam levels went down at the same time.
22:19 mattappe_ joined #gluster
22:20 dbruhn http://www.bbc.co.uk/news/technology-26049448
22:20 glusterbot Title: BBC News - Snowden leaks: GCHQ 'attacked Anonymous' hackers (at www.bbc.co.uk)
22:20 dbruhn they don't say freenode
22:20 dbruhn but it lines up
22:20 semiosis yep
22:21 mattap___ joined #gluster
22:24 JoeJulian Oh, that looks like it's an old attack that's just come to light. If anything, I would lean toward anonymous ddosing freenode just to show GCHQ how it's done.
22:24 dbruhn lol
22:26 KyleG left #gluster
22:28 chirino joined #gluster
22:29 mattappe_ joined #gluster
22:29 mattapperson joined #gluster
22:38 mattappe_ joined #gluster
22:39 xymox joined #gluster
22:40 chirino joined #gluster
22:44 mattappe_ joined #gluster
22:49 xymox joined #gluster
22:52 Matthaeus joined #gluster
22:56 mattappe_ joined #gluster
22:56 mattapperson joined #gluster
23:00 xymox joined #gluster
23:09 mattappe_ joined #gluster
23:12 mattap___ joined #gluster
23:12 xymox joined #gluster
23:17 gdubreui joined #gluster
23:22 xymox joined #gluster
23:27 mattappe_ joined #gluster
23:28 mattapperson joined #gluster
23:32 mattappe_ joined #gluster
23:37 mattapperson joined #gluster
23:39 kaptk2 joined #gluster
23:49 glusterbot New news from newglusterbugs: [Bug 1062437] stripe does not work with empty xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1062437>
23:53 georgeh|workstat joined #gluster