
IRC log for #gluster, 2014-05-13


All times shown according to UTC.

Time Nick Message
00:00 badone__ joined #gluster
00:13 gdubreui joined #gluster
00:13 DV joined #gluster
00:18 zerodeux joined #gluster
00:26 elyograg probed, bricks added, fix-layout underway.
00:27 nueces joined #gluster
00:28 zerodeux hello gluster people
00:29 zerodeux I currently have a broken Gluster server with glusterd which just refused to restart
00:30 zerodeux for some reason the configuration of my volume found in /var/lib/gluster does not seem to match what was actually running before I stopped glusterd
00:30 zerodeux for instance I detached a peer before stopping it, and now it insists on finding it while restarting (and fails)
00:31 zerodeux I was wondering: since I know my volume setup, can I recreate my volume with the existing bricks, without losing any data?
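
zerodeux's question above goes unanswered in this log. The commonly cited recovery path is to recreate the volume with the same bricks, in the same order and with the same layout, which leaves the data on the bricks in place; `volume create` will however reject bricks that still carry the old volume marker. A rough, unverified sketch with hypothetical brick paths and layout; treat it as an outline and back up the bricks first:

    # on each brick root, clear the old volume marker so `volume create` accepts it
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    # then recreate the volume with the same bricks, in the original order
    gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
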
00:32 elyograg I'm getting this in the logs sixteen times for each directory.  There were 16 replica sets before the fix-layout, now there are 24.
00:32 elyograg [2014-05-13 00:30:10.391108] W [client-rpc-fops.c:327:client3_3_mkdir_cbk] 0-mdfs-client-33: remote operation failed: File exists. Path: /newscom/mdfs/RTR/rtrphotosfour/docs/047 (00000000-0000-0000-0000-000000000000)
00:33 primechuck joined #gluster
00:35 elyograg hmm.  looks like it's a different 16 brick (client) numbers for each directory.  If i had to guess, I would say that the client numbers correspond to the new bricks that I just added.
00:35 elyograg 342 through 47, in different numeric orders.
00:35 elyograg s/342/32/
00:35 glusterbot elyograg: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
00:36 * elyograg thwaps glusterbot with a tuna.
00:36 cyberj joined #gluster
00:38 keytab joined #gluster
00:41 yinyin_ joined #gluster
00:43 chirino joined #gluster
00:44 chirino joined #gluster
00:45 plarsen joined #gluster
01:04 edong23 joined #gluster
01:05 badone__ joined #gluster
01:10 jmarley joined #gluster
01:16 mjsmith2 joined #gluster
01:27 PLATOSCAVE joined #gluster
01:31 aviksil joined #gluster
01:50 zerodeux left #gluster
01:50 Ylann joined #gluster
01:52 Ylann left #gluster
02:02 an joined #gluster
02:02 yinyin_ joined #gluster
02:23 PLATOSCAVE joined #gluster
02:28 DV joined #gluster
02:37 gdubreui joined #gluster
02:39 bharata-rao joined #gluster
02:47 bala joined #gluster
02:56 haomaiwa_ joined #gluster
02:57 rwheeler joined #gluster
02:57 nueces joined #gluster
03:24 shubhendu joined #gluster
03:24 haomaiwa_ joined #gluster
03:32 RameshN joined #gluster
03:34 kdhananjay joined #gluster
03:36 gdubreui joined #gluster
03:36 haomaiw__ joined #gluster
03:48 shilpa_ joined #gluster
03:49 DV__ joined #gluster
03:52 itisravi joined #gluster
04:05 shylesh__ joined #gluster
04:08 aviksil joined #gluster
04:17 Paul-C joined #gluster
04:18 ChamaGluster joined #gluster
04:19 an joined #gluster
04:20 yinyin_ joined #gluster
04:20 ChamaGluster Hey JoeJulian, don't know if you are up, but thanks for all your responses via email.  I watch the list and am glad someone knows whats up. :)
04:20 ngoswami joined #gluster
04:23 kdhananjay joined #gluster
04:27 rastar joined #gluster
04:32 ppai joined #gluster
04:34 DV joined #gluster
04:36 Joe630 left #gluster
04:36 ChamaGluster left #gluster
04:39 rjoseph joined #gluster
04:52 TvL2386 joined #gluster
04:53 nishanth joined #gluster
04:58 vpshastry joined #gluster
04:58 atinmu joined #gluster
05:01 yinyin_ joined #gluster
05:03 kdhananjay1 joined #gluster
05:03 hagarth joined #gluster
05:10 ndarshan joined #gluster
05:11 ndarshan joined #gluster
05:14 bharata-rao joined #gluster
05:15 prasanthp joined #gluster
05:20 psharma joined #gluster
05:27 DV joined #gluster
05:27 gtobon joined #gluster
05:28 gtobon hello
05:28 glusterbot gtobon: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
05:28 davinder joined #gluster
05:31 gtobon I have Gluster 3.3 and I have the following information
05:31 gtobon gluster volume info
05:31 gtobon
05:31 gtobon Volume Name: gv0_shares
05:31 gtobon Type: Replicate
05:31 gtobon Volume ID: a7716225-37e4-4fee-97c7-1acc08bb9307
05:31 gtobon Status: Started
05:31 gtobon Number of Bricks: 1 x 2 = 2
05:31 bala joined #gluster
05:31 gtobon Transport-type: tcp
05:31 gtobon Bricks:
05:31 gtobon Brick1: 10.50.50.220:/shares
05:31 gtobon Brick2: 10.50.50.221:/shares
05:31 gtobon Options Reconfigured:
05:31 gtobon geo-replication.indexing: on
05:31 gtobon gluster volume status
05:31 gtobon operation failed
05:31 gtobon
05:31 gtobon Failed to get names of volumes
05:32 gtobon How can I fix the gluster volume status
05:32 gtobon I'm wondering if i can upgrade to Gluster 3.5 with no issues
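
gtobon's "Failed to get names of volumes" also gets no reply here. On 3.3 that error from `gluster volume status` usually points at the management daemon rather than the bricks, so a first-pass check might look like the sketch below (generic troubleshooting, not a diagnosis of this particular case):

    # is every peer connected?
    gluster peer status
    # restart the management daemon on the node reporting the error
    service glusterd restart
    # then read the glusterd log for the underlying failure
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
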
05:33 kanagaraj joined #gluster
05:50 hagarth joined #gluster
05:52 hagarth left #gluster
05:53 hagarth joined #gluster
05:54 rjoseph joined #gluster
05:57 nshaikh joined #gluster
05:57 nueces joined #gluster
06:09 ppai joined #gluster
06:10 shubhendu_ joined #gluster
06:22 hagarth joined #gluster
06:22 rjoseph joined #gluster
06:28 rahulcs joined #gluster
06:28 shilpa_ joined #gluster
06:29 vimal joined #gluster
06:30 ktosiek joined #gluster
06:32 kanagaraj joined #gluster
06:45 d-fence joined #gluster
06:56 mshadle grr. "stale nfs handle" on heal operations between two nodes
06:59 ade_b joined #gluster
07:00 ade_b any one able to help me with http://fpaste.org/101283/99643021/
07:00 glusterbot Title: #101283 Fedora Project Pastebin (at fpaste.org)
07:00 ctria joined #gluster
07:00 hchiramm__ ade_b, looking into it
07:01 ade_b hchiramm_, cool, thanks
07:01 Humble ade_b, which command was executed ?
07:01 ade_b hchiramm_, a little background  - its a 2 node gluster setup with oVirt running on top
07:01 Humble k
07:02 ade_b hchiramm_, so this is probably coming from oVirt trying to do something
07:02 ade_b I can do gluster volume status on the one node (that doesn't have oVirt manager on)
07:02 ade_b but I can't on the other node
07:02 Humble what error it gives ?
07:03 ade_b "Another transaction could be in progress. Please try again after sometime"
07:03 ade_b thats for "gluster volume status"
07:03 Humble ok.. so its within the timeout ..
07:03 ade_b but "gluster volume info" works
07:04 mshadle i have a bunch of "gfid" in the volume heal info, and it doesn't seem to be able to heal.
07:05 ade_b this is gluster volume info http://paste.fedoraproject.org/101284/13999646
07:05 glusterbot Title: #101284 Fedora Project Pastebin (at paste.fedoraproject.org)
07:06 Humble ade_b, can u tell me which compatibility level has been set for ur ovirt DC ?
07:06 ade_b this is gluster volume status from the "good node" http://paste.fedoraproject.org/101285/96478213
07:06 glusterbot Title: #101285 Fedora Project Pastebin (at paste.fedoraproject.org)
07:06 eseyman joined #gluster
07:06 ade_b 3.3
07:07 Humble hmmm .. I think thats the issue
07:07 Humble which version of ovirt is in place ?
07:07 Paul-C left #gluster
07:07 Humble http://gerrit.ovirt.org/#/c/23982/
07:07 glusterbot Title: Gerrit Code Review (at gerrit.ovirt.org)
07:09 ade_b Humble, ovirt-release-11.0.2-1.noarch
07:09 Humble ade_b, wait .. I am cross-checking..
07:09 ade_b Humble, ok
07:10 ade_b heres some more log stuff http://fpaste.org/101287/13999650/
07:10 glusterbot Title: #101287 Fedora Project Pastebin (at fpaste.org)
07:12 Humble ade_b, lets switch channel  :)
07:13 ade_b Humble, yup
07:13 ade_b Humble, thanks
07:14 badone__ joined #gluster
07:14 sahina joined #gluster
07:14 AaronGr joined #gluster
07:15 keytab joined #gluster
07:16 davinder joined #gluster
07:22 davinder joined #gluster
07:23 the-me joined #gluster
07:27 an joined #gluster
07:36 maduser joined #gluster
07:38 fsimonce joined #gluster
07:41 kdhananjay joined #gluster
07:44 ndarshan joined #gluster
07:46 ramteid joined #gluster
07:47 liquidat joined #gluster
07:55 calum_ joined #gluster
07:55 edward2 joined #gluster
07:57 ceiphas joined #gluster
07:59 ceiphas hi folks! i have a gluster volume with an enormous directory with millions of files. when i do a find in this dir it takes about 30sec the first time and 1sec the second time. but if i wait about 5 minutes it takes 30sec again. how can i optimize this volume so that the cache stays longer?
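
Nobody picks this up in-channel. The knobs that usually govern how long directory and metadata lookups stay cached are the volume-side cache timeouts and the FUSE mount's kernel cache timeouts; the option names below come from the upstream volume-set and mount.glusterfs documentation, and the values are only illustrative:

    # volume-side caches, in seconds (check `gluster volume set help` for valid ranges)
    gluster volume set myvol performance.cache-refresh-timeout 60
    gluster volume set myvol performance.md-cache-timeout 60
    # kernel-side dentry/attribute caching on the FUSE mount
    mount -t glusterfs -o entry-timeout=60,attribute-timeout=60 server:/myvol /mnt/myvol
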
08:02 kdhananjay joined #gluster
08:09 ngoswami joined #gluster
08:15 PLATOSCAVE joined #gluster
08:15 an joined #gluster
08:15 ProT-0-TypE joined #gluster
08:17 aravindavk joined #gluster
08:31 dusmant joined #gluster
08:32 ppai joined #gluster
08:47 franc joined #gluster
08:47 franc joined #gluster
08:47 saravanakumar joined #gluster
08:48 rjoseph left #gluster
08:54 dusmant joined #gluster
08:54 olisch joined #gluster
08:55 kanagaraj joined #gluster
09:00 davinder joined #gluster
09:06 64MAAOCVQ joined #gluster
09:06 5EXABADVL joined #gluster
09:07 an joined #gluster
09:07 kdhananjay joined #gluster
09:15 rahulcs joined #gluster
09:22 dusmant joined #gluster
09:24 Philambdo joined #gluster
09:33 jmarley joined #gluster
09:33 jmarley joined #gluster
09:34 saurabh joined #gluster
09:35 kanagaraj joined #gluster
09:41 rahulcs joined #gluster
09:42 jwww joined #gluster
09:42 jwww Hello.
09:42 glusterbot jwww: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:45 ctria joined #gluster
09:47 jwww I'm new to glusterfs 3.3, what do you guys use when you have multiple disks in one server and want all the space available in one brick?
09:51 foobar why specifically 1 brick ?
09:52 foobar you could create a jbod or raid0 array ... but it's risky
09:52 jwww I just need one replicated volume, I thought that 1 brick on each server would be fine.
09:53 jwww raid 0 array sounds nice, why do you say it's risky?
09:53 glusterbot New news from newglusterbugs: [Bug 1077452] Unable to setup/use non-root Geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1077452>
09:53 ade_b joined #gluster
09:57 spandit joined #gluster
09:58 foobar jwww: because if 1 disk fails... you lose your data
09:58 foobar on all the other disks as well
09:58 foobar better to have gluster use the multiple bricks.... then you only lose a single brick
09:59 jwww I see.
10:01 jwww is it possible to have all the disk space available in just one volume? I got 2 servers with 2 hard disks of 2TB each, but I need 4TB available in one volume.
10:05 rahulcs_ joined #gluster
10:07 foobar you can put as many bricks in a volume as you want
10:07 foobar so 2 bricks on server A mirrored on 2 bricks on server B
10:07 foobar or 4 bricks deviced on 2 servers all contatinated (no redundancy...)
10:08 foobar or quad-redundancy, with 2 bricks replicated on A and 2 on B
10:08 foobar s/deviced/deviced/
10:08 glusterbot What foobar meant to say was: or 4 bricks deviced on 2 servers all contatinated (no redundancy...)
10:08 foobar s/contatinated/concatinated
10:08 foobar devided!
10:09 foobar I need more coffee
10:09 foobar divided even...
10:13 jwww I think I misunderstood how bricks make volumes. I'll go re-read the documentation
10:13 jwww thanks foobar .
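
A concrete sketch of the layout foobar describes for jwww's case, with hypothetical hostnames and brick paths: a replica-2 volume with two bricks per server. Bricks are grouped into replica sets in the order listed, so each pair below spans both servers and the two pairs are distributed, giving roughly 4TB usable from four 2TB disks:

    gluster volume create gv0 replica 2 \
        serverA:/export/disk1/brick serverB:/export/disk1/brick \
        serverA:/export/disk2/brick serverB:/export/disk2/brick
    gluster volume start gv0
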
10:19 ctria joined #gluster
10:24 rahulcs joined #gluster
10:25 rahulcs_ joined #gluster
10:26 sulky joined #gluster
10:30 kkeithley1 joined #gluster
10:33 an joined #gluster
10:37 Slashman joined #gluster
10:49 nshaikh joined #gluster
10:51 rgustafs joined #gluster
10:56 plarsen joined #gluster
10:58 andreask joined #gluster
11:00 spandit joined #gluster
11:00 jkroon_ joined #gluster
11:01 jkroon_ is it possible to view the changelogs for the 3.4 branch online somewhere without having to download the individual source archives?
11:13 diegows joined #gluster
11:19 kkeithley1 It seems odd that it's not possible to get the changelog from the github repo. Maybe I'm missing something.  Anyway, http://kkeithle.fedorapeople.org/changelog-3.4
11:21 shubhendu_ joined #gluster
11:28 ccha2 kkeithley_: is it git log master.. ?
11:30 kdhananjay joined #gluster
11:31 an joined #gluster
11:32 kkeithley_ ccha2: I don't understand what you're asking
11:33 ccha2 to display the changelog
11:34 kkeithley_ On the github web site.
11:35 kkeithley_ Sure, once I've cloned the tree I can do a `git log $whatever` in the tree
11:36 kkeithley_ jkroon_ asked about the changelog without cloning the tree.
11:38 sjm joined #gluster
11:39 ccha2 hum about git hosting, it looks like on forge.gluster.org there isn't the 3.5 branch yet
11:41 ccha2 there is no v3.5.0 tag either
11:42 kkeithley_ that's very strange
11:43 ccha2 the main git is the github one ?
11:44 edward2 joined #gluster
11:44 kkeithley_ The main git repo is git.gluster.org
11:45 ccha2 oh ok
11:46 kkeithley_ github and forge.gluster.org are mirrors
11:46 ccha2 I follow the http://www.gluster.org/download/ page
11:46 kkeithley_ If you want to submit patches you need to clone from git.gluster.org
11:46 kkeithley_ If you just want to check out the source then it doesn't matter
11:49 calum_ joined #gluster
11:54 sulky joined #gluster
11:54 ppai joined #gluster
12:01 kkeithley_ ccha2: release-3.5 branch has been pushed to the forge.gluster.org repo.
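
For jkroon_'s original question, viewing the 3.4 history online without downloading release tarballs, a shallow clone of either mirror is usually enough; a sketch assuming the github mirror:

    # fetch only the recent history of the release-3.4 branch
    git clone --depth 100 --branch release-3.4 https://github.com/gluster/glusterfs.git
    cd glusterfs && git log --oneline
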
12:05 haomaiwa_ joined #gluster
12:07 plarsen joined #gluster
12:10 Ark joined #gluster
12:14 haomaiw__ joined #gluster
12:15 sulky joined #gluster
12:16 B21956 joined #gluster
12:17 itisravi_ joined #gluster
12:24 glusterbot New news from newglusterbugs: [Bug 1097224] Disconnects of peer and brick is logged while snapshot creations were in progress during IO <https://bugzilla.redhat.com/show_bug.cgi?id=1097224>
12:29 haomaiwang joined #gluster
12:38 sprachgenerator joined #gluster
12:48 mjsmith2 joined #gluster
12:52 PLATOSCAVE joined #gluster
12:53 kdhananjay joined #gluster
12:53 chirino joined #gluster
12:57 sprachgenerator joined #gluster
13:03 hagarth joined #gluster
13:05 shilpa_ joined #gluster
13:06 sroy joined #gluster
13:07 jmarley joined #gluster
13:07 jmarley joined #gluster
13:07 japuzzo joined #gluster
13:15 prasanthp joined #gluster
13:15 coredump joined #gluster
13:16 kdhananjay joined #gluster
13:17 Scott6 joined #gluster
13:18 Peanut Hi folks - I've set up a test environment with Ubuntu 14.04, gluster-3.5 from semiosis PPA, and qemu-glusterfs from André Bauer's PPA for libgfapi integration in qemu. However, when I try to start a vm after changing its XML to use gluster as backend, 'virsh start guest' just sits there, and the guest stays stuck in 'shut off' :-(
13:18 glusterbot New news from resolvedglusterbugs: [Bug 911420] Missing Development Process Workflow Documentation <https://bugzilla.redhat.com/show_bug.cgi?id=911420>
13:19 bennyturns joined #gluster
13:19 Peanut Also, the "transport='tcp'" keeps disappearing from the XML.
13:19 dusmant joined #gluster
13:24 glusterbot New news from newglusterbugs: [Bug 1093594] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1093594>
13:27 Philambdo joined #gluster
13:32 an joined #gluster
13:33 Philambdo joined #gluster
13:40 api984 joined #gluster
13:41 api984 hello
13:41 glusterbot api984: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:41 api984 did anyone try to use glusterfs with maildirs or in a mail cluster to be exact….
13:45 mjsmith2 joined #gluster
13:45 kaptk2 joined #gluster
13:48 glusterbot New news from resolvedglusterbugs: [Bug 927290] Code Hygiene - GlusterFileSystem.getFileBlockLocations(Path p, long start, long len) is empty <https://bugzilla.redhat.com/show_bug.cgi?id=927290> || [Bug 951645] build server error. JAR file isn't up to date <https://bugzilla.redhat.com/show_bug.cgi?id=951645> || [Bug 975560] if UIDs and GIDs dont match across the cluster, file not found error when running map reduce job <ht
13:54 John_HPC joined #gluster
14:00 Scott6_ joined #gluster
14:04 davinder joined #gluster
14:04 dcherednik joined #gluster
14:04 primechuck joined #gluster
14:07 dcherednik Hello. Why does gluster use ports less than 1024 for one side of RPC connections?
14:09 dberry joined #gluster
14:09 Peanut dcherednik: that's a way to prove that it is coming from a program that has root privileges.
14:10 Peanut If you need to access gluster from a non-root program, you can disable that check.
14:11 dberry is there a tutorial on adding bricks to a replicate volume? Basically I have gfs1:/brick1 and gfs2:/brick1 in a replicated volume of 2 bricks and now I want to add gfs3:/brick and gfs4:/brick1 and then remove gfs1 and gfs2
14:12 dberry version is 3.3.2
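
dberry's question is not answered here. On 3.3.x the usual shape of that migration is: add the new replica pair, drain the old one, then commit the removal. A rough sketch using dberry's names (assuming gfs3:/brick1 was meant), not a tested procedure:

    # add the new pair; the brick count must stay a multiple of the replica count
    gluster volume add-brick myvol gfs3:/brick1 gfs4:/brick1
    # migrate data off the old pair, watch status until complete, then commit
    gluster volume remove-brick myvol gfs1:/brick1 gfs2:/brick1 start
    gluster volume remove-brick myvol gfs1:/brick1 gfs2:/brick1 status
    gluster volume remove-brick myvol gfs1:/brick1 gfs2:/brick1 commit
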
14:12 hagarth1 joined #gluster
14:13 dcherednik Peanut: Do you mean if i set "option rpc-auth-allow-insecure on" gluster should not use ports from privilege range?
14:13 scuttle_ joined #gluster
14:13 Peanut dcherednik: no, it just tells gluster to accept connections from ports outside that range.
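
For reference, the two settings normally involved here (for example with libgfapi clients that don't run as root) control what the server side accepts, not which source ports a client picks; the volume name is hypothetical:

    # let bricks accept client connections from unprivileged (>1024) source ports
    gluster volume set gv0 server.allow-insecure on
    # let glusterd itself accept insecure RPC: add this line to /etc/glusterfs/glusterd.vol and restart glusterd
    option rpc-auth-allow-insecure on

Some releases also expose a client.bind-insecure option that makes clients bind unprivileged source ports, which is closer to what dcherednik asks about below; check `gluster volume set help` on your version before relying on it.
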
14:14 dbruhn joined #gluster
14:17 wushudoin joined #gluster
14:17 dcherednik Peanut: Is there a way to configure (or make an easy patch) to use ports outside the privileged range? My situation is, I have 300 nodes in the cluster, and it looks like netstat -nap | grep "192.168.1.2:" | awk '{split($4,a,":"); if (a[2] < 1024) {printf $4 "  "; print $7} }' | grep gluster| wc -l \n 831
14:18 dcherednik 831 connection
14:18 glusterbot New news from resolvedglusterbugs: [Bug 912465] Missing Hadoop Abstract DFS Unit Tests for Jenkins <https://bugzilla.redhat.com/show_bug.cgi?id=912465> || [Bug 949200] Missing CLI and Other Advanced hadoop-glusterfs unit tests <https://bugzilla.redhat.com/show_bug.cgi?id=949200> || [Bug 812924] poor write performance using gluster-plugin <https://bugzilla.redhat.com/show_bug.cgi?id=812924>
14:21 kdhananjay joined #gluster
14:27 LoudNoises joined #gluster
14:37 plarsen joined #gluster
14:38 chirino joined #gluster
14:45 B21956 joined #gluster
14:47 aviksil joined #gluster
14:47 elyograg left #gluster
14:49 LoudNoises joined #gluster
14:53 sputnik1_ joined #gluster
15:00 vpshastry joined #gluster
15:04 hagarth joined #gluster
15:17 jag3773 joined #gluster
15:20 saravanakumar left #gluster
15:22 hagarth joined #gluster
15:22 plarsen joined #gluster
15:23 jbd1 joined #gluster
15:24 failshell joined #gluster
15:27 kanagaraj joined #gluster
15:27 Lookcrabs anyone have a gluster volume 1 petabyte or larger? I'm having what i think are fs corruption issues and a fsck on the whole volume would probably never end (exaggerating) and a fix-layout seems to behave the same. Any tips on tracking this down? i.e. would a fsck run simultaneously across the bricks be a possible option?
15:28 theron joined #gluster
15:29 sputnik1_ joined #gluster
15:30 haomaiwa_ joined #gluster
15:33 theron joined #gluster
15:36 LoudNoises do you have raid underneath?
15:45 dbruhn Lookcrabs, what is your volume configuration, and how large are your bricks?
15:55 davinder joined #gluster
15:58 firemanxbr joined #gluster
16:09 ron-slc joined #gluster
16:14 Lookcrabs volume is a distributed volume and the bricks are pretty big. It's a hardware raid with xfs -i size=512. The bricks are probably around 100 or so terabytes in size plus or minus and we have around 80 to 100 bricks
16:15 Lookcrabs LoudNoises: dbruhn
16:16 dbruhn what version of gluster?
16:17 zerick joined #gluster
16:17 ktosiek joined #gluster
16:18 dbruhn and how full is your system today?
16:20 ndk` joined #gluster
16:20 dbruhn Lookcrabs, where I am going with that question is, do you have the ability to house the data on that brick if you were to remove it from the system
16:23 mjsmith2 joined #gluster
16:26 zerick joined #gluster
16:28 Lookcrabs We probably could house a brick or two if we were to remove a brick or two at a time dbruhn (i figured you were going here). We have been removing troublesome bricks so we are quite full.
16:29 dbruhn My only thought is to remove a brick from the cluster, copy the data back into the volume and then reintroduce the brick after you've healed the file system.
16:29 dbruhn or take the system down and run the fsck
16:29 dbruhn neither are probably the answer you want
16:29 Lookcrabs hahaha nope those are my current options. I was hoping for a magical third hahaha
16:30 dbruhn Sorry to be less than magical!
16:30 dbruhn Why are you running with such large bricks?
16:34 monotek left #gluster
16:34 kmai007 joined #gluster
16:34 Lookcrabs The units we had on hand are 36 disk units and the previous admin set them up as large xfs volumes. dividing the bricks up would probably be the right move once the volume is healthy again
16:34 kmai007 man i could not get on #gluster on kiwi
16:34 kmai007 but i'm here
16:34 aravindavk joined #gluster
16:36 kanagaraj joined #gluster
16:39 chirino joined #gluster
16:40 bala joined #gluster
16:45 LoudNoises yea the redhat docs say you shouldn't really go above 100TB on an XFS volume, we've had much better luck with smaller bricks on our system (we're at about 1PB so not quite as big)
16:47 an joined #gluster
16:50 daMaestro joined #gluster
16:51 nated joined #gluster
16:55 ktosiek_ joined #gluster
16:56 kmai007_ joined #gluster
16:57 nated Hi, I have a dumb geo-replication question.  Is it possible to "pre-seed" the target of a geo-replication pair with a backup of the source?
16:58 nated My use case is making the initial sync between two sites separated by a low-bandwidth link faster
16:58 sprachgenerator joined #gluster
16:59 theron joined #gluster
17:00 failshel_ joined #gluster
17:04 vpshastry left #gluster
17:04 sjm joined #gluster
17:04 sjusthome joined #gluster
17:17 kmai007_ ami still here?
17:17 kmai007_ its so quiet
17:20 plarsen joined #gluster
17:21 dberry is 3.5 compatible with 3.3?
17:22 kmai007_ i believe its 3.5 to 3.4
17:41 jbd1 lots of lurkers here
17:42 plarsen joined #gluster
17:42 an joined #gluster
17:47 jcsp *lurk*
17:51 rotbeard joined #gluster
17:55 theron joined #gluster
17:58 giannello joined #gluster
17:59 dhsmith joined #gluster
18:03 Ark joined #gluster
18:03 cvdyoung joined #gluster
18:09 David_H_Smith joined #gluster
18:38 dbruhn It's been fairly quiet the last couple days
18:39 dbruhn Lookcrabs, sorry to hear about a previous admin, always rough dealing with someone else's design decisions you don't agree with.
18:42 saravanakumar joined #gluster
18:47 Lookcrabs dbruhn: he was a really smart guy and I am really new. If I had to choose based on my raid experience though I would do many smaller bricks to put more weight on our network. Thanks again for your help!
18:52 cvdyoung Hi, if I have two servers setup for replication, how exactly is a write being performed to them both?  Is the write IO being split at the client side, or does it route to one of the servers, and then replicate that write to the second?
18:57 glusterbot New news from newglusterbugs: [Bug 1097417] strcat usage in libglusterfs/src/common-utils.c <https://bugzilla.redhat.com/show_bug.cgi?id=1097417>
18:58 jag3773 joined #gluster
19:05 dbruhn cvdyoung, in the current versions of the software the client writes the data to both of the gluster bricks in a replication pair
19:05 dbruhn or all of the members of the replication group
19:10 sage__ joined #gluster
19:12 kkeithley_ And/but for NFS the client writes once to the NFS server, then that gNFS server does the replica writes to each of the servers in the replication group.
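
Put differently, as a sketch of the two access paths with hypothetical names: the native FUSE client connects to every brick in the replica set and performs the replicated writes itself, while an NFS client sends each write once to whichever gNFS server it mounted and that server fans it out:

    # native client: replication handled on the client side
    mount -t glusterfs server1:/gv0 /mnt/gv0
    # gNFS: the client writes once to server1, server1 writes to the other replicas
    mount -t nfs -o vers=3,mountproto=tcp server1:/gv0 /mnt/gv0
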
19:19 daMaestro joined #gluster
19:21 dblack joined #gluster
19:25 Ark joined #gluster
19:25 cvdyoung Thanks dbruhn!  I'm trying to figure out how these operations work.
19:25 dbruhn No problem, ask away
19:27 edward1 joined #gluster
19:35 \malex\ i'm going through the gluster docs, but i seem to be missing a fundamental monitoring command. how do i tell what the synchronization status is? how much still has to be synchronized and how fast it's going?
19:36 kmai007_ its not missing
19:36 kmai007_ its just not there
19:36 kmai007_ desired, yes!
19:37 \malex\ ah ok, that makes me feel better about my documentation reading skills :)
19:37 kmai007_ you can do an info on a rebalance, if that is the operation you are referring to
19:38 \malex\ nah, i care more about the case where a server node has been down for some period of time, and is coming back into replication
19:38 kmai007_ that would be an awesome suggestion
19:38 kmai007_ in that case, it would be called "healed"
19:39 kmai007_ gluster volume heal <volname> info | healed
19:39 \malex\ i'm replacing a drbd 2 node cluster with 2 gluster server-clients
19:39 kmai007_ it doesn't give you progress estimates, but files that weren't there are no
19:39 \malex\ i'll have to play with it
19:39 kmai007_ now*
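
For the record, the heal-status commands kmai007_ is pointing at are subcommands on 3.3/3.4 rather than a pipe; as noted, they list entries but give no progress or throughput estimate:

    gluster volume heal myvol info             # entries still pending heal
    gluster volume heal myvol info healed      # entries healed recently
    gluster volume heal myvol info heal-failed # entries that could not be healed
    gluster volume heal myvol info split-brain # entries in split-brain
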
19:40 \malex\ so far, gluster seems a lot less finicky than drbd
19:41 dbruhn you can always file a request in bugzilla, or see if there is an existing request
19:41 kmai007_ kkeithley_: do you know what would cause an NFS socket to break?
19:41 \malex\ yeah, i just may. some monitoring like that would be handy
19:42 kmai007_ i'm trying to go over my logs for an incident, and i'm seeing where the gluster storage nodes break down and give an RPC error
19:42 kmai007_ +1 \malex\
19:42 kmai007_ i agree
19:43 sadbox joined #gluster
19:43 \malex\ that's one thing i really liked about veritas volume replicator. lots of stats of various kinds
19:54 kkeithley_ an NFS socket to break? A network glitch? A bug in the TCP stack?
19:56 hagarth joined #gluster
19:56 daMaestro joined #gluster
19:56 kmai007_ kkeithley_: i thought of that, so network glitch, i was hoping the network.ping-timeout would come into play,
19:57 kmai007_ then TCP stack, ugh, i'm not good at wireshark to see what could have caused it
20:02 DV joined #gluster
20:04 John_HPC So I had a drive fail on me. During the rebuild I had a "punctured block". The split brain gave me some files to look at; that's fine. What does this entry mean
20:04 John_HPC 2014-05-13 19:56:03 <gfid:53dc39e2-e15e-48f5-915e-5193456e189f>
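
This question also goes unanswered. A <gfid:...> line in heal or split-brain output is an entry the self-heal daemon only knows by its GFID; on a brick it can usually be mapped back to a path through the .glusterfs hard-link tree, roughly as follows (brick path is hypothetical):

    # the GFID file lives under .glusterfs/<first 2 hex chars>/<next 2 hex chars>/<full gfid>
    ls -l /data/brick1/.glusterfs/53/dc/53dc39e2-e15e-48f5-915e-5193456e189f
    # regular files are hard links, so the real path can be found by inode
    find /data/brick1 -samefile /data/brick1/.glusterfs/53/dc/53dc39e2-e15e-48f5-915e-5193456e189f
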
20:07 Ark joined #gluster
20:11 gdubreui joined #gluster
20:11 B21956 joined #gluster
20:13 chirino joined #gluster
20:18 cvdyoung joined #gluster
20:46 jag3773 joined #gluster
20:53 dblack joined #gluster
20:57 gmcwhistler joined #gluster
21:00 badone__ joined #gluster
21:04 glusterbot joined #gluster
21:05 cvdyoung joined #gluster
21:06 saravanakumar joined #gluster
21:45 chirino joined #gluster
21:53 MugginsM joined #gluster
21:55 theron joined #gluster
22:02 dbruhn left #gluster
22:08 abyss^ joined #gluster
22:08 firemanxbr joined #gluster
22:11 diegows joined #gluster
22:13 Ark joined #gluster
22:27 qdk joined #gluster
22:28 siel joined #gluster
22:36 B21956 joined #gluster
22:45 chirino joined #gluster
22:49 DV joined #gluster
22:58 mjsmith2 joined #gluster
23:01 DV joined #gluster
23:04 primechuck joined #gluster
23:05 ctria joined #gluster
23:16 theron_ joined #gluster
23:38 theron joined #gluster
23:41 theron_ joined #gluster
23:55 theron joined #gluster
